problem_id
stringlengths 18
22
| source
stringclasses 1
value | task_type
stringclasses 1
value | in_source_id
stringlengths 13
58
| prompt
stringlengths 1.1k
25.4k
| golden_diff
stringlengths 145
5.13k
| verification_info
stringlengths 582
39.1k
| num_tokens
int64 271
4.1k
| num_tokens_diff
int64 47
1.02k
|
---|---|---|---|---|---|---|---|---|
gh_patches_debug_1488
|
rasdani/github-patches
|
git_diff
|
google__openhtf-1112
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unused `six` import in monitor code
In `openhtf/core/monitors.py`, it looks like there is an unused import of the `six` module:
https://github.com/google/openhtf/blob/c85fb069a1ce407e82bb47a8fb1b64220e974c5f/openhtf/core/monitors.py#L58
If the aforementioned import is in fact not needed, then it should be deleted.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openhtf/core/monitors.py`
Content:
```
1 # Copyright 2014 Google Inc. All Rights Reserved.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Monitors provide a mechanism for periodically collecting a measurement.
15
16 Monitors are implemented similar to phase functions - they are decorated
17 with plugs.plug() to pass plugs in. The return value of a monitor
18 function, however, will be used to append a value to a measurement.
19
20 Monitors by default poll at a rate of 1 second between invocations of
21 the monitor function. The poll interval (given in milliseconds) determines the
22 approximate frequency at which values will be sampled. A sample is considered
23 to have been taken at the time when the monitor function *returns*, not when
24 it is called.
25
26 The approximate average duration of calls to the monitor function is taken into
27 account, so that samples are obtained on as close to interval_ms boundaries as
28 can be. A poll interval of 0 will cause the monitor function to be called in a
29 tight loop with no delays.
30
31 Example:
32
33 @plugs.plug(current_meter=current_meter.CurrentMeter)
34 def CurrentMonitor(test, current_meter):
35 return current_meter.GetReading()
36
37 @monitors.monitors('current_draw', CurrentMonitor, units=units.AMPERE)
38 def MyPhase(test):
39 # Do some stuff for a while...
40
41 # MyPhase will have a dimensioned measurement on it, with units of 'AMPERE' and
42 # a single dimension of 'MILLISECONDS', and will have values for roughly every
43 # second while MyPhase was executing.
44 """
45
46 import functools
47 import inspect
48 import time
49 from typing import Any, Callable, Dict, Optional, Text
50
51 import openhtf
52 from openhtf import plugs
53 from openhtf.core import measurements
54 from openhtf.core import phase_descriptor
55 from openhtf.core import test_state as core_test_state
56 from openhtf.util import threads
57 from openhtf.util import units as uom
58 import six
59
60
61 class _MonitorThread(threads.KillableThread):
62 """Background thread that runs a monitor."""
63
64 daemon = True
65
66 def __init__(self, measurement_name: Text,
67 monitor_desc: phase_descriptor.PhaseDescriptor,
68 extra_kwargs: Dict[Any, Any],
69 test_state: core_test_state.TestState, interval_ms: int):
70 super(_MonitorThread,
71 self).__init__(name='%s_MonitorThread' % measurement_name)
72 self.measurement_name = measurement_name
73 self.monitor_desc = monitor_desc
74 self.test_state = test_state
75 self.interval_ms = interval_ms
76 self.extra_kwargs = extra_kwargs
77
78 def get_value(self) -> Any:
79 argspec = inspect.getfullargspec(self.monitor_desc.func)
80 argspec_args = argspec.args
81 argspec_keywords = argspec.varkw
82 if argspec_keywords:
83 # Monitor phase takes **kwargs, so just pass everything in.
84 kwargs = self.extra_kwargs
85 else:
86 # Only pass in args that the monitor phase takes.
87 kwargs = {
88 arg: val for arg, val in self.extra_kwargs if arg in argspec_args
89 }
90 return self.monitor_desc.with_args(**kwargs)(self.test_state)
91
92 def _thread_proc(self):
93 measurement = getattr(self.test_state.test_api.measurements,
94 self.measurement_name)
95 start_time = time.time()
96
97 # Special case tight-loop monitoring.
98 if not self.interval_ms:
99 while True:
100 measurement[(time.time() - start_time) * 1000] = self.get_value()
101
102 # Helper to take sample, return sample number and sample duration.
103 def _take_sample():
104 pre_time, value, post_time = time.time(), self.get_value(), time.time()
105 measurement[(post_time - start_time) * 1000] = value
106 return (int((post_time - start_time) * 1000 / self.interval_ms),
107 (post_time - pre_time) * 1000)
108
109 # Track the last sample number, and an approximation of the mean time
110 # it takes to sample (so we can account for it in how long we sleep).
111 last_sample, mean_sample_ms = _take_sample()
112 while True:
113 # Find what sample number (float) we would be on if we sampled now.
114 current_time = time.time()
115 new_sample = ((((current_time - start_time) * 1000) + mean_sample_ms) /
116 self.interval_ms)
117 if new_sample < last_sample + 1:
118 time.sleep(start_time - current_time +
119 ((last_sample + 1) * self.interval_ms / 1000.0) -
120 (mean_sample_ms / 1000.0))
121 continue
122 elif new_sample > last_sample + 2:
123 self.test_state.state_logger.warning(
124 'Monitor for "%s" skipping %s sample(s).', self.measurement_name,
125 new_sample - last_sample - 1)
126 last_sample, cur_sample_ms = _take_sample()
127 # Approximate 10-element sliding window average.
128 mean_sample_ms = ((9 * mean_sample_ms) + cur_sample_ms) / 10.0
129
130
131 def monitors(
132 measurement_name: Text,
133 monitor_func: phase_descriptor.PhaseT,
134 units: Optional[uom.UnitDescriptor] = None,
135 poll_interval_ms: int = 1000
136 ) -> Callable[[phase_descriptor.PhaseT], phase_descriptor.PhaseDescriptor]:
137 """Returns a decorator that wraps a phase with a monitor."""
138 monitor_desc = openhtf.PhaseDescriptor.wrap_or_copy(monitor_func)
139
140 def wrapper(
141 phase_func: phase_descriptor.PhaseT) -> phase_descriptor.PhaseDescriptor:
142 phase_desc = openhtf.PhaseDescriptor.wrap_or_copy(phase_func)
143
144 # Re-key this dict so we don't have to worry about collisions with
145 # plug.plug() decorators on the phase function. Since we aren't
146 # updating kwargs here, we don't have to worry about collisions with
147 # kwarg names.
148 monitor_plugs = {('_' * idx) + measurement_name + '_monitor': plug.cls
149 for idx, plug in enumerate(monitor_desc.plugs, start=1)}
150
151 @openhtf.PhaseOptions(requires_state=True)
152 @plugs.plug(update_kwargs=False, **monitor_plugs)
153 @openhtf.measures(
154 measurements.Measurement(measurement_name).with_units(
155 units).with_dimensions(uom.MILLISECOND))
156 @functools.wraps(phase_desc.func)
157 def monitored_phase_func(test_state, *args, **kwargs):
158 # Start monitor thread, it will run monitor_desc periodically.
159 monitor_thread = _MonitorThread(measurement_name, monitor_desc,
160 phase_desc.extra_kwargs, test_state,
161 poll_interval_ms)
162 monitor_thread.start()
163 try:
164 return phase_desc(test_state, *args, **kwargs)
165 finally:
166 monitor_thread.kill()
167 monitor_thread.join()
168
169 return monitored_phase_func
170
171 return wrapper
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/openhtf/core/monitors.py b/openhtf/core/monitors.py
--- a/openhtf/core/monitors.py
+++ b/openhtf/core/monitors.py
@@ -55,7 +55,6 @@
from openhtf.core import test_state as core_test_state
from openhtf.util import threads
from openhtf.util import units as uom
-import six
class _MonitorThread(threads.KillableThread):
|
{"golden_diff": "diff --git a/openhtf/core/monitors.py b/openhtf/core/monitors.py\n--- a/openhtf/core/monitors.py\n+++ b/openhtf/core/monitors.py\n@@ -55,7 +55,6 @@\n from openhtf.core import test_state as core_test_state\n from openhtf.util import threads\n from openhtf.util import units as uom\n-import six\n \n \n class _MonitorThread(threads.KillableThread):\n", "issue": "Unused `six` import in monitor code\nIn `openhtf/core/monitors.py`, it looks like there is an unused import of the `six` module:\r\nhttps://github.com/google/openhtf/blob/c85fb069a1ce407e82bb47a8fb1b64220e974c5f/openhtf/core/monitors.py#L58\r\n\r\nIf the aforementioned import is in fact not needed, then it should be deleted.\n", "before_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Monitors provide a mechanism for periodically collecting a measurement.\n\nMonitors are implemented similar to phase functions - they are decorated\nwith plugs.plug() to pass plugs in. The return value of a monitor\nfunction, however, will be used to append a value to a measurement.\n\nMonitors by default poll at a rate of 1 second between invocations of\nthe monitor function. The poll interval (given in milliseconds) determines the\napproximate frequency at which values will be sampled. A sample is considered\nto have been taken at the time when the monitor function *returns*, not when\nit is called.\n\nThe approximate average duration of calls to the monitor function is taken into\naccount, so that samples are obtained on as close to interval_ms boundaries as\ncan be. A poll interval of 0 will cause the monitor function to be called in a\ntight loop with no delays.\n\nExample:\n\[email protected](current_meter=current_meter.CurrentMeter)\ndef CurrentMonitor(test, current_meter):\n return current_meter.GetReading()\n\[email protected]('current_draw', CurrentMonitor, units=units.AMPERE)\ndef MyPhase(test):\n # Do some stuff for a while...\n\n# MyPhase will have a dimensioned measurement on it, with units of 'AMPERE' and\n# a single dimension of 'MILLISECONDS', and will have values for roughly every\n# second while MyPhase was executing.\n\"\"\"\n\nimport functools\nimport inspect\nimport time\nfrom typing import Any, Callable, Dict, Optional, Text\n\nimport openhtf\nfrom openhtf import plugs\nfrom openhtf.core import measurements\nfrom openhtf.core import phase_descriptor\nfrom openhtf.core import test_state as core_test_state\nfrom openhtf.util import threads\nfrom openhtf.util import units as uom\nimport six\n\n\nclass _MonitorThread(threads.KillableThread):\n \"\"\"Background thread that runs a monitor.\"\"\"\n\n daemon = True\n\n def __init__(self, measurement_name: Text,\n monitor_desc: phase_descriptor.PhaseDescriptor,\n extra_kwargs: Dict[Any, Any],\n test_state: core_test_state.TestState, interval_ms: int):\n super(_MonitorThread,\n self).__init__(name='%s_MonitorThread' % measurement_name)\n self.measurement_name = measurement_name\n self.monitor_desc = monitor_desc\n self.test_state = test_state\n self.interval_ms = interval_ms\n self.extra_kwargs = extra_kwargs\n\n def get_value(self) -> Any:\n argspec = inspect.getfullargspec(self.monitor_desc.func)\n argspec_args = argspec.args\n argspec_keywords = argspec.varkw\n if argspec_keywords:\n # Monitor phase takes **kwargs, so just pass everything in.\n kwargs = self.extra_kwargs\n else:\n # Only pass in args that the monitor phase takes.\n kwargs = {\n arg: val for arg, val in self.extra_kwargs if arg in argspec_args\n }\n return self.monitor_desc.with_args(**kwargs)(self.test_state)\n\n def _thread_proc(self):\n measurement = getattr(self.test_state.test_api.measurements,\n self.measurement_name)\n start_time = time.time()\n\n # Special case tight-loop monitoring.\n if not self.interval_ms:\n while True:\n measurement[(time.time() - start_time) * 1000] = self.get_value()\n\n # Helper to take sample, return sample number and sample duration.\n def _take_sample():\n pre_time, value, post_time = time.time(), self.get_value(), time.time()\n measurement[(post_time - start_time) * 1000] = value\n return (int((post_time - start_time) * 1000 / self.interval_ms),\n (post_time - pre_time) * 1000)\n\n # Track the last sample number, and an approximation of the mean time\n # it takes to sample (so we can account for it in how long we sleep).\n last_sample, mean_sample_ms = _take_sample()\n while True:\n # Find what sample number (float) we would be on if we sampled now.\n current_time = time.time()\n new_sample = ((((current_time - start_time) * 1000) + mean_sample_ms) /\n self.interval_ms)\n if new_sample < last_sample + 1:\n time.sleep(start_time - current_time +\n ((last_sample + 1) * self.interval_ms / 1000.0) -\n (mean_sample_ms / 1000.0))\n continue\n elif new_sample > last_sample + 2:\n self.test_state.state_logger.warning(\n 'Monitor for \"%s\" skipping %s sample(s).', self.measurement_name,\n new_sample - last_sample - 1)\n last_sample, cur_sample_ms = _take_sample()\n # Approximate 10-element sliding window average.\n mean_sample_ms = ((9 * mean_sample_ms) + cur_sample_ms) / 10.0\n\n\ndef monitors(\n measurement_name: Text,\n monitor_func: phase_descriptor.PhaseT,\n units: Optional[uom.UnitDescriptor] = None,\n poll_interval_ms: int = 1000\n) -> Callable[[phase_descriptor.PhaseT], phase_descriptor.PhaseDescriptor]:\n \"\"\"Returns a decorator that wraps a phase with a monitor.\"\"\"\n monitor_desc = openhtf.PhaseDescriptor.wrap_or_copy(monitor_func)\n\n def wrapper(\n phase_func: phase_descriptor.PhaseT) -> phase_descriptor.PhaseDescriptor:\n phase_desc = openhtf.PhaseDescriptor.wrap_or_copy(phase_func)\n\n # Re-key this dict so we don't have to worry about collisions with\n # plug.plug() decorators on the phase function. Since we aren't\n # updating kwargs here, we don't have to worry about collisions with\n # kwarg names.\n monitor_plugs = {('_' * idx) + measurement_name + '_monitor': plug.cls\n for idx, plug in enumerate(monitor_desc.plugs, start=1)}\n\n @openhtf.PhaseOptions(requires_state=True)\n @plugs.plug(update_kwargs=False, **monitor_plugs)\n @openhtf.measures(\n measurements.Measurement(measurement_name).with_units(\n units).with_dimensions(uom.MILLISECOND))\n @functools.wraps(phase_desc.func)\n def monitored_phase_func(test_state, *args, **kwargs):\n # Start monitor thread, it will run monitor_desc periodically.\n monitor_thread = _MonitorThread(measurement_name, monitor_desc,\n phase_desc.extra_kwargs, test_state,\n poll_interval_ms)\n monitor_thread.start()\n try:\n return phase_desc(test_state, *args, **kwargs)\n finally:\n monitor_thread.kill()\n monitor_thread.join()\n\n return monitored_phase_func\n\n return wrapper\n", "path": "openhtf/core/monitors.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Monitors provide a mechanism for periodically collecting a measurement.\n\nMonitors are implemented similar to phase functions - they are decorated\nwith plugs.plug() to pass plugs in. The return value of a monitor\nfunction, however, will be used to append a value to a measurement.\n\nMonitors by default poll at a rate of 1 second between invocations of\nthe monitor function. The poll interval (given in milliseconds) determines the\napproximate frequency at which values will be sampled. A sample is considered\nto have been taken at the time when the monitor function *returns*, not when\nit is called.\n\nThe approximate average duration of calls to the monitor function is taken into\naccount, so that samples are obtained on as close to interval_ms boundaries as\ncan be. A poll interval of 0 will cause the monitor function to be called in a\ntight loop with no delays.\n\nExample:\n\[email protected](current_meter=current_meter.CurrentMeter)\ndef CurrentMonitor(test, current_meter):\n return current_meter.GetReading()\n\[email protected]('current_draw', CurrentMonitor, units=units.AMPERE)\ndef MyPhase(test):\n # Do some stuff for a while...\n\n# MyPhase will have a dimensioned measurement on it, with units of 'AMPERE' and\n# a single dimension of 'MILLISECONDS', and will have values for roughly every\n# second while MyPhase was executing.\n\"\"\"\n\nimport functools\nimport inspect\nimport time\nfrom typing import Any, Callable, Dict, Optional, Text\n\nimport openhtf\nfrom openhtf import plugs\nfrom openhtf.core import measurements\nfrom openhtf.core import phase_descriptor\nfrom openhtf.core import test_state as core_test_state\nfrom openhtf.util import threads\nfrom openhtf.util import units as uom\n\n\nclass _MonitorThread(threads.KillableThread):\n \"\"\"Background thread that runs a monitor.\"\"\"\n\n daemon = True\n\n def __init__(self, measurement_name: Text,\n monitor_desc: phase_descriptor.PhaseDescriptor,\n extra_kwargs: Dict[Any, Any],\n test_state: core_test_state.TestState, interval_ms: int):\n super(_MonitorThread,\n self).__init__(name='%s_MonitorThread' % measurement_name)\n self.measurement_name = measurement_name\n self.monitor_desc = monitor_desc\n self.test_state = test_state\n self.interval_ms = interval_ms\n self.extra_kwargs = extra_kwargs\n\n def get_value(self) -> Any:\n argspec = inspect.getfullargspec(self.monitor_desc.func)\n argspec_args = argspec.args\n argspec_keywords = argspec.varkw\n if argspec_keywords:\n # Monitor phase takes **kwargs, so just pass everything in.\n kwargs = self.extra_kwargs\n else:\n # Only pass in args that the monitor phase takes.\n kwargs = {\n arg: val for arg, val in self.extra_kwargs if arg in argspec_args\n }\n return self.monitor_desc.with_args(**kwargs)(self.test_state)\n\n def _thread_proc(self):\n measurement = getattr(self.test_state.test_api.measurements,\n self.measurement_name)\n start_time = time.time()\n\n # Special case tight-loop monitoring.\n if not self.interval_ms:\n while True:\n measurement[(time.time() - start_time) * 1000] = self.get_value()\n\n # Helper to take sample, return sample number and sample duration.\n def _take_sample():\n pre_time, value, post_time = time.time(), self.get_value(), time.time()\n measurement[(post_time - start_time) * 1000] = value\n return (int((post_time - start_time) * 1000 / self.interval_ms),\n (post_time - pre_time) * 1000)\n\n # Track the last sample number, and an approximation of the mean time\n # it takes to sample (so we can account for it in how long we sleep).\n last_sample, mean_sample_ms = _take_sample()\n while True:\n # Find what sample number (float) we would be on if we sampled now.\n current_time = time.time()\n new_sample = ((((current_time - start_time) * 1000) + mean_sample_ms) /\n self.interval_ms)\n if new_sample < last_sample + 1:\n time.sleep(start_time - current_time +\n ((last_sample + 1) * self.interval_ms / 1000.0) -\n (mean_sample_ms / 1000.0))\n continue\n elif new_sample > last_sample + 2:\n self.test_state.state_logger.warning(\n 'Monitor for \"%s\" skipping %s sample(s).', self.measurement_name,\n new_sample - last_sample - 1)\n last_sample, cur_sample_ms = _take_sample()\n # Approximate 10-element sliding window average.\n mean_sample_ms = ((9 * mean_sample_ms) + cur_sample_ms) / 10.0\n\n\ndef monitors(\n measurement_name: Text,\n monitor_func: phase_descriptor.PhaseT,\n units: Optional[uom.UnitDescriptor] = None,\n poll_interval_ms: int = 1000\n) -> Callable[[phase_descriptor.PhaseT], phase_descriptor.PhaseDescriptor]:\n \"\"\"Returns a decorator that wraps a phase with a monitor.\"\"\"\n monitor_desc = openhtf.PhaseDescriptor.wrap_or_copy(monitor_func)\n\n def wrapper(\n phase_func: phase_descriptor.PhaseT) -> phase_descriptor.PhaseDescriptor:\n phase_desc = openhtf.PhaseDescriptor.wrap_or_copy(phase_func)\n\n # Re-key this dict so we don't have to worry about collisions with\n # plug.plug() decorators on the phase function. Since we aren't\n # updating kwargs here, we don't have to worry about collisions with\n # kwarg names.\n monitor_plugs = {('_' * idx) + measurement_name + '_monitor': plug.cls\n for idx, plug in enumerate(monitor_desc.plugs, start=1)}\n\n @openhtf.PhaseOptions(requires_state=True)\n @plugs.plug(update_kwargs=False, **monitor_plugs)\n @openhtf.measures(\n measurements.Measurement(measurement_name).with_units(\n units).with_dimensions(uom.MILLISECOND))\n @functools.wraps(phase_desc.func)\n def monitored_phase_func(test_state, *args, **kwargs):\n # Start monitor thread, it will run monitor_desc periodically.\n monitor_thread = _MonitorThread(measurement_name, monitor_desc,\n phase_desc.extra_kwargs, test_state,\n poll_interval_ms)\n monitor_thread.start()\n try:\n return phase_desc(test_state, *args, **kwargs)\n finally:\n monitor_thread.kill()\n monitor_thread.join()\n\n return monitored_phase_func\n\n return wrapper\n", "path": "openhtf/core/monitors.py"}]}
| 2,393 | 102 |
gh_patches_debug_4424
|
rasdani/github-patches
|
git_diff
|
mozilla__bugbug-2654
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Replace typing.Dict with dict
It is now possible to use `dict` directly instead of `typing.Dict` in type definitions.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import os
7
8 from setuptools import find_packages, setup
9
10 here = os.path.dirname(__file__)
11
12
13 def read_requirements(file_):
14 with open(os.path.join(here, file_)) as f:
15 return sorted(list(set(line.split("#")[0].strip() for line in f)))
16
17
18 install_requires = read_requirements("requirements.txt")
19
20
21 with open(os.path.join(here, "VERSION")) as f:
22 version = f.read().strip()
23
24 # Read the extra requirements
25 extras = ["nlp", "nn"]
26
27 extras_require = {}
28
29 for extra in extras:
30 extras_require[extra] = read_requirements("extra-%s-requirements.txt" % extra)
31
32
33 setup(
34 name="bugbug",
35 version=version,
36 description="ML tools for Mozilla projects",
37 author="Marco Castelluccio",
38 author_email="[email protected]",
39 install_requires=install_requires,
40 extras_require=extras_require,
41 packages=find_packages(exclude=["contrib", "docs", "tests"]),
42 include_package_data=True,
43 license="MPL2",
44 entry_points={
45 "console_scripts": [
46 "bugbug-data-commits = scripts.commit_retriever:main",
47 "bugbug-data-bugzilla = scripts.bug_retriever:main",
48 "bugbug-data-test-scheduling-history = scripts.test_scheduling_history_retriever:main",
49 "bugbug-data-revisions = scripts.revision_retriever:main",
50 "bugbug-train = scripts.trainer:main",
51 "bugbug-train-similarity = scripts.similarity_trainer:main",
52 "bugbug-check = scripts.check:main",
53 "bugbug-microannotate-generate = scripts.microannotate_generator:main",
54 "bugbug-classify-commit = scripts.commit_classifier:main",
55 "bugbug-classify-bug = scripts.bug_classifier:main",
56 "bugbug-regressor-finder = scripts.regressor_finder:main",
57 "bugbug-retrieve-training-metrics = scripts.retrieve_training_metrics:main",
58 "bugbug-analyze-training-metrics = scripts.analyze_training_metrics:main",
59 "bugbug-check-all-metrics = scripts.check_all_metrics:main",
60 "bugbug-past-bugs-by-unit = scripts.past_bugs_by_unit:main",
61 "bugbug-testing-policy-stats = scripts.testing_policy_stats:main",
62 "bugbug-generate-landings-risk-report = scripts.generate_landings_risk_report:main",
63 "bugbug-shadow-scheduler-stats = scripts.shadow_scheduler_stats:main",
64 "bugbug-data-github = scripts.github_issue_retriever:main",
65 ]
66 },
67 classifiers=[
68 "Programming Language :: Python :: 3.7",
69 "Programming Language :: Python :: 3.8",
70 "Programming Language :: Python :: 3.9",
71 "Programming Language :: Python :: 3 :: Only",
72 "License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
73 ],
74 )
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -65,8 +65,6 @@
]
},
classifiers=[
- "Programming Language :: Python :: 3.7",
- "Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3 :: Only",
"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -65,8 +65,6 @@\n ]\n },\n classifiers=[\n- \"Programming Language :: Python :: 3.7\",\n- \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)\",\n", "issue": "Replace typing.Dict with dict\nIt is now possible to use `dict` directly instead of `typing.Dict` in type definitions.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport os\n\nfrom setuptools import find_packages, setup\n\nhere = os.path.dirname(__file__)\n\n\ndef read_requirements(file_):\n with open(os.path.join(here, file_)) as f:\n return sorted(list(set(line.split(\"#\")[0].strip() for line in f)))\n\n\ninstall_requires = read_requirements(\"requirements.txt\")\n\n\nwith open(os.path.join(here, \"VERSION\")) as f:\n version = f.read().strip()\n\n# Read the extra requirements\nextras = [\"nlp\", \"nn\"]\n\nextras_require = {}\n\nfor extra in extras:\n extras_require[extra] = read_requirements(\"extra-%s-requirements.txt\" % extra)\n\n\nsetup(\n name=\"bugbug\",\n version=version,\n description=\"ML tools for Mozilla projects\",\n author=\"Marco Castelluccio\",\n author_email=\"[email protected]\",\n install_requires=install_requires,\n extras_require=extras_require,\n packages=find_packages(exclude=[\"contrib\", \"docs\", \"tests\"]),\n include_package_data=True,\n license=\"MPL2\",\n entry_points={\n \"console_scripts\": [\n \"bugbug-data-commits = scripts.commit_retriever:main\",\n \"bugbug-data-bugzilla = scripts.bug_retriever:main\",\n \"bugbug-data-test-scheduling-history = scripts.test_scheduling_history_retriever:main\",\n \"bugbug-data-revisions = scripts.revision_retriever:main\",\n \"bugbug-train = scripts.trainer:main\",\n \"bugbug-train-similarity = scripts.similarity_trainer:main\",\n \"bugbug-check = scripts.check:main\",\n \"bugbug-microannotate-generate = scripts.microannotate_generator:main\",\n \"bugbug-classify-commit = scripts.commit_classifier:main\",\n \"bugbug-classify-bug = scripts.bug_classifier:main\",\n \"bugbug-regressor-finder = scripts.regressor_finder:main\",\n \"bugbug-retrieve-training-metrics = scripts.retrieve_training_metrics:main\",\n \"bugbug-analyze-training-metrics = scripts.analyze_training_metrics:main\",\n \"bugbug-check-all-metrics = scripts.check_all_metrics:main\",\n \"bugbug-past-bugs-by-unit = scripts.past_bugs_by_unit:main\",\n \"bugbug-testing-policy-stats = scripts.testing_policy_stats:main\",\n \"bugbug-generate-landings-risk-report = scripts.generate_landings_risk_report:main\",\n \"bugbug-shadow-scheduler-stats = scripts.shadow_scheduler_stats:main\",\n \"bugbug-data-github = scripts.github_issue_retriever:main\",\n ]\n },\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport os\n\nfrom setuptools import find_packages, setup\n\nhere = os.path.dirname(__file__)\n\n\ndef read_requirements(file_):\n with open(os.path.join(here, file_)) as f:\n return sorted(list(set(line.split(\"#\")[0].strip() for line in f)))\n\n\ninstall_requires = read_requirements(\"requirements.txt\")\n\n\nwith open(os.path.join(here, \"VERSION\")) as f:\n version = f.read().strip()\n\n# Read the extra requirements\nextras = [\"nlp\", \"nn\"]\n\nextras_require = {}\n\nfor extra in extras:\n extras_require[extra] = read_requirements(\"extra-%s-requirements.txt\" % extra)\n\n\nsetup(\n name=\"bugbug\",\n version=version,\n description=\"ML tools for Mozilla projects\",\n author=\"Marco Castelluccio\",\n author_email=\"[email protected]\",\n install_requires=install_requires,\n extras_require=extras_require,\n packages=find_packages(exclude=[\"contrib\", \"docs\", \"tests\"]),\n include_package_data=True,\n license=\"MPL2\",\n entry_points={\n \"console_scripts\": [\n \"bugbug-data-commits = scripts.commit_retriever:main\",\n \"bugbug-data-bugzilla = scripts.bug_retriever:main\",\n \"bugbug-data-test-scheduling-history = scripts.test_scheduling_history_retriever:main\",\n \"bugbug-data-revisions = scripts.revision_retriever:main\",\n \"bugbug-train = scripts.trainer:main\",\n \"bugbug-train-similarity = scripts.similarity_trainer:main\",\n \"bugbug-check = scripts.check:main\",\n \"bugbug-microannotate-generate = scripts.microannotate_generator:main\",\n \"bugbug-classify-commit = scripts.commit_classifier:main\",\n \"bugbug-classify-bug = scripts.bug_classifier:main\",\n \"bugbug-regressor-finder = scripts.regressor_finder:main\",\n \"bugbug-retrieve-training-metrics = scripts.retrieve_training_metrics:main\",\n \"bugbug-analyze-training-metrics = scripts.analyze_training_metrics:main\",\n \"bugbug-check-all-metrics = scripts.check_all_metrics:main\",\n \"bugbug-past-bugs-by-unit = scripts.past_bugs_by_unit:main\",\n \"bugbug-testing-policy-stats = scripts.testing_policy_stats:main\",\n \"bugbug-generate-landings-risk-report = scripts.generate_landings_risk_report:main\",\n \"bugbug-shadow-scheduler-stats = scripts.shadow_scheduler_stats:main\",\n \"bugbug-data-github = scripts.github_issue_retriever:main\",\n ]\n },\n classifiers=[\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)\",\n ],\n)\n", "path": "setup.py"}]}
| 1,128 | 111 |
gh_patches_debug_18705
|
rasdani/github-patches
|
git_diff
|
weecology__retriever-147
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mac App MySQL installations failing based on "Incorrect string value"
MCDB:
```
INSERT INTO MCDB.trapping (site_id, initial_year, final_year, n_sampling_months, trap_nights, months_of_sampling, pitfall_traps, small_traps, large_traps, snap_traps, notes) VALUES (1206, '1995', '1995', '1', '580', 'November', '1', '0', '0', '0', 'each pitfall trapline 100-500 m?; unclear how many or length');
(1366, "Incorrect string value: '\\xB2; unc...' for column 'notes' at row 1")
```
MarineSize (Barnes 2008):
```
INSERT INTO MarineSize.main (record_number, in_ref_id, individual_id, predator, predator_common_name, predator_taxon, predator_lifestage, type_of_feeding_interaction, predator_length, predator_length_unit, predator_dimension_measured, predator_standard_length, predator_fork_length, predator_total_length, predator_tl_fl_sl_conversion_reference, standardised_predator_length, predator_measurement_type, predator_length_mass_conversion_method, predator_length_mass_conversion_reference, predator_quality_of_length_mass_conversion, predator_mass, predator_mass_unit, predator_mass_check, predator_mass_check_diff, predator_ratio_mass_mass, si_predator_mass, diet_coverage, prey, prey_common_name, prey_taxon, prey_length, prey_length_unit, prey_conversion_to_length_method, prey_quality_of_conversion_to_length, prey_conversion_to_length_reference, si_prey_length, prey_dimension_measured, prey_width, prey_width_unit, prey_measurement_type, prey_mass, prey_mass_unit, prey_mass_check, prey_mass_check_diff, prey_ratio_mass_mass, si_prey_mass, prey_conversion_to_mass_method, prey_conversion_to_mass_reference, prey_quality_of_conversion_to_mass, geographic_location, latitude, lonitude, depth, mean_annual_temp, sd_annual_temp, mean_pp, sd_pp, reference, specific_habitat, notes_assumptions) VALUES (1, 'ATSH063', 1, 'Rhizoprionodon terraenovae', 'Atlantic sharpnose shark', 'ectotherm vertebrate', 'adult', 'predacious/piscivorous', 7.8000E+02, 'mm', 'fork length', 7.5433E+02, 7.8000E+02, 9.3990E+02, 'Fishbase (species)', 9.3990E+01, 'individual', 'M=0.0056SL^2.897', 'Bonfil et al. (1990)', 1, 1.5399E+03, 'g', 4.3453E+04, 4.1913E+04, 2.8218E+01, 1.5399E+03, 'all', 'teleosts/molluscs/crustaceans', 'teleosts/molluscs/crustaceans', 'mixed', 1.1259E+02, 'mm', null, 0, null, 1.1259E+01, 'length', null, null, 'individual', 1.4274E+01, 'g', 7.4699E+01, 6.0425E+01, 5.2333E+00, 1.4274E+01, 'M=0.01L^3', 'Generalised', 5, 'Apalachicola Bay, Florida', '29?40\'N', '85?10\'W', 30, 24.1, 4.2, 866, 214, 'Bethea et al (2004)', 'Coastal Bay', null);
(1366, "Incorrect string value: '\\xBA40'N' for column 'latitude' at row 1")
```
McGlinn2010:
```
INSERT INTO McGlinn2010.species (spnum, spcode, family, genus, species, variety, subspecies, spname, binomia_auth, trinomial_auth) VALUES (257, 'seneplat', 'Asteraceae', 'Packera', 'plattensis', '', '', 'Packera plattensis', '(Nutt.) W.A. Weber & A. L?ve', '');
(1366, "Incorrect string value: '\\xF6ve' for column 'binomia_auth' at row 1")
```
All of these datasets install fine using the source installation. When using the .app build from the CLI they also all report:
```
Couldn't create database (unsupported operand type(s) for +: 'NoneType' and 'str'). Trying to continue anyway.
```
but this appears to be reported by all of the datasets, including those that are successfully installed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 """Use the following command to install retriever: python setup.py install"""
2
3 from setuptools import setup
4 import platform
5
6 p = platform.platform().lower()
7 extra_includes = []
8 if "darwin" in p:
9 try: import py2app
10 except ImportError: pass
11 extra_includes = []
12 elif "win" in p:
13 try: import py2exe
14 except ImportError: pass
15 import sys
16 extra_includes = ['pyodbc', 'inspect']
17 sys.path.append("C:\\Windows\\winsxs\\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91")
18 from __init__ import VERSION
19
20
21 def clean_version(v):
22 if v == 'master':
23 return '1.0.0'
24 return v.replace('v', '').replace('.rc', '').replace('.beta', '')
25
26 packages = [
27 'retriever.lib',
28 'retriever.engines',
29 'retriever.app',
30 'retriever',
31 ]
32
33 try:
34 import pymysql
35 mysql_module = 'pymysql'
36 except ImportError:
37 try:
38 import MySQLdb
39 mysql_module = 'MySQLdb'
40 except ImportError:
41 mysql_module = 'pymysql'
42
43 includes = [
44 'xlrd',
45 'wx',
46 mysql_module,
47 'psycopg2',
48 'sqlite3',
49 ] + extra_includes
50
51 excludes = [
52 'pyreadline',
53 'doctest',
54 'optparse',
55 'getopt',
56 'pickle',
57 'calendar',
58 'pdb',
59 'inspect',
60 'email',
61 'pywin', 'pywin.debugger',
62 'pywin.debugger.dbgcon',
63 'pywin.dialogs', 'pywin.dialogs.list',
64 'Tkconstants', 'Tkinter', 'tcl',
65 ]
66
67
68 setup(name='retriever',
69 version=clean_version(VERSION),
70 description='EcoData Retriever',
71 author='Ben Morris',
72 author_email='[email protected]',
73 url='http://www.ecodataretriever.org',
74 packages=packages,
75 package_dir={
76 'retriever':''
77 },
78 entry_points={
79 'console_scripts': [
80 'retriever = retriever.__main__:main',
81 ],
82 },
83 install_requires=[
84 'xlrd',
85 ],
86
87 # py2exe flags
88 console = [{'script': "__main__.py",
89 'dest_base': "retriever",
90 'icon_resources':[(1,'icon.ico')]
91 }],
92 zipfile = None,
93
94 # py2app flags
95 app=['__main__.py'],
96 data_files=[('', ['CITATION'])],
97 setup_requires=['py2app'] if 'darwin' in p else [],
98
99 # options
100 options = {'py2exe': {'bundle_files': 1,
101 'compressed': 2,
102 'optimize': 2,
103 'packages': packages,
104 'includes': includes,
105 'excludes': excludes,
106 },
107 'py2app': {'packages': ['retriever'],
108 'includes': includes,
109 'site_packages': True,
110 'resources': [],
111 'optimize': 2,
112 'argv_emulation': True,
113 'no_chdir': True,
114 },
115 },
116 )
117
118
119 try:
120 from compile import compile
121 compile()
122 except:
123 pass
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -97,6 +97,7 @@
setup_requires=['py2app'] if 'darwin' in p else [],
# options
+ # optimize is set to 1 of py2app to avoid errors with pymysql
options = {'py2exe': {'bundle_files': 1,
'compressed': 2,
'optimize': 2,
@@ -108,7 +109,7 @@
'includes': includes,
'site_packages': True,
'resources': [],
- 'optimize': 2,
+ 'optimize': 1,
'argv_emulation': True,
'no_chdir': True,
},
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -97,6 +97,7 @@\n setup_requires=['py2app'] if 'darwin' in p else [],\n \n # options\n+ # optimize is set to 1 of py2app to avoid errors with pymysql\n options = {'py2exe': {'bundle_files': 1,\n 'compressed': 2,\n 'optimize': 2,\n@@ -108,7 +109,7 @@\n 'includes': includes,\n 'site_packages': True,\n 'resources': [],\n- 'optimize': 2,\n+ 'optimize': 1,\n 'argv_emulation': True,\n 'no_chdir': True,\n },\n", "issue": "Mac App MySQL installations failing based on \"Incorrect string value\"\nMCDB:\n\n```\nINSERT INTO MCDB.trapping (site_id, initial_year, final_year, n_sampling_months, trap_nights, months_of_sampling, pitfall_traps, small_traps, large_traps, snap_traps, notes) VALUES (1206, '1995', '1995', '1', '580', 'November', '1', '0', '0', '0', 'each pitfall trapline 100-500 m?; unclear how many or length');\n(1366, \"Incorrect string value: '\\\\xB2; unc...' for column 'notes' at row 1\")\n```\n\nMarineSize (Barnes 2008):\n\n```\nINSERT INTO MarineSize.main (record_number, in_ref_id, individual_id, predator, predator_common_name, predator_taxon, predator_lifestage, type_of_feeding_interaction, predator_length, predator_length_unit, predator_dimension_measured, predator_standard_length, predator_fork_length, predator_total_length, predator_tl_fl_sl_conversion_reference, standardised_predator_length, predator_measurement_type, predator_length_mass_conversion_method, predator_length_mass_conversion_reference, predator_quality_of_length_mass_conversion, predator_mass, predator_mass_unit, predator_mass_check, predator_mass_check_diff, predator_ratio_mass_mass, si_predator_mass, diet_coverage, prey, prey_common_name, prey_taxon, prey_length, prey_length_unit, prey_conversion_to_length_method, prey_quality_of_conversion_to_length, prey_conversion_to_length_reference, si_prey_length, prey_dimension_measured, prey_width, prey_width_unit, prey_measurement_type, prey_mass, prey_mass_unit, prey_mass_check, prey_mass_check_diff, prey_ratio_mass_mass, si_prey_mass, prey_conversion_to_mass_method, prey_conversion_to_mass_reference, prey_quality_of_conversion_to_mass, geographic_location, latitude, lonitude, depth, mean_annual_temp, sd_annual_temp, mean_pp, sd_pp, reference, specific_habitat, notes_assumptions) VALUES (1, 'ATSH063', 1, 'Rhizoprionodon terraenovae', 'Atlantic sharpnose shark', 'ectotherm vertebrate', 'adult', 'predacious/piscivorous', 7.8000E+02, 'mm', 'fork length', 7.5433E+02, 7.8000E+02, 9.3990E+02, 'Fishbase (species)', 9.3990E+01, 'individual', 'M=0.0056SL^2.897', 'Bonfil et al. (1990)', 1, 1.5399E+03, 'g', 4.3453E+04, 4.1913E+04, 2.8218E+01, 1.5399E+03, 'all', 'teleosts/molluscs/crustaceans', 'teleosts/molluscs/crustaceans', 'mixed', 1.1259E+02, 'mm', null, 0, null, 1.1259E+01, 'length', null, null, 'individual', 1.4274E+01, 'g', 7.4699E+01, 6.0425E+01, 5.2333E+00, 1.4274E+01, 'M=0.01L^3', 'Generalised', 5, 'Apalachicola Bay, Florida', '29?40\\'N', '85?10\\'W', 30, 24.1, 4.2, 866, 214, 'Bethea et al (2004)', 'Coastal Bay', null);\n(1366, \"Incorrect string value: '\\\\xBA40'N' for column 'latitude' at row 1\")\n```\n\nMcGlinn2010:\n\n```\nINSERT INTO McGlinn2010.species (spnum, spcode, family, genus, species, variety, subspecies, spname, binomia_auth, trinomial_auth) VALUES (257, 'seneplat', 'Asteraceae', 'Packera', 'plattensis', '', '', 'Packera plattensis', '(Nutt.) W.A. Weber & A. L?ve', '');\n(1366, \"Incorrect string value: '\\\\xF6ve' for column 'binomia_auth' at row 1\")\n```\n\nAll of these datasets install fine using the source installation. When using the .app build from the CLI they also all report:\n\n```\nCouldn't create database (unsupported operand type(s) for +: 'NoneType' and 'str'). Trying to continue anyway.\n```\n\nbut this appears to be reported by all of the datasets, including those that are successfully installed.\n\n", "before_files": [{"content": "\"\"\"Use the following command to install retriever: python setup.py install\"\"\"\n\nfrom setuptools import setup\nimport platform\n\np = platform.platform().lower()\nextra_includes = []\nif \"darwin\" in p:\n try: import py2app\n except ImportError: pass\n extra_includes = []\nelif \"win\" in p:\n try: import py2exe\n except ImportError: pass\n import sys\n extra_includes = ['pyodbc', 'inspect']\n sys.path.append(\"C:\\\\Windows\\\\winsxs\\\\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91\")\nfrom __init__ import VERSION\n\n\ndef clean_version(v):\n if v == 'master':\n return '1.0.0'\n return v.replace('v', '').replace('.rc', '').replace('.beta', '')\n\npackages = [\n 'retriever.lib',\n 'retriever.engines',\n 'retriever.app',\n 'retriever',\n ]\n\ntry:\n import pymysql\n mysql_module = 'pymysql'\nexcept ImportError:\n try:\n import MySQLdb\n mysql_module = 'MySQLdb'\n except ImportError:\n mysql_module = 'pymysql'\n\nincludes = [\n 'xlrd',\n 'wx',\n mysql_module,\n 'psycopg2',\n 'sqlite3',\n ] + extra_includes\n \nexcludes = [\n 'pyreadline',\n 'doctest',\n 'optparse',\n 'getopt',\n 'pickle',\n 'calendar',\n 'pdb',\n 'inspect',\n 'email',\n 'pywin', 'pywin.debugger',\n 'pywin.debugger.dbgcon',\n 'pywin.dialogs', 'pywin.dialogs.list',\n 'Tkconstants', 'Tkinter', 'tcl',\n ]\n\n\nsetup(name='retriever',\n version=clean_version(VERSION),\n description='EcoData Retriever',\n author='Ben Morris',\n author_email='[email protected]',\n url='http://www.ecodataretriever.org',\n packages=packages,\n package_dir={\n 'retriever':''\n },\n entry_points={\n 'console_scripts': [\n 'retriever = retriever.__main__:main',\n ],\n },\n install_requires=[\n 'xlrd',\n ],\n\n # py2exe flags\n console = [{'script': \"__main__.py\",\n 'dest_base': \"retriever\",\n 'icon_resources':[(1,'icon.ico')]\n }],\n zipfile = None,\n\n # py2app flags\n app=['__main__.py'],\n data_files=[('', ['CITATION'])],\n setup_requires=['py2app'] if 'darwin' in p else [],\n\n # options\n options = {'py2exe': {'bundle_files': 1,\n 'compressed': 2,\n 'optimize': 2,\n 'packages': packages,\n 'includes': includes,\n 'excludes': excludes,\n },\n 'py2app': {'packages': ['retriever'],\n 'includes': includes,\n 'site_packages': True,\n 'resources': [],\n 'optimize': 2,\n 'argv_emulation': True,\n 'no_chdir': True,\n },\n },\n )\n\n\ntry:\n from compile import compile\n compile()\nexcept:\n pass\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"Use the following command to install retriever: python setup.py install\"\"\"\n\nfrom setuptools import setup\nimport platform\n\np = platform.platform().lower()\nextra_includes = []\nif \"darwin\" in p:\n try: import py2app\n except ImportError: pass\n extra_includes = []\nelif \"win\" in p:\n try: import py2exe\n except ImportError: pass\n import sys\n extra_includes = ['pyodbc', 'inspect']\n sys.path.append(\"C:\\\\Windows\\\\winsxs\\\\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91\")\nfrom __init__ import VERSION\n\n\ndef clean_version(v):\n if v == 'master':\n return '1.0.0'\n return v.replace('v', '').replace('.rc', '').replace('.beta', '')\n\npackages = [\n 'retriever.lib',\n 'retriever.engines',\n 'retriever.app',\n 'retriever',\n ]\n\ntry:\n import pymysql\n mysql_module = 'pymysql'\nexcept ImportError:\n try:\n import MySQLdb\n mysql_module = 'MySQLdb'\n except ImportError:\n mysql_module = 'pymysql'\n\nincludes = [\n 'xlrd',\n 'wx',\n mysql_module,\n 'psycopg2',\n 'sqlite3',\n ] + extra_includes\n \nexcludes = [\n 'pyreadline',\n 'doctest',\n 'optparse',\n 'getopt',\n 'pickle',\n 'calendar',\n 'pdb',\n 'inspect',\n 'email',\n 'pywin', 'pywin.debugger',\n 'pywin.debugger.dbgcon',\n 'pywin.dialogs', 'pywin.dialogs.list',\n 'Tkconstants', 'Tkinter', 'tcl',\n ]\n\n\nsetup(name='retriever',\n version=clean_version(VERSION),\n description='EcoData Retriever',\n author='Ben Morris',\n author_email='[email protected]',\n url='http://www.ecodataretriever.org',\n packages=packages,\n package_dir={\n 'retriever':''\n },\n entry_points={\n 'console_scripts': [\n 'retriever = retriever.__main__:main',\n ],\n },\n install_requires=[\n 'xlrd',\n ],\n\n # py2exe flags\n console = [{'script': \"__main__.py\",\n 'dest_base': \"retriever\",\n 'icon_resources':[(1,'icon.ico')]\n }],\n zipfile = None,\n\n # py2app flags\n app=['__main__.py'],\n data_files=[('', ['CITATION'])],\n setup_requires=['py2app'] if 'darwin' in p else [],\n\n # options\n # optimize is set to 1 of py2app to avoid errors with pymysql\n options = {'py2exe': {'bundle_files': 1,\n 'compressed': 2,\n 'optimize': 2,\n 'packages': packages,\n 'includes': includes,\n 'excludes': excludes,\n },\n 'py2app': {'packages': ['retriever'],\n 'includes': includes,\n 'site_packages': True,\n 'resources': [],\n 'optimize': 1,\n 'argv_emulation': True,\n 'no_chdir': True,\n },\n },\n )\n\n\ntry:\n from compile import compile\n compile()\nexcept:\n pass\n", "path": "setup.py"}]}
| 2,414 | 167 |
gh_patches_debug_24527
|
rasdani/github-patches
|
git_diff
|
cloud-custodian__cloud-custodian-7570
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NoneType Issue
I dropped a yaml file into a new OU/SubOU and its not working, though it works in other OUs just fine. Nothing was changed in the file but I am still getting this error, not sure why.
```
Traceback (most recent call last):
File "/root/.pyenv/versions/3.9.12/bin/custodian", line 8, in <module>
sys.exit(main())
File "/root/.pyenv/versions/3.9.12/lib/python3.9/site-packages/c7n/cli.py", line 363, in main
command(config)
File "/root/.pyenv/versions/3.9.12/lib/python3.9/site-packages/c7n/commands.py", line 219, in validate
structure.validate(data)
File "/root/.pyenv/versions/3.9.12/lib/python3.9/site-packages/c7n/structure.py", line 48, in validate
self.validate_policy(p)
File "/root/.pyenv/versions/3.9.12/lib/python3.9/site-packages/c7n/structure.py", line 78, in validate_policy
for a in p.get('actions', ()):
TypeError: 'NoneType' object is not iterable
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `c7n/structure.py`
Content:
```
1 # Copyright The Cloud Custodian Authors.
2 # SPDX-License-Identifier: Apache-2.0
3
4 import json
5
6 from c7n.exceptions import PolicyValidationError
7
8
9 class StructureParser:
10 """Provide fast validation and inspection of a policy file.
11
12 Intent is to provide more humane validation for top level errors
13 instead of printing full schema as error message.
14 """
15 allowed_file_keys = {'vars', 'policies'}
16 required_policy_keys = {'name', 'resource'}
17 allowed_policy_keys = {'name', 'resource', 'title', 'description', 'mode',
18 'tags', 'max-resources', 'metadata', 'query',
19 'filters', 'actions', 'source', 'conditions',
20 # legacy keys subject to deprecation.
21 'region', 'start', 'end', 'tz', 'max-resources-percent',
22 'comments', 'comment'}
23
24 def validate(self, data):
25 if not isinstance(data, dict):
26 raise PolicyValidationError((
27 "Policy file top level data structure "
28 "should be a mapping/dict, instead found:%s") % (
29 type(data).__name__))
30 dkeys = set(data.keys())
31
32 extra = dkeys.difference(self.allowed_file_keys)
33 if extra:
34 raise PolicyValidationError((
35 'Policy files top level keys are %s, found extra: %s' % (
36 ', '.join(self.allowed_file_keys),
37 ', '.join(extra))))
38
39 if 'policies' not in data:
40 raise PolicyValidationError("`policies` list missing")
41
42 pdata = data.get('policies', [])
43 if not isinstance(pdata, list):
44 raise PolicyValidationError((
45 '`policies` key should be an array/list found: %s' % (
46 type(pdata).__name__)))
47 for p in pdata:
48 self.validate_policy(p)
49
50 def validate_policy(self, p):
51 if not isinstance(p, dict):
52 raise PolicyValidationError((
53 'policy must be a dictionary/mapping found:%s policy:\n %s' % (
54 type(p).__name__, json.dumps(p, indent=2))))
55 pkeys = set(p)
56 if self.required_policy_keys.difference(pkeys):
57 raise PolicyValidationError(
58 'policy missing required keys (name, resource) data:\n %s' % (
59 json.dumps(p, indent=2)))
60 if pkeys.difference(self.allowed_policy_keys):
61 raise PolicyValidationError(
62 'policy:%s has unknown keys: %s' % (
63 p['name'], ','.join(pkeys.difference(self.allowed_policy_keys))))
64 if not isinstance(p.get('filters', []), (list, type(None))):
65 raise PolicyValidationError((
66 'policy:%s must use a list for filters found:%s' % (
67 p['name'], type(p['filters']).__name__)))
68 element_types = (dict, str)
69 for f in p.get('filters', ()):
70 if not isinstance(f, element_types):
71 raise PolicyValidationError((
72 'policy:%s filter must be a mapping/dict found:%s' % (
73 p.get('name', 'unknown'), type(f).__name__)))
74 if not isinstance(p.get('actions', []), (list, type(None))):
75 raise PolicyValidationError((
76 'policy:%s must use a list for actions found:%s' % (
77 p.get('name', 'unknown'), type(p['actions']).__name__)))
78 for a in p.get('actions', ()):
79 if not isinstance(a, element_types):
80 raise PolicyValidationError((
81 'policy:%s action must be a mapping/dict found:%s' % (
82 p.get('name', 'unknown'), type(a).__name__)))
83
84 def get_resource_types(self, data):
85 resources = set()
86 for p in data.get('policies', []):
87 rtype = p['resource']
88 if '.' not in rtype:
89 rtype = 'aws.%s' % rtype
90 resources.add(rtype)
91 return resources
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/c7n/structure.py b/c7n/structure.py
--- a/c7n/structure.py
+++ b/c7n/structure.py
@@ -66,7 +66,7 @@
'policy:%s must use a list for filters found:%s' % (
p['name'], type(p['filters']).__name__)))
element_types = (dict, str)
- for f in p.get('filters', ()):
+ for f in p.get('filters', ()) or []:
if not isinstance(f, element_types):
raise PolicyValidationError((
'policy:%s filter must be a mapping/dict found:%s' % (
@@ -75,7 +75,7 @@
raise PolicyValidationError((
'policy:%s must use a list for actions found:%s' % (
p.get('name', 'unknown'), type(p['actions']).__name__)))
- for a in p.get('actions', ()):
+ for a in p.get('actions', ()) or []:
if not isinstance(a, element_types):
raise PolicyValidationError((
'policy:%s action must be a mapping/dict found:%s' % (
|
{"golden_diff": "diff --git a/c7n/structure.py b/c7n/structure.py\n--- a/c7n/structure.py\n+++ b/c7n/structure.py\n@@ -66,7 +66,7 @@\n 'policy:%s must use a list for filters found:%s' % (\n p['name'], type(p['filters']).__name__)))\n element_types = (dict, str)\n- for f in p.get('filters', ()):\n+ for f in p.get('filters', ()) or []:\n if not isinstance(f, element_types):\n raise PolicyValidationError((\n 'policy:%s filter must be a mapping/dict found:%s' % (\n@@ -75,7 +75,7 @@\n raise PolicyValidationError((\n 'policy:%s must use a list for actions found:%s' % (\n p.get('name', 'unknown'), type(p['actions']).__name__)))\n- for a in p.get('actions', ()):\n+ for a in p.get('actions', ()) or []:\n if not isinstance(a, element_types):\n raise PolicyValidationError((\n 'policy:%s action must be a mapping/dict found:%s' % (\n", "issue": "NoneType Issue\nI dropped a yaml file into a new OU/SubOU and its not working, though it works in other OUs just fine. Nothing was changed in the file but I am still getting this error, not sure why.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/root/.pyenv/versions/3.9.12/bin/custodian\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/root/.pyenv/versions/3.9.12/lib/python3.9/site-packages/c7n/cli.py\", line 363, in main\r\n command(config)\r\n File \"/root/.pyenv/versions/3.9.12/lib/python3.9/site-packages/c7n/commands.py\", line 219, in validate\r\n structure.validate(data)\r\n File \"/root/.pyenv/versions/3.9.12/lib/python3.9/site-packages/c7n/structure.py\", line 48, in validate\r\n self.validate_policy(p)\r\n File \"/root/.pyenv/versions/3.9.12/lib/python3.9/site-packages/c7n/structure.py\", line 78, in validate_policy\r\n for a in p.get('actions', ()):\r\nTypeError: 'NoneType' object is not iterable\r\n```\n", "before_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\n\nimport json\n\nfrom c7n.exceptions import PolicyValidationError\n\n\nclass StructureParser:\n \"\"\"Provide fast validation and inspection of a policy file.\n\n Intent is to provide more humane validation for top level errors\n instead of printing full schema as error message.\n \"\"\"\n allowed_file_keys = {'vars', 'policies'}\n required_policy_keys = {'name', 'resource'}\n allowed_policy_keys = {'name', 'resource', 'title', 'description', 'mode',\n 'tags', 'max-resources', 'metadata', 'query',\n 'filters', 'actions', 'source', 'conditions',\n # legacy keys subject to deprecation.\n 'region', 'start', 'end', 'tz', 'max-resources-percent',\n 'comments', 'comment'}\n\n def validate(self, data):\n if not isinstance(data, dict):\n raise PolicyValidationError((\n \"Policy file top level data structure \"\n \"should be a mapping/dict, instead found:%s\") % (\n type(data).__name__))\n dkeys = set(data.keys())\n\n extra = dkeys.difference(self.allowed_file_keys)\n if extra:\n raise PolicyValidationError((\n 'Policy files top level keys are %s, found extra: %s' % (\n ', '.join(self.allowed_file_keys),\n ', '.join(extra))))\n\n if 'policies' not in data:\n raise PolicyValidationError(\"`policies` list missing\")\n\n pdata = data.get('policies', [])\n if not isinstance(pdata, list):\n raise PolicyValidationError((\n '`policies` key should be an array/list found: %s' % (\n type(pdata).__name__)))\n for p in pdata:\n self.validate_policy(p)\n\n def validate_policy(self, p):\n if not isinstance(p, dict):\n raise PolicyValidationError((\n 'policy must be a dictionary/mapping found:%s policy:\\n %s' % (\n type(p).__name__, json.dumps(p, indent=2))))\n pkeys = set(p)\n if self.required_policy_keys.difference(pkeys):\n raise PolicyValidationError(\n 'policy missing required keys (name, resource) data:\\n %s' % (\n json.dumps(p, indent=2)))\n if pkeys.difference(self.allowed_policy_keys):\n raise PolicyValidationError(\n 'policy:%s has unknown keys: %s' % (\n p['name'], ','.join(pkeys.difference(self.allowed_policy_keys))))\n if not isinstance(p.get('filters', []), (list, type(None))):\n raise PolicyValidationError((\n 'policy:%s must use a list for filters found:%s' % (\n p['name'], type(p['filters']).__name__)))\n element_types = (dict, str)\n for f in p.get('filters', ()):\n if not isinstance(f, element_types):\n raise PolicyValidationError((\n 'policy:%s filter must be a mapping/dict found:%s' % (\n p.get('name', 'unknown'), type(f).__name__)))\n if not isinstance(p.get('actions', []), (list, type(None))):\n raise PolicyValidationError((\n 'policy:%s must use a list for actions found:%s' % (\n p.get('name', 'unknown'), type(p['actions']).__name__)))\n for a in p.get('actions', ()):\n if not isinstance(a, element_types):\n raise PolicyValidationError((\n 'policy:%s action must be a mapping/dict found:%s' % (\n p.get('name', 'unknown'), type(a).__name__)))\n\n def get_resource_types(self, data):\n resources = set()\n for p in data.get('policies', []):\n rtype = p['resource']\n if '.' not in rtype:\n rtype = 'aws.%s' % rtype\n resources.add(rtype)\n return resources\n", "path": "c7n/structure.py"}], "after_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\n\nimport json\n\nfrom c7n.exceptions import PolicyValidationError\n\n\nclass StructureParser:\n \"\"\"Provide fast validation and inspection of a policy file.\n\n Intent is to provide more humane validation for top level errors\n instead of printing full schema as error message.\n \"\"\"\n allowed_file_keys = {'vars', 'policies'}\n required_policy_keys = {'name', 'resource'}\n allowed_policy_keys = {'name', 'resource', 'title', 'description', 'mode',\n 'tags', 'max-resources', 'metadata', 'query',\n 'filters', 'actions', 'source', 'conditions',\n # legacy keys subject to deprecation.\n 'region', 'start', 'end', 'tz', 'max-resources-percent',\n 'comments', 'comment'}\n\n def validate(self, data):\n if not isinstance(data, dict):\n raise PolicyValidationError((\n \"Policy file top level data structure \"\n \"should be a mapping/dict, instead found:%s\") % (\n type(data).__name__))\n dkeys = set(data.keys())\n\n extra = dkeys.difference(self.allowed_file_keys)\n if extra:\n raise PolicyValidationError((\n 'Policy files top level keys are %s, found extra: %s' % (\n ', '.join(self.allowed_file_keys),\n ', '.join(extra))))\n\n if 'policies' not in data:\n raise PolicyValidationError(\"`policies` list missing\")\n\n pdata = data.get('policies', [])\n if not isinstance(pdata, list):\n raise PolicyValidationError((\n '`policies` key should be an array/list found: %s' % (\n type(pdata).__name__)))\n for p in pdata:\n self.validate_policy(p)\n\n def validate_policy(self, p):\n if not isinstance(p, dict):\n raise PolicyValidationError((\n 'policy must be a dictionary/mapping found:%s policy:\\n %s' % (\n type(p).__name__, json.dumps(p, indent=2))))\n pkeys = set(p)\n if self.required_policy_keys.difference(pkeys):\n raise PolicyValidationError(\n 'policy missing required keys (name, resource) data:\\n %s' % (\n json.dumps(p, indent=2)))\n if pkeys.difference(self.allowed_policy_keys):\n raise PolicyValidationError(\n 'policy:%s has unknown keys: %s' % (\n p['name'], ','.join(pkeys.difference(self.allowed_policy_keys))))\n if not isinstance(p.get('filters', []), (list, type(None))):\n raise PolicyValidationError((\n 'policy:%s must use a list for filters found:%s' % (\n p['name'], type(p['filters']).__name__)))\n element_types = (dict, str)\n for f in p.get('filters', ()) or []:\n if not isinstance(f, element_types):\n raise PolicyValidationError((\n 'policy:%s filter must be a mapping/dict found:%s' % (\n p.get('name', 'unknown'), type(f).__name__)))\n if not isinstance(p.get('actions', []), (list, type(None))):\n raise PolicyValidationError((\n 'policy:%s must use a list for actions found:%s' % (\n p.get('name', 'unknown'), type(p['actions']).__name__)))\n for a in p.get('actions', ()) or []:\n if not isinstance(a, element_types):\n raise PolicyValidationError((\n 'policy:%s action must be a mapping/dict found:%s' % (\n p.get('name', 'unknown'), type(a).__name__)))\n\n def get_resource_types(self, data):\n resources = set()\n for p in data.get('policies', []):\n rtype = p['resource']\n if '.' not in rtype:\n rtype = 'aws.%s' % rtype\n resources.add(rtype)\n return resources\n", "path": "c7n/structure.py"}]}
| 1,557 | 254 |
gh_patches_debug_7875
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-875
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect parsing of complex urls in django
Sentry is parsing a complex URL as `/api/{version})/log` instead of `/api/{version}/log`.
<img width="207" alt="Screenshot 2020-10-17 at 10 40 47 AM" src="https://user-images.githubusercontent.com/4463796/96328987-70cb1c80-1066-11eb-94a4-ff8e15fb81ed.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/django/transactions.py`
Content:
```
1 """
2 Copied from raven-python. Used for
3 `DjangoIntegration(transaction_fron="raven_legacy")`.
4 """
5
6 from __future__ import absolute_import
7
8 import re
9
10 from sentry_sdk._types import MYPY
11
12 if MYPY:
13 from django.urls.resolvers import URLResolver
14 from typing import Dict
15 from typing import List
16 from typing import Optional
17 from django.urls.resolvers import URLPattern
18 from typing import Tuple
19 from typing import Union
20 from re import Pattern
21
22 try:
23 from django.urls import get_resolver
24 except ImportError:
25 from django.core.urlresolvers import get_resolver
26
27
28 def get_regex(resolver_or_pattern):
29 # type: (Union[URLPattern, URLResolver]) -> Pattern[str]
30 """Utility method for django's deprecated resolver.regex"""
31 try:
32 regex = resolver_or_pattern.regex
33 except AttributeError:
34 regex = resolver_or_pattern.pattern.regex
35 return regex
36
37
38 class RavenResolver(object):
39 _optional_group_matcher = re.compile(r"\(\?\:([^\)]+)\)")
40 _named_group_matcher = re.compile(r"\(\?P<(\w+)>[^\)]+\)")
41 _non_named_group_matcher = re.compile(r"\([^\)]+\)")
42 # [foo|bar|baz]
43 _either_option_matcher = re.compile(r"\[([^\]]+)\|([^\]]+)\]")
44 _camel_re = re.compile(r"([A-Z]+)([a-z])")
45
46 _cache = {} # type: Dict[URLPattern, str]
47
48 def _simplify(self, pattern):
49 # type: (str) -> str
50 r"""
51 Clean up urlpattern regexes into something readable by humans:
52
53 From:
54 > "^(?P<sport_slug>\w+)/athletes/(?P<athlete_slug>\w+)/$"
55
56 To:
57 > "{sport_slug}/athletes/{athlete_slug}/"
58 """
59 # remove optional params
60 # TODO(dcramer): it'd be nice to change these into [%s] but it currently
61 # conflicts with the other rules because we're doing regexp matches
62 # rather than parsing tokens
63 result = self._optional_group_matcher.sub(lambda m: "%s" % m.group(1), pattern)
64
65 # handle named groups first
66 result = self._named_group_matcher.sub(lambda m: "{%s}" % m.group(1), result)
67
68 # handle non-named groups
69 result = self._non_named_group_matcher.sub("{var}", result)
70
71 # handle optional params
72 result = self._either_option_matcher.sub(lambda m: m.group(1), result)
73
74 # clean up any outstanding regex-y characters.
75 result = (
76 result.replace("^", "")
77 .replace("$", "")
78 .replace("?", "")
79 .replace("//", "/")
80 .replace("\\", "")
81 )
82
83 return result
84
85 def _resolve(self, resolver, path, parents=None):
86 # type: (URLResolver, str, Optional[List[URLResolver]]) -> Optional[str]
87
88 match = get_regex(resolver).search(path) # Django < 2.0
89
90 if not match:
91 return None
92
93 if parents is None:
94 parents = [resolver]
95 elif resolver not in parents:
96 parents = parents + [resolver]
97
98 new_path = path[match.end() :]
99 for pattern in resolver.url_patterns:
100 # this is an include()
101 if not pattern.callback:
102 match_ = self._resolve(pattern, new_path, parents)
103 if match_:
104 return match_
105 continue
106 elif not get_regex(pattern).search(new_path):
107 continue
108
109 try:
110 return self._cache[pattern]
111 except KeyError:
112 pass
113
114 prefix = "".join(self._simplify(get_regex(p).pattern) for p in parents)
115 result = prefix + self._simplify(get_regex(pattern).pattern)
116 if not result.startswith("/"):
117 result = "/" + result
118 self._cache[pattern] = result
119 return result
120
121 return None
122
123 def resolve(
124 self,
125 path, # type: str
126 urlconf=None, # type: Union[None, Tuple[URLPattern, URLPattern, URLResolver], Tuple[URLPattern]]
127 ):
128 # type: (...) -> str
129 resolver = get_resolver(urlconf)
130 match = self._resolve(resolver, path)
131 return match or path
132
133
134 LEGACY_RESOLVER = RavenResolver()
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sentry_sdk/integrations/django/transactions.py b/sentry_sdk/integrations/django/transactions.py
--- a/sentry_sdk/integrations/django/transactions.py
+++ b/sentry_sdk/integrations/django/transactions.py
@@ -37,7 +37,7 @@
class RavenResolver(object):
_optional_group_matcher = re.compile(r"\(\?\:([^\)]+)\)")
- _named_group_matcher = re.compile(r"\(\?P<(\w+)>[^\)]+\)")
+ _named_group_matcher = re.compile(r"\(\?P<(\w+)>[^\)]+\)+")
_non_named_group_matcher = re.compile(r"\([^\)]+\)")
# [foo|bar|baz]
_either_option_matcher = re.compile(r"\[([^\]]+)\|([^\]]+)\]")
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/django/transactions.py b/sentry_sdk/integrations/django/transactions.py\n--- a/sentry_sdk/integrations/django/transactions.py\n+++ b/sentry_sdk/integrations/django/transactions.py\n@@ -37,7 +37,7 @@\n \n class RavenResolver(object):\n _optional_group_matcher = re.compile(r\"\\(\\?\\:([^\\)]+)\\)\")\n- _named_group_matcher = re.compile(r\"\\(\\?P<(\\w+)>[^\\)]+\\)\")\n+ _named_group_matcher = re.compile(r\"\\(\\?P<(\\w+)>[^\\)]+\\)+\")\n _non_named_group_matcher = re.compile(r\"\\([^\\)]+\\)\")\n # [foo|bar|baz]\n _either_option_matcher = re.compile(r\"\\[([^\\]]+)\\|([^\\]]+)\\]\")\n", "issue": "Incorrect parsing of complex urls in django\nSentry is parsing a complex URL as `/api/{version})/log` instead of `/api/{version}/log`.\r\n\r\n<img width=\"207\" alt=\"Screenshot 2020-10-17 at 10 40 47 AM\" src=\"https://user-images.githubusercontent.com/4463796/96328987-70cb1c80-1066-11eb-94a4-ff8e15fb81ed.png\">\r\n\n", "before_files": [{"content": "\"\"\"\nCopied from raven-python. Used for\n`DjangoIntegration(transaction_fron=\"raven_legacy\")`.\n\"\"\"\n\nfrom __future__ import absolute_import\n\nimport re\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from django.urls.resolvers import URLResolver\n from typing import Dict\n from typing import List\n from typing import Optional\n from django.urls.resolvers import URLPattern\n from typing import Tuple\n from typing import Union\n from re import Pattern\n\ntry:\n from django.urls import get_resolver\nexcept ImportError:\n from django.core.urlresolvers import get_resolver\n\n\ndef get_regex(resolver_or_pattern):\n # type: (Union[URLPattern, URLResolver]) -> Pattern[str]\n \"\"\"Utility method for django's deprecated resolver.regex\"\"\"\n try:\n regex = resolver_or_pattern.regex\n except AttributeError:\n regex = resolver_or_pattern.pattern.regex\n return regex\n\n\nclass RavenResolver(object):\n _optional_group_matcher = re.compile(r\"\\(\\?\\:([^\\)]+)\\)\")\n _named_group_matcher = re.compile(r\"\\(\\?P<(\\w+)>[^\\)]+\\)\")\n _non_named_group_matcher = re.compile(r\"\\([^\\)]+\\)\")\n # [foo|bar|baz]\n _either_option_matcher = re.compile(r\"\\[([^\\]]+)\\|([^\\]]+)\\]\")\n _camel_re = re.compile(r\"([A-Z]+)([a-z])\")\n\n _cache = {} # type: Dict[URLPattern, str]\n\n def _simplify(self, pattern):\n # type: (str) -> str\n r\"\"\"\n Clean up urlpattern regexes into something readable by humans:\n\n From:\n > \"^(?P<sport_slug>\\w+)/athletes/(?P<athlete_slug>\\w+)/$\"\n\n To:\n > \"{sport_slug}/athletes/{athlete_slug}/\"\n \"\"\"\n # remove optional params\n # TODO(dcramer): it'd be nice to change these into [%s] but it currently\n # conflicts with the other rules because we're doing regexp matches\n # rather than parsing tokens\n result = self._optional_group_matcher.sub(lambda m: \"%s\" % m.group(1), pattern)\n\n # handle named groups first\n result = self._named_group_matcher.sub(lambda m: \"{%s}\" % m.group(1), result)\n\n # handle non-named groups\n result = self._non_named_group_matcher.sub(\"{var}\", result)\n\n # handle optional params\n result = self._either_option_matcher.sub(lambda m: m.group(1), result)\n\n # clean up any outstanding regex-y characters.\n result = (\n result.replace(\"^\", \"\")\n .replace(\"$\", \"\")\n .replace(\"?\", \"\")\n .replace(\"//\", \"/\")\n .replace(\"\\\\\", \"\")\n )\n\n return result\n\n def _resolve(self, resolver, path, parents=None):\n # type: (URLResolver, str, Optional[List[URLResolver]]) -> Optional[str]\n\n match = get_regex(resolver).search(path) # Django < 2.0\n\n if not match:\n return None\n\n if parents is None:\n parents = [resolver]\n elif resolver not in parents:\n parents = parents + [resolver]\n\n new_path = path[match.end() :]\n for pattern in resolver.url_patterns:\n # this is an include()\n if not pattern.callback:\n match_ = self._resolve(pattern, new_path, parents)\n if match_:\n return match_\n continue\n elif not get_regex(pattern).search(new_path):\n continue\n\n try:\n return self._cache[pattern]\n except KeyError:\n pass\n\n prefix = \"\".join(self._simplify(get_regex(p).pattern) for p in parents)\n result = prefix + self._simplify(get_regex(pattern).pattern)\n if not result.startswith(\"/\"):\n result = \"/\" + result\n self._cache[pattern] = result\n return result\n\n return None\n\n def resolve(\n self,\n path, # type: str\n urlconf=None, # type: Union[None, Tuple[URLPattern, URLPattern, URLResolver], Tuple[URLPattern]]\n ):\n # type: (...) -> str\n resolver = get_resolver(urlconf)\n match = self._resolve(resolver, path)\n return match or path\n\n\nLEGACY_RESOLVER = RavenResolver()\n", "path": "sentry_sdk/integrations/django/transactions.py"}], "after_files": [{"content": "\"\"\"\nCopied from raven-python. Used for\n`DjangoIntegration(transaction_fron=\"raven_legacy\")`.\n\"\"\"\n\nfrom __future__ import absolute_import\n\nimport re\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from django.urls.resolvers import URLResolver\n from typing import Dict\n from typing import List\n from typing import Optional\n from django.urls.resolvers import URLPattern\n from typing import Tuple\n from typing import Union\n from re import Pattern\n\ntry:\n from django.urls import get_resolver\nexcept ImportError:\n from django.core.urlresolvers import get_resolver\n\n\ndef get_regex(resolver_or_pattern):\n # type: (Union[URLPattern, URLResolver]) -> Pattern[str]\n \"\"\"Utility method for django's deprecated resolver.regex\"\"\"\n try:\n regex = resolver_or_pattern.regex\n except AttributeError:\n regex = resolver_or_pattern.pattern.regex\n return regex\n\n\nclass RavenResolver(object):\n _optional_group_matcher = re.compile(r\"\\(\\?\\:([^\\)]+)\\)\")\n _named_group_matcher = re.compile(r\"\\(\\?P<(\\w+)>[^\\)]+\\)+\")\n _non_named_group_matcher = re.compile(r\"\\([^\\)]+\\)\")\n # [foo|bar|baz]\n _either_option_matcher = re.compile(r\"\\[([^\\]]+)\\|([^\\]]+)\\]\")\n _camel_re = re.compile(r\"([A-Z]+)([a-z])\")\n\n _cache = {} # type: Dict[URLPattern, str]\n\n def _simplify(self, pattern):\n # type: (str) -> str\n r\"\"\"\n Clean up urlpattern regexes into something readable by humans:\n\n From:\n > \"^(?P<sport_slug>\\w+)/athletes/(?P<athlete_slug>\\w+)/$\"\n\n To:\n > \"{sport_slug}/athletes/{athlete_slug}/\"\n \"\"\"\n # remove optional params\n # TODO(dcramer): it'd be nice to change these into [%s] but it currently\n # conflicts with the other rules because we're doing regexp matches\n # rather than parsing tokens\n result = self._optional_group_matcher.sub(lambda m: \"%s\" % m.group(1), pattern)\n\n # handle named groups first\n result = self._named_group_matcher.sub(lambda m: \"{%s}\" % m.group(1), result)\n\n # handle non-named groups\n result = self._non_named_group_matcher.sub(\"{var}\", result)\n\n # handle optional params\n result = self._either_option_matcher.sub(lambda m: m.group(1), result)\n\n # clean up any outstanding regex-y characters.\n result = (\n result.replace(\"^\", \"\")\n .replace(\"$\", \"\")\n .replace(\"?\", \"\")\n .replace(\"//\", \"/\")\n .replace(\"\\\\\", \"\")\n )\n\n return result\n\n def _resolve(self, resolver, path, parents=None):\n # type: (URLResolver, str, Optional[List[URLResolver]]) -> Optional[str]\n\n match = get_regex(resolver).search(path) # Django < 2.0\n\n if not match:\n return None\n\n if parents is None:\n parents = [resolver]\n elif resolver not in parents:\n parents = parents + [resolver]\n\n new_path = path[match.end() :]\n for pattern in resolver.url_patterns:\n # this is an include()\n if not pattern.callback:\n match_ = self._resolve(pattern, new_path, parents)\n if match_:\n return match_\n continue\n elif not get_regex(pattern).search(new_path):\n continue\n\n try:\n return self._cache[pattern]\n except KeyError:\n pass\n\n prefix = \"\".join(self._simplify(get_regex(p).pattern) for p in parents)\n result = prefix + self._simplify(get_regex(pattern).pattern)\n if not result.startswith(\"/\"):\n result = \"/\" + result\n self._cache[pattern] = result\n return result\n\n return None\n\n def resolve(\n self,\n path, # type: str\n urlconf=None, # type: Union[None, Tuple[URLPattern, URLPattern, URLResolver], Tuple[URLPattern]]\n ):\n # type: (...) -> str\n resolver = get_resolver(urlconf)\n match = self._resolve(resolver, path)\n return match or path\n\n\nLEGACY_RESOLVER = RavenResolver()\n", "path": "sentry_sdk/integrations/django/transactions.py"}]}
| 1,665 | 192 |
gh_patches_debug_19764
|
rasdani/github-patches
|
git_diff
|
feast-dev__feast-2686
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Parquet Schema Inference only supports File, not directory
When using a FileSource that is in Parquet format, if the source happens to be a directory of partitioned Parquet files, the following lines throw an error:
https://github.com/feast-dev/feast/blob/01d3568168bb9febb9fbda4988283b3886c32a31/sdk/python/feast/infra/offline_stores/file_source.py#L182-L184
`OSError: Expected file path, but /home/ubuntu/project/data/driver_stats_partitioned is a directory`
How to replicate:
1. Start with a demo feast project (`feast init`)
2. Create a partitioned Parquet Dataset. Use the following to create a dataset with only a single timestamp for inference
```
import pyarrow.parquet as pq
df = pq.read_table("./data/driver_stats.parquet")
df = df.drop(["created"])
pq.write_to_dataset(df, "./data/driver_stats_partitioned")
```
3. Update the file source in `example.py` to look like this:
```
driver_hourly_stats = FileSource(
path="/home/ubuntu/cado-feast/feature_store/exciting_sunbeam/data/driver_stats_partitioned2",
)
```
4. Run `feast apply`
For now, I've been able to fix by updating the above lines to:
```
schema = ParquetDataset(
path if filesystem is None else filesystem.open_input_file(path)
).schema.to_arrow_schema()
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/feast/infra/offline_stores/file_source.py`
Content:
```
1 import warnings
2 from typing import Callable, Dict, Iterable, List, Optional, Tuple
3
4 from pyarrow._fs import FileSystem
5 from pyarrow._s3fs import S3FileSystem
6 from pyarrow.parquet import ParquetFile
7
8 from feast import type_map
9 from feast.data_format import FileFormat, ParquetFormat
10 from feast.data_source import DataSource
11 from feast.feature_logging import LoggingDestination
12 from feast.protos.feast.core.DataSource_pb2 import DataSource as DataSourceProto
13 from feast.protos.feast.core.FeatureService_pb2 import (
14 LoggingConfig as LoggingConfigProto,
15 )
16 from feast.protos.feast.core.SavedDataset_pb2 import (
17 SavedDatasetStorage as SavedDatasetStorageProto,
18 )
19 from feast.repo_config import RepoConfig
20 from feast.saved_dataset import SavedDatasetStorage
21 from feast.value_type import ValueType
22
23
24 class FileSource(DataSource):
25 def __init__(
26 self,
27 *args,
28 path: Optional[str] = None,
29 event_timestamp_column: Optional[str] = "",
30 file_format: Optional[FileFormat] = None,
31 created_timestamp_column: Optional[str] = "",
32 field_mapping: Optional[Dict[str, str]] = None,
33 date_partition_column: Optional[str] = "",
34 s3_endpoint_override: Optional[str] = None,
35 name: Optional[str] = "",
36 description: Optional[str] = "",
37 tags: Optional[Dict[str, str]] = None,
38 owner: Optional[str] = "",
39 timestamp_field: Optional[str] = "",
40 ):
41 """Create a FileSource from a file containing feature data. Only Parquet format supported.
42
43 Args:
44
45 path: File path to file containing feature data. Must contain an event_timestamp column, entity columns and
46 feature columns.
47 event_timestamp_column(optional): (Deprecated) Event timestamp column used for point in time joins of feature values.
48 created_timestamp_column (optional): Timestamp column when row was created, used for deduplicating rows.
49 file_format (optional): Explicitly set the file format. Allows Feast to bypass inferring the file format.
50 field_mapping: A dictionary mapping of column names in this data source to feature names in a feature table
51 or view. Only used for feature columns, not entities or timestamp columns.
52 date_partition_column (optional): Timestamp column used for partitioning.
53 s3_endpoint_override (optional): Overrides AWS S3 enpoint with custom S3 storage
54 name (optional): Name for the file source. Defaults to the path.
55 description (optional): A human-readable description.
56 tags (optional): A dictionary of key-value pairs to store arbitrary metadata.
57 owner (optional): The owner of the file source, typically the email of the primary
58 maintainer.
59 timestamp_field (optional): Event timestamp foe;d used for point in time
60 joins of feature values.
61
62 Examples:
63 >>> from feast import FileSource
64 >>> file_source = FileSource(path="my_features.parquet", timestamp_field="event_timestamp")
65 """
66 positional_attributes = ["path"]
67 _path = path
68 if args:
69 if args:
70 warnings.warn(
71 (
72 "File Source parameters should be specified as a keyword argument instead of a positional arg."
73 "Feast 0.23+ will not support positional arguments to construct File sources"
74 ),
75 DeprecationWarning,
76 )
77 if len(args) > len(positional_attributes):
78 raise ValueError(
79 f"Only {', '.join(positional_attributes)} are allowed as positional args when defining "
80 f"File sources, for backwards compatibility."
81 )
82 if len(args) >= 1:
83 _path = args[0]
84 if _path is None:
85 raise ValueError(
86 'No "path" argument provided. Please set "path" to the location of your file source.'
87 )
88 self.file_options = FileOptions(
89 file_format=file_format,
90 uri=_path,
91 s3_endpoint_override=s3_endpoint_override,
92 )
93
94 if date_partition_column:
95 warnings.warn(
96 (
97 "The argument 'date_partition_column' is not supported for File sources."
98 "It will be removed in Feast 0.23+"
99 ),
100 DeprecationWarning,
101 )
102
103 super().__init__(
104 name=name if name else path,
105 event_timestamp_column=event_timestamp_column,
106 created_timestamp_column=created_timestamp_column,
107 field_mapping=field_mapping,
108 description=description,
109 tags=tags,
110 owner=owner,
111 timestamp_field=timestamp_field,
112 )
113
114 # Note: Python requires redefining hash in child classes that override __eq__
115 def __hash__(self):
116 return super().__hash__()
117
118 def __eq__(self, other):
119 if not isinstance(other, FileSource):
120 raise TypeError("Comparisons should only involve FileSource class objects.")
121
122 return (
123 super().__eq__(other)
124 and self.path == other.path
125 and self.file_options.file_format == other.file_options.file_format
126 and self.file_options.s3_endpoint_override
127 == other.file_options.s3_endpoint_override
128 )
129
130 @property
131 def path(self):
132 """
133 Returns the path of this file data source.
134 """
135 return self.file_options.uri
136
137 @staticmethod
138 def from_proto(data_source: DataSourceProto):
139 return FileSource(
140 name=data_source.name,
141 field_mapping=dict(data_source.field_mapping),
142 file_format=FileFormat.from_proto(data_source.file_options.file_format),
143 path=data_source.file_options.uri,
144 timestamp_field=data_source.timestamp_field,
145 created_timestamp_column=data_source.created_timestamp_column,
146 s3_endpoint_override=data_source.file_options.s3_endpoint_override,
147 description=data_source.description,
148 tags=dict(data_source.tags),
149 owner=data_source.owner,
150 )
151
152 def to_proto(self) -> DataSourceProto:
153 data_source_proto = DataSourceProto(
154 name=self.name,
155 type=DataSourceProto.BATCH_FILE,
156 field_mapping=self.field_mapping,
157 file_options=self.file_options.to_proto(),
158 description=self.description,
159 tags=self.tags,
160 owner=self.owner,
161 )
162
163 data_source_proto.timestamp_field = self.timestamp_field
164 data_source_proto.created_timestamp_column = self.created_timestamp_column
165
166 return data_source_proto
167
168 def validate(self, config: RepoConfig):
169 # TODO: validate a FileSource
170 pass
171
172 @staticmethod
173 def source_datatype_to_feast_value_type() -> Callable[[str], ValueType]:
174 return type_map.pa_to_feast_value_type
175
176 def get_table_column_names_and_types(
177 self, config: RepoConfig
178 ) -> Iterable[Tuple[str, str]]:
179 filesystem, path = FileSource.create_filesystem_and_path(
180 self.path, self.file_options.s3_endpoint_override
181 )
182 schema = ParquetFile(
183 path if filesystem is None else filesystem.open_input_file(path)
184 ).schema_arrow
185 return zip(schema.names, map(str, schema.types))
186
187 @staticmethod
188 def create_filesystem_and_path(
189 path: str, s3_endpoint_override: str
190 ) -> Tuple[Optional[FileSystem], str]:
191 if path.startswith("s3://"):
192 s3fs = S3FileSystem(
193 endpoint_override=s3_endpoint_override if s3_endpoint_override else None
194 )
195 return s3fs, path.replace("s3://", "")
196 else:
197 return None, path
198
199 def get_table_query_string(self) -> str:
200 pass
201
202
203 class FileOptions:
204 """
205 Configuration options for a file data source.
206 """
207
208 def __init__(
209 self,
210 file_format: Optional[FileFormat],
211 s3_endpoint_override: Optional[str],
212 uri: Optional[str],
213 ):
214 """
215 Initializes a FileOptions object.
216
217 Args:
218 file_format (optional): File source format, e.g. parquet.
219 s3_endpoint_override (optional): Custom s3 endpoint (used only with s3 uri).
220 uri (optional): File source url, e.g. s3:// or local file.
221 """
222 self.file_format = file_format
223 self.uri = uri or ""
224 self.s3_endpoint_override = s3_endpoint_override or ""
225
226 @classmethod
227 def from_proto(cls, file_options_proto: DataSourceProto.FileOptions):
228 """
229 Creates a FileOptions from a protobuf representation of a file option
230
231 Args:
232 file_options_proto: a protobuf representation of a datasource
233
234 Returns:
235 Returns a FileOptions object based on the file_options protobuf
236 """
237 file_options = cls(
238 file_format=FileFormat.from_proto(file_options_proto.file_format),
239 uri=file_options_proto.uri,
240 s3_endpoint_override=file_options_proto.s3_endpoint_override,
241 )
242 return file_options
243
244 def to_proto(self) -> DataSourceProto.FileOptions:
245 """
246 Converts an FileOptionsProto object to its protobuf representation.
247
248 Returns:
249 FileOptionsProto protobuf
250 """
251 file_options_proto = DataSourceProto.FileOptions(
252 file_format=(
253 None if self.file_format is None else self.file_format.to_proto()
254 ),
255 uri=self.uri,
256 s3_endpoint_override=self.s3_endpoint_override,
257 )
258
259 return file_options_proto
260
261
262 class SavedDatasetFileStorage(SavedDatasetStorage):
263 _proto_attr_name = "file_storage"
264
265 file_options: FileOptions
266
267 def __init__(
268 self,
269 path: str,
270 file_format: FileFormat = ParquetFormat(),
271 s3_endpoint_override: Optional[str] = None,
272 ):
273 self.file_options = FileOptions(
274 file_format=file_format,
275 s3_endpoint_override=s3_endpoint_override,
276 uri=path,
277 )
278
279 @staticmethod
280 def from_proto(storage_proto: SavedDatasetStorageProto) -> SavedDatasetStorage:
281 file_options = FileOptions.from_proto(storage_proto.file_storage)
282 return SavedDatasetFileStorage(
283 path=file_options.uri,
284 file_format=file_options.file_format,
285 s3_endpoint_override=file_options.s3_endpoint_override,
286 )
287
288 def to_proto(self) -> SavedDatasetStorageProto:
289 return SavedDatasetStorageProto(file_storage=self.file_options.to_proto())
290
291 def to_data_source(self) -> DataSource:
292 return FileSource(
293 path=self.file_options.uri,
294 file_format=self.file_options.file_format,
295 s3_endpoint_override=self.file_options.s3_endpoint_override,
296 )
297
298
299 class FileLoggingDestination(LoggingDestination):
300 _proto_kind = "file_destination"
301
302 path: str
303 s3_endpoint_override: str
304 partition_by: Optional[List[str]]
305
306 def __init__(
307 self,
308 *,
309 path: str,
310 s3_endpoint_override="",
311 partition_by: Optional[List[str]] = None,
312 ):
313 self.path = path
314 self.s3_endpoint_override = s3_endpoint_override
315 self.partition_by = partition_by
316
317 @classmethod
318 def from_proto(cls, config_proto: LoggingConfigProto) -> "LoggingDestination":
319 return FileLoggingDestination(
320 path=config_proto.file_destination.path,
321 s3_endpoint_override=config_proto.file_destination.s3_endpoint_override,
322 partition_by=list(config_proto.file_destination.partition_by)
323 if config_proto.file_destination.partition_by
324 else None,
325 )
326
327 def to_proto(self) -> LoggingConfigProto:
328 return LoggingConfigProto(
329 file_destination=LoggingConfigProto.FileDestination(
330 path=self.path,
331 s3_endpoint_override=self.s3_endpoint_override,
332 partition_by=self.partition_by,
333 )
334 )
335
336 def to_data_source(self) -> DataSource:
337 return FileSource(
338 path=self.path,
339 file_format=ParquetFormat(),
340 s3_endpoint_override=self.s3_endpoint_override,
341 )
342
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sdk/python/feast/infra/offline_stores/file_source.py b/sdk/python/feast/infra/offline_stores/file_source.py
--- a/sdk/python/feast/infra/offline_stores/file_source.py
+++ b/sdk/python/feast/infra/offline_stores/file_source.py
@@ -3,7 +3,7 @@
from pyarrow._fs import FileSystem
from pyarrow._s3fs import S3FileSystem
-from pyarrow.parquet import ParquetFile
+from pyarrow.parquet import ParquetDataset
from feast import type_map
from feast.data_format import FileFormat, ParquetFormat
@@ -179,9 +179,9 @@
filesystem, path = FileSource.create_filesystem_and_path(
self.path, self.file_options.s3_endpoint_override
)
- schema = ParquetFile(
+ schema = ParquetDataset(
path if filesystem is None else filesystem.open_input_file(path)
- ).schema_arrow
+ ).schema.to_arrow_schema()
return zip(schema.names, map(str, schema.types))
@staticmethod
|
{"golden_diff": "diff --git a/sdk/python/feast/infra/offline_stores/file_source.py b/sdk/python/feast/infra/offline_stores/file_source.py\n--- a/sdk/python/feast/infra/offline_stores/file_source.py\n+++ b/sdk/python/feast/infra/offline_stores/file_source.py\n@@ -3,7 +3,7 @@\n \n from pyarrow._fs import FileSystem\n from pyarrow._s3fs import S3FileSystem\n-from pyarrow.parquet import ParquetFile\n+from pyarrow.parquet import ParquetDataset\n \n from feast import type_map\n from feast.data_format import FileFormat, ParquetFormat\n@@ -179,9 +179,9 @@\n filesystem, path = FileSource.create_filesystem_and_path(\n self.path, self.file_options.s3_endpoint_override\n )\n- schema = ParquetFile(\n+ schema = ParquetDataset(\n path if filesystem is None else filesystem.open_input_file(path)\n- ).schema_arrow\n+ ).schema.to_arrow_schema()\n return zip(schema.names, map(str, schema.types))\n \n @staticmethod\n", "issue": "Parquet Schema Inference only supports File, not directory\nWhen using a FileSource that is in Parquet format, if the source happens to be a directory of partitioned Parquet files, the following lines throw an error:\r\n\r\nhttps://github.com/feast-dev/feast/blob/01d3568168bb9febb9fbda4988283b3886c32a31/sdk/python/feast/infra/offline_stores/file_source.py#L182-L184\r\n\r\n`OSError: Expected file path, but /home/ubuntu/project/data/driver_stats_partitioned is a directory`\r\n\r\nHow to replicate:\r\n\r\n1. Start with a demo feast project (`feast init`)\r\n2. Create a partitioned Parquet Dataset. Use the following to create a dataset with only a single timestamp for inference\r\n```\r\nimport pyarrow.parquet as pq\r\ndf = pq.read_table(\"./data/driver_stats.parquet\")\r\ndf = df.drop([\"created\"])\r\npq.write_to_dataset(df, \"./data/driver_stats_partitioned\")\r\n```\r\n3. Update the file source in `example.py` to look like this:\r\n```\r\ndriver_hourly_stats = FileSource(\r\n path=\"/home/ubuntu/cado-feast/feature_store/exciting_sunbeam/data/driver_stats_partitioned2\",\r\n)\r\n```\r\n\r\n4. Run `feast apply`\r\nFor now, I've been able to fix by updating the above lines to:\r\n```\r\nschema = ParquetDataset(\r\n path if filesystem is None else filesystem.open_input_file(path)\r\n).schema.to_arrow_schema()\r\n```\n", "before_files": [{"content": "import warnings\nfrom typing import Callable, Dict, Iterable, List, Optional, Tuple\n\nfrom pyarrow._fs import FileSystem\nfrom pyarrow._s3fs import S3FileSystem\nfrom pyarrow.parquet import ParquetFile\n\nfrom feast import type_map\nfrom feast.data_format import FileFormat, ParquetFormat\nfrom feast.data_source import DataSource\nfrom feast.feature_logging import LoggingDestination\nfrom feast.protos.feast.core.DataSource_pb2 import DataSource as DataSourceProto\nfrom feast.protos.feast.core.FeatureService_pb2 import (\n LoggingConfig as LoggingConfigProto,\n)\nfrom feast.protos.feast.core.SavedDataset_pb2 import (\n SavedDatasetStorage as SavedDatasetStorageProto,\n)\nfrom feast.repo_config import RepoConfig\nfrom feast.saved_dataset import SavedDatasetStorage\nfrom feast.value_type import ValueType\n\n\nclass FileSource(DataSource):\n def __init__(\n self,\n *args,\n path: Optional[str] = None,\n event_timestamp_column: Optional[str] = \"\",\n file_format: Optional[FileFormat] = None,\n created_timestamp_column: Optional[str] = \"\",\n field_mapping: Optional[Dict[str, str]] = None,\n date_partition_column: Optional[str] = \"\",\n s3_endpoint_override: Optional[str] = None,\n name: Optional[str] = \"\",\n description: Optional[str] = \"\",\n tags: Optional[Dict[str, str]] = None,\n owner: Optional[str] = \"\",\n timestamp_field: Optional[str] = \"\",\n ):\n \"\"\"Create a FileSource from a file containing feature data. Only Parquet format supported.\n\n Args:\n\n path: File path to file containing feature data. Must contain an event_timestamp column, entity columns and\n feature columns.\n event_timestamp_column(optional): (Deprecated) Event timestamp column used for point in time joins of feature values.\n created_timestamp_column (optional): Timestamp column when row was created, used for deduplicating rows.\n file_format (optional): Explicitly set the file format. Allows Feast to bypass inferring the file format.\n field_mapping: A dictionary mapping of column names in this data source to feature names in a feature table\n or view. Only used for feature columns, not entities or timestamp columns.\n date_partition_column (optional): Timestamp column used for partitioning.\n s3_endpoint_override (optional): Overrides AWS S3 enpoint with custom S3 storage\n name (optional): Name for the file source. Defaults to the path.\n description (optional): A human-readable description.\n tags (optional): A dictionary of key-value pairs to store arbitrary metadata.\n owner (optional): The owner of the file source, typically the email of the primary\n maintainer.\n timestamp_field (optional): Event timestamp foe;d used for point in time\n joins of feature values.\n\n Examples:\n >>> from feast import FileSource\n >>> file_source = FileSource(path=\"my_features.parquet\", timestamp_field=\"event_timestamp\")\n \"\"\"\n positional_attributes = [\"path\"]\n _path = path\n if args:\n if args:\n warnings.warn(\n (\n \"File Source parameters should be specified as a keyword argument instead of a positional arg.\"\n \"Feast 0.23+ will not support positional arguments to construct File sources\"\n ),\n DeprecationWarning,\n )\n if len(args) > len(positional_attributes):\n raise ValueError(\n f\"Only {', '.join(positional_attributes)} are allowed as positional args when defining \"\n f\"File sources, for backwards compatibility.\"\n )\n if len(args) >= 1:\n _path = args[0]\n if _path is None:\n raise ValueError(\n 'No \"path\" argument provided. Please set \"path\" to the location of your file source.'\n )\n self.file_options = FileOptions(\n file_format=file_format,\n uri=_path,\n s3_endpoint_override=s3_endpoint_override,\n )\n\n if date_partition_column:\n warnings.warn(\n (\n \"The argument 'date_partition_column' is not supported for File sources.\"\n \"It will be removed in Feast 0.23+\"\n ),\n DeprecationWarning,\n )\n\n super().__init__(\n name=name if name else path,\n event_timestamp_column=event_timestamp_column,\n created_timestamp_column=created_timestamp_column,\n field_mapping=field_mapping,\n description=description,\n tags=tags,\n owner=owner,\n timestamp_field=timestamp_field,\n )\n\n # Note: Python requires redefining hash in child classes that override __eq__\n def __hash__(self):\n return super().__hash__()\n\n def __eq__(self, other):\n if not isinstance(other, FileSource):\n raise TypeError(\"Comparisons should only involve FileSource class objects.\")\n\n return (\n super().__eq__(other)\n and self.path == other.path\n and self.file_options.file_format == other.file_options.file_format\n and self.file_options.s3_endpoint_override\n == other.file_options.s3_endpoint_override\n )\n\n @property\n def path(self):\n \"\"\"\n Returns the path of this file data source.\n \"\"\"\n return self.file_options.uri\n\n @staticmethod\n def from_proto(data_source: DataSourceProto):\n return FileSource(\n name=data_source.name,\n field_mapping=dict(data_source.field_mapping),\n file_format=FileFormat.from_proto(data_source.file_options.file_format),\n path=data_source.file_options.uri,\n timestamp_field=data_source.timestamp_field,\n created_timestamp_column=data_source.created_timestamp_column,\n s3_endpoint_override=data_source.file_options.s3_endpoint_override,\n description=data_source.description,\n tags=dict(data_source.tags),\n owner=data_source.owner,\n )\n\n def to_proto(self) -> DataSourceProto:\n data_source_proto = DataSourceProto(\n name=self.name,\n type=DataSourceProto.BATCH_FILE,\n field_mapping=self.field_mapping,\n file_options=self.file_options.to_proto(),\n description=self.description,\n tags=self.tags,\n owner=self.owner,\n )\n\n data_source_proto.timestamp_field = self.timestamp_field\n data_source_proto.created_timestamp_column = self.created_timestamp_column\n\n return data_source_proto\n\n def validate(self, config: RepoConfig):\n # TODO: validate a FileSource\n pass\n\n @staticmethod\n def source_datatype_to_feast_value_type() -> Callable[[str], ValueType]:\n return type_map.pa_to_feast_value_type\n\n def get_table_column_names_and_types(\n self, config: RepoConfig\n ) -> Iterable[Tuple[str, str]]:\n filesystem, path = FileSource.create_filesystem_and_path(\n self.path, self.file_options.s3_endpoint_override\n )\n schema = ParquetFile(\n path if filesystem is None else filesystem.open_input_file(path)\n ).schema_arrow\n return zip(schema.names, map(str, schema.types))\n\n @staticmethod\n def create_filesystem_and_path(\n path: str, s3_endpoint_override: str\n ) -> Tuple[Optional[FileSystem], str]:\n if path.startswith(\"s3://\"):\n s3fs = S3FileSystem(\n endpoint_override=s3_endpoint_override if s3_endpoint_override else None\n )\n return s3fs, path.replace(\"s3://\", \"\")\n else:\n return None, path\n\n def get_table_query_string(self) -> str:\n pass\n\n\nclass FileOptions:\n \"\"\"\n Configuration options for a file data source.\n \"\"\"\n\n def __init__(\n self,\n file_format: Optional[FileFormat],\n s3_endpoint_override: Optional[str],\n uri: Optional[str],\n ):\n \"\"\"\n Initializes a FileOptions object.\n\n Args:\n file_format (optional): File source format, e.g. parquet.\n s3_endpoint_override (optional): Custom s3 endpoint (used only with s3 uri).\n uri (optional): File source url, e.g. s3:// or local file.\n \"\"\"\n self.file_format = file_format\n self.uri = uri or \"\"\n self.s3_endpoint_override = s3_endpoint_override or \"\"\n\n @classmethod\n def from_proto(cls, file_options_proto: DataSourceProto.FileOptions):\n \"\"\"\n Creates a FileOptions from a protobuf representation of a file option\n\n Args:\n file_options_proto: a protobuf representation of a datasource\n\n Returns:\n Returns a FileOptions object based on the file_options protobuf\n \"\"\"\n file_options = cls(\n file_format=FileFormat.from_proto(file_options_proto.file_format),\n uri=file_options_proto.uri,\n s3_endpoint_override=file_options_proto.s3_endpoint_override,\n )\n return file_options\n\n def to_proto(self) -> DataSourceProto.FileOptions:\n \"\"\"\n Converts an FileOptionsProto object to its protobuf representation.\n\n Returns:\n FileOptionsProto protobuf\n \"\"\"\n file_options_proto = DataSourceProto.FileOptions(\n file_format=(\n None if self.file_format is None else self.file_format.to_proto()\n ),\n uri=self.uri,\n s3_endpoint_override=self.s3_endpoint_override,\n )\n\n return file_options_proto\n\n\nclass SavedDatasetFileStorage(SavedDatasetStorage):\n _proto_attr_name = \"file_storage\"\n\n file_options: FileOptions\n\n def __init__(\n self,\n path: str,\n file_format: FileFormat = ParquetFormat(),\n s3_endpoint_override: Optional[str] = None,\n ):\n self.file_options = FileOptions(\n file_format=file_format,\n s3_endpoint_override=s3_endpoint_override,\n uri=path,\n )\n\n @staticmethod\n def from_proto(storage_proto: SavedDatasetStorageProto) -> SavedDatasetStorage:\n file_options = FileOptions.from_proto(storage_proto.file_storage)\n return SavedDatasetFileStorage(\n path=file_options.uri,\n file_format=file_options.file_format,\n s3_endpoint_override=file_options.s3_endpoint_override,\n )\n\n def to_proto(self) -> SavedDatasetStorageProto:\n return SavedDatasetStorageProto(file_storage=self.file_options.to_proto())\n\n def to_data_source(self) -> DataSource:\n return FileSource(\n path=self.file_options.uri,\n file_format=self.file_options.file_format,\n s3_endpoint_override=self.file_options.s3_endpoint_override,\n )\n\n\nclass FileLoggingDestination(LoggingDestination):\n _proto_kind = \"file_destination\"\n\n path: str\n s3_endpoint_override: str\n partition_by: Optional[List[str]]\n\n def __init__(\n self,\n *,\n path: str,\n s3_endpoint_override=\"\",\n partition_by: Optional[List[str]] = None,\n ):\n self.path = path\n self.s3_endpoint_override = s3_endpoint_override\n self.partition_by = partition_by\n\n @classmethod\n def from_proto(cls, config_proto: LoggingConfigProto) -> \"LoggingDestination\":\n return FileLoggingDestination(\n path=config_proto.file_destination.path,\n s3_endpoint_override=config_proto.file_destination.s3_endpoint_override,\n partition_by=list(config_proto.file_destination.partition_by)\n if config_proto.file_destination.partition_by\n else None,\n )\n\n def to_proto(self) -> LoggingConfigProto:\n return LoggingConfigProto(\n file_destination=LoggingConfigProto.FileDestination(\n path=self.path,\n s3_endpoint_override=self.s3_endpoint_override,\n partition_by=self.partition_by,\n )\n )\n\n def to_data_source(self) -> DataSource:\n return FileSource(\n path=self.path,\n file_format=ParquetFormat(),\n s3_endpoint_override=self.s3_endpoint_override,\n )\n", "path": "sdk/python/feast/infra/offline_stores/file_source.py"}], "after_files": [{"content": "import warnings\nfrom typing import Callable, Dict, Iterable, List, Optional, Tuple\n\nfrom pyarrow._fs import FileSystem\nfrom pyarrow._s3fs import S3FileSystem\nfrom pyarrow.parquet import ParquetDataset\n\nfrom feast import type_map\nfrom feast.data_format import FileFormat, ParquetFormat\nfrom feast.data_source import DataSource\nfrom feast.feature_logging import LoggingDestination\nfrom feast.protos.feast.core.DataSource_pb2 import DataSource as DataSourceProto\nfrom feast.protos.feast.core.FeatureService_pb2 import (\n LoggingConfig as LoggingConfigProto,\n)\nfrom feast.protos.feast.core.SavedDataset_pb2 import (\n SavedDatasetStorage as SavedDatasetStorageProto,\n)\nfrom feast.repo_config import RepoConfig\nfrom feast.saved_dataset import SavedDatasetStorage\nfrom feast.value_type import ValueType\n\n\nclass FileSource(DataSource):\n def __init__(\n self,\n *args,\n path: Optional[str] = None,\n event_timestamp_column: Optional[str] = \"\",\n file_format: Optional[FileFormat] = None,\n created_timestamp_column: Optional[str] = \"\",\n field_mapping: Optional[Dict[str, str]] = None,\n date_partition_column: Optional[str] = \"\",\n s3_endpoint_override: Optional[str] = None,\n name: Optional[str] = \"\",\n description: Optional[str] = \"\",\n tags: Optional[Dict[str, str]] = None,\n owner: Optional[str] = \"\",\n timestamp_field: Optional[str] = \"\",\n ):\n \"\"\"Create a FileSource from a file containing feature data. Only Parquet format supported.\n\n Args:\n\n path: File path to file containing feature data. Must contain an event_timestamp column, entity columns and\n feature columns.\n event_timestamp_column(optional): (Deprecated) Event timestamp column used for point in time joins of feature values.\n created_timestamp_column (optional): Timestamp column when row was created, used for deduplicating rows.\n file_format (optional): Explicitly set the file format. Allows Feast to bypass inferring the file format.\n field_mapping: A dictionary mapping of column names in this data source to feature names in a feature table\n or view. Only used for feature columns, not entities or timestamp columns.\n date_partition_column (optional): Timestamp column used for partitioning.\n s3_endpoint_override (optional): Overrides AWS S3 enpoint with custom S3 storage\n name (optional): Name for the file source. Defaults to the path.\n description (optional): A human-readable description.\n tags (optional): A dictionary of key-value pairs to store arbitrary metadata.\n owner (optional): The owner of the file source, typically the email of the primary\n maintainer.\n timestamp_field (optional): Event timestamp foe;d used for point in time\n joins of feature values.\n\n Examples:\n >>> from feast import FileSource\n >>> file_source = FileSource(path=\"my_features.parquet\", timestamp_field=\"event_timestamp\")\n \"\"\"\n positional_attributes = [\"path\"]\n _path = path\n if args:\n if args:\n warnings.warn(\n (\n \"File Source parameters should be specified as a keyword argument instead of a positional arg.\"\n \"Feast 0.23+ will not support positional arguments to construct File sources\"\n ),\n DeprecationWarning,\n )\n if len(args) > len(positional_attributes):\n raise ValueError(\n f\"Only {', '.join(positional_attributes)} are allowed as positional args when defining \"\n f\"File sources, for backwards compatibility.\"\n )\n if len(args) >= 1:\n _path = args[0]\n if _path is None:\n raise ValueError(\n 'No \"path\" argument provided. Please set \"path\" to the location of your file source.'\n )\n self.file_options = FileOptions(\n file_format=file_format,\n uri=_path,\n s3_endpoint_override=s3_endpoint_override,\n )\n\n if date_partition_column:\n warnings.warn(\n (\n \"The argument 'date_partition_column' is not supported for File sources.\"\n \"It will be removed in Feast 0.23+\"\n ),\n DeprecationWarning,\n )\n\n super().__init__(\n name=name if name else path,\n event_timestamp_column=event_timestamp_column,\n created_timestamp_column=created_timestamp_column,\n field_mapping=field_mapping,\n description=description,\n tags=tags,\n owner=owner,\n timestamp_field=timestamp_field,\n )\n\n # Note: Python requires redefining hash in child classes that override __eq__\n def __hash__(self):\n return super().__hash__()\n\n def __eq__(self, other):\n if not isinstance(other, FileSource):\n raise TypeError(\"Comparisons should only involve FileSource class objects.\")\n\n return (\n super().__eq__(other)\n and self.path == other.path\n and self.file_options.file_format == other.file_options.file_format\n and self.file_options.s3_endpoint_override\n == other.file_options.s3_endpoint_override\n )\n\n @property\n def path(self):\n \"\"\"\n Returns the path of this file data source.\n \"\"\"\n return self.file_options.uri\n\n @staticmethod\n def from_proto(data_source: DataSourceProto):\n return FileSource(\n name=data_source.name,\n field_mapping=dict(data_source.field_mapping),\n file_format=FileFormat.from_proto(data_source.file_options.file_format),\n path=data_source.file_options.uri,\n timestamp_field=data_source.timestamp_field,\n created_timestamp_column=data_source.created_timestamp_column,\n s3_endpoint_override=data_source.file_options.s3_endpoint_override,\n description=data_source.description,\n tags=dict(data_source.tags),\n owner=data_source.owner,\n )\n\n def to_proto(self) -> DataSourceProto:\n data_source_proto = DataSourceProto(\n name=self.name,\n type=DataSourceProto.BATCH_FILE,\n field_mapping=self.field_mapping,\n file_options=self.file_options.to_proto(),\n description=self.description,\n tags=self.tags,\n owner=self.owner,\n )\n\n data_source_proto.timestamp_field = self.timestamp_field\n data_source_proto.created_timestamp_column = self.created_timestamp_column\n\n return data_source_proto\n\n def validate(self, config: RepoConfig):\n # TODO: validate a FileSource\n pass\n\n @staticmethod\n def source_datatype_to_feast_value_type() -> Callable[[str], ValueType]:\n return type_map.pa_to_feast_value_type\n\n def get_table_column_names_and_types(\n self, config: RepoConfig\n ) -> Iterable[Tuple[str, str]]:\n filesystem, path = FileSource.create_filesystem_and_path(\n self.path, self.file_options.s3_endpoint_override\n )\n schema = ParquetDataset(\n path if filesystem is None else filesystem.open_input_file(path)\n ).schema.to_arrow_schema()\n return zip(schema.names, map(str, schema.types))\n\n @staticmethod\n def create_filesystem_and_path(\n path: str, s3_endpoint_override: str\n ) -> Tuple[Optional[FileSystem], str]:\n if path.startswith(\"s3://\"):\n s3fs = S3FileSystem(\n endpoint_override=s3_endpoint_override if s3_endpoint_override else None\n )\n return s3fs, path.replace(\"s3://\", \"\")\n else:\n return None, path\n\n def get_table_query_string(self) -> str:\n pass\n\n\nclass FileOptions:\n \"\"\"\n Configuration options for a file data source.\n \"\"\"\n\n def __init__(\n self,\n file_format: Optional[FileFormat],\n s3_endpoint_override: Optional[str],\n uri: Optional[str],\n ):\n \"\"\"\n Initializes a FileOptions object.\n\n Args:\n file_format (optional): File source format, e.g. parquet.\n s3_endpoint_override (optional): Custom s3 endpoint (used only with s3 uri).\n uri (optional): File source url, e.g. s3:// or local file.\n \"\"\"\n self.file_format = file_format\n self.uri = uri or \"\"\n self.s3_endpoint_override = s3_endpoint_override or \"\"\n\n @classmethod\n def from_proto(cls, file_options_proto: DataSourceProto.FileOptions):\n \"\"\"\n Creates a FileOptions from a protobuf representation of a file option\n\n Args:\n file_options_proto: a protobuf representation of a datasource\n\n Returns:\n Returns a FileOptions object based on the file_options protobuf\n \"\"\"\n file_options = cls(\n file_format=FileFormat.from_proto(file_options_proto.file_format),\n uri=file_options_proto.uri,\n s3_endpoint_override=file_options_proto.s3_endpoint_override,\n )\n return file_options\n\n def to_proto(self) -> DataSourceProto.FileOptions:\n \"\"\"\n Converts an FileOptionsProto object to its protobuf representation.\n\n Returns:\n FileOptionsProto protobuf\n \"\"\"\n file_options_proto = DataSourceProto.FileOptions(\n file_format=(\n None if self.file_format is None else self.file_format.to_proto()\n ),\n uri=self.uri,\n s3_endpoint_override=self.s3_endpoint_override,\n )\n\n return file_options_proto\n\n\nclass SavedDatasetFileStorage(SavedDatasetStorage):\n _proto_attr_name = \"file_storage\"\n\n file_options: FileOptions\n\n def __init__(\n self,\n path: str,\n file_format: FileFormat = ParquetFormat(),\n s3_endpoint_override: Optional[str] = None,\n ):\n self.file_options = FileOptions(\n file_format=file_format,\n s3_endpoint_override=s3_endpoint_override,\n uri=path,\n )\n\n @staticmethod\n def from_proto(storage_proto: SavedDatasetStorageProto) -> SavedDatasetStorage:\n file_options = FileOptions.from_proto(storage_proto.file_storage)\n return SavedDatasetFileStorage(\n path=file_options.uri,\n file_format=file_options.file_format,\n s3_endpoint_override=file_options.s3_endpoint_override,\n )\n\n def to_proto(self) -> SavedDatasetStorageProto:\n return SavedDatasetStorageProto(file_storage=self.file_options.to_proto())\n\n def to_data_source(self) -> DataSource:\n return FileSource(\n path=self.file_options.uri,\n file_format=self.file_options.file_format,\n s3_endpoint_override=self.file_options.s3_endpoint_override,\n )\n\n\nclass FileLoggingDestination(LoggingDestination):\n _proto_kind = \"file_destination\"\n\n path: str\n s3_endpoint_override: str\n partition_by: Optional[List[str]]\n\n def __init__(\n self,\n *,\n path: str,\n s3_endpoint_override=\"\",\n partition_by: Optional[List[str]] = None,\n ):\n self.path = path\n self.s3_endpoint_override = s3_endpoint_override\n self.partition_by = partition_by\n\n @classmethod\n def from_proto(cls, config_proto: LoggingConfigProto) -> \"LoggingDestination\":\n return FileLoggingDestination(\n path=config_proto.file_destination.path,\n s3_endpoint_override=config_proto.file_destination.s3_endpoint_override,\n partition_by=list(config_proto.file_destination.partition_by)\n if config_proto.file_destination.partition_by\n else None,\n )\n\n def to_proto(self) -> LoggingConfigProto:\n return LoggingConfigProto(\n file_destination=LoggingConfigProto.FileDestination(\n path=self.path,\n s3_endpoint_override=self.s3_endpoint_override,\n partition_by=self.partition_by,\n )\n )\n\n def to_data_source(self) -> DataSource:\n return FileSource(\n path=self.path,\n file_format=ParquetFormat(),\n s3_endpoint_override=self.s3_endpoint_override,\n )\n", "path": "sdk/python/feast/infra/offline_stores/file_source.py"}]}
| 3,973 | 239 |
gh_patches_debug_26851
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-1620
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FILMON HLS Problem with Sat1
- [x] This is a bug report.
### Description
The Channel Sat1 on Filmon can not be played with Streamlink
### Expected / Actual behavior
Normally all streams from FilmOn can be played via the streamlink. The channel Sat1 is played via the HLS protocol
### Reproduction steps / Explicit stream URLs to test
http://www.filmon.com/tv/sat-1-schweiz
### Logs
```
127.0.0.1 - - [25/Mar/2018 21:23:39] "GET /http://www.filmon.com/tv/rts-deux HTTP/1.1" 200 -
127.0.0.1 - - [25/Mar/2018 21:23:39] URL: http://www.filmon.com/tv/rts-deux Quality: best
[streamlinksrv][info] Streams:
[u'low', u'high', 'worst', 'best']
127.0.0.1 - - [25/Mar/2018 21:23:45] "GET /http://www.filmon.com/tv/sat-1-schweiz HTTP/1.1" 200 -
127.0.0.1 - - [25/Mar/2018 21:23:45] URL: http://www.filmon.com/tv/sat-1-schweiz Quality: best
[streamlinksrv][error] Plugin error: Unable to open URL: http://www.filmon.com/api-v2/channel/sat-1-schweiz?protocol=hls (404 Client Error: Not Found for url: http://www.filmon.com/api-v2/channel/sat-1-schweiz?protocol=hls)
[streamlinksrv][info] Closing currently open stream...
[streamlinksrv][error] Got exception: End Of Data!
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/filmon.py`
Content:
```
1 import re
2
3 import time
4
5 from streamlink import StreamError
6 from streamlink.plugin import Plugin
7 from streamlink.plugin.api import http, validate
8 from streamlink.stream import HLSStream
9
10
11 class FilmOnHLS(HLSStream):
12 __shortname__ = "hls-filmon"
13
14 def __init__(self, session_, channel=None, vod_id=None, quality="high", **args):
15 super(FilmOnHLS, self).__init__(session_, None, **args)
16 self.logger = self.session.logger.new_module("stream.hls-filmon")
17 self.channel = channel
18 self.vod_id = vod_id
19 if self.channel is None and self.vod_id is None:
20 raise ValueError("channel or vod_id must be set")
21 self.quality = quality
22 self.api = FilmOnAPI()
23 self._url = None
24 self.watch_timeout = 0
25
26 def _get_stream_data(self):
27 if self.channel:
28 self.logger.debug("Reloading FilmOn channel playlist: {0}", self.channel)
29 data = self.api.channel(self.channel)
30 for stream in data["streams"]:
31 yield stream
32 elif self.vod_id:
33 self.logger.debug("Reloading FilmOn VOD playlist: {0}", self.vod_id)
34 data = self.api.vod(self.vod_id)
35 for _, stream in data["streams"].items():
36 yield stream
37
38 @property
39 def url(self):
40 # If the watch timeout has passed then refresh the playlist from the API
41 if int(time.time()) >= self.watch_timeout:
42 for stream in self._get_stream_data():
43 if stream["quality"] == self.quality:
44 self.watch_timeout = int(time.time()) + stream["watch-timeout"]
45 self._url = stream["url"]
46 return self._url
47 raise StreamError("cannot refresh FilmOn HLS Stream playlist")
48 else:
49 return self._url
50
51 def to_url(self):
52 url = self.url
53 expires = self.watch_timeout - time.time()
54 if expires < 0:
55 raise TypeError("Stream has expired and cannot be converted to a URL")
56 return url
57
58
59 class FilmOnAPI(object):
60 channel_url = "http://www.filmon.com/api-v2/channel/{0}?protocol=hls"
61 vod_url = "http://www.filmon.com/vod/info/{0}"
62
63 stream_schema = {
64 "quality": validate.text,
65 "url": validate.url(),
66 "watch-timeout": int
67 }
68 api_schema = validate.Schema(
69 {
70 "data": {
71 "streams": validate.any(
72 {validate.text: stream_schema},
73 [stream_schema]
74 )
75 }
76 },
77 validate.get("data")
78 )
79
80 def channel(self, channel):
81 res = http.get(self.channel_url.format(channel))
82 return http.json(res, schema=self.api_schema)
83
84 def vod(self, vod_id):
85 res = http.get(self.vod_url.format(vod_id))
86 return http.json(res, schema=self.api_schema)
87
88
89 class Filmon(Plugin):
90 url_re = re.compile(r"""https?://(?:\w+\.)?filmon.(?:tv|com)/
91 (?:
92 (tv|channel)/(?P<channel>[^/]+)|
93 vod/view/(?P<vod_id>\d+)-|
94 group/
95 )
96 """, re.VERBOSE)
97
98 _channel_id_re = re.compile(r'channel_id\s*?=\s*"(\d+)"')
99 _channel_id_schema = validate.Schema(
100 validate.transform(_channel_id_re.search),
101 validate.any(None, validate.get(1))
102 )
103
104 quality_weights = {
105 "high": 720,
106 "low": 480
107 }
108
109 def __init__(self, url):
110 super(Filmon, self).__init__(url)
111 self.api = FilmOnAPI()
112
113 @classmethod
114 def can_handle_url(cls, url):
115 return cls.url_re.match(url) is not None
116
117 @classmethod
118 def stream_weight(cls, key):
119 weight = cls.quality_weights.get(key)
120 if weight:
121 return weight, "filmon"
122
123 return Plugin.stream_weight(key)
124
125 def _get_streams(self):
126 url_m = self.url_re.match(self.url)
127
128 channel = url_m and url_m.group("channel")
129 vod_id = url_m and url_m.group("vod_id")
130
131 if vod_id:
132 data = self.api.vod(vod_id)
133 for _, stream in data["streams"].items():
134 yield stream["quality"], FilmOnHLS(self.session, vod_id=vod_id, quality=stream["quality"])
135
136 else:
137 if not channel:
138 channel = http.get(self.url, schema=self._channel_id_schema)
139 data = self.api.channel(channel)
140 for stream in data["streams"]:
141 yield stream["quality"], FilmOnHLS(self.session, channel=channel, quality=stream["quality"])
142
143
144 __plugin__ = Filmon
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/filmon.py b/src/streamlink/plugins/filmon.py
--- a/src/streamlink/plugins/filmon.py
+++ b/src/streamlink/plugins/filmon.py
@@ -89,16 +89,17 @@
class Filmon(Plugin):
url_re = re.compile(r"""https?://(?:\w+\.)?filmon.(?:tv|com)/
(?:
- (tv|channel)/(?P<channel>[^/]+)|
+ tv/|
+ channel/(?P<channel>\d+)|
vod/view/(?P<vod_id>\d+)-|
group/
)
""", re.VERBOSE)
- _channel_id_re = re.compile(r'channel_id\s*?=\s*"(\d+)"')
+ _channel_id_re = re.compile(r"""channel_id\s*=\s*(?P<quote>['"]?)(?P<value>\d+)(?P=quote)""")
_channel_id_schema = validate.Schema(
validate.transform(_channel_id_re.search),
- validate.any(None, validate.get(1))
+ validate.any(None, validate.get("value"))
)
quality_weights = {
@@ -136,6 +137,7 @@
else:
if not channel:
channel = http.get(self.url, schema=self._channel_id_schema)
+ self.logger.debug("Found channel ID: {0}", channel)
data = self.api.channel(channel)
for stream in data["streams"]:
yield stream["quality"], FilmOnHLS(self.session, channel=channel, quality=stream["quality"])
|
{"golden_diff": "diff --git a/src/streamlink/plugins/filmon.py b/src/streamlink/plugins/filmon.py\n--- a/src/streamlink/plugins/filmon.py\n+++ b/src/streamlink/plugins/filmon.py\n@@ -89,16 +89,17 @@\n class Filmon(Plugin):\n url_re = re.compile(r\"\"\"https?://(?:\\w+\\.)?filmon.(?:tv|com)/\n (?:\n- (tv|channel)/(?P<channel>[^/]+)|\n+ tv/|\n+ channel/(?P<channel>\\d+)|\n vod/view/(?P<vod_id>\\d+)-|\n group/\n )\n \"\"\", re.VERBOSE)\n \n- _channel_id_re = re.compile(r'channel_id\\s*?=\\s*\"(\\d+)\"')\n+ _channel_id_re = re.compile(r\"\"\"channel_id\\s*=\\s*(?P<quote>['\"]?)(?P<value>\\d+)(?P=quote)\"\"\")\n _channel_id_schema = validate.Schema(\n validate.transform(_channel_id_re.search),\n- validate.any(None, validate.get(1))\n+ validate.any(None, validate.get(\"value\"))\n )\n \n quality_weights = {\n@@ -136,6 +137,7 @@\n else:\n if not channel:\n channel = http.get(self.url, schema=self._channel_id_schema)\n+ self.logger.debug(\"Found channel ID: {0}\", channel)\n data = self.api.channel(channel)\n for stream in data[\"streams\"]:\n yield stream[\"quality\"], FilmOnHLS(self.session, channel=channel, quality=stream[\"quality\"])\n", "issue": "FILMON HLS Problem with Sat1 \n- [x] This is a bug report.\r\n\r\n### Description\r\n\r\nThe Channel Sat1 on Filmon can not be played with Streamlink\r\n\r\n### Expected / Actual behavior\r\n\r\nNormally all streams from FilmOn can be played via the streamlink. The channel Sat1 is played via the HLS protocol\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\n\r\nhttp://www.filmon.com/tv/sat-1-schweiz\r\n\r\n### Logs\r\n\r\n```\r\n127.0.0.1 - - [25/Mar/2018 21:23:39] \"GET /http://www.filmon.com/tv/rts-deux HTTP/1.1\" 200 -\r\n127.0.0.1 - - [25/Mar/2018 21:23:39] URL: http://www.filmon.com/tv/rts-deux Quality: best\r\n[streamlinksrv][info] Streams:\r\n[u'low', u'high', 'worst', 'best']\r\n127.0.0.1 - - [25/Mar/2018 21:23:45] \"GET /http://www.filmon.com/tv/sat-1-schweiz HTTP/1.1\" 200 -\r\n127.0.0.1 - - [25/Mar/2018 21:23:45] URL: http://www.filmon.com/tv/sat-1-schweiz Quality: best\r\n[streamlinksrv][error] Plugin error: Unable to open URL: http://www.filmon.com/api-v2/channel/sat-1-schweiz?protocol=hls (404 Client Error: Not Found for url: http://www.filmon.com/api-v2/channel/sat-1-schweiz?protocol=hls)\r\n[streamlinksrv][info] Closing currently open stream...\r\n[streamlinksrv][error] Got exception: End Of Data!\r\n```\r\n\r\n\n", "before_files": [{"content": "import re\n\nimport time\n\nfrom streamlink import StreamError\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http, validate\nfrom streamlink.stream import HLSStream\n\n\nclass FilmOnHLS(HLSStream):\n __shortname__ = \"hls-filmon\"\n\n def __init__(self, session_, channel=None, vod_id=None, quality=\"high\", **args):\n super(FilmOnHLS, self).__init__(session_, None, **args)\n self.logger = self.session.logger.new_module(\"stream.hls-filmon\")\n self.channel = channel\n self.vod_id = vod_id\n if self.channel is None and self.vod_id is None:\n raise ValueError(\"channel or vod_id must be set\")\n self.quality = quality\n self.api = FilmOnAPI()\n self._url = None\n self.watch_timeout = 0\n\n def _get_stream_data(self):\n if self.channel:\n self.logger.debug(\"Reloading FilmOn channel playlist: {0}\", self.channel)\n data = self.api.channel(self.channel)\n for stream in data[\"streams\"]:\n yield stream\n elif self.vod_id:\n self.logger.debug(\"Reloading FilmOn VOD playlist: {0}\", self.vod_id)\n data = self.api.vod(self.vod_id)\n for _, stream in data[\"streams\"].items():\n yield stream\n\n @property\n def url(self):\n # If the watch timeout has passed then refresh the playlist from the API\n if int(time.time()) >= self.watch_timeout:\n for stream in self._get_stream_data():\n if stream[\"quality\"] == self.quality:\n self.watch_timeout = int(time.time()) + stream[\"watch-timeout\"]\n self._url = stream[\"url\"]\n return self._url\n raise StreamError(\"cannot refresh FilmOn HLS Stream playlist\")\n else:\n return self._url\n\n def to_url(self):\n url = self.url\n expires = self.watch_timeout - time.time()\n if expires < 0:\n raise TypeError(\"Stream has expired and cannot be converted to a URL\")\n return url\n\n\nclass FilmOnAPI(object):\n channel_url = \"http://www.filmon.com/api-v2/channel/{0}?protocol=hls\"\n vod_url = \"http://www.filmon.com/vod/info/{0}\"\n\n stream_schema = {\n \"quality\": validate.text,\n \"url\": validate.url(),\n \"watch-timeout\": int\n }\n api_schema = validate.Schema(\n {\n \"data\": {\n \"streams\": validate.any(\n {validate.text: stream_schema},\n [stream_schema]\n )\n }\n },\n validate.get(\"data\")\n )\n\n def channel(self, channel):\n res = http.get(self.channel_url.format(channel))\n return http.json(res, schema=self.api_schema)\n\n def vod(self, vod_id):\n res = http.get(self.vod_url.format(vod_id))\n return http.json(res, schema=self.api_schema)\n\n\nclass Filmon(Plugin):\n url_re = re.compile(r\"\"\"https?://(?:\\w+\\.)?filmon.(?:tv|com)/\n (?:\n (tv|channel)/(?P<channel>[^/]+)|\n vod/view/(?P<vod_id>\\d+)-|\n group/\n )\n \"\"\", re.VERBOSE)\n\n _channel_id_re = re.compile(r'channel_id\\s*?=\\s*\"(\\d+)\"')\n _channel_id_schema = validate.Schema(\n validate.transform(_channel_id_re.search),\n validate.any(None, validate.get(1))\n )\n\n quality_weights = {\n \"high\": 720,\n \"low\": 480\n }\n\n def __init__(self, url):\n super(Filmon, self).__init__(url)\n self.api = FilmOnAPI()\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n @classmethod\n def stream_weight(cls, key):\n weight = cls.quality_weights.get(key)\n if weight:\n return weight, \"filmon\"\n\n return Plugin.stream_weight(key)\n\n def _get_streams(self):\n url_m = self.url_re.match(self.url)\n\n channel = url_m and url_m.group(\"channel\")\n vod_id = url_m and url_m.group(\"vod_id\")\n\n if vod_id:\n data = self.api.vod(vod_id)\n for _, stream in data[\"streams\"].items():\n yield stream[\"quality\"], FilmOnHLS(self.session, vod_id=vod_id, quality=stream[\"quality\"])\n\n else:\n if not channel:\n channel = http.get(self.url, schema=self._channel_id_schema)\n data = self.api.channel(channel)\n for stream in data[\"streams\"]:\n yield stream[\"quality\"], FilmOnHLS(self.session, channel=channel, quality=stream[\"quality\"])\n\n\n__plugin__ = Filmon\n", "path": "src/streamlink/plugins/filmon.py"}], "after_files": [{"content": "import re\n\nimport time\n\nfrom streamlink import StreamError\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http, validate\nfrom streamlink.stream import HLSStream\n\n\nclass FilmOnHLS(HLSStream):\n __shortname__ = \"hls-filmon\"\n\n def __init__(self, session_, channel=None, vod_id=None, quality=\"high\", **args):\n super(FilmOnHLS, self).__init__(session_, None, **args)\n self.logger = self.session.logger.new_module(\"stream.hls-filmon\")\n self.channel = channel\n self.vod_id = vod_id\n if self.channel is None and self.vod_id is None:\n raise ValueError(\"channel or vod_id must be set\")\n self.quality = quality\n self.api = FilmOnAPI()\n self._url = None\n self.watch_timeout = 0\n\n def _get_stream_data(self):\n if self.channel:\n self.logger.debug(\"Reloading FilmOn channel playlist: {0}\", self.channel)\n data = self.api.channel(self.channel)\n for stream in data[\"streams\"]:\n yield stream\n elif self.vod_id:\n self.logger.debug(\"Reloading FilmOn VOD playlist: {0}\", self.vod_id)\n data = self.api.vod(self.vod_id)\n for _, stream in data[\"streams\"].items():\n yield stream\n\n @property\n def url(self):\n # If the watch timeout has passed then refresh the playlist from the API\n if int(time.time()) >= self.watch_timeout:\n for stream in self._get_stream_data():\n if stream[\"quality\"] == self.quality:\n self.watch_timeout = int(time.time()) + stream[\"watch-timeout\"]\n self._url = stream[\"url\"]\n return self._url\n raise StreamError(\"cannot refresh FilmOn HLS Stream playlist\")\n else:\n return self._url\n\n def to_url(self):\n url = self.url\n expires = self.watch_timeout - time.time()\n if expires < 0:\n raise TypeError(\"Stream has expired and cannot be converted to a URL\")\n return url\n\n\nclass FilmOnAPI(object):\n channel_url = \"http://www.filmon.com/api-v2/channel/{0}?protocol=hls\"\n vod_url = \"http://www.filmon.com/vod/info/{0}\"\n\n stream_schema = {\n \"quality\": validate.text,\n \"url\": validate.url(),\n \"watch-timeout\": int\n }\n api_schema = validate.Schema(\n {\n \"data\": {\n \"streams\": validate.any(\n {validate.text: stream_schema},\n [stream_schema]\n )\n }\n },\n validate.get(\"data\")\n )\n\n def channel(self, channel):\n res = http.get(self.channel_url.format(channel))\n return http.json(res, schema=self.api_schema)\n\n def vod(self, vod_id):\n res = http.get(self.vod_url.format(vod_id))\n return http.json(res, schema=self.api_schema)\n\n\nclass Filmon(Plugin):\n url_re = re.compile(r\"\"\"https?://(?:\\w+\\.)?filmon.(?:tv|com)/\n (?:\n tv/|\n channel/(?P<channel>\\d+)|\n vod/view/(?P<vod_id>\\d+)-|\n group/\n )\n \"\"\", re.VERBOSE)\n\n _channel_id_re = re.compile(r\"\"\"channel_id\\s*=\\s*(?P<quote>['\"]?)(?P<value>\\d+)(?P=quote)\"\"\")\n _channel_id_schema = validate.Schema(\n validate.transform(_channel_id_re.search),\n validate.any(None, validate.get(\"value\"))\n )\n\n quality_weights = {\n \"high\": 720,\n \"low\": 480\n }\n\n def __init__(self, url):\n super(Filmon, self).__init__(url)\n self.api = FilmOnAPI()\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n @classmethod\n def stream_weight(cls, key):\n weight = cls.quality_weights.get(key)\n if weight:\n return weight, \"filmon\"\n\n return Plugin.stream_weight(key)\n\n def _get_streams(self):\n url_m = self.url_re.match(self.url)\n\n channel = url_m and url_m.group(\"channel\")\n vod_id = url_m and url_m.group(\"vod_id\")\n\n if vod_id:\n data = self.api.vod(vod_id)\n for _, stream in data[\"streams\"].items():\n yield stream[\"quality\"], FilmOnHLS(self.session, vod_id=vod_id, quality=stream[\"quality\"])\n\n else:\n if not channel:\n channel = http.get(self.url, schema=self._channel_id_schema)\n self.logger.debug(\"Found channel ID: {0}\", channel)\n data = self.api.channel(channel)\n for stream in data[\"streams\"]:\n yield stream[\"quality\"], FilmOnHLS(self.session, channel=channel, quality=stream[\"quality\"])\n\n\n__plugin__ = Filmon\n", "path": "src/streamlink/plugins/filmon.py"}]}
| 2,127 | 356 |
gh_patches_debug_5874
|
rasdani/github-patches
|
git_diff
|
python-poetry__poetry-1862
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Document the --no-root option
<!--
Hi there! Thank you for wanting to make Poetry better.
Before you submit this; let's make sure of a few things.
Please make sure the following boxes are ticked if they are correct.
If not, please try and fulfill these first.
-->
<!-- Checked checkbox should look like this: [x] -->
- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] I have searched the [documentation](https://python-poetry.org/docs/) and believe that my question is not covered.
## Feature Request
<!-- Now feel free to write your idea for improvement. Thanks again 🙌 ❤️ -->
The `--no-root` option described in https://github.com/python-poetry/poetry/issues/1525 works fine for installation. Unfortunately I found it only when looking for duplicate issues before raising this. `poetry help install` does not describe that option.
Please add it to the `help install` output.
Document the --no-root option
<!--
Hi there! Thank you for wanting to make Poetry better.
Before you submit this; let's make sure of a few things.
Please make sure the following boxes are ticked if they are correct.
If not, please try and fulfill these first.
-->
<!-- Checked checkbox should look like this: [x] -->
- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] I have searched the [documentation](https://python-poetry.org/docs/) and believe that my question is not covered.
## Feature Request
<!-- Now feel free to write your idea for improvement. Thanks again 🙌 ❤️ -->
The `--no-root` option described in https://github.com/python-poetry/poetry/issues/1525 works fine for installation. Unfortunately I found it only when looking for duplicate issues before raising this. `poetry help install` does not describe that option.
Please add it to the `help install` output.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `poetry/console/commands/install.py`
Content:
```
1 from cleo import option
2
3 from .env_command import EnvCommand
4
5
6 class InstallCommand(EnvCommand):
7
8 name = "install"
9 description = "Installs the project dependencies."
10
11 options = [
12 option("no-dev", None, "Do not install the development dependencies."),
13 option(
14 "no-root", None, "Do not install the root package (the current project)."
15 ),
16 option(
17 "dry-run",
18 None,
19 "Output the operations but do not execute anything "
20 "(implicitly enables --verbose).",
21 ),
22 option(
23 "extras",
24 "E",
25 "Extra sets of dependencies to install.",
26 flag=False,
27 multiple=True,
28 ),
29 ]
30
31 help = """The <info>install</info> command reads the <comment>poetry.lock</> file from
32 the current directory, processes it, and downloads and installs all the
33 libraries and dependencies outlined in that file. If the file does not
34 exist it will look for <comment>pyproject.toml</> and do the same.
35
36 <info>poetry install</info>
37 """
38
39 _loggers = ["poetry.repositories.pypi_repository"]
40
41 def handle(self):
42 from clikit.io import NullIO
43 from poetry.installation.installer import Installer
44 from poetry.masonry.builders import EditableBuilder
45 from poetry.masonry.utils.module import ModuleOrPackageNotFound
46
47 installer = Installer(
48 self.io, self.env, self.poetry.package, self.poetry.locker, self.poetry.pool
49 )
50
51 extras = []
52 for extra in self.option("extras"):
53 if " " in extra:
54 extras += [e.strip() for e in extra.split(" ")]
55 else:
56 extras.append(extra)
57
58 installer.extras(extras)
59 installer.dev_mode(not self.option("no-dev"))
60 installer.dry_run(self.option("dry-run"))
61 installer.verbose(self.option("verbose"))
62
63 return_code = installer.run()
64
65 if return_code != 0:
66 return return_code
67
68 if self.option("no-root"):
69 return 0
70
71 try:
72 builder = EditableBuilder(self.poetry, self._env, NullIO())
73 except ModuleOrPackageNotFound:
74 # This is likely due to the fact that the project is an application
75 # not following the structure expected by Poetry
76 # If this is a true error it will be picked up later by build anyway.
77 return 0
78
79 self.line(
80 " - Installing <c1>{}</c1> (<b>{}</b>)".format(
81 self.poetry.package.pretty_name, self.poetry.package.pretty_version
82 )
83 )
84
85 if self.option("dry-run"):
86 return 0
87
88 builder.build()
89
90 return 0
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/poetry/console/commands/install.py b/poetry/console/commands/install.py
--- a/poetry/console/commands/install.py
+++ b/poetry/console/commands/install.py
@@ -34,6 +34,12 @@
exist it will look for <comment>pyproject.toml</> and do the same.
<info>poetry install</info>
+
+By default, the above command will also install the current project. To install only the
+dependencies and not including the current project, run the command with the
+<info>--no-root</info> option like below:
+
+<info> poetry install --no-root</info>
"""
_loggers = ["poetry.repositories.pypi_repository"]
|
{"golden_diff": "diff --git a/poetry/console/commands/install.py b/poetry/console/commands/install.py\n--- a/poetry/console/commands/install.py\n+++ b/poetry/console/commands/install.py\n@@ -34,6 +34,12 @@\n exist it will look for <comment>pyproject.toml</> and do the same.\n \n <info>poetry install</info>\n+\n+By default, the above command will also install the current project. To install only the\n+dependencies and not including the current project, run the command with the\n+<info>--no-root</info> option like below:\n+\n+<info> poetry install --no-root</info>\n \"\"\"\n \n _loggers = [\"poetry.repositories.pypi_repository\"]\n", "issue": "Document the --no-root option\n<!--\r\n Hi there! Thank you for wanting to make Poetry better.\r\n\r\n Before you submit this; let's make sure of a few things.\r\n Please make sure the following boxes are ticked if they are correct.\r\n If not, please try and fulfill these first.\r\n-->\r\n\r\n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] I have searched the [documentation](https://python-poetry.org/docs/) and believe that my question is not covered.\r\n\r\n## Feature Request\r\n<!-- Now feel free to write your idea for improvement. Thanks again \ud83d\ude4c \u2764\ufe0f -->\r\nThe `--no-root` option described in https://github.com/python-poetry/poetry/issues/1525 works fine for installation. Unfortunately I found it only when looking for duplicate issues before raising this. `poetry help install` does not describe that option.\r\n\r\nPlease add it to the `help install` output.\nDocument the --no-root option\n<!--\r\n Hi there! Thank you for wanting to make Poetry better.\r\n\r\n Before you submit this; let's make sure of a few things.\r\n Please make sure the following boxes are ticked if they are correct.\r\n If not, please try and fulfill these first.\r\n-->\r\n\r\n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] I have searched the [documentation](https://python-poetry.org/docs/) and believe that my question is not covered.\r\n\r\n## Feature Request\r\n<!-- Now feel free to write your idea for improvement. Thanks again \ud83d\ude4c \u2764\ufe0f -->\r\nThe `--no-root` option described in https://github.com/python-poetry/poetry/issues/1525 works fine for installation. Unfortunately I found it only when looking for duplicate issues before raising this. `poetry help install` does not describe that option.\r\n\r\nPlease add it to the `help install` output.\n", "before_files": [{"content": "from cleo import option\n\nfrom .env_command import EnvCommand\n\n\nclass InstallCommand(EnvCommand):\n\n name = \"install\"\n description = \"Installs the project dependencies.\"\n\n options = [\n option(\"no-dev\", None, \"Do not install the development dependencies.\"),\n option(\n \"no-root\", None, \"Do not install the root package (the current project).\"\n ),\n option(\n \"dry-run\",\n None,\n \"Output the operations but do not execute anything \"\n \"(implicitly enables --verbose).\",\n ),\n option(\n \"extras\",\n \"E\",\n \"Extra sets of dependencies to install.\",\n flag=False,\n multiple=True,\n ),\n ]\n\n help = \"\"\"The <info>install</info> command reads the <comment>poetry.lock</> file from\nthe current directory, processes it, and downloads and installs all the\nlibraries and dependencies outlined in that file. If the file does not\nexist it will look for <comment>pyproject.toml</> and do the same.\n\n<info>poetry install</info>\n\"\"\"\n\n _loggers = [\"poetry.repositories.pypi_repository\"]\n\n def handle(self):\n from clikit.io import NullIO\n from poetry.installation.installer import Installer\n from poetry.masonry.builders import EditableBuilder\n from poetry.masonry.utils.module import ModuleOrPackageNotFound\n\n installer = Installer(\n self.io, self.env, self.poetry.package, self.poetry.locker, self.poetry.pool\n )\n\n extras = []\n for extra in self.option(\"extras\"):\n if \" \" in extra:\n extras += [e.strip() for e in extra.split(\" \")]\n else:\n extras.append(extra)\n\n installer.extras(extras)\n installer.dev_mode(not self.option(\"no-dev\"))\n installer.dry_run(self.option(\"dry-run\"))\n installer.verbose(self.option(\"verbose\"))\n\n return_code = installer.run()\n\n if return_code != 0:\n return return_code\n\n if self.option(\"no-root\"):\n return 0\n\n try:\n builder = EditableBuilder(self.poetry, self._env, NullIO())\n except ModuleOrPackageNotFound:\n # This is likely due to the fact that the project is an application\n # not following the structure expected by Poetry\n # If this is a true error it will be picked up later by build anyway.\n return 0\n\n self.line(\n \" - Installing <c1>{}</c1> (<b>{}</b>)\".format(\n self.poetry.package.pretty_name, self.poetry.package.pretty_version\n )\n )\n\n if self.option(\"dry-run\"):\n return 0\n\n builder.build()\n\n return 0\n", "path": "poetry/console/commands/install.py"}], "after_files": [{"content": "from cleo import option\n\nfrom .env_command import EnvCommand\n\n\nclass InstallCommand(EnvCommand):\n\n name = \"install\"\n description = \"Installs the project dependencies.\"\n\n options = [\n option(\"no-dev\", None, \"Do not install the development dependencies.\"),\n option(\n \"no-root\", None, \"Do not install the root package (the current project).\"\n ),\n option(\n \"dry-run\",\n None,\n \"Output the operations but do not execute anything \"\n \"(implicitly enables --verbose).\",\n ),\n option(\n \"extras\",\n \"E\",\n \"Extra sets of dependencies to install.\",\n flag=False,\n multiple=True,\n ),\n ]\n\n help = \"\"\"The <info>install</info> command reads the <comment>poetry.lock</> file from\nthe current directory, processes it, and downloads and installs all the\nlibraries and dependencies outlined in that file. If the file does not\nexist it will look for <comment>pyproject.toml</> and do the same.\n\n<info>poetry install</info>\n\nBy default, the above command will also install the current project. To install only the\ndependencies and not including the current project, run the command with the\n<info>--no-root</info> option like below:\n\n<info> poetry install --no-root</info>\n\"\"\"\n\n _loggers = [\"poetry.repositories.pypi_repository\"]\n\n def handle(self):\n from clikit.io import NullIO\n from poetry.installation.installer import Installer\n from poetry.masonry.builders import EditableBuilder\n from poetry.masonry.utils.module import ModuleOrPackageNotFound\n\n installer = Installer(\n self.io, self.env, self.poetry.package, self.poetry.locker, self.poetry.pool\n )\n\n extras = []\n for extra in self.option(\"extras\"):\n if \" \" in extra:\n extras += [e.strip() for e in extra.split(\" \")]\n else:\n extras.append(extra)\n\n installer.extras(extras)\n installer.dev_mode(not self.option(\"no-dev\"))\n installer.dry_run(self.option(\"dry-run\"))\n installer.verbose(self.option(\"verbose\"))\n\n return_code = installer.run()\n\n if return_code != 0:\n return return_code\n\n if self.option(\"no-root\"):\n return 0\n\n try:\n builder = EditableBuilder(self.poetry, self._env, NullIO())\n except ModuleOrPackageNotFound:\n # This is likely due to the fact that the project is an application\n # not following the structure expected by Poetry\n # If this is a true error it will be picked up later by build anyway.\n return 0\n\n self.line(\n \" - Installing <c1>{}</c1> (<b>{}</b>)\".format(\n self.poetry.package.pretty_name, self.poetry.package.pretty_version\n )\n )\n\n if self.option(\"dry-run\"):\n return 0\n\n builder.build()\n\n return 0\n", "path": "poetry/console/commands/install.py"}]}
| 1,491 | 162 |
gh_patches_debug_24679
|
rasdani/github-patches
|
git_diff
|
searx__searx-307
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Flick engine is broken
The flick engine seems to be broken : no result
In the console :
```
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): secure.flickr.com
DEBUG:requests.packages.urllib3.connectionpool:"GET /search/?text=test&page=1 HTTP/1.1" 302 20
DEBUG:searx.search:flickr redirect on: <Response [302]>
DEBUG:requests.packages.urllib3.connectionpool:"GET /search/?rb=1&text=test&page=1 HTTP/1.1" 302 91
DEBUG:searx.search:flickr redirect on: <Response [302]>
DEBUG:requests.packages.urllib3.connectionpool:"GET /browser/upgrade/?continue=/search/?rb=1&text=test&page=1 HTTP/1.1" 200 None
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/flickr_noapi.py`
Content:
```
1 #!/usr/bin/env python
2
3 # Flickr (Images)
4 #
5 # @website https://www.flickr.com
6 # @provide-api yes (https://secure.flickr.com/services/api/flickr.photos.search.html)
7 #
8 # @using-api no
9 # @results HTML
10 # @stable no
11 # @parse url, title, thumbnail, img_src
12
13 from urllib import urlencode
14 from json import loads
15 import re
16 from searx.engines import logger
17
18
19 logger = logger.getChild('flickr-noapi')
20
21 categories = ['images']
22
23 url = 'https://secure.flickr.com/'
24 search_url = url + 'search/?{query}&page={page}'
25 photo_url = 'https://www.flickr.com/photos/{userid}/{photoid}'
26 regex = re.compile(r"\"search-photos-models\",\"photos\":(.*}),\"totalItems\":", re.DOTALL)
27 image_sizes = ('o', 'k', 'h', 'b', 'c', 'z', 'n', 'm', 't', 'q', 's')
28
29 paging = True
30
31
32 def build_flickr_url(user_id, photo_id):
33 return photo_url.format(userid=user_id, photoid=photo_id)
34
35
36 def request(query, params):
37 params['url'] = search_url.format(query=urlencode({'text': query}),
38 page=params['pageno'])
39 return params
40
41
42 def response(resp):
43 results = []
44
45 matches = regex.search(resp.text)
46
47 if matches is None:
48 return results
49
50 match = matches.group(1)
51 search_results = loads(match)
52
53 if '_data' not in search_results:
54 return []
55
56 photos = search_results['_data']
57
58 for photo in photos:
59
60 # In paged configuration, the first pages' photos
61 # are represented by a None object
62 if photo is None:
63 continue
64
65 img_src = None
66 # From the biggest to the lowest format
67 for image_size in image_sizes:
68 if image_size in photo['sizes']:
69 img_src = photo['sizes'][image_size]['url']
70 break
71
72 if not img_src:
73 logger.debug('cannot find valid image size: {0}'.format(repr(photo)))
74 continue
75
76 if 'id' not in photo['owner']:
77 continue
78
79 # For a bigger thumbnail, keep only the url_z, not the url_n
80 if 'n' in photo['sizes']:
81 thumbnail_src = photo['sizes']['n']['url']
82 elif 'z' in photo['sizes']:
83 thumbnail_src = photo['sizes']['z']['url']
84 else:
85 thumbnail_src = img_src
86
87 url = build_flickr_url(photo['owner']['id'], photo['id'])
88
89 title = photo.get('title', '')
90
91 content = '<span class="photo-author">' +\
92 photo['owner']['username'] +\
93 '</span><br />'
94
95 if 'description' in photo:
96 content = content +\
97 '<span class="description">' +\
98 photo['description'] +\
99 '</span>'
100
101 # append result
102 results.append({'url': url,
103 'title': title,
104 'img_src': img_src,
105 'thumbnail_src': thumbnail_src,
106 'content': content,
107 'template': 'images.html'})
108
109 return results
110
```
Path: `searx/utils.py`
Content:
```
1 # import htmlentitydefs
2 import locale
3 import dateutil.parser
4 import cStringIO
5 import csv
6 import os
7 import re
8
9 from codecs import getincrementalencoder
10 from HTMLParser import HTMLParser
11 from random import choice
12
13 from searx.version import VERSION_STRING
14 from searx import settings
15 from searx import logger
16
17
18 logger = logger.getChild('utils')
19
20 ua_versions = ('31.0',
21 '32.0',
22 '33.0',
23 '34.0',
24 '35.0')
25
26 ua_os = ('Windows NT 6.3; WOW64',
27 'X11; Linux x86_64',
28 'X11; Linux x86')
29
30 ua = "Mozilla/5.0 ({os}) Gecko/20100101 Firefox/{version}"
31
32 blocked_tags = ('script',
33 'style')
34
35
36 def gen_useragent():
37 # TODO
38 return ua.format(os=choice(ua_os), version=choice(ua_versions))
39
40
41 def searx_useragent():
42 return 'searx/{searx_version} {suffix}'.format(
43 searx_version=VERSION_STRING,
44 suffix=settings['server'].get('useragent_suffix', ''))
45
46
47 def highlight_content(content, query):
48
49 if not content:
50 return None
51 # ignoring html contents
52 # TODO better html content detection
53 if content.find('<') != -1:
54 return content
55
56 query = query.decode('utf-8')
57 if content.lower().find(query.lower()) > -1:
58 query_regex = u'({0})'.format(re.escape(query))
59 content = re.sub(query_regex, '<span class="highlight">\\1</span>',
60 content, flags=re.I | re.U)
61 else:
62 regex_parts = []
63 for chunk in query.split():
64 if len(chunk) == 1:
65 regex_parts.append(u'\W+{0}\W+'.format(re.escape(chunk)))
66 else:
67 regex_parts.append(u'{0}'.format(re.escape(chunk)))
68 query_regex = u'({0})'.format('|'.join(regex_parts))
69 content = re.sub(query_regex, '<span class="highlight">\\1</span>',
70 content, flags=re.I | re.U)
71
72 return content
73
74
75 class HTMLTextExtractor(HTMLParser):
76 def __init__(self):
77 HTMLParser.__init__(self)
78 self.result = []
79 self.tags = []
80
81 def handle_starttag(self, tag, attrs):
82 self.tags.append(tag)
83
84 def handle_endtag(self, tag):
85 if not self.tags:
86 return
87
88 if tag != self.tags[-1]:
89 raise Exception("invalid html")
90
91 self.tags.pop()
92
93 def is_valid_tag(self):
94 return not self.tags or self.tags[-1] not in blocked_tags
95
96 def handle_data(self, d):
97 if not self.is_valid_tag():
98 return
99 self.result.append(d)
100
101 def handle_charref(self, number):
102 if not self.is_valid_tag():
103 return
104 if number[0] in (u'x', u'X'):
105 codepoint = int(number[1:], 16)
106 else:
107 codepoint = int(number)
108 self.result.append(unichr(codepoint))
109
110 def handle_entityref(self, name):
111 if not self.is_valid_tag():
112 return
113 # codepoint = htmlentitydefs.name2codepoint[name]
114 # self.result.append(unichr(codepoint))
115 self.result.append(name)
116
117 def get_text(self):
118 return u''.join(self.result).strip()
119
120
121 def html_to_text(html):
122 html = html.replace('\n', ' ')
123 html = ' '.join(html.split())
124 s = HTMLTextExtractor()
125 s.feed(html)
126 return s.get_text()
127
128
129 class UnicodeWriter:
130 """
131 A CSV writer which will write rows to CSV file "f",
132 which is encoded in the given encoding.
133 """
134
135 def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
136 # Redirect output to a queue
137 self.queue = cStringIO.StringIO()
138 self.writer = csv.writer(self.queue, dialect=dialect, **kwds)
139 self.stream = f
140 self.encoder = getincrementalencoder(encoding)()
141
142 def writerow(self, row):
143 unicode_row = []
144 for col in row:
145 if type(col) == str or type(col) == unicode:
146 unicode_row.append(col.encode('utf-8').strip())
147 else:
148 unicode_row.append(col)
149 self.writer.writerow(unicode_row)
150 # Fetch UTF-8 output from the queue ...
151 data = self.queue.getvalue()
152 data = data.decode("utf-8")
153 # ... and reencode it into the target encoding
154 data = self.encoder.encode(data)
155 # write to the target stream
156 self.stream.write(data)
157 # empty queue
158 self.queue.truncate(0)
159
160 def writerows(self, rows):
161 for row in rows:
162 self.writerow(row)
163
164
165 def get_themes(root):
166 """Returns available themes list."""
167
168 static_path = os.path.join(root, 'static')
169 templates_path = os.path.join(root, 'templates')
170
171 themes = os.listdir(os.path.join(static_path, 'themes'))
172 return static_path, templates_path, themes
173
174
175 def get_static_files(base_path):
176 base_path = os.path.join(base_path, 'static')
177 static_files = set()
178 base_path_length = len(base_path) + 1
179 for directory, _, files in os.walk(base_path):
180 for filename in files:
181 f = os.path.join(directory[base_path_length:], filename)
182 static_files.add(f)
183 return static_files
184
185
186 def get_result_templates(base_path):
187 base_path = os.path.join(base_path, 'templates')
188 result_templates = set()
189 base_path_length = len(base_path) + 1
190 for directory, _, files in os.walk(base_path):
191 if directory.endswith('result_templates'):
192 for filename in files:
193 f = os.path.join(directory[base_path_length:], filename)
194 result_templates.add(f)
195 return result_templates
196
197
198 def format_date_by_locale(date_string, locale_string):
199 # strftime works only on dates after 1900
200 parsed_date = dateutil.parser.parse(date_string)
201 if parsed_date.year <= 1900:
202 return parsed_date.isoformat().split('T')[0]
203
204 orig_locale = locale.getlocale()[0]
205 try:
206 locale.setlocale(locale.LC_ALL, locale_string)
207 except:
208 logger.warning('cannot set locale: {0}'.format(locale_string))
209 formatted_date = parsed_date.strftime(locale.nl_langinfo(locale.D_FMT))
210 try:
211 locale.setlocale(locale.LC_ALL, orig_locale)
212 except:
213 logger.warning('cannot set original locale: {0}'.format(orig_locale))
214 return formatted_date
215
216
217 def dict_subset(d, properties):
218 result = {}
219 for k in properties:
220 if k in d:
221 result[k] = d[k]
222 return result
223
224
225 def prettify_url(url):
226 if len(url) > 74:
227 return u'{0}[...]{1}'.format(url[:35], url[-35:])
228 else:
229 return url
230
231
232 def get_blocked_engines(engines, cookies):
233 if 'blocked_engines' not in cookies:
234 return [(engine_name, category) for engine_name in engines
235 for category in engines[engine_name].categories if engines[engine_name].disabled]
236
237 blocked_engine_strings = cookies.get('blocked_engines', '').split(',')
238 blocked_engines = []
239
240 if not blocked_engine_strings:
241 return blocked_engines
242
243 for engine_string in blocked_engine_strings:
244 if engine_string.find('__') > -1:
245 engine, category = engine_string.split('__', 1)
246 if engine in engines and category in engines[engine].categories:
247 blocked_engines.append((engine, category))
248 elif engine_string in engines:
249 for category in engines[engine_string].categories:
250 blocked_engines.append((engine_string, category))
251
252 return blocked_engines
253
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/searx/engines/flickr_noapi.py b/searx/engines/flickr_noapi.py
--- a/searx/engines/flickr_noapi.py
+++ b/searx/engines/flickr_noapi.py
@@ -20,8 +20,8 @@
categories = ['images']
-url = 'https://secure.flickr.com/'
-search_url = url + 'search/?{query}&page={page}'
+url = 'https://www.flickr.com/'
+search_url = url + 'search?{query}&page={page}'
photo_url = 'https://www.flickr.com/photos/{userid}/{photoid}'
regex = re.compile(r"\"search-photos-models\",\"photos\":(.*}),\"totalItems\":", re.DOTALL)
image_sizes = ('o', 'k', 'h', 'b', 'c', 'z', 'n', 'm', 't', 'q', 's')
diff --git a/searx/utils.py b/searx/utils.py
--- a/searx/utils.py
+++ b/searx/utils.py
@@ -17,17 +17,16 @@
logger = logger.getChild('utils')
-ua_versions = ('31.0',
- '32.0',
- '33.0',
+ua_versions = ('33.0',
'34.0',
- '35.0')
+ '35.0',
+ '36.0',
+ '37.0')
ua_os = ('Windows NT 6.3; WOW64',
'X11; Linux x86_64',
'X11; Linux x86')
-
-ua = "Mozilla/5.0 ({os}) Gecko/20100101 Firefox/{version}"
+ua = "Mozilla/5.0 ({os}; rv:{version}) Gecko/20100101 Firefox/{version}"
blocked_tags = ('script',
'style')
|
{"golden_diff": "diff --git a/searx/engines/flickr_noapi.py b/searx/engines/flickr_noapi.py\n--- a/searx/engines/flickr_noapi.py\n+++ b/searx/engines/flickr_noapi.py\n@@ -20,8 +20,8 @@\n \n categories = ['images']\n \n-url = 'https://secure.flickr.com/'\n-search_url = url + 'search/?{query}&page={page}'\n+url = 'https://www.flickr.com/'\n+search_url = url + 'search?{query}&page={page}'\n photo_url = 'https://www.flickr.com/photos/{userid}/{photoid}'\n regex = re.compile(r\"\\\"search-photos-models\\\",\\\"photos\\\":(.*}),\\\"totalItems\\\":\", re.DOTALL)\n image_sizes = ('o', 'k', 'h', 'b', 'c', 'z', 'n', 'm', 't', 'q', 's')\ndiff --git a/searx/utils.py b/searx/utils.py\n--- a/searx/utils.py\n+++ b/searx/utils.py\n@@ -17,17 +17,16 @@\n \n logger = logger.getChild('utils')\n \n-ua_versions = ('31.0',\n- '32.0',\n- '33.0',\n+ua_versions = ('33.0',\n '34.0',\n- '35.0')\n+ '35.0',\n+ '36.0',\n+ '37.0')\n \n ua_os = ('Windows NT 6.3; WOW64',\n 'X11; Linux x86_64',\n 'X11; Linux x86')\n-\n-ua = \"Mozilla/5.0 ({os}) Gecko/20100101 Firefox/{version}\"\n+ua = \"Mozilla/5.0 ({os}; rv:{version}) Gecko/20100101 Firefox/{version}\"\n \n blocked_tags = ('script',\n 'style')\n", "issue": "Flick engine is broken\nThe flick engine seems to be broken : no result \n\nIn the console :\n\n```\nINFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): secure.flickr.com\nDEBUG:requests.packages.urllib3.connectionpool:\"GET /search/?text=test&page=1 HTTP/1.1\" 302 20\nDEBUG:searx.search:flickr redirect on: <Response [302]>\nDEBUG:requests.packages.urllib3.connectionpool:\"GET /search/?rb=1&text=test&page=1 HTTP/1.1\" 302 91\nDEBUG:searx.search:flickr redirect on: <Response [302]>\nDEBUG:requests.packages.urllib3.connectionpool:\"GET /browser/upgrade/?continue=/search/?rb=1&text=test&page=1 HTTP/1.1\" 200 None\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# Flickr (Images)\n#\n# @website https://www.flickr.com\n# @provide-api yes (https://secure.flickr.com/services/api/flickr.photos.search.html)\n#\n# @using-api no\n# @results HTML\n# @stable no\n# @parse url, title, thumbnail, img_src\n\nfrom urllib import urlencode\nfrom json import loads\nimport re\nfrom searx.engines import logger\n\n\nlogger = logger.getChild('flickr-noapi')\n\ncategories = ['images']\n\nurl = 'https://secure.flickr.com/'\nsearch_url = url + 'search/?{query}&page={page}'\nphoto_url = 'https://www.flickr.com/photos/{userid}/{photoid}'\nregex = re.compile(r\"\\\"search-photos-models\\\",\\\"photos\\\":(.*}),\\\"totalItems\\\":\", re.DOTALL)\nimage_sizes = ('o', 'k', 'h', 'b', 'c', 'z', 'n', 'm', 't', 'q', 's')\n\npaging = True\n\n\ndef build_flickr_url(user_id, photo_id):\n return photo_url.format(userid=user_id, photoid=photo_id)\n\n\ndef request(query, params):\n params['url'] = search_url.format(query=urlencode({'text': query}),\n page=params['pageno'])\n return params\n\n\ndef response(resp):\n results = []\n\n matches = regex.search(resp.text)\n\n if matches is None:\n return results\n\n match = matches.group(1)\n search_results = loads(match)\n\n if '_data' not in search_results:\n return []\n\n photos = search_results['_data']\n\n for photo in photos:\n\n # In paged configuration, the first pages' photos\n # are represented by a None object\n if photo is None:\n continue\n\n img_src = None\n # From the biggest to the lowest format\n for image_size in image_sizes:\n if image_size in photo['sizes']:\n img_src = photo['sizes'][image_size]['url']\n break\n\n if not img_src:\n logger.debug('cannot find valid image size: {0}'.format(repr(photo)))\n continue\n\n if 'id' not in photo['owner']:\n continue\n\n# For a bigger thumbnail, keep only the url_z, not the url_n\n if 'n' in photo['sizes']:\n thumbnail_src = photo['sizes']['n']['url']\n elif 'z' in photo['sizes']:\n thumbnail_src = photo['sizes']['z']['url']\n else:\n thumbnail_src = img_src\n\n url = build_flickr_url(photo['owner']['id'], photo['id'])\n\n title = photo.get('title', '')\n\n content = '<span class=\"photo-author\">' +\\\n photo['owner']['username'] +\\\n '</span><br />'\n\n if 'description' in photo:\n content = content +\\\n '<span class=\"description\">' +\\\n photo['description'] +\\\n '</span>'\n\n # append result\n results.append({'url': url,\n 'title': title,\n 'img_src': img_src,\n 'thumbnail_src': thumbnail_src,\n 'content': content,\n 'template': 'images.html'})\n\n return results\n", "path": "searx/engines/flickr_noapi.py"}, {"content": "# import htmlentitydefs\nimport locale\nimport dateutil.parser\nimport cStringIO\nimport csv\nimport os\nimport re\n\nfrom codecs import getincrementalencoder\nfrom HTMLParser import HTMLParser\nfrom random import choice\n\nfrom searx.version import VERSION_STRING\nfrom searx import settings\nfrom searx import logger\n\n\nlogger = logger.getChild('utils')\n\nua_versions = ('31.0',\n '32.0',\n '33.0',\n '34.0',\n '35.0')\n\nua_os = ('Windows NT 6.3; WOW64',\n 'X11; Linux x86_64',\n 'X11; Linux x86')\n\nua = \"Mozilla/5.0 ({os}) Gecko/20100101 Firefox/{version}\"\n\nblocked_tags = ('script',\n 'style')\n\n\ndef gen_useragent():\n # TODO\n return ua.format(os=choice(ua_os), version=choice(ua_versions))\n\n\ndef searx_useragent():\n return 'searx/{searx_version} {suffix}'.format(\n searx_version=VERSION_STRING,\n suffix=settings['server'].get('useragent_suffix', ''))\n\n\ndef highlight_content(content, query):\n\n if not content:\n return None\n # ignoring html contents\n # TODO better html content detection\n if content.find('<') != -1:\n return content\n\n query = query.decode('utf-8')\n if content.lower().find(query.lower()) > -1:\n query_regex = u'({0})'.format(re.escape(query))\n content = re.sub(query_regex, '<span class=\"highlight\">\\\\1</span>',\n content, flags=re.I | re.U)\n else:\n regex_parts = []\n for chunk in query.split():\n if len(chunk) == 1:\n regex_parts.append(u'\\W+{0}\\W+'.format(re.escape(chunk)))\n else:\n regex_parts.append(u'{0}'.format(re.escape(chunk)))\n query_regex = u'({0})'.format('|'.join(regex_parts))\n content = re.sub(query_regex, '<span class=\"highlight\">\\\\1</span>',\n content, flags=re.I | re.U)\n\n return content\n\n\nclass HTMLTextExtractor(HTMLParser):\n def __init__(self):\n HTMLParser.__init__(self)\n self.result = []\n self.tags = []\n\n def handle_starttag(self, tag, attrs):\n self.tags.append(tag)\n\n def handle_endtag(self, tag):\n if not self.tags:\n return\n\n if tag != self.tags[-1]:\n raise Exception(\"invalid html\")\n\n self.tags.pop()\n\n def is_valid_tag(self):\n return not self.tags or self.tags[-1] not in blocked_tags\n\n def handle_data(self, d):\n if not self.is_valid_tag():\n return\n self.result.append(d)\n\n def handle_charref(self, number):\n if not self.is_valid_tag():\n return\n if number[0] in (u'x', u'X'):\n codepoint = int(number[1:], 16)\n else:\n codepoint = int(number)\n self.result.append(unichr(codepoint))\n\n def handle_entityref(self, name):\n if not self.is_valid_tag():\n return\n # codepoint = htmlentitydefs.name2codepoint[name]\n # self.result.append(unichr(codepoint))\n self.result.append(name)\n\n def get_text(self):\n return u''.join(self.result).strip()\n\n\ndef html_to_text(html):\n html = html.replace('\\n', ' ')\n html = ' '.join(html.split())\n s = HTMLTextExtractor()\n s.feed(html)\n return s.get_text()\n\n\nclass UnicodeWriter:\n \"\"\"\n A CSV writer which will write rows to CSV file \"f\",\n which is encoded in the given encoding.\n \"\"\"\n\n def __init__(self, f, dialect=csv.excel, encoding=\"utf-8\", **kwds):\n # Redirect output to a queue\n self.queue = cStringIO.StringIO()\n self.writer = csv.writer(self.queue, dialect=dialect, **kwds)\n self.stream = f\n self.encoder = getincrementalencoder(encoding)()\n\n def writerow(self, row):\n unicode_row = []\n for col in row:\n if type(col) == str or type(col) == unicode:\n unicode_row.append(col.encode('utf-8').strip())\n else:\n unicode_row.append(col)\n self.writer.writerow(unicode_row)\n # Fetch UTF-8 output from the queue ...\n data = self.queue.getvalue()\n data = data.decode(\"utf-8\")\n # ... and reencode it into the target encoding\n data = self.encoder.encode(data)\n # write to the target stream\n self.stream.write(data)\n # empty queue\n self.queue.truncate(0)\n\n def writerows(self, rows):\n for row in rows:\n self.writerow(row)\n\n\ndef get_themes(root):\n \"\"\"Returns available themes list.\"\"\"\n\n static_path = os.path.join(root, 'static')\n templates_path = os.path.join(root, 'templates')\n\n themes = os.listdir(os.path.join(static_path, 'themes'))\n return static_path, templates_path, themes\n\n\ndef get_static_files(base_path):\n base_path = os.path.join(base_path, 'static')\n static_files = set()\n base_path_length = len(base_path) + 1\n for directory, _, files in os.walk(base_path):\n for filename in files:\n f = os.path.join(directory[base_path_length:], filename)\n static_files.add(f)\n return static_files\n\n\ndef get_result_templates(base_path):\n base_path = os.path.join(base_path, 'templates')\n result_templates = set()\n base_path_length = len(base_path) + 1\n for directory, _, files in os.walk(base_path):\n if directory.endswith('result_templates'):\n for filename in files:\n f = os.path.join(directory[base_path_length:], filename)\n result_templates.add(f)\n return result_templates\n\n\ndef format_date_by_locale(date_string, locale_string):\n # strftime works only on dates after 1900\n parsed_date = dateutil.parser.parse(date_string)\n if parsed_date.year <= 1900:\n return parsed_date.isoformat().split('T')[0]\n\n orig_locale = locale.getlocale()[0]\n try:\n locale.setlocale(locale.LC_ALL, locale_string)\n except:\n logger.warning('cannot set locale: {0}'.format(locale_string))\n formatted_date = parsed_date.strftime(locale.nl_langinfo(locale.D_FMT))\n try:\n locale.setlocale(locale.LC_ALL, orig_locale)\n except:\n logger.warning('cannot set original locale: {0}'.format(orig_locale))\n return formatted_date\n\n\ndef dict_subset(d, properties):\n result = {}\n for k in properties:\n if k in d:\n result[k] = d[k]\n return result\n\n\ndef prettify_url(url):\n if len(url) > 74:\n return u'{0}[...]{1}'.format(url[:35], url[-35:])\n else:\n return url\n\n\ndef get_blocked_engines(engines, cookies):\n if 'blocked_engines' not in cookies:\n return [(engine_name, category) for engine_name in engines\n for category in engines[engine_name].categories if engines[engine_name].disabled]\n\n blocked_engine_strings = cookies.get('blocked_engines', '').split(',')\n blocked_engines = []\n\n if not blocked_engine_strings:\n return blocked_engines\n\n for engine_string in blocked_engine_strings:\n if engine_string.find('__') > -1:\n engine, category = engine_string.split('__', 1)\n if engine in engines and category in engines[engine].categories:\n blocked_engines.append((engine, category))\n elif engine_string in engines:\n for category in engines[engine_string].categories:\n blocked_engines.append((engine_string, category))\n\n return blocked_engines\n", "path": "searx/utils.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# Flickr (Images)\n#\n# @website https://www.flickr.com\n# @provide-api yes (https://secure.flickr.com/services/api/flickr.photos.search.html)\n#\n# @using-api no\n# @results HTML\n# @stable no\n# @parse url, title, thumbnail, img_src\n\nfrom urllib import urlencode\nfrom json import loads\nimport re\nfrom searx.engines import logger\n\n\nlogger = logger.getChild('flickr-noapi')\n\ncategories = ['images']\n\nurl = 'https://www.flickr.com/'\nsearch_url = url + 'search?{query}&page={page}'\nphoto_url = 'https://www.flickr.com/photos/{userid}/{photoid}'\nregex = re.compile(r\"\\\"search-photos-models\\\",\\\"photos\\\":(.*}),\\\"totalItems\\\":\", re.DOTALL)\nimage_sizes = ('o', 'k', 'h', 'b', 'c', 'z', 'n', 'm', 't', 'q', 's')\n\npaging = True\n\n\ndef build_flickr_url(user_id, photo_id):\n return photo_url.format(userid=user_id, photoid=photo_id)\n\n\ndef request(query, params):\n params['url'] = search_url.format(query=urlencode({'text': query}),\n page=params['pageno'])\n return params\n\n\ndef response(resp):\n results = []\n\n matches = regex.search(resp.text)\n\n if matches is None:\n return results\n\n match = matches.group(1)\n search_results = loads(match)\n\n if '_data' not in search_results:\n return []\n\n photos = search_results['_data']\n\n for photo in photos:\n\n # In paged configuration, the first pages' photos\n # are represented by a None object\n if photo is None:\n continue\n\n img_src = None\n # From the biggest to the lowest format\n for image_size in image_sizes:\n if image_size in photo['sizes']:\n img_src = photo['sizes'][image_size]['url']\n break\n\n if not img_src:\n logger.debug('cannot find valid image size: {0}'.format(repr(photo)))\n continue\n\n if 'id' not in photo['owner']:\n continue\n\n# For a bigger thumbnail, keep only the url_z, not the url_n\n if 'n' in photo['sizes']:\n thumbnail_src = photo['sizes']['n']['url']\n elif 'z' in photo['sizes']:\n thumbnail_src = photo['sizes']['z']['url']\n else:\n thumbnail_src = img_src\n\n url = build_flickr_url(photo['owner']['id'], photo['id'])\n\n title = photo.get('title', '')\n\n content = '<span class=\"photo-author\">' +\\\n photo['owner']['username'] +\\\n '</span><br />'\n\n if 'description' in photo:\n content = content +\\\n '<span class=\"description\">' +\\\n photo['description'] +\\\n '</span>'\n\n # append result\n results.append({'url': url,\n 'title': title,\n 'img_src': img_src,\n 'thumbnail_src': thumbnail_src,\n 'content': content,\n 'template': 'images.html'})\n\n return results\n", "path": "searx/engines/flickr_noapi.py"}, {"content": "# import htmlentitydefs\nimport locale\nimport dateutil.parser\nimport cStringIO\nimport csv\nimport os\nimport re\n\nfrom codecs import getincrementalencoder\nfrom HTMLParser import HTMLParser\nfrom random import choice\n\nfrom searx.version import VERSION_STRING\nfrom searx import settings\nfrom searx import logger\n\n\nlogger = logger.getChild('utils')\n\nua_versions = ('33.0',\n '34.0',\n '35.0',\n '36.0',\n '37.0')\n\nua_os = ('Windows NT 6.3; WOW64',\n 'X11; Linux x86_64',\n 'X11; Linux x86')\nua = \"Mozilla/5.0 ({os}; rv:{version}) Gecko/20100101 Firefox/{version}\"\n\nblocked_tags = ('script',\n 'style')\n\n\ndef gen_useragent():\n # TODO\n return ua.format(os=choice(ua_os), version=choice(ua_versions))\n\n\ndef searx_useragent():\n return 'searx/{searx_version} {suffix}'.format(\n searx_version=VERSION_STRING,\n suffix=settings['server'].get('useragent_suffix', ''))\n\n\ndef highlight_content(content, query):\n\n if not content:\n return None\n # ignoring html contents\n # TODO better html content detection\n if content.find('<') != -1:\n return content\n\n query = query.decode('utf-8')\n if content.lower().find(query.lower()) > -1:\n query_regex = u'({0})'.format(re.escape(query))\n content = re.sub(query_regex, '<span class=\"highlight\">\\\\1</span>',\n content, flags=re.I | re.U)\n else:\n regex_parts = []\n for chunk in query.split():\n if len(chunk) == 1:\n regex_parts.append(u'\\W+{0}\\W+'.format(re.escape(chunk)))\n else:\n regex_parts.append(u'{0}'.format(re.escape(chunk)))\n query_regex = u'({0})'.format('|'.join(regex_parts))\n content = re.sub(query_regex, '<span class=\"highlight\">\\\\1</span>',\n content, flags=re.I | re.U)\n\n return content\n\n\nclass HTMLTextExtractor(HTMLParser):\n def __init__(self):\n HTMLParser.__init__(self)\n self.result = []\n self.tags = []\n\n def handle_starttag(self, tag, attrs):\n self.tags.append(tag)\n\n def handle_endtag(self, tag):\n if not self.tags:\n return\n\n if tag != self.tags[-1]:\n raise Exception(\"invalid html\")\n\n self.tags.pop()\n\n def is_valid_tag(self):\n return not self.tags or self.tags[-1] not in blocked_tags\n\n def handle_data(self, d):\n if not self.is_valid_tag():\n return\n self.result.append(d)\n\n def handle_charref(self, number):\n if not self.is_valid_tag():\n return\n if number[0] in (u'x', u'X'):\n codepoint = int(number[1:], 16)\n else:\n codepoint = int(number)\n self.result.append(unichr(codepoint))\n\n def handle_entityref(self, name):\n if not self.is_valid_tag():\n return\n # codepoint = htmlentitydefs.name2codepoint[name]\n # self.result.append(unichr(codepoint))\n self.result.append(name)\n\n def get_text(self):\n return u''.join(self.result).strip()\n\n\ndef html_to_text(html):\n html = html.replace('\\n', ' ')\n html = ' '.join(html.split())\n s = HTMLTextExtractor()\n s.feed(html)\n return s.get_text()\n\n\nclass UnicodeWriter:\n \"\"\"\n A CSV writer which will write rows to CSV file \"f\",\n which is encoded in the given encoding.\n \"\"\"\n\n def __init__(self, f, dialect=csv.excel, encoding=\"utf-8\", **kwds):\n # Redirect output to a queue\n self.queue = cStringIO.StringIO()\n self.writer = csv.writer(self.queue, dialect=dialect, **kwds)\n self.stream = f\n self.encoder = getincrementalencoder(encoding)()\n\n def writerow(self, row):\n unicode_row = []\n for col in row:\n if type(col) == str or type(col) == unicode:\n unicode_row.append(col.encode('utf-8').strip())\n else:\n unicode_row.append(col)\n self.writer.writerow(unicode_row)\n # Fetch UTF-8 output from the queue ...\n data = self.queue.getvalue()\n data = data.decode(\"utf-8\")\n # ... and reencode it into the target encoding\n data = self.encoder.encode(data)\n # write to the target stream\n self.stream.write(data)\n # empty queue\n self.queue.truncate(0)\n\n def writerows(self, rows):\n for row in rows:\n self.writerow(row)\n\n\ndef get_themes(root):\n \"\"\"Returns available themes list.\"\"\"\n\n static_path = os.path.join(root, 'static')\n templates_path = os.path.join(root, 'templates')\n\n themes = os.listdir(os.path.join(static_path, 'themes'))\n return static_path, templates_path, themes\n\n\ndef get_static_files(base_path):\n base_path = os.path.join(base_path, 'static')\n static_files = set()\n base_path_length = len(base_path) + 1\n for directory, _, files in os.walk(base_path):\n for filename in files:\n f = os.path.join(directory[base_path_length:], filename)\n static_files.add(f)\n return static_files\n\n\ndef get_result_templates(base_path):\n base_path = os.path.join(base_path, 'templates')\n result_templates = set()\n base_path_length = len(base_path) + 1\n for directory, _, files in os.walk(base_path):\n if directory.endswith('result_templates'):\n for filename in files:\n f = os.path.join(directory[base_path_length:], filename)\n result_templates.add(f)\n return result_templates\n\n\ndef format_date_by_locale(date_string, locale_string):\n # strftime works only on dates after 1900\n parsed_date = dateutil.parser.parse(date_string)\n if parsed_date.year <= 1900:\n return parsed_date.isoformat().split('T')[0]\n\n orig_locale = locale.getlocale()[0]\n try:\n locale.setlocale(locale.LC_ALL, locale_string)\n except:\n logger.warning('cannot set locale: {0}'.format(locale_string))\n formatted_date = parsed_date.strftime(locale.nl_langinfo(locale.D_FMT))\n try:\n locale.setlocale(locale.LC_ALL, orig_locale)\n except:\n logger.warning('cannot set original locale: {0}'.format(orig_locale))\n return formatted_date\n\n\ndef dict_subset(d, properties):\n result = {}\n for k in properties:\n if k in d:\n result[k] = d[k]\n return result\n\n\ndef prettify_url(url):\n if len(url) > 74:\n return u'{0}[...]{1}'.format(url[:35], url[-35:])\n else:\n return url\n\n\ndef get_blocked_engines(engines, cookies):\n if 'blocked_engines' not in cookies:\n return [(engine_name, category) for engine_name in engines\n for category in engines[engine_name].categories if engines[engine_name].disabled]\n\n blocked_engine_strings = cookies.get('blocked_engines', '').split(',')\n blocked_engines = []\n\n if not blocked_engine_strings:\n return blocked_engines\n\n for engine_string in blocked_engine_strings:\n if engine_string.find('__') > -1:\n engine, category = engine_string.split('__', 1)\n if engine in engines and category in engines[engine].categories:\n blocked_engines.append((engine, category))\n elif engine_string in engines:\n for category in engines[engine_string].categories:\n blocked_engines.append((engine_string, category))\n\n return blocked_engines\n", "path": "searx/utils.py"}]}
| 3,839 | 447 |
gh_patches_debug_5564
|
rasdani/github-patches
|
git_diff
|
Parsl__parsl-930
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Log app args
For easier debugging, we should log the arguments apps are called with.
Requested by @mjwilde
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsl/app/bash.py`
Content:
```
1 import logging
2 from functools import update_wrapper
3 from inspect import signature, Parameter
4
5 from parsl.app.errors import wrap_error
6 from parsl.app.futures import DataFuture
7 from parsl.app.app import AppBase
8 from parsl.dataflow.dflow import DataFlowKernelLoader
9
10 logger = logging.getLogger(__name__)
11
12
13 def remote_side_bash_executor(func, *args, **kwargs):
14 """Execute the bash app type function and return the command line string.
15
16 This string is reformatted with the *args, and **kwargs
17 from call time.
18 """
19 import os
20 import time
21 import subprocess
22 import logging
23 import parsl.app.errors as pe
24
25 logging.basicConfig(filename='/tmp/bashexec.{0}.log'.format(time.time()), level=logging.DEBUG)
26
27 # start_t = time.time()
28
29 func_name = func.__name__
30
31 partial_cmdline = None
32
33 # Try to run the func to compose the commandline
34 try:
35 # Execute the func to get the commandline
36 partial_cmdline = func(*args, **kwargs)
37 # Reformat the commandline with current args and kwargs
38 executable = partial_cmdline.format(*args, **kwargs)
39
40 except AttributeError as e:
41 if partial_cmdline is not None:
42 raise pe.AppBadFormatting("App formatting failed for app '{}' with AttributeError: {}".format(func_name, e))
43 else:
44 raise pe.BashAppNoReturn("Bash app '{}' did not return a value, or returned none - with this exception: {}".format(func_name, e), None)
45
46 except IndexError as e:
47 raise pe.AppBadFormatting("App formatting failed for app '{}' with IndexError: {}".format(func_name, e))
48 except Exception as e:
49 logging.error("Caught exception during formatting of app '{}': {}".format(func_name, e))
50 raise e
51
52 logging.debug("Executable: %s", executable)
53
54 # Updating stdout, stderr if values passed at call time.
55
56 def open_std_fd(fdname):
57 # fdname is 'stdout' or 'stderr'
58 stdfspec = kwargs.get(fdname) # spec is str name or tuple (name, mode)
59 if stdfspec is None:
60 return None
61 elif isinstance(stdfspec, str):
62 fname = stdfspec
63 mode = 'a+'
64 elif isinstance(stdfspec, tuple):
65 if len(stdfspec) != 2:
66 raise pe.BadStdStreamFile("std descriptor %s has incorrect tuple length %s" % (fdname, len(stdfspec)), TypeError('Bad Tuple Length'))
67 fname, mode = stdfspec
68 else:
69 raise pe.BadStdStreamFile("std descriptor %s has unexpected type %s" % (fdname, str(type(stdfspec))), TypeError('Bad Tuple Type'))
70 try:
71 fd = open(fname, mode)
72 except Exception as e:
73 raise pe.BadStdStreamFile(fname, e)
74 return fd
75
76 std_out = open_std_fd('stdout')
77 std_err = open_std_fd('stderr')
78 timeout = kwargs.get('walltime')
79
80 returncode = None
81 try:
82 proc = subprocess.Popen(executable, stdout=std_out, stderr=std_err, shell=True, executable='/bin/bash')
83 proc.wait(timeout=timeout)
84 returncode = proc.returncode
85
86 except subprocess.TimeoutExpired:
87 # print("Timeout")
88 raise pe.AppTimeout("[{}] App exceeded walltime: {}".format(func_name, timeout))
89
90 except Exception as e:
91 # print("Caught exception: ", e)
92 raise pe.AppException("[{}] App caught exception: {}".format(func_name, proc.returncode), e)
93
94 if returncode != 0:
95 raise pe.AppFailure("[{}] App failed with exit code: {}".format(func_name, proc.returncode), proc.returncode)
96
97 # TODO : Add support for globs here
98
99 missing = []
100 for outputfile in kwargs.get('outputs', []):
101 fpath = outputfile
102 if type(outputfile) != str:
103 fpath = outputfile.filepath
104
105 if not os.path.exists(fpath):
106 missing.extend([outputfile])
107
108 if missing:
109 raise pe.MissingOutputs("[{}] Missing outputs".format(func_name), missing)
110
111 # exec_duration = time.time() - start_t
112 return returncode
113
114
115 class BashApp(AppBase):
116
117 def __init__(self, func, data_flow_kernel=None, walltime=60, cache=False, executors='all'):
118 super().__init__(func, data_flow_kernel=data_flow_kernel, walltime=60, executors=executors, cache=cache)
119 self.kwargs = {}
120
121 # We duplicate the extraction of parameter defaults
122 # to self.kwargs to ensure availability at point of
123 # command string format. Refer: #349
124 sig = signature(func)
125
126 for s in sig.parameters:
127 if sig.parameters[s].default != Parameter.empty:
128 self.kwargs[s] = sig.parameters[s].default
129
130 def __call__(self, *args, **kwargs):
131 """Handle the call to a Bash app.
132
133 Args:
134 - Arbitrary
135
136 Kwargs:
137 - Arbitrary
138
139 Returns:
140 If outputs=[...] was a kwarg then:
141 App_fut, [Data_Futures...]
142 else:
143 App_fut
144
145 """
146 # Update kwargs in the app definition with ones passed in at calltime
147 self.kwargs.update(kwargs)
148
149 if self.data_flow_kernel is None:
150 dfk = DataFlowKernelLoader.dfk()
151 else:
152 dfk = self.data_flow_kernel
153
154 app_fut = dfk.submit(wrap_error(update_wrapper(remote_side_bash_executor, self.func)),
155 self.func, *args,
156 executors=self.executors,
157 fn_hash=self.func_hash,
158 cache=self.cache,
159 **self.kwargs)
160
161 out_futs = [DataFuture(app_fut, o, tid=app_fut.tid)
162 for o in kwargs.get('outputs', [])]
163 app_fut._outputs = out_futs
164
165 return app_fut
166
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsl/app/bash.py b/parsl/app/bash.py
--- a/parsl/app/bash.py
+++ b/parsl/app/bash.py
@@ -77,6 +77,9 @@
std_err = open_std_fd('stderr')
timeout = kwargs.get('walltime')
+ if std_err is not None:
+ print('--> executable follows <--\n{}\n--> end executable <--'.format(executable), file=std_err)
+
returncode = None
try:
proc = subprocess.Popen(executable, stdout=std_out, stderr=std_err, shell=True, executable='/bin/bash')
|
{"golden_diff": "diff --git a/parsl/app/bash.py b/parsl/app/bash.py\n--- a/parsl/app/bash.py\n+++ b/parsl/app/bash.py\n@@ -77,6 +77,9 @@\n std_err = open_std_fd('stderr')\n timeout = kwargs.get('walltime')\n \n+ if std_err is not None:\n+ print('--> executable follows <--\\n{}\\n--> end executable <--'.format(executable), file=std_err)\n+\n returncode = None\n try:\n proc = subprocess.Popen(executable, stdout=std_out, stderr=std_err, shell=True, executable='/bin/bash')\n", "issue": "Log app args\nFor easier debugging, we should log the arguments apps are called with.\r\n\r\nRequested by @mjwilde \n", "before_files": [{"content": "import logging\nfrom functools import update_wrapper\nfrom inspect import signature, Parameter\n\nfrom parsl.app.errors import wrap_error\nfrom parsl.app.futures import DataFuture\nfrom parsl.app.app import AppBase\nfrom parsl.dataflow.dflow import DataFlowKernelLoader\n\nlogger = logging.getLogger(__name__)\n\n\ndef remote_side_bash_executor(func, *args, **kwargs):\n \"\"\"Execute the bash app type function and return the command line string.\n\n This string is reformatted with the *args, and **kwargs\n from call time.\n \"\"\"\n import os\n import time\n import subprocess\n import logging\n import parsl.app.errors as pe\n\n logging.basicConfig(filename='/tmp/bashexec.{0}.log'.format(time.time()), level=logging.DEBUG)\n\n # start_t = time.time()\n\n func_name = func.__name__\n\n partial_cmdline = None\n\n # Try to run the func to compose the commandline\n try:\n # Execute the func to get the commandline\n partial_cmdline = func(*args, **kwargs)\n # Reformat the commandline with current args and kwargs\n executable = partial_cmdline.format(*args, **kwargs)\n\n except AttributeError as e:\n if partial_cmdline is not None:\n raise pe.AppBadFormatting(\"App formatting failed for app '{}' with AttributeError: {}\".format(func_name, e))\n else:\n raise pe.BashAppNoReturn(\"Bash app '{}' did not return a value, or returned none - with this exception: {}\".format(func_name, e), None)\n\n except IndexError as e:\n raise pe.AppBadFormatting(\"App formatting failed for app '{}' with IndexError: {}\".format(func_name, e))\n except Exception as e:\n logging.error(\"Caught exception during formatting of app '{}': {}\".format(func_name, e))\n raise e\n\n logging.debug(\"Executable: %s\", executable)\n\n # Updating stdout, stderr if values passed at call time.\n\n def open_std_fd(fdname):\n # fdname is 'stdout' or 'stderr'\n stdfspec = kwargs.get(fdname) # spec is str name or tuple (name, mode)\n if stdfspec is None:\n return None\n elif isinstance(stdfspec, str):\n fname = stdfspec\n mode = 'a+'\n elif isinstance(stdfspec, tuple):\n if len(stdfspec) != 2:\n raise pe.BadStdStreamFile(\"std descriptor %s has incorrect tuple length %s\" % (fdname, len(stdfspec)), TypeError('Bad Tuple Length'))\n fname, mode = stdfspec\n else:\n raise pe.BadStdStreamFile(\"std descriptor %s has unexpected type %s\" % (fdname, str(type(stdfspec))), TypeError('Bad Tuple Type'))\n try:\n fd = open(fname, mode)\n except Exception as e:\n raise pe.BadStdStreamFile(fname, e)\n return fd\n\n std_out = open_std_fd('stdout')\n std_err = open_std_fd('stderr')\n timeout = kwargs.get('walltime')\n\n returncode = None\n try:\n proc = subprocess.Popen(executable, stdout=std_out, stderr=std_err, shell=True, executable='/bin/bash')\n proc.wait(timeout=timeout)\n returncode = proc.returncode\n\n except subprocess.TimeoutExpired:\n # print(\"Timeout\")\n raise pe.AppTimeout(\"[{}] App exceeded walltime: {}\".format(func_name, timeout))\n\n except Exception as e:\n # print(\"Caught exception: \", e)\n raise pe.AppException(\"[{}] App caught exception: {}\".format(func_name, proc.returncode), e)\n\n if returncode != 0:\n raise pe.AppFailure(\"[{}] App failed with exit code: {}\".format(func_name, proc.returncode), proc.returncode)\n\n # TODO : Add support for globs here\n\n missing = []\n for outputfile in kwargs.get('outputs', []):\n fpath = outputfile\n if type(outputfile) != str:\n fpath = outputfile.filepath\n\n if not os.path.exists(fpath):\n missing.extend([outputfile])\n\n if missing:\n raise pe.MissingOutputs(\"[{}] Missing outputs\".format(func_name), missing)\n\n # exec_duration = time.time() - start_t\n return returncode\n\n\nclass BashApp(AppBase):\n\n def __init__(self, func, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n super().__init__(func, data_flow_kernel=data_flow_kernel, walltime=60, executors=executors, cache=cache)\n self.kwargs = {}\n\n # We duplicate the extraction of parameter defaults\n # to self.kwargs to ensure availability at point of\n # command string format. Refer: #349\n sig = signature(func)\n\n for s in sig.parameters:\n if sig.parameters[s].default != Parameter.empty:\n self.kwargs[s] = sig.parameters[s].default\n\n def __call__(self, *args, **kwargs):\n \"\"\"Handle the call to a Bash app.\n\n Args:\n - Arbitrary\n\n Kwargs:\n - Arbitrary\n\n Returns:\n If outputs=[...] was a kwarg then:\n App_fut, [Data_Futures...]\n else:\n App_fut\n\n \"\"\"\n # Update kwargs in the app definition with ones passed in at calltime\n self.kwargs.update(kwargs)\n\n if self.data_flow_kernel is None:\n dfk = DataFlowKernelLoader.dfk()\n else:\n dfk = self.data_flow_kernel\n\n app_fut = dfk.submit(wrap_error(update_wrapper(remote_side_bash_executor, self.func)),\n self.func, *args,\n executors=self.executors,\n fn_hash=self.func_hash,\n cache=self.cache,\n **self.kwargs)\n\n out_futs = [DataFuture(app_fut, o, tid=app_fut.tid)\n for o in kwargs.get('outputs', [])]\n app_fut._outputs = out_futs\n\n return app_fut\n", "path": "parsl/app/bash.py"}], "after_files": [{"content": "import logging\nfrom functools import update_wrapper\nfrom inspect import signature, Parameter\n\nfrom parsl.app.errors import wrap_error\nfrom parsl.app.futures import DataFuture\nfrom parsl.app.app import AppBase\nfrom parsl.dataflow.dflow import DataFlowKernelLoader\n\nlogger = logging.getLogger(__name__)\n\n\ndef remote_side_bash_executor(func, *args, **kwargs):\n \"\"\"Execute the bash app type function and return the command line string.\n\n This string is reformatted with the *args, and **kwargs\n from call time.\n \"\"\"\n import os\n import time\n import subprocess\n import logging\n import parsl.app.errors as pe\n\n logging.basicConfig(filename='/tmp/bashexec.{0}.log'.format(time.time()), level=logging.DEBUG)\n\n # start_t = time.time()\n\n func_name = func.__name__\n\n partial_cmdline = None\n\n # Try to run the func to compose the commandline\n try:\n # Execute the func to get the commandline\n partial_cmdline = func(*args, **kwargs)\n # Reformat the commandline with current args and kwargs\n executable = partial_cmdline.format(*args, **kwargs)\n\n except AttributeError as e:\n if partial_cmdline is not None:\n raise pe.AppBadFormatting(\"App formatting failed for app '{}' with AttributeError: {}\".format(func_name, e))\n else:\n raise pe.BashAppNoReturn(\"Bash app '{}' did not return a value, or returned none - with this exception: {}\".format(func_name, e), None)\n\n except IndexError as e:\n raise pe.AppBadFormatting(\"App formatting failed for app '{}' with IndexError: {}\".format(func_name, e))\n except Exception as e:\n logging.error(\"Caught exception during formatting of app '{}': {}\".format(func_name, e))\n raise e\n\n logging.debug(\"Executable: %s\", executable)\n\n # Updating stdout, stderr if values passed at call time.\n\n def open_std_fd(fdname):\n # fdname is 'stdout' or 'stderr'\n stdfspec = kwargs.get(fdname) # spec is str name or tuple (name, mode)\n if stdfspec is None:\n return None\n elif isinstance(stdfspec, str):\n fname = stdfspec\n mode = 'a+'\n elif isinstance(stdfspec, tuple):\n if len(stdfspec) != 2:\n raise pe.BadStdStreamFile(\"std descriptor %s has incorrect tuple length %s\" % (fdname, len(stdfspec)), TypeError('Bad Tuple Length'))\n fname, mode = stdfspec\n else:\n raise pe.BadStdStreamFile(\"std descriptor %s has unexpected type %s\" % (fdname, str(type(stdfspec))), TypeError('Bad Tuple Type'))\n try:\n fd = open(fname, mode)\n except Exception as e:\n raise pe.BadStdStreamFile(fname, e)\n return fd\n\n std_out = open_std_fd('stdout')\n std_err = open_std_fd('stderr')\n timeout = kwargs.get('walltime')\n\n if std_err is not None:\n print('--> executable follows <--\\n{}\\n--> end executable <--'.format(executable), file=std_err)\n\n returncode = None\n try:\n proc = subprocess.Popen(executable, stdout=std_out, stderr=std_err, shell=True, executable='/bin/bash')\n proc.wait(timeout=timeout)\n returncode = proc.returncode\n\n except subprocess.TimeoutExpired:\n # print(\"Timeout\")\n raise pe.AppTimeout(\"[{}] App exceeded walltime: {}\".format(func_name, timeout))\n\n except Exception as e:\n # print(\"Caught exception: \", e)\n raise pe.AppException(\"[{}] App caught exception: {}\".format(func_name, proc.returncode), e)\n\n if returncode != 0:\n raise pe.AppFailure(\"[{}] App failed with exit code: {}\".format(func_name, proc.returncode), proc.returncode)\n\n # TODO : Add support for globs here\n\n missing = []\n for outputfile in kwargs.get('outputs', []):\n fpath = outputfile\n if type(outputfile) != str:\n fpath = outputfile.filepath\n\n if not os.path.exists(fpath):\n missing.extend([outputfile])\n\n if missing:\n raise pe.MissingOutputs(\"[{}] Missing outputs\".format(func_name), missing)\n\n # exec_duration = time.time() - start_t\n return returncode\n\n\nclass BashApp(AppBase):\n\n def __init__(self, func, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n super().__init__(func, data_flow_kernel=data_flow_kernel, walltime=60, executors=executors, cache=cache)\n self.kwargs = {}\n\n # We duplicate the extraction of parameter defaults\n # to self.kwargs to ensure availability at point of\n # command string format. Refer: #349\n sig = signature(func)\n\n for s in sig.parameters:\n if sig.parameters[s].default != Parameter.empty:\n self.kwargs[s] = sig.parameters[s].default\n\n def __call__(self, *args, **kwargs):\n \"\"\"Handle the call to a Bash app.\n\n Args:\n - Arbitrary\n\n Kwargs:\n - Arbitrary\n\n Returns:\n If outputs=[...] was a kwarg then:\n App_fut, [Data_Futures...]\n else:\n App_fut\n\n \"\"\"\n # Update kwargs in the app definition with ones passed in at calltime\n self.kwargs.update(kwargs)\n\n if self.data_flow_kernel is None:\n dfk = DataFlowKernelLoader.dfk()\n else:\n dfk = self.data_flow_kernel\n\n app_fut = dfk.submit(wrap_error(update_wrapper(remote_side_bash_executor, self.func)),\n self.func, *args,\n executors=self.executors,\n fn_hash=self.func_hash,\n cache=self.cache,\n **self.kwargs)\n\n out_futs = [DataFuture(app_fut, o, tid=app_fut.tid)\n for o in kwargs.get('outputs', [])]\n app_fut._outputs = out_futs\n\n return app_fut\n", "path": "parsl/app/bash.py"}]}
| 1,991 | 139 |
gh_patches_debug_24219
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-9546
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Please replace MultiSelect widget with something more compact, effective and nicer looking
ORIGINALLY POSTED AS PANEL FEATURE REQUEST AT https://github.com/holoviz/panel/issues/874
#### My Pain
I know of experience that many of my dashboard have multiple multi-selections.
The MultiSelect widget of panel is not compact so it takes up a lot of space.
Furthermore I find the MultiSelect not very nicely looking. I would like my dashboards to look appealing and fresh.
Futhermore I think navigating and selecting in the MultiSelect widget is slow as soon as you have to start scrolling in the MultiSelect

#### Solution
Implement compact, efficient and nicer looking MultiSelect.
It should work as a dropdown with multiselect.
#### Additional Context
You can get inspiration from Dash, Streamlit and Tableau that all have a much more compact and modern looking widget.


FYI. Tableau has both a more compact Dropdown and something similar to the MultiSelect.
Here it's used an that's where I have my evaluation from. You can find it in the Gallery at awesome-panel.org.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/models/widgets/inputs.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7 ''' Various kinds of input widgets and form controls.
8
9 '''
10
11 #-----------------------------------------------------------------------------
12 # Boilerplate
13 #-----------------------------------------------------------------------------
14 import logging # isort:skip
15 log = logging.getLogger(__name__)
16
17 #-----------------------------------------------------------------------------
18 # Imports
19 #-----------------------------------------------------------------------------
20
21 # Bokeh imports
22 from ...core.enums import CalendarPosition
23 from ...core.has_props import abstract
24 from ...core.properties import (
25 Bool,
26 ColorHex,
27 Date,
28 Dict,
29 Either,
30 Enum,
31 Float,
32 Int,
33 List,
34 PositiveInt,
35 String,
36 Tuple,
37 )
38 from .widget import Widget
39
40 #-----------------------------------------------------------------------------
41 # Globals and constants
42 #-----------------------------------------------------------------------------
43
44 __all__ = (
45 'AutocompleteInput',
46 'ColorPicker',
47 'DatePicker',
48 'FileInput',
49 'InputWidget',
50 'MultiSelect',
51 'PasswordInput',
52 'Select',
53 'Spinner',
54 'TextInput',
55 'TextAreaInput'
56 )
57
58 #-----------------------------------------------------------------------------
59 # Dev API
60 #-----------------------------------------------------------------------------
61
62
63 @abstract
64 class InputWidget(Widget):
65 ''' Abstract base class for input widgets.
66
67 '''
68
69 title = String(default="", help="""
70 Widget's label.
71 """)
72
73 @classmethod
74 def coerce_value(cls, val):
75 prop_obj = cls.lookup('value')
76 if isinstance(prop_obj, Float):
77 return float(val)
78 elif isinstance(prop_obj, Int):
79 return int(val)
80 elif isinstance(prop_obj, String):
81 return str(val)
82 else:
83 return val
84
85 #-----------------------------------------------------------------------------
86 # General API
87 #-----------------------------------------------------------------------------
88
89 class FileInput(Widget):
90 ''' Present a file-chooser dialog to users and return the contents of a
91 selected file.
92
93 '''
94
95 value = String(default="", readonly=True, help="""
96 A base64-encoded string of the contents of the selected file.
97 """)
98
99 mime_type = String(default="", readonly=True, help="""
100 The mime type of the selected file.
101 """)
102
103 filename = String(default="", readonly=True, help="""
104 The filename of the selected file.
105
106 .. note::
107 The full file path is not included since browsers will not provide
108 access to that information for security reasons.
109 """)
110
111 accept = String(default="", help="""
112 Comma-separated list of standard HTML file input filters that restrict what
113 files the user can pick from. Values can be:
114
115 `<file extension>`:
116 Specific file extension(s) (e.g: .gif, .jpg, .png, .doc) are pickable
117
118 `audio/*`:
119 all sound files are pickable
120
121 `video/*`:
122 all video files are pickable
123
124 `image/*`:
125 all image files are pickable
126
127 `<media type>`:
128 A valid `IANA Media Type`_, with no parameters.
129
130 .. _IANA Media Type: https://www.iana.org/assignments/media-types/media-types.xhtml
131 """)
132
133
134 class TextInput(InputWidget):
135 ''' Single-line input widget.
136
137 '''
138
139 value = String(default="", help="""
140 Initial or entered text value.
141
142 Change events are triggered whenever <enter> is pressed.
143 """)
144
145 value_input = String(default="", help="""
146 Initial or current value.
147
148 Change events are triggered whenever any update happens, i.e. on every
149 keypress.
150 """)
151
152 placeholder = String(default="", help="""
153 Placeholder for empty input field.
154 """)
155
156
157 class TextAreaInput(TextInput):
158 ''' Multi-line input widget.
159
160 '''
161
162 cols = Int(default=20, help="""
163 Specifies the width of the text area (in average character width). Default: 20
164 """)
165
166 rows = Int(default=2, help="""
167 Specifies the height of the text area (in lines). Default: 2
168 """)
169
170 max_length = Int(default=500, help="""
171 Max count of characters in field
172 """)
173
174
175 class PasswordInput(TextInput):
176 ''' Single-line password input widget.
177
178 This widget hides the input value so that it is not visible in the browser.
179
180 .. warning::
181 Secure transmission of the password to Bokeh server application code
182 requires configuring the server for SSL (i.e. HTTPS) termination.
183
184 '''
185
186
187 class AutocompleteInput(TextInput):
188 ''' Single-line input widget with auto-completion.
189
190 '''
191
192 completions = List(String, help="""
193 A list of completion strings. This will be used to guide the
194 user upon typing the beginning of a desired value.
195 """)
196
197 min_characters = PositiveInt(default=2, help="""
198 The number of characters a user must type before completions are presented.
199 """)
200
201
202 class Select(InputWidget):
203 ''' Single-select widget.
204
205 '''
206 options = Either(List(Either(String, Tuple(Either(Int, String), String))),
207 Dict(String, List(Either(String, Tuple(Either(Int, String), String)))), help="""
208 Available selection options. Options may be provided either as a list of
209 possible string values, or as a list of tuples, each of the form
210 ``(value, label)``. In the latter case, the visible widget text for each
211 value will be corresponding given label. Option groupings can be provided
212 by supplying a dictionary object whose values are in the aforementioned
213 list format
214 """)
215
216 value = String(default="", help="""
217 Initial or selected value.
218 """)
219
220 class MultiSelect(InputWidget):
221 ''' Multi-select widget.
222
223 '''
224
225 options = List(Either(String, Tuple(String, String)), help="""
226 Available selection options. Options may be provided either as a list of
227 possible string values, or as a list of tuples, each of the form
228 ``(value, label)``. In the latter case, the visible widget text for each
229 value will be corresponding given label.
230 """)
231
232 value = List(String, help="""
233 Initial or selected values.
234 """)
235
236 size = Int(default=4, help="""
237 The number of visible options in the dropdown list. (This uses the
238 ``select`` HTML element's ``size`` attribute. Some browsers might not
239 show less than 3 options.)
240 """)
241
242
243 class DatePicker(InputWidget):
244 ''' Calendar-based date picker widget.
245
246 '''
247
248 value = Date(help="""
249 The initial or picked date.
250 """)
251
252 min_date = Date(default=None, help="""
253 Optional earliest allowable date.
254 """)
255
256 max_date = Date(default=None, help="""
257 Optional latest allowable date.
258 """)
259
260 disabled_dates = List(Either(Date, Tuple(Date, Date)), default=[], help="""
261 A list of dates of ``(start, end)`` date ranges to make unavailable for
262 selection. All other dates will be avalable.
263
264 .. note::
265 Only one of ``disabled_dates`` and ``enabled_dates`` should be specified.
266 """)
267
268 enabled_dates = List(Either(Date, Tuple(Date, Date)), default=[], help="""
269 A list of dates of ``(start, end)`` date ranges to make available for
270 selection. All other dates will be unavailable.
271
272 .. note::
273 Only one of ``disabled_dates`` and ``enabled_dates`` should be specified.
274 """)
275
276 position = Enum(CalendarPosition, default="auto", help="""
277 Where the calendar is rendered relative to the input when ``inline`` is False.
278 """)
279
280 inline = Bool(default=False, help="""
281 Whether the calendar sholud be displayed inline.
282 """)
283
284 class ColorPicker(InputWidget):
285 ''' Color picker widget
286
287 .. warning::
288 This widget as a limited support on *Internet Explorer* (it will be displayed
289 as a simple text input).
290
291 '''
292
293 color = ColorHex(default='#000000', help="""
294 The initial color of the picked color (named or hexadecimal)
295 """)
296
297 class Spinner(InputWidget):
298 ''' Spinner widget for numerical inputs
299
300 '''
301
302 value = Float(default=0, help="""
303 The initial value of the spinner
304 """)
305
306 step = Float(default=1, help="""
307 The step added or subtracted to the current value
308 """)
309
310 low = Float(help="""
311 Optional lowest allowable value.
312 """)
313
314 high = Float(help="""
315 Optional highest allowable value.
316 """)
317
318 #-----------------------------------------------------------------------------
319 # Private API
320 #-----------------------------------------------------------------------------
321
322 #-----------------------------------------------------------------------------
323 # Code
324 #-----------------------------------------------------------------------------
325
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bokeh/models/widgets/inputs.py b/bokeh/models/widgets/inputs.py
--- a/bokeh/models/widgets/inputs.py
+++ b/bokeh/models/widgets/inputs.py
@@ -47,6 +47,7 @@
'DatePicker',
'FileInput',
'InputWidget',
+ 'MultiChoice',
'MultiSelect',
'PasswordInput',
'Select',
@@ -240,6 +241,42 @@
""")
+class MultiChoice(InputWidget):
+ ''' MultiChoice widget.
+
+ '''
+
+ options = List(Either(String, Tuple(String, String)), help="""
+ Available selection options. Options may be provided either as a list of
+ possible string values, or as a list of tuples, each of the form
+ ``(value, label)``. In the latter case, the visible widget text for each
+ value will be corresponding given label.
+ """)
+
+ value = List(String, help="""
+ Initial or selected values.
+ """)
+
+ delete_button = Bool(default=True, help="""
+ Whether to add a button to remove a selected option.
+ """)
+
+ max_items = Int(default=None, help="""
+ The maximum number of items that can be selected.
+ """)
+
+ option_limit = Int(default=None, help="""
+ The number of choices that will be rendered in the dropdown.
+ """)
+
+ placeholder = String(default=None, help="""
+ A string that is displayed if not item is added.
+ """)
+
+ solid = Bool(default=True, help="""
+ Specify whether the choices should be solidly filled.""")
+
+
class DatePicker(InputWidget):
''' Calendar-based date picker widget.
|
{"golden_diff": "diff --git a/bokeh/models/widgets/inputs.py b/bokeh/models/widgets/inputs.py\n--- a/bokeh/models/widgets/inputs.py\n+++ b/bokeh/models/widgets/inputs.py\n@@ -47,6 +47,7 @@\n 'DatePicker',\n 'FileInput',\n 'InputWidget',\n+ 'MultiChoice',\n 'MultiSelect',\n 'PasswordInput',\n 'Select',\n@@ -240,6 +241,42 @@\n \"\"\")\n \n \n+class MultiChoice(InputWidget):\n+ ''' MultiChoice widget.\n+\n+ '''\n+\n+ options = List(Either(String, Tuple(String, String)), help=\"\"\"\n+ Available selection options. Options may be provided either as a list of\n+ possible string values, or as a list of tuples, each of the form\n+ ``(value, label)``. In the latter case, the visible widget text for each\n+ value will be corresponding given label.\n+ \"\"\")\n+\n+ value = List(String, help=\"\"\"\n+ Initial or selected values.\n+ \"\"\")\n+\n+ delete_button = Bool(default=True, help=\"\"\"\n+ Whether to add a button to remove a selected option.\n+ \"\"\")\n+\n+ max_items = Int(default=None, help=\"\"\"\n+ The maximum number of items that can be selected.\n+ \"\"\")\n+\n+ option_limit = Int(default=None, help=\"\"\"\n+ The number of choices that will be rendered in the dropdown.\n+ \"\"\")\n+\n+ placeholder = String(default=None, help=\"\"\"\n+ A string that is displayed if not item is added.\n+ \"\"\")\n+\n+ solid = Bool(default=True, help=\"\"\"\n+ Specify whether the choices should be solidly filled.\"\"\")\n+\n+\n class DatePicker(InputWidget):\n ''' Calendar-based date picker widget.\n", "issue": "Please replace MultiSelect widget with something more compact, effective and nicer looking\nORIGINALLY POSTED AS PANEL FEATURE REQUEST AT https://github.com/holoviz/panel/issues/874\r\n\r\n#### My Pain\r\n\r\nI know of experience that many of my dashboard have multiple multi-selections. \r\n\r\nThe MultiSelect widget of panel is not compact so it takes up a lot of space.\r\n\r\nFurthermore I find the MultiSelect not very nicely looking. I would like my dashboards to look appealing and fresh.\r\n\r\nFuthermore I think navigating and selecting in the MultiSelect widget is slow as soon as you have to start scrolling in the MultiSelect\r\n\r\n\r\n\r\n#### Solution\r\n\r\nImplement compact, efficient and nicer looking MultiSelect. \r\n\r\nIt should work as a dropdown with multiselect.\r\n\r\n#### Additional Context\r\n\r\nYou can get inspiration from Dash, Streamlit and Tableau that all have a much more compact and modern looking widget.\r\n\r\n\r\n\r\n\r\n\r\nFYI. Tableau has both a more compact Dropdown and something similar to the MultiSelect.\r\n\r\nHere it's used an that's where I have my evaluation from. You can find it in the Gallery at awesome-panel.org.\r\n\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Various kinds of input widgets and form controls.\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nimport logging # isort:skip\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Bokeh imports\nfrom ...core.enums import CalendarPosition\nfrom ...core.has_props import abstract\nfrom ...core.properties import (\n Bool,\n ColorHex,\n Date,\n Dict,\n Either,\n Enum,\n Float,\n Int,\n List,\n PositiveInt,\n String,\n Tuple,\n)\nfrom .widget import Widget\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'AutocompleteInput',\n 'ColorPicker',\n 'DatePicker',\n 'FileInput',\n 'InputWidget',\n 'MultiSelect',\n 'PasswordInput',\n 'Select',\n 'Spinner',\n 'TextInput',\n 'TextAreaInput'\n)\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\n\n@abstract\nclass InputWidget(Widget):\n ''' Abstract base class for input widgets.\n\n '''\n\n title = String(default=\"\", help=\"\"\"\n Widget's label.\n \"\"\")\n\n @classmethod\n def coerce_value(cls, val):\n prop_obj = cls.lookup('value')\n if isinstance(prop_obj, Float):\n return float(val)\n elif isinstance(prop_obj, Int):\n return int(val)\n elif isinstance(prop_obj, String):\n return str(val)\n else:\n return val\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\nclass FileInput(Widget):\n ''' Present a file-chooser dialog to users and return the contents of a\n selected file.\n\n '''\n\n value = String(default=\"\", readonly=True, help=\"\"\"\n A base64-encoded string of the contents of the selected file.\n \"\"\")\n\n mime_type = String(default=\"\", readonly=True, help=\"\"\"\n The mime type of the selected file.\n \"\"\")\n\n filename = String(default=\"\", readonly=True, help=\"\"\"\n The filename of the selected file.\n\n .. note::\n The full file path is not included since browsers will not provide\n access to that information for security reasons.\n \"\"\")\n\n accept = String(default=\"\", help=\"\"\"\n Comma-separated list of standard HTML file input filters that restrict what\n files the user can pick from. Values can be:\n\n `<file extension>`:\n Specific file extension(s) (e.g: .gif, .jpg, .png, .doc) are pickable\n\n `audio/*`:\n all sound files are pickable\n\n `video/*`:\n all video files are pickable\n\n `image/*`:\n all image files are pickable\n\n `<media type>`:\n A valid `IANA Media Type`_, with no parameters.\n\n .. _IANA Media Type: https://www.iana.org/assignments/media-types/media-types.xhtml\n \"\"\")\n\n\nclass TextInput(InputWidget):\n ''' Single-line input widget.\n\n '''\n\n value = String(default=\"\", help=\"\"\"\n Initial or entered text value.\n\n Change events are triggered whenever <enter> is pressed.\n \"\"\")\n\n value_input = String(default=\"\", help=\"\"\"\n Initial or current value.\n\n Change events are triggered whenever any update happens, i.e. on every\n keypress.\n \"\"\")\n\n placeholder = String(default=\"\", help=\"\"\"\n Placeholder for empty input field.\n \"\"\")\n\n\nclass TextAreaInput(TextInput):\n ''' Multi-line input widget.\n\n '''\n\n cols = Int(default=20, help=\"\"\"\n Specifies the width of the text area (in average character width). Default: 20\n \"\"\")\n\n rows = Int(default=2, help=\"\"\"\n Specifies the height of the text area (in lines). Default: 2\n \"\"\")\n\n max_length = Int(default=500, help=\"\"\"\n Max count of characters in field\n \"\"\")\n\n\nclass PasswordInput(TextInput):\n ''' Single-line password input widget.\n\n This widget hides the input value so that it is not visible in the browser.\n\n .. warning::\n Secure transmission of the password to Bokeh server application code\n requires configuring the server for SSL (i.e. HTTPS) termination.\n\n '''\n\n\nclass AutocompleteInput(TextInput):\n ''' Single-line input widget with auto-completion.\n\n '''\n\n completions = List(String, help=\"\"\"\n A list of completion strings. This will be used to guide the\n user upon typing the beginning of a desired value.\n \"\"\")\n\n min_characters = PositiveInt(default=2, help=\"\"\"\n The number of characters a user must type before completions are presented.\n \"\"\")\n\n\nclass Select(InputWidget):\n ''' Single-select widget.\n\n '''\n options = Either(List(Either(String, Tuple(Either(Int, String), String))),\n Dict(String, List(Either(String, Tuple(Either(Int, String), String)))), help=\"\"\"\n Available selection options. Options may be provided either as a list of\n possible string values, or as a list of tuples, each of the form\n ``(value, label)``. In the latter case, the visible widget text for each\n value will be corresponding given label. Option groupings can be provided\n by supplying a dictionary object whose values are in the aforementioned\n list format\n \"\"\")\n\n value = String(default=\"\", help=\"\"\"\n Initial or selected value.\n \"\"\")\n\nclass MultiSelect(InputWidget):\n ''' Multi-select widget.\n\n '''\n\n options = List(Either(String, Tuple(String, String)), help=\"\"\"\n Available selection options. Options may be provided either as a list of\n possible string values, or as a list of tuples, each of the form\n ``(value, label)``. In the latter case, the visible widget text for each\n value will be corresponding given label.\n \"\"\")\n\n value = List(String, help=\"\"\"\n Initial or selected values.\n \"\"\")\n\n size = Int(default=4, help=\"\"\"\n The number of visible options in the dropdown list. (This uses the\n ``select`` HTML element's ``size`` attribute. Some browsers might not\n show less than 3 options.)\n \"\"\")\n\n\nclass DatePicker(InputWidget):\n ''' Calendar-based date picker widget.\n\n '''\n\n value = Date(help=\"\"\"\n The initial or picked date.\n \"\"\")\n\n min_date = Date(default=None, help=\"\"\"\n Optional earliest allowable date.\n \"\"\")\n\n max_date = Date(default=None, help=\"\"\"\n Optional latest allowable date.\n \"\"\")\n\n disabled_dates = List(Either(Date, Tuple(Date, Date)), default=[], help=\"\"\"\n A list of dates of ``(start, end)`` date ranges to make unavailable for\n selection. All other dates will be avalable.\n\n .. note::\n Only one of ``disabled_dates`` and ``enabled_dates`` should be specified.\n \"\"\")\n\n enabled_dates = List(Either(Date, Tuple(Date, Date)), default=[], help=\"\"\"\n A list of dates of ``(start, end)`` date ranges to make available for\n selection. All other dates will be unavailable.\n\n .. note::\n Only one of ``disabled_dates`` and ``enabled_dates`` should be specified.\n \"\"\")\n\n position = Enum(CalendarPosition, default=\"auto\", help=\"\"\"\n Where the calendar is rendered relative to the input when ``inline`` is False.\n \"\"\")\n\n inline = Bool(default=False, help=\"\"\"\n Whether the calendar sholud be displayed inline.\n \"\"\")\n\nclass ColorPicker(InputWidget):\n ''' Color picker widget\n\n .. warning::\n This widget as a limited support on *Internet Explorer* (it will be displayed\n as a simple text input).\n\n '''\n\n color = ColorHex(default='#000000', help=\"\"\"\n The initial color of the picked color (named or hexadecimal)\n \"\"\")\n\nclass Spinner(InputWidget):\n ''' Spinner widget for numerical inputs\n\n '''\n\n value = Float(default=0, help=\"\"\"\n The initial value of the spinner\n \"\"\")\n\n step = Float(default=1, help=\"\"\"\n The step added or subtracted to the current value\n \"\"\")\n\n low = Float(help=\"\"\"\n Optional lowest allowable value.\n \"\"\")\n\n high = Float(help=\"\"\"\n Optional highest allowable value.\n \"\"\")\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n", "path": "bokeh/models/widgets/inputs.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Various kinds of input widgets and form controls.\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nimport logging # isort:skip\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Bokeh imports\nfrom ...core.enums import CalendarPosition\nfrom ...core.has_props import abstract\nfrom ...core.properties import (\n Bool,\n ColorHex,\n Date,\n Dict,\n Either,\n Enum,\n Float,\n Int,\n List,\n PositiveInt,\n String,\n Tuple,\n)\nfrom .widget import Widget\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'AutocompleteInput',\n 'ColorPicker',\n 'DatePicker',\n 'FileInput',\n 'InputWidget',\n 'MultiChoice',\n 'MultiSelect',\n 'PasswordInput',\n 'Select',\n 'Spinner',\n 'TextInput',\n 'TextAreaInput'\n)\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\n\n@abstract\nclass InputWidget(Widget):\n ''' Abstract base class for input widgets.\n\n '''\n\n title = String(default=\"\", help=\"\"\"\n Widget's label.\n \"\"\")\n\n @classmethod\n def coerce_value(cls, val):\n prop_obj = cls.lookup('value')\n if isinstance(prop_obj, Float):\n return float(val)\n elif isinstance(prop_obj, Int):\n return int(val)\n elif isinstance(prop_obj, String):\n return str(val)\n else:\n return val\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\nclass FileInput(Widget):\n ''' Present a file-chooser dialog to users and return the contents of a\n selected file.\n\n '''\n\n value = String(default=\"\", readonly=True, help=\"\"\"\n A base64-encoded string of the contents of the selected file.\n \"\"\")\n\n mime_type = String(default=\"\", readonly=True, help=\"\"\"\n The mime type of the selected file.\n \"\"\")\n\n filename = String(default=\"\", readonly=True, help=\"\"\"\n The filename of the selected file.\n\n .. note::\n The full file path is not included since browsers will not provide\n access to that information for security reasons.\n \"\"\")\n\n accept = String(default=\"\", help=\"\"\"\n Comma-separated list of standard HTML file input filters that restrict what\n files the user can pick from. Values can be:\n\n `<file extension>`:\n Specific file extension(s) (e.g: .gif, .jpg, .png, .doc) are pickable\n\n `audio/*`:\n all sound files are pickable\n\n `video/*`:\n all video files are pickable\n\n `image/*`:\n all image files are pickable\n\n `<media type>`:\n A valid `IANA Media Type`_, with no parameters.\n\n .. _IANA Media Type: https://www.iana.org/assignments/media-types/media-types.xhtml\n \"\"\")\n\n\nclass TextInput(InputWidget):\n ''' Single-line input widget.\n\n '''\n\n value = String(default=\"\", help=\"\"\"\n Initial or entered text value.\n\n Change events are triggered whenever <enter> is pressed.\n \"\"\")\n\n value_input = String(default=\"\", help=\"\"\"\n Initial or current value.\n\n Change events are triggered whenever any update happens, i.e. on every\n keypress.\n \"\"\")\n\n placeholder = String(default=\"\", help=\"\"\"\n Placeholder for empty input field.\n \"\"\")\n\n\nclass TextAreaInput(TextInput):\n ''' Multi-line input widget.\n\n '''\n\n cols = Int(default=20, help=\"\"\"\n Specifies the width of the text area (in average character width). Default: 20\n \"\"\")\n\n rows = Int(default=2, help=\"\"\"\n Specifies the height of the text area (in lines). Default: 2\n \"\"\")\n\n max_length = Int(default=500, help=\"\"\"\n Max count of characters in field\n \"\"\")\n\n\nclass PasswordInput(TextInput):\n ''' Single-line password input widget.\n\n This widget hides the input value so that it is not visible in the browser.\n\n .. warning::\n Secure transmission of the password to Bokeh server application code\n requires configuring the server for SSL (i.e. HTTPS) termination.\n\n '''\n\n\nclass AutocompleteInput(TextInput):\n ''' Single-line input widget with auto-completion.\n\n '''\n\n completions = List(String, help=\"\"\"\n A list of completion strings. This will be used to guide the\n user upon typing the beginning of a desired value.\n \"\"\")\n\n min_characters = PositiveInt(default=2, help=\"\"\"\n The number of characters a user must type before completions are presented.\n \"\"\")\n\n\nclass Select(InputWidget):\n ''' Single-select widget.\n\n '''\n options = Either(List(Either(String, Tuple(Either(Int, String), String))),\n Dict(String, List(Either(String, Tuple(Either(Int, String), String)))), help=\"\"\"\n Available selection options. Options may be provided either as a list of\n possible string values, or as a list of tuples, each of the form\n ``(value, label)``. In the latter case, the visible widget text for each\n value will be corresponding given label. Option groupings can be provided\n by supplying a dictionary object whose values are in the aforementioned\n list format\n \"\"\")\n\n value = String(default=\"\", help=\"\"\"\n Initial or selected value.\n \"\"\")\n\nclass MultiSelect(InputWidget):\n ''' Multi-select widget.\n\n '''\n\n options = List(Either(String, Tuple(String, String)), help=\"\"\"\n Available selection options. Options may be provided either as a list of\n possible string values, or as a list of tuples, each of the form\n ``(value, label)``. In the latter case, the visible widget text for each\n value will be corresponding given label.\n \"\"\")\n\n value = List(String, help=\"\"\"\n Initial or selected values.\n \"\"\")\n\n size = Int(default=4, help=\"\"\"\n The number of visible options in the dropdown list. (This uses the\n ``select`` HTML element's ``size`` attribute. Some browsers might not\n show less than 3 options.)\n \"\"\")\n\n\nclass MultiChoice(InputWidget):\n ''' MultiChoice widget.\n\n '''\n\n options = List(Either(String, Tuple(String, String)), help=\"\"\"\n Available selection options. Options may be provided either as a list of\n possible string values, or as a list of tuples, each of the form\n ``(value, label)``. In the latter case, the visible widget text for each\n value will be corresponding given label.\n \"\"\")\n\n value = List(String, help=\"\"\"\n Initial or selected values.\n \"\"\")\n\n delete_button = Bool(default=True, help=\"\"\"\n Whether to add a button to remove a selected option.\n \"\"\")\n\n max_items = Int(default=None, help=\"\"\"\n The maximum number of items that can be selected.\n \"\"\")\n\n option_limit = Int(default=None, help=\"\"\"\n The number of choices that will be rendered in the dropdown.\n \"\"\")\n\n placeholder = String(default=None, help=\"\"\"\n A string that is displayed if not item is added.\n \"\"\")\n\n solid = Bool(default=True, help=\"\"\"\n Specify whether the choices should be solidly filled.\"\"\")\n\n\nclass DatePicker(InputWidget):\n ''' Calendar-based date picker widget.\n\n '''\n\n value = Date(help=\"\"\"\n The initial or picked date.\n \"\"\")\n\n min_date = Date(default=None, help=\"\"\"\n Optional earliest allowable date.\n \"\"\")\n\n max_date = Date(default=None, help=\"\"\"\n Optional latest allowable date.\n \"\"\")\n\n disabled_dates = List(Either(Date, Tuple(Date, Date)), default=[], help=\"\"\"\n A list of dates of ``(start, end)`` date ranges to make unavailable for\n selection. All other dates will be avalable.\n\n .. note::\n Only one of ``disabled_dates`` and ``enabled_dates`` should be specified.\n \"\"\")\n\n enabled_dates = List(Either(Date, Tuple(Date, Date)), default=[], help=\"\"\"\n A list of dates of ``(start, end)`` date ranges to make available for\n selection. All other dates will be unavailable.\n\n .. note::\n Only one of ``disabled_dates`` and ``enabled_dates`` should be specified.\n \"\"\")\n\n position = Enum(CalendarPosition, default=\"auto\", help=\"\"\"\n Where the calendar is rendered relative to the input when ``inline`` is False.\n \"\"\")\n\n inline = Bool(default=False, help=\"\"\"\n Whether the calendar sholud be displayed inline.\n \"\"\")\n\nclass ColorPicker(InputWidget):\n ''' Color picker widget\n\n .. warning::\n This widget as a limited support on *Internet Explorer* (it will be displayed\n as a simple text input).\n\n '''\n\n color = ColorHex(default='#000000', help=\"\"\"\n The initial color of the picked color (named or hexadecimal)\n \"\"\")\n\nclass Spinner(InputWidget):\n ''' Spinner widget for numerical inputs\n\n '''\n\n value = Float(default=0, help=\"\"\"\n The initial value of the spinner\n \"\"\")\n\n step = Float(default=1, help=\"\"\"\n The step added or subtracted to the current value\n \"\"\")\n\n low = Float(help=\"\"\"\n Optional lowest allowable value.\n \"\"\")\n\n high = Float(help=\"\"\"\n Optional highest allowable value.\n \"\"\")\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n", "path": "bokeh/models/widgets/inputs.py"}]}
| 3,493 | 394 |
gh_patches_debug_1187
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-6051
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Alembic operations fail with multiple head revisions
## Description
All Alembic operations fail with Alembic error:
ERROR [alembic.util.messaging] Multiple head revisions are present for given argument 'head'; please specify a specific target revision, '<branchname>@head' to narrow to a specific head, or 'heads' for all heads
Cf. consistent recent failures of CI jobs `app-tests` and `staging-test-with-rebase` since #5974.
## Steps to Reproduce
`make test` on `develop`; open or push to a PR; etc.
## Expected Behavior
Alembic operations succeed and Alembic-based tests pass.
## Actual Behavior
All Alembic operations and tests fail with Alembic error:
ERROR [alembic.util.messaging] Multiple head revisions are present for given argument 'head'; please specify a specific target revision, '<branchname>@head' to narrow to a specific head, or 'heads' for all heads
## Comments
This is essentially an Alembic-level merge-conflict. PR forthcoming with the one-line fix.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/alembic/versions/1ddb81fb88c2_unique_index_for_instanceconfig_valid_.py`
Content:
```
1 """unique_index_for_instanceconfig_valid_until
2
3 Revision ID: 1ddb81fb88c2
4 Revises: 92fba0be98e9
5 Create Date: 2021-06-04 17:28:25.725563
6
7 """
8 from alembic import op
9 import sqlalchemy as sa
10
11
12 # revision identifiers, used by Alembic.
13 revision = '1ddb81fb88c2'
14 down_revision = '92fba0be98e9'
15 branch_labels = None
16 depends_on = None
17
18
19 def upgrade():
20 # ### commands auto generated by Alembic - please adjust! ###
21 with op.batch_alter_table('instance_config', schema=None) as batch_op:
22 batch_op.create_index('ix_one_active_instance_config', [sa.text('valid_until IS NULL')], unique=True, sqlite_where=sa.text('valid_until IS NULL'))
23
24 # ### end Alembic commands ###
25
26
27 def downgrade():
28 # ### commands auto generated by Alembic - please adjust! ###
29 with op.batch_alter_table('instance_config', schema=None) as batch_op:
30 batch_op.drop_index('ix_one_active_instance_config')
31
32 # ### end Alembic commands ###
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/securedrop/alembic/versions/1ddb81fb88c2_unique_index_for_instanceconfig_valid_.py b/securedrop/alembic/versions/1ddb81fb88c2_unique_index_for_instanceconfig_valid_.py
--- a/securedrop/alembic/versions/1ddb81fb88c2_unique_index_for_instanceconfig_valid_.py
+++ b/securedrop/alembic/versions/1ddb81fb88c2_unique_index_for_instanceconfig_valid_.py
@@ -11,7 +11,7 @@
# revision identifiers, used by Alembic.
revision = '1ddb81fb88c2'
-down_revision = '92fba0be98e9'
+down_revision = 'b060f38c0c31'
branch_labels = None
depends_on = None
|
{"golden_diff": "diff --git a/securedrop/alembic/versions/1ddb81fb88c2_unique_index_for_instanceconfig_valid_.py b/securedrop/alembic/versions/1ddb81fb88c2_unique_index_for_instanceconfig_valid_.py\n--- a/securedrop/alembic/versions/1ddb81fb88c2_unique_index_for_instanceconfig_valid_.py\n+++ b/securedrop/alembic/versions/1ddb81fb88c2_unique_index_for_instanceconfig_valid_.py\n@@ -11,7 +11,7 @@\n \n # revision identifiers, used by Alembic.\n revision = '1ddb81fb88c2'\n-down_revision = '92fba0be98e9'\n+down_revision = 'b060f38c0c31'\n branch_labels = None\n depends_on = None\n", "issue": "Alembic operations fail with multiple head revisions\n## Description\r\n\r\nAll Alembic operations fail with Alembic error:\r\n\r\n ERROR [alembic.util.messaging] Multiple head revisions are present for given argument 'head'; please specify a specific target revision, '<branchname>@head' to narrow to a specific head, or 'heads' for all heads\r\n\r\nCf. consistent recent failures of CI jobs `app-tests` and `staging-test-with-rebase` since #5974.\r\n\r\n## Steps to Reproduce\r\n\r\n`make test` on `develop`; open or push to a PR; etc.\r\n\r\n## Expected Behavior\r\n\r\nAlembic operations succeed and Alembic-based tests pass.\r\n\r\n## Actual Behavior\r\n\r\nAll Alembic operations and tests fail with Alembic error:\r\n\r\n ERROR [alembic.util.messaging] Multiple head revisions are present for given argument 'head'; please specify a specific target revision, '<branchname>@head' to narrow to a specific head, or 'heads' for all heads\r\n\r\n## Comments\r\n\r\nThis is essentially an Alembic-level merge-conflict. PR forthcoming with the one-line fix.\n", "before_files": [{"content": "\"\"\"unique_index_for_instanceconfig_valid_until\n\nRevision ID: 1ddb81fb88c2\nRevises: 92fba0be98e9\nCreate Date: 2021-06-04 17:28:25.725563\n\n\"\"\"\nfrom alembic import op\nimport sqlalchemy as sa\n\n\n# revision identifiers, used by Alembic.\nrevision = '1ddb81fb88c2'\ndown_revision = '92fba0be98e9'\nbranch_labels = None\ndepends_on = None\n\n\ndef upgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n with op.batch_alter_table('instance_config', schema=None) as batch_op:\n batch_op.create_index('ix_one_active_instance_config', [sa.text('valid_until IS NULL')], unique=True, sqlite_where=sa.text('valid_until IS NULL'))\n\n # ### end Alembic commands ###\n\n\ndef downgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n with op.batch_alter_table('instance_config', schema=None) as batch_op:\n batch_op.drop_index('ix_one_active_instance_config')\n\n # ### end Alembic commands ###\n", "path": "securedrop/alembic/versions/1ddb81fb88c2_unique_index_for_instanceconfig_valid_.py"}], "after_files": [{"content": "\"\"\"unique_index_for_instanceconfig_valid_until\n\nRevision ID: 1ddb81fb88c2\nRevises: 92fba0be98e9\nCreate Date: 2021-06-04 17:28:25.725563\n\n\"\"\"\nfrom alembic import op\nimport sqlalchemy as sa\n\n\n# revision identifiers, used by Alembic.\nrevision = '1ddb81fb88c2'\ndown_revision = 'b060f38c0c31'\nbranch_labels = None\ndepends_on = None\n\n\ndef upgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n with op.batch_alter_table('instance_config', schema=None) as batch_op:\n batch_op.create_index('ix_one_active_instance_config', [sa.text('valid_until IS NULL')], unique=True, sqlite_where=sa.text('valid_until IS NULL'))\n\n # ### end Alembic commands ###\n\n\ndef downgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n with op.batch_alter_table('instance_config', schema=None) as batch_op:\n batch_op.drop_index('ix_one_active_instance_config')\n\n # ### end Alembic commands ###\n", "path": "securedrop/alembic/versions/1ddb81fb88c2_unique_index_for_instanceconfig_valid_.py"}]}
| 848 | 199 |
gh_patches_debug_25395
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-1444
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unexpected E1029 error
*cfn-lint version: (0.29.1)*
*Description of issue.*
After this version was released, I started getting an error when linting a template. This error specific to `BuildSpec` attributes for a `AWS::CodeBuild::Project` project.
E1029 Found an embedded parameter outside of an "Fn::Sub" at
Resources/MyCodeBuild/Properties/Source/BuildSpec
cloudformation.json:151:11
I mocked up a JSON template that showcases the problem and [attached](https://github.com/aws-cloudformation/cfn-python-lint/files/4383494/cloudformation.txt) it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/functions/SubNeeded.py`
Content:
```
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import re
6 import six
7 from cfnlint.rules import CloudFormationLintRule
8 from cfnlint.rules import RuleMatch
9
10
11 class SubNeeded(CloudFormationLintRule):
12 """Check if a substitution string exists without a substitution function"""
13 id = 'E1029'
14 shortdesc = 'Sub is required if a variable is used in a string'
15 description = 'If a substitution variable exists in a string but isn\'t wrapped with the Fn::Sub function the deployment will fail.'
16 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'
17 tags = ['functions', 'sub']
18
19 # Free-form text properties to exclude from this rule
20 # content is part of AWS::CloudFormation::Init
21 excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init',
22 'CloudWatchAlarmDefinition', 'TopicRulePayload']
23 api_excludes = ['Uri', 'Body']
24
25 # IAM Policy has special variables that don't require !Sub, Check for these
26 # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html
27 # https://docs.aws.amazon.com/iot/latest/developerguide/basic-policy-variables.html
28 # https://docs.aws.amazon.com/iot/latest/developerguide/thing-policy-variables.html
29 # https://docs.aws.amazon.com/transfer/latest/userguide/users.html#users-policies-scope-down
30 # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html
31 resource_excludes = ['${aws:CurrentTime}', '${aws:EpochTime}',
32 '${aws:TokenIssueTime}', '${aws:principaltype}',
33 '${aws:SecureTransport}', '${aws:SourceIp}',
34 '${aws:UserAgent}', '${aws:userid}',
35 '${aws:username}', '${ec2:SourceInstanceARN}',
36 '${iot:Connection.Thing.ThingName}',
37 '${iot:Connection.Thing.ThingTypeName}',
38 '${iot:Connection.Thing.IsAttached}',
39 '${iot:ClientId}', '${transfer:HomeBucket}',
40 '${transfer:HomeDirectory}', '${transfer:HomeFolder}',
41 '${transfer:UserName}', '${redshift:DbUser}',
42 '${cognito-identity.amazonaws.com:aud}',
43 '${cognito-identity.amazonaws.com:sub}',
44 '${cognito-identity.amazonaws.com:amr}']
45
46 # https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-access-control-identity-based.html
47 condition_excludes = [
48 '${redshift:DbUser}',
49 ]
50
51 def __init__(self):
52 """Init"""
53 super(SubNeeded, self).__init__()
54 self.config_definition = {
55 'custom_excludes': {
56 'default': '',
57 'type': 'string'
58 }
59 }
60 self.configure()
61 self.subParameterRegex = re.compile(r'(\$\{[A-Za-z0-9_:\.]+\})')
62
63 def _match_values(self, cfnelem, path):
64 """Recursively search for values matching the searchRegex"""
65 values = []
66 if isinstance(cfnelem, dict):
67 for key in cfnelem:
68 pathprop = path[:]
69 pathprop.append(key)
70 values.extend(self._match_values(cfnelem[key], pathprop))
71 elif isinstance(cfnelem, list):
72 for index, item in enumerate(cfnelem):
73 pathprop = path[:]
74 pathprop.append(index)
75 values.extend(self._match_values(item, pathprop))
76 else:
77 # Leaf node
78 if isinstance(cfnelem, six.string_types): # and re.match(searchRegex, cfnelem):
79 for variable in re.findall(self.subParameterRegex, cfnelem):
80 values.append(path + [variable])
81
82 return values
83
84 def match_values(self, cfn):
85 """
86 Search for values in all parts of the templates that match the searchRegex
87 """
88 results = []
89 results.extend(self._match_values(cfn.template, []))
90 # Globals are removed during a transform. They need to be checked manually
91 results.extend(self._match_values(cfn.template.get('Globals', {}), []))
92 return results
93
94 def _api_exceptions(self, value):
95 """ Key value exceptions """
96 parameter_search = re.compile(r'^\$\{stageVariables\..*\}$')
97 return re.match(parameter_search, value)
98
99 def _variable_custom_excluded(self, value):
100 """ User-defined exceptions for variables, anywhere in the file """
101 custom_excludes = self.config['custom_excludes']
102 if custom_excludes:
103 custom_search = re.compile(custom_excludes)
104 return re.match(custom_search, value)
105 return False
106
107 def match(self, cfn):
108 """Basic Rule Matching"""
109
110 matches = []
111
112 # Get a list of paths to every leaf node string containing at least one ${parameter}
113 parameter_string_paths = self.match_values(cfn)
114 # We want to search all of the paths to check if each one contains an 'Fn::Sub'
115 for parameter_string_path in parameter_string_paths:
116 if parameter_string_path[0] in ['Parameters']:
117 continue
118 # Exclude the special IAM variables
119 variable = parameter_string_path[-1]
120
121 if 'Resource' in parameter_string_path:
122 if variable in self.resource_excludes:
123 continue
124 if 'NotResource' in parameter_string_path:
125 if variable in self.resource_excludes:
126 continue
127 if 'Condition' in parameter_string_path:
128 if variable in self.condition_excludes:
129 continue
130
131 # Exclude variables that match custom exclude filters, if configured
132 # (for third-party tools that pre-process templates before uploading them to AWS)
133 if self._variable_custom_excluded(variable):
134 continue
135
136 # Exclude literals (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html)
137 if variable.startswith('${!'):
138 continue
139
140 found_sub = False
141 # Does the path contain an 'Fn::Sub'?
142 for step in parameter_string_path:
143 if step in self.api_excludes:
144 if self._api_exceptions(parameter_string_path[-1]):
145 found_sub = True
146 elif step == 'Fn::Sub' or step in self.excludes:
147 found_sub = True
148
149 # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly
150 if not found_sub:
151 # Remove the last item (the variable) to prevent multiple errors on 1 line errors
152 path = parameter_string_path[:-1]
153 message = 'Found an embedded parameter outside of an "Fn::Sub" at {}'.format(
154 '/'.join(map(str, path)))
155 matches.append(RuleMatch(path, message))
156
157 return matches
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/cfnlint/rules/functions/SubNeeded.py b/src/cfnlint/rules/functions/SubNeeded.py
--- a/src/cfnlint/rules/functions/SubNeeded.py
+++ b/src/cfnlint/rules/functions/SubNeeded.py
@@ -19,7 +19,7 @@
# Free-form text properties to exclude from this rule
# content is part of AWS::CloudFormation::Init
excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init',
- 'CloudWatchAlarmDefinition', 'TopicRulePayload']
+ 'CloudWatchAlarmDefinition', 'TopicRulePayload', 'BuildSpec']
api_excludes = ['Uri', 'Body']
# IAM Policy has special variables that don't require !Sub, Check for these
@@ -150,8 +150,8 @@
if not found_sub:
# Remove the last item (the variable) to prevent multiple errors on 1 line errors
path = parameter_string_path[:-1]
- message = 'Found an embedded parameter outside of an "Fn::Sub" at {}'.format(
- '/'.join(map(str, path)))
+ message = 'Found an embedded parameter "{}" outside of an "Fn::Sub" at {}'.format(
+ variable, '/'.join(map(str, path)))
matches.append(RuleMatch(path, message))
return matches
|
{"golden_diff": "diff --git a/src/cfnlint/rules/functions/SubNeeded.py b/src/cfnlint/rules/functions/SubNeeded.py\n--- a/src/cfnlint/rules/functions/SubNeeded.py\n+++ b/src/cfnlint/rules/functions/SubNeeded.py\n@@ -19,7 +19,7 @@\n # Free-form text properties to exclude from this rule\n # content is part of AWS::CloudFormation::Init\n excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init',\n- 'CloudWatchAlarmDefinition', 'TopicRulePayload']\n+ 'CloudWatchAlarmDefinition', 'TopicRulePayload', 'BuildSpec']\n api_excludes = ['Uri', 'Body']\n \n # IAM Policy has special variables that don't require !Sub, Check for these\n@@ -150,8 +150,8 @@\n if not found_sub:\n # Remove the last item (the variable) to prevent multiple errors on 1 line errors\n path = parameter_string_path[:-1]\n- message = 'Found an embedded parameter outside of an \"Fn::Sub\" at {}'.format(\n- '/'.join(map(str, path)))\n+ message = 'Found an embedded parameter \"{}\" outside of an \"Fn::Sub\" at {}'.format(\n+ variable, '/'.join(map(str, path)))\n matches.append(RuleMatch(path, message))\n \n return matches\n", "issue": "Unexpected E1029 error\n*cfn-lint version: (0.29.1)*\r\n\r\n*Description of issue.*\r\n\r\nAfter this version was released, I started getting an error when linting a template. This error specific to `BuildSpec` attributes for a `AWS::CodeBuild::Project` project.\r\n\r\n E1029 Found an embedded parameter outside of an \"Fn::Sub\" at \r\n Resources/MyCodeBuild/Properties/Source/BuildSpec\r\n cloudformation.json:151:11\r\n\r\nI mocked up a JSON template that showcases the problem and [attached](https://github.com/aws-cloudformation/cfn-python-lint/files/4383494/cloudformation.txt) it.\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass SubNeeded(CloudFormationLintRule):\n \"\"\"Check if a substitution string exists without a substitution function\"\"\"\n id = 'E1029'\n shortdesc = 'Sub is required if a variable is used in a string'\n description = 'If a substitution variable exists in a string but isn\\'t wrapped with the Fn::Sub function the deployment will fail.'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'\n tags = ['functions', 'sub']\n\n # Free-form text properties to exclude from this rule\n # content is part of AWS::CloudFormation::Init\n excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init',\n 'CloudWatchAlarmDefinition', 'TopicRulePayload']\n api_excludes = ['Uri', 'Body']\n\n # IAM Policy has special variables that don't require !Sub, Check for these\n # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/basic-policy-variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/thing-policy-variables.html\n # https://docs.aws.amazon.com/transfer/latest/userguide/users.html#users-policies-scope-down\n # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html\n resource_excludes = ['${aws:CurrentTime}', '${aws:EpochTime}',\n '${aws:TokenIssueTime}', '${aws:principaltype}',\n '${aws:SecureTransport}', '${aws:SourceIp}',\n '${aws:UserAgent}', '${aws:userid}',\n '${aws:username}', '${ec2:SourceInstanceARN}',\n '${iot:Connection.Thing.ThingName}',\n '${iot:Connection.Thing.ThingTypeName}',\n '${iot:Connection.Thing.IsAttached}',\n '${iot:ClientId}', '${transfer:HomeBucket}',\n '${transfer:HomeDirectory}', '${transfer:HomeFolder}',\n '${transfer:UserName}', '${redshift:DbUser}',\n '${cognito-identity.amazonaws.com:aud}',\n '${cognito-identity.amazonaws.com:sub}',\n '${cognito-identity.amazonaws.com:amr}']\n\n # https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-access-control-identity-based.html\n condition_excludes = [\n '${redshift:DbUser}',\n ]\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super(SubNeeded, self).__init__()\n self.config_definition = {\n 'custom_excludes': {\n 'default': '',\n 'type': 'string'\n }\n }\n self.configure()\n self.subParameterRegex = re.compile(r'(\\$\\{[A-Za-z0-9_:\\.]+\\})')\n\n def _match_values(self, cfnelem, path):\n \"\"\"Recursively search for values matching the searchRegex\"\"\"\n values = []\n if isinstance(cfnelem, dict):\n for key in cfnelem:\n pathprop = path[:]\n pathprop.append(key)\n values.extend(self._match_values(cfnelem[key], pathprop))\n elif isinstance(cfnelem, list):\n for index, item in enumerate(cfnelem):\n pathprop = path[:]\n pathprop.append(index)\n values.extend(self._match_values(item, pathprop))\n else:\n # Leaf node\n if isinstance(cfnelem, six.string_types): # and re.match(searchRegex, cfnelem):\n for variable in re.findall(self.subParameterRegex, cfnelem):\n values.append(path + [variable])\n\n return values\n\n def match_values(self, cfn):\n \"\"\"\n Search for values in all parts of the templates that match the searchRegex\n \"\"\"\n results = []\n results.extend(self._match_values(cfn.template, []))\n # Globals are removed during a transform. They need to be checked manually\n results.extend(self._match_values(cfn.template.get('Globals', {}), []))\n return results\n\n def _api_exceptions(self, value):\n \"\"\" Key value exceptions \"\"\"\n parameter_search = re.compile(r'^\\$\\{stageVariables\\..*\\}$')\n return re.match(parameter_search, value)\n\n def _variable_custom_excluded(self, value):\n \"\"\" User-defined exceptions for variables, anywhere in the file \"\"\"\n custom_excludes = self.config['custom_excludes']\n if custom_excludes:\n custom_search = re.compile(custom_excludes)\n return re.match(custom_search, value)\n return False\n\n def match(self, cfn):\n \"\"\"Basic Rule Matching\"\"\"\n\n matches = []\n\n # Get a list of paths to every leaf node string containing at least one ${parameter}\n parameter_string_paths = self.match_values(cfn)\n # We want to search all of the paths to check if each one contains an 'Fn::Sub'\n for parameter_string_path in parameter_string_paths:\n if parameter_string_path[0] in ['Parameters']:\n continue\n # Exclude the special IAM variables\n variable = parameter_string_path[-1]\n\n if 'Resource' in parameter_string_path:\n if variable in self.resource_excludes:\n continue\n if 'NotResource' in parameter_string_path:\n if variable in self.resource_excludes:\n continue\n if 'Condition' in parameter_string_path:\n if variable in self.condition_excludes:\n continue\n\n # Exclude variables that match custom exclude filters, if configured\n # (for third-party tools that pre-process templates before uploading them to AWS)\n if self._variable_custom_excluded(variable):\n continue\n\n # Exclude literals (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html)\n if variable.startswith('${!'):\n continue\n\n found_sub = False\n # Does the path contain an 'Fn::Sub'?\n for step in parameter_string_path:\n if step in self.api_excludes:\n if self._api_exceptions(parameter_string_path[-1]):\n found_sub = True\n elif step == 'Fn::Sub' or step in self.excludes:\n found_sub = True\n\n # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly\n if not found_sub:\n # Remove the last item (the variable) to prevent multiple errors on 1 line errors\n path = parameter_string_path[:-1]\n message = 'Found an embedded parameter outside of an \"Fn::Sub\" at {}'.format(\n '/'.join(map(str, path)))\n matches.append(RuleMatch(path, message))\n\n return matches\n", "path": "src/cfnlint/rules/functions/SubNeeded.py"}], "after_files": [{"content": "\"\"\"\nCopyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass SubNeeded(CloudFormationLintRule):\n \"\"\"Check if a substitution string exists without a substitution function\"\"\"\n id = 'E1029'\n shortdesc = 'Sub is required if a variable is used in a string'\n description = 'If a substitution variable exists in a string but isn\\'t wrapped with the Fn::Sub function the deployment will fail.'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'\n tags = ['functions', 'sub']\n\n # Free-form text properties to exclude from this rule\n # content is part of AWS::CloudFormation::Init\n excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init',\n 'CloudWatchAlarmDefinition', 'TopicRulePayload', 'BuildSpec']\n api_excludes = ['Uri', 'Body']\n\n # IAM Policy has special variables that don't require !Sub, Check for these\n # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/basic-policy-variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/thing-policy-variables.html\n # https://docs.aws.amazon.com/transfer/latest/userguide/users.html#users-policies-scope-down\n # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html\n resource_excludes = ['${aws:CurrentTime}', '${aws:EpochTime}',\n '${aws:TokenIssueTime}', '${aws:principaltype}',\n '${aws:SecureTransport}', '${aws:SourceIp}',\n '${aws:UserAgent}', '${aws:userid}',\n '${aws:username}', '${ec2:SourceInstanceARN}',\n '${iot:Connection.Thing.ThingName}',\n '${iot:Connection.Thing.ThingTypeName}',\n '${iot:Connection.Thing.IsAttached}',\n '${iot:ClientId}', '${transfer:HomeBucket}',\n '${transfer:HomeDirectory}', '${transfer:HomeFolder}',\n '${transfer:UserName}', '${redshift:DbUser}',\n '${cognito-identity.amazonaws.com:aud}',\n '${cognito-identity.amazonaws.com:sub}',\n '${cognito-identity.amazonaws.com:amr}']\n\n # https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-access-control-identity-based.html\n condition_excludes = [\n '${redshift:DbUser}',\n ]\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super(SubNeeded, self).__init__()\n self.config_definition = {\n 'custom_excludes': {\n 'default': '',\n 'type': 'string'\n }\n }\n self.configure()\n self.subParameterRegex = re.compile(r'(\\$\\{[A-Za-z0-9_:\\.]+\\})')\n\n def _match_values(self, cfnelem, path):\n \"\"\"Recursively search for values matching the searchRegex\"\"\"\n values = []\n if isinstance(cfnelem, dict):\n for key in cfnelem:\n pathprop = path[:]\n pathprop.append(key)\n values.extend(self._match_values(cfnelem[key], pathprop))\n elif isinstance(cfnelem, list):\n for index, item in enumerate(cfnelem):\n pathprop = path[:]\n pathprop.append(index)\n values.extend(self._match_values(item, pathprop))\n else:\n # Leaf node\n if isinstance(cfnelem, six.string_types): # and re.match(searchRegex, cfnelem):\n for variable in re.findall(self.subParameterRegex, cfnelem):\n values.append(path + [variable])\n\n return values\n\n def match_values(self, cfn):\n \"\"\"\n Search for values in all parts of the templates that match the searchRegex\n \"\"\"\n results = []\n results.extend(self._match_values(cfn.template, []))\n # Globals are removed during a transform. They need to be checked manually\n results.extend(self._match_values(cfn.template.get('Globals', {}), []))\n return results\n\n def _api_exceptions(self, value):\n \"\"\" Key value exceptions \"\"\"\n parameter_search = re.compile(r'^\\$\\{stageVariables\\..*\\}$')\n return re.match(parameter_search, value)\n\n def _variable_custom_excluded(self, value):\n \"\"\" User-defined exceptions for variables, anywhere in the file \"\"\"\n custom_excludes = self.config['custom_excludes']\n if custom_excludes:\n custom_search = re.compile(custom_excludes)\n return re.match(custom_search, value)\n return False\n\n def match(self, cfn):\n \"\"\"Basic Rule Matching\"\"\"\n\n matches = []\n\n # Get a list of paths to every leaf node string containing at least one ${parameter}\n parameter_string_paths = self.match_values(cfn)\n # We want to search all of the paths to check if each one contains an 'Fn::Sub'\n for parameter_string_path in parameter_string_paths:\n if parameter_string_path[0] in ['Parameters']:\n continue\n # Exclude the special IAM variables\n variable = parameter_string_path[-1]\n\n if 'Resource' in parameter_string_path:\n if variable in self.resource_excludes:\n continue\n if 'NotResource' in parameter_string_path:\n if variable in self.resource_excludes:\n continue\n if 'Condition' in parameter_string_path:\n if variable in self.condition_excludes:\n continue\n\n # Exclude variables that match custom exclude filters, if configured\n # (for third-party tools that pre-process templates before uploading them to AWS)\n if self._variable_custom_excluded(variable):\n continue\n\n # Exclude literals (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html)\n if variable.startswith('${!'):\n continue\n\n found_sub = False\n # Does the path contain an 'Fn::Sub'?\n for step in parameter_string_path:\n if step in self.api_excludes:\n if self._api_exceptions(parameter_string_path[-1]):\n found_sub = True\n elif step == 'Fn::Sub' or step in self.excludes:\n found_sub = True\n\n # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly\n if not found_sub:\n # Remove the last item (the variable) to prevent multiple errors on 1 line errors\n path = parameter_string_path[:-1]\n message = 'Found an embedded parameter \"{}\" outside of an \"Fn::Sub\" at {}'.format(\n variable, '/'.join(map(str, path)))\n matches.append(RuleMatch(path, message))\n\n return matches\n", "path": "src/cfnlint/rules/functions/SubNeeded.py"}]}
| 2,267 | 296 |
gh_patches_debug_23068
|
rasdani/github-patches
|
git_diff
|
getredash__redash-605
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Row count limitation when creating chart visualization
Graphing query with series data works fine when there are 1150 rows returned (total) but not when I go back further in time and get 1543 rows. The chart shows just one of the two data points used as series. The error in the console shows: "Highcharts error #12: www.highcharts.com/errors/12", and the link refers to the turboThreshold. I did not see any references to this when searching through the code.
Row count limitation when creating chart visualization
Graphing query with series data works fine when there are 1150 rows returned (total) but not when I go back further in time and get 1543 rows. The chart shows just one of the two data points used as series. The error in the console shows: "Highcharts error #12: www.highcharts.com/errors/12", and the link refers to the turboThreshold. I did not see any references to this when searching through the code.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/handlers/static.py`
Content:
```
1 import hashlib
2 import json
3
4 from flask import render_template, send_from_directory, current_app
5 from flask_login import current_user, login_required
6
7 from redash import settings
8 from redash.wsgi import app
9
10
11 @app.route('/admin/<anything>/<whatever>')
12 @app.route('/admin/<anything>')
13 @app.route('/dashboard/<anything>')
14 @app.route('/alerts')
15 @app.route('/alerts/<pk>')
16 @app.route('/queries')
17 @app.route('/data_sources')
18 @app.route('/data_sources/<pk>')
19 @app.route('/users')
20 @app.route('/users/<pk>')
21 @app.route('/queries/<query_id>')
22 @app.route('/queries/<query_id>/<anything>')
23 @app.route('/personal')
24 @app.route('/')
25 @login_required
26 def index(**kwargs):
27 email_md5 = hashlib.md5(current_user.email.lower()).hexdigest()
28 gravatar_url = "https://www.gravatar.com/avatar/%s?s=40" % email_md5
29
30 user = {
31 'gravatar_url': gravatar_url,
32 'id': current_user.id,
33 'name': current_user.name,
34 'email': current_user.email,
35 'groups': current_user.groups,
36 'permissions': current_user.permissions
37 }
38
39 features = {
40 'clientSideMetrics': settings.CLIENT_SIDE_METRICS,
41 'allowScriptsInUserInput': settings.ALLOW_SCRIPTS_IN_USER_INPUT
42 }
43
44 return render_template("index.html", user=json.dumps(user), name=settings.NAME,
45 features=json.dumps(features),
46 analytics=settings.ANALYTICS)
47
48
49 @app.route('/<path:filename>')
50 def send_static(filename):
51 if current_app.debug:
52 cache_timeout = 0
53 else:
54 cache_timeout = None
55
56 return send_from_directory(settings.STATIC_ASSETS_PATH, filename, cache_timeout=cache_timeout)
57
```
Path: `redash/settings.py`
Content:
```
1 import json
2 import os
3 import urlparse
4 from funcy import distinct
5
6
7 def parse_db_url(url):
8 url_parts = urlparse.urlparse(url)
9 connection = {'threadlocals': True}
10
11 if url_parts.hostname and not url_parts.path:
12 connection['name'] = url_parts.hostname
13 else:
14 connection['name'] = url_parts.path[1:]
15 connection['host'] = url_parts.hostname
16 connection['port'] = url_parts.port
17 connection['user'] = url_parts.username
18 connection['password'] = url_parts.password
19
20 return connection
21
22
23 def fix_assets_path(path):
24 fullpath = os.path.join(os.path.dirname(__file__), path)
25 return fullpath
26
27
28 def array_from_string(str):
29 array = str.split(',')
30 if "" in array:
31 array.remove("")
32
33 return array
34
35
36 def set_from_string(str):
37 return set(array_from_string(str))
38
39
40 def parse_boolean(str):
41 return json.loads(str.lower())
42
43
44 def all_settings():
45 from types import ModuleType
46
47 settings = {}
48 for name, item in globals().iteritems():
49 if not callable(item) and not name.startswith("__") and not isinstance(item, ModuleType):
50 settings[name] = item
51
52 return settings
53
54
55 NAME = os.environ.get('REDASH_NAME', 're:dash')
56
57 REDIS_URL = os.environ.get('REDASH_REDIS_URL', "redis://localhost:6379/0")
58
59 STATSD_HOST = os.environ.get('REDASH_STATSD_HOST', "127.0.0.1")
60 STATSD_PORT = int(os.environ.get('REDASH_STATSD_PORT', "8125"))
61 STATSD_PREFIX = os.environ.get('REDASH_STATSD_PREFIX', "redash")
62
63 # Connection settings for re:dash's own database (where we store the queries, results, etc)
64 DATABASE_CONFIG = parse_db_url(os.environ.get("REDASH_DATABASE_URL", "postgresql://postgres"))
65
66 # Celery related settings
67 CELERY_BROKER = os.environ.get("REDASH_CELERY_BROKER", REDIS_URL)
68 CELERY_BACKEND = os.environ.get("REDASH_CELERY_BACKEND", CELERY_BROKER)
69
70 # The following enables periodic job (every 5 minutes) of removing unused query results.
71 QUERY_RESULTS_CLEANUP_ENABLED = parse_boolean(os.environ.get("REDASH_QUERY_RESULTS_CLEANUP_ENABLED", "true"))
72
73 AUTH_TYPE = os.environ.get("REDASH_AUTH_TYPE", "api_key")
74 PASSWORD_LOGIN_ENABLED = parse_boolean(os.environ.get("REDASH_PASSWORD_LOGIN_ENABLED", "true"))
75
76 # Google Apps domain to allow access from; any user with email in this Google Apps will be allowed
77 # access
78 GOOGLE_APPS_DOMAIN = set_from_string(os.environ.get("REDASH_GOOGLE_APPS_DOMAIN", ""))
79
80 GOOGLE_CLIENT_ID = os.environ.get("REDASH_GOOGLE_CLIENT_ID", "")
81 GOOGLE_CLIENT_SECRET = os.environ.get("REDASH_GOOGLE_CLIENT_SECRET", "")
82 GOOGLE_OAUTH_ENABLED = GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET
83
84 SAML_METADATA_URL = os.environ.get("REDASH_SAML_METADATA_URL", "")
85 SAML_LOGIN_ENABLED = SAML_METADATA_URL != ""
86 SAML_CALLBACK_SERVER_NAME = os.environ.get("REDASH_SAML_CALLBACK_SERVER_NAME", "")
87
88 STATIC_ASSETS_PATH = fix_assets_path(os.environ.get("REDASH_STATIC_ASSETS_PATH", "../rd_ui/app/"))
89 JOB_EXPIRY_TIME = int(os.environ.get("REDASH_JOB_EXPIRY_TIME", 3600 * 6))
90 COOKIE_SECRET = os.environ.get("REDASH_COOKIE_SECRET", "c292a0a3aa32397cdb050e233733900f")
91 LOG_LEVEL = os.environ.get("REDASH_LOG_LEVEL", "INFO")
92 ANALYTICS = os.environ.get("REDASH_ANALYTICS", "")
93
94 # Mail settings:
95 MAIL_SERVER = os.environ.get('REDASH_MAIL_SERVER', 'localhost')
96 MAIL_PORT = int(os.environ.get('REDASH_MAIL_PORT', 25))
97 MAIL_USE_TLS = parse_boolean(os.environ.get('REDASH_MAIL_USE_TLS', 'false'))
98 MAIL_USE_SSL = parse_boolean(os.environ.get('REDASH_MAIL_USE_SSL', 'false'))
99 MAIL_USERNAME = os.environ.get('REDASH_MAIL_USERNAME', None)
100 MAIL_PASSWORD = os.environ.get('REDASH_MAIL_PASSWORD', None)
101 MAIL_DEFAULT_SENDER = os.environ.get('REDASH_MAIL_DEFAULT_SENDER', None)
102 MAIL_MAX_EMAILS = os.environ.get('REDASH_MAIL_MAX_EMAILS', None)
103 MAIL_ASCII_ATTACHMENTS = parse_boolean(os.environ.get('REDASH_MAIL_ASCII_ATTACHMENTS', 'false'))
104
105 HOST = os.environ.get('REDASH_HOST', '')
106
107 # CORS settings for the Query Result API (and possbily future external APIs).
108 # In most cases all you need to do is set REDASH_CORS_ACCESS_CONTROL_ALLOW_ORIGIN
109 # to the calling domain (or domains in a comma separated list).
110 ACCESS_CONTROL_ALLOW_ORIGIN = set_from_string(os.environ.get("REDASH_CORS_ACCESS_CONTROL_ALLOW_ORIGIN", ""))
111 ACCESS_CONTROL_ALLOW_CREDENTIALS = parse_boolean(os.environ.get("REDASH_CORS_ACCESS_CONTROL_ALLOW_CREDENTIALS", "false"))
112 ACCESS_CONTROL_REQUEST_METHOD = os.environ.get("REDASH_CORS_ACCESS_CONTROL_REQUEST_METHOD", "GET, POST, PUT")
113 ACCESS_CONTROL_ALLOW_HEADERS = os.environ.get("REDASH_CORS_ACCESS_CONTROL_ALLOW_HEADERS", "Content-Type")
114
115 # Query Runners
116 default_query_runners = [
117 'redash.query_runner.big_query',
118 'redash.query_runner.google_spreadsheets',
119 'redash.query_runner.graphite',
120 'redash.query_runner.mongodb',
121 'redash.query_runner.mysql',
122 'redash.query_runner.pg',
123 'redash.query_runner.url',
124 'redash.query_runner.influx_db',
125 'redash.query_runner.elasticsearch',
126 'redash.query_runner.presto',
127 'redash.query_runner.hive_ds',
128 'redash.query_runner.impala_ds',
129 'redash.query_runner.vertica',
130 'redash.query_runner.treasuredata'
131 ]
132
133 enabled_query_runners = array_from_string(os.environ.get("REDASH_ENABLED_QUERY_RUNNERS", ",".join(default_query_runners)))
134 additional_query_runners = array_from_string(os.environ.get("REDASH_ADDITIONAL_QUERY_RUNNERS", ""))
135
136 QUERY_RUNNERS = distinct(enabled_query_runners + additional_query_runners)
137
138 # Support for Sentry (http://getsentry.com/). Just set your Sentry DSN to enable it:
139 SENTRY_DSN = os.environ.get("REDASH_SENTRY_DSN", "")
140
141 # Client side toggles:
142 ALLOW_SCRIPTS_IN_USER_INPUT = parse_boolean(os.environ.get("REDASH_ALLOW_SCRIPTS_IN_USER_INPUT", "false"))
143 CLIENT_SIDE_METRICS = parse_boolean(os.environ.get("REDASH_CLIENT_SIDE_METRICS", "false"))
144
145 # Features:
146 FEATURE_ALLOW_ALL_TO_EDIT_QUERIES = parse_boolean(os.environ.get("REDASH_FEATURE_ALLOW_ALL_TO_EDIT", "true"))
147 FEATURE_TABLES_PERMISSIONS = parse_boolean(os.environ.get("REDASH_FEATURE_TABLES_PERMISSIONS", "false"))
148
149 # BigQuery
150 BIGQUERY_HTTP_TIMEOUT = int(os.environ.get("REDASH_BIGQUERY_HTTP_TIMEOUT", "600"))
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/redash/handlers/static.py b/redash/handlers/static.py
--- a/redash/handlers/static.py
+++ b/redash/handlers/static.py
@@ -38,7 +38,8 @@
features = {
'clientSideMetrics': settings.CLIENT_SIDE_METRICS,
- 'allowScriptsInUserInput': settings.ALLOW_SCRIPTS_IN_USER_INPUT
+ 'allowScriptsInUserInput': settings.ALLOW_SCRIPTS_IN_USER_INPUT,
+ 'highChartsTurboThreshold': settings.HIGHCHARTS_TURBO_THRESHOLD
}
return render_template("index.html", user=json.dumps(user), name=settings.NAME,
diff --git a/redash/settings.py b/redash/settings.py
--- a/redash/settings.py
+++ b/redash/settings.py
@@ -142,6 +142,8 @@
# Client side toggles:
ALLOW_SCRIPTS_IN_USER_INPUT = parse_boolean(os.environ.get("REDASH_ALLOW_SCRIPTS_IN_USER_INPUT", "false"))
CLIENT_SIDE_METRICS = parse_boolean(os.environ.get("REDASH_CLIENT_SIDE_METRICS", "false"))
+# http://api.highcharts.com/highcharts#plotOptions.series.turboThreshold
+HIGHCHARTS_TURBO_THRESHOLD = int(os.environ.get("REDASH_HIGHCHARTS_TURBO_THRESHOLD", "1000"))
# Features:
FEATURE_ALLOW_ALL_TO_EDIT_QUERIES = parse_boolean(os.environ.get("REDASH_FEATURE_ALLOW_ALL_TO_EDIT", "true"))
|
{"golden_diff": "diff --git a/redash/handlers/static.py b/redash/handlers/static.py\n--- a/redash/handlers/static.py\n+++ b/redash/handlers/static.py\n@@ -38,7 +38,8 @@\n \n features = {\n 'clientSideMetrics': settings.CLIENT_SIDE_METRICS,\n- 'allowScriptsInUserInput': settings.ALLOW_SCRIPTS_IN_USER_INPUT\n+ 'allowScriptsInUserInput': settings.ALLOW_SCRIPTS_IN_USER_INPUT,\n+ 'highChartsTurboThreshold': settings.HIGHCHARTS_TURBO_THRESHOLD\n }\n \n return render_template(\"index.html\", user=json.dumps(user), name=settings.NAME,\ndiff --git a/redash/settings.py b/redash/settings.py\n--- a/redash/settings.py\n+++ b/redash/settings.py\n@@ -142,6 +142,8 @@\n # Client side toggles:\n ALLOW_SCRIPTS_IN_USER_INPUT = parse_boolean(os.environ.get(\"REDASH_ALLOW_SCRIPTS_IN_USER_INPUT\", \"false\"))\n CLIENT_SIDE_METRICS = parse_boolean(os.environ.get(\"REDASH_CLIENT_SIDE_METRICS\", \"false\"))\n+# http://api.highcharts.com/highcharts#plotOptions.series.turboThreshold\n+HIGHCHARTS_TURBO_THRESHOLD = int(os.environ.get(\"REDASH_HIGHCHARTS_TURBO_THRESHOLD\", \"1000\"))\n \n # Features:\n FEATURE_ALLOW_ALL_TO_EDIT_QUERIES = parse_boolean(os.environ.get(\"REDASH_FEATURE_ALLOW_ALL_TO_EDIT\", \"true\"))\n", "issue": "Row count limitation when creating chart visualization\nGraphing query with series data works fine when there are 1150 rows returned (total) but not when I go back further in time and get 1543 rows. The chart shows just one of the two data points used as series. The error in the console shows: \"Highcharts error #12: www.highcharts.com/errors/12\", and the link refers to the turboThreshold. I did not see any references to this when searching through the code. \n\nRow count limitation when creating chart visualization\nGraphing query with series data works fine when there are 1150 rows returned (total) but not when I go back further in time and get 1543 rows. The chart shows just one of the two data points used as series. The error in the console shows: \"Highcharts error #12: www.highcharts.com/errors/12\", and the link refers to the turboThreshold. I did not see any references to this when searching through the code. \n\n", "before_files": [{"content": "import hashlib\nimport json\n\nfrom flask import render_template, send_from_directory, current_app\nfrom flask_login import current_user, login_required\n\nfrom redash import settings\nfrom redash.wsgi import app\n\n\[email protected]('/admin/<anything>/<whatever>')\[email protected]('/admin/<anything>')\[email protected]('/dashboard/<anything>')\[email protected]('/alerts')\[email protected]('/alerts/<pk>')\[email protected]('/queries')\[email protected]('/data_sources')\[email protected]('/data_sources/<pk>')\[email protected]('/users')\[email protected]('/users/<pk>')\[email protected]('/queries/<query_id>')\[email protected]('/queries/<query_id>/<anything>')\[email protected]('/personal')\[email protected]('/')\n@login_required\ndef index(**kwargs):\n email_md5 = hashlib.md5(current_user.email.lower()).hexdigest()\n gravatar_url = \"https://www.gravatar.com/avatar/%s?s=40\" % email_md5\n\n user = {\n 'gravatar_url': gravatar_url,\n 'id': current_user.id,\n 'name': current_user.name,\n 'email': current_user.email,\n 'groups': current_user.groups,\n 'permissions': current_user.permissions\n }\n\n features = {\n 'clientSideMetrics': settings.CLIENT_SIDE_METRICS,\n 'allowScriptsInUserInput': settings.ALLOW_SCRIPTS_IN_USER_INPUT\n }\n\n return render_template(\"index.html\", user=json.dumps(user), name=settings.NAME,\n features=json.dumps(features),\n analytics=settings.ANALYTICS)\n\n\[email protected]('/<path:filename>')\ndef send_static(filename):\n if current_app.debug:\n cache_timeout = 0\n else:\n cache_timeout = None\n\n return send_from_directory(settings.STATIC_ASSETS_PATH, filename, cache_timeout=cache_timeout)\n", "path": "redash/handlers/static.py"}, {"content": "import json\nimport os\nimport urlparse\nfrom funcy import distinct\n\n\ndef parse_db_url(url):\n url_parts = urlparse.urlparse(url)\n connection = {'threadlocals': True}\n\n if url_parts.hostname and not url_parts.path:\n connection['name'] = url_parts.hostname\n else:\n connection['name'] = url_parts.path[1:]\n connection['host'] = url_parts.hostname\n connection['port'] = url_parts.port\n connection['user'] = url_parts.username\n connection['password'] = url_parts.password\n\n return connection\n\n\ndef fix_assets_path(path):\n fullpath = os.path.join(os.path.dirname(__file__), path)\n return fullpath\n\n\ndef array_from_string(str):\n array = str.split(',')\n if \"\" in array:\n array.remove(\"\")\n\n return array\n\n\ndef set_from_string(str):\n return set(array_from_string(str))\n\n\ndef parse_boolean(str):\n return json.loads(str.lower())\n\n\ndef all_settings():\n from types import ModuleType\n\n settings = {}\n for name, item in globals().iteritems():\n if not callable(item) and not name.startswith(\"__\") and not isinstance(item, ModuleType):\n settings[name] = item\n\n return settings\n\n\nNAME = os.environ.get('REDASH_NAME', 're:dash')\n\nREDIS_URL = os.environ.get('REDASH_REDIS_URL', \"redis://localhost:6379/0\")\n\nSTATSD_HOST = os.environ.get('REDASH_STATSD_HOST', \"127.0.0.1\")\nSTATSD_PORT = int(os.environ.get('REDASH_STATSD_PORT', \"8125\"))\nSTATSD_PREFIX = os.environ.get('REDASH_STATSD_PREFIX', \"redash\")\n\n# Connection settings for re:dash's own database (where we store the queries, results, etc)\nDATABASE_CONFIG = parse_db_url(os.environ.get(\"REDASH_DATABASE_URL\", \"postgresql://postgres\"))\n\n# Celery related settings\nCELERY_BROKER = os.environ.get(\"REDASH_CELERY_BROKER\", REDIS_URL)\nCELERY_BACKEND = os.environ.get(\"REDASH_CELERY_BACKEND\", CELERY_BROKER)\n\n# The following enables periodic job (every 5 minutes) of removing unused query results.\nQUERY_RESULTS_CLEANUP_ENABLED = parse_boolean(os.environ.get(\"REDASH_QUERY_RESULTS_CLEANUP_ENABLED\", \"true\"))\n\nAUTH_TYPE = os.environ.get(\"REDASH_AUTH_TYPE\", \"api_key\")\nPASSWORD_LOGIN_ENABLED = parse_boolean(os.environ.get(\"REDASH_PASSWORD_LOGIN_ENABLED\", \"true\"))\n\n# Google Apps domain to allow access from; any user with email in this Google Apps will be allowed\n# access\nGOOGLE_APPS_DOMAIN = set_from_string(os.environ.get(\"REDASH_GOOGLE_APPS_DOMAIN\", \"\"))\n\nGOOGLE_CLIENT_ID = os.environ.get(\"REDASH_GOOGLE_CLIENT_ID\", \"\")\nGOOGLE_CLIENT_SECRET = os.environ.get(\"REDASH_GOOGLE_CLIENT_SECRET\", \"\")\nGOOGLE_OAUTH_ENABLED = GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET\n\nSAML_METADATA_URL = os.environ.get(\"REDASH_SAML_METADATA_URL\", \"\")\nSAML_LOGIN_ENABLED = SAML_METADATA_URL != \"\"\nSAML_CALLBACK_SERVER_NAME = os.environ.get(\"REDASH_SAML_CALLBACK_SERVER_NAME\", \"\")\n\nSTATIC_ASSETS_PATH = fix_assets_path(os.environ.get(\"REDASH_STATIC_ASSETS_PATH\", \"../rd_ui/app/\"))\nJOB_EXPIRY_TIME = int(os.environ.get(\"REDASH_JOB_EXPIRY_TIME\", 3600 * 6))\nCOOKIE_SECRET = os.environ.get(\"REDASH_COOKIE_SECRET\", \"c292a0a3aa32397cdb050e233733900f\")\nLOG_LEVEL = os.environ.get(\"REDASH_LOG_LEVEL\", \"INFO\")\nANALYTICS = os.environ.get(\"REDASH_ANALYTICS\", \"\")\n\n# Mail settings:\nMAIL_SERVER = os.environ.get('REDASH_MAIL_SERVER', 'localhost')\nMAIL_PORT = int(os.environ.get('REDASH_MAIL_PORT', 25))\nMAIL_USE_TLS = parse_boolean(os.environ.get('REDASH_MAIL_USE_TLS', 'false'))\nMAIL_USE_SSL = parse_boolean(os.environ.get('REDASH_MAIL_USE_SSL', 'false'))\nMAIL_USERNAME = os.environ.get('REDASH_MAIL_USERNAME', None)\nMAIL_PASSWORD = os.environ.get('REDASH_MAIL_PASSWORD', None)\nMAIL_DEFAULT_SENDER = os.environ.get('REDASH_MAIL_DEFAULT_SENDER', None)\nMAIL_MAX_EMAILS = os.environ.get('REDASH_MAIL_MAX_EMAILS', None)\nMAIL_ASCII_ATTACHMENTS = parse_boolean(os.environ.get('REDASH_MAIL_ASCII_ATTACHMENTS', 'false'))\n\nHOST = os.environ.get('REDASH_HOST', '')\n\n# CORS settings for the Query Result API (and possbily future external APIs).\n# In most cases all you need to do is set REDASH_CORS_ACCESS_CONTROL_ALLOW_ORIGIN\n# to the calling domain (or domains in a comma separated list).\nACCESS_CONTROL_ALLOW_ORIGIN = set_from_string(os.environ.get(\"REDASH_CORS_ACCESS_CONTROL_ALLOW_ORIGIN\", \"\"))\nACCESS_CONTROL_ALLOW_CREDENTIALS = parse_boolean(os.environ.get(\"REDASH_CORS_ACCESS_CONTROL_ALLOW_CREDENTIALS\", \"false\"))\nACCESS_CONTROL_REQUEST_METHOD = os.environ.get(\"REDASH_CORS_ACCESS_CONTROL_REQUEST_METHOD\", \"GET, POST, PUT\")\nACCESS_CONTROL_ALLOW_HEADERS = os.environ.get(\"REDASH_CORS_ACCESS_CONTROL_ALLOW_HEADERS\", \"Content-Type\")\n\n# Query Runners\ndefault_query_runners = [\n 'redash.query_runner.big_query',\n 'redash.query_runner.google_spreadsheets',\n 'redash.query_runner.graphite',\n 'redash.query_runner.mongodb',\n 'redash.query_runner.mysql',\n 'redash.query_runner.pg',\n 'redash.query_runner.url',\n 'redash.query_runner.influx_db',\n 'redash.query_runner.elasticsearch',\n 'redash.query_runner.presto',\n 'redash.query_runner.hive_ds',\n 'redash.query_runner.impala_ds',\n 'redash.query_runner.vertica',\n 'redash.query_runner.treasuredata'\n]\n\nenabled_query_runners = array_from_string(os.environ.get(\"REDASH_ENABLED_QUERY_RUNNERS\", \",\".join(default_query_runners)))\nadditional_query_runners = array_from_string(os.environ.get(\"REDASH_ADDITIONAL_QUERY_RUNNERS\", \"\"))\n\nQUERY_RUNNERS = distinct(enabled_query_runners + additional_query_runners)\n\n# Support for Sentry (http://getsentry.com/). Just set your Sentry DSN to enable it:\nSENTRY_DSN = os.environ.get(\"REDASH_SENTRY_DSN\", \"\")\n\n# Client side toggles:\nALLOW_SCRIPTS_IN_USER_INPUT = parse_boolean(os.environ.get(\"REDASH_ALLOW_SCRIPTS_IN_USER_INPUT\", \"false\"))\nCLIENT_SIDE_METRICS = parse_boolean(os.environ.get(\"REDASH_CLIENT_SIDE_METRICS\", \"false\"))\n\n# Features:\nFEATURE_ALLOW_ALL_TO_EDIT_QUERIES = parse_boolean(os.environ.get(\"REDASH_FEATURE_ALLOW_ALL_TO_EDIT\", \"true\"))\nFEATURE_TABLES_PERMISSIONS = parse_boolean(os.environ.get(\"REDASH_FEATURE_TABLES_PERMISSIONS\", \"false\"))\n\n# BigQuery\nBIGQUERY_HTTP_TIMEOUT = int(os.environ.get(\"REDASH_BIGQUERY_HTTP_TIMEOUT\", \"600\"))\n", "path": "redash/settings.py"}], "after_files": [{"content": "import hashlib\nimport json\n\nfrom flask import render_template, send_from_directory, current_app\nfrom flask_login import current_user, login_required\n\nfrom redash import settings\nfrom redash.wsgi import app\n\n\[email protected]('/admin/<anything>/<whatever>')\[email protected]('/admin/<anything>')\[email protected]('/dashboard/<anything>')\[email protected]('/alerts')\[email protected]('/alerts/<pk>')\[email protected]('/queries')\[email protected]('/data_sources')\[email protected]('/data_sources/<pk>')\[email protected]('/users')\[email protected]('/users/<pk>')\[email protected]('/queries/<query_id>')\[email protected]('/queries/<query_id>/<anything>')\[email protected]('/personal')\[email protected]('/')\n@login_required\ndef index(**kwargs):\n email_md5 = hashlib.md5(current_user.email.lower()).hexdigest()\n gravatar_url = \"https://www.gravatar.com/avatar/%s?s=40\" % email_md5\n\n user = {\n 'gravatar_url': gravatar_url,\n 'id': current_user.id,\n 'name': current_user.name,\n 'email': current_user.email,\n 'groups': current_user.groups,\n 'permissions': current_user.permissions\n }\n\n features = {\n 'clientSideMetrics': settings.CLIENT_SIDE_METRICS,\n 'allowScriptsInUserInput': settings.ALLOW_SCRIPTS_IN_USER_INPUT,\n 'highChartsTurboThreshold': settings.HIGHCHARTS_TURBO_THRESHOLD\n }\n\n return render_template(\"index.html\", user=json.dumps(user), name=settings.NAME,\n features=json.dumps(features),\n analytics=settings.ANALYTICS)\n\n\[email protected]('/<path:filename>')\ndef send_static(filename):\n if current_app.debug:\n cache_timeout = 0\n else:\n cache_timeout = None\n\n return send_from_directory(settings.STATIC_ASSETS_PATH, filename, cache_timeout=cache_timeout)\n", "path": "redash/handlers/static.py"}, {"content": "import json\nimport os\nimport urlparse\nfrom funcy import distinct\n\n\ndef parse_db_url(url):\n url_parts = urlparse.urlparse(url)\n connection = {'threadlocals': True}\n\n if url_parts.hostname and not url_parts.path:\n connection['name'] = url_parts.hostname\n else:\n connection['name'] = url_parts.path[1:]\n connection['host'] = url_parts.hostname\n connection['port'] = url_parts.port\n connection['user'] = url_parts.username\n connection['password'] = url_parts.password\n\n return connection\n\n\ndef fix_assets_path(path):\n fullpath = os.path.join(os.path.dirname(__file__), path)\n return fullpath\n\n\ndef array_from_string(str):\n array = str.split(',')\n if \"\" in array:\n array.remove(\"\")\n\n return array\n\n\ndef set_from_string(str):\n return set(array_from_string(str))\n\n\ndef parse_boolean(str):\n return json.loads(str.lower())\n\n\ndef all_settings():\n from types import ModuleType\n\n settings = {}\n for name, item in globals().iteritems():\n if not callable(item) and not name.startswith(\"__\") and not isinstance(item, ModuleType):\n settings[name] = item\n\n return settings\n\n\nNAME = os.environ.get('REDASH_NAME', 're:dash')\n\nREDIS_URL = os.environ.get('REDASH_REDIS_URL', \"redis://localhost:6379/0\")\n\nSTATSD_HOST = os.environ.get('REDASH_STATSD_HOST', \"127.0.0.1\")\nSTATSD_PORT = int(os.environ.get('REDASH_STATSD_PORT', \"8125\"))\nSTATSD_PREFIX = os.environ.get('REDASH_STATSD_PREFIX', \"redash\")\n\n# Connection settings for re:dash's own database (where we store the queries, results, etc)\nDATABASE_CONFIG = parse_db_url(os.environ.get(\"REDASH_DATABASE_URL\", \"postgresql://postgres\"))\n\n# Celery related settings\nCELERY_BROKER = os.environ.get(\"REDASH_CELERY_BROKER\", REDIS_URL)\nCELERY_BACKEND = os.environ.get(\"REDASH_CELERY_BACKEND\", CELERY_BROKER)\n\n# The following enables periodic job (every 5 minutes) of removing unused query results. Behind this \"feature flag\" until\n# proved to be \"safe\".\nQUERY_RESULTS_CLEANUP_ENABLED = parse_boolean(os.environ.get(\"REDASH_QUERY_RESULTS_CLEANUP_ENABLED\", \"true\"))\n\nAUTH_TYPE = os.environ.get(\"REDASH_AUTH_TYPE\", \"api_key\")\nPASSWORD_LOGIN_ENABLED = parse_boolean(os.environ.get(\"REDASH_PASSWORD_LOGIN_ENABLED\", \"true\"))\n\n# Google Apps domain to allow access from; any user with email in this Google Apps will be allowed\n# access\nGOOGLE_APPS_DOMAIN = set_from_string(os.environ.get(\"REDASH_GOOGLE_APPS_DOMAIN\", \"\"))\n\nGOOGLE_CLIENT_ID = os.environ.get(\"REDASH_GOOGLE_CLIENT_ID\", \"\")\nGOOGLE_CLIENT_SECRET = os.environ.get(\"REDASH_GOOGLE_CLIENT_SECRET\", \"\")\nGOOGLE_OAUTH_ENABLED = GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET\n\nSAML_METADATA_URL = os.environ.get(\"REDASH_SAML_METADATA_URL\", \"\")\nSAML_LOGIN_ENABLED = SAML_METADATA_URL != \"\"\nSAML_CALLBACK_SERVER_NAME = os.environ.get(\"REDASH_SAML_CALLBACK_SERVER_NAME\", \"\")\n\nSTATIC_ASSETS_PATH = fix_assets_path(os.environ.get(\"REDASH_STATIC_ASSETS_PATH\", \"../rd_ui/app/\"))\nJOB_EXPIRY_TIME = int(os.environ.get(\"REDASH_JOB_EXPIRY_TIME\", 3600 * 6))\nCOOKIE_SECRET = os.environ.get(\"REDASH_COOKIE_SECRET\", \"c292a0a3aa32397cdb050e233733900f\")\nLOG_LEVEL = os.environ.get(\"REDASH_LOG_LEVEL\", \"INFO\")\nANALYTICS = os.environ.get(\"REDASH_ANALYTICS\", \"\")\n\n# Mail settings:\nMAIL_SERVER = os.environ.get('REDASH_MAIL_SERVER', 'localhost')\nMAIL_PORT = int(os.environ.get('REDASH_MAIL_PORT', 25))\nMAIL_USE_TLS = parse_boolean(os.environ.get('REDASH_MAIL_USE_TLS', 'false'))\nMAIL_USE_SSL = parse_boolean(os.environ.get('REDASH_MAIL_USE_SSL', 'false'))\nMAIL_USERNAME = os.environ.get('REDASH_MAIL_USERNAME', None)\nMAIL_PASSWORD = os.environ.get('REDASH_MAIL_PASSWORD', None)\nMAIL_DEFAULT_SENDER = os.environ.get('REDASH_MAIL_DEFAULT_SENDER', None)\nMAIL_MAX_EMAILS = os.environ.get('REDASH_MAIL_MAX_EMAILS', None)\nMAIL_ASCII_ATTACHMENTS = parse_boolean(os.environ.get('REDASH_MAIL_ASCII_ATTACHMENTS', 'false'))\n\nHOST = os.environ.get('REDASH_HOST', '')\n\n# CORS settings for the Query Result API (and possbily future external APIs).\n# In most cases all you need to do is set REDASH_CORS_ACCESS_CONTROL_ALLOW_ORIGIN\n# to the calling domain (or domains in a comma separated list).\nACCESS_CONTROL_ALLOW_ORIGIN = set_from_string(os.environ.get(\"REDASH_CORS_ACCESS_CONTROL_ALLOW_ORIGIN\", \"\"))\nACCESS_CONTROL_ALLOW_CREDENTIALS = parse_boolean(os.environ.get(\"REDASH_CORS_ACCESS_CONTROL_ALLOW_CREDENTIALS\", \"false\"))\nACCESS_CONTROL_REQUEST_METHOD = os.environ.get(\"REDASH_CORS_ACCESS_CONTROL_REQUEST_METHOD\", \"GET, POST, PUT\")\nACCESS_CONTROL_ALLOW_HEADERS = os.environ.get(\"REDASH_CORS_ACCESS_CONTROL_ALLOW_HEADERS\", \"Content-Type\")\n\n# Query Runners\ndefault_query_runners = [\n 'redash.query_runner.big_query',\n 'redash.query_runner.google_spreadsheets',\n 'redash.query_runner.graphite',\n 'redash.query_runner.mongodb',\n 'redash.query_runner.mysql',\n 'redash.query_runner.pg',\n 'redash.query_runner.url',\n 'redash.query_runner.influx_db',\n 'redash.query_runner.elasticsearch',\n 'redash.query_runner.presto',\n 'redash.query_runner.hive_ds',\n 'redash.query_runner.impala_ds',\n 'redash.query_runner.vertica',\n 'redash.query_runner.treasuredata'\n]\n\nenabled_query_runners = array_from_string(os.environ.get(\"REDASH_ENABLED_QUERY_RUNNERS\", \",\".join(default_query_runners)))\nadditional_query_runners = array_from_string(os.environ.get(\"REDASH_ADDITIONAL_QUERY_RUNNERS\", \"\"))\n\nQUERY_RUNNERS = distinct(enabled_query_runners + additional_query_runners)\n\n# Support for Sentry (http://getsentry.com/). Just set your Sentry DSN to enable it:\nSENTRY_DSN = os.environ.get(\"REDASH_SENTRY_DSN\", \"\")\n\n# Client side toggles:\nALLOW_SCRIPTS_IN_USER_INPUT = parse_boolean(os.environ.get(\"REDASH_ALLOW_SCRIPTS_IN_USER_INPUT\", \"false\"))\nCLIENT_SIDE_METRICS = parse_boolean(os.environ.get(\"REDASH_CLIENT_SIDE_METRICS\", \"false\"))\n# http://api.highcharts.com/highcharts#plotOptions.series.turboThreshold\nHIGHCHARTS_TURBO_THRESHOLD = int(os.environ.get(\"REDASH_HIGHCHARTS_TURBO_THRESHOLD\", \"1000\"))\n\n# Features:\nFEATURE_ALLOW_ALL_TO_EDIT_QUERIES = parse_boolean(os.environ.get(\"REDASH_FEATURE_ALLOW_ALL_TO_EDIT\", \"true\"))\nFEATURE_TABLES_PERMISSIONS = parse_boolean(os.environ.get(\"REDASH_FEATURE_TABLES_PERMISSIONS\", \"false\"))\n\n# BigQuery\nBIGQUERY_HTTP_TIMEOUT = int(os.environ.get(\"REDASH_BIGQUERY_HTTP_TIMEOUT\", \"600\"))\n", "path": "redash/settings.py"}]}
| 2,854 | 319 |
gh_patches_debug_33549
|
rasdani/github-patches
|
git_diff
|
nextcloud__appstore-56
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Test if zip bombs are possible
We should know if zip bombs are possible currently
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nextcloudappstore/core/api/v1/release/parser.py`
Content:
```
1 import re
2 import tarfile # type: ignore
3 import lxml.etree # type: ignore
4 from typing import Dict, Any, Tuple
5
6 from nextcloudappstore.core.api.v1.release import ReleaseConfig
7 from nextcloudappstore.core.versioning import pad_max_version, pad_min_version
8 from rest_framework.exceptions import APIException # type: ignore
9
10
11 class MaxSizeAppMetadataXmlException(APIException):
12 pass
13
14
15 class InvalidAppMetadataXmlException(APIException):
16 pass
17
18
19 class UnsupportedAppArchiveException(APIException):
20 pass
21
22
23 class InvalidAppPackageStructureException(APIException):
24 pass
25
26
27 class XMLSyntaxError(APIException):
28 pass
29
30
31 class GunZipAppMetadataExtractor:
32 def __init__(self, config: ReleaseConfig) -> None:
33 """
34 :argument config the config
35 """
36 self.config = config
37 self.app_folder_regex = re.compile(r'^[a-z]+[a-z_]*$')
38
39 def extract_app_metadata(self, archive_path: str) -> Tuple[str, str]:
40 """
41 Extracts the info.xml from an tar.gz archive
42 :argument archive_path the path to the tar.gz archive
43 :raises InvalidAppPackageStructureException if the first level folder
44 does not equal the app_id or no info.xml file could be found in the
45 appinfo folder
46 :return the info.xml as string
47 """
48 if not tarfile.is_tarfile(archive_path):
49 msg = '%s is not a valid tar.gz archive ' % archive_path
50 raise UnsupportedAppArchiveException(msg)
51
52 with tarfile.open(archive_path, 'r:gz') as tar:
53 result = self._parse_archive(tar)
54 return result
55
56 def _parse_archive(self, tar: Any) -> Tuple[str, str]:
57 folder = list(
58 filter(lambda name: re.match(self.app_folder_regex, name),
59 tar.getnames()
60 )
61 )
62 if len(folder) > 1:
63 msg = 'More than one possible app folder found'
64 raise InvalidAppPackageStructureException(msg)
65 elif len(folder) == 0:
66 msg = 'No possible app folder found. App folder must contain ' \
67 'only lowercase ASCII characters or underscores'
68 raise InvalidAppPackageStructureException(msg)
69
70 app_id = folder[0]
71 info_path = '%s/appinfo/info.xml' % app_id
72 try:
73 app_member = tar.getmember(app_id)
74 appinfo_member = tar.getmember('%s/appinfo' % app_id)
75 info_member = tar.getmember(info_path)
76 possible_links = [app_member, appinfo_member, info_member]
77
78 for possible_link in possible_links:
79 if possible_link.issym() or possible_link.islnk():
80 msg = 'Symlinks and hard links can not be used for %s' %\
81 possible_link
82 raise InvalidAppPackageStructureException(msg)
83
84 if info_member.size > self.config.max_info_size:
85 msg = '%s was bigger than allowed %i bytes' % (
86 info_path, self.config.max_info_size)
87 raise MaxSizeAppMetadataXmlException(msg)
88 info_file = tar.extractfile(info_member)
89 return info_file.read().decode('utf-8'), app_id
90 except KeyError:
91 msg = 'Could not find %s file inside the archive' % info_path
92 raise InvalidAppPackageStructureException(msg)
93
94
95 def element_to_dict(element: Any) -> Dict:
96 type = element.get('type')
97 key = element.tag.replace('-', '_')
98 if type == 'int':
99 return {key: int(element.text)}
100 elif type == 'list':
101 return {key: list(map(element_to_dict, element.iterchildren()))}
102 elif type == 'min-version':
103 return {key: pad_min_version(element.text)}
104 elif type == 'max-version':
105 return {key: pad_max_version(element.text)}
106 elif len(list(element)) > 0:
107 contents = {}
108 for child in element.iterchildren():
109 contents.update(element_to_dict(child))
110 return {key: contents}
111 else:
112 return {key: element.text}
113
114
115 def parse_app_metadata(xml: str, schema: str, pre_xslt: str,
116 xslt: str) -> Dict:
117 """
118 Parses, validates and maps the xml onto a dict
119 :argument xml the info.xml string to parse
120 :argument schema the schema xml as string
121 :argument pre_xslt xslt which is run before validation to ensure that
122 everything is in the correct order and that unknown elements are excluded
123 :argument xslt the xslt to transform it to a matching structure
124 :raises InvalidAppMetadataXmlException if the schema does not validate
125 :return the parsed xml as dict
126 """
127 parser = lxml.etree.XMLParser(resolve_entities=False, no_network=True,
128 remove_comments=True, load_dtd=False,
129 remove_blank_text=True, dtd_validation=False
130 )
131 try:
132 doc = lxml.etree.fromstring(bytes(xml, encoding='utf-8'), parser)
133 except lxml.etree.XMLSyntaxError as e:
134 msg = 'info.xml contains malformed xml: %s' % e
135 raise XMLSyntaxError(msg)
136 for _ in doc.iter(lxml.etree.Entity):
137 raise InvalidAppMetadataXmlException('Must not contain entities')
138 pre_transform = lxml.etree.XSLT(lxml.etree.XML(pre_xslt))
139 pre_transformed_doc = pre_transform(doc)
140 schema_doc = lxml.etree.fromstring(bytes(schema, encoding='utf-8'), parser)
141 schema = lxml.etree.XMLSchema(schema_doc)
142 try:
143 schema.assertValid(pre_transformed_doc) # type: ignore
144 except lxml.etree.DocumentInvalid as e:
145 msg = 'info.xml did not validate: %s' % e
146 raise InvalidAppMetadataXmlException(msg)
147 transform = lxml.etree.XSLT(lxml.etree.XML(xslt))
148 transformed_doc = transform(pre_transformed_doc)
149 mapped = element_to_dict(transformed_doc.getroot())
150 return mapped
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nextcloudappstore/core/api/v1/release/parser.py b/nextcloudappstore/core/api/v1/release/parser.py
--- a/nextcloudappstore/core/api/v1/release/parser.py
+++ b/nextcloudappstore/core/api/v1/release/parser.py
@@ -77,20 +77,44 @@
for possible_link in possible_links:
if possible_link.issym() or possible_link.islnk():
- msg = 'Symlinks and hard links can not be used for %s' %\
+ msg = 'Symlinks and hard links can not be used for %s' % \
possible_link
raise InvalidAppPackageStructureException(msg)
-
- if info_member.size > self.config.max_info_size:
- msg = '%s was bigger than allowed %i bytes' % (
- info_path, self.config.max_info_size)
- raise MaxSizeAppMetadataXmlException(msg)
info_file = tar.extractfile(info_member)
- return info_file.read().decode('utf-8'), app_id
+ contents = self._stream_read_file(info_file,
+ self.config.max_info_size)
+ return contents, app_id
except KeyError:
msg = 'Could not find %s file inside the archive' % info_path
raise InvalidAppPackageStructureException(msg)
+ def _stream_read_file(self, info_file: Any, max_info_size: int) -> str:
+ """
+ Instead of reading everything in one go which is vulnerable to
+ zip bombs, stream and accumulate the bytes
+ :argument info_file: buffered io reader
+ :argument max_info_size: maximum file size in bytes
+ :raises MaxSizeAppMetadataXmlException if the maximum size was reached
+ :return: the parsed info.xml
+ """
+ # FIXME: If someone finds a less ugly version, please feel free to
+ # improve it
+ size = 0
+ result = b''
+ while True:
+ size += 1024
+ if size > max_info_size:
+ msg = 'info.xml was bigger than allowed %i bytes' % \
+ max_info_size
+ raise MaxSizeAppMetadataXmlException(msg)
+
+ chunk = info_file.read(1024)
+ if not chunk:
+ break
+ result += chunk
+
+ return result.decode('utf-8')
+
def element_to_dict(element: Any) -> Dict:
type = element.get('type')
|
{"golden_diff": "diff --git a/nextcloudappstore/core/api/v1/release/parser.py b/nextcloudappstore/core/api/v1/release/parser.py\n--- a/nextcloudappstore/core/api/v1/release/parser.py\n+++ b/nextcloudappstore/core/api/v1/release/parser.py\n@@ -77,20 +77,44 @@\n \n for possible_link in possible_links:\n if possible_link.issym() or possible_link.islnk():\n- msg = 'Symlinks and hard links can not be used for %s' %\\\n+ msg = 'Symlinks and hard links can not be used for %s' % \\\n possible_link\n raise InvalidAppPackageStructureException(msg)\n-\n- if info_member.size > self.config.max_info_size:\n- msg = '%s was bigger than allowed %i bytes' % (\n- info_path, self.config.max_info_size)\n- raise MaxSizeAppMetadataXmlException(msg)\n info_file = tar.extractfile(info_member)\n- return info_file.read().decode('utf-8'), app_id\n+ contents = self._stream_read_file(info_file,\n+ self.config.max_info_size)\n+ return contents, app_id\n except KeyError:\n msg = 'Could not find %s file inside the archive' % info_path\n raise InvalidAppPackageStructureException(msg)\n \n+ def _stream_read_file(self, info_file: Any, max_info_size: int) -> str:\n+ \"\"\"\n+ Instead of reading everything in one go which is vulnerable to\n+ zip bombs, stream and accumulate the bytes\n+ :argument info_file: buffered io reader\n+ :argument max_info_size: maximum file size in bytes\n+ :raises MaxSizeAppMetadataXmlException if the maximum size was reached\n+ :return: the parsed info.xml\n+ \"\"\"\n+ # FIXME: If someone finds a less ugly version, please feel free to\n+ # improve it\n+ size = 0\n+ result = b''\n+ while True:\n+ size += 1024\n+ if size > max_info_size:\n+ msg = 'info.xml was bigger than allowed %i bytes' % \\\n+ max_info_size\n+ raise MaxSizeAppMetadataXmlException(msg)\n+\n+ chunk = info_file.read(1024)\n+ if not chunk:\n+ break\n+ result += chunk\n+\n+ return result.decode('utf-8')\n+\n \n def element_to_dict(element: Any) -> Dict:\n type = element.get('type')\n", "issue": "Test if zip bombs are possible\nWe should know if zip bombs are possible currently\n\n", "before_files": [{"content": "import re\nimport tarfile # type: ignore\nimport lxml.etree # type: ignore\nfrom typing import Dict, Any, Tuple\n\nfrom nextcloudappstore.core.api.v1.release import ReleaseConfig\nfrom nextcloudappstore.core.versioning import pad_max_version, pad_min_version\nfrom rest_framework.exceptions import APIException # type: ignore\n\n\nclass MaxSizeAppMetadataXmlException(APIException):\n pass\n\n\nclass InvalidAppMetadataXmlException(APIException):\n pass\n\n\nclass UnsupportedAppArchiveException(APIException):\n pass\n\n\nclass InvalidAppPackageStructureException(APIException):\n pass\n\n\nclass XMLSyntaxError(APIException):\n pass\n\n\nclass GunZipAppMetadataExtractor:\n def __init__(self, config: ReleaseConfig) -> None:\n \"\"\"\n :argument config the config\n \"\"\"\n self.config = config\n self.app_folder_regex = re.compile(r'^[a-z]+[a-z_]*$')\n\n def extract_app_metadata(self, archive_path: str) -> Tuple[str, str]:\n \"\"\"\n Extracts the info.xml from an tar.gz archive\n :argument archive_path the path to the tar.gz archive\n :raises InvalidAppPackageStructureException if the first level folder\n does not equal the app_id or no info.xml file could be found in the\n appinfo folder\n :return the info.xml as string\n \"\"\"\n if not tarfile.is_tarfile(archive_path):\n msg = '%s is not a valid tar.gz archive ' % archive_path\n raise UnsupportedAppArchiveException(msg)\n\n with tarfile.open(archive_path, 'r:gz') as tar:\n result = self._parse_archive(tar)\n return result\n\n def _parse_archive(self, tar: Any) -> Tuple[str, str]:\n folder = list(\n filter(lambda name: re.match(self.app_folder_regex, name),\n tar.getnames()\n )\n )\n if len(folder) > 1:\n msg = 'More than one possible app folder found'\n raise InvalidAppPackageStructureException(msg)\n elif len(folder) == 0:\n msg = 'No possible app folder found. App folder must contain ' \\\n 'only lowercase ASCII characters or underscores'\n raise InvalidAppPackageStructureException(msg)\n\n app_id = folder[0]\n info_path = '%s/appinfo/info.xml' % app_id\n try:\n app_member = tar.getmember(app_id)\n appinfo_member = tar.getmember('%s/appinfo' % app_id)\n info_member = tar.getmember(info_path)\n possible_links = [app_member, appinfo_member, info_member]\n\n for possible_link in possible_links:\n if possible_link.issym() or possible_link.islnk():\n msg = 'Symlinks and hard links can not be used for %s' %\\\n possible_link\n raise InvalidAppPackageStructureException(msg)\n\n if info_member.size > self.config.max_info_size:\n msg = '%s was bigger than allowed %i bytes' % (\n info_path, self.config.max_info_size)\n raise MaxSizeAppMetadataXmlException(msg)\n info_file = tar.extractfile(info_member)\n return info_file.read().decode('utf-8'), app_id\n except KeyError:\n msg = 'Could not find %s file inside the archive' % info_path\n raise InvalidAppPackageStructureException(msg)\n\n\ndef element_to_dict(element: Any) -> Dict:\n type = element.get('type')\n key = element.tag.replace('-', '_')\n if type == 'int':\n return {key: int(element.text)}\n elif type == 'list':\n return {key: list(map(element_to_dict, element.iterchildren()))}\n elif type == 'min-version':\n return {key: pad_min_version(element.text)}\n elif type == 'max-version':\n return {key: pad_max_version(element.text)}\n elif len(list(element)) > 0:\n contents = {}\n for child in element.iterchildren():\n contents.update(element_to_dict(child))\n return {key: contents}\n else:\n return {key: element.text}\n\n\ndef parse_app_metadata(xml: str, schema: str, pre_xslt: str,\n xslt: str) -> Dict:\n \"\"\"\n Parses, validates and maps the xml onto a dict\n :argument xml the info.xml string to parse\n :argument schema the schema xml as string\n :argument pre_xslt xslt which is run before validation to ensure that\n everything is in the correct order and that unknown elements are excluded\n :argument xslt the xslt to transform it to a matching structure\n :raises InvalidAppMetadataXmlException if the schema does not validate\n :return the parsed xml as dict\n \"\"\"\n parser = lxml.etree.XMLParser(resolve_entities=False, no_network=True,\n remove_comments=True, load_dtd=False,\n remove_blank_text=True, dtd_validation=False\n )\n try:\n doc = lxml.etree.fromstring(bytes(xml, encoding='utf-8'), parser)\n except lxml.etree.XMLSyntaxError as e:\n msg = 'info.xml contains malformed xml: %s' % e\n raise XMLSyntaxError(msg)\n for _ in doc.iter(lxml.etree.Entity):\n raise InvalidAppMetadataXmlException('Must not contain entities')\n pre_transform = lxml.etree.XSLT(lxml.etree.XML(pre_xslt))\n pre_transformed_doc = pre_transform(doc)\n schema_doc = lxml.etree.fromstring(bytes(schema, encoding='utf-8'), parser)\n schema = lxml.etree.XMLSchema(schema_doc)\n try:\n schema.assertValid(pre_transformed_doc) # type: ignore\n except lxml.etree.DocumentInvalid as e:\n msg = 'info.xml did not validate: %s' % e\n raise InvalidAppMetadataXmlException(msg)\n transform = lxml.etree.XSLT(lxml.etree.XML(xslt))\n transformed_doc = transform(pre_transformed_doc)\n mapped = element_to_dict(transformed_doc.getroot())\n return mapped\n", "path": "nextcloudappstore/core/api/v1/release/parser.py"}], "after_files": [{"content": "import re\nimport tarfile # type: ignore\nimport lxml.etree # type: ignore\nfrom typing import Dict, Any, Tuple\n\nfrom nextcloudappstore.core.api.v1.release import ReleaseConfig\nfrom nextcloudappstore.core.versioning import pad_max_version, pad_min_version\nfrom rest_framework.exceptions import APIException # type: ignore\n\n\nclass MaxSizeAppMetadataXmlException(APIException):\n pass\n\n\nclass InvalidAppMetadataXmlException(APIException):\n pass\n\n\nclass UnsupportedAppArchiveException(APIException):\n pass\n\n\nclass InvalidAppPackageStructureException(APIException):\n pass\n\n\nclass XMLSyntaxError(APIException):\n pass\n\n\nclass GunZipAppMetadataExtractor:\n def __init__(self, config: ReleaseConfig) -> None:\n \"\"\"\n :argument config the config\n \"\"\"\n self.config = config\n self.app_folder_regex = re.compile(r'^[a-z]+[a-z_]*$')\n\n def extract_app_metadata(self, archive_path: str) -> Tuple[str, str]:\n \"\"\"\n Extracts the info.xml from an tar.gz archive\n :argument archive_path the path to the tar.gz archive\n :raises InvalidAppPackageStructureException if the first level folder\n does not equal the app_id or no info.xml file could be found in the\n appinfo folder\n :return the info.xml as string\n \"\"\"\n if not tarfile.is_tarfile(archive_path):\n msg = '%s is not a valid tar.gz archive ' % archive_path\n raise UnsupportedAppArchiveException(msg)\n\n with tarfile.open(archive_path, 'r:gz') as tar:\n result = self._parse_archive(tar)\n return result\n\n def _parse_archive(self, tar: Any) -> Tuple[str, str]:\n folder = list(\n filter(lambda name: re.match(self.app_folder_regex, name),\n tar.getnames()\n )\n )\n if len(folder) > 1:\n msg = 'More than one possible app folder found'\n raise InvalidAppPackageStructureException(msg)\n elif len(folder) == 0:\n msg = 'No possible app folder found. App folder must contain ' \\\n 'only lowercase ASCII characters or underscores'\n raise InvalidAppPackageStructureException(msg)\n\n app_id = folder[0]\n info_path = '%s/appinfo/info.xml' % app_id\n try:\n app_member = tar.getmember(app_id)\n appinfo_member = tar.getmember('%s/appinfo' % app_id)\n info_member = tar.getmember(info_path)\n possible_links = [app_member, appinfo_member, info_member]\n\n for possible_link in possible_links:\n if possible_link.issym() or possible_link.islnk():\n msg = 'Symlinks and hard links can not be used for %s' % \\\n possible_link\n raise InvalidAppPackageStructureException(msg)\n info_file = tar.extractfile(info_member)\n contents = self._stream_read_file(info_file,\n self.config.max_info_size)\n return contents, app_id\n except KeyError:\n msg = 'Could not find %s file inside the archive' % info_path\n raise InvalidAppPackageStructureException(msg)\n\n def _stream_read_file(self, info_file: Any, max_info_size: int) -> str:\n \"\"\"\n Instead of reading everything in one go which is vulnerable to\n zip bombs, stream and accumulate the bytes\n :argument info_file: buffered io reader\n :argument max_info_size: maximum file size in bytes\n :raises MaxSizeAppMetadataXmlException if the maximum size was reached\n :return: the parsed info.xml\n \"\"\"\n # FIXME: If someone finds a less ugly version, please feel free to\n # improve it\n size = 0\n result = b''\n while True:\n size += 1024\n if size > max_info_size:\n msg = 'info.xml was bigger than allowed %i bytes' % \\\n max_info_size\n raise MaxSizeAppMetadataXmlException(msg)\n\n chunk = info_file.read(1024)\n if not chunk:\n break\n result += chunk\n\n return result.decode('utf-8')\n\n\ndef element_to_dict(element: Any) -> Dict:\n type = element.get('type')\n key = element.tag.replace('-', '_')\n if type == 'int':\n return {key: int(element.text)}\n elif type == 'list':\n return {key: list(map(element_to_dict, element.iterchildren()))}\n elif type == 'min-version':\n return {key: pad_min_version(element.text)}\n elif type == 'max-version':\n return {key: pad_max_version(element.text)}\n elif len(list(element)) > 0:\n contents = {}\n for child in element.iterchildren():\n contents.update(element_to_dict(child))\n return {key: contents}\n else:\n return {key: element.text}\n\n\ndef parse_app_metadata(xml: str, schema: str, pre_xslt: str,\n xslt: str) -> Dict:\n \"\"\"\n Parses, validates and maps the xml onto a dict\n :argument xml the info.xml string to parse\n :argument schema the schema xml as string\n :argument pre_xslt xslt which is run before validation to ensure that\n everything is in the correct order and that unknown elements are excluded\n :argument xslt the xslt to transform it to a matching structure\n :raises InvalidAppMetadataXmlException if the schema does not validate\n :return the parsed xml as dict\n \"\"\"\n parser = lxml.etree.XMLParser(resolve_entities=False, no_network=True,\n remove_comments=True, load_dtd=False,\n remove_blank_text=True, dtd_validation=False\n )\n try:\n doc = lxml.etree.fromstring(bytes(xml, encoding='utf-8'), parser)\n except lxml.etree.XMLSyntaxError as e:\n msg = 'info.xml contains malformed xml: %s' % e\n raise XMLSyntaxError(msg)\n for _ in doc.iter(lxml.etree.Entity):\n raise InvalidAppMetadataXmlException('Must not contain entities')\n pre_transform = lxml.etree.XSLT(lxml.etree.XML(pre_xslt))\n pre_transformed_doc = pre_transform(doc)\n schema_doc = lxml.etree.fromstring(bytes(schema, encoding='utf-8'), parser)\n schema = lxml.etree.XMLSchema(schema_doc)\n try:\n schema.assertValid(pre_transformed_doc) # type: ignore\n except lxml.etree.DocumentInvalid as e:\n msg = 'info.xml did not validate: %s' % e\n raise InvalidAppMetadataXmlException(msg)\n transform = lxml.etree.XSLT(lxml.etree.XML(xslt))\n transformed_doc = transform(pre_transformed_doc)\n mapped = element_to_dict(transformed_doc.getroot())\n return mapped\n", "path": "nextcloudappstore/core/api/v1/release/parser.py"}]}
| 1,923 | 555 |
gh_patches_debug_27856
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-2391
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reduce memory usage of the pipeline
Author: @bmbouter (bmbouter)
Redmine Issue: 9635, https://pulp.plan.io/issues/9635
---
## Motivation
It would be nice if users could specify a desired maximum amount of RAM to be used during sync. For example, a user can say I only want 1500 MB of RAM to be used max.
## What is already in place
The stages pipeline restricts memory usage by only allowing 1000 declarative content objects between each stage (so for 8-9 stages that's 8000-9000 declarative content objects. This happens [here](https://github.com/pulp/pulpcore/blob/main/pulpcore/plugin/stages/api.py#L217).
Interestingly the docstring says this defaults to 100, but it seems to actually be 1000!
Also the stages perform batching, so they will only taking in a limited number of items (the batch size). That happens [with minsize](https://github.com/pulp/pulpcore/blob/main/pulpcore/plugin/stages/api.py#L84).
## Why this isn't enough
These are count-based mechnisms and don't correspond to actual MB or GB of memory used. Some content units vary a lot in how much memory each DeclarativeContent objects take up.
Another lesser problem is that it doesn't help plugin writers restrict their usage of memory in FirstStage.
## Idea
Add a new param called `max_mb` to base Remote, which defaults to None. If specified, the user will be specifying the desired maximum MB used by process syncing.
Have the queues between the stages, and the bather implementation, both check the total memory the current process is using and asyncio.sleep() polling until it goes down. This should keep the maximum amount used by all objects roughly to that number.
## Details
Introduce a new `MBSizeQueue` which is a wrapper around `asyncio.Queue` used today. It will have the same `put()` call, only wait if the amount of memory in use is greater than the remote is configured for.
Then introduce the same memory checking feature in the batcher. I'm not completely sure this second part is needed though.
We have to be very careful not to deadlock with this feature. For example, we have to account for the base case where even a single item is larger than the memory desired. Repos in pulp_rpm have had a single unit use more than 1.2G if I remember right, so if someone was syncing with 800 MB and we weren't careful to allow that unit to still flow through the pipeline we'd deadlock.....
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/plugin/stages/api.py`
Content:
```
1 import asyncio
2 import logging
3
4 from gettext import gettext as _
5
6 from django.conf import settings
7
8 from .profiler import ProfilingQueue
9
10
11 log = logging.getLogger(__name__)
12
13
14 class Stage:
15 """
16 The base class for all Stages API stages.
17
18 To make a stage, inherit from this class and implement :meth:`run` on the subclass.
19 """
20
21 def __init__(self):
22 self._in_q = None
23 self._out_q = None
24
25 def _connect(self, in_q, out_q):
26 """
27 Connect to queues within a pipeline.
28
29 Args:
30 in_q (asyncio.Queue): The stage input queue.
31 out_q (asyncio.Queue): The stage output queue.
32 """
33 self._in_q = in_q
34 self._out_q = out_q
35
36 async def __call__(self):
37 """
38 This coroutine makes the stage callable.
39
40 It calls :meth:`run` and signals the next stage that its work is finished.
41 """
42 log.debug(_("%(name)s - begin."), {"name": self})
43 await self.run()
44 await self._out_q.put(None)
45 log.debug(_("%(name)s - put end-marker."), {"name": self})
46
47 async def run(self):
48 """
49 The coroutine that is run as part of this stage.
50
51 Returns:
52 The coroutine that runs this stage.
53
54 """
55 raise NotImplementedError(_("A plugin writer must implement this method"))
56
57 async def items(self):
58 """
59 Asynchronous iterator yielding items of :class:`DeclarativeContent` from `self._in_q`.
60
61 The iterator will get instances of :class:`DeclarativeContent` one by one as they get
62 available.
63
64 Yields:
65 An instance of :class:`DeclarativeContent`
66
67 Examples:
68 Used in stages to get d_content instances one by one from `self._in_q`::
69
70 class MyStage(Stage):
71 async def run(self):
72 async for d_content in self.items():
73 # process declarative content
74 await self.put(d_content)
75
76 """
77 while True:
78 content = await self._in_q.get()
79 if content is None:
80 break
81 log.debug("%(name)s - next: %(content)s.", {"name": self, "content": content})
82 yield content
83
84 async def batches(self, minsize=500):
85 """
86 Asynchronous iterator yielding batches of :class:`DeclarativeContent` from `self._in_q`.
87
88 The iterator will try to get as many instances of
89 :class:`DeclarativeContent` as possible without blocking, but
90 at least `minsize` instances.
91
92 Args:
93 minsize (int): The minimum batch size to yield (unless it is the final batch)
94
95 Yields:
96 A list of :class:`DeclarativeContent` instances
97
98 Examples:
99 Used in stages to get large chunks of d_content instances from `self._in_q`::
100
101 class MyStage(Stage):
102 async def run(self):
103 async for batch in self.batches():
104 for d_content in batch:
105 # process declarative content
106 await self.put(d_content)
107
108 """
109 batch = []
110 shutdown = False
111 no_block = False
112 thaw_queue_event = asyncio.Event()
113
114 def add_to_batch(content):
115 nonlocal batch
116 nonlocal shutdown
117 nonlocal no_block
118 nonlocal thaw_queue_event
119
120 if content is None:
121 shutdown = True
122 log.debug(_("%(name)s - shutdown."), {"name": self})
123 else:
124 if not content.does_batch:
125 no_block = True
126 content._thaw_queue_event = thaw_queue_event
127 batch.append(content)
128
129 get_listener = asyncio.ensure_future(self._in_q.get())
130 thaw_event_listener = asyncio.ensure_future(thaw_queue_event.wait())
131 while not shutdown:
132 done, pending = await asyncio.wait(
133 [thaw_event_listener, get_listener], return_when=asyncio.FIRST_COMPLETED
134 )
135 if thaw_event_listener in done:
136 thaw_event_listener = asyncio.ensure_future(thaw_queue_event.wait())
137 no_block = True
138 if get_listener in done:
139 content = await get_listener
140 add_to_batch(content)
141 get_listener = asyncio.ensure_future(self._in_q.get())
142 while not shutdown:
143 try:
144 content = self._in_q.get_nowait()
145 except asyncio.QueueEmpty:
146 break
147 else:
148 add_to_batch(content)
149
150 if batch and (len(batch) >= minsize or shutdown or no_block):
151 log.debug(
152 _("%(name)s - next batch[%(length)d]."), {"name": self, "length": len(batch)}
153 )
154 for content in batch:
155 content._thaw_queue_event = None
156 thaw_queue_event.clear()
157 yield batch
158 batch = []
159 no_block = False
160 thaw_event_listener.cancel()
161 get_listener.cancel()
162
163 async def put(self, item):
164 """
165 Coroutine to pass items to the next stage.
166
167 Args:
168 item: A handled instance of :class:`pulpcore.plugin.stages.DeclarativeContent`
169
170 Raises:
171 ValueError: When `item` is None.
172 """
173 if item is None:
174 raise ValueError(_("(None) not permitted."))
175 await self._out_q.put(item)
176 log.debug("{name} - put: {content}".format(name=self, content=item))
177
178 def __str__(self):
179 return "[{id}] {name}".format(id=id(self), name=self.__class__.__name__)
180
181
182 async def create_pipeline(stages, maxsize=1000):
183 """
184 A coroutine that builds a Stages API linear pipeline from the list `stages` and runs it.
185
186 Each stage is an instance of a class derived from :class:`pulpcore.plugin.stages.Stage` that
187 implements the :meth:`run` coroutine. This coroutine reads asyncromously either from the
188 `items()` iterator or the `batches()` iterator and outputs the items with `put()`. Here is an
189 example of the simplest stage that only passes data::
190
191 class MyStage(Stage):
192 async def run(self):
193 async for d_content in self.items(): # Fetch items from the previous stage
194 await self.put(d_content) # Hand them over to the next stage
195
196 Args:
197 stages (list of coroutines): A list of Stages API compatible coroutines.
198 maxsize (int): The maximum amount of items a queue between two stages should hold. Optional
199 and defaults to 100.
200
201 Returns:
202 A single coroutine that can be used to run, wait, or cancel the entire pipeline with.
203 Raises:
204 ValueError: When a stage instance is specified more than once.
205 """
206 futures = []
207 history = set()
208 in_q = None
209 for i, stage in enumerate(stages):
210 if stage in history:
211 raise ValueError(_("Each stage instance must be unique."))
212 history.add(stage)
213 if i < len(stages) - 1:
214 if settings.PROFILE_STAGES_API:
215 out_q = ProfilingQueue.make_and_record_queue(stages[i + 1], i + 1, maxsize)
216 else:
217 out_q = asyncio.Queue(maxsize=maxsize)
218 else:
219 out_q = None
220 stage._connect(in_q, out_q)
221 futures.append(asyncio.ensure_future(stage()))
222 in_q = out_q
223
224 try:
225 await asyncio.gather(*futures)
226 except Exception:
227 # One of the stages raised an exception, cancel all stages...
228 pending = []
229 for task in futures:
230 if not task.done():
231 task.cancel()
232 pending.append(task)
233 # ...and run until all Exceptions show up
234 if pending:
235 await asyncio.wait(pending, timeout=60)
236 raise
237
238
239 class EndStage(Stage):
240 """
241 A Stages API stage that drains incoming items and does nothing with the items. This is
242 required at the end of all pipelines.
243
244 Without this stage, the `maxsize` of the last stage's `_out_q` could fill up and block the
245 entire pipeline.
246 """
247
248 async def __call__(self):
249 """
250 This method drains items from the last queue and drops them.
251
252 Importantly it does not try to put items into the nonexistent next queue.
253 """
254 # We overwrite __call__ here to avoid trying to put None in `self._out_q`.
255 async for _ in self.items(): # noqa
256 pass
257
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pulpcore/plugin/stages/api.py b/pulpcore/plugin/stages/api.py
--- a/pulpcore/plugin/stages/api.py
+++ b/pulpcore/plugin/stages/api.py
@@ -179,12 +179,12 @@
return "[{id}] {name}".format(id=id(self), name=self.__class__.__name__)
-async def create_pipeline(stages, maxsize=1000):
+async def create_pipeline(stages, maxsize=1):
"""
A coroutine that builds a Stages API linear pipeline from the list `stages` and runs it.
Each stage is an instance of a class derived from :class:`pulpcore.plugin.stages.Stage` that
- implements the :meth:`run` coroutine. This coroutine reads asyncromously either from the
+ implements the :meth:`run` coroutine. This coroutine reads asynchronously either from the
`items()` iterator or the `batches()` iterator and outputs the items with `put()`. Here is an
example of the simplest stage that only passes data::
@@ -196,7 +196,7 @@
Args:
stages (list of coroutines): A list of Stages API compatible coroutines.
maxsize (int): The maximum amount of items a queue between two stages should hold. Optional
- and defaults to 100.
+ and defaults to 1.
Returns:
A single coroutine that can be used to run, wait, or cancel the entire pipeline with.
|
{"golden_diff": "diff --git a/pulpcore/plugin/stages/api.py b/pulpcore/plugin/stages/api.py\n--- a/pulpcore/plugin/stages/api.py\n+++ b/pulpcore/plugin/stages/api.py\n@@ -179,12 +179,12 @@\n return \"[{id}] {name}\".format(id=id(self), name=self.__class__.__name__)\n \n \n-async def create_pipeline(stages, maxsize=1000):\n+async def create_pipeline(stages, maxsize=1):\n \"\"\"\n A coroutine that builds a Stages API linear pipeline from the list `stages` and runs it.\n \n Each stage is an instance of a class derived from :class:`pulpcore.plugin.stages.Stage` that\n- implements the :meth:`run` coroutine. This coroutine reads asyncromously either from the\n+ implements the :meth:`run` coroutine. This coroutine reads asynchronously either from the\n `items()` iterator or the `batches()` iterator and outputs the items with `put()`. Here is an\n example of the simplest stage that only passes data::\n \n@@ -196,7 +196,7 @@\n Args:\n stages (list of coroutines): A list of Stages API compatible coroutines.\n maxsize (int): The maximum amount of items a queue between two stages should hold. Optional\n- and defaults to 100.\n+ and defaults to 1.\n \n Returns:\n A single coroutine that can be used to run, wait, or cancel the entire pipeline with.\n", "issue": "Reduce memory usage of the pipeline\nAuthor: @bmbouter (bmbouter)\n\n\nRedmine Issue: 9635, https://pulp.plan.io/issues/9635\n\n---\n\n## Motivation\r\n\r\nIt would be nice if users could specify a desired maximum amount of RAM to be used during sync. For example, a user can say I only want 1500 MB of RAM to be used max.\r\n\r\n## What is already in place\r\n\r\nThe stages pipeline restricts memory usage by only allowing 1000 declarative content objects between each stage (so for 8-9 stages that's 8000-9000 declarative content objects. This happens [here](https://github.com/pulp/pulpcore/blob/main/pulpcore/plugin/stages/api.py#L217).\r\n\r\nInterestingly the docstring says this defaults to 100, but it seems to actually be 1000!\r\n\r\nAlso the stages perform batching, so they will only taking in a limited number of items (the batch size). That happens [with minsize](https://github.com/pulp/pulpcore/blob/main/pulpcore/plugin/stages/api.py#L84).\r\n\r\n## Why this isn't enough\r\n\r\nThese are count-based mechnisms and don't correspond to actual MB or GB of memory used. Some content units vary a lot in how much memory each DeclarativeContent objects take up.\r\n\r\nAnother lesser problem is that it doesn't help plugin writers restrict their usage of memory in FirstStage.\r\n\r\n## Idea\r\n\r\nAdd a new param called `max_mb` to base Remote, which defaults to None. If specified, the user will be specifying the desired maximum MB used by process syncing.\r\n\r\nHave the queues between the stages, and the bather implementation, both check the total memory the current process is using and asyncio.sleep() polling until it goes down. This should keep the maximum amount used by all objects roughly to that number.\r\n\r\n## Details\r\n\r\nIntroduce a new `MBSizeQueue` which is a wrapper around `asyncio.Queue` used today. It will have the same `put()` call, only wait if the amount of memory in use is greater than the remote is configured for.\r\n\r\nThen introduce the same memory checking feature in the batcher. I'm not completely sure this second part is needed though.\r\n\r\nWe have to be very careful not to deadlock with this feature. For example, we have to account for the base case where even a single item is larger than the memory desired. Repos in pulp_rpm have had a single unit use more than 1.2G if I remember right, so if someone was syncing with 800 MB and we weren't careful to allow that unit to still flow through the pipeline we'd deadlock.....\n\n\n\n", "before_files": [{"content": "import asyncio\nimport logging\n\nfrom gettext import gettext as _\n\nfrom django.conf import settings\n\nfrom .profiler import ProfilingQueue\n\n\nlog = logging.getLogger(__name__)\n\n\nclass Stage:\n \"\"\"\n The base class for all Stages API stages.\n\n To make a stage, inherit from this class and implement :meth:`run` on the subclass.\n \"\"\"\n\n def __init__(self):\n self._in_q = None\n self._out_q = None\n\n def _connect(self, in_q, out_q):\n \"\"\"\n Connect to queues within a pipeline.\n\n Args:\n in_q (asyncio.Queue): The stage input queue.\n out_q (asyncio.Queue): The stage output queue.\n \"\"\"\n self._in_q = in_q\n self._out_q = out_q\n\n async def __call__(self):\n \"\"\"\n This coroutine makes the stage callable.\n\n It calls :meth:`run` and signals the next stage that its work is finished.\n \"\"\"\n log.debug(_(\"%(name)s - begin.\"), {\"name\": self})\n await self.run()\n await self._out_q.put(None)\n log.debug(_(\"%(name)s - put end-marker.\"), {\"name\": self})\n\n async def run(self):\n \"\"\"\n The coroutine that is run as part of this stage.\n\n Returns:\n The coroutine that runs this stage.\n\n \"\"\"\n raise NotImplementedError(_(\"A plugin writer must implement this method\"))\n\n async def items(self):\n \"\"\"\n Asynchronous iterator yielding items of :class:`DeclarativeContent` from `self._in_q`.\n\n The iterator will get instances of :class:`DeclarativeContent` one by one as they get\n available.\n\n Yields:\n An instance of :class:`DeclarativeContent`\n\n Examples:\n Used in stages to get d_content instances one by one from `self._in_q`::\n\n class MyStage(Stage):\n async def run(self):\n async for d_content in self.items():\n # process declarative content\n await self.put(d_content)\n\n \"\"\"\n while True:\n content = await self._in_q.get()\n if content is None:\n break\n log.debug(\"%(name)s - next: %(content)s.\", {\"name\": self, \"content\": content})\n yield content\n\n async def batches(self, minsize=500):\n \"\"\"\n Asynchronous iterator yielding batches of :class:`DeclarativeContent` from `self._in_q`.\n\n The iterator will try to get as many instances of\n :class:`DeclarativeContent` as possible without blocking, but\n at least `minsize` instances.\n\n Args:\n minsize (int): The minimum batch size to yield (unless it is the final batch)\n\n Yields:\n A list of :class:`DeclarativeContent` instances\n\n Examples:\n Used in stages to get large chunks of d_content instances from `self._in_q`::\n\n class MyStage(Stage):\n async def run(self):\n async for batch in self.batches():\n for d_content in batch:\n # process declarative content\n await self.put(d_content)\n\n \"\"\"\n batch = []\n shutdown = False\n no_block = False\n thaw_queue_event = asyncio.Event()\n\n def add_to_batch(content):\n nonlocal batch\n nonlocal shutdown\n nonlocal no_block\n nonlocal thaw_queue_event\n\n if content is None:\n shutdown = True\n log.debug(_(\"%(name)s - shutdown.\"), {\"name\": self})\n else:\n if not content.does_batch:\n no_block = True\n content._thaw_queue_event = thaw_queue_event\n batch.append(content)\n\n get_listener = asyncio.ensure_future(self._in_q.get())\n thaw_event_listener = asyncio.ensure_future(thaw_queue_event.wait())\n while not shutdown:\n done, pending = await asyncio.wait(\n [thaw_event_listener, get_listener], return_when=asyncio.FIRST_COMPLETED\n )\n if thaw_event_listener in done:\n thaw_event_listener = asyncio.ensure_future(thaw_queue_event.wait())\n no_block = True\n if get_listener in done:\n content = await get_listener\n add_to_batch(content)\n get_listener = asyncio.ensure_future(self._in_q.get())\n while not shutdown:\n try:\n content = self._in_q.get_nowait()\n except asyncio.QueueEmpty:\n break\n else:\n add_to_batch(content)\n\n if batch and (len(batch) >= minsize or shutdown or no_block):\n log.debug(\n _(\"%(name)s - next batch[%(length)d].\"), {\"name\": self, \"length\": len(batch)}\n )\n for content in batch:\n content._thaw_queue_event = None\n thaw_queue_event.clear()\n yield batch\n batch = []\n no_block = False\n thaw_event_listener.cancel()\n get_listener.cancel()\n\n async def put(self, item):\n \"\"\"\n Coroutine to pass items to the next stage.\n\n Args:\n item: A handled instance of :class:`pulpcore.plugin.stages.DeclarativeContent`\n\n Raises:\n ValueError: When `item` is None.\n \"\"\"\n if item is None:\n raise ValueError(_(\"(None) not permitted.\"))\n await self._out_q.put(item)\n log.debug(\"{name} - put: {content}\".format(name=self, content=item))\n\n def __str__(self):\n return \"[{id}] {name}\".format(id=id(self), name=self.__class__.__name__)\n\n\nasync def create_pipeline(stages, maxsize=1000):\n \"\"\"\n A coroutine that builds a Stages API linear pipeline from the list `stages` and runs it.\n\n Each stage is an instance of a class derived from :class:`pulpcore.plugin.stages.Stage` that\n implements the :meth:`run` coroutine. This coroutine reads asyncromously either from the\n `items()` iterator or the `batches()` iterator and outputs the items with `put()`. Here is an\n example of the simplest stage that only passes data::\n\n class MyStage(Stage):\n async def run(self):\n async for d_content in self.items(): # Fetch items from the previous stage\n await self.put(d_content) # Hand them over to the next stage\n\n Args:\n stages (list of coroutines): A list of Stages API compatible coroutines.\n maxsize (int): The maximum amount of items a queue between two stages should hold. Optional\n and defaults to 100.\n\n Returns:\n A single coroutine that can be used to run, wait, or cancel the entire pipeline with.\n Raises:\n ValueError: When a stage instance is specified more than once.\n \"\"\"\n futures = []\n history = set()\n in_q = None\n for i, stage in enumerate(stages):\n if stage in history:\n raise ValueError(_(\"Each stage instance must be unique.\"))\n history.add(stage)\n if i < len(stages) - 1:\n if settings.PROFILE_STAGES_API:\n out_q = ProfilingQueue.make_and_record_queue(stages[i + 1], i + 1, maxsize)\n else:\n out_q = asyncio.Queue(maxsize=maxsize)\n else:\n out_q = None\n stage._connect(in_q, out_q)\n futures.append(asyncio.ensure_future(stage()))\n in_q = out_q\n\n try:\n await asyncio.gather(*futures)\n except Exception:\n # One of the stages raised an exception, cancel all stages...\n pending = []\n for task in futures:\n if not task.done():\n task.cancel()\n pending.append(task)\n # ...and run until all Exceptions show up\n if pending:\n await asyncio.wait(pending, timeout=60)\n raise\n\n\nclass EndStage(Stage):\n \"\"\"\n A Stages API stage that drains incoming items and does nothing with the items. This is\n required at the end of all pipelines.\n\n Without this stage, the `maxsize` of the last stage's `_out_q` could fill up and block the\n entire pipeline.\n \"\"\"\n\n async def __call__(self):\n \"\"\"\n This method drains items from the last queue and drops them.\n\n Importantly it does not try to put items into the nonexistent next queue.\n \"\"\"\n # We overwrite __call__ here to avoid trying to put None in `self._out_q`.\n async for _ in self.items(): # noqa\n pass\n", "path": "pulpcore/plugin/stages/api.py"}], "after_files": [{"content": "import asyncio\nimport logging\n\nfrom gettext import gettext as _\n\nfrom django.conf import settings\n\nfrom .profiler import ProfilingQueue\n\n\nlog = logging.getLogger(__name__)\n\n\nclass Stage:\n \"\"\"\n The base class for all Stages API stages.\n\n To make a stage, inherit from this class and implement :meth:`run` on the subclass.\n \"\"\"\n\n def __init__(self):\n self._in_q = None\n self._out_q = None\n\n def _connect(self, in_q, out_q):\n \"\"\"\n Connect to queues within a pipeline.\n\n Args:\n in_q (asyncio.Queue): The stage input queue.\n out_q (asyncio.Queue): The stage output queue.\n \"\"\"\n self._in_q = in_q\n self._out_q = out_q\n\n async def __call__(self):\n \"\"\"\n This coroutine makes the stage callable.\n\n It calls :meth:`run` and signals the next stage that its work is finished.\n \"\"\"\n log.debug(_(\"%(name)s - begin.\"), {\"name\": self})\n await self.run()\n await self._out_q.put(None)\n log.debug(_(\"%(name)s - put end-marker.\"), {\"name\": self})\n\n async def run(self):\n \"\"\"\n The coroutine that is run as part of this stage.\n\n Returns:\n The coroutine that runs this stage.\n\n \"\"\"\n raise NotImplementedError(_(\"A plugin writer must implement this method\"))\n\n async def items(self):\n \"\"\"\n Asynchronous iterator yielding items of :class:`DeclarativeContent` from `self._in_q`.\n\n The iterator will get instances of :class:`DeclarativeContent` one by one as they get\n available.\n\n Yields:\n An instance of :class:`DeclarativeContent`\n\n Examples:\n Used in stages to get d_content instances one by one from `self._in_q`::\n\n class MyStage(Stage):\n async def run(self):\n async for d_content in self.items():\n # process declarative content\n await self.put(d_content)\n\n \"\"\"\n while True:\n content = await self._in_q.get()\n if content is None:\n break\n log.debug(\"%(name)s - next: %(content)s.\", {\"name\": self, \"content\": content})\n yield content\n\n async def batches(self, minsize=500):\n \"\"\"\n Asynchronous iterator yielding batches of :class:`DeclarativeContent` from `self._in_q`.\n\n The iterator will try to get as many instances of\n :class:`DeclarativeContent` as possible without blocking, but\n at least `minsize` instances.\n\n Args:\n minsize (int): The minimum batch size to yield (unless it is the final batch)\n\n Yields:\n A list of :class:`DeclarativeContent` instances\n\n Examples:\n Used in stages to get large chunks of d_content instances from `self._in_q`::\n\n class MyStage(Stage):\n async def run(self):\n async for batch in self.batches():\n for d_content in batch:\n # process declarative content\n await self.put(d_content)\n\n \"\"\"\n batch = []\n shutdown = False\n no_block = False\n thaw_queue_event = asyncio.Event()\n\n def add_to_batch(content):\n nonlocal batch\n nonlocal shutdown\n nonlocal no_block\n nonlocal thaw_queue_event\n\n if content is None:\n shutdown = True\n log.debug(_(\"%(name)s - shutdown.\"), {\"name\": self})\n else:\n if not content.does_batch:\n no_block = True\n content._thaw_queue_event = thaw_queue_event\n batch.append(content)\n\n get_listener = asyncio.ensure_future(self._in_q.get())\n thaw_event_listener = asyncio.ensure_future(thaw_queue_event.wait())\n while not shutdown:\n done, pending = await asyncio.wait(\n [thaw_event_listener, get_listener], return_when=asyncio.FIRST_COMPLETED\n )\n if thaw_event_listener in done:\n thaw_event_listener = asyncio.ensure_future(thaw_queue_event.wait())\n no_block = True\n if get_listener in done:\n content = await get_listener\n add_to_batch(content)\n get_listener = asyncio.ensure_future(self._in_q.get())\n while not shutdown:\n try:\n content = self._in_q.get_nowait()\n except asyncio.QueueEmpty:\n break\n else:\n add_to_batch(content)\n\n if batch and (len(batch) >= minsize or shutdown or no_block):\n log.debug(\n _(\"%(name)s - next batch[%(length)d].\"), {\"name\": self, \"length\": len(batch)}\n )\n for content in batch:\n content._thaw_queue_event = None\n thaw_queue_event.clear()\n yield batch\n batch = []\n no_block = False\n thaw_event_listener.cancel()\n get_listener.cancel()\n\n async def put(self, item):\n \"\"\"\n Coroutine to pass items to the next stage.\n\n Args:\n item: A handled instance of :class:`pulpcore.plugin.stages.DeclarativeContent`\n\n Raises:\n ValueError: When `item` is None.\n \"\"\"\n if item is None:\n raise ValueError(_(\"(None) not permitted.\"))\n await self._out_q.put(item)\n log.debug(\"{name} - put: {content}\".format(name=self, content=item))\n\n def __str__(self):\n return \"[{id}] {name}\".format(id=id(self), name=self.__class__.__name__)\n\n\nasync def create_pipeline(stages, maxsize=1):\n \"\"\"\n A coroutine that builds a Stages API linear pipeline from the list `stages` and runs it.\n\n Each stage is an instance of a class derived from :class:`pulpcore.plugin.stages.Stage` that\n implements the :meth:`run` coroutine. This coroutine reads asynchronously either from the\n `items()` iterator or the `batches()` iterator and outputs the items with `put()`. Here is an\n example of the simplest stage that only passes data::\n\n class MyStage(Stage):\n async def run(self):\n async for d_content in self.items(): # Fetch items from the previous stage\n await self.put(d_content) # Hand them over to the next stage\n\n Args:\n stages (list of coroutines): A list of Stages API compatible coroutines.\n maxsize (int): The maximum amount of items a queue between two stages should hold. Optional\n and defaults to 1.\n\n Returns:\n A single coroutine that can be used to run, wait, or cancel the entire pipeline with.\n Raises:\n ValueError: When a stage instance is specified more than once.\n \"\"\"\n futures = []\n history = set()\n in_q = None\n for i, stage in enumerate(stages):\n if stage in history:\n raise ValueError(_(\"Each stage instance must be unique.\"))\n history.add(stage)\n if i < len(stages) - 1:\n if settings.PROFILE_STAGES_API:\n out_q = ProfilingQueue.make_and_record_queue(stages[i + 1], i + 1, maxsize)\n else:\n out_q = asyncio.Queue(maxsize=maxsize)\n else:\n out_q = None\n stage._connect(in_q, out_q)\n futures.append(asyncio.ensure_future(stage()))\n in_q = out_q\n\n try:\n await asyncio.gather(*futures)\n except Exception:\n # One of the stages raised an exception, cancel all stages...\n pending = []\n for task in futures:\n if not task.done():\n task.cancel()\n pending.append(task)\n # ...and run until all Exceptions show up\n if pending:\n await asyncio.wait(pending, timeout=60)\n raise\n\n\nclass EndStage(Stage):\n \"\"\"\n A Stages API stage that drains incoming items and does nothing with the items. This is\n required at the end of all pipelines.\n\n Without this stage, the `maxsize` of the last stage's `_out_q` could fill up and block the\n entire pipeline.\n \"\"\"\n\n async def __call__(self):\n \"\"\"\n This method drains items from the last queue and drops them.\n\n Importantly it does not try to put items into the nonexistent next queue.\n \"\"\"\n # We overwrite __call__ here to avoid trying to put None in `self._out_q`.\n async for _ in self.items(): # noqa\n pass\n", "path": "pulpcore/plugin/stages/api.py"}]}
| 3,348 | 335 |
gh_patches_debug_4176
|
rasdani/github-patches
|
git_diff
|
pallets__click-1081
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add newlines between options in help text
In case of multi-line help messages for options the list of options ``cmd --help`` gets difficult to read. It would be great to have an option to toggle extra newlines after multi-line help messages / in general between option help messages.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `click/formatting.py`
Content:
```
1 from contextlib import contextmanager
2 from .termui import get_terminal_size
3 from .parser import split_opt
4 from ._compat import term_len
5
6
7 # Can force a width. This is used by the test system
8 FORCED_WIDTH = None
9
10
11 def measure_table(rows):
12 widths = {}
13 for row in rows:
14 for idx, col in enumerate(row):
15 widths[idx] = max(widths.get(idx, 0), term_len(col))
16 return tuple(y for x, y in sorted(widths.items()))
17
18
19 def iter_rows(rows, col_count):
20 for row in rows:
21 row = tuple(row)
22 yield row + ('',) * (col_count - len(row))
23
24
25 def wrap_text(text, width=78, initial_indent='', subsequent_indent='',
26 preserve_paragraphs=False):
27 """A helper function that intelligently wraps text. By default, it
28 assumes that it operates on a single paragraph of text but if the
29 `preserve_paragraphs` parameter is provided it will intelligently
30 handle paragraphs (defined by two empty lines).
31
32 If paragraphs are handled, a paragraph can be prefixed with an empty
33 line containing the ``\\b`` character (``\\x08``) to indicate that
34 no rewrapping should happen in that block.
35
36 :param text: the text that should be rewrapped.
37 :param width: the maximum width for the text.
38 :param initial_indent: the initial indent that should be placed on the
39 first line as a string.
40 :param subsequent_indent: the indent string that should be placed on
41 each consecutive line.
42 :param preserve_paragraphs: if this flag is set then the wrapping will
43 intelligently handle paragraphs.
44 """
45 from ._textwrap import TextWrapper
46 text = text.expandtabs()
47 wrapper = TextWrapper(width, initial_indent=initial_indent,
48 subsequent_indent=subsequent_indent,
49 replace_whitespace=False)
50 if not preserve_paragraphs:
51 return wrapper.fill(text)
52
53 p = []
54 buf = []
55 indent = None
56
57 def _flush_par():
58 if not buf:
59 return
60 if buf[0].strip() == '\b':
61 p.append((indent or 0, True, '\n'.join(buf[1:])))
62 else:
63 p.append((indent or 0, False, ' '.join(buf)))
64 del buf[:]
65
66 for line in text.splitlines():
67 if not line:
68 _flush_par()
69 indent = None
70 else:
71 if indent is None:
72 orig_len = term_len(line)
73 line = line.lstrip()
74 indent = orig_len - term_len(line)
75 buf.append(line)
76 _flush_par()
77
78 rv = []
79 for indent, raw, text in p:
80 with wrapper.extra_indent(' ' * indent):
81 if raw:
82 rv.append(wrapper.indent_only(text))
83 else:
84 rv.append(wrapper.fill(text))
85
86 return '\n\n'.join(rv)
87
88
89 class HelpFormatter(object):
90 """This class helps with formatting text-based help pages. It's
91 usually just needed for very special internal cases, but it's also
92 exposed so that developers can write their own fancy outputs.
93
94 At present, it always writes into memory.
95
96 :param indent_increment: the additional increment for each level.
97 :param width: the width for the text. This defaults to the terminal
98 width clamped to a maximum of 78.
99 """
100
101 def __init__(self, indent_increment=2, width=None, max_width=None):
102 self.indent_increment = indent_increment
103 if max_width is None:
104 max_width = 80
105 if width is None:
106 width = FORCED_WIDTH
107 if width is None:
108 width = max(min(get_terminal_size()[0], max_width) - 2, 50)
109 self.width = width
110 self.current_indent = 0
111 self.buffer = []
112
113 def write(self, string):
114 """Writes a unicode string into the internal buffer."""
115 self.buffer.append(string)
116
117 def indent(self):
118 """Increases the indentation."""
119 self.current_indent += self.indent_increment
120
121 def dedent(self):
122 """Decreases the indentation."""
123 self.current_indent -= self.indent_increment
124
125 def write_usage(self, prog, args='', prefix='Usage: '):
126 """Writes a usage line into the buffer.
127
128 :param prog: the program name.
129 :param args: whitespace separated list of arguments.
130 :param prefix: the prefix for the first line.
131 """
132 usage_prefix = '%*s%s ' % (self.current_indent, prefix, prog)
133 text_width = self.width - self.current_indent
134
135 if text_width >= (term_len(usage_prefix) + 20):
136 # The arguments will fit to the right of the prefix.
137 indent = ' ' * term_len(usage_prefix)
138 self.write(wrap_text(args, text_width,
139 initial_indent=usage_prefix,
140 subsequent_indent=indent))
141 else:
142 # The prefix is too long, put the arguments on the next line.
143 self.write(usage_prefix)
144 self.write('\n')
145 indent = ' ' * (max(self.current_indent, term_len(prefix)) + 4)
146 self.write(wrap_text(args, text_width,
147 initial_indent=indent,
148 subsequent_indent=indent))
149
150 self.write('\n')
151
152 def write_heading(self, heading):
153 """Writes a heading into the buffer."""
154 self.write('%*s%s:\n' % (self.current_indent, '', heading))
155
156 def write_paragraph(self):
157 """Writes a paragraph into the buffer."""
158 if self.buffer:
159 self.write('\n')
160
161 def write_text(self, text):
162 """Writes re-indented text into the buffer. This rewraps and
163 preserves paragraphs.
164 """
165 text_width = max(self.width - self.current_indent, 11)
166 indent = ' ' * self.current_indent
167 self.write(wrap_text(text, text_width,
168 initial_indent=indent,
169 subsequent_indent=indent,
170 preserve_paragraphs=True))
171 self.write('\n')
172
173 def write_dl(self, rows, col_max=30, col_spacing=2):
174 """Writes a definition list into the buffer. This is how options
175 and commands are usually formatted.
176
177 :param rows: a list of two item tuples for the terms and values.
178 :param col_max: the maximum width of the first column.
179 :param col_spacing: the number of spaces between the first and
180 second column.
181 """
182 rows = list(rows)
183 widths = measure_table(rows)
184 if len(widths) != 2:
185 raise TypeError('Expected two columns for definition list')
186
187 first_col = min(widths[0], col_max) + col_spacing
188
189 for first, second in iter_rows(rows, len(widths)):
190 self.write('%*s%s' % (self.current_indent, '', first))
191 if not second:
192 self.write('\n')
193 continue
194 if term_len(first) <= first_col - col_spacing:
195 self.write(' ' * (first_col - term_len(first)))
196 else:
197 self.write('\n')
198 self.write(' ' * (first_col + self.current_indent))
199
200 text_width = max(self.width - first_col - 2, 10)
201 wrapped_text = wrap_text(second, text_width, preserve_paragraphs=True)
202 lines = wrapped_text.splitlines()
203
204 if lines:
205 self.write(lines[0] + '\n')
206
207 for line in lines[1:]:
208 self.write('%*s%s\n' % (first_col + self.current_indent, '', line))
209 else:
210 self.write('\n')
211
212 @contextmanager
213 def section(self, name):
214 """Helpful context manager that writes a paragraph, a heading,
215 and the indents.
216
217 :param name: the section name that is written as heading.
218 """
219 self.write_paragraph()
220 self.write_heading(name)
221 self.indent()
222 try:
223 yield
224 finally:
225 self.dedent()
226
227 @contextmanager
228 def indentation(self):
229 """A context manager that increases the indentation."""
230 self.indent()
231 try:
232 yield
233 finally:
234 self.dedent()
235
236 def getvalue(self):
237 """Returns the buffer contents."""
238 return ''.join(self.buffer)
239
240
241 def join_options(options):
242 """Given a list of option strings this joins them in the most appropriate
243 way and returns them in the form ``(formatted_string,
244 any_prefix_is_slash)`` where the second item in the tuple is a flag that
245 indicates if any of the option prefixes was a slash.
246 """
247 rv = []
248 any_prefix_is_slash = False
249 for opt in options:
250 prefix = split_opt(opt)[0]
251 if prefix == '/':
252 any_prefix_is_slash = True
253 rv.append((len(prefix), opt))
254
255 rv.sort(key=lambda x: x[0])
256
257 rv = ', '.join(x[1] for x in rv)
258 return rv, any_prefix_is_slash
259
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/click/formatting.py b/click/formatting.py
--- a/click/formatting.py
+++ b/click/formatting.py
@@ -206,6 +206,10 @@
for line in lines[1:]:
self.write('%*s%s\n' % (first_col + self.current_indent, '', line))
+
+ if len(lines) > 1:
+ # separate long help from next option
+ self.write("\n")
else:
self.write('\n')
|
{"golden_diff": "diff --git a/click/formatting.py b/click/formatting.py\n--- a/click/formatting.py\n+++ b/click/formatting.py\n@@ -206,6 +206,10 @@\n \n for line in lines[1:]:\n self.write('%*s%s\\n' % (first_col + self.current_indent, '', line))\n+\n+ if len(lines) > 1:\n+ # separate long help from next option\n+ self.write(\"\\n\")\n else:\n self.write('\\n')\n", "issue": "Add newlines between options in help text\nIn case of multi-line help messages for options the list of options ``cmd --help`` gets difficult to read. It would be great to have an option to toggle extra newlines after multi-line help messages / in general between option help messages.\n", "before_files": [{"content": "from contextlib import contextmanager\nfrom .termui import get_terminal_size\nfrom .parser import split_opt\nfrom ._compat import term_len\n\n\n# Can force a width. This is used by the test system\nFORCED_WIDTH = None\n\n\ndef measure_table(rows):\n widths = {}\n for row in rows:\n for idx, col in enumerate(row):\n widths[idx] = max(widths.get(idx, 0), term_len(col))\n return tuple(y for x, y in sorted(widths.items()))\n\n\ndef iter_rows(rows, col_count):\n for row in rows:\n row = tuple(row)\n yield row + ('',) * (col_count - len(row))\n\n\ndef wrap_text(text, width=78, initial_indent='', subsequent_indent='',\n preserve_paragraphs=False):\n \"\"\"A helper function that intelligently wraps text. By default, it\n assumes that it operates on a single paragraph of text but if the\n `preserve_paragraphs` parameter is provided it will intelligently\n handle paragraphs (defined by two empty lines).\n\n If paragraphs are handled, a paragraph can be prefixed with an empty\n line containing the ``\\\\b`` character (``\\\\x08``) to indicate that\n no rewrapping should happen in that block.\n\n :param text: the text that should be rewrapped.\n :param width: the maximum width for the text.\n :param initial_indent: the initial indent that should be placed on the\n first line as a string.\n :param subsequent_indent: the indent string that should be placed on\n each consecutive line.\n :param preserve_paragraphs: if this flag is set then the wrapping will\n intelligently handle paragraphs.\n \"\"\"\n from ._textwrap import TextWrapper\n text = text.expandtabs()\n wrapper = TextWrapper(width, initial_indent=initial_indent,\n subsequent_indent=subsequent_indent,\n replace_whitespace=False)\n if not preserve_paragraphs:\n return wrapper.fill(text)\n\n p = []\n buf = []\n indent = None\n\n def _flush_par():\n if not buf:\n return\n if buf[0].strip() == '\\b':\n p.append((indent or 0, True, '\\n'.join(buf[1:])))\n else:\n p.append((indent or 0, False, ' '.join(buf)))\n del buf[:]\n\n for line in text.splitlines():\n if not line:\n _flush_par()\n indent = None\n else:\n if indent is None:\n orig_len = term_len(line)\n line = line.lstrip()\n indent = orig_len - term_len(line)\n buf.append(line)\n _flush_par()\n\n rv = []\n for indent, raw, text in p:\n with wrapper.extra_indent(' ' * indent):\n if raw:\n rv.append(wrapper.indent_only(text))\n else:\n rv.append(wrapper.fill(text))\n\n return '\\n\\n'.join(rv)\n\n\nclass HelpFormatter(object):\n \"\"\"This class helps with formatting text-based help pages. It's\n usually just needed for very special internal cases, but it's also\n exposed so that developers can write their own fancy outputs.\n\n At present, it always writes into memory.\n\n :param indent_increment: the additional increment for each level.\n :param width: the width for the text. This defaults to the terminal\n width clamped to a maximum of 78.\n \"\"\"\n\n def __init__(self, indent_increment=2, width=None, max_width=None):\n self.indent_increment = indent_increment\n if max_width is None:\n max_width = 80\n if width is None:\n width = FORCED_WIDTH\n if width is None:\n width = max(min(get_terminal_size()[0], max_width) - 2, 50)\n self.width = width\n self.current_indent = 0\n self.buffer = []\n\n def write(self, string):\n \"\"\"Writes a unicode string into the internal buffer.\"\"\"\n self.buffer.append(string)\n\n def indent(self):\n \"\"\"Increases the indentation.\"\"\"\n self.current_indent += self.indent_increment\n\n def dedent(self):\n \"\"\"Decreases the indentation.\"\"\"\n self.current_indent -= self.indent_increment\n\n def write_usage(self, prog, args='', prefix='Usage: '):\n \"\"\"Writes a usage line into the buffer.\n\n :param prog: the program name.\n :param args: whitespace separated list of arguments.\n :param prefix: the prefix for the first line.\n \"\"\"\n usage_prefix = '%*s%s ' % (self.current_indent, prefix, prog)\n text_width = self.width - self.current_indent\n\n if text_width >= (term_len(usage_prefix) + 20):\n # The arguments will fit to the right of the prefix.\n indent = ' ' * term_len(usage_prefix)\n self.write(wrap_text(args, text_width,\n initial_indent=usage_prefix,\n subsequent_indent=indent))\n else:\n # The prefix is too long, put the arguments on the next line.\n self.write(usage_prefix)\n self.write('\\n')\n indent = ' ' * (max(self.current_indent, term_len(prefix)) + 4)\n self.write(wrap_text(args, text_width,\n initial_indent=indent,\n subsequent_indent=indent))\n\n self.write('\\n')\n\n def write_heading(self, heading):\n \"\"\"Writes a heading into the buffer.\"\"\"\n self.write('%*s%s:\\n' % (self.current_indent, '', heading))\n\n def write_paragraph(self):\n \"\"\"Writes a paragraph into the buffer.\"\"\"\n if self.buffer:\n self.write('\\n')\n\n def write_text(self, text):\n \"\"\"Writes re-indented text into the buffer. This rewraps and\n preserves paragraphs.\n \"\"\"\n text_width = max(self.width - self.current_indent, 11)\n indent = ' ' * self.current_indent\n self.write(wrap_text(text, text_width,\n initial_indent=indent,\n subsequent_indent=indent,\n preserve_paragraphs=True))\n self.write('\\n')\n\n def write_dl(self, rows, col_max=30, col_spacing=2):\n \"\"\"Writes a definition list into the buffer. This is how options\n and commands are usually formatted.\n\n :param rows: a list of two item tuples for the terms and values.\n :param col_max: the maximum width of the first column.\n :param col_spacing: the number of spaces between the first and\n second column.\n \"\"\"\n rows = list(rows)\n widths = measure_table(rows)\n if len(widths) != 2:\n raise TypeError('Expected two columns for definition list')\n\n first_col = min(widths[0], col_max) + col_spacing\n\n for first, second in iter_rows(rows, len(widths)):\n self.write('%*s%s' % (self.current_indent, '', first))\n if not second:\n self.write('\\n')\n continue\n if term_len(first) <= first_col - col_spacing:\n self.write(' ' * (first_col - term_len(first)))\n else:\n self.write('\\n')\n self.write(' ' * (first_col + self.current_indent))\n\n text_width = max(self.width - first_col - 2, 10)\n wrapped_text = wrap_text(second, text_width, preserve_paragraphs=True)\n lines = wrapped_text.splitlines()\n\n if lines:\n self.write(lines[0] + '\\n')\n\n for line in lines[1:]:\n self.write('%*s%s\\n' % (first_col + self.current_indent, '', line))\n else:\n self.write('\\n')\n\n @contextmanager\n def section(self, name):\n \"\"\"Helpful context manager that writes a paragraph, a heading,\n and the indents.\n\n :param name: the section name that is written as heading.\n \"\"\"\n self.write_paragraph()\n self.write_heading(name)\n self.indent()\n try:\n yield\n finally:\n self.dedent()\n\n @contextmanager\n def indentation(self):\n \"\"\"A context manager that increases the indentation.\"\"\"\n self.indent()\n try:\n yield\n finally:\n self.dedent()\n\n def getvalue(self):\n \"\"\"Returns the buffer contents.\"\"\"\n return ''.join(self.buffer)\n\n\ndef join_options(options):\n \"\"\"Given a list of option strings this joins them in the most appropriate\n way and returns them in the form ``(formatted_string,\n any_prefix_is_slash)`` where the second item in the tuple is a flag that\n indicates if any of the option prefixes was a slash.\n \"\"\"\n rv = []\n any_prefix_is_slash = False\n for opt in options:\n prefix = split_opt(opt)[0]\n if prefix == '/':\n any_prefix_is_slash = True\n rv.append((len(prefix), opt))\n\n rv.sort(key=lambda x: x[0])\n\n rv = ', '.join(x[1] for x in rv)\n return rv, any_prefix_is_slash\n", "path": "click/formatting.py"}], "after_files": [{"content": "from contextlib import contextmanager\nfrom .termui import get_terminal_size\nfrom .parser import split_opt\nfrom ._compat import term_len\n\n\n# Can force a width. This is used by the test system\nFORCED_WIDTH = None\n\n\ndef measure_table(rows):\n widths = {}\n for row in rows:\n for idx, col in enumerate(row):\n widths[idx] = max(widths.get(idx, 0), term_len(col))\n return tuple(y for x, y in sorted(widths.items()))\n\n\ndef iter_rows(rows, col_count):\n for row in rows:\n row = tuple(row)\n yield row + ('',) * (col_count - len(row))\n\n\ndef wrap_text(text, width=78, initial_indent='', subsequent_indent='',\n preserve_paragraphs=False):\n \"\"\"A helper function that intelligently wraps text. By default, it\n assumes that it operates on a single paragraph of text but if the\n `preserve_paragraphs` parameter is provided it will intelligently\n handle paragraphs (defined by two empty lines).\n\n If paragraphs are handled, a paragraph can be prefixed with an empty\n line containing the ``\\\\b`` character (``\\\\x08``) to indicate that\n no rewrapping should happen in that block.\n\n :param text: the text that should be rewrapped.\n :param width: the maximum width for the text.\n :param initial_indent: the initial indent that should be placed on the\n first line as a string.\n :param subsequent_indent: the indent string that should be placed on\n each consecutive line.\n :param preserve_paragraphs: if this flag is set then the wrapping will\n intelligently handle paragraphs.\n \"\"\"\n from ._textwrap import TextWrapper\n text = text.expandtabs()\n wrapper = TextWrapper(width, initial_indent=initial_indent,\n subsequent_indent=subsequent_indent,\n replace_whitespace=False)\n if not preserve_paragraphs:\n return wrapper.fill(text)\n\n p = []\n buf = []\n indent = None\n\n def _flush_par():\n if not buf:\n return\n if buf[0].strip() == '\\b':\n p.append((indent or 0, True, '\\n'.join(buf[1:])))\n else:\n p.append((indent or 0, False, ' '.join(buf)))\n del buf[:]\n\n for line in text.splitlines():\n if not line:\n _flush_par()\n indent = None\n else:\n if indent is None:\n orig_len = term_len(line)\n line = line.lstrip()\n indent = orig_len - term_len(line)\n buf.append(line)\n _flush_par()\n\n rv = []\n for indent, raw, text in p:\n with wrapper.extra_indent(' ' * indent):\n if raw:\n rv.append(wrapper.indent_only(text))\n else:\n rv.append(wrapper.fill(text))\n\n return '\\n\\n'.join(rv)\n\n\nclass HelpFormatter(object):\n \"\"\"This class helps with formatting text-based help pages. It's\n usually just needed for very special internal cases, but it's also\n exposed so that developers can write their own fancy outputs.\n\n At present, it always writes into memory.\n\n :param indent_increment: the additional increment for each level.\n :param width: the width for the text. This defaults to the terminal\n width clamped to a maximum of 78.\n \"\"\"\n\n def __init__(self, indent_increment=2, width=None, max_width=None):\n self.indent_increment = indent_increment\n if max_width is None:\n max_width = 80\n if width is None:\n width = FORCED_WIDTH\n if width is None:\n width = max(min(get_terminal_size()[0], max_width) - 2, 50)\n self.width = width\n self.current_indent = 0\n self.buffer = []\n\n def write(self, string):\n \"\"\"Writes a unicode string into the internal buffer.\"\"\"\n self.buffer.append(string)\n\n def indent(self):\n \"\"\"Increases the indentation.\"\"\"\n self.current_indent += self.indent_increment\n\n def dedent(self):\n \"\"\"Decreases the indentation.\"\"\"\n self.current_indent -= self.indent_increment\n\n def write_usage(self, prog, args='', prefix='Usage: '):\n \"\"\"Writes a usage line into the buffer.\n\n :param prog: the program name.\n :param args: whitespace separated list of arguments.\n :param prefix: the prefix for the first line.\n \"\"\"\n usage_prefix = '%*s%s ' % (self.current_indent, prefix, prog)\n text_width = self.width - self.current_indent\n\n if text_width >= (term_len(usage_prefix) + 20):\n # The arguments will fit to the right of the prefix.\n indent = ' ' * term_len(usage_prefix)\n self.write(wrap_text(args, text_width,\n initial_indent=usage_prefix,\n subsequent_indent=indent))\n else:\n # The prefix is too long, put the arguments on the next line.\n self.write(usage_prefix)\n self.write('\\n')\n indent = ' ' * (max(self.current_indent, term_len(prefix)) + 4)\n self.write(wrap_text(args, text_width,\n initial_indent=indent,\n subsequent_indent=indent))\n\n self.write('\\n')\n\n def write_heading(self, heading):\n \"\"\"Writes a heading into the buffer.\"\"\"\n self.write('%*s%s:\\n' % (self.current_indent, '', heading))\n\n def write_paragraph(self):\n \"\"\"Writes a paragraph into the buffer.\"\"\"\n if self.buffer:\n self.write('\\n')\n\n def write_text(self, text):\n \"\"\"Writes re-indented text into the buffer. This rewraps and\n preserves paragraphs.\n \"\"\"\n text_width = max(self.width - self.current_indent, 11)\n indent = ' ' * self.current_indent\n self.write(wrap_text(text, text_width,\n initial_indent=indent,\n subsequent_indent=indent,\n preserve_paragraphs=True))\n self.write('\\n')\n\n def write_dl(self, rows, col_max=30, col_spacing=2):\n \"\"\"Writes a definition list into the buffer. This is how options\n and commands are usually formatted.\n\n :param rows: a list of two item tuples for the terms and values.\n :param col_max: the maximum width of the first column.\n :param col_spacing: the number of spaces between the first and\n second column.\n \"\"\"\n rows = list(rows)\n widths = measure_table(rows)\n if len(widths) != 2:\n raise TypeError('Expected two columns for definition list')\n\n first_col = min(widths[0], col_max) + col_spacing\n\n for first, second in iter_rows(rows, len(widths)):\n self.write('%*s%s' % (self.current_indent, '', first))\n if not second:\n self.write('\\n')\n continue\n if term_len(first) <= first_col - col_spacing:\n self.write(' ' * (first_col - term_len(first)))\n else:\n self.write('\\n')\n self.write(' ' * (first_col + self.current_indent))\n\n text_width = max(self.width - first_col - 2, 10)\n wrapped_text = wrap_text(second, text_width, preserve_paragraphs=True)\n lines = wrapped_text.splitlines()\n\n if lines:\n self.write(lines[0] + '\\n')\n\n for line in lines[1:]:\n self.write('%*s%s\\n' % (first_col + self.current_indent, '', line))\n\n if len(lines) > 1:\n # separate long help from next option\n self.write(\"\\n\")\n else:\n self.write('\\n')\n\n @contextmanager\n def section(self, name):\n \"\"\"Helpful context manager that writes a paragraph, a heading,\n and the indents.\n\n :param name: the section name that is written as heading.\n \"\"\"\n self.write_paragraph()\n self.write_heading(name)\n self.indent()\n try:\n yield\n finally:\n self.dedent()\n\n @contextmanager\n def indentation(self):\n \"\"\"A context manager that increases the indentation.\"\"\"\n self.indent()\n try:\n yield\n finally:\n self.dedent()\n\n def getvalue(self):\n \"\"\"Returns the buffer contents.\"\"\"\n return ''.join(self.buffer)\n\n\ndef join_options(options):\n \"\"\"Given a list of option strings this joins them in the most appropriate\n way and returns them in the form ``(formatted_string,\n any_prefix_is_slash)`` where the second item in the tuple is a flag that\n indicates if any of the option prefixes was a slash.\n \"\"\"\n rv = []\n any_prefix_is_slash = False\n for opt in options:\n prefix = split_opt(opt)[0]\n if prefix == '/':\n any_prefix_is_slash = True\n rv.append((len(prefix), opt))\n\n rv.sort(key=lambda x: x[0])\n\n rv = ', '.join(x[1] for x in rv)\n return rv, any_prefix_is_slash\n", "path": "click/formatting.py"}]}
| 2,938 | 114 |
gh_patches_debug_3304
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-2699
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
0.50 bug sanic APM
Thanks for taking the time for reporting an issue!
Before reporting an issue on dd-trace-py, please be sure to provide all
necessary information.
If you're hitting a bug, make sure that you're using the latest version of this
library.
### Which version of dd-trace-py are you using?
0.50.0
### Which version of pip are you using?
19.0.3
### Which version of the libraries are you using?
Package Version
------------------- ---------
aiofiles 0.7.0
aiohttp 3.6.2
aiomysql 0.0.20
astroid 2.3.3
async-timeout 3.0.1
attrs 19.3.0
certifi 2019.9.11
cffi 1.13.2
chardet 3.0.4
Click 7.0
cryptography 2.8
ddtrace 0.50.0
Deprecated 1.2.11
elasticsearch 7.5.1
elasticsearch-async 6.2.0
h11 0.8.1
h2 3.1.1
hpack 3.0.0
hstspreload 2020.1.7
httpcore 0.3.0
httptools 0.0.13
httpx 0.9.3
hyperframe 5.2.0
idna 2.8
isort 4.3.21
lazy-object-proxy 1.4.3
mccabe 0.6.1
motor 2.4.0
multidict 5.1.0
packaging 21.0
peewee 3.13.1
pip 19.0.3
protobuf 3.17.3
pycparser 2.19
PyJWT 1.7.1
pymongo 3.11.4
PyMySQL 0.9.2
pyparsing 2.4.7
pytz 2019.3
PyYAML 5.3
requests 2.22.0
requests-async 0.5.0
rfc3986 1.3.2
sanic 21.3.4
sanic-motor 0.5.0
sanic-routing 0.6.2
sanic-scheduler 1.0.7
setuptools 40.8.0
six 1.14.0
sniffio 1.1.0
stringcase 1.2.0
tenacity 8.0.1
typed-ast 1.4.1
ujson 1.35
urllib3 1.25.6
uvloop 0.13.0
websockets 8.1
wrapt 1.11.2
yarl 1.4.2
### How can we reproduce your problem?
#### Description
It's not working patch when apply APM on Sanic
If path variable type is int on sanic route
Case code
```
@app.route('/<gam_id:int>/slot/count', methods=['GET'])
async def slot_count(request, gam_id):
try:
pass
except Exception as e:
abort(500, e)
return json(response(200, 'Complete Successfully', {}))
```
Error
```
[2021-07-13 19:50:48 +0000] [13] [ERROR] Exception occurred while handling uri: 'http://xxxxxxxxx.xxx/25/slot/count'
NoneType: None
```
### What is the result that you get?
my production env is not working on Sanic
### What is the result that you expected?
I wanna use datadog APM on my production SANIC
I made already pull request
- https://github.com/DataDog/dd-trace-py/pull/2662
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/sanic/patch.py`
Content:
```
1 import asyncio
2
3 import sanic
4
5 import ddtrace
6 from ddtrace import config
7 from ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY
8 from ddtrace.ext import SpanTypes
9 from ddtrace.pin import Pin
10 from ddtrace.utils.wrappers import unwrap as _u
11 from ddtrace.vendor import wrapt
12 from ddtrace.vendor.wrapt import wrap_function_wrapper as _w
13
14 from .. import trace_utils
15 from ...internal.logger import get_logger
16
17
18 log = get_logger(__name__)
19
20 config._add("sanic", dict(_default_service="sanic", distributed_tracing=True))
21
22 SANIC_PRE_21 = None
23
24
25 def update_span(span, response):
26 if isinstance(response, sanic.response.BaseHTTPResponse):
27 status_code = response.status
28 response_headers = response.headers
29 else:
30 # invalid response causes ServerError exception which must be handled
31 status_code = 500
32 response_headers = None
33 trace_utils.set_http_meta(span, config.sanic, status_code=status_code, response_headers=response_headers)
34
35
36 def _wrap_response_callback(span, callback):
37 # Only for sanic 20 and older
38 # Wrap response callbacks (either sync or async function) to set HTTP
39 # response span tags
40
41 @wrapt.function_wrapper
42 def wrap_sync(wrapped, instance, args, kwargs):
43 r = wrapped(*args, **kwargs)
44 response = args[0]
45 update_span(span, response)
46 return r
47
48 @wrapt.function_wrapper
49 async def wrap_async(wrapped, instance, args, kwargs):
50 r = await wrapped(*args, **kwargs)
51 response = args[0]
52 update_span(span, response)
53 return r
54
55 if asyncio.iscoroutinefunction(callback):
56 return wrap_async(callback)
57
58 return wrap_sync(callback)
59
60
61 async def patch_request_respond(wrapped, instance, args, kwargs):
62 # Only for sanic 21 and newer
63 # Wrap the framework response to set HTTP response span tags
64 response = await wrapped(*args, **kwargs)
65 pin = Pin._find(instance.ctx)
66 if pin is not None and pin.enabled():
67 span = pin.tracer.current_span()
68 if span is not None:
69 update_span(span, response)
70 return response
71
72
73 def _get_path(request):
74 """Get path and replace path parameter values with names if route exists."""
75 path = request.path
76 try:
77 match_info = request.match_info
78 except sanic.exceptions.SanicException:
79 return path
80 for key, value in match_info.items():
81 path = path.replace(value, f"<{key}>")
82 return path
83
84
85 async def patch_run_request_middleware(wrapped, instance, args, kwargs):
86 # Set span resource from the framework request
87 request = args[0]
88 pin = Pin._find(request.ctx)
89 if pin is not None and pin.enabled():
90 span = pin.tracer.current_span()
91 if span is not None:
92 span.resource = "{} {}".format(request.method, _get_path(request))
93 return await wrapped(*args, **kwargs)
94
95
96 def patch():
97 """Patch the instrumented methods."""
98 global SANIC_PRE_21
99
100 if getattr(sanic, "__datadog_patch", False):
101 return
102 setattr(sanic, "__datadog_patch", True)
103
104 SANIC_PRE_21 = sanic.__version__[:2] < "21"
105
106 _w("sanic", "Sanic.handle_request", patch_handle_request)
107 if not SANIC_PRE_21:
108 _w("sanic", "Sanic._run_request_middleware", patch_run_request_middleware)
109 _w(sanic.request, "Request.respond", patch_request_respond)
110
111
112 def unpatch():
113 """Unpatch the instrumented methods."""
114 _u(sanic.Sanic, "handle_request")
115 if not SANIC_PRE_21:
116 _u(sanic.Sanic, "_run_request_middleware")
117 _u(sanic.request.Request, "respond")
118 if not getattr(sanic, "__datadog_patch", False):
119 return
120 setattr(sanic, "__datadog_patch", False)
121
122
123 async def patch_handle_request(wrapped, instance, args, kwargs):
124 """Wrapper for Sanic.handle_request"""
125
126 def unwrap(request, write_callback=None, stream_callback=None, **kwargs):
127 return request, write_callback, stream_callback, kwargs
128
129 request, write_callback, stream_callback, new_kwargs = unwrap(*args, **kwargs)
130
131 if request.scheme not in ("http", "https"):
132 return await wrapped(*args, **kwargs)
133
134 pin = Pin()
135 if SANIC_PRE_21:
136 # Set span resource from the framework request
137 resource = "{} {}".format(request.method, _get_path(request))
138 else:
139 # The path is not available anymore in 21.x. Get it from
140 # the _run_request_middleware instrumented method.
141 resource = None
142 pin.onto(request.ctx)
143
144 headers = request.headers.copy()
145
146 trace_utils.activate_distributed_headers(ddtrace.tracer, int_config=config.sanic, request_headers=headers)
147
148 with pin.tracer.trace(
149 "sanic.request",
150 service=trace_utils.int_service(None, config.sanic),
151 resource=resource,
152 span_type=SpanTypes.WEB,
153 ) as span:
154 sample_rate = config.sanic.get_analytics_sample_rate(use_global_config=True)
155 if sample_rate is not None:
156 span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)
157
158 method = request.method
159 url = "{scheme}://{host}{path}".format(scheme=request.scheme, host=request.host, path=request.path)
160 query_string = request.query_string
161 if isinstance(query_string, bytes):
162 query_string = query_string.decode()
163 trace_utils.set_http_meta(
164 span, config.sanic, method=method, url=url, query=query_string, request_headers=headers
165 )
166
167 if write_callback is not None:
168 new_kwargs["write_callback"] = _wrap_response_callback(span, write_callback)
169 if stream_callback is not None:
170 new_kwargs["stream_callback"] = _wrap_response_callback(span, stream_callback)
171
172 return await wrapped(request, **new_kwargs)
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ddtrace/contrib/sanic/patch.py b/ddtrace/contrib/sanic/patch.py
--- a/ddtrace/contrib/sanic/patch.py
+++ b/ddtrace/contrib/sanic/patch.py
@@ -78,6 +78,11 @@
except sanic.exceptions.SanicException:
return path
for key, value in match_info.items():
+ try:
+ value = str(value)
+ except Exception:
+ # Best effort
+ continue
path = path.replace(value, f"<{key}>")
return path
|
{"golden_diff": "diff --git a/ddtrace/contrib/sanic/patch.py b/ddtrace/contrib/sanic/patch.py\n--- a/ddtrace/contrib/sanic/patch.py\n+++ b/ddtrace/contrib/sanic/patch.py\n@@ -78,6 +78,11 @@\n except sanic.exceptions.SanicException:\n return path\n for key, value in match_info.items():\n+ try:\n+ value = str(value)\n+ except Exception:\n+ # Best effort\n+ continue\n path = path.replace(value, f\"<{key}>\")\n return path\n", "issue": "0.50 bug sanic APM\nThanks for taking the time for reporting an issue!\r\n\r\nBefore reporting an issue on dd-trace-py, please be sure to provide all\r\nnecessary information.\r\n\r\nIf you're hitting a bug, make sure that you're using the latest version of this\r\nlibrary.\r\n\r\n### Which version of dd-trace-py are you using?\r\n0.50.0\r\n\r\n### Which version of pip are you using?\r\n19.0.3\r\n\r\n### Which version of the libraries are you using?\r\nPackage Version\r\n------------------- ---------\r\naiofiles 0.7.0\r\naiohttp 3.6.2\r\naiomysql 0.0.20\r\nastroid 2.3.3\r\nasync-timeout 3.0.1\r\nattrs 19.3.0\r\ncertifi 2019.9.11\r\ncffi 1.13.2\r\nchardet 3.0.4\r\nClick 7.0\r\ncryptography 2.8\r\nddtrace 0.50.0\r\nDeprecated 1.2.11\r\nelasticsearch 7.5.1\r\nelasticsearch-async 6.2.0\r\nh11 0.8.1\r\nh2 3.1.1\r\nhpack 3.0.0\r\nhstspreload 2020.1.7\r\nhttpcore 0.3.0\r\nhttptools 0.0.13\r\nhttpx 0.9.3\r\nhyperframe 5.2.0\r\nidna 2.8\r\nisort 4.3.21\r\nlazy-object-proxy 1.4.3\r\nmccabe 0.6.1\r\nmotor 2.4.0\r\nmultidict 5.1.0\r\npackaging 21.0\r\npeewee 3.13.1\r\npip 19.0.3\r\nprotobuf 3.17.3\r\npycparser 2.19\r\nPyJWT 1.7.1\r\npymongo 3.11.4\r\nPyMySQL 0.9.2\r\npyparsing 2.4.7\r\npytz 2019.3\r\nPyYAML 5.3\r\nrequests 2.22.0\r\nrequests-async 0.5.0\r\nrfc3986 1.3.2\r\nsanic 21.3.4\r\nsanic-motor 0.5.0\r\nsanic-routing 0.6.2\r\nsanic-scheduler 1.0.7\r\nsetuptools 40.8.0\r\nsix 1.14.0\r\nsniffio 1.1.0\r\nstringcase 1.2.0\r\ntenacity 8.0.1\r\ntyped-ast 1.4.1\r\nujson 1.35\r\nurllib3 1.25.6\r\nuvloop 0.13.0\r\nwebsockets 8.1\r\nwrapt 1.11.2\r\nyarl 1.4.2\r\n\r\n### How can we reproduce your problem?\r\n#### Description\r\nIt's not working patch when apply APM on Sanic\r\nIf path variable type is int on sanic route\r\n\r\nCase code\r\n```\r\[email protected]('/<gam_id:int>/slot/count', methods=['GET'])\r\nasync def slot_count(request, gam_id):\r\n try:\r\n pass\r\n except Exception as e:\r\n abort(500, e)\r\n return json(response(200, 'Complete Successfully', {}))\r\n\r\n```\r\n\r\nError\r\n```\r\n[2021-07-13 19:50:48 +0000] [13] [ERROR] Exception occurred while handling uri: 'http://xxxxxxxxx.xxx/25/slot/count'\r\nNoneType: None\r\n\r\n```\r\n\r\n### What is the result that you get?\r\nmy production env is not working on Sanic\r\n\r\n\r\n### What is the result that you expected?\r\nI wanna use datadog APM on my production SANIC\r\n\r\nI made already pull request\r\n- https://github.com/DataDog/dd-trace-py/pull/2662\r\n\n", "before_files": [{"content": "import asyncio\n\nimport sanic\n\nimport ddtrace\nfrom ddtrace import config\nfrom ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ddtrace.ext import SpanTypes\nfrom ddtrace.pin import Pin\nfrom ddtrace.utils.wrappers import unwrap as _u\nfrom ddtrace.vendor import wrapt\nfrom ddtrace.vendor.wrapt import wrap_function_wrapper as _w\n\nfrom .. import trace_utils\nfrom ...internal.logger import get_logger\n\n\nlog = get_logger(__name__)\n\nconfig._add(\"sanic\", dict(_default_service=\"sanic\", distributed_tracing=True))\n\nSANIC_PRE_21 = None\n\n\ndef update_span(span, response):\n if isinstance(response, sanic.response.BaseHTTPResponse):\n status_code = response.status\n response_headers = response.headers\n else:\n # invalid response causes ServerError exception which must be handled\n status_code = 500\n response_headers = None\n trace_utils.set_http_meta(span, config.sanic, status_code=status_code, response_headers=response_headers)\n\n\ndef _wrap_response_callback(span, callback):\n # Only for sanic 20 and older\n # Wrap response callbacks (either sync or async function) to set HTTP\n # response span tags\n\n @wrapt.function_wrapper\n def wrap_sync(wrapped, instance, args, kwargs):\n r = wrapped(*args, **kwargs)\n response = args[0]\n update_span(span, response)\n return r\n\n @wrapt.function_wrapper\n async def wrap_async(wrapped, instance, args, kwargs):\n r = await wrapped(*args, **kwargs)\n response = args[0]\n update_span(span, response)\n return r\n\n if asyncio.iscoroutinefunction(callback):\n return wrap_async(callback)\n\n return wrap_sync(callback)\n\n\nasync def patch_request_respond(wrapped, instance, args, kwargs):\n # Only for sanic 21 and newer\n # Wrap the framework response to set HTTP response span tags\n response = await wrapped(*args, **kwargs)\n pin = Pin._find(instance.ctx)\n if pin is not None and pin.enabled():\n span = pin.tracer.current_span()\n if span is not None:\n update_span(span, response)\n return response\n\n\ndef _get_path(request):\n \"\"\"Get path and replace path parameter values with names if route exists.\"\"\"\n path = request.path\n try:\n match_info = request.match_info\n except sanic.exceptions.SanicException:\n return path\n for key, value in match_info.items():\n path = path.replace(value, f\"<{key}>\")\n return path\n\n\nasync def patch_run_request_middleware(wrapped, instance, args, kwargs):\n # Set span resource from the framework request\n request = args[0]\n pin = Pin._find(request.ctx)\n if pin is not None and pin.enabled():\n span = pin.tracer.current_span()\n if span is not None:\n span.resource = \"{} {}\".format(request.method, _get_path(request))\n return await wrapped(*args, **kwargs)\n\n\ndef patch():\n \"\"\"Patch the instrumented methods.\"\"\"\n global SANIC_PRE_21\n\n if getattr(sanic, \"__datadog_patch\", False):\n return\n setattr(sanic, \"__datadog_patch\", True)\n\n SANIC_PRE_21 = sanic.__version__[:2] < \"21\"\n\n _w(\"sanic\", \"Sanic.handle_request\", patch_handle_request)\n if not SANIC_PRE_21:\n _w(\"sanic\", \"Sanic._run_request_middleware\", patch_run_request_middleware)\n _w(sanic.request, \"Request.respond\", patch_request_respond)\n\n\ndef unpatch():\n \"\"\"Unpatch the instrumented methods.\"\"\"\n _u(sanic.Sanic, \"handle_request\")\n if not SANIC_PRE_21:\n _u(sanic.Sanic, \"_run_request_middleware\")\n _u(sanic.request.Request, \"respond\")\n if not getattr(sanic, \"__datadog_patch\", False):\n return\n setattr(sanic, \"__datadog_patch\", False)\n\n\nasync def patch_handle_request(wrapped, instance, args, kwargs):\n \"\"\"Wrapper for Sanic.handle_request\"\"\"\n\n def unwrap(request, write_callback=None, stream_callback=None, **kwargs):\n return request, write_callback, stream_callback, kwargs\n\n request, write_callback, stream_callback, new_kwargs = unwrap(*args, **kwargs)\n\n if request.scheme not in (\"http\", \"https\"):\n return await wrapped(*args, **kwargs)\n\n pin = Pin()\n if SANIC_PRE_21:\n # Set span resource from the framework request\n resource = \"{} {}\".format(request.method, _get_path(request))\n else:\n # The path is not available anymore in 21.x. Get it from\n # the _run_request_middleware instrumented method.\n resource = None\n pin.onto(request.ctx)\n\n headers = request.headers.copy()\n\n trace_utils.activate_distributed_headers(ddtrace.tracer, int_config=config.sanic, request_headers=headers)\n\n with pin.tracer.trace(\n \"sanic.request\",\n service=trace_utils.int_service(None, config.sanic),\n resource=resource,\n span_type=SpanTypes.WEB,\n ) as span:\n sample_rate = config.sanic.get_analytics_sample_rate(use_global_config=True)\n if sample_rate is not None:\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)\n\n method = request.method\n url = \"{scheme}://{host}{path}\".format(scheme=request.scheme, host=request.host, path=request.path)\n query_string = request.query_string\n if isinstance(query_string, bytes):\n query_string = query_string.decode()\n trace_utils.set_http_meta(\n span, config.sanic, method=method, url=url, query=query_string, request_headers=headers\n )\n\n if write_callback is not None:\n new_kwargs[\"write_callback\"] = _wrap_response_callback(span, write_callback)\n if stream_callback is not None:\n new_kwargs[\"stream_callback\"] = _wrap_response_callback(span, stream_callback)\n\n return await wrapped(request, **new_kwargs)\n", "path": "ddtrace/contrib/sanic/patch.py"}], "after_files": [{"content": "import asyncio\n\nimport sanic\n\nimport ddtrace\nfrom ddtrace import config\nfrom ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ddtrace.ext import SpanTypes\nfrom ddtrace.pin import Pin\nfrom ddtrace.utils.wrappers import unwrap as _u\nfrom ddtrace.vendor import wrapt\nfrom ddtrace.vendor.wrapt import wrap_function_wrapper as _w\n\nfrom .. import trace_utils\nfrom ...internal.logger import get_logger\n\n\nlog = get_logger(__name__)\n\nconfig._add(\"sanic\", dict(_default_service=\"sanic\", distributed_tracing=True))\n\nSANIC_PRE_21 = None\n\n\ndef update_span(span, response):\n if isinstance(response, sanic.response.BaseHTTPResponse):\n status_code = response.status\n response_headers = response.headers\n else:\n # invalid response causes ServerError exception which must be handled\n status_code = 500\n response_headers = None\n trace_utils.set_http_meta(span, config.sanic, status_code=status_code, response_headers=response_headers)\n\n\ndef _wrap_response_callback(span, callback):\n # Only for sanic 20 and older\n # Wrap response callbacks (either sync or async function) to set HTTP\n # response span tags\n\n @wrapt.function_wrapper\n def wrap_sync(wrapped, instance, args, kwargs):\n r = wrapped(*args, **kwargs)\n response = args[0]\n update_span(span, response)\n return r\n\n @wrapt.function_wrapper\n async def wrap_async(wrapped, instance, args, kwargs):\n r = await wrapped(*args, **kwargs)\n response = args[0]\n update_span(span, response)\n return r\n\n if asyncio.iscoroutinefunction(callback):\n return wrap_async(callback)\n\n return wrap_sync(callback)\n\n\nasync def patch_request_respond(wrapped, instance, args, kwargs):\n # Only for sanic 21 and newer\n # Wrap the framework response to set HTTP response span tags\n response = await wrapped(*args, **kwargs)\n pin = Pin._find(instance.ctx)\n if pin is not None and pin.enabled():\n span = pin.tracer.current_span()\n if span is not None:\n update_span(span, response)\n return response\n\n\ndef _get_path(request):\n \"\"\"Get path and replace path parameter values with names if route exists.\"\"\"\n path = request.path\n try:\n match_info = request.match_info\n except sanic.exceptions.SanicException:\n return path\n for key, value in match_info.items():\n try:\n value = str(value)\n except Exception:\n # Best effort\n continue\n path = path.replace(value, f\"<{key}>\")\n return path\n\n\nasync def patch_run_request_middleware(wrapped, instance, args, kwargs):\n # Set span resource from the framework request\n request = args[0]\n pin = Pin._find(request.ctx)\n if pin is not None and pin.enabled():\n span = pin.tracer.current_span()\n if span is not None:\n span.resource = \"{} {}\".format(request.method, _get_path(request))\n return await wrapped(*args, **kwargs)\n\n\ndef patch():\n \"\"\"Patch the instrumented methods.\"\"\"\n global SANIC_PRE_21\n\n if getattr(sanic, \"__datadog_patch\", False):\n return\n setattr(sanic, \"__datadog_patch\", True)\n\n SANIC_PRE_21 = sanic.__version__[:2] < \"21\"\n\n _w(\"sanic\", \"Sanic.handle_request\", patch_handle_request)\n if not SANIC_PRE_21:\n _w(\"sanic\", \"Sanic._run_request_middleware\", patch_run_request_middleware)\n _w(sanic.request, \"Request.respond\", patch_request_respond)\n\n\ndef unpatch():\n \"\"\"Unpatch the instrumented methods.\"\"\"\n _u(sanic.Sanic, \"handle_request\")\n if not SANIC_PRE_21:\n _u(sanic.Sanic, \"_run_request_middleware\")\n _u(sanic.request.Request, \"respond\")\n if not getattr(sanic, \"__datadog_patch\", False):\n return\n setattr(sanic, \"__datadog_patch\", False)\n\n\nasync def patch_handle_request(wrapped, instance, args, kwargs):\n \"\"\"Wrapper for Sanic.handle_request\"\"\"\n\n def unwrap(request, write_callback=None, stream_callback=None, **kwargs):\n return request, write_callback, stream_callback, kwargs\n\n request, write_callback, stream_callback, new_kwargs = unwrap(*args, **kwargs)\n\n if request.scheme not in (\"http\", \"https\"):\n return await wrapped(*args, **kwargs)\n\n pin = Pin()\n if SANIC_PRE_21:\n # Set span resource from the framework request\n resource = \"{} {}\".format(request.method, _get_path(request))\n else:\n # The path is not available anymore in 21.x. Get it from\n # the _run_request_middleware instrumented method.\n resource = None\n pin.onto(request.ctx)\n\n headers = request.headers.copy()\n\n trace_utils.activate_distributed_headers(ddtrace.tracer, int_config=config.sanic, request_headers=headers)\n\n with pin.tracer.trace(\n \"sanic.request\",\n service=trace_utils.int_service(None, config.sanic),\n resource=resource,\n span_type=SpanTypes.WEB,\n ) as span:\n sample_rate = config.sanic.get_analytics_sample_rate(use_global_config=True)\n if sample_rate is not None:\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)\n\n method = request.method\n url = \"{scheme}://{host}{path}\".format(scheme=request.scheme, host=request.host, path=request.path)\n query_string = request.query_string\n if isinstance(query_string, bytes):\n query_string = query_string.decode()\n trace_utils.set_http_meta(\n span, config.sanic, method=method, url=url, query=query_string, request_headers=headers\n )\n\n if write_callback is not None:\n new_kwargs[\"write_callback\"] = _wrap_response_callback(span, write_callback)\n if stream_callback is not None:\n new_kwargs[\"stream_callback\"] = _wrap_response_callback(span, stream_callback)\n\n return await wrapped(request, **new_kwargs)\n", "path": "ddtrace/contrib/sanic/patch.py"}]}
| 2,997 | 127 |
gh_patches_debug_14171
|
rasdani/github-patches
|
git_diff
|
sql-machine-learning__elasticdl-373
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
When worker/master image creation failed, client should fail instead of trying to launch master.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/client/client.py`
Content:
```
1 import argparse
2 import os
3 import inspect
4 import tempfile
5 import time
6 import getpass
7 import sys
8 from string import Template
9 import docker
10 import yaml
11 from kubernetes.client.apis import core_v1_api
12 from kubernetes import config
13
14
15 def _m_file_in_docker(model_file):
16 return "/model/" + os.path.basename(model_file)
17
18 def _build_docker_image(
19 m_file, image_name, image_base="elasticdl:dev",
20 repository=None
21 ):
22 DOCKER_TEMPLATE = """
23 FROM {}
24 COPY {} {}
25 """
26
27 with tempfile.NamedTemporaryFile(mode="w+", delete=False) as df:
28 df.write(DOCKER_TEMPLATE.format(image_base, m_file, _m_file_in_docker(m_file)))
29
30 client = docker.APIClient(base_url="unix://var/run/docker.sock")
31 print("===== Building Docker Image =====")
32 for line in client.build(
33 dockerfile=df.name, path=".", rm=True, tag=image_name, decode=True
34 ):
35 text = line.get("stream", None)
36 if text:
37 sys.stdout.write(text)
38 sys.stdout.flush()
39 print("===== Docker Image Built =====")
40 if repository != None:
41 for line in client.push(image_name, stream=True, decode=True):
42 print(line)
43
44 def _gen_master_def(image_name, model_file, job_name, argv):
45 master_yaml = """
46 apiVersion: v1
47 kind: Pod
48 metadata:
49 name: "elasticdl-master-{job_name}"
50 labels:
51 purpose: test-command
52 spec:
53 containers:
54 - name: "elasticdl-master-{job_name}"
55 image: "{image_name}"
56 command: ["python"]
57 args: [
58 "-m", "elasticdl.master.main",
59 "--job_name", "{job_name}",
60 "--worker_image", "{image_name}",
61 "--model_file", "{m_file}"
62 ]
63 imagePullPolicy: IfNotPresent
64 env:
65 - name: MY_POD_IP
66 valueFrom:
67 fieldRef:
68 fieldPath: status.podIP
69 restartPolicy: Never
70 """ .format(m_file=_m_file_in_docker(model_file), image_name=image_name, job_name=job_name)
71
72 master_def = yaml.safe_load(master_yaml)
73
74 # Build master arguments
75 master_def['spec']['containers'][0]['args'].extend(argv)
76 return master_def
77
78 def _submit(image_name, model_file, job_name, argv):
79 master_def = _gen_master_def(image_name, model_file, job_name, argv)
80 config.load_kube_config()
81 api = core_v1_api.CoreV1Api()
82 resp = api.create_namespaced_pod(body=master_def, namespace="default")
83 print("Master launched. status='%s'" % str(resp.status))
84
85 def main():
86 parser = argparse.ArgumentParser(description="ElasticDL Client")
87 # Rewrite model_file argument and pass all other arguments to master.
88 parser.add_argument("--model_file", help="Path to Model file", required=True)
89 parser.add_argument("--image_base", help="Base image containing elasticdl runtime environment.", required=True)
90 parser.add_argument("--repository", help="The repository to push docker image to.")
91 parser.add_argument("--job_name", help="ElasticDL job name", required=True)
92 args, argv = parser.parse_known_args()
93
94 job_name = args.job_name + "-" + str(int(round(time.time() * 1000)))
95 image_name = args.image_base + '_' + job_name
96 _build_docker_image(args.model_file, image_name, image_base=args.image_base,
97 repository=args.repository)
98 _submit(image_name, args.model_file, job_name, argv)
99
100
101 if __name__ == "__main__":
102 main()
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/elasticdl/client/client.py b/elasticdl/client/client.py
--- a/elasticdl/client/client.py
+++ b/elasticdl/client/client.py
@@ -32,12 +32,14 @@
for line in client.build(
dockerfile=df.name, path=".", rm=True, tag=image_name, decode=True
):
+ if "error" in line:
+ raise RuntimeError("Docker image build failure: " % line["error"])
text = line.get("stream", None)
if text:
sys.stdout.write(text)
sys.stdout.flush()
print("===== Docker Image Built =====")
- if repository != None:
+ if repository is not None:
for line in client.push(image_name, stream=True, decode=True):
print(line)
|
{"golden_diff": "diff --git a/elasticdl/client/client.py b/elasticdl/client/client.py\n--- a/elasticdl/client/client.py\n+++ b/elasticdl/client/client.py\n@@ -32,12 +32,14 @@\n for line in client.build(\n dockerfile=df.name, path=\".\", rm=True, tag=image_name, decode=True\n ):\n+ if \"error\" in line:\n+ raise RuntimeError(\"Docker image build failure: \" % line[\"error\"])\n text = line.get(\"stream\", None)\n if text:\n sys.stdout.write(text)\n sys.stdout.flush()\n print(\"===== Docker Image Built =====\")\n- if repository != None:\n+ if repository is not None:\n for line in client.push(image_name, stream=True, decode=True):\n print(line)\n", "issue": "When worker/master image creation failed, client should fail instead of trying to launch master.\n\n", "before_files": [{"content": "import argparse\nimport os\nimport inspect\nimport tempfile\nimport time\nimport getpass\nimport sys\nfrom string import Template\nimport docker\nimport yaml\nfrom kubernetes.client.apis import core_v1_api\nfrom kubernetes import config\n\n\ndef _m_file_in_docker(model_file):\n return \"/model/\" + os.path.basename(model_file)\n\ndef _build_docker_image(\n m_file, image_name, image_base=\"elasticdl:dev\",\n repository=None\n):\n DOCKER_TEMPLATE = \"\"\"\nFROM {}\nCOPY {} {}\n\"\"\"\n\n with tempfile.NamedTemporaryFile(mode=\"w+\", delete=False) as df:\n df.write(DOCKER_TEMPLATE.format(image_base, m_file, _m_file_in_docker(m_file)))\n\n client = docker.APIClient(base_url=\"unix://var/run/docker.sock\")\n print(\"===== Building Docker Image =====\")\n for line in client.build(\n dockerfile=df.name, path=\".\", rm=True, tag=image_name, decode=True\n ):\n text = line.get(\"stream\", None)\n if text:\n sys.stdout.write(text)\n sys.stdout.flush()\n print(\"===== Docker Image Built =====\")\n if repository != None:\n for line in client.push(image_name, stream=True, decode=True):\n print(line)\n\ndef _gen_master_def(image_name, model_file, job_name, argv):\n master_yaml = \"\"\"\napiVersion: v1\nkind: Pod\nmetadata:\n name: \"elasticdl-master-{job_name}\"\n labels:\n purpose: test-command\nspec:\n containers:\n - name: \"elasticdl-master-{job_name}\"\n image: \"{image_name}\"\n command: [\"python\"]\n args: [\n \"-m\", \"elasticdl.master.main\",\n \"--job_name\", \"{job_name}\",\n \"--worker_image\", \"{image_name}\",\n \"--model_file\", \"{m_file}\"\n ]\n imagePullPolicy: IfNotPresent \n env:\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n restartPolicy: Never\n\"\"\" .format(m_file=_m_file_in_docker(model_file), image_name=image_name, job_name=job_name)\n\n master_def = yaml.safe_load(master_yaml)\n\n # Build master arguments\n master_def['spec']['containers'][0]['args'].extend(argv)\n return master_def\n\ndef _submit(image_name, model_file, job_name, argv):\n master_def = _gen_master_def(image_name, model_file, job_name, argv)\n config.load_kube_config()\n api = core_v1_api.CoreV1Api()\n resp = api.create_namespaced_pod(body=master_def, namespace=\"default\")\n print(\"Master launched. status='%s'\" % str(resp.status))\n\ndef main():\n parser = argparse.ArgumentParser(description=\"ElasticDL Client\")\n # Rewrite model_file argument and pass all other arguments to master.\n parser.add_argument(\"--model_file\", help=\"Path to Model file\", required=True)\n parser.add_argument(\"--image_base\", help=\"Base image containing elasticdl runtime environment.\", required=True)\n parser.add_argument(\"--repository\", help=\"The repository to push docker image to.\")\n parser.add_argument(\"--job_name\", help=\"ElasticDL job name\", required=True)\n args, argv = parser.parse_known_args()\n\n job_name = args.job_name + \"-\" + str(int(round(time.time() * 1000)))\n image_name = args.image_base + '_' + job_name \n _build_docker_image(args.model_file, image_name, image_base=args.image_base,\n repository=args.repository)\n _submit(image_name, args.model_file, job_name, argv)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "elasticdl/client/client.py"}], "after_files": [{"content": "import argparse\nimport os\nimport inspect\nimport tempfile\nimport time\nimport getpass\nimport sys\nfrom string import Template\nimport docker\nimport yaml\nfrom kubernetes.client.apis import core_v1_api\nfrom kubernetes import config\n\n\ndef _m_file_in_docker(model_file):\n return \"/model/\" + os.path.basename(model_file)\n\ndef _build_docker_image(\n m_file, image_name, image_base=\"elasticdl:dev\",\n repository=None\n):\n DOCKER_TEMPLATE = \"\"\"\nFROM {}\nCOPY {} {}\n\"\"\"\n\n with tempfile.NamedTemporaryFile(mode=\"w+\", delete=False) as df:\n df.write(DOCKER_TEMPLATE.format(image_base, m_file, _m_file_in_docker(m_file)))\n\n client = docker.APIClient(base_url=\"unix://var/run/docker.sock\")\n print(\"===== Building Docker Image =====\")\n for line in client.build(\n dockerfile=df.name, path=\".\", rm=True, tag=image_name, decode=True\n ):\n if \"error\" in line:\n raise RuntimeError(\"Docker image build failure: \" % line[\"error\"])\n text = line.get(\"stream\", None)\n if text:\n sys.stdout.write(text)\n sys.stdout.flush()\n print(\"===== Docker Image Built =====\")\n if repository is not None:\n for line in client.push(image_name, stream=True, decode=True):\n print(line)\n\ndef _gen_master_def(image_name, model_file, job_name, argv):\n master_yaml = \"\"\"\napiVersion: v1\nkind: Pod\nmetadata:\n name: \"elasticdl-master-{job_name}\"\n labels:\n purpose: test-command\nspec:\n containers:\n - name: \"elasticdl-master-{job_name}\"\n image: \"{image_name}\"\n command: [\"python\"]\n args: [\n \"-m\", \"elasticdl.master.main\",\n \"--job_name\", \"{job_name}\",\n \"--worker_image\", \"{image_name}\",\n \"--model_file\", \"{m_file}\"\n ]\n imagePullPolicy: IfNotPresent \n env:\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n restartPolicy: Never\n\"\"\" .format(m_file=_m_file_in_docker(model_file), image_name=image_name, job_name=job_name)\n\n master_def = yaml.safe_load(master_yaml)\n\n # Build master arguments\n master_def['spec']['containers'][0]['args'].extend(argv)\n return master_def\n\ndef _submit(image_name, model_file, job_name, argv):\n master_def = _gen_master_def(image_name, model_file, job_name, argv)\n config.load_kube_config()\n api = core_v1_api.CoreV1Api()\n resp = api.create_namespaced_pod(body=master_def, namespace=\"default\")\n print(\"Master launched. status='%s'\" % str(resp.status))\n\ndef main():\n parser = argparse.ArgumentParser(description=\"ElasticDL Client\")\n # Rewrite model_file argument and pass all other arguments to master.\n parser.add_argument(\"--model_file\", help=\"Path to Model file\", required=True)\n parser.add_argument(\"--image_base\", help=\"Base image containing elasticdl runtime environment.\", required=True)\n parser.add_argument(\"--repository\", help=\"The repository to push docker image to.\")\n parser.add_argument(\"--job_name\", help=\"ElasticDL job name\", required=True)\n args, argv = parser.parse_known_args()\n\n job_name = args.job_name + \"-\" + str(int(round(time.time() * 1000)))\n image_name = args.image_base + '_' + job_name \n _build_docker_image(args.model_file, image_name, image_base=args.image_base,\n repository=args.repository)\n _submit(image_name, args.model_file, job_name, argv)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "elasticdl/client/client.py"}]}
| 1,268 | 172 |
gh_patches_debug_2469
|
rasdani/github-patches
|
git_diff
|
ansible-collections__community.aws-1197
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ec2_customer_gateway: bgp_asn is not required
### Summary
The ec2_customer_gateway module has incorrect documentation for the bgp_asn parameter.
It says the ASN must be passed when state=present, but the code defaults to 25000 if the parameter is absent. See the ensure_cgw_present() method:
```
def ensure_cgw_present(self, bgp_asn, ip_address):
if not bgp_asn:
bgp_asn = 65000
response = self.ec2.create_customer_gateway(
DryRun=False,
Type='ipsec.1',
PublicIp=ip_address,
BgpAsn=bgp_asn,
)
return response
### Issue Type
Documentation Report
### Component Name
ec2_customer_gateway
### Ansible Version
```console (paste below)
$ ansible --version
ansible [core 2.12.4]
config file = None
configured module search path = ['/home/neil/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/neil/.local/share/virtualenvs/community.aws-uRL047Ho/lib/python3.10/site-packages/ansible
ansible collection location = /home/neil/.ansible/collections:/usr/share/ansible/collections
executable location = /home/neil/.local/share/virtualenvs/community.aws-uRL047Ho/bin/ansible
python version = 3.10.1 (main, Jan 10 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)]
jinja version = 3.1.1
libyaml = True
```
### Collection Versions
```console (paste below)
$ ansible-galaxy collection list
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
### OS / Environment
main branch, as of 2022-04-18.
### Additional Information
Suggested rewording:
```
options:
bgp_asn:
description:
- Border Gateway Protocol (BGP) Autonomous System Number (ASN), defaults to 25000.
type: int
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/modules/ec2_customer_gateway.py`
Content:
```
1 #!/usr/bin/python
2 #
3 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
4
5 from __future__ import absolute_import, division, print_function
6 __metaclass__ = type
7
8
9 DOCUMENTATION = '''
10 ---
11 module: ec2_customer_gateway
12 version_added: 1.0.0
13 short_description: Manage an AWS customer gateway
14 description:
15 - Manage an AWS customer gateway.
16 author: Michael Baydoun (@MichaelBaydoun)
17 notes:
18 - You cannot create more than one customer gateway with the same IP address. If you run an identical request more than one time, the
19 first request creates the customer gateway, and subsequent requests return information about the existing customer gateway. The subsequent
20 requests do not create new customer gateway resources.
21 - Return values contain customer_gateway and customer_gateways keys which are identical dicts. You should use
22 customer_gateway. See U(https://github.com/ansible/ansible-modules-extras/issues/2773) for details.
23 options:
24 bgp_asn:
25 description:
26 - Border Gateway Protocol (BGP) Autonomous System Number (ASN), required when I(state=present).
27 type: int
28 ip_address:
29 description:
30 - Internet-routable IP address for customers gateway, must be a static address.
31 required: true
32 type: str
33 name:
34 description:
35 - Name of the customer gateway.
36 required: true
37 type: str
38 routing:
39 description:
40 - The type of routing.
41 choices: ['static', 'dynamic']
42 default: dynamic
43 type: str
44 state:
45 description:
46 - Create or terminate the Customer Gateway.
47 default: present
48 choices: [ 'present', 'absent' ]
49 type: str
50 extends_documentation_fragment:
51 - amazon.aws.aws
52 - amazon.aws.ec2
53
54 '''
55
56 EXAMPLES = '''
57 - name: Create Customer Gateway
58 community.aws.ec2_customer_gateway:
59 bgp_asn: 12345
60 ip_address: 1.2.3.4
61 name: IndianapolisOffice
62 region: us-east-1
63 register: cgw
64
65 - name: Delete Customer Gateway
66 community.aws.ec2_customer_gateway:
67 ip_address: 1.2.3.4
68 name: IndianapolisOffice
69 state: absent
70 region: us-east-1
71 register: cgw
72 '''
73
74 RETURN = '''
75 gateway.customer_gateways:
76 description: details about the gateway that was created.
77 returned: success
78 type: complex
79 contains:
80 bgp_asn:
81 description: The Border Gateway Autonomous System Number.
82 returned: when exists and gateway is available.
83 sample: 65123
84 type: str
85 customer_gateway_id:
86 description: gateway id assigned by amazon.
87 returned: when exists and gateway is available.
88 sample: cgw-cb6386a2
89 type: str
90 ip_address:
91 description: ip address of your gateway device.
92 returned: when exists and gateway is available.
93 sample: 1.2.3.4
94 type: str
95 state:
96 description: state of gateway.
97 returned: when gateway exists and is available.
98 sample: available
99 type: str
100 tags:
101 description: Any tags on the gateway.
102 returned: when gateway exists and is available, and when tags exist.
103 type: list
104 type:
105 description: encryption type.
106 returned: when gateway exists and is available.
107 sample: ipsec.1
108 type: str
109 '''
110
111 try:
112 import botocore
113 except ImportError:
114 pass # Handled by AnsibleAWSModule
115
116 from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
117
118 from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
119 from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
120
121
122 class Ec2CustomerGatewayManager:
123
124 def __init__(self, module):
125 self.module = module
126
127 try:
128 self.ec2 = module.client('ec2')
129 except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
130 module.fail_json_aws(e, msg='Failed to connect to AWS')
131
132 @AWSRetry.jittered_backoff(delay=2, max_delay=30, retries=6, catch_extra_error_codes=['IncorrectState'])
133 def ensure_cgw_absent(self, gw_id):
134 response = self.ec2.delete_customer_gateway(
135 DryRun=False,
136 CustomerGatewayId=gw_id
137 )
138 return response
139
140 def ensure_cgw_present(self, bgp_asn, ip_address):
141 if not bgp_asn:
142 bgp_asn = 65000
143 response = self.ec2.create_customer_gateway(
144 DryRun=False,
145 Type='ipsec.1',
146 PublicIp=ip_address,
147 BgpAsn=bgp_asn,
148 )
149 return response
150
151 def tag_cgw_name(self, gw_id, name):
152 response = self.ec2.create_tags(
153 DryRun=False,
154 Resources=[
155 gw_id,
156 ],
157 Tags=[
158 {
159 'Key': 'Name',
160 'Value': name
161 },
162 ]
163 )
164 return response
165
166 def describe_gateways(self, ip_address):
167 response = self.ec2.describe_customer_gateways(
168 DryRun=False,
169 Filters=[
170 {
171 'Name': 'state',
172 'Values': [
173 'available',
174 ]
175 },
176 {
177 'Name': 'ip-address',
178 'Values': [
179 ip_address,
180 ]
181 }
182 ]
183 )
184 return response
185
186
187 def main():
188 argument_spec = dict(
189 bgp_asn=dict(required=False, type='int'),
190 ip_address=dict(required=True),
191 name=dict(required=True),
192 routing=dict(default='dynamic', choices=['dynamic', 'static']),
193 state=dict(default='present', choices=['present', 'absent']),
194 )
195
196 module = AnsibleAWSModule(
197 argument_spec=argument_spec,
198 supports_check_mode=True,
199 required_if=[
200 ('routing', 'dynamic', ['bgp_asn'])
201 ]
202 )
203
204 gw_mgr = Ec2CustomerGatewayManager(module)
205
206 name = module.params.get('name')
207
208 existing = gw_mgr.describe_gateways(module.params['ip_address'])
209
210 results = dict(changed=False)
211 if module.params['state'] == 'present':
212 if existing['CustomerGateways']:
213 existing['CustomerGateway'] = existing['CustomerGateways'][0]
214 results['gateway'] = existing
215 if existing['CustomerGateway']['Tags']:
216 tag_array = existing['CustomerGateway']['Tags']
217 for key, value in enumerate(tag_array):
218 if value['Key'] == 'Name':
219 current_name = value['Value']
220 if current_name != name:
221 results['name'] = gw_mgr.tag_cgw_name(
222 results['gateway']['CustomerGateway']['CustomerGatewayId'],
223 module.params['name'],
224 )
225 results['changed'] = True
226 else:
227 if not module.check_mode:
228 results['gateway'] = gw_mgr.ensure_cgw_present(
229 module.params['bgp_asn'],
230 module.params['ip_address'],
231 )
232 results['name'] = gw_mgr.tag_cgw_name(
233 results['gateway']['CustomerGateway']['CustomerGatewayId'],
234 module.params['name'],
235 )
236 results['changed'] = True
237
238 elif module.params['state'] == 'absent':
239 if existing['CustomerGateways']:
240 existing['CustomerGateway'] = existing['CustomerGateways'][0]
241 results['gateway'] = existing
242 if not module.check_mode:
243 results['gateway'] = gw_mgr.ensure_cgw_absent(
244 existing['CustomerGateway']['CustomerGatewayId']
245 )
246 results['changed'] = True
247
248 pretty_results = camel_dict_to_snake_dict(results)
249 module.exit_json(**pretty_results)
250
251
252 if __name__ == '__main__':
253 main()
254
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugins/modules/ec2_customer_gateway.py b/plugins/modules/ec2_customer_gateway.py
--- a/plugins/modules/ec2_customer_gateway.py
+++ b/plugins/modules/ec2_customer_gateway.py
@@ -23,7 +23,8 @@
options:
bgp_asn:
description:
- - Border Gateway Protocol (BGP) Autonomous System Number (ASN), required when I(state=present).
+ - Border Gateway Protocol (BGP) Autonomous System Number (ASN).
+ - Defaults to C(65000) if not specified when I(state=present).
type: int
ip_address:
description:
|
{"golden_diff": "diff --git a/plugins/modules/ec2_customer_gateway.py b/plugins/modules/ec2_customer_gateway.py\n--- a/plugins/modules/ec2_customer_gateway.py\n+++ b/plugins/modules/ec2_customer_gateway.py\n@@ -23,7 +23,8 @@\n options:\n bgp_asn:\n description:\n- - Border Gateway Protocol (BGP) Autonomous System Number (ASN), required when I(state=present).\n+ - Border Gateway Protocol (BGP) Autonomous System Number (ASN).\n+ - Defaults to C(65000) if not specified when I(state=present).\n type: int\n ip_address:\n description:\n", "issue": "ec2_customer_gateway: bgp_asn is not required\n### Summary\n\nThe ec2_customer_gateway module has incorrect documentation for the bgp_asn parameter.\r\n\r\nIt says the ASN must be passed when state=present, but the code defaults to 25000 if the parameter is absent. See the ensure_cgw_present() method:\r\n\r\n```\r\n def ensure_cgw_present(self, bgp_asn, ip_address):\r\n if not bgp_asn:\r\n bgp_asn = 65000\r\n response = self.ec2.create_customer_gateway(\r\n DryRun=False,\r\n Type='ipsec.1',\r\n PublicIp=ip_address,\r\n BgpAsn=bgp_asn,\r\n )\r\n return response\n\n### Issue Type\n\nDocumentation Report\n\n### Component Name\n\nec2_customer_gateway\n\n### Ansible Version\n\n```console (paste below)\r\n$ ansible --version\r\nansible [core 2.12.4]\r\n config file = None\r\n configured module search path = ['/home/neil/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/neil/.local/share/virtualenvs/community.aws-uRL047Ho/lib/python3.10/site-packages/ansible\r\n ansible collection location = /home/neil/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /home/neil/.local/share/virtualenvs/community.aws-uRL047Ho/bin/ansible\r\n python version = 3.10.1 (main, Jan 10 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)]\r\n jinja version = 3.1.1\r\n libyaml = True\r\n```\r\n\n\n### Collection Versions\n\n```console (paste below)\r\n$ ansible-galaxy collection list\r\n```\r\n\n\n### Configuration\n\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\n\r\n```\r\n\n\n### OS / Environment\n\nmain branch, as of 2022-04-18.\n\n### Additional Information\n\nSuggested rewording:\r\n\r\n```\r\noptions:\r\n bgp_asn:\r\n description:\r\n - Border Gateway Protocol (BGP) Autonomous System Number (ASN), defaults to 25000.\r\n type: int\r\n```\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "#!/usr/bin/python\n#\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: ec2_customer_gateway\nversion_added: 1.0.0\nshort_description: Manage an AWS customer gateway\ndescription:\n - Manage an AWS customer gateway.\nauthor: Michael Baydoun (@MichaelBaydoun)\nnotes:\n - You cannot create more than one customer gateway with the same IP address. If you run an identical request more than one time, the\n first request creates the customer gateway, and subsequent requests return information about the existing customer gateway. The subsequent\n requests do not create new customer gateway resources.\n - Return values contain customer_gateway and customer_gateways keys which are identical dicts. You should use\n customer_gateway. See U(https://github.com/ansible/ansible-modules-extras/issues/2773) for details.\noptions:\n bgp_asn:\n description:\n - Border Gateway Protocol (BGP) Autonomous System Number (ASN), required when I(state=present).\n type: int\n ip_address:\n description:\n - Internet-routable IP address for customers gateway, must be a static address.\n required: true\n type: str\n name:\n description:\n - Name of the customer gateway.\n required: true\n type: str\n routing:\n description:\n - The type of routing.\n choices: ['static', 'dynamic']\n default: dynamic\n type: str\n state:\n description:\n - Create or terminate the Customer Gateway.\n default: present\n choices: [ 'present', 'absent' ]\n type: str\nextends_documentation_fragment:\n- amazon.aws.aws\n- amazon.aws.ec2\n\n'''\n\nEXAMPLES = '''\n- name: Create Customer Gateway\n community.aws.ec2_customer_gateway:\n bgp_asn: 12345\n ip_address: 1.2.3.4\n name: IndianapolisOffice\n region: us-east-1\n register: cgw\n\n- name: Delete Customer Gateway\n community.aws.ec2_customer_gateway:\n ip_address: 1.2.3.4\n name: IndianapolisOffice\n state: absent\n region: us-east-1\n register: cgw\n'''\n\nRETURN = '''\ngateway.customer_gateways:\n description: details about the gateway that was created.\n returned: success\n type: complex\n contains:\n bgp_asn:\n description: The Border Gateway Autonomous System Number.\n returned: when exists and gateway is available.\n sample: 65123\n type: str\n customer_gateway_id:\n description: gateway id assigned by amazon.\n returned: when exists and gateway is available.\n sample: cgw-cb6386a2\n type: str\n ip_address:\n description: ip address of your gateway device.\n returned: when exists and gateway is available.\n sample: 1.2.3.4\n type: str\n state:\n description: state of gateway.\n returned: when gateway exists and is available.\n sample: available\n type: str\n tags:\n description: Any tags on the gateway.\n returned: when gateway exists and is available, and when tags exist.\n type: list\n type:\n description: encryption type.\n returned: when gateway exists and is available.\n sample: ipsec.1\n type: str\n'''\n\ntry:\n import botocore\nexcept ImportError:\n pass # Handled by AnsibleAWSModule\n\nfrom ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry\n\n\nclass Ec2CustomerGatewayManager:\n\n def __init__(self, module):\n self.module = module\n\n try:\n self.ec2 = module.client('ec2')\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(e, msg='Failed to connect to AWS')\n\n @AWSRetry.jittered_backoff(delay=2, max_delay=30, retries=6, catch_extra_error_codes=['IncorrectState'])\n def ensure_cgw_absent(self, gw_id):\n response = self.ec2.delete_customer_gateway(\n DryRun=False,\n CustomerGatewayId=gw_id\n )\n return response\n\n def ensure_cgw_present(self, bgp_asn, ip_address):\n if not bgp_asn:\n bgp_asn = 65000\n response = self.ec2.create_customer_gateway(\n DryRun=False,\n Type='ipsec.1',\n PublicIp=ip_address,\n BgpAsn=bgp_asn,\n )\n return response\n\n def tag_cgw_name(self, gw_id, name):\n response = self.ec2.create_tags(\n DryRun=False,\n Resources=[\n gw_id,\n ],\n Tags=[\n {\n 'Key': 'Name',\n 'Value': name\n },\n ]\n )\n return response\n\n def describe_gateways(self, ip_address):\n response = self.ec2.describe_customer_gateways(\n DryRun=False,\n Filters=[\n {\n 'Name': 'state',\n 'Values': [\n 'available',\n ]\n },\n {\n 'Name': 'ip-address',\n 'Values': [\n ip_address,\n ]\n }\n ]\n )\n return response\n\n\ndef main():\n argument_spec = dict(\n bgp_asn=dict(required=False, type='int'),\n ip_address=dict(required=True),\n name=dict(required=True),\n routing=dict(default='dynamic', choices=['dynamic', 'static']),\n state=dict(default='present', choices=['present', 'absent']),\n )\n\n module = AnsibleAWSModule(\n argument_spec=argument_spec,\n supports_check_mode=True,\n required_if=[\n ('routing', 'dynamic', ['bgp_asn'])\n ]\n )\n\n gw_mgr = Ec2CustomerGatewayManager(module)\n\n name = module.params.get('name')\n\n existing = gw_mgr.describe_gateways(module.params['ip_address'])\n\n results = dict(changed=False)\n if module.params['state'] == 'present':\n if existing['CustomerGateways']:\n existing['CustomerGateway'] = existing['CustomerGateways'][0]\n results['gateway'] = existing\n if existing['CustomerGateway']['Tags']:\n tag_array = existing['CustomerGateway']['Tags']\n for key, value in enumerate(tag_array):\n if value['Key'] == 'Name':\n current_name = value['Value']\n if current_name != name:\n results['name'] = gw_mgr.tag_cgw_name(\n results['gateway']['CustomerGateway']['CustomerGatewayId'],\n module.params['name'],\n )\n results['changed'] = True\n else:\n if not module.check_mode:\n results['gateway'] = gw_mgr.ensure_cgw_present(\n module.params['bgp_asn'],\n module.params['ip_address'],\n )\n results['name'] = gw_mgr.tag_cgw_name(\n results['gateway']['CustomerGateway']['CustomerGatewayId'],\n module.params['name'],\n )\n results['changed'] = True\n\n elif module.params['state'] == 'absent':\n if existing['CustomerGateways']:\n existing['CustomerGateway'] = existing['CustomerGateways'][0]\n results['gateway'] = existing\n if not module.check_mode:\n results['gateway'] = gw_mgr.ensure_cgw_absent(\n existing['CustomerGateway']['CustomerGatewayId']\n )\n results['changed'] = True\n\n pretty_results = camel_dict_to_snake_dict(results)\n module.exit_json(**pretty_results)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/ec2_customer_gateway.py"}], "after_files": [{"content": "#!/usr/bin/python\n#\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: ec2_customer_gateway\nversion_added: 1.0.0\nshort_description: Manage an AWS customer gateway\ndescription:\n - Manage an AWS customer gateway.\nauthor: Michael Baydoun (@MichaelBaydoun)\nnotes:\n - You cannot create more than one customer gateway with the same IP address. If you run an identical request more than one time, the\n first request creates the customer gateway, and subsequent requests return information about the existing customer gateway. The subsequent\n requests do not create new customer gateway resources.\n - Return values contain customer_gateway and customer_gateways keys which are identical dicts. You should use\n customer_gateway. See U(https://github.com/ansible/ansible-modules-extras/issues/2773) for details.\noptions:\n bgp_asn:\n description:\n - Border Gateway Protocol (BGP) Autonomous System Number (ASN).\n - Defaults to C(65000) if not specified when I(state=present).\n type: int\n ip_address:\n description:\n - Internet-routable IP address for customers gateway, must be a static address.\n required: true\n type: str\n name:\n description:\n - Name of the customer gateway.\n required: true\n type: str\n routing:\n description:\n - The type of routing.\n choices: ['static', 'dynamic']\n default: dynamic\n type: str\n state:\n description:\n - Create or terminate the Customer Gateway.\n default: present\n choices: [ 'present', 'absent' ]\n type: str\nextends_documentation_fragment:\n- amazon.aws.aws\n- amazon.aws.ec2\n\n'''\n\nEXAMPLES = '''\n- name: Create Customer Gateway\n community.aws.ec2_customer_gateway:\n bgp_asn: 12345\n ip_address: 1.2.3.4\n name: IndianapolisOffice\n region: us-east-1\n register: cgw\n\n- name: Delete Customer Gateway\n community.aws.ec2_customer_gateway:\n ip_address: 1.2.3.4\n name: IndianapolisOffice\n state: absent\n region: us-east-1\n register: cgw\n'''\n\nRETURN = '''\ngateway.customer_gateways:\n description: details about the gateway that was created.\n returned: success\n type: complex\n contains:\n bgp_asn:\n description: The Border Gateway Autonomous System Number.\n returned: when exists and gateway is available.\n sample: 65123\n type: str\n customer_gateway_id:\n description: gateway id assigned by amazon.\n returned: when exists and gateway is available.\n sample: cgw-cb6386a2\n type: str\n ip_address:\n description: ip address of your gateway device.\n returned: when exists and gateway is available.\n sample: 1.2.3.4\n type: str\n state:\n description: state of gateway.\n returned: when gateway exists and is available.\n sample: available\n type: str\n tags:\n description: Any tags on the gateway.\n returned: when gateway exists and is available, and when tags exist.\n type: list\n type:\n description: encryption type.\n returned: when gateway exists and is available.\n sample: ipsec.1\n type: str\n'''\n\ntry:\n import botocore\nexcept ImportError:\n pass # Handled by AnsibleAWSModule\n\nfrom ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry\n\n\nclass Ec2CustomerGatewayManager:\n\n def __init__(self, module):\n self.module = module\n\n try:\n self.ec2 = module.client('ec2')\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(e, msg='Failed to connect to AWS')\n\n @AWSRetry.jittered_backoff(delay=2, max_delay=30, retries=6, catch_extra_error_codes=['IncorrectState'])\n def ensure_cgw_absent(self, gw_id):\n response = self.ec2.delete_customer_gateway(\n DryRun=False,\n CustomerGatewayId=gw_id\n )\n return response\n\n def ensure_cgw_present(self, bgp_asn, ip_address):\n if not bgp_asn:\n bgp_asn = 65000\n response = self.ec2.create_customer_gateway(\n DryRun=False,\n Type='ipsec.1',\n PublicIp=ip_address,\n BgpAsn=bgp_asn,\n )\n return response\n\n def tag_cgw_name(self, gw_id, name):\n response = self.ec2.create_tags(\n DryRun=False,\n Resources=[\n gw_id,\n ],\n Tags=[\n {\n 'Key': 'Name',\n 'Value': name\n },\n ]\n )\n return response\n\n def describe_gateways(self, ip_address):\n response = self.ec2.describe_customer_gateways(\n DryRun=False,\n Filters=[\n {\n 'Name': 'state',\n 'Values': [\n 'available',\n ]\n },\n {\n 'Name': 'ip-address',\n 'Values': [\n ip_address,\n ]\n }\n ]\n )\n return response\n\n\ndef main():\n argument_spec = dict(\n bgp_asn=dict(required=False, type='int'),\n ip_address=dict(required=True),\n name=dict(required=True),\n routing=dict(default='dynamic', choices=['dynamic', 'static']),\n state=dict(default='present', choices=['present', 'absent']),\n )\n\n module = AnsibleAWSModule(\n argument_spec=argument_spec,\n supports_check_mode=True,\n required_if=[\n ('routing', 'dynamic', ['bgp_asn'])\n ]\n )\n\n gw_mgr = Ec2CustomerGatewayManager(module)\n\n name = module.params.get('name')\n\n existing = gw_mgr.describe_gateways(module.params['ip_address'])\n\n results = dict(changed=False)\n if module.params['state'] == 'present':\n if existing['CustomerGateways']:\n existing['CustomerGateway'] = existing['CustomerGateways'][0]\n results['gateway'] = existing\n if existing['CustomerGateway']['Tags']:\n tag_array = existing['CustomerGateway']['Tags']\n for key, value in enumerate(tag_array):\n if value['Key'] == 'Name':\n current_name = value['Value']\n if current_name != name:\n results['name'] = gw_mgr.tag_cgw_name(\n results['gateway']['CustomerGateway']['CustomerGatewayId'],\n module.params['name'],\n )\n results['changed'] = True\n else:\n if not module.check_mode:\n results['gateway'] = gw_mgr.ensure_cgw_present(\n module.params['bgp_asn'],\n module.params['ip_address'],\n )\n results['name'] = gw_mgr.tag_cgw_name(\n results['gateway']['CustomerGateway']['CustomerGatewayId'],\n module.params['name'],\n )\n results['changed'] = True\n\n elif module.params['state'] == 'absent':\n if existing['CustomerGateways']:\n existing['CustomerGateway'] = existing['CustomerGateways'][0]\n results['gateway'] = existing\n if not module.check_mode:\n results['gateway'] = gw_mgr.ensure_cgw_absent(\n existing['CustomerGateway']['CustomerGatewayId']\n )\n results['changed'] = True\n\n pretty_results = camel_dict_to_snake_dict(results)\n module.exit_json(**pretty_results)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/ec2_customer_gateway.py"}]}
| 3,184 | 136 |
gh_patches_debug_9153
|
rasdani/github-patches
|
git_diff
|
RedHatInsights__insights-core-2101
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bash_version example doesn't work with json format
Running `insights run -p examples/rules -f json` results in a traceback because the `bash_version` rule puts an `InstalledRpm` object into its response:
```
TypeError: Object of type 'InstalledRpm' is not JSON serializable
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/rules/bash_version.py`
Content:
```
1 """
2 Bash Version
3 ============
4
5 This is a simple rule and can be run against the local host
6 using the following command::
7
8 $ insights-run -p examples.rules.bash_version
9
10 or from the examples/rules directory::
11
12 $ python sample_rules.py
13 """
14 from insights.core.plugins import make_pass, rule
15 from insights.parsers.installed_rpms import InstalledRpms
16
17 KEY = "BASH_VERSION"
18
19 CONTENT = "Bash RPM Version: {{ bash_version }}"
20
21
22 @rule(InstalledRpms)
23 def report(rpms):
24 bash_ver = rpms.get_max('bash')
25 return make_pass(KEY, bash_version=bash_ver)
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/rules/bash_version.py b/examples/rules/bash_version.py
--- a/examples/rules/bash_version.py
+++ b/examples/rules/bash_version.py
@@ -11,7 +11,7 @@
$ python sample_rules.py
"""
-from insights.core.plugins import make_pass, rule
+from insights.core.plugins import make_info, rule
from insights.parsers.installed_rpms import InstalledRpms
KEY = "BASH_VERSION"
@@ -21,5 +21,5 @@
@rule(InstalledRpms)
def report(rpms):
- bash_ver = rpms.get_max('bash')
- return make_pass(KEY, bash_version=bash_ver)
+ bash = rpms.get_max('bash')
+ return make_info(KEY, bash_version=bash.nvra)
|
{"golden_diff": "diff --git a/examples/rules/bash_version.py b/examples/rules/bash_version.py\n--- a/examples/rules/bash_version.py\n+++ b/examples/rules/bash_version.py\n@@ -11,7 +11,7 @@\n \n $ python sample_rules.py\n \"\"\"\n-from insights.core.plugins import make_pass, rule\n+from insights.core.plugins import make_info, rule\n from insights.parsers.installed_rpms import InstalledRpms\n \n KEY = \"BASH_VERSION\"\n@@ -21,5 +21,5 @@\n \n @rule(InstalledRpms)\n def report(rpms):\n- bash_ver = rpms.get_max('bash')\n- return make_pass(KEY, bash_version=bash_ver)\n+ bash = rpms.get_max('bash')\n+ return make_info(KEY, bash_version=bash.nvra)\n", "issue": "bash_version example doesn't work with json format\nRunning `insights run -p examples/rules -f json` results in a traceback because the `bash_version` rule puts an `InstalledRpm` object into its response:\r\n\r\n```\r\nTypeError: Object of type 'InstalledRpm' is not JSON serializable\r\n```\n", "before_files": [{"content": "\"\"\"\nBash Version\n============\n\nThis is a simple rule and can be run against the local host\nusing the following command::\n\n$ insights-run -p examples.rules.bash_version\n\nor from the examples/rules directory::\n\n$ python sample_rules.py\n\"\"\"\nfrom insights.core.plugins import make_pass, rule\nfrom insights.parsers.installed_rpms import InstalledRpms\n\nKEY = \"BASH_VERSION\"\n\nCONTENT = \"Bash RPM Version: {{ bash_version }}\"\n\n\n@rule(InstalledRpms)\ndef report(rpms):\n bash_ver = rpms.get_max('bash')\n return make_pass(KEY, bash_version=bash_ver)\n", "path": "examples/rules/bash_version.py"}], "after_files": [{"content": "\"\"\"\nBash Version\n============\n\nThis is a simple rule and can be run against the local host\nusing the following command::\n\n$ insights-run -p examples.rules.bash_version\n\nor from the examples/rules directory::\n\n$ python sample_rules.py\n\"\"\"\nfrom insights.core.plugins import make_info, rule\nfrom insights.parsers.installed_rpms import InstalledRpms\n\nKEY = \"BASH_VERSION\"\n\nCONTENT = \"Bash RPM Version: {{ bash_version }}\"\n\n\n@rule(InstalledRpms)\ndef report(rpms):\n bash = rpms.get_max('bash')\n return make_info(KEY, bash_version=bash.nvra)\n", "path": "examples/rules/bash_version.py"}]}
| 501 | 167 |
gh_patches_debug_1896
|
rasdani/github-patches
|
git_diff
|
graspologic-org__graspologic-207
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GClust bug
<img width="558" alt="Screen Shot 2019-06-22 at 3 46 06 PM" src="https://user-images.githubusercontent.com/25714207/59968259-eb346c80-9504-11e9-984c-8c13dff93a37.png">
should be `- self.min_components` rather than `- 1`
This causes an indexing error when `min_components` does not equal 1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `graspy/cluster/gclust.py`
Content:
```
1 # Copyright 2019 NeuroData (http://neurodata.io)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import numpy as np
16 import pandas as pd
17 from sklearn.metrics import adjusted_rand_score
18 from sklearn.mixture import GaussianMixture
19 from sklearn.model_selection import ParameterGrid
20
21 from .base import BaseCluster
22
23
24 class GaussianCluster(BaseCluster):
25 r"""
26 Gaussian Mixture Model (GMM)
27
28 Representation of a Gaussian mixture model probability distribution.
29 This class allows to estimate the parameters of a Gaussian mixture
30 distribution. It computes all possible models from one component to
31 max_components. The best model is given by the lowest BIC score.
32
33 Parameters
34 ----------
35 min_components : int, default=2.
36 The minimum number of mixture components to consider (unless
37 max_components=None, in which case this is the maximum number of
38 components to consider). If max_componens is not None, min_components
39 must be less than or equal to max_components.
40
41 max_components : int or None, default=None.
42 The maximum number of mixture components to consider. Must be greater
43 than or equal to min_components.
44
45 covariance_type : {'full' (default), 'tied', 'diag', 'spherical'}, optional
46 String or list/array describing the type of covariance parameters to use.
47 If a string, it must be one of:
48
49 - 'full'
50 each component has its own general covariance matrix
51 - 'tied'
52 all components share the same general covariance matrix
53 - 'diag'
54 each component has its own diagonal covariance matrix
55 - 'spherical'
56 each component has its own single variance
57 - 'all'
58 considers all covariance structures in ['spherical', 'diag', 'tied', 'full']
59 If a list/array, it must be a list/array of strings containing only
60 'spherical', 'tied', 'diag', and/or 'spherical'.
61
62 random_state : int, RandomState instance or None, optional (default=None)
63 If int, random_state is the seed used by the random number generator;
64 If RandomState instance, random_state is the random number generator;
65 If None, the random number generator is the RandomState instance used
66 by ``np.random``.
67
68 Attributes
69 ----------
70 n_components_ : int
71 Optimal number of components based on BIC.
72 covariance_type_ : str
73 Optimal covariance type based on BIC.
74 model_ : GaussianMixture object
75 Fitted GaussianMixture object fitted with optimal numeber of components
76 and optimal covariance structure.
77 bic_ : pandas.DataFrame
78 A pandas DataFrame of BIC values computed for all possible number of clusters
79 given by range(min_components, max_components + 1) and all covariance
80 structures given by covariance_type.
81 ari_ : pandas.DataFrame
82 Only computed when y is given. Pandas Dataframe containing ARI values computed
83 for all possible number of clusters given by range(min_components,
84 max_components) and all covariance structures given by covariance_type.
85 """
86
87 def __init__(
88 self,
89 min_components=2,
90 max_components=None,
91 covariance_type="full",
92 random_state=None,
93 ):
94 if isinstance(min_components, int):
95 if min_components <= 0:
96 msg = "min_components must be >= 1."
97 raise ValueError(msg)
98 else:
99 msg = "min_components must be an integer, not {}.".format(
100 type(min_components)
101 )
102 raise TypeError(msg)
103
104 if isinstance(max_components, int):
105 if max_components <= 0:
106 msg = "max_components must be >= 1 or None."
107 raise ValueError(msg)
108 elif min_components > max_components:
109 msg = "min_components must be less than or equal to max_components."
110 raise ValueError(msg)
111 elif max_components is not None:
112 msg = "max_components must be an integer or None, not {}.".format(
113 type(max_components)
114 )
115 raise TypeError(msg)
116
117 if isinstance(covariance_type, (np.ndarray, list)):
118 covariance_type = np.unique(covariance_type)
119 elif isinstance(covariance_type, str):
120 if covariance_type == "all":
121 covariance_type = ["spherical", "diag", "tied", "full"]
122 else:
123 covariance_type = [covariance_type]
124 else:
125 msg = "covariance_type must be a numpy array, a list, or "
126 msg += "string, not {}".format(type(covariance_type))
127 raise TypeError(msg)
128
129 for cov in covariance_type:
130 if cov not in ["spherical", "diag", "tied", "full"]:
131 msg = (
132 "covariance structure must be one of "
133 + '["spherical", "diag", "tied", "full"]'
134 )
135 msg += " not {}".format(cov)
136 raise ValueError(msg)
137
138 new_covariance_type = []
139 for cov in ["spherical", "diag", "tied", "full"]:
140 if cov in covariance_type:
141 new_covariance_type.append(cov)
142
143 self.min_components = min_components
144 self.max_components = max_components
145 self.covariance_type = new_covariance_type
146 self.random_state = random_state
147
148 def fit(self, X, y=None):
149 """
150 Fits gaussian mixure model to the data.
151 Estimate model parameters with the EM algorithm.
152
153 Parameters
154 ----------
155 X : array-like, shape (n_samples, n_features)
156 List of n_features-dimensional data points. Each row
157 corresponds to a single data point.
158
159 y : array-like, shape (n_samples,), optional (default=None)
160 List of labels for X if available. Used to compute
161 ARI scores.
162
163 Returns
164 -------
165 self
166 """
167
168 # Deal with number of clusters
169 if self.max_components is None:
170 lower_ncomponents = 1
171 upper_ncomponents = self.min_components
172 else:
173 lower_ncomponents = self.min_components
174 upper_ncomponents = self.max_components
175
176 n_mixture_components = upper_ncomponents - lower_ncomponents + 1
177
178 if upper_ncomponents > X.shape[0]:
179 if self.max_components is None:
180 msg = "if max_components is None then min_components must be >= "
181 msg += "n_samples, but min_components = {}, n_samples = {}".format(
182 upper_ncomponents, X.shape[0]
183 )
184 else:
185 msg = "max_components must be >= n_samples, but max_components = "
186 msg += "{}, n_samples = {}".format(upper_ncomponents, X.shape[0])
187 raise ValueError(msg)
188 elif lower_ncomponents > X.shape[0]:
189 msg = "min_components must be <= n_samples, but min_components = "
190 msg += "{}, n_samples = {}".format(upper_ncomponents, X.shape[0])
191 raise ValueError(msg)
192
193 # Get parameters
194 random_state = self.random_state
195
196 param_grid = dict(
197 covariance_type=self.covariance_type,
198 n_components=range(lower_ncomponents, upper_ncomponents + 1),
199 random_state=[random_state],
200 )
201
202 param_grid = list(ParameterGrid(param_grid))
203
204 models = [[] for _ in range(n_mixture_components)]
205 bics = [[] for _ in range(n_mixture_components)]
206 aris = [[] for _ in range(n_mixture_components)]
207
208 for i, params in enumerate(param_grid):
209 model = GaussianMixture(**params)
210 model.fit(X)
211 models[i % n_mixture_components].append(model)
212 bics[i % n_mixture_components].append(model.bic(X))
213 if y is not None:
214 predictions = model.predict(X)
215 aris[i % n_mixture_components].append(
216 adjusted_rand_score(y, predictions)
217 )
218
219 self.bic_ = pd.DataFrame(
220 bics,
221 index=np.arange(lower_ncomponents, upper_ncomponents + 1),
222 columns=self.covariance_type,
223 )
224
225 if y is not None:
226 self.ari_ = pd.DataFrame(
227 aris,
228 index=np.arange(lower_ncomponents, upper_ncomponents + 1),
229 columns=self.covariance_type,
230 )
231 else:
232 self.ari_ = None
233
234 # Get the best cov type and its index within the dataframe
235 best_covariance = self.bic_.min(axis=0).idxmin()
236 best_covariance_idx = self.covariance_type.index(best_covariance)
237
238 # Get the index best component for best_covariance
239 best_component = self.bic_.idxmin()[best_covariance]
240
241 self.n_components_ = best_component
242 self.covariance_type_ = best_covariance
243 self.model_ = models[best_component - 1][best_covariance_idx]
244
245 return self
246
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/graspy/cluster/gclust.py b/graspy/cluster/gclust.py
--- a/graspy/cluster/gclust.py
+++ b/graspy/cluster/gclust.py
@@ -240,6 +240,6 @@
self.n_components_ = best_component
self.covariance_type_ = best_covariance
- self.model_ = models[best_component - 1][best_covariance_idx]
+ self.model_ = models[best_component - self.min_components][best_covariance_idx]
return self
|
{"golden_diff": "diff --git a/graspy/cluster/gclust.py b/graspy/cluster/gclust.py\n--- a/graspy/cluster/gclust.py\n+++ b/graspy/cluster/gclust.py\n@@ -240,6 +240,6 @@\n \n self.n_components_ = best_component\n self.covariance_type_ = best_covariance\n- self.model_ = models[best_component - 1][best_covariance_idx]\n+ self.model_ = models[best_component - self.min_components][best_covariance_idx]\n \n return self\n", "issue": "GClust bug\n<img width=\"558\" alt=\"Screen Shot 2019-06-22 at 3 46 06 PM\" src=\"https://user-images.githubusercontent.com/25714207/59968259-eb346c80-9504-11e9-984c-8c13dff93a37.png\">\r\n\r\nshould be `- self.min_components` rather than `- 1`\r\n\r\nThis causes an indexing error when `min_components` does not equal 1\n", "before_files": [{"content": "# Copyright 2019 NeuroData (http://neurodata.io)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\nimport pandas as pd\nfrom sklearn.metrics import adjusted_rand_score\nfrom sklearn.mixture import GaussianMixture\nfrom sklearn.model_selection import ParameterGrid\n\nfrom .base import BaseCluster\n\n\nclass GaussianCluster(BaseCluster):\n r\"\"\"\n Gaussian Mixture Model (GMM)\n\n Representation of a Gaussian mixture model probability distribution. \n This class allows to estimate the parameters of a Gaussian mixture \n distribution. It computes all possible models from one component to \n max_components. The best model is given by the lowest BIC score.\n\n Parameters\n ----------\n min_components : int, default=2. \n The minimum number of mixture components to consider (unless\n max_components=None, in which case this is the maximum number of\n components to consider). If max_componens is not None, min_components\n must be less than or equal to max_components.\n\n max_components : int or None, default=None.\n The maximum number of mixture components to consider. Must be greater \n than or equal to min_components.\n\n covariance_type : {'full' (default), 'tied', 'diag', 'spherical'}, optional\n String or list/array describing the type of covariance parameters to use.\n If a string, it must be one of:\n \n - 'full'\n each component has its own general covariance matrix\n - 'tied'\n all components share the same general covariance matrix\n - 'diag'\n each component has its own diagonal covariance matrix\n - 'spherical'\n each component has its own single variance\n - 'all'\n considers all covariance structures in ['spherical', 'diag', 'tied', 'full']\n If a list/array, it must be a list/array of strings containing only\n 'spherical', 'tied', 'diag', and/or 'spherical'.\n \n random_state : int, RandomState instance or None, optional (default=None)\n If int, random_state is the seed used by the random number generator;\n If RandomState instance, random_state is the random number generator;\n If None, the random number generator is the RandomState instance used\n by ``np.random``.\n\n Attributes\n ----------\n n_components_ : int\n Optimal number of components based on BIC.\n covariance_type_ : str\n Optimal covariance type based on BIC.\n model_ : GaussianMixture object\n Fitted GaussianMixture object fitted with optimal numeber of components \n and optimal covariance structure.\n bic_ : pandas.DataFrame\n A pandas DataFrame of BIC values computed for all possible number of clusters\n given by range(min_components, max_components + 1) and all covariance\n structures given by covariance_type.\n ari_ : pandas.DataFrame\n Only computed when y is given. Pandas Dataframe containing ARI values computed\n for all possible number of clusters given by range(min_components,\n max_components) and all covariance structures given by covariance_type.\n \"\"\"\n\n def __init__(\n self,\n min_components=2,\n max_components=None,\n covariance_type=\"full\",\n random_state=None,\n ):\n if isinstance(min_components, int):\n if min_components <= 0:\n msg = \"min_components must be >= 1.\"\n raise ValueError(msg)\n else:\n msg = \"min_components must be an integer, not {}.\".format(\n type(min_components)\n )\n raise TypeError(msg)\n\n if isinstance(max_components, int):\n if max_components <= 0:\n msg = \"max_components must be >= 1 or None.\"\n raise ValueError(msg)\n elif min_components > max_components:\n msg = \"min_components must be less than or equal to max_components.\"\n raise ValueError(msg)\n elif max_components is not None:\n msg = \"max_components must be an integer or None, not {}.\".format(\n type(max_components)\n )\n raise TypeError(msg)\n\n if isinstance(covariance_type, (np.ndarray, list)):\n covariance_type = np.unique(covariance_type)\n elif isinstance(covariance_type, str):\n if covariance_type == \"all\":\n covariance_type = [\"spherical\", \"diag\", \"tied\", \"full\"]\n else:\n covariance_type = [covariance_type]\n else:\n msg = \"covariance_type must be a numpy array, a list, or \"\n msg += \"string, not {}\".format(type(covariance_type))\n raise TypeError(msg)\n\n for cov in covariance_type:\n if cov not in [\"spherical\", \"diag\", \"tied\", \"full\"]:\n msg = (\n \"covariance structure must be one of \"\n + '[\"spherical\", \"diag\", \"tied\", \"full\"]'\n )\n msg += \" not {}\".format(cov)\n raise ValueError(msg)\n\n new_covariance_type = []\n for cov in [\"spherical\", \"diag\", \"tied\", \"full\"]:\n if cov in covariance_type:\n new_covariance_type.append(cov)\n\n self.min_components = min_components\n self.max_components = max_components\n self.covariance_type = new_covariance_type\n self.random_state = random_state\n\n def fit(self, X, y=None):\n \"\"\"\n Fits gaussian mixure model to the data. \n Estimate model parameters with the EM algorithm.\n\n Parameters\n ----------\n X : array-like, shape (n_samples, n_features)\n List of n_features-dimensional data points. Each row\n corresponds to a single data point.\n \n y : array-like, shape (n_samples,), optional (default=None)\n List of labels for X if available. Used to compute\n ARI scores.\n\n Returns\n -------\n self\n \"\"\"\n\n # Deal with number of clusters\n if self.max_components is None:\n lower_ncomponents = 1\n upper_ncomponents = self.min_components\n else:\n lower_ncomponents = self.min_components\n upper_ncomponents = self.max_components\n\n n_mixture_components = upper_ncomponents - lower_ncomponents + 1\n\n if upper_ncomponents > X.shape[0]:\n if self.max_components is None:\n msg = \"if max_components is None then min_components must be >= \"\n msg += \"n_samples, but min_components = {}, n_samples = {}\".format(\n upper_ncomponents, X.shape[0]\n )\n else:\n msg = \"max_components must be >= n_samples, but max_components = \"\n msg += \"{}, n_samples = {}\".format(upper_ncomponents, X.shape[0])\n raise ValueError(msg)\n elif lower_ncomponents > X.shape[0]:\n msg = \"min_components must be <= n_samples, but min_components = \"\n msg += \"{}, n_samples = {}\".format(upper_ncomponents, X.shape[0])\n raise ValueError(msg)\n\n # Get parameters\n random_state = self.random_state\n\n param_grid = dict(\n covariance_type=self.covariance_type,\n n_components=range(lower_ncomponents, upper_ncomponents + 1),\n random_state=[random_state],\n )\n\n param_grid = list(ParameterGrid(param_grid))\n\n models = [[] for _ in range(n_mixture_components)]\n bics = [[] for _ in range(n_mixture_components)]\n aris = [[] for _ in range(n_mixture_components)]\n\n for i, params in enumerate(param_grid):\n model = GaussianMixture(**params)\n model.fit(X)\n models[i % n_mixture_components].append(model)\n bics[i % n_mixture_components].append(model.bic(X))\n if y is not None:\n predictions = model.predict(X)\n aris[i % n_mixture_components].append(\n adjusted_rand_score(y, predictions)\n )\n\n self.bic_ = pd.DataFrame(\n bics,\n index=np.arange(lower_ncomponents, upper_ncomponents + 1),\n columns=self.covariance_type,\n )\n\n if y is not None:\n self.ari_ = pd.DataFrame(\n aris,\n index=np.arange(lower_ncomponents, upper_ncomponents + 1),\n columns=self.covariance_type,\n )\n else:\n self.ari_ = None\n\n # Get the best cov type and its index within the dataframe\n best_covariance = self.bic_.min(axis=0).idxmin()\n best_covariance_idx = self.covariance_type.index(best_covariance)\n\n # Get the index best component for best_covariance\n best_component = self.bic_.idxmin()[best_covariance]\n\n self.n_components_ = best_component\n self.covariance_type_ = best_covariance\n self.model_ = models[best_component - 1][best_covariance_idx]\n\n return self\n", "path": "graspy/cluster/gclust.py"}], "after_files": [{"content": "# Copyright 2019 NeuroData (http://neurodata.io)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\nimport pandas as pd\nfrom sklearn.metrics import adjusted_rand_score\nfrom sklearn.mixture import GaussianMixture\nfrom sklearn.model_selection import ParameterGrid\n\nfrom .base import BaseCluster\n\n\nclass GaussianCluster(BaseCluster):\n r\"\"\"\n Gaussian Mixture Model (GMM)\n\n Representation of a Gaussian mixture model probability distribution. \n This class allows to estimate the parameters of a Gaussian mixture \n distribution. It computes all possible models from one component to \n max_components. The best model is given by the lowest BIC score.\n\n Parameters\n ----------\n min_components : int, default=2. \n The minimum number of mixture components to consider (unless\n max_components=None, in which case this is the maximum number of\n components to consider). If max_componens is not None, min_components\n must be less than or equal to max_components.\n\n max_components : int or None, default=None.\n The maximum number of mixture components to consider. Must be greater \n than or equal to min_components.\n\n covariance_type : {'full' (default), 'tied', 'diag', 'spherical'}, optional\n String or list/array describing the type of covariance parameters to use.\n If a string, it must be one of:\n \n - 'full'\n each component has its own general covariance matrix\n - 'tied'\n all components share the same general covariance matrix\n - 'diag'\n each component has its own diagonal covariance matrix\n - 'spherical'\n each component has its own single variance\n - 'all'\n considers all covariance structures in ['spherical', 'diag', 'tied', 'full']\n If a list/array, it must be a list/array of strings containing only\n 'spherical', 'tied', 'diag', and/or 'spherical'.\n \n random_state : int, RandomState instance or None, optional (default=None)\n If int, random_state is the seed used by the random number generator;\n If RandomState instance, random_state is the random number generator;\n If None, the random number generator is the RandomState instance used\n by ``np.random``.\n\n Attributes\n ----------\n n_components_ : int\n Optimal number of components based on BIC.\n covariance_type_ : str\n Optimal covariance type based on BIC.\n model_ : GaussianMixture object\n Fitted GaussianMixture object fitted with optimal numeber of components \n and optimal covariance structure.\n bic_ : pandas.DataFrame\n A pandas DataFrame of BIC values computed for all possible number of clusters\n given by range(min_components, max_components + 1) and all covariance\n structures given by covariance_type.\n ari_ : pandas.DataFrame\n Only computed when y is given. Pandas Dataframe containing ARI values computed\n for all possible number of clusters given by range(min_components,\n max_components) and all covariance structures given by covariance_type.\n \"\"\"\n\n def __init__(\n self,\n min_components=2,\n max_components=None,\n covariance_type=\"full\",\n random_state=None,\n ):\n if isinstance(min_components, int):\n if min_components <= 0:\n msg = \"min_components must be >= 1.\"\n raise ValueError(msg)\n else:\n msg = \"min_components must be an integer, not {}.\".format(\n type(min_components)\n )\n raise TypeError(msg)\n\n if isinstance(max_components, int):\n if max_components <= 0:\n msg = \"max_components must be >= 1 or None.\"\n raise ValueError(msg)\n elif min_components > max_components:\n msg = \"min_components must be less than or equal to max_components.\"\n raise ValueError(msg)\n elif max_components is not None:\n msg = \"max_components must be an integer or None, not {}.\".format(\n type(max_components)\n )\n raise TypeError(msg)\n\n if isinstance(covariance_type, (np.ndarray, list)):\n covariance_type = np.unique(covariance_type)\n elif isinstance(covariance_type, str):\n if covariance_type == \"all\":\n covariance_type = [\"spherical\", \"diag\", \"tied\", \"full\"]\n else:\n covariance_type = [covariance_type]\n else:\n msg = \"covariance_type must be a numpy array, a list, or \"\n msg += \"string, not {}\".format(type(covariance_type))\n raise TypeError(msg)\n\n for cov in covariance_type:\n if cov not in [\"spherical\", \"diag\", \"tied\", \"full\"]:\n msg = (\n \"covariance structure must be one of \"\n + '[\"spherical\", \"diag\", \"tied\", \"full\"]'\n )\n msg += \" not {}\".format(cov)\n raise ValueError(msg)\n\n new_covariance_type = []\n for cov in [\"spherical\", \"diag\", \"tied\", \"full\"]:\n if cov in covariance_type:\n new_covariance_type.append(cov)\n\n self.min_components = min_components\n self.max_components = max_components\n self.covariance_type = new_covariance_type\n self.random_state = random_state\n\n def fit(self, X, y=None):\n \"\"\"\n Fits gaussian mixure model to the data. \n Estimate model parameters with the EM algorithm.\n\n Parameters\n ----------\n X : array-like, shape (n_samples, n_features)\n List of n_features-dimensional data points. Each row\n corresponds to a single data point.\n \n y : array-like, shape (n_samples,), optional (default=None)\n List of labels for X if available. Used to compute\n ARI scores.\n\n Returns\n -------\n self\n \"\"\"\n\n # Deal with number of clusters\n if self.max_components is None:\n lower_ncomponents = 1\n upper_ncomponents = self.min_components\n else:\n lower_ncomponents = self.min_components\n upper_ncomponents = self.max_components\n\n n_mixture_components = upper_ncomponents - lower_ncomponents + 1\n\n if upper_ncomponents > X.shape[0]:\n if self.max_components is None:\n msg = \"if max_components is None then min_components must be >= \"\n msg += \"n_samples, but min_components = {}, n_samples = {}\".format(\n upper_ncomponents, X.shape[0]\n )\n else:\n msg = \"max_components must be >= n_samples, but max_components = \"\n msg += \"{}, n_samples = {}\".format(upper_ncomponents, X.shape[0])\n raise ValueError(msg)\n elif lower_ncomponents > X.shape[0]:\n msg = \"min_components must be <= n_samples, but min_components = \"\n msg += \"{}, n_samples = {}\".format(upper_ncomponents, X.shape[0])\n raise ValueError(msg)\n\n # Get parameters\n random_state = self.random_state\n\n param_grid = dict(\n covariance_type=self.covariance_type,\n n_components=range(lower_ncomponents, upper_ncomponents + 1),\n random_state=[random_state],\n )\n\n param_grid = list(ParameterGrid(param_grid))\n\n models = [[] for _ in range(n_mixture_components)]\n bics = [[] for _ in range(n_mixture_components)]\n aris = [[] for _ in range(n_mixture_components)]\n\n for i, params in enumerate(param_grid):\n model = GaussianMixture(**params)\n model.fit(X)\n models[i % n_mixture_components].append(model)\n bics[i % n_mixture_components].append(model.bic(X))\n if y is not None:\n predictions = model.predict(X)\n aris[i % n_mixture_components].append(\n adjusted_rand_score(y, predictions)\n )\n\n self.bic_ = pd.DataFrame(\n bics,\n index=np.arange(lower_ncomponents, upper_ncomponents + 1),\n columns=self.covariance_type,\n )\n\n if y is not None:\n self.ari_ = pd.DataFrame(\n aris,\n index=np.arange(lower_ncomponents, upper_ncomponents + 1),\n columns=self.covariance_type,\n )\n else:\n self.ari_ = None\n\n # Get the best cov type and its index within the dataframe\n best_covariance = self.bic_.min(axis=0).idxmin()\n best_covariance_idx = self.covariance_type.index(best_covariance)\n\n # Get the index best component for best_covariance\n best_component = self.bic_.idxmin()[best_covariance]\n\n self.n_components_ = best_component\n self.covariance_type_ = best_covariance\n self.model_ = models[best_component - self.min_components][best_covariance_idx]\n\n return self\n", "path": "graspy/cluster/gclust.py"}]}
| 3,038 | 128 |
gh_patches_debug_13841
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-431
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dvc run: regular output files should not be in .gitignore
Please add a corresponded unit-test. See examples in class `TestGitIgnoreWhenCheckout` file `test_checkout.py`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/stage.py`
Content:
```
1 import os
2 import stat
3 import yaml
4 import itertools
5
6 from dvc.system import System
7 from dvc.utils import file_md5
8 from dvc.exceptions import DvcException
9 from dvc.executor import Executor
10
11
12 class OutputError(DvcException):
13 pass
14
15
16 class MissingDataSource(OutputError):
17 def __init__(self, missing_files):
18 assert len(missing_files) > 0
19
20 source = 'source'
21 if len(missing_files) > 1:
22 source += 's'
23
24 msg = u'missing data {}: {}'.format(source, ', '.join(missing_files))
25 super(MissingDataSource, self).__init__(msg)
26
27
28 class CmdOutputError(DvcException):
29 def __init__(self, path, msg):
30 super(CmdOutputError, self).__init__('Output file \'{}\' error: {}'.format(path, msg))
31
32
33 class CmdOutputNoCacheError(CmdOutputError):
34 def __init__(self, path):
35 super(CmdOutputNoCacheError, self).__init__(path, 'no cache')
36
37
38 class CmdOutputOutsideOfRepoError(CmdOutputError):
39 def __init__(self, path):
40 super(CmdOutputOutsideOfRepoError, self).__init__(path, 'outside of repository')
41
42
43 class CmdOutputDoesNotExistError(CmdOutputError):
44 def __init__(self, path):
45 super(CmdOutputDoesNotExistError, self).__init__(path, 'does not exist')
46
47
48 class CmdOutputIsNotFileError(CmdOutputError):
49 def __init__(self, path):
50 super(CmdOutputIsNotFileError, self).__init__(path, 'not a file')
51
52
53 class CmdOutputAlreadyTrackedError(CmdOutputError):
54 def __init__(self, path):
55 super(CmdOutputAlreadyTrackedError, self).__init__(path, 'already tracked by scm(e.g. git)')
56
57
58 class Dependency(object):
59 PARAM_PATH = 'path'
60 PARAM_MD5 = 'md5'
61
62 def __init__(self, project, path, md5=None):
63 self.project = project
64 self.path = os.path.abspath(os.path.realpath(path))
65
66 if not self.path.startswith(self.project.root_dir):
67 raise CmdOutputOutsideOfRepoError(self.path)
68
69 self.md5 = md5
70
71 @property
72 def dvc_path(self):
73 return os.path.relpath(self.path, self.project.root_dir)
74
75 @property
76 def rel_path(self):
77 return os.path.relpath(self.path, '.')
78
79 def _changed_md5(self):
80 if not os.path.exists(self.path):
81 return True
82
83 state = self.project.state.get(self.path)
84 if state and state.mtime == self.mtime():
85 md5 = state.md5
86 else:
87 md5 = file_md5(self.path)[0]
88
89 return self.md5 != md5
90
91 def changed(self):
92 return self._changed_md5()
93
94 def mtime(self):
95 return os.path.getmtime(self.path)
96
97 def inode(self):
98 return os.stat(self.path).st_ino
99
100 def update(self):
101 if not os.path.exists(self.path):
102 raise CmdOutputDoesNotExistError(self.rel_path)
103 if not os.path.isfile(self.path):
104 raise CmdOutputIsNotFileError(self.path)
105
106 state = self.project.state.get(self.path)
107 if state and state.mtime == self.mtime() and state.inode == self.inode():
108 md5 = state.md5
109 msg = '{} using md5 {} from state file'
110 self.project.logger.debug(msg.format(self.path, md5))
111 self.md5 = md5
112 else:
113 self.md5 = file_md5(self.path)[0]
114 self.project.state.update(self.path, self.md5, self.mtime(), self.inode())
115
116 def dumpd(self, cwd):
117 return {
118 Output.PARAM_PATH: os.path.relpath(self.path, cwd),
119 Output.PARAM_MD5: self.md5,
120 }
121
122 @classmethod
123 def loadd(cls, project, d, cwd=os.curdir):
124 path = os.path.join(cwd, d[Output.PARAM_PATH])
125 md5 = d.get(Output.PARAM_MD5, None)
126 return cls(project, path, md5=md5)
127
128 @classmethod
129 def loadd_from(cls, project, d_list, cwd=os.curdir):
130 return [cls.loadd(project, x, cwd=cwd) for x in d_list]
131
132 @classmethod
133 def loads(cls, project, s, cwd=os.curdir):
134 return cls(project, os.path.join(cwd, s), md5=None)
135
136 @classmethod
137 def loads_from(cls, project, s_list, cwd=os.curdir):
138 return [cls.loads(project, x, cwd=cwd) for x in s_list]
139
140 def stage(self):
141 for stage in self.project.stages():
142 for out in stage.outs:
143 if self.path == out.path:
144 return stage
145 return None
146
147
148 class Output(Dependency):
149 PARAM_CACHE = 'cache'
150
151 def __init__(self, project, path, md5=None, use_cache=True):
152 super(Output, self).__init__(project, path, md5=md5)
153 self.use_cache = use_cache
154
155 @property
156 def cache(self):
157 return self.project.cache.get(self.md5)
158
159 def dumpd(self, cwd):
160 ret = super(Output, self).dumpd(cwd)
161 ret[Output.PARAM_CACHE] = self.use_cache
162 return ret
163
164 @classmethod
165 def loadd(cls, project, d, cwd=os.curdir):
166 ret = super(Output, cls).loadd(project, d, cwd=cwd)
167 ret.use_cache = d.get(Output.PARAM_CACHE, True)
168 return ret
169
170 @classmethod
171 def loads(cls, project, s, use_cache=True, cwd=os.curdir):
172 ret = super(Output, cls).loads(project, s, cwd=cwd)
173 ret.use_cache = use_cache
174 return ret
175
176 @classmethod
177 def loads_from(cls, project, s_list, use_cache=False, cwd=os.curdir):
178 return [cls.loads(project, x, use_cache=use_cache, cwd=cwd) for x in s_list]
179
180 def changed(self):
181 if not self.use_cache:
182 return super(Output, self).changed()
183
184 return not os.path.exists(self.path) or \
185 not os.path.exists(self.cache) or \
186 not System.samefile(self.path, self.cache)
187
188 def link(self, checkout=False):
189 if not self.use_cache:
190 raise CmdOutputNoCacheError(self.path)
191
192 if not os.path.exists(self.path) and not os.path.exists(self.cache):
193 raise CmdOutputNoCacheError(self.path)
194
195 if os.path.exists(self.path) and \
196 os.path.exists(self.cache) and \
197 System.samefile(self.path, self.cache) and \
198 os.stat(self.cache).st_mode & stat.S_IREAD:
199 return
200
201 if os.path.exists(self.cache):
202 if os.path.exists(self.path):
203 # This means that we already have cache for this data.
204 # We remove data and link it to existing cache to save
205 # some space.
206 self.remove()
207 src = self.cache
208 link = self.path
209 elif not checkout:
210 src = self.path
211 link = self.cache
212 else:
213 raise CmdOutputNoCacheError(self.path)
214
215 System.hardlink(src, link)
216
217 os.chmod(self.path, stat.S_IREAD)
218
219 def checkout(self):
220 if not self.use_cache:
221 return
222 if not os.path.exists(self.cache):
223 self.project.logger.warn(u'\'{}\': cache file not found'.format(self.dvc_path))
224 self.remove()
225 else:
226 self.link(checkout=True)
227
228 def save(self):
229 if not self.use_cache:
230 return
231
232 if self.project.scm.is_tracked(self.path):
233 raise CmdOutputAlreadyTrackedError(self.path)
234
235 self.link()
236
237 def remove(self):
238 if not os.path.exists(self.path):
239 return
240
241 self.project.logger.debug("Removing '{}'".format(self.path))
242 os.chmod(self.path, stat.S_IWUSR)
243 os.unlink(self.path)
244 if os.path.exists(self.cache):
245 os.chmod(self.cache, stat.S_IREAD)
246
247
248 class Stage(object):
249 STAGE_FILE = 'Dvcfile'
250 STAGE_FILE_SUFFIX = '.dvc'
251
252 PARAM_CMD = 'cmd'
253 PARAM_DEPS = 'deps'
254 PARAM_OUTS = 'outs'
255
256 def __init__(self, project, path=None, cmd=None, cwd=None, deps=[], outs=[]):
257 self.project = project
258 self.path = path
259 self.cmd = cmd
260 self.cwd = cwd
261 self.outs = outs
262 self.deps = deps
263
264 @property
265 def relpath(self):
266 return os.path.relpath(self.path)
267
268 @property
269 def dvc_path(self):
270 return os.path.relpath(self.path, self.project.root_dir)
271
272 @property
273 def is_data_source(self):
274 return self.cmd is None
275
276 @staticmethod
277 def is_stage_file(path):
278 if not os.path.isfile(path):
279 return False
280
281 if not path.endswith(Stage.STAGE_FILE_SUFFIX) and os.path.basename(path) != Stage.STAGE_FILE:
282 return False
283
284 return True
285
286 def changed(self):
287 for entry in itertools.chain(self.outs, self.deps):
288 if entry.changed():
289 self.project.logger.debug("{} changed".format(self.path))
290 return True
291 return False
292
293 def remove_outs(self):
294 for out in self.outs:
295 out.remove()
296 self.project.scm.ignore_remove(out.path)
297
298 def remove(self):
299 self.remove_outs()
300 os.unlink(self.path)
301
302 def reproduce(self, force=False):
303 if not self.changed() and not force:
304 return
305
306 if self.cmd:
307 # Removing outputs only if we actually have command to reproduce
308 self.remove_outs()
309
310 self.run()
311
312 @staticmethod
313 def loadd(project, d, path):
314 path = os.path.abspath(path)
315 cwd = os.path.dirname(path)
316 cmd = d[Stage.PARAM_CMD]
317 deps = Dependency.loadd_from(project, d[Stage.PARAM_DEPS], cwd=cwd)
318 outs = Output.loadd_from(project, d[Stage.PARAM_OUTS], cwd=cwd)
319
320 return Stage(project=project,
321 path=path,
322 cmd=cmd,
323 cwd=cwd,
324 deps=deps,
325 outs=outs)
326
327 @staticmethod
328 def load(project, fname):
329 with open(fname, 'r') as fd:
330 return Stage.loadd(project, yaml.safe_load(fd), fname)
331
332 def dumpd(self):
333 deps = [x.dumpd(self.cwd) for x in self.deps]
334 outs = [x.dumpd(self.cwd) for x in self.outs]
335
336 return {
337 Stage.PARAM_CMD: self.cmd,
338 Stage.PARAM_DEPS: deps,
339 Stage.PARAM_OUTS: outs,
340 }
341
342 def dump(self, fname=None):
343 if not fname:
344 fname = self.path
345
346 with open(fname, 'w') as fd:
347 yaml.safe_dump(self.dumpd(), fd, default_flow_style=False)
348
349 def save(self):
350 for dep in self.deps:
351 dep.update()
352
353 for out in self.outs:
354 out.update()
355 out.save()
356 self.project.scm.ignore(out.path)
357
358 def run(self):
359 if not self.is_data_source:
360 self.project.logger.info(u'Reproducing \'{}\':\n\t{}'.format(self.relpath, self.cmd))
361
362 Executor.exec_cmd_only_success(self.cmd, cwd=str(self.cwd), shell=True)
363 self.save()
364
365 self.project.logger.debug(u'\'{}\' was reproduced'.format(self.relpath))
366 else:
367 self.project.logger.info(u'Verifying data sources in \'{}\''.format(self.relpath))
368 self.check_missing_outputs()
369 self.save()
370
371 def check_missing_outputs(self):
372 missing_outs = [out.rel_path for out in self.outs if not os.path.exists(out.rel_path)]
373 if missing_outs:
374 raise MissingDataSource(missing_outs)
375
376 def checkout(self):
377 for out in self.outs:
378 out.checkout()
379
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dvc/stage.py b/dvc/stage.py
--- a/dvc/stage.py
+++ b/dvc/stage.py
@@ -293,7 +293,8 @@
def remove_outs(self):
for out in self.outs:
out.remove()
- self.project.scm.ignore_remove(out.path)
+ if out.use_cache:
+ self.project.scm.ignore_remove(out.path)
def remove(self):
self.remove_outs()
@@ -353,7 +354,8 @@
for out in self.outs:
out.update()
out.save()
- self.project.scm.ignore(out.path)
+ if out.use_cache:
+ self.project.scm.ignore(out.path)
def run(self):
if not self.is_data_source:
|
{"golden_diff": "diff --git a/dvc/stage.py b/dvc/stage.py\n--- a/dvc/stage.py\n+++ b/dvc/stage.py\n@@ -293,7 +293,8 @@\n def remove_outs(self):\n for out in self.outs:\n out.remove()\n- self.project.scm.ignore_remove(out.path)\n+ if out.use_cache:\n+ self.project.scm.ignore_remove(out.path)\n \n def remove(self):\n self.remove_outs()\n@@ -353,7 +354,8 @@\n for out in self.outs:\n out.update()\n out.save()\n- self.project.scm.ignore(out.path)\n+ if out.use_cache:\n+ self.project.scm.ignore(out.path)\n \n def run(self):\n if not self.is_data_source:\n", "issue": "dvc run: regular output files should not be in .gitignore\nPlease add a corresponded unit-test. See examples in class `TestGitIgnoreWhenCheckout` file `test_checkout.py`.\n", "before_files": [{"content": "import os\nimport stat\nimport yaml\nimport itertools\n\nfrom dvc.system import System\nfrom dvc.utils import file_md5\nfrom dvc.exceptions import DvcException\nfrom dvc.executor import Executor\n\n\nclass OutputError(DvcException):\n pass\n\n\nclass MissingDataSource(OutputError):\n def __init__(self, missing_files):\n assert len(missing_files) > 0\n\n source = 'source'\n if len(missing_files) > 1:\n source += 's'\n\n msg = u'missing data {}: {}'.format(source, ', '.join(missing_files))\n super(MissingDataSource, self).__init__(msg)\n\n\nclass CmdOutputError(DvcException):\n def __init__(self, path, msg):\n super(CmdOutputError, self).__init__('Output file \\'{}\\' error: {}'.format(path, msg))\n\n\nclass CmdOutputNoCacheError(CmdOutputError):\n def __init__(self, path):\n super(CmdOutputNoCacheError, self).__init__(path, 'no cache')\n\n\nclass CmdOutputOutsideOfRepoError(CmdOutputError):\n def __init__(self, path):\n super(CmdOutputOutsideOfRepoError, self).__init__(path, 'outside of repository')\n\n\nclass CmdOutputDoesNotExistError(CmdOutputError):\n def __init__(self, path):\n super(CmdOutputDoesNotExistError, self).__init__(path, 'does not exist')\n\n\nclass CmdOutputIsNotFileError(CmdOutputError):\n def __init__(self, path):\n super(CmdOutputIsNotFileError, self).__init__(path, 'not a file')\n\n\nclass CmdOutputAlreadyTrackedError(CmdOutputError):\n def __init__(self, path):\n super(CmdOutputAlreadyTrackedError, self).__init__(path, 'already tracked by scm(e.g. git)')\n\n\nclass Dependency(object):\n PARAM_PATH = 'path'\n PARAM_MD5 = 'md5'\n\n def __init__(self, project, path, md5=None):\n self.project = project\n self.path = os.path.abspath(os.path.realpath(path))\n\n if not self.path.startswith(self.project.root_dir):\n raise CmdOutputOutsideOfRepoError(self.path)\n\n self.md5 = md5\n\n @property\n def dvc_path(self):\n return os.path.relpath(self.path, self.project.root_dir)\n\n @property\n def rel_path(self):\n return os.path.relpath(self.path, '.')\n\n def _changed_md5(self):\n if not os.path.exists(self.path):\n return True\n\n state = self.project.state.get(self.path)\n if state and state.mtime == self.mtime():\n md5 = state.md5\n else:\n md5 = file_md5(self.path)[0]\n\n return self.md5 != md5\n\n def changed(self):\n return self._changed_md5()\n\n def mtime(self):\n return os.path.getmtime(self.path)\n\n def inode(self):\n return os.stat(self.path).st_ino\n\n def update(self):\n if not os.path.exists(self.path):\n raise CmdOutputDoesNotExistError(self.rel_path)\n if not os.path.isfile(self.path):\n raise CmdOutputIsNotFileError(self.path)\n\n state = self.project.state.get(self.path)\n if state and state.mtime == self.mtime() and state.inode == self.inode():\n md5 = state.md5\n msg = '{} using md5 {} from state file'\n self.project.logger.debug(msg.format(self.path, md5))\n self.md5 = md5\n else:\n self.md5 = file_md5(self.path)[0]\n self.project.state.update(self.path, self.md5, self.mtime(), self.inode())\n\n def dumpd(self, cwd):\n return {\n Output.PARAM_PATH: os.path.relpath(self.path, cwd),\n Output.PARAM_MD5: self.md5,\n }\n\n @classmethod\n def loadd(cls, project, d, cwd=os.curdir):\n path = os.path.join(cwd, d[Output.PARAM_PATH])\n md5 = d.get(Output.PARAM_MD5, None)\n return cls(project, path, md5=md5)\n\n @classmethod\n def loadd_from(cls, project, d_list, cwd=os.curdir):\n return [cls.loadd(project, x, cwd=cwd) for x in d_list]\n\n @classmethod\n def loads(cls, project, s, cwd=os.curdir):\n return cls(project, os.path.join(cwd, s), md5=None)\n\n @classmethod\n def loads_from(cls, project, s_list, cwd=os.curdir):\n return [cls.loads(project, x, cwd=cwd) for x in s_list]\n\n def stage(self):\n for stage in self.project.stages():\n for out in stage.outs:\n if self.path == out.path:\n return stage\n return None\n\n\nclass Output(Dependency):\n PARAM_CACHE = 'cache'\n\n def __init__(self, project, path, md5=None, use_cache=True):\n super(Output, self).__init__(project, path, md5=md5)\n self.use_cache = use_cache\n\n @property\n def cache(self):\n return self.project.cache.get(self.md5)\n\n def dumpd(self, cwd):\n ret = super(Output, self).dumpd(cwd)\n ret[Output.PARAM_CACHE] = self.use_cache\n return ret\n\n @classmethod\n def loadd(cls, project, d, cwd=os.curdir):\n ret = super(Output, cls).loadd(project, d, cwd=cwd)\n ret.use_cache = d.get(Output.PARAM_CACHE, True)\n return ret\n\n @classmethod\n def loads(cls, project, s, use_cache=True, cwd=os.curdir):\n ret = super(Output, cls).loads(project, s, cwd=cwd)\n ret.use_cache = use_cache\n return ret\n\n @classmethod\n def loads_from(cls, project, s_list, use_cache=False, cwd=os.curdir):\n return [cls.loads(project, x, use_cache=use_cache, cwd=cwd) for x in s_list]\n\n def changed(self):\n if not self.use_cache:\n return super(Output, self).changed()\n\n return not os.path.exists(self.path) or \\\n not os.path.exists(self.cache) or \\\n not System.samefile(self.path, self.cache)\n\n def link(self, checkout=False):\n if not self.use_cache:\n raise CmdOutputNoCacheError(self.path)\n\n if not os.path.exists(self.path) and not os.path.exists(self.cache):\n raise CmdOutputNoCacheError(self.path)\n\n if os.path.exists(self.path) and \\\n os.path.exists(self.cache) and \\\n System.samefile(self.path, self.cache) and \\\n os.stat(self.cache).st_mode & stat.S_IREAD:\n return\n\n if os.path.exists(self.cache):\n if os.path.exists(self.path):\n # This means that we already have cache for this data.\n # We remove data and link it to existing cache to save\n # some space.\n self.remove()\n src = self.cache\n link = self.path\n elif not checkout:\n src = self.path\n link = self.cache\n else:\n raise CmdOutputNoCacheError(self.path)\n\n System.hardlink(src, link)\n\n os.chmod(self.path, stat.S_IREAD)\n\n def checkout(self):\n if not self.use_cache:\n return\n if not os.path.exists(self.cache):\n self.project.logger.warn(u'\\'{}\\': cache file not found'.format(self.dvc_path))\n self.remove()\n else:\n self.link(checkout=True)\n\n def save(self):\n if not self.use_cache:\n return\n\n if self.project.scm.is_tracked(self.path):\n raise CmdOutputAlreadyTrackedError(self.path)\n\n self.link()\n\n def remove(self):\n if not os.path.exists(self.path):\n return\n\n self.project.logger.debug(\"Removing '{}'\".format(self.path))\n os.chmod(self.path, stat.S_IWUSR)\n os.unlink(self.path)\n if os.path.exists(self.cache):\n os.chmod(self.cache, stat.S_IREAD)\n\n\nclass Stage(object):\n STAGE_FILE = 'Dvcfile'\n STAGE_FILE_SUFFIX = '.dvc'\n\n PARAM_CMD = 'cmd'\n PARAM_DEPS = 'deps'\n PARAM_OUTS = 'outs'\n\n def __init__(self, project, path=None, cmd=None, cwd=None, deps=[], outs=[]):\n self.project = project\n self.path = path\n self.cmd = cmd\n self.cwd = cwd\n self.outs = outs\n self.deps = deps\n\n @property\n def relpath(self):\n return os.path.relpath(self.path)\n\n @property\n def dvc_path(self):\n return os.path.relpath(self.path, self.project.root_dir)\n\n @property\n def is_data_source(self):\n return self.cmd is None\n\n @staticmethod\n def is_stage_file(path):\n if not os.path.isfile(path):\n return False\n\n if not path.endswith(Stage.STAGE_FILE_SUFFIX) and os.path.basename(path) != Stage.STAGE_FILE:\n return False\n\n return True\n\n def changed(self):\n for entry in itertools.chain(self.outs, self.deps):\n if entry.changed():\n self.project.logger.debug(\"{} changed\".format(self.path))\n return True\n return False\n\n def remove_outs(self):\n for out in self.outs:\n out.remove()\n self.project.scm.ignore_remove(out.path)\n\n def remove(self):\n self.remove_outs()\n os.unlink(self.path)\n\n def reproduce(self, force=False):\n if not self.changed() and not force:\n return\n\n if self.cmd:\n # Removing outputs only if we actually have command to reproduce\n self.remove_outs()\n\n self.run()\n\n @staticmethod\n def loadd(project, d, path):\n path = os.path.abspath(path)\n cwd = os.path.dirname(path)\n cmd = d[Stage.PARAM_CMD]\n deps = Dependency.loadd_from(project, d[Stage.PARAM_DEPS], cwd=cwd)\n outs = Output.loadd_from(project, d[Stage.PARAM_OUTS], cwd=cwd)\n\n return Stage(project=project,\n path=path,\n cmd=cmd,\n cwd=cwd,\n deps=deps,\n outs=outs)\n\n @staticmethod\n def load(project, fname):\n with open(fname, 'r') as fd:\n return Stage.loadd(project, yaml.safe_load(fd), fname)\n\n def dumpd(self):\n deps = [x.dumpd(self.cwd) for x in self.deps]\n outs = [x.dumpd(self.cwd) for x in self.outs]\n\n return {\n Stage.PARAM_CMD: self.cmd,\n Stage.PARAM_DEPS: deps,\n Stage.PARAM_OUTS: outs,\n }\n\n def dump(self, fname=None):\n if not fname:\n fname = self.path\n\n with open(fname, 'w') as fd:\n yaml.safe_dump(self.dumpd(), fd, default_flow_style=False)\n\n def save(self):\n for dep in self.deps:\n dep.update()\n\n for out in self.outs:\n out.update()\n out.save()\n self.project.scm.ignore(out.path)\n\n def run(self):\n if not self.is_data_source:\n self.project.logger.info(u'Reproducing \\'{}\\':\\n\\t{}'.format(self.relpath, self.cmd))\n\n Executor.exec_cmd_only_success(self.cmd, cwd=str(self.cwd), shell=True)\n self.save()\n\n self.project.logger.debug(u'\\'{}\\' was reproduced'.format(self.relpath))\n else:\n self.project.logger.info(u'Verifying data sources in \\'{}\\''.format(self.relpath))\n self.check_missing_outputs()\n self.save()\n\n def check_missing_outputs(self):\n missing_outs = [out.rel_path for out in self.outs if not os.path.exists(out.rel_path)]\n if missing_outs:\n raise MissingDataSource(missing_outs)\n\n def checkout(self):\n for out in self.outs:\n out.checkout()\n", "path": "dvc/stage.py"}], "after_files": [{"content": "import os\nimport stat\nimport yaml\nimport itertools\n\nfrom dvc.system import System\nfrom dvc.utils import file_md5\nfrom dvc.exceptions import DvcException\nfrom dvc.executor import Executor\n\n\nclass OutputError(DvcException):\n pass\n\n\nclass MissingDataSource(OutputError):\n def __init__(self, missing_files):\n assert len(missing_files) > 0\n\n source = 'source'\n if len(missing_files) > 1:\n source += 's'\n\n msg = u'missing data {}: {}'.format(source, ', '.join(missing_files))\n super(MissingDataSource, self).__init__(msg)\n\n\nclass CmdOutputError(DvcException):\n def __init__(self, path, msg):\n super(CmdOutputError, self).__init__('Output file \\'{}\\' error: {}'.format(path, msg))\n\n\nclass CmdOutputNoCacheError(CmdOutputError):\n def __init__(self, path):\n super(CmdOutputNoCacheError, self).__init__(path, 'no cache')\n\n\nclass CmdOutputOutsideOfRepoError(CmdOutputError):\n def __init__(self, path):\n super(CmdOutputOutsideOfRepoError, self).__init__(path, 'outside of repository')\n\n\nclass CmdOutputDoesNotExistError(CmdOutputError):\n def __init__(self, path):\n super(CmdOutputDoesNotExistError, self).__init__(path, 'does not exist')\n\n\nclass CmdOutputIsNotFileError(CmdOutputError):\n def __init__(self, path):\n super(CmdOutputIsNotFileError, self).__init__(path, 'not a file')\n\n\nclass CmdOutputAlreadyTrackedError(CmdOutputError):\n def __init__(self, path):\n super(CmdOutputAlreadyTrackedError, self).__init__(path, 'already tracked by scm(e.g. git)')\n\n\nclass Dependency(object):\n PARAM_PATH = 'path'\n PARAM_MD5 = 'md5'\n\n def __init__(self, project, path, md5=None):\n self.project = project\n self.path = os.path.abspath(os.path.realpath(path))\n\n if not self.path.startswith(self.project.root_dir):\n raise CmdOutputOutsideOfRepoError(self.path)\n\n self.md5 = md5\n\n @property\n def dvc_path(self):\n return os.path.relpath(self.path, self.project.root_dir)\n\n @property\n def rel_path(self):\n return os.path.relpath(self.path, '.')\n\n def _changed_md5(self):\n if not os.path.exists(self.path):\n return True\n\n state = self.project.state.get(self.path)\n if state and state.mtime == self.mtime():\n md5 = state.md5\n else:\n md5 = file_md5(self.path)[0]\n\n return self.md5 != md5\n\n def changed(self):\n return self._changed_md5()\n\n def mtime(self):\n return os.path.getmtime(self.path)\n\n def inode(self):\n return os.stat(self.path).st_ino\n\n def update(self):\n if not os.path.exists(self.path):\n raise CmdOutputDoesNotExistError(self.rel_path)\n if not os.path.isfile(self.path):\n raise CmdOutputIsNotFileError(self.path)\n\n state = self.project.state.get(self.path)\n if state and state.mtime == self.mtime() and state.inode == self.inode():\n md5 = state.md5\n msg = '{} using md5 {} from state file'\n self.project.logger.debug(msg.format(self.path, md5))\n self.md5 = md5\n else:\n self.md5 = file_md5(self.path)[0]\n self.project.state.update(self.path, self.md5, self.mtime(), self.inode())\n\n def dumpd(self, cwd):\n return {\n Output.PARAM_PATH: os.path.relpath(self.path, cwd),\n Output.PARAM_MD5: self.md5,\n }\n\n @classmethod\n def loadd(cls, project, d, cwd=os.curdir):\n path = os.path.join(cwd, d[Output.PARAM_PATH])\n md5 = d.get(Output.PARAM_MD5, None)\n return cls(project, path, md5=md5)\n\n @classmethod\n def loadd_from(cls, project, d_list, cwd=os.curdir):\n return [cls.loadd(project, x, cwd=cwd) for x in d_list]\n\n @classmethod\n def loads(cls, project, s, cwd=os.curdir):\n return cls(project, os.path.join(cwd, s), md5=None)\n\n @classmethod\n def loads_from(cls, project, s_list, cwd=os.curdir):\n return [cls.loads(project, x, cwd=cwd) for x in s_list]\n\n def stage(self):\n for stage in self.project.stages():\n for out in stage.outs:\n if self.path == out.path:\n return stage\n return None\n\n\nclass Output(Dependency):\n PARAM_CACHE = 'cache'\n\n def __init__(self, project, path, md5=None, use_cache=True):\n super(Output, self).__init__(project, path, md5=md5)\n self.use_cache = use_cache\n\n @property\n def cache(self):\n return self.project.cache.get(self.md5)\n\n def dumpd(self, cwd):\n ret = super(Output, self).dumpd(cwd)\n ret[Output.PARAM_CACHE] = self.use_cache\n return ret\n\n @classmethod\n def loadd(cls, project, d, cwd=os.curdir):\n ret = super(Output, cls).loadd(project, d, cwd=cwd)\n ret.use_cache = d.get(Output.PARAM_CACHE, True)\n return ret\n\n @classmethod\n def loads(cls, project, s, use_cache=True, cwd=os.curdir):\n ret = super(Output, cls).loads(project, s, cwd=cwd)\n ret.use_cache = use_cache\n return ret\n\n @classmethod\n def loads_from(cls, project, s_list, use_cache=False, cwd=os.curdir):\n return [cls.loads(project, x, use_cache=use_cache, cwd=cwd) for x in s_list]\n\n def changed(self):\n if not self.use_cache:\n return super(Output, self).changed()\n\n return not os.path.exists(self.path) or \\\n not os.path.exists(self.cache) or \\\n not System.samefile(self.path, self.cache)\n\n def link(self, checkout=False):\n if not self.use_cache:\n raise CmdOutputNoCacheError(self.path)\n\n if not os.path.exists(self.path) and not os.path.exists(self.cache):\n raise CmdOutputNoCacheError(self.path)\n\n if os.path.exists(self.path) and \\\n os.path.exists(self.cache) and \\\n System.samefile(self.path, self.cache) and \\\n os.stat(self.cache).st_mode & stat.S_IREAD:\n return\n\n if os.path.exists(self.cache):\n if os.path.exists(self.path):\n # This means that we already have cache for this data.\n # We remove data and link it to existing cache to save\n # some space.\n self.remove()\n src = self.cache\n link = self.path\n elif not checkout:\n src = self.path\n link = self.cache\n else:\n raise CmdOutputNoCacheError(self.path)\n\n System.hardlink(src, link)\n\n os.chmod(self.path, stat.S_IREAD)\n\n def checkout(self):\n if not self.use_cache:\n return\n if not os.path.exists(self.cache):\n self.project.logger.warn(u'\\'{}\\': cache file not found'.format(self.dvc_path))\n self.remove()\n else:\n self.link(checkout=True)\n\n def save(self):\n if not self.use_cache:\n return\n\n if self.project.scm.is_tracked(self.path):\n raise CmdOutputAlreadyTrackedError(self.path)\n\n self.link()\n\n def remove(self):\n if not os.path.exists(self.path):\n return\n\n self.project.logger.debug(\"Removing '{}'\".format(self.path))\n os.chmod(self.path, stat.S_IWUSR)\n os.unlink(self.path)\n if os.path.exists(self.cache):\n os.chmod(self.cache, stat.S_IREAD)\n\n\nclass Stage(object):\n STAGE_FILE = 'Dvcfile'\n STAGE_FILE_SUFFIX = '.dvc'\n\n PARAM_CMD = 'cmd'\n PARAM_DEPS = 'deps'\n PARAM_OUTS = 'outs'\n\n def __init__(self, project, path=None, cmd=None, cwd=None, deps=[], outs=[]):\n self.project = project\n self.path = path\n self.cmd = cmd\n self.cwd = cwd\n self.outs = outs\n self.deps = deps\n\n @property\n def relpath(self):\n return os.path.relpath(self.path)\n\n @property\n def dvc_path(self):\n return os.path.relpath(self.path, self.project.root_dir)\n\n @property\n def is_data_source(self):\n return self.cmd is None\n\n @staticmethod\n def is_stage_file(path):\n if not os.path.isfile(path):\n return False\n\n if not path.endswith(Stage.STAGE_FILE_SUFFIX) and os.path.basename(path) != Stage.STAGE_FILE:\n return False\n\n return True\n\n def changed(self):\n for entry in itertools.chain(self.outs, self.deps):\n if entry.changed():\n self.project.logger.debug(\"{} changed\".format(self.path))\n return True\n return False\n\n def remove_outs(self):\n for out in self.outs:\n out.remove()\n if out.use_cache:\n self.project.scm.ignore_remove(out.path)\n\n def remove(self):\n self.remove_outs()\n os.unlink(self.path)\n\n def reproduce(self, force=False):\n if not self.changed() and not force:\n return\n\n if self.cmd:\n # Removing outputs only if we actually have command to reproduce\n self.remove_outs()\n\n self.run()\n\n @staticmethod\n def loadd(project, d, path):\n path = os.path.abspath(path)\n cwd = os.path.dirname(path)\n cmd = d[Stage.PARAM_CMD]\n deps = Dependency.loadd_from(project, d[Stage.PARAM_DEPS], cwd=cwd)\n outs = Output.loadd_from(project, d[Stage.PARAM_OUTS], cwd=cwd)\n\n return Stage(project=project,\n path=path,\n cmd=cmd,\n cwd=cwd,\n deps=deps,\n outs=outs)\n\n @staticmethod\n def load(project, fname):\n with open(fname, 'r') as fd:\n return Stage.loadd(project, yaml.safe_load(fd), fname)\n\n def dumpd(self):\n deps = [x.dumpd(self.cwd) for x in self.deps]\n outs = [x.dumpd(self.cwd) for x in self.outs]\n\n return {\n Stage.PARAM_CMD: self.cmd,\n Stage.PARAM_DEPS: deps,\n Stage.PARAM_OUTS: outs,\n }\n\n def dump(self, fname=None):\n if not fname:\n fname = self.path\n\n with open(fname, 'w') as fd:\n yaml.safe_dump(self.dumpd(), fd, default_flow_style=False)\n\n def save(self):\n for dep in self.deps:\n dep.update()\n\n for out in self.outs:\n out.update()\n out.save()\n if out.use_cache:\n self.project.scm.ignore(out.path)\n\n def run(self):\n if not self.is_data_source:\n self.project.logger.info(u'Reproducing \\'{}\\':\\n\\t{}'.format(self.relpath, self.cmd))\n\n Executor.exec_cmd_only_success(self.cmd, cwd=str(self.cwd), shell=True)\n self.save()\n\n self.project.logger.debug(u'\\'{}\\' was reproduced'.format(self.relpath))\n else:\n self.project.logger.info(u'Verifying data sources in \\'{}\\''.format(self.relpath))\n self.check_missing_outputs()\n self.save()\n\n def check_missing_outputs(self):\n missing_outs = [out.rel_path for out in self.outs if not os.path.exists(out.rel_path)]\n if missing_outs:\n raise MissingDataSource(missing_outs)\n\n def checkout(self):\n for out in self.outs:\n out.checkout()\n", "path": "dvc/stage.py"}]}
| 4,007 | 177 |
gh_patches_debug_42116
|
rasdani/github-patches
|
git_diff
|
opsdroid__opsdroid-653
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Google Style Docstrings
We should implement Google Style Docstrings to every function, method, class in opsdroid. This style will support existing documentation and will help in the future by generating documentation automatically.
This consists in a bit of effort so this issue can be worked by more than one contributor, just make sure that everyone knows what you are working on in order to avoid other contributors spending time on something that you are working on.
If you are unfamiliar with the Google Style Docstrings I'd recommend that you check these resources:
- [Sphix 1.8.0+ - Google Style Docstrings](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html)
Docstrings that need to be updated:
- main.py
- [x] configure_lang
- [ ] configure_log
- [ ] get_logging_level
- [ ] check_dependencies
- [ ] print_version
- [ ] print_example_config
- [ ] edit_files
- [x] welcome_message
- ~~helper.py~~
- [x] get_opsdroid
- [x] del_rw
- [x] move_config_to_appdir
- memory.py
- [x] Memory
- [x] get
- [x] put
- [x] _get_from_database
- [x] _put_to_database
- message.py
- [x] Message
- [x] __init__
- [x] _thinking_delay
- [x] _typing delay
- [x] respond
- [x] react
- web.py
- [ ] Web
- [x] get_port
- [x] get_host
- [x] get_ssl_context
- [ ] start
- [ ] build_response
- [ ] web_index_handler
- [ ] web_stats_handler
- matchers.py
- [ ] match_regex
- [ ] match_apiai_action
- [ ] match_apiai_intent
- [ ] match_dialogflow_action
- [ ] match_dialogflow_intent
- [ ] match_luisai_intent
- [ ] match_rasanlu
- [ ] match_recastai
- [ ] match_witai
- [ ] match_crontab
- [ ] match_webhook
- [ ] match_always
- core.py
- [ ] OpsDroid
- [ ] default_connector
- [ ] exit
- [ ] critical
- [ ] call_stop
- [ ] disconnect
- [ ] stop
- [ ] load
- [ ] start_loop
- [x] setup_skills
- [ ] train_parsers
- [ ] start_connector_tasks
- [ ] start_database
- [ ] run_skill
- [ ] get_ranked_skills
- [ ] parse
- loader.py
- [ ] Loader
- [x] import_module_from_spec
- [x] import_module
- [x] check_cache
- [x] build_module_import_path
- [x] build_module_install_path
- [x] git_clone
- [x] git_pull
- [x] pip_install_deps
- [x] create_default_config
- [x] load_config_file
- [ ] envvar_constructor
- [ ] include_constructor
- [x] setup_modules_directory
- [x] load_modules_from_config
- [x] _load_modules
- [x] _install_module
- [x] _update_module
- [ ] _install_git_module
- [x] _install_local_module
---- ORIGINAL POST ----
I've been wondering about this for a while now and I would like to know if we should replace/update all the docstrings in opsdroid with the Google Style doc strings.
I think this could help new and old contributors to contribute and commit to opsdroid since the Google Style docstrings give more information about every method/function and specifies clearly what sort of input the function/method expects, what will it return and what will be raised (if applicable).
The downsize of this style is that the length of every .py file will increase due to the doc strings, but since most IDE's allow you to hide those fields it shouldn't be too bad.
Here is a good example of Google Style Doc strings: [Sphix 1.8.0+ - Google Style Docstrings](http://www.sphinx-doc.org/en/master/ext/example_google.html)
I would like to know what you all think about this idea and if its worth spending time on it.
Add Google Style Docstrings
We should implement Google Style Docstrings to every function, method, class in opsdroid. This style will support existing documentation and will help in the future by generating documentation automatically.
This consists in a bit of effort so this issue can be worked by more than one contributor, just make sure that everyone knows what you are working on in order to avoid other contributors spending time on something that you are working on.
If you are unfamiliar with the Google Style Docstrings I'd recommend that you check these resources:
- [Sphix 1.8.0+ - Google Style Docstrings](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html)
Docstrings that need to be updated:
- main.py
- [x] configure_lang
- [ ] configure_log
- [ ] get_logging_level
- [ ] check_dependencies
- [ ] print_version
- [ ] print_example_config
- [ ] edit_files
- [x] welcome_message
- ~~helper.py~~
- [x] get_opsdroid
- [x] del_rw
- [x] move_config_to_appdir
- memory.py
- [x] Memory
- [x] get
- [x] put
- [x] _get_from_database
- [x] _put_to_database
- message.py
- [x] Message
- [x] __init__
- [x] _thinking_delay
- [x] _typing delay
- [x] respond
- [x] react
- web.py
- [ ] Web
- [x] get_port
- [x] get_host
- [x] get_ssl_context
- [ ] start
- [ ] build_response
- [ ] web_index_handler
- [ ] web_stats_handler
- matchers.py
- [ ] match_regex
- [ ] match_apiai_action
- [ ] match_apiai_intent
- [ ] match_dialogflow_action
- [ ] match_dialogflow_intent
- [ ] match_luisai_intent
- [ ] match_rasanlu
- [ ] match_recastai
- [ ] match_witai
- [ ] match_crontab
- [ ] match_webhook
- [ ] match_always
- core.py
- [ ] OpsDroid
- [ ] default_connector
- [ ] exit
- [ ] critical
- [ ] call_stop
- [ ] disconnect
- [ ] stop
- [ ] load
- [ ] start_loop
- [x] setup_skills
- [ ] train_parsers
- [ ] start_connector_tasks
- [ ] start_database
- [ ] run_skill
- [ ] get_ranked_skills
- [ ] parse
- loader.py
- [ ] Loader
- [x] import_module_from_spec
- [x] import_module
- [x] check_cache
- [x] build_module_import_path
- [x] build_module_install_path
- [x] git_clone
- [x] git_pull
- [x] pip_install_deps
- [x] create_default_config
- [x] load_config_file
- [ ] envvar_constructor
- [ ] include_constructor
- [x] setup_modules_directory
- [x] load_modules_from_config
- [x] _load_modules
- [x] _install_module
- [x] _update_module
- [ ] _install_git_module
- [x] _install_local_module
---- ORIGINAL POST ----
I've been wondering about this for a while now and I would like to know if we should replace/update all the docstrings in opsdroid with the Google Style doc strings.
I think this could help new and old contributors to contribute and commit to opsdroid since the Google Style docstrings give more information about every method/function and specifies clearly what sort of input the function/method expects, what will it return and what will be raised (if applicable).
The downsize of this style is that the length of every .py file will increase due to the doc strings, but since most IDE's allow you to hide those fields it shouldn't be too bad.
Here is a good example of Google Style Doc strings: [Sphix 1.8.0+ - Google Style Docstrings](http://www.sphinx-doc.org/en/master/ext/example_google.html)
I would like to know what you all think about this idea and if its worth spending time on it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opsdroid/message.py`
Content:
```
1 """Class to encapsulate a message."""
2
3 from datetime import datetime
4 from copy import copy
5 import asyncio
6 from random import randrange
7
8 from opsdroid.helper import get_opsdroid
9
10
11 class Message:
12 # pylint: disable=too-few-public-methods
13 """A message object."""
14
15 def __init__(self, text, user, room, connector, raw_message=None):
16 """Create object with minimum properties."""
17 self.created = datetime.now()
18 self.text = text
19 self.user = user
20 self.room = room
21 self.connector = connector
22 self.raw_message = raw_message
23 self.regex = None
24 self.responded_to = False
25
26 async def _thinking_delay(self):
27 """Make opsdroid wait x-seconds before responding."""
28 seconds = self.connector.configuration.get('thinking-delay', 0)
29
30 if isinstance(seconds, list):
31 seconds = randrange(seconds[0], seconds[1])
32
33 await asyncio.sleep(seconds)
34
35 async def _typing_delay(self, text):
36 """Simulate typing, takes an int or float to delay reply."""
37 seconds = self.connector.configuration.get('typing-delay', 0)
38 char_count = len(text)
39
40 if isinstance(seconds, list):
41 seconds = randrange(seconds[0], seconds[1])
42
43 await asyncio.sleep(char_count*seconds)
44
45 async def respond(self, text, room=None):
46 """Respond to this message using the connector it was created by."""
47 opsdroid = get_opsdroid()
48 response = copy(self)
49 response.text = text
50
51 if 'thinking-delay' in self.connector.configuration or \
52 'typing-delay' in self.connector.configuration:
53 await self._thinking_delay()
54 await self._typing_delay(response.text)
55
56 await self.connector.respond(response, room)
57 if not self.responded_to:
58 now = datetime.now()
59 opsdroid.stats["total_responses"] = \
60 opsdroid.stats["total_responses"] + 1
61 opsdroid.stats["total_response_time"] = \
62 opsdroid.stats["total_response_time"] + \
63 (now - self.created).total_seconds()
64 self.responded_to = True
65
66 async def react(self, emoji):
67 """React to this message using the connector it was created by."""
68 if 'thinking-delay' in self.connector.configuration:
69 await self._thinking_delay()
70 return await self.connector.react(self, emoji)
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opsdroid/message.py b/opsdroid/message.py
--- a/opsdroid/message.py
+++ b/opsdroid/message.py
@@ -10,7 +10,31 @@
class Message:
# pylint: disable=too-few-public-methods
- """A message object."""
+ """A message object.
+
+ Stores messages in a format that allows OpsDroid to respond or react with
+ delays for thinking and typing as defined in configuration YAML file.
+
+ Args:
+ text: String text of message
+ user: String name of user sending message
+ room: String name of the room or chat channel in which message was sent
+ connector: Connector object used to interact with given chat service
+ raw_message: Raw message as provided by chat service. None by default
+
+ Attributes:
+ created: Local date and time that message object was created
+ text: Text of message as string
+ user: String name of user sending message
+ room: String name of the room or chat channel in which message was sent
+ connector: Connector object used to interact with given chat service
+ raw_message: Raw message provided by chat service
+ regex: A re match object for the regular expression message was matched
+ against
+ responded_to: Boolean initialized as False. True if message has been
+ responded to
+
+ """
def __init__(self, text, user, room, connector, raw_message=None):
"""Create object with minimum properties."""
@@ -24,7 +48,10 @@
self.responded_to = False
async def _thinking_delay(self):
- """Make opsdroid wait x-seconds before responding."""
+ """Make opsdroid wait x-seconds before responding.
+
+ Number of seconds defined in YAML config. file, accessed via connector.
+ """
seconds = self.connector.configuration.get('thinking-delay', 0)
if isinstance(seconds, list):
@@ -33,7 +60,11 @@
await asyncio.sleep(seconds)
async def _typing_delay(self, text):
- """Simulate typing, takes an int or float to delay reply."""
+ """Delays reply to simulate typing.
+
+ Seconds to delay equals number of characters in response multiplied by
+ number of seconds defined in YAML config. file, accessed via connector.
+ """
seconds = self.connector.configuration.get('typing-delay', 0)
char_count = len(text)
@@ -43,7 +74,13 @@
await asyncio.sleep(char_count*seconds)
async def respond(self, text, room=None):
- """Respond to this message using the connector it was created by."""
+ """Respond to this message using the connector it was created by.
+
+ Creates copy of this message with updated text as response.
+ Delays message if thinking or typing delay present in config. file.
+ Updates responded_to attribute to True if False.
+ Logs response and response time in OpsDroid object stats.
+ """
opsdroid = get_opsdroid()
response = copy(self)
response.text = text
@@ -64,7 +101,17 @@
self.responded_to = True
async def react(self, emoji):
- """React to this message using the connector it was created by."""
+ """React to this message with emoji using the specified connector.
+
+ Delays message if thinking delay present in config. file.
+
+ Args:
+ emoji: Sting name of emoji with which OpsDroid will react.
+
+ Returns:
+ bool: True for message successfully sent. False otherwise.
+
+ """
if 'thinking-delay' in self.connector.configuration:
await self._thinking_delay()
return await self.connector.react(self, emoji)
|
{"golden_diff": "diff --git a/opsdroid/message.py b/opsdroid/message.py\n--- a/opsdroid/message.py\n+++ b/opsdroid/message.py\n@@ -10,7 +10,31 @@\n \n class Message:\n # pylint: disable=too-few-public-methods\n- \"\"\"A message object.\"\"\"\n+ \"\"\"A message object.\n+\n+ Stores messages in a format that allows OpsDroid to respond or react with\n+ delays for thinking and typing as defined in configuration YAML file.\n+\n+ Args:\n+ text: String text of message\n+ user: String name of user sending message\n+ room: String name of the room or chat channel in which message was sent\n+ connector: Connector object used to interact with given chat service\n+ raw_message: Raw message as provided by chat service. None by default\n+\n+ Attributes:\n+ created: Local date and time that message object was created\n+ text: Text of message as string\n+ user: String name of user sending message\n+ room: String name of the room or chat channel in which message was sent\n+ connector: Connector object used to interact with given chat service\n+ raw_message: Raw message provided by chat service\n+ regex: A re match object for the regular expression message was matched\n+ against\n+ responded_to: Boolean initialized as False. True if message has been\n+ responded to\n+\n+ \"\"\"\n \n def __init__(self, text, user, room, connector, raw_message=None):\n \"\"\"Create object with minimum properties.\"\"\"\n@@ -24,7 +48,10 @@\n self.responded_to = False\n \n async def _thinking_delay(self):\n- \"\"\"Make opsdroid wait x-seconds before responding.\"\"\"\n+ \"\"\"Make opsdroid wait x-seconds before responding.\n+\n+ Number of seconds defined in YAML config. file, accessed via connector.\n+ \"\"\"\n seconds = self.connector.configuration.get('thinking-delay', 0)\n \n if isinstance(seconds, list):\n@@ -33,7 +60,11 @@\n await asyncio.sleep(seconds)\n \n async def _typing_delay(self, text):\n- \"\"\"Simulate typing, takes an int or float to delay reply.\"\"\"\n+ \"\"\"Delays reply to simulate typing.\n+\n+ Seconds to delay equals number of characters in response multiplied by\n+ number of seconds defined in YAML config. file, accessed via connector.\n+ \"\"\"\n seconds = self.connector.configuration.get('typing-delay', 0)\n char_count = len(text)\n \n@@ -43,7 +74,13 @@\n await asyncio.sleep(char_count*seconds)\n \n async def respond(self, text, room=None):\n- \"\"\"Respond to this message using the connector it was created by.\"\"\"\n+ \"\"\"Respond to this message using the connector it was created by.\n+\n+ Creates copy of this message with updated text as response.\n+ Delays message if thinking or typing delay present in config. file.\n+ Updates responded_to attribute to True if False.\n+ Logs response and response time in OpsDroid object stats.\n+ \"\"\"\n opsdroid = get_opsdroid()\n response = copy(self)\n response.text = text\n@@ -64,7 +101,17 @@\n self.responded_to = True\n \n async def react(self, emoji):\n- \"\"\"React to this message using the connector it was created by.\"\"\"\n+ \"\"\"React to this message with emoji using the specified connector.\n+\n+ Delays message if thinking delay present in config. file.\n+\n+ Args:\n+ emoji: Sting name of emoji with which OpsDroid will react.\n+\n+ Returns:\n+ bool: True for message successfully sent. False otherwise.\n+\n+ \"\"\"\n if 'thinking-delay' in self.connector.configuration:\n await self._thinking_delay()\n return await self.connector.react(self, emoji)\n", "issue": "Add Google Style Docstrings\nWe should implement Google Style Docstrings to every function, method, class in opsdroid. This style will support existing documentation and will help in the future by generating documentation automatically.\r\n\r\nThis consists in a bit of effort so this issue can be worked by more than one contributor, just make sure that everyone knows what you are working on in order to avoid other contributors spending time on something that you are working on.\r\n\r\nIf you are unfamiliar with the Google Style Docstrings I'd recommend that you check these resources:\r\n\r\n - [Sphix 1.8.0+ - Google Style Docstrings](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html)\r\n\r\n\r\n\r\nDocstrings that need to be updated:\r\n\r\n- main.py\r\n - [x] configure_lang\r\n - [ ] configure_log\r\n - [ ] get_logging_level\r\n - [ ] check_dependencies\r\n - [ ] print_version\r\n - [ ] print_example_config\r\n - [ ] edit_files\r\n - [x] welcome_message\r\n- ~~helper.py~~\r\n - [x] get_opsdroid\r\n - [x] del_rw\r\n - [x] move_config_to_appdir\r\n- memory.py\r\n - [x] Memory\r\n - [x] get\r\n - [x] put\r\n - [x] _get_from_database\r\n - [x] _put_to_database\r\n- message.py\r\n - [x] Message\r\n - [x] __init__\r\n - [x] _thinking_delay\r\n - [x] _typing delay\r\n - [x] respond\r\n - [x] react\r\n- web.py\r\n - [ ] Web\r\n - [x] get_port\r\n - [x] get_host\r\n - [x] get_ssl_context\r\n - [ ] start\r\n - [ ] build_response\r\n - [ ] web_index_handler\r\n - [ ] web_stats_handler\r\n- matchers.py\r\n - [ ] match_regex\r\n - [ ] match_apiai_action\r\n - [ ] match_apiai_intent\r\n - [ ] match_dialogflow_action\r\n - [ ] match_dialogflow_intent\r\n - [ ] match_luisai_intent\r\n - [ ] match_rasanlu\r\n - [ ] match_recastai\r\n - [ ] match_witai\r\n - [ ] match_crontab\r\n - [ ] match_webhook\r\n - [ ] match_always\r\n- core.py\r\n - [ ] OpsDroid\r\n - [ ] default_connector\r\n - [ ] exit\r\n - [ ] critical\r\n - [ ] call_stop\r\n - [ ] disconnect\r\n - [ ] stop\r\n - [ ] load\r\n - [ ] start_loop\r\n - [x] setup_skills\r\n - [ ] train_parsers\r\n - [ ] start_connector_tasks\r\n - [ ] start_database\r\n - [ ] run_skill\r\n - [ ] get_ranked_skills\r\n - [ ] parse\r\n- loader.py\r\n - [ ] Loader\r\n - [x] import_module_from_spec\r\n - [x] import_module\r\n - [x] check_cache\r\n - [x] build_module_import_path\r\n - [x] build_module_install_path\r\n - [x] git_clone\r\n - [x] git_pull\r\n - [x] pip_install_deps\r\n - [x] create_default_config\r\n - [x] load_config_file\r\n - [ ] envvar_constructor\r\n - [ ] include_constructor\r\n - [x] setup_modules_directory\r\n - [x] load_modules_from_config\r\n - [x] _load_modules\r\n - [x] _install_module\r\n - [x] _update_module\r\n - [ ] _install_git_module\r\n - [x] _install_local_module\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n---- ORIGINAL POST ---- \r\nI've been wondering about this for a while now and I would like to know if we should replace/update all the docstrings in opsdroid with the Google Style doc strings. \r\n\r\nI think this could help new and old contributors to contribute and commit to opsdroid since the Google Style docstrings give more information about every method/function and specifies clearly what sort of input the function/method expects, what will it return and what will be raised (if applicable).\r\n\r\nThe downsize of this style is that the length of every .py file will increase due to the doc strings, but since most IDE's allow you to hide those fields it shouldn't be too bad.\r\n\r\nHere is a good example of Google Style Doc strings: [Sphix 1.8.0+ - Google Style Docstrings](http://www.sphinx-doc.org/en/master/ext/example_google.html)\r\n\r\nI would like to know what you all think about this idea and if its worth spending time on it.\nAdd Google Style Docstrings\nWe should implement Google Style Docstrings to every function, method, class in opsdroid. This style will support existing documentation and will help in the future by generating documentation automatically.\r\n\r\nThis consists in a bit of effort so this issue can be worked by more than one contributor, just make sure that everyone knows what you are working on in order to avoid other contributors spending time on something that you are working on.\r\n\r\nIf you are unfamiliar with the Google Style Docstrings I'd recommend that you check these resources:\r\n\r\n - [Sphix 1.8.0+ - Google Style Docstrings](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html)\r\n\r\n\r\n\r\nDocstrings that need to be updated:\r\n\r\n- main.py\r\n - [x] configure_lang\r\n - [ ] configure_log\r\n - [ ] get_logging_level\r\n - [ ] check_dependencies\r\n - [ ] print_version\r\n - [ ] print_example_config\r\n - [ ] edit_files\r\n - [x] welcome_message\r\n- ~~helper.py~~\r\n - [x] get_opsdroid\r\n - [x] del_rw\r\n - [x] move_config_to_appdir\r\n- memory.py\r\n - [x] Memory\r\n - [x] get\r\n - [x] put\r\n - [x] _get_from_database\r\n - [x] _put_to_database\r\n- message.py\r\n - [x] Message\r\n - [x] __init__\r\n - [x] _thinking_delay\r\n - [x] _typing delay\r\n - [x] respond\r\n - [x] react\r\n- web.py\r\n - [ ] Web\r\n - [x] get_port\r\n - [x] get_host\r\n - [x] get_ssl_context\r\n - [ ] start\r\n - [ ] build_response\r\n - [ ] web_index_handler\r\n - [ ] web_stats_handler\r\n- matchers.py\r\n - [ ] match_regex\r\n - [ ] match_apiai_action\r\n - [ ] match_apiai_intent\r\n - [ ] match_dialogflow_action\r\n - [ ] match_dialogflow_intent\r\n - [ ] match_luisai_intent\r\n - [ ] match_rasanlu\r\n - [ ] match_recastai\r\n - [ ] match_witai\r\n - [ ] match_crontab\r\n - [ ] match_webhook\r\n - [ ] match_always\r\n- core.py\r\n - [ ] OpsDroid\r\n - [ ] default_connector\r\n - [ ] exit\r\n - [ ] critical\r\n - [ ] call_stop\r\n - [ ] disconnect\r\n - [ ] stop\r\n - [ ] load\r\n - [ ] start_loop\r\n - [x] setup_skills\r\n - [ ] train_parsers\r\n - [ ] start_connector_tasks\r\n - [ ] start_database\r\n - [ ] run_skill\r\n - [ ] get_ranked_skills\r\n - [ ] parse\r\n- loader.py\r\n - [ ] Loader\r\n - [x] import_module_from_spec\r\n - [x] import_module\r\n - [x] check_cache\r\n - [x] build_module_import_path\r\n - [x] build_module_install_path\r\n - [x] git_clone\r\n - [x] git_pull\r\n - [x] pip_install_deps\r\n - [x] create_default_config\r\n - [x] load_config_file\r\n - [ ] envvar_constructor\r\n - [ ] include_constructor\r\n - [x] setup_modules_directory\r\n - [x] load_modules_from_config\r\n - [x] _load_modules\r\n - [x] _install_module\r\n - [x] _update_module\r\n - [ ] _install_git_module\r\n - [x] _install_local_module\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n---- ORIGINAL POST ---- \r\nI've been wondering about this for a while now and I would like to know if we should replace/update all the docstrings in opsdroid with the Google Style doc strings. \r\n\r\nI think this could help new and old contributors to contribute and commit to opsdroid since the Google Style docstrings give more information about every method/function and specifies clearly what sort of input the function/method expects, what will it return and what will be raised (if applicable).\r\n\r\nThe downsize of this style is that the length of every .py file will increase due to the doc strings, but since most IDE's allow you to hide those fields it shouldn't be too bad.\r\n\r\nHere is a good example of Google Style Doc strings: [Sphix 1.8.0+ - Google Style Docstrings](http://www.sphinx-doc.org/en/master/ext/example_google.html)\r\n\r\nI would like to know what you all think about this idea and if its worth spending time on it.\n", "before_files": [{"content": "\"\"\"Class to encapsulate a message.\"\"\"\n\nfrom datetime import datetime\nfrom copy import copy\nimport asyncio\nfrom random import randrange\n\nfrom opsdroid.helper import get_opsdroid\n\n\nclass Message:\n # pylint: disable=too-few-public-methods\n \"\"\"A message object.\"\"\"\n\n def __init__(self, text, user, room, connector, raw_message=None):\n \"\"\"Create object with minimum properties.\"\"\"\n self.created = datetime.now()\n self.text = text\n self.user = user\n self.room = room\n self.connector = connector\n self.raw_message = raw_message\n self.regex = None\n self.responded_to = False\n\n async def _thinking_delay(self):\n \"\"\"Make opsdroid wait x-seconds before responding.\"\"\"\n seconds = self.connector.configuration.get('thinking-delay', 0)\n\n if isinstance(seconds, list):\n seconds = randrange(seconds[0], seconds[1])\n\n await asyncio.sleep(seconds)\n\n async def _typing_delay(self, text):\n \"\"\"Simulate typing, takes an int or float to delay reply.\"\"\"\n seconds = self.connector.configuration.get('typing-delay', 0)\n char_count = len(text)\n\n if isinstance(seconds, list):\n seconds = randrange(seconds[0], seconds[1])\n\n await asyncio.sleep(char_count*seconds)\n\n async def respond(self, text, room=None):\n \"\"\"Respond to this message using the connector it was created by.\"\"\"\n opsdroid = get_opsdroid()\n response = copy(self)\n response.text = text\n\n if 'thinking-delay' in self.connector.configuration or \\\n 'typing-delay' in self.connector.configuration:\n await self._thinking_delay()\n await self._typing_delay(response.text)\n\n await self.connector.respond(response, room)\n if not self.responded_to:\n now = datetime.now()\n opsdroid.stats[\"total_responses\"] = \\\n opsdroid.stats[\"total_responses\"] + 1\n opsdroid.stats[\"total_response_time\"] = \\\n opsdroid.stats[\"total_response_time\"] + \\\n (now - self.created).total_seconds()\n self.responded_to = True\n\n async def react(self, emoji):\n \"\"\"React to this message using the connector it was created by.\"\"\"\n if 'thinking-delay' in self.connector.configuration:\n await self._thinking_delay()\n return await self.connector.react(self, emoji)\n", "path": "opsdroid/message.py"}], "after_files": [{"content": "\"\"\"Class to encapsulate a message.\"\"\"\n\nfrom datetime import datetime\nfrom copy import copy\nimport asyncio\nfrom random import randrange\n\nfrom opsdroid.helper import get_opsdroid\n\n\nclass Message:\n # pylint: disable=too-few-public-methods\n \"\"\"A message object.\n\n Stores messages in a format that allows OpsDroid to respond or react with\n delays for thinking and typing as defined in configuration YAML file.\n\n Args:\n text: String text of message\n user: String name of user sending message\n room: String name of the room or chat channel in which message was sent\n connector: Connector object used to interact with given chat service\n raw_message: Raw message as provided by chat service. None by default\n\n Attributes:\n created: Local date and time that message object was created\n text: Text of message as string\n user: String name of user sending message\n room: String name of the room or chat channel in which message was sent\n connector: Connector object used to interact with given chat service\n raw_message: Raw message provided by chat service\n regex: A re match object for the regular expression message was matched\n against\n responded_to: Boolean initialized as False. True if message has been\n responded to\n\n \"\"\"\n\n def __init__(self, text, user, room, connector, raw_message=None):\n \"\"\"Create object with minimum properties.\"\"\"\n self.created = datetime.now()\n self.text = text\n self.user = user\n self.room = room\n self.connector = connector\n self.raw_message = raw_message\n self.regex = None\n self.responded_to = False\n\n async def _thinking_delay(self):\n \"\"\"Make opsdroid wait x-seconds before responding.\n\n Number of seconds defined in YAML config. file, accessed via connector.\n \"\"\"\n seconds = self.connector.configuration.get('thinking-delay', 0)\n\n if isinstance(seconds, list):\n seconds = randrange(seconds[0], seconds[1])\n\n await asyncio.sleep(seconds)\n\n async def _typing_delay(self, text):\n \"\"\"Delays reply to simulate typing.\n\n Seconds to delay equals number of characters in response multiplied by\n number of seconds defined in YAML config. file, accessed via connector.\n \"\"\"\n seconds = self.connector.configuration.get('typing-delay', 0)\n char_count = len(text)\n\n if isinstance(seconds, list):\n seconds = randrange(seconds[0], seconds[1])\n\n await asyncio.sleep(char_count*seconds)\n\n async def respond(self, text, room=None):\n \"\"\"Respond to this message using the connector it was created by.\n\n Creates copy of this message with updated text as response.\n Delays message if thinking or typing delay present in config. file.\n Updates responded_to attribute to True if False.\n Logs response and response time in OpsDroid object stats.\n \"\"\"\n opsdroid = get_opsdroid()\n response = copy(self)\n response.text = text\n\n if 'thinking-delay' in self.connector.configuration or \\\n 'typing-delay' in self.connector.configuration:\n await self._thinking_delay()\n await self._typing_delay(response.text)\n\n await self.connector.respond(response, room)\n if not self.responded_to:\n now = datetime.now()\n opsdroid.stats[\"total_responses\"] = \\\n opsdroid.stats[\"total_responses\"] + 1\n opsdroid.stats[\"total_response_time\"] = \\\n opsdroid.stats[\"total_response_time\"] + \\\n (now - self.created).total_seconds()\n self.responded_to = True\n\n async def react(self, emoji):\n \"\"\"React to this message with emoji using the specified connector.\n\n Delays message if thinking delay present in config. file.\n\n Args:\n emoji: Sting name of emoji with which OpsDroid will react.\n\n Returns:\n bool: True for message successfully sent. False otherwise.\n\n \"\"\"\n if 'thinking-delay' in self.connector.configuration:\n await self._thinking_delay()\n return await self.connector.react(self, emoji)\n", "path": "opsdroid/message.py"}]}
| 2,979 | 844 |
gh_patches_debug_19148
|
rasdani/github-patches
|
git_diff
|
coala__coala-bears-1422
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Most YAML documents use document starts (---)
Hi,
I am the creator of yamllint, the linter coala uses for YAML.
Since #965 was merged three months ago, coala fails on many projects like Ansible, OpenStack and even yamllint itself, because coala doesn't accept document start markers (`---`) anymore.
Document start markers are commonly used, and required when declaring multiple documents in a single `.yaml` file (see [the spec](http://yaml.org/spec/1.2/spec.html#id2800132)).
The proposed fix in the original issue (#923) was to disable the rule, but the implemented fix (#965) made document starts forbidden.
My opinion is that coala should either require document starts, or disable the rule by default.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bears/yaml/YAMLLintBear.py`
Content:
```
1 from coalib.bearlib.abstractions.Linter import linter
2 from dependency_management.requirements.PipRequirement import PipRequirement
3 import yaml
4
5
6 @linter(executable='yamllint',
7 output_format='regex',
8 output_regex=r'.+:(?P<line>\d+):(?P<column>\d+): '
9 r'\[(?P<severity>error|warning)\] (?P<message>.+)')
10 class YAMLLintBear:
11 """
12 Check yaml code for errors and possible problems.
13
14 You can read more about capabilities at
15 <http://yamllint.readthedocs.org/en/latest/rules.html>.
16 """
17
18 LANGUAGES = {'YAML'}
19 REQUIREMENTS = {PipRequirement('yamllint', '1.5')}
20 AUTHORS = {'The coala developers'}
21 AUTHORS_EMAILS = {'[email protected]'}
22 LICENSE = 'AGPL-3.0'
23 CAN_DETECT = {'Syntax', 'Formatting'}
24
25 @staticmethod
26 def generate_config(filename, file,
27 document_start: bool=False):
28 """
29 :param document_start:
30 Use this rule to require or forbid the use of document start
31 marker (---).
32 """
33 yamllint_configs = {
34 'extends': 'default',
35 'rules': {
36 'document-start': {
37 'present': False
38 }
39 }
40 }
41 if document_start:
42 yamllint_configs['rules']['document-start']['present'] = True
43
44 return yaml.dump(yamllint_configs)
45
46 @staticmethod
47 def create_arguments(filename, file, config_file, yamllint_config: str=''):
48 """
49 :param yamllint_config: Path to a custom configuration file.
50 """
51 args = ('-f', 'parsable', filename)
52 if yamllint_config:
53 args += ('--config-file=' + yamllint_config,)
54 else:
55 args += ('--config-file=' + config_file,)
56 return args
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bears/yaml/YAMLLintBear.py b/bears/yaml/YAMLLintBear.py
--- a/bears/yaml/YAMLLintBear.py
+++ b/bears/yaml/YAMLLintBear.py
@@ -24,7 +24,7 @@
@staticmethod
def generate_config(filename, file,
- document_start: bool=False):
+ document_start: bool=None):
"""
:param document_start:
Use this rule to require or forbid the use of document start
@@ -33,13 +33,10 @@
yamllint_configs = {
'extends': 'default',
'rules': {
- 'document-start': {
- 'present': False
- }
- }
+ 'document-start': 'disable' if document_start is None
+ else {'present': document_start},
+ },
}
- if document_start:
- yamllint_configs['rules']['document-start']['present'] = True
return yaml.dump(yamllint_configs)
|
{"golden_diff": "diff --git a/bears/yaml/YAMLLintBear.py b/bears/yaml/YAMLLintBear.py\n--- a/bears/yaml/YAMLLintBear.py\n+++ b/bears/yaml/YAMLLintBear.py\n@@ -24,7 +24,7 @@\n \n @staticmethod\n def generate_config(filename, file,\n- document_start: bool=False):\n+ document_start: bool=None):\n \"\"\"\n :param document_start:\n Use this rule to require or forbid the use of document start\n@@ -33,13 +33,10 @@\n yamllint_configs = {\n 'extends': 'default',\n 'rules': {\n- 'document-start': {\n- 'present': False\n- }\n- }\n+ 'document-start': 'disable' if document_start is None\n+ else {'present': document_start},\n+ },\n }\n- if document_start:\n- yamllint_configs['rules']['document-start']['present'] = True\n \n return yaml.dump(yamllint_configs)\n", "issue": "Most YAML documents use document starts (---)\nHi,\r\n\r\nI am the creator of yamllint, the linter coala uses for YAML.\r\n\r\nSince #965 was merged three months ago, coala fails on many projects like Ansible, OpenStack and even yamllint itself, because coala doesn't accept document start markers (`---`) anymore.\r\n\r\nDocument start markers are commonly used, and required when declaring multiple documents in a single `.yaml` file (see [the spec](http://yaml.org/spec/1.2/spec.html#id2800132)).\r\n\r\nThe proposed fix in the original issue (#923) was to disable the rule, but the implemented fix (#965) made document starts forbidden.\r\n\r\nMy opinion is that coala should either require document starts, or disable the rule by default.\n", "before_files": [{"content": "from coalib.bearlib.abstractions.Linter import linter\nfrom dependency_management.requirements.PipRequirement import PipRequirement\nimport yaml\n\n\n@linter(executable='yamllint',\n output_format='regex',\n output_regex=r'.+:(?P<line>\\d+):(?P<column>\\d+): '\n r'\\[(?P<severity>error|warning)\\] (?P<message>.+)')\nclass YAMLLintBear:\n \"\"\"\n Check yaml code for errors and possible problems.\n\n You can read more about capabilities at\n <http://yamllint.readthedocs.org/en/latest/rules.html>.\n \"\"\"\n\n LANGUAGES = {'YAML'}\n REQUIREMENTS = {PipRequirement('yamllint', '1.5')}\n AUTHORS = {'The coala developers'}\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n CAN_DETECT = {'Syntax', 'Formatting'}\n\n @staticmethod\n def generate_config(filename, file,\n document_start: bool=False):\n \"\"\"\n :param document_start:\n Use this rule to require or forbid the use of document start\n marker (---).\n \"\"\"\n yamllint_configs = {\n 'extends': 'default',\n 'rules': {\n 'document-start': {\n 'present': False\n }\n }\n }\n if document_start:\n yamllint_configs['rules']['document-start']['present'] = True\n\n return yaml.dump(yamllint_configs)\n\n @staticmethod\n def create_arguments(filename, file, config_file, yamllint_config: str=''):\n \"\"\"\n :param yamllint_config: Path to a custom configuration file.\n \"\"\"\n args = ('-f', 'parsable', filename)\n if yamllint_config:\n args += ('--config-file=' + yamllint_config,)\n else:\n args += ('--config-file=' + config_file,)\n return args\n", "path": "bears/yaml/YAMLLintBear.py"}], "after_files": [{"content": "from coalib.bearlib.abstractions.Linter import linter\nfrom dependency_management.requirements.PipRequirement import PipRequirement\nimport yaml\n\n\n@linter(executable='yamllint',\n output_format='regex',\n output_regex=r'.+:(?P<line>\\d+):(?P<column>\\d+): '\n r'\\[(?P<severity>error|warning)\\] (?P<message>.+)')\nclass YAMLLintBear:\n \"\"\"\n Check yaml code for errors and possible problems.\n\n You can read more about capabilities at\n <http://yamllint.readthedocs.org/en/latest/rules.html>.\n \"\"\"\n\n LANGUAGES = {'YAML'}\n REQUIREMENTS = {PipRequirement('yamllint', '1.5')}\n AUTHORS = {'The coala developers'}\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n CAN_DETECT = {'Syntax', 'Formatting'}\n\n @staticmethod\n def generate_config(filename, file,\n document_start: bool=None):\n \"\"\"\n :param document_start:\n Use this rule to require or forbid the use of document start\n marker (---).\n \"\"\"\n yamllint_configs = {\n 'extends': 'default',\n 'rules': {\n 'document-start': 'disable' if document_start is None\n else {'present': document_start},\n },\n }\n\n return yaml.dump(yamllint_configs)\n\n @staticmethod\n def create_arguments(filename, file, config_file, yamllint_config: str=''):\n \"\"\"\n :param yamllint_config: Path to a custom configuration file.\n \"\"\"\n args = ('-f', 'parsable', filename)\n if yamllint_config:\n args += ('--config-file=' + yamllint_config,)\n else:\n args += ('--config-file=' + config_file,)\n return args\n", "path": "bears/yaml/YAMLLintBear.py"}]}
| 976 | 232 |
gh_patches_debug_27778
|
rasdani/github-patches
|
git_diff
|
enthought__chaco-502
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Don't put Cythonized .c files in source control, but do ship in sdist
Currently we check-in .c files produced by Cython to the source tree alongside the .pyx files so that people building the source don't need to have Cython installed. This is awkward from the developer's perspective, however, and can result in noisy deltas.
Following discussion in #325 the the proposal is that we will only check in the .pyx files into source control, but we will ship the .c files as part of the sdist source distributions. This change will mean that people wishing to work from non-released versions will need to have Cython installed (as will the CI environment), but people wanting to build a release from source won't need it. Having Cython available is not as unreasonable a requirement as it was several years ago.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright (c) 2008-2019 by Enthought, Inc.
2 # All rights reserved.
3 import os
4 import re
5 import subprocess
6
7 from numpy import get_include
8 from setuptools import setup, Extension, find_packages
9
10 MAJOR = 4
11 MINOR = 8
12 MICRO = 1
13
14 IS_RELEASED = False
15
16 VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)
17
18 # Name of the directory containing the package.
19 PKG_PATHNAME = 'chaco'
20
21 # Name of the file containing the version information.
22 _VERSION_FILENAME = os.path.join(PKG_PATHNAME, '_version.py')
23
24
25 def read_version_py(path):
26 """ Read a _version.py file in a safe way. """
27 with open(path, 'r') as fp:
28 code = compile(fp.read(), 'chaco._version', 'exec')
29 context = {}
30 exec(code, context)
31 return context['git_revision'], context['full_version']
32
33
34 def git_version():
35 """ Parse version information from the current git commit.
36
37 Parse the output of `git describe` and return the git hash and the number
38 of commits since the last version tag.
39 """
40
41 def _minimal_ext_cmd(cmd):
42 # construct minimal environment
43 env = {}
44 for k in ['SYSTEMROOT', 'PATH', 'HOME']:
45 v = os.environ.get(k)
46 if v is not None:
47 env[k] = v
48 # LANGUAGE is used on win32
49 env['LANGUAGE'] = 'C'
50 env['LANG'] = 'C'
51 env['LC_ALL'] = 'C'
52 out = subprocess.Popen(
53 cmd, stdout=subprocess.PIPE, env=env,
54 ).communicate()[0]
55 return out
56
57 try:
58 # We ask git to find the latest tag matching a glob expression. The
59 # intention is to find a release tag of the form '4.50.2'. Strictly
60 # speaking, the glob expression also matches tags of the form
61 # '4abc.5xyz.2gtluuu', but it's very difficult with glob expressions
62 # to distinguish between the two cases, and the likelihood of a
63 # problem is minimal.
64 out = _minimal_ext_cmd(
65 ['git', 'describe', '--match', '[0-9]*.[0-9]*.[0-9]*', '--tags'])
66 except OSError:
67 out = ''
68
69 git_description = out.strip().decode('ascii')
70 expr = r'.*?\-(?P<count>\d+)-g(?P<hash>[a-fA-F0-9]+)'
71 match = re.match(expr, git_description)
72 if match is None:
73 git_revision, git_count = 'Unknown', '0'
74 else:
75 git_revision, git_count = match.group('hash'), match.group('count')
76
77 return git_revision, git_count
78
79
80 def write_version_py(filename=_VERSION_FILENAME):
81 """ Create a file containing the version information. """
82
83 template = """\
84 # This file was automatically generated from the `setup.py` script.
85 version = '{version}'
86 full_version = '{full_version}'
87 git_revision = '{git_revision}'
88 is_released = {is_released}
89
90 if not is_released:
91 version = full_version
92 """
93 # Adding the git rev number needs to be done inside
94 # write_version_py(), otherwise the import of _version messes
95 # up the build under Python 3.
96 fullversion = VERSION
97 chaco_version_path = os.path.join(
98 os.path.dirname(__file__), 'chaco', '_version.py')
99 if os.path.exists('.git'):
100 git_rev, dev_num = git_version()
101 elif os.path.exists(filename):
102 # must be a source distribution, use existing version file
103 try:
104 git_rev, fullversion = read_version_py(chaco_version_path)
105 except (SyntaxError, KeyError):
106 raise RuntimeError("Unable to read git_revision. Try removing "
107 "chaco/_version.py and the build directory "
108 "before building.")
109
110
111 match = re.match(r'.*?\.dev(?P<dev_num>\d+)', fullversion)
112 if match is None:
113 dev_num = '0'
114 else:
115 dev_num = match.group('dev_num')
116 else:
117 git_rev = 'Unknown'
118 dev_num = '0'
119
120 if not IS_RELEASED:
121 fullversion += '.dev{0}'.format(dev_num)
122
123 with open(filename, "wt") as fp:
124 fp.write(template.format(version=VERSION,
125 full_version=fullversion,
126 git_revision=git_rev,
127 is_released=IS_RELEASED))
128
129
130 if __name__ == "__main__":
131 write_version_py()
132 from chaco import __requires__, __version__
133
134 numpy_include_dir = get_include()
135
136 # Register Python extensions
137 contour = Extension(
138 'chaco.contour.contour',
139 sources=['chaco/contour/cntr.c'],
140 include_dirs=[numpy_include_dir],
141 define_macros=[('NUMPY', None)],
142 )
143
144 cython_speedups = Extension(
145 'chaco._cython_speedups',
146 sources=['chaco/_cython_speedups.c'],
147 include_dirs=[numpy_include_dir],
148 )
149
150 downsampling_lttb = Extension(
151 'chaco.downsample._lttb',
152 sources=['chaco/downsample/_lttb.c'],
153 include_dirs=[numpy_include_dir],
154 )
155
156 setup(
157 name = 'chaco',
158 version = __version__,
159 author = 'Peter Wang, et. al.',
160 author_email = '[email protected]',
161 maintainer = 'ETS Developers',
162 maintainer_email = '[email protected]',
163 url = 'http://docs.enthought.com/chaco',
164 download_url = 'https://github.com/enthought/chaco',
165 classifiers = [c.strip() for c in """\
166 Development Status :: 5 - Production/Stable
167 Intended Audience :: Developers
168 Intended Audience :: Science/Research
169 License :: OSI Approved :: BSD License
170 Operating System :: MacOS
171 Operating System :: Microsoft :: Windows
172 Operating System :: OS Independent
173 Operating System :: POSIX
174 Operating System :: Unix
175 Programming Language :: C
176 Programming Language :: Python
177 Topic :: Scientific/Engineering
178 Topic :: Software Development
179 Topic :: Software Development :: Libraries
180 """.splitlines() if len(c.strip()) > 0],
181 package_data={
182 'chaco': ['tools/toolbars/images/*.png',
183 'layers/data/*.svg',
184 'tests/data/PngSuite/*.png']
185 },
186 description = 'interactive 2-dimensional plotting',
187 long_description = open('README.rst').read(),
188 ext_modules = [contour, cython_speedups, downsampling_lttb],
189 include_package_data = True,
190 install_requires = __requires__,
191 license = 'BSD',
192 packages = find_packages(),
193 platforms = ["Windows", "Linux", "Mac OS-X", "Unix", "Solaris"],
194 zip_safe = False,
195 use_2to3=False,
196 )
197
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -6,6 +6,7 @@
from numpy import get_include
from setuptools import setup, Extension, find_packages
+from Cython.Build import cythonize
MAJOR = 4
MINOR = 8
@@ -143,16 +144,19 @@
cython_speedups = Extension(
'chaco._cython_speedups',
- sources=['chaco/_cython_speedups.c'],
+ sources=['chaco/_cython_speedups.pyx'],
include_dirs=[numpy_include_dir],
)
downsampling_lttb = Extension(
'chaco.downsample._lttb',
- sources=['chaco/downsample/_lttb.c'],
+ sources=['chaco/downsample/_lttb.pyx'],
include_dirs=[numpy_include_dir],
)
+ cython_extensions = cythonize([cython_speedups, downsampling_lttb])
+ extensions = [contour] + cython_extensions
+
setup(
name = 'chaco',
version = __version__,
@@ -185,7 +189,7 @@
},
description = 'interactive 2-dimensional plotting',
long_description = open('README.rst').read(),
- ext_modules = [contour, cython_speedups, downsampling_lttb],
+ ext_modules = extensions,
include_package_data = True,
install_requires = __requires__,
license = 'BSD',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -6,6 +6,7 @@\n \n from numpy import get_include\n from setuptools import setup, Extension, find_packages\n+from Cython.Build import cythonize\n \n MAJOR = 4\n MINOR = 8\n@@ -143,16 +144,19 @@\n \n cython_speedups = Extension(\n 'chaco._cython_speedups',\n- sources=['chaco/_cython_speedups.c'],\n+ sources=['chaco/_cython_speedups.pyx'],\n include_dirs=[numpy_include_dir],\n )\n \n downsampling_lttb = Extension(\n 'chaco.downsample._lttb',\n- sources=['chaco/downsample/_lttb.c'],\n+ sources=['chaco/downsample/_lttb.pyx'],\n include_dirs=[numpy_include_dir],\n )\n \n+ cython_extensions = cythonize([cython_speedups, downsampling_lttb])\n+ extensions = [contour] + cython_extensions\n+\n setup(\n name = 'chaco',\n version = __version__,\n@@ -185,7 +189,7 @@\n },\n description = 'interactive 2-dimensional plotting',\n long_description = open('README.rst').read(),\n- ext_modules = [contour, cython_speedups, downsampling_lttb],\n+ ext_modules = extensions,\n include_package_data = True,\n install_requires = __requires__,\n license = 'BSD',\n", "issue": "Don't put Cythonized .c files in source control, but do ship in sdist\nCurrently we check-in .c files produced by Cython to the source tree alongside the .pyx files so that people building the source don't need to have Cython installed. This is awkward from the developer's perspective, however, and can result in noisy deltas.\r\n\r\nFollowing discussion in #325 the the proposal is that we will only check in the .pyx files into source control, but we will ship the .c files as part of the sdist source distributions. This change will mean that people wishing to work from non-released versions will need to have Cython installed (as will the CI environment), but people wanting to build a release from source won't need it. Having Cython available is not as unreasonable a requirement as it was several years ago.\n", "before_files": [{"content": "# Copyright (c) 2008-2019 by Enthought, Inc.\n# All rights reserved.\nimport os\nimport re\nimport subprocess\n\nfrom numpy import get_include\nfrom setuptools import setup, Extension, find_packages\n\nMAJOR = 4\nMINOR = 8\nMICRO = 1\n\nIS_RELEASED = False\n\nVERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)\n\n# Name of the directory containing the package.\nPKG_PATHNAME = 'chaco'\n\n# Name of the file containing the version information.\n_VERSION_FILENAME = os.path.join(PKG_PATHNAME, '_version.py')\n\n\ndef read_version_py(path):\n \"\"\" Read a _version.py file in a safe way. \"\"\"\n with open(path, 'r') as fp:\n code = compile(fp.read(), 'chaco._version', 'exec')\n context = {}\n exec(code, context)\n return context['git_revision'], context['full_version']\n\n\ndef git_version():\n \"\"\" Parse version information from the current git commit.\n\n Parse the output of `git describe` and return the git hash and the number\n of commits since the last version tag.\n \"\"\"\n\n def _minimal_ext_cmd(cmd):\n # construct minimal environment\n env = {}\n for k in ['SYSTEMROOT', 'PATH', 'HOME']:\n v = os.environ.get(k)\n if v is not None:\n env[k] = v\n # LANGUAGE is used on win32\n env['LANGUAGE'] = 'C'\n env['LANG'] = 'C'\n env['LC_ALL'] = 'C'\n out = subprocess.Popen(\n cmd, stdout=subprocess.PIPE, env=env,\n ).communicate()[0]\n return out\n\n try:\n # We ask git to find the latest tag matching a glob expression. The\n # intention is to find a release tag of the form '4.50.2'. Strictly\n # speaking, the glob expression also matches tags of the form\n # '4abc.5xyz.2gtluuu', but it's very difficult with glob expressions\n # to distinguish between the two cases, and the likelihood of a\n # problem is minimal.\n out = _minimal_ext_cmd(\n ['git', 'describe', '--match', '[0-9]*.[0-9]*.[0-9]*', '--tags'])\n except OSError:\n out = ''\n\n git_description = out.strip().decode('ascii')\n expr = r'.*?\\-(?P<count>\\d+)-g(?P<hash>[a-fA-F0-9]+)'\n match = re.match(expr, git_description)\n if match is None:\n git_revision, git_count = 'Unknown', '0'\n else:\n git_revision, git_count = match.group('hash'), match.group('count')\n\n return git_revision, git_count\n\n\ndef write_version_py(filename=_VERSION_FILENAME):\n \"\"\" Create a file containing the version information. \"\"\"\n\n template = \"\"\"\\\n# This file was automatically generated from the `setup.py` script.\nversion = '{version}'\nfull_version = '{full_version}'\ngit_revision = '{git_revision}'\nis_released = {is_released}\n\nif not is_released:\n version = full_version\n\"\"\"\n # Adding the git rev number needs to be done inside\n # write_version_py(), otherwise the import of _version messes\n # up the build under Python 3.\n fullversion = VERSION\n chaco_version_path = os.path.join(\n os.path.dirname(__file__), 'chaco', '_version.py')\n if os.path.exists('.git'):\n git_rev, dev_num = git_version()\n elif os.path.exists(filename):\n # must be a source distribution, use existing version file\n try:\n git_rev, fullversion = read_version_py(chaco_version_path)\n except (SyntaxError, KeyError):\n raise RuntimeError(\"Unable to read git_revision. Try removing \"\n \"chaco/_version.py and the build directory \"\n \"before building.\")\n\n\n match = re.match(r'.*?\\.dev(?P<dev_num>\\d+)', fullversion)\n if match is None:\n dev_num = '0'\n else:\n dev_num = match.group('dev_num')\n else:\n git_rev = 'Unknown'\n dev_num = '0'\n\n if not IS_RELEASED:\n fullversion += '.dev{0}'.format(dev_num)\n\n with open(filename, \"wt\") as fp:\n fp.write(template.format(version=VERSION,\n full_version=fullversion,\n git_revision=git_rev,\n is_released=IS_RELEASED))\n\n\nif __name__ == \"__main__\":\n write_version_py()\n from chaco import __requires__, __version__\n\n numpy_include_dir = get_include()\n\n # Register Python extensions\n contour = Extension(\n 'chaco.contour.contour',\n sources=['chaco/contour/cntr.c'],\n include_dirs=[numpy_include_dir],\n define_macros=[('NUMPY', None)],\n )\n\n cython_speedups = Extension(\n 'chaco._cython_speedups',\n sources=['chaco/_cython_speedups.c'],\n include_dirs=[numpy_include_dir],\n )\n\n downsampling_lttb = Extension(\n 'chaco.downsample._lttb',\n sources=['chaco/downsample/_lttb.c'],\n include_dirs=[numpy_include_dir],\n )\n\n setup(\n name = 'chaco',\n version = __version__,\n author = 'Peter Wang, et. al.',\n author_email = '[email protected]',\n maintainer = 'ETS Developers',\n maintainer_email = '[email protected]',\n url = 'http://docs.enthought.com/chaco',\n download_url = 'https://github.com/enthought/chaco',\n classifiers = [c.strip() for c in \"\"\"\\\n Development Status :: 5 - Production/Stable\n Intended Audience :: Developers\n Intended Audience :: Science/Research\n License :: OSI Approved :: BSD License\n Operating System :: MacOS\n Operating System :: Microsoft :: Windows\n Operating System :: OS Independent\n Operating System :: POSIX\n Operating System :: Unix\n Programming Language :: C\n Programming Language :: Python\n Topic :: Scientific/Engineering\n Topic :: Software Development\n Topic :: Software Development :: Libraries\n \"\"\".splitlines() if len(c.strip()) > 0],\n package_data={\n 'chaco': ['tools/toolbars/images/*.png',\n 'layers/data/*.svg',\n 'tests/data/PngSuite/*.png']\n },\n description = 'interactive 2-dimensional plotting',\n long_description = open('README.rst').read(),\n ext_modules = [contour, cython_speedups, downsampling_lttb],\n include_package_data = True,\n install_requires = __requires__,\n license = 'BSD',\n packages = find_packages(),\n platforms = [\"Windows\", \"Linux\", \"Mac OS-X\", \"Unix\", \"Solaris\"],\n zip_safe = False,\n use_2to3=False,\n )\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright (c) 2008-2019 by Enthought, Inc.\n# All rights reserved.\nimport os\nimport re\nimport subprocess\n\nfrom numpy import get_include\nfrom setuptools import setup, Extension, find_packages\nfrom Cython.Build import cythonize\n\nMAJOR = 4\nMINOR = 8\nMICRO = 1\n\nIS_RELEASED = False\n\nVERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)\n\n# Name of the directory containing the package.\nPKG_PATHNAME = 'chaco'\n\n# Name of the file containing the version information.\n_VERSION_FILENAME = os.path.join(PKG_PATHNAME, '_version.py')\n\n\ndef read_version_py(path):\n \"\"\" Read a _version.py file in a safe way. \"\"\"\n with open(path, 'r') as fp:\n code = compile(fp.read(), 'chaco._version', 'exec')\n context = {}\n exec(code, context)\n return context['git_revision'], context['full_version']\n\n\ndef git_version():\n \"\"\" Parse version information from the current git commit.\n\n Parse the output of `git describe` and return the git hash and the number\n of commits since the last version tag.\n \"\"\"\n\n def _minimal_ext_cmd(cmd):\n # construct minimal environment\n env = {}\n for k in ['SYSTEMROOT', 'PATH', 'HOME']:\n v = os.environ.get(k)\n if v is not None:\n env[k] = v\n # LANGUAGE is used on win32\n env['LANGUAGE'] = 'C'\n env['LANG'] = 'C'\n env['LC_ALL'] = 'C'\n out = subprocess.Popen(\n cmd, stdout=subprocess.PIPE, env=env,\n ).communicate()[0]\n return out\n\n try:\n # We ask git to find the latest tag matching a glob expression. The\n # intention is to find a release tag of the form '4.50.2'. Strictly\n # speaking, the glob expression also matches tags of the form\n # '4abc.5xyz.2gtluuu', but it's very difficult with glob expressions\n # to distinguish between the two cases, and the likelihood of a\n # problem is minimal.\n out = _minimal_ext_cmd(\n ['git', 'describe', '--match', '[0-9]*.[0-9]*.[0-9]*', '--tags'])\n except OSError:\n out = ''\n\n git_description = out.strip().decode('ascii')\n expr = r'.*?\\-(?P<count>\\d+)-g(?P<hash>[a-fA-F0-9]+)'\n match = re.match(expr, git_description)\n if match is None:\n git_revision, git_count = 'Unknown', '0'\n else:\n git_revision, git_count = match.group('hash'), match.group('count')\n\n return git_revision, git_count\n\n\ndef write_version_py(filename=_VERSION_FILENAME):\n \"\"\" Create a file containing the version information. \"\"\"\n\n template = \"\"\"\\\n# This file was automatically generated from the `setup.py` script.\nversion = '{version}'\nfull_version = '{full_version}'\ngit_revision = '{git_revision}'\nis_released = {is_released}\n\nif not is_released:\n version = full_version\n\"\"\"\n # Adding the git rev number needs to be done inside\n # write_version_py(), otherwise the import of _version messes\n # up the build under Python 3.\n fullversion = VERSION\n chaco_version_path = os.path.join(\n os.path.dirname(__file__), 'chaco', '_version.py')\n if os.path.exists('.git'):\n git_rev, dev_num = git_version()\n elif os.path.exists(filename):\n # must be a source distribution, use existing version file\n try:\n git_rev, fullversion = read_version_py(chaco_version_path)\n except (SyntaxError, KeyError):\n raise RuntimeError(\"Unable to read git_revision. Try removing \"\n \"chaco/_version.py and the build directory \"\n \"before building.\")\n\n\n match = re.match(r'.*?\\.dev(?P<dev_num>\\d+)', fullversion)\n if match is None:\n dev_num = '0'\n else:\n dev_num = match.group('dev_num')\n else:\n git_rev = 'Unknown'\n dev_num = '0'\n\n if not IS_RELEASED:\n fullversion += '.dev{0}'.format(dev_num)\n\n with open(filename, \"wt\") as fp:\n fp.write(template.format(version=VERSION,\n full_version=fullversion,\n git_revision=git_rev,\n is_released=IS_RELEASED))\n\n\nif __name__ == \"__main__\":\n write_version_py()\n from chaco import __requires__, __version__\n\n numpy_include_dir = get_include()\n\n # Register Python extensions\n contour = Extension(\n 'chaco.contour.contour',\n sources=['chaco/contour/cntr.c'],\n include_dirs=[numpy_include_dir],\n define_macros=[('NUMPY', None)],\n )\n\n cython_speedups = Extension(\n 'chaco._cython_speedups',\n sources=['chaco/_cython_speedups.pyx'],\n include_dirs=[numpy_include_dir],\n )\n\n downsampling_lttb = Extension(\n 'chaco.downsample._lttb',\n sources=['chaco/downsample/_lttb.pyx'],\n include_dirs=[numpy_include_dir],\n )\n\n cython_extensions = cythonize([cython_speedups, downsampling_lttb])\n extensions = [contour] + cython_extensions\n\n setup(\n name = 'chaco',\n version = __version__,\n author = 'Peter Wang, et. al.',\n author_email = '[email protected]',\n maintainer = 'ETS Developers',\n maintainer_email = '[email protected]',\n url = 'http://docs.enthought.com/chaco',\n download_url = 'https://github.com/enthought/chaco',\n classifiers = [c.strip() for c in \"\"\"\\\n Development Status :: 5 - Production/Stable\n Intended Audience :: Developers\n Intended Audience :: Science/Research\n License :: OSI Approved :: BSD License\n Operating System :: MacOS\n Operating System :: Microsoft :: Windows\n Operating System :: OS Independent\n Operating System :: POSIX\n Operating System :: Unix\n Programming Language :: C\n Programming Language :: Python\n Topic :: Scientific/Engineering\n Topic :: Software Development\n Topic :: Software Development :: Libraries\n \"\"\".splitlines() if len(c.strip()) > 0],\n package_data={\n 'chaco': ['tools/toolbars/images/*.png',\n 'layers/data/*.svg',\n 'tests/data/PngSuite/*.png']\n },\n description = 'interactive 2-dimensional plotting',\n long_description = open('README.rst').read(),\n ext_modules = extensions,\n include_package_data = True,\n install_requires = __requires__,\n license = 'BSD',\n packages = find_packages(),\n platforms = [\"Windows\", \"Linux\", \"Mac OS-X\", \"Unix\", \"Solaris\"],\n zip_safe = False,\n use_2to3=False,\n )\n", "path": "setup.py"}]}
| 2,456 | 335 |
gh_patches_debug_22852
|
rasdani/github-patches
|
git_diff
|
python__mypy-3330
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mypy_extensions is listed as owned by David Foster
See https://github.com/python/mypy/blob/master/extensions/setup.py#L37
David Foster did indeed create the first version but I presume he doesn't want to be bothered about the subsequent additions?
We should probably change this to "The mypy developers" -- but where to point the email? Maybe it can be omitted. The url might also better point to GitHub.
Attn: @davidfstr
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `extensions/setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 # NOTE: This package must support Python 2.7 in addition to Python 3.x
4
5 from distutils.core import setup
6
7 version = '0.2.0-dev'
8 description = 'Experimental type system extensions for programs checked with the mypy typechecker.'
9 long_description = '''
10 Mypy Extensions
11 ===============
12
13 The "mypy_extensions" module defines experimental extensions to the
14 standard "typing" module that are supported by the mypy typechecker.
15 '''.lstrip()
16
17 classifiers = [
18 'Development Status :: 2 - Pre-Alpha',
19 'Environment :: Console',
20 'Intended Audience :: Developers',
21 'License :: OSI Approved :: MIT License',
22 'Operating System :: POSIX',
23 'Programming Language :: Python :: 2',
24 'Programming Language :: Python :: 2.7',
25 'Programming Language :: Python :: 3',
26 'Programming Language :: Python :: 3.3',
27 'Programming Language :: Python :: 3.4',
28 'Programming Language :: Python :: 3.5',
29 'Topic :: Software Development',
30 ]
31
32 setup(
33 name='mypy_extensions',
34 version=version,
35 description=description,
36 long_description=long_description,
37 author='David Foster',
38 author_email='[email protected]',
39 url='http://www.mypy-lang.org/',
40 license='MIT License',
41 platforms=['POSIX'],
42 py_modules=['mypy_extensions'],
43 classifiers=classifiers,
44 )
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/extensions/setup.py b/extensions/setup.py
--- a/extensions/setup.py
+++ b/extensions/setup.py
@@ -4,7 +4,7 @@
from distutils.core import setup
-version = '0.2.0-dev'
+version = '0.2.0'
description = 'Experimental type system extensions for programs checked with the mypy typechecker.'
long_description = '''
Mypy Extensions
@@ -26,6 +26,7 @@
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
+ 'Programming Language :: Python :: 3.6',
'Topic :: Software Development',
]
@@ -34,8 +35,8 @@
version=version,
description=description,
long_description=long_description,
- author='David Foster',
- author_email='[email protected]',
+ author='The mypy developers',
+ author_email='[email protected]',
url='http://www.mypy-lang.org/',
license='MIT License',
platforms=['POSIX'],
|
{"golden_diff": "diff --git a/extensions/setup.py b/extensions/setup.py\n--- a/extensions/setup.py\n+++ b/extensions/setup.py\n@@ -4,7 +4,7 @@\n \n from distutils.core import setup\n \n-version = '0.2.0-dev'\n+version = '0.2.0'\n description = 'Experimental type system extensions for programs checked with the mypy typechecker.'\n long_description = '''\n Mypy Extensions\n@@ -26,6 +26,7 @@\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n+ 'Programming Language :: Python :: 3.6',\n 'Topic :: Software Development',\n ]\n \n@@ -34,8 +35,8 @@\n version=version,\n description=description,\n long_description=long_description,\n- author='David Foster',\n- author_email='[email protected]',\n+ author='The mypy developers',\n+ author_email='[email protected]',\n url='http://www.mypy-lang.org/',\n license='MIT License',\n platforms=['POSIX'],\n", "issue": "mypy_extensions is listed as owned by David Foster\nSee https://github.com/python/mypy/blob/master/extensions/setup.py#L37\r\n\r\nDavid Foster did indeed create the first version but I presume he doesn't want to be bothered about the subsequent additions?\r\n\r\nWe should probably change this to \"The mypy developers\" -- but where to point the email? Maybe it can be omitted. The url might also better point to GitHub.\r\n\r\nAttn: @davidfstr \n", "before_files": [{"content": "#!/usr/bin/env python\n\n# NOTE: This package must support Python 2.7 in addition to Python 3.x\n\nfrom distutils.core import setup\n\nversion = '0.2.0-dev'\ndescription = 'Experimental type system extensions for programs checked with the mypy typechecker.'\nlong_description = '''\nMypy Extensions\n===============\n\nThe \"mypy_extensions\" module defines experimental extensions to the\nstandard \"typing\" module that are supported by the mypy typechecker.\n'''.lstrip()\n\nclassifiers = [\n 'Development Status :: 2 - Pre-Alpha',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: POSIX',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Software Development',\n]\n\nsetup(\n name='mypy_extensions',\n version=version,\n description=description,\n long_description=long_description,\n author='David Foster',\n author_email='[email protected]',\n url='http://www.mypy-lang.org/',\n license='MIT License',\n platforms=['POSIX'],\n py_modules=['mypy_extensions'],\n classifiers=classifiers,\n)\n", "path": "extensions/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# NOTE: This package must support Python 2.7 in addition to Python 3.x\n\nfrom distutils.core import setup\n\nversion = '0.2.0'\ndescription = 'Experimental type system extensions for programs checked with the mypy typechecker.'\nlong_description = '''\nMypy Extensions\n===============\n\nThe \"mypy_extensions\" module defines experimental extensions to the\nstandard \"typing\" module that are supported by the mypy typechecker.\n'''.lstrip()\n\nclassifiers = [\n 'Development Status :: 2 - Pre-Alpha',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: POSIX',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Software Development',\n]\n\nsetup(\n name='mypy_extensions',\n version=version,\n description=description,\n long_description=long_description,\n author='The mypy developers',\n author_email='[email protected]',\n url='http://www.mypy-lang.org/',\n license='MIT License',\n platforms=['POSIX'],\n py_modules=['mypy_extensions'],\n classifiers=classifiers,\n)\n", "path": "extensions/setup.py"}]}
| 748 | 253 |
gh_patches_debug_4534
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-4237
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
False positive for check CKV_AZURE_5: "Ensure RBAC is enabled on AKS clusters"
**Describe the issue**
The check CKV_AZURE_5 for terraform resource `azurerm_kubernetes_cluster` ensures that RBAC is enabled in the kubernetes cluster.
Depending on how the `role_based_access_control_enabled` property is set, the check result is exact or not :
- `role_based_access_control_enabled = true`: the check passes. It's ok.
- `role_based_access_control_enabled = false`: the check fails. It's ok.
- `role_based_access_control_enabled` not defined : check fails. It's NOT ok as default value of this property is `true` (see https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#role_based_access_control_enabled)
**Examples**
This example will fails but it shouldn't:
```
resource "azurerm_resource_group" "foo" {
name = "foo"
location = "West Europe"
}
resource "azurerm_kubernetes_cluster" "foo" {
name = "foo"
resource_group_name = azurerm_resource_group.foo.name
location = azurerm_resource_group.foo.location
dns_prefix = "foo"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_D2_v2"
}
identity {
type = "SystemAssigned"
}
# role_based_access_control_enabled = true
}
```
**Version (please complete the following information):**
- Checkov Version : `2.2.252` (latest docker image)
**Additional context**
The problem is in this source file : https://github.com/bridgecrewio/checkov/blob/48abe40926c97bd2e6f8c80491369be462ce3edd/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py#L19-L29
It returns `false` if the property is not found in the resource. It shouldn't be the case as the default value of the property is `true`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/azure/AKSRbacEnabled.py`
Content:
```
1 import dpath.util
2 from checkov.common.models.enums import CheckCategories, CheckResult
3 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
4
5
6 class AKSRbacEnabled(BaseResourceCheck):
7 def __init__(self):
8 name = "Ensure RBAC is enabled on AKS clusters"
9 id = "CKV_AZURE_5"
10 supported_resources = ["azurerm_kubernetes_cluster"]
11 categories = [CheckCategories.KUBERNETES]
12 super().__init__(
13 name=name,
14 id=id,
15 categories=categories,
16 supported_resources=supported_resources,
17 )
18
19 def scan_resource_conf(self, conf):
20 self.evaluated_keys = [
21 "role_based_access_control/[0]/enabled", # azurerm < 2.99.0
22 "role_based_access_control_enabled", # azurerm >= 2.99.0
23 ]
24
25 for key in self.evaluated_keys:
26 if dpath.search(conf, key) and dpath.get(conf, key)[0]:
27 return CheckResult.PASSED
28
29 return CheckResult.FAILED
30
31
32 check = AKSRbacEnabled()
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py b/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py
--- a/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py
+++ b/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py
@@ -23,10 +23,10 @@
]
for key in self.evaluated_keys:
- if dpath.search(conf, key) and dpath.get(conf, key)[0]:
- return CheckResult.PASSED
+ if dpath.search(conf, key):
+ return CheckResult.PASSED if dpath.get(conf, key)[0] else CheckResult.FAILED
- return CheckResult.FAILED
+ return CheckResult.PASSED
check = AKSRbacEnabled()
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py b/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py\n--- a/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py\n+++ b/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py\n@@ -23,10 +23,10 @@\n ]\n \n for key in self.evaluated_keys:\n- if dpath.search(conf, key) and dpath.get(conf, key)[0]:\n- return CheckResult.PASSED\n+ if dpath.search(conf, key):\n+ return CheckResult.PASSED if dpath.get(conf, key)[0] else CheckResult.FAILED\n \n- return CheckResult.FAILED\n+ return CheckResult.PASSED\n \n \n check = AKSRbacEnabled()\n", "issue": "False positive for check CKV_AZURE_5: \"Ensure RBAC is enabled on AKS clusters\"\n**Describe the issue**\r\nThe check CKV_AZURE_5 for terraform resource `azurerm_kubernetes_cluster` ensures that RBAC is enabled in the kubernetes cluster.\r\nDepending on how the `role_based_access_control_enabled` property is set, the check result is exact or not :\r\n- `role_based_access_control_enabled = true`: the check passes. It's ok.\r\n- `role_based_access_control_enabled = false`: the check fails. It's ok.\r\n- `role_based_access_control_enabled` not defined : check fails. It's NOT ok as default value of this property is `true` (see https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#role_based_access_control_enabled)\r\n\r\n**Examples**\r\nThis example will fails but it shouldn't:\r\n```\r\nresource \"azurerm_resource_group\" \"foo\" {\r\n name = \"foo\"\r\n location = \"West Europe\"\r\n}\r\n\r\nresource \"azurerm_kubernetes_cluster\" \"foo\" {\r\n name = \"foo\"\r\n resource_group_name = azurerm_resource_group.foo.name\r\n location = azurerm_resource_group.foo.location\r\n dns_prefix = \"foo\"\r\n\r\n default_node_pool {\r\n name = \"default\"\r\n node_count = 1\r\n vm_size = \"Standard_D2_v2\"\r\n }\r\n\r\n identity {\r\n type = \"SystemAssigned\"\r\n }\r\n\r\n # role_based_access_control_enabled = true\r\n}\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version : `2.2.252` (latest docker image)\r\n\r\n**Additional context**\r\nThe problem is in this source file : https://github.com/bridgecrewio/checkov/blob/48abe40926c97bd2e6f8c80491369be462ce3edd/checkov/terraform/checks/resource/azure/AKSRbacEnabled.py#L19-L29\r\n\r\nIt returns `false` if the property is not found in the resource. It shouldn't be the case as the default value of the property is `true`\r\n\n", "before_files": [{"content": "import dpath.util\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass AKSRbacEnabled(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure RBAC is enabled on AKS clusters\"\n id = \"CKV_AZURE_5\"\n supported_resources = [\"azurerm_kubernetes_cluster\"]\n categories = [CheckCategories.KUBERNETES]\n super().__init__(\n name=name,\n id=id,\n categories=categories,\n supported_resources=supported_resources,\n )\n\n def scan_resource_conf(self, conf):\n self.evaluated_keys = [\n \"role_based_access_control/[0]/enabled\", # azurerm < 2.99.0\n \"role_based_access_control_enabled\", # azurerm >= 2.99.0\n ]\n\n for key in self.evaluated_keys:\n if dpath.search(conf, key) and dpath.get(conf, key)[0]:\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n\ncheck = AKSRbacEnabled()\n", "path": "checkov/terraform/checks/resource/azure/AKSRbacEnabled.py"}], "after_files": [{"content": "import dpath.util\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass AKSRbacEnabled(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure RBAC is enabled on AKS clusters\"\n id = \"CKV_AZURE_5\"\n supported_resources = [\"azurerm_kubernetes_cluster\"]\n categories = [CheckCategories.KUBERNETES]\n super().__init__(\n name=name,\n id=id,\n categories=categories,\n supported_resources=supported_resources,\n )\n\n def scan_resource_conf(self, conf):\n self.evaluated_keys = [\n \"role_based_access_control/[0]/enabled\", # azurerm < 2.99.0\n \"role_based_access_control_enabled\", # azurerm >= 2.99.0\n ]\n\n for key in self.evaluated_keys:\n if dpath.search(conf, key):\n return CheckResult.PASSED if dpath.get(conf, key)[0] else CheckResult.FAILED\n\n return CheckResult.PASSED\n\n\ncheck = AKSRbacEnabled()\n", "path": "checkov/terraform/checks/resource/azure/AKSRbacEnabled.py"}]}
| 1,047 | 188 |
gh_patches_debug_24671
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-45
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Failure to import requests.packages.urllib3.connectionpool
With requests 1.2.3 (the version which gets installed), this happens.
Workaround: use requests 1.2.0.
``` ipython
$ pip install --user docker-py
Downloading/unpacking docker-py
Downloading docker-py-0.1.5.tar.gz
Running setup.py egg_info for package docker-py
Requirement already satisfied (use --upgrade to upgrade): requests in /usr/lib/python2.7/dist-packages (from docker-py)
Requirement already satisfied (use --upgrade to upgrade): six in /usr/lib/python2.7/dist-packages (from docker-py)
Installing collected packages: docker-py
Running setup.py install for docker-py
Successfully installed docker-py
Cleaning up...
pwaller@fractal:~$ ipython
imporPython 2.7.5+ (default, Jun 5 2013, 10:40:07)
Type "copyright", "credits" or "license" for more information.
IPython 1.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: import docker
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-3ac1c348f58a> in <module>()
----> 1 import docker
/home/pwaller/.local/lib/python2.7/site-packages/docker/__init__.py in <module>()
----> 1 from .client import Client
/home/pwaller/.local/lib/python2.7/site-packages/docker/client.py in <module>()
15 from requests.exceptions import HTTPError
16 from requests.adapters import HTTPAdapter
---> 17 from requests.packages.urllib3.connectionpool import HTTPConnectionPool
18
19 if six.PY3:
ImportError: No module named packages.urllib3.connectionpool
In [2]:
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/unixconn.py`
Content:
```
1 # Copyright 2013 dotCloud inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import httplib
16 import requests.adapters
17 import requests.packages.urllib3.connectionpool
18 import socket
19
20 HTTPConnectionPool = requests.packages.urllib3.connectionpool.HTTPConnectionPool
21
22
23 class UnixHTTPConnection(httplib.HTTPConnection, object):
24 def __init__(self, base_url, unix_socket):
25 httplib.HTTPConnection.__init__(self, 'localhost')
26 self.base_url = base_url
27 self.unix_socket = unix_socket
28
29 def connect(self):
30 sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
31 sock.connect(self.base_url.replace("unix:/",""))
32 self.sock = sock
33
34 def _extract_path(self, url):
35 #remove the base_url entirely..
36 return url.replace(self.base_url, "")
37
38 def request(self, method, url, **kwargs):
39 url = self._extract_path(self.unix_socket)
40 super(UnixHTTPConnection, self).request(method, url, **kwargs)
41
42
43 class UnixHTTPConnectionPool(HTTPConnectionPool):
44 def __init__(self, base_url, socket_path):
45 self.socket_path = socket_path
46 self.base_url = base_url
47 super(UnixHTTPConnectionPool, self).__init__(self, 'localhost')
48
49 def _new_conn(self):
50 return UnixHTTPConnection(self.base_url, self.socket_path)
51
52
53 class UnixAdapter(requests.adapters.HTTPAdapter):
54 def __init__(self, base_url):
55 self.base_url = base_url
56 super(UnixAdapter, self).__init__()
57
58 def get_connection(self, socket_path, proxies=None):
59 return UnixHTTPConnectionPool(self.base_url, socket_path)
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docker/unixconn.py b/docker/unixconn.py
--- a/docker/unixconn.py
+++ b/docker/unixconn.py
@@ -14,10 +14,12 @@
import httplib
import requests.adapters
-import requests.packages.urllib3.connectionpool
import socket
-HTTPConnectionPool = requests.packages.urllib3.connectionpool.HTTPConnectionPool
+try:
+ import requests.packages.urllib3.connectionpool as connectionpool
+except ImportError:
+ import urllib3.connectionpool as connectionpool
class UnixHTTPConnection(httplib.HTTPConnection, object):
@@ -28,7 +30,7 @@
def connect(self):
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
- sock.connect(self.base_url.replace("unix:/",""))
+ sock.connect(self.base_url.replace("unix:/", ""))
self.sock = sock
def _extract_path(self, url):
@@ -40,7 +42,7 @@
super(UnixHTTPConnection, self).request(method, url, **kwargs)
-class UnixHTTPConnectionPool(HTTPConnectionPool):
+class UnixHTTPConnectionPool(connectionpool.HTTPConnectionPool):
def __init__(self, base_url, socket_path):
self.socket_path = socket_path
self.base_url = base_url
|
{"golden_diff": "diff --git a/docker/unixconn.py b/docker/unixconn.py\n--- a/docker/unixconn.py\n+++ b/docker/unixconn.py\n@@ -14,10 +14,12 @@\n \n import httplib\n import requests.adapters\n-import requests.packages.urllib3.connectionpool\n import socket\n \n-HTTPConnectionPool = requests.packages.urllib3.connectionpool.HTTPConnectionPool\n+try:\n+ import requests.packages.urllib3.connectionpool as connectionpool\n+except ImportError:\n+ import urllib3.connectionpool as connectionpool\n \n \n class UnixHTTPConnection(httplib.HTTPConnection, object):\n@@ -28,7 +30,7 @@\n \n def connect(self):\n sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)\n- sock.connect(self.base_url.replace(\"unix:/\",\"\"))\n+ sock.connect(self.base_url.replace(\"unix:/\", \"\"))\n self.sock = sock\n \n def _extract_path(self, url):\n@@ -40,7 +42,7 @@\n super(UnixHTTPConnection, self).request(method, url, **kwargs)\n \n \n-class UnixHTTPConnectionPool(HTTPConnectionPool):\n+class UnixHTTPConnectionPool(connectionpool.HTTPConnectionPool):\n def __init__(self, base_url, socket_path):\n self.socket_path = socket_path\n self.base_url = base_url\n", "issue": "Failure to import requests.packages.urllib3.connectionpool\nWith requests 1.2.3 (the version which gets installed), this happens.\n\nWorkaround: use requests 1.2.0.\n\n``` ipython\n$ pip install --user docker-py\nDownloading/unpacking docker-py\n Downloading docker-py-0.1.5.tar.gz\n Running setup.py egg_info for package docker-py\n\nRequirement already satisfied (use --upgrade to upgrade): requests in /usr/lib/python2.7/dist-packages (from docker-py)\nRequirement already satisfied (use --upgrade to upgrade): six in /usr/lib/python2.7/dist-packages (from docker-py)\nInstalling collected packages: docker-py\n Running setup.py install for docker-py\n\nSuccessfully installed docker-py\nCleaning up...\npwaller@fractal:~$ ipython\nimporPython 2.7.5+ (default, Jun 5 2013, 10:40:07) \nType \"copyright\", \"credits\" or \"license\" for more information.\n\nIPython 1.1.0 -- An enhanced Interactive Python.\n? -> Introduction and overview of IPython's features.\n%quickref -> Quick reference.\nhelp -> Python's own help system.\nobject? -> Details about 'object', use 'object??' for extra details.\n\nIn [1]: import docker\n---------------------------------------------------------------------------\nImportError Traceback (most recent call last)\n<ipython-input-1-3ac1c348f58a> in <module>()\n----> 1 import docker\n\n/home/pwaller/.local/lib/python2.7/site-packages/docker/__init__.py in <module>()\n----> 1 from .client import Client\n\n/home/pwaller/.local/lib/python2.7/site-packages/docker/client.py in <module>()\n 15 from requests.exceptions import HTTPError\n 16 from requests.adapters import HTTPAdapter\n---> 17 from requests.packages.urllib3.connectionpool import HTTPConnectionPool\n 18 \n 19 if six.PY3:\n\nImportError: No module named packages.urllib3.connectionpool\n\nIn [2]: \n```\n\n", "before_files": [{"content": "# Copyright 2013 dotCloud inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport httplib\nimport requests.adapters\nimport requests.packages.urllib3.connectionpool\nimport socket\n\nHTTPConnectionPool = requests.packages.urllib3.connectionpool.HTTPConnectionPool\n\n\nclass UnixHTTPConnection(httplib.HTTPConnection, object):\n def __init__(self, base_url, unix_socket):\n httplib.HTTPConnection.__init__(self, 'localhost')\n self.base_url = base_url\n self.unix_socket = unix_socket\n\n def connect(self):\n sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)\n sock.connect(self.base_url.replace(\"unix:/\",\"\"))\n self.sock = sock\n\n def _extract_path(self, url):\n #remove the base_url entirely..\n return url.replace(self.base_url, \"\")\n\n def request(self, method, url, **kwargs):\n url = self._extract_path(self.unix_socket)\n super(UnixHTTPConnection, self).request(method, url, **kwargs)\n\n\nclass UnixHTTPConnectionPool(HTTPConnectionPool):\n def __init__(self, base_url, socket_path):\n self.socket_path = socket_path\n self.base_url = base_url\n super(UnixHTTPConnectionPool, self).__init__(self, 'localhost')\n\n def _new_conn(self):\n return UnixHTTPConnection(self.base_url, self.socket_path)\n\n\nclass UnixAdapter(requests.adapters.HTTPAdapter):\n def __init__(self, base_url):\n self.base_url = base_url\n super(UnixAdapter, self).__init__()\n\n def get_connection(self, socket_path, proxies=None):\n return UnixHTTPConnectionPool(self.base_url, socket_path)\n", "path": "docker/unixconn.py"}], "after_files": [{"content": "# Copyright 2013 dotCloud inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport httplib\nimport requests.adapters\nimport socket\n\ntry:\n import requests.packages.urllib3.connectionpool as connectionpool\nexcept ImportError:\n import urllib3.connectionpool as connectionpool\n\n\nclass UnixHTTPConnection(httplib.HTTPConnection, object):\n def __init__(self, base_url, unix_socket):\n httplib.HTTPConnection.__init__(self, 'localhost')\n self.base_url = base_url\n self.unix_socket = unix_socket\n\n def connect(self):\n sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)\n sock.connect(self.base_url.replace(\"unix:/\", \"\"))\n self.sock = sock\n\n def _extract_path(self, url):\n #remove the base_url entirely..\n return url.replace(self.base_url, \"\")\n\n def request(self, method, url, **kwargs):\n url = self._extract_path(self.unix_socket)\n super(UnixHTTPConnection, self).request(method, url, **kwargs)\n\n\nclass UnixHTTPConnectionPool(connectionpool.HTTPConnectionPool):\n def __init__(self, base_url, socket_path):\n self.socket_path = socket_path\n self.base_url = base_url\n super(UnixHTTPConnectionPool, self).__init__(self, 'localhost')\n\n def _new_conn(self):\n return UnixHTTPConnection(self.base_url, self.socket_path)\n\n\nclass UnixAdapter(requests.adapters.HTTPAdapter):\n def __init__(self, base_url):\n self.base_url = base_url\n super(UnixAdapter, self).__init__()\n\n def get_connection(self, socket_path, proxies=None):\n return UnixHTTPConnectionPool(self.base_url, socket_path)\n", "path": "docker/unixconn.py"}]}
| 1,321 | 282 |
gh_patches_debug_7867
|
rasdani/github-patches
|
git_diff
|
arviz-devs__arviz-1096
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`plot_density` does not work for models with different variables
**Describe the bug**
When passing data of several models to `plot_density`, it throws an error if both models contain parameters that the other model does not have.
As long the the variables in the one model are a subset of the the variables of the other model it works.
I think that the problem lies in how `plot_density` determines the number of plots to create. It calculates the number of parameters in each model and then uses the maximum:
```
192 length_plotters = []
193 for plotters in to_plot:
194 length_plotters.append(len(plotters))
195 for var_name, selection, _ in plotters:
196 label = make_label(var_name, selection)
197 if label not in all_labels:
198 all_labels.append(label)
199 length_plotters = max(length_plotters)
```
That does not account for the situation where the union of parameters over all models is larger than the set parameters in each single model.
**To Reproduce**
This simple example should demonstrate what I mean:
```python
import numpy as np
import xarray as xr
import arviz
n_draws = 1000
model_ab = xr.Dataset({
"a": ("draw", np.random.normal(size=n_draws)),
"b": ("draw", np.random.normal(size=n_draws)),
})
model_b = xr.Dataset({
"b": ("draw", np.random.normal(size=n_draws)),
})
model_bc = xr.Dataset({
"c": ("draw", np.random.normal(size=n_draws)),
"b": ("draw", np.random.normal(size=n_draws)),
})
# Works
arviz.plot_density([model_ab, model_b], data_labels=["ab", "b"]);
# Does not work
arviz.plot_density([model_ab, model_bc], data_labels=["ab", "bc"]);
```
**Expected behavior**
In the second case, the code should create 3 subplots, for parameters a, b, and c. While the plots for a and c would contain only one density, the plot for b would contain two densities.
**Additional context**
arviz Version: 0.6.1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `arviz/plots/densityplot.py`
Content:
```
1 """KDE and histogram plots for multiple variables."""
2 from itertools import cycle
3 import warnings
4
5 import matplotlib.pyplot as plt
6
7 from ..data import convert_to_dataset
8 from .plot_utils import (
9 _scale_fig_size,
10 make_label,
11 xarray_var_iter,
12 default_grid,
13 get_plotting_function,
14 )
15 from ..rcparams import rcParams
16 from ..utils import _var_names
17
18
19 # pylint:disable-msg=too-many-function-args
20 def plot_density(
21 data,
22 group="posterior",
23 data_labels=None,
24 var_names=None,
25 transform=None,
26 credible_interval=None,
27 point_estimate="auto",
28 colors="cycle",
29 outline=True,
30 hpd_markers="",
31 shade=0.0,
32 bw=4.5,
33 figsize=None,
34 textsize=None,
35 ax=None,
36 backend=None,
37 backend_kwargs=None,
38 show=None,
39 ):
40 """Generate KDE plots for continuous variables and histograms for discrete ones.
41
42 Plots are truncated at their 100*(1-alpha)% credible intervals. Plots are grouped per variable
43 and colors assigned to models.
44
45 Parameters
46 ----------
47 data : Union[Object, Iterator[Object]]
48 Any object that can be converted to an az.InferenceData object, or an Iterator returning
49 a sequence of such objects.
50 Refer to documentation of az.convert_to_dataset for details about such objects.
51 group: Optional[str]
52 Specifies which InferenceData group should be plotted. Defaults to 'posterior'.
53 Alternative values include 'prior' and any other strings used as dataset keys in the
54 InferenceData.
55 data_labels : Optional[List[str]]
56 List with names for the datasets passed as "data." Useful when plotting more than one
57 dataset. Must be the same shape as the data parameter. Defaults to None.
58 var_names: Optional[List[str]]
59 List of variables to plot. If multiple datasets are supplied and var_names is not None,
60 will print the same set of variables for each dataset. Defaults to None, which results in
61 all the variables being plotted.
62 transform : callable
63 Function to transform data (defaults to None i.e. the identity function)
64 credible_interval : float
65 Credible intervals. Should be in the interval (0, 1]. Defaults to 0.94.
66 point_estimate : Optional[str]
67 Plot point estimate per variable. Values should be 'mean', 'median', 'mode' or None.
68 Defaults to 'auto' i.e. it falls back to default set in rcParams.
69 colors : Optional[Union[List[str],str]]
70 List with valid matplotlib colors, one color per model. Alternative a string can be passed.
71 If the string is `cycle`, it will automatically choose a color per model from matplotlib's
72 cycle. If a single color is passed, e.g. 'k', 'C2' or 'red' this color will be used for all
73 models. Defaults to `cycle`.
74 outline : bool
75 Use a line to draw KDEs and histograms. Default to True
76 hpd_markers : str
77 A valid `matplotlib.markers` like 'v', used to indicate the limits of the hpd interval.
78 Defaults to empty string (no marker).
79 shade : Optional[float]
80 Alpha blending value for the shaded area under the curve, between 0 (no shade) and 1
81 (opaque). Defaults to 0.
82 bw : Optional[float]
83 Bandwidth scaling factor for the KDE. Should be larger than 0. The higher this number the
84 smoother the KDE will be. Defaults to 4.5 which is essentially the same as the Scott's rule
85 of thumb (the default rule used by SciPy).
86 figsize : Optional[Tuple[int, int]]
87 Figure size. If None it will be defined automatically.
88 textsize: Optional[float]
89 Text size scaling factor for labels, titles and lines. If None it will be autoscaled based
90 on figsize.
91 ax: axes, optional
92 Matplotlib axes or Bokeh figures.
93 backend: str, optional
94 Select plotting backend {"matplotlib","bokeh"}. Default "matplotlib".
95 backend_kwargs: bool, optional
96 These are kwargs specific to the backend being used. For additional documentation
97 check the plotting method of the backend.
98 show : bool, optional
99 Call backend show function.
100
101 Returns
102 -------
103 axes : matplotlib axes or bokeh figures
104
105
106 Examples
107 --------
108 Plot default density plot
109
110 .. plot::
111 :context: close-figs
112
113 >>> import arviz as az
114 >>> centered = az.load_arviz_data('centered_eight')
115 >>> non_centered = az.load_arviz_data('non_centered_eight')
116 >>> az.plot_density([centered, non_centered])
117
118 Plot subset variables by specifying variable name exactly
119
120 .. plot::
121 :context: close-figs
122
123 >>> az.plot_density([centered, non_centered], var_names=["mu"])
124
125 Plot a specific `az.InferenceData` group
126
127 .. plot::
128 :context: close-figs
129
130 >>> az.plot_density([centered, non_centered], var_names=["mu"], group="prior")
131
132 Specify credible interval
133
134 .. plot::
135 :context: close-figs
136
137 >>> az.plot_density([centered, non_centered], var_names=["mu"], credible_interval=.5)
138
139 Shade plots and/or remove outlines
140
141 .. plot::
142 :context: close-figs
143
144 >>> az.plot_density([centered, non_centered], var_names=["mu"], outline=False, shade=.8)
145
146 Specify binwidth for kernel density estimation
147
148 .. plot::
149 :context: close-figs
150
151 >>> az.plot_density([centered, non_centered], var_names=["mu"], bw=.9)
152 """
153 if transform is not None:
154 data = transform(data)
155 if not isinstance(data, (list, tuple)):
156 datasets = [convert_to_dataset(data, group=group)]
157 else:
158 datasets = [convert_to_dataset(datum, group=group) for datum in data]
159
160 var_names = _var_names(var_names, datasets)
161 n_data = len(datasets)
162
163 if data_labels is None:
164 if n_data > 1:
165 data_labels = ["{}".format(idx) for idx in range(n_data)]
166 else:
167 data_labels = [""]
168 elif len(data_labels) != n_data:
169 raise ValueError(
170 "The number of names for the models ({}) "
171 "does not match the number of models ({})".format(len(data_labels), n_data)
172 )
173
174 if colors == "cycle":
175 colors = [
176 prop
177 for _, prop in zip(
178 range(n_data), cycle(plt.rcParams["axes.prop_cycle"].by_key()["color"])
179 )
180 ]
181 elif isinstance(colors, str):
182 colors = [colors for _ in range(n_data)]
183
184 if credible_interval is None:
185 credible_interval = rcParams["stats.credible_interval"]
186 else:
187 if not 1 >= credible_interval > 0:
188 raise ValueError("The value of credible_interval should be in the interval (0, 1]")
189
190 to_plot = [list(xarray_var_iter(data, var_names, combined=True)) for data in datasets]
191 all_labels = []
192 length_plotters = []
193 for plotters in to_plot:
194 length_plotters.append(len(plotters))
195 for var_name, selection, _ in plotters:
196 label = make_label(var_name, selection)
197 if label not in all_labels:
198 all_labels.append(label)
199 length_plotters = max(length_plotters)
200 max_plots = rcParams["plot.max_subplots"]
201 max_plots = length_plotters if max_plots is None else max_plots
202 if length_plotters > max_plots:
203 warnings.warn(
204 "rcParams['plot.max_subplots'] ({max_plots}) is smaller than the number "
205 "of variables to plot ({len_plotters}) in plot_density, generating only "
206 "{max_plots} plots".format(max_plots=max_plots, len_plotters=length_plotters),
207 UserWarning,
208 )
209 all_labels = all_labels[:max_plots]
210 to_plot = [
211 [
212 (var_name, selection, values)
213 for var_name, selection, values in plotters
214 if make_label(var_name, selection) in all_labels
215 ]
216 for plotters in to_plot
217 ]
218 length_plotters = max_plots
219 rows, cols = default_grid(length_plotters, max_cols=3)
220
221 (figsize, _, titlesize, xt_labelsize, linewidth, markersize) = _scale_fig_size(
222 figsize, textsize, rows, cols
223 )
224
225 plot_density_kwargs = dict(
226 ax=ax,
227 all_labels=all_labels,
228 to_plot=to_plot,
229 colors=colors,
230 bw=bw,
231 figsize=figsize,
232 length_plotters=length_plotters,
233 rows=rows,
234 cols=cols,
235 titlesize=titlesize,
236 xt_labelsize=xt_labelsize,
237 linewidth=linewidth,
238 markersize=markersize,
239 credible_interval=credible_interval,
240 point_estimate=point_estimate,
241 hpd_markers=hpd_markers,
242 outline=outline,
243 shade=shade,
244 n_data=n_data,
245 data_labels=data_labels,
246 backend_kwargs=backend_kwargs,
247 show=show,
248 )
249
250 if backend is None:
251 backend = rcParams["plot.backend"]
252 backend = backend.lower()
253
254 if backend == "bokeh":
255
256 plot_density_kwargs["line_width"] = plot_density_kwargs.pop("linewidth")
257 plot_density_kwargs.pop("titlesize")
258 plot_density_kwargs.pop("xt_labelsize")
259 plot_density_kwargs.pop("n_data")
260
261 # TODO: Add backend kwargs
262 plot = get_plotting_function("plot_density", "densityplot", backend)
263 ax = plot(**plot_density_kwargs)
264 return ax
265
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/arviz/plots/densityplot.py b/arviz/plots/densityplot.py
--- a/arviz/plots/densityplot.py
+++ b/arviz/plots/densityplot.py
@@ -196,7 +196,7 @@
label = make_label(var_name, selection)
if label not in all_labels:
all_labels.append(label)
- length_plotters = max(length_plotters)
+ length_plotters = len(all_labels)
max_plots = rcParams["plot.max_subplots"]
max_plots = length_plotters if max_plots is None else max_plots
if length_plotters > max_plots:
|
{"golden_diff": "diff --git a/arviz/plots/densityplot.py b/arviz/plots/densityplot.py\n--- a/arviz/plots/densityplot.py\n+++ b/arviz/plots/densityplot.py\n@@ -196,7 +196,7 @@\n label = make_label(var_name, selection)\n if label not in all_labels:\n all_labels.append(label)\n- length_plotters = max(length_plotters)\n+ length_plotters = len(all_labels)\n max_plots = rcParams[\"plot.max_subplots\"]\n max_plots = length_plotters if max_plots is None else max_plots\n if length_plotters > max_plots:\n", "issue": "`plot_density` does not work for models with different variables\n**Describe the bug**\r\nWhen passing data of several models to `plot_density`, it throws an error if both models contain parameters that the other model does not have.\r\nAs long the the variables in the one model are a subset of the the variables of the other model it works.\r\nI think that the problem lies in how `plot_density` determines the number of plots to create. It calculates the number of parameters in each model and then uses the maximum:\r\n```\r\n192 length_plotters = []\r\n193 for plotters in to_plot:\r\n194 length_plotters.append(len(plotters))\r\n195 for var_name, selection, _ in plotters:\r\n196 label = make_label(var_name, selection)\r\n197 if label not in all_labels:\r\n198 all_labels.append(label)\r\n199 length_plotters = max(length_plotters)\r\n```\r\nThat does not account for the situation where the union of parameters over all models is larger than the set parameters in each single model.\r\n\r\n**To Reproduce**\r\nThis simple example should demonstrate what I mean:\r\n```python\r\nimport numpy as np\r\nimport xarray as xr\r\nimport arviz\r\n\r\nn_draws = 1000\r\n\r\nmodel_ab = xr.Dataset({\r\n \"a\": (\"draw\", np.random.normal(size=n_draws)),\r\n \"b\": (\"draw\", np.random.normal(size=n_draws)),\r\n})\r\nmodel_b = xr.Dataset({\r\n \"b\": (\"draw\", np.random.normal(size=n_draws)),\r\n})\r\nmodel_bc = xr.Dataset({\r\n \"c\": (\"draw\", np.random.normal(size=n_draws)),\r\n \"b\": (\"draw\", np.random.normal(size=n_draws)),\r\n})\r\n\r\n# Works\r\narviz.plot_density([model_ab, model_b], data_labels=[\"ab\", \"b\"]);\r\n\r\n# Does not work\r\narviz.plot_density([model_ab, model_bc], data_labels=[\"ab\", \"bc\"]);\r\n```\r\n\r\n**Expected behavior**\r\nIn the second case, the code should create 3 subplots, for parameters a, b, and c. While the plots for a and c would contain only one density, the plot for b would contain two densities.\r\n\r\n**Additional context**\r\narviz Version: 0.6.1\r\n\n", "before_files": [{"content": "\"\"\"KDE and histogram plots for multiple variables.\"\"\"\nfrom itertools import cycle\nimport warnings\n\nimport matplotlib.pyplot as plt\n\nfrom ..data import convert_to_dataset\nfrom .plot_utils import (\n _scale_fig_size,\n make_label,\n xarray_var_iter,\n default_grid,\n get_plotting_function,\n)\nfrom ..rcparams import rcParams\nfrom ..utils import _var_names\n\n\n# pylint:disable-msg=too-many-function-args\ndef plot_density(\n data,\n group=\"posterior\",\n data_labels=None,\n var_names=None,\n transform=None,\n credible_interval=None,\n point_estimate=\"auto\",\n colors=\"cycle\",\n outline=True,\n hpd_markers=\"\",\n shade=0.0,\n bw=4.5,\n figsize=None,\n textsize=None,\n ax=None,\n backend=None,\n backend_kwargs=None,\n show=None,\n):\n \"\"\"Generate KDE plots for continuous variables and histograms for discrete ones.\n\n Plots are truncated at their 100*(1-alpha)% credible intervals. Plots are grouped per variable\n and colors assigned to models.\n\n Parameters\n ----------\n data : Union[Object, Iterator[Object]]\n Any object that can be converted to an az.InferenceData object, or an Iterator returning\n a sequence of such objects.\n Refer to documentation of az.convert_to_dataset for details about such objects.\n group: Optional[str]\n Specifies which InferenceData group should be plotted. Defaults to 'posterior'.\n Alternative values include 'prior' and any other strings used as dataset keys in the\n InferenceData.\n data_labels : Optional[List[str]]\n List with names for the datasets passed as \"data.\" Useful when plotting more than one\n dataset. Must be the same shape as the data parameter. Defaults to None.\n var_names: Optional[List[str]]\n List of variables to plot. If multiple datasets are supplied and var_names is not None,\n will print the same set of variables for each dataset. Defaults to None, which results in\n all the variables being plotted.\n transform : callable\n Function to transform data (defaults to None i.e. the identity function)\n credible_interval : float\n Credible intervals. Should be in the interval (0, 1]. Defaults to 0.94.\n point_estimate : Optional[str]\n Plot point estimate per variable. Values should be 'mean', 'median', 'mode' or None.\n Defaults to 'auto' i.e. it falls back to default set in rcParams.\n colors : Optional[Union[List[str],str]]\n List with valid matplotlib colors, one color per model. Alternative a string can be passed.\n If the string is `cycle`, it will automatically choose a color per model from matplotlib's\n cycle. If a single color is passed, e.g. 'k', 'C2' or 'red' this color will be used for all\n models. Defaults to `cycle`.\n outline : bool\n Use a line to draw KDEs and histograms. Default to True\n hpd_markers : str\n A valid `matplotlib.markers` like 'v', used to indicate the limits of the hpd interval.\n Defaults to empty string (no marker).\n shade : Optional[float]\n Alpha blending value for the shaded area under the curve, between 0 (no shade) and 1\n (opaque). Defaults to 0.\n bw : Optional[float]\n Bandwidth scaling factor for the KDE. Should be larger than 0. The higher this number the\n smoother the KDE will be. Defaults to 4.5 which is essentially the same as the Scott's rule\n of thumb (the default rule used by SciPy).\n figsize : Optional[Tuple[int, int]]\n Figure size. If None it will be defined automatically.\n textsize: Optional[float]\n Text size scaling factor for labels, titles and lines. If None it will be autoscaled based\n on figsize.\n ax: axes, optional\n Matplotlib axes or Bokeh figures.\n backend: str, optional\n Select plotting backend {\"matplotlib\",\"bokeh\"}. Default \"matplotlib\".\n backend_kwargs: bool, optional\n These are kwargs specific to the backend being used. For additional documentation\n check the plotting method of the backend.\n show : bool, optional\n Call backend show function.\n\n Returns\n -------\n axes : matplotlib axes or bokeh figures\n\n\n Examples\n --------\n Plot default density plot\n\n .. plot::\n :context: close-figs\n\n >>> import arviz as az\n >>> centered = az.load_arviz_data('centered_eight')\n >>> non_centered = az.load_arviz_data('non_centered_eight')\n >>> az.plot_density([centered, non_centered])\n\n Plot subset variables by specifying variable name exactly\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_density([centered, non_centered], var_names=[\"mu\"])\n\n Plot a specific `az.InferenceData` group\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_density([centered, non_centered], var_names=[\"mu\"], group=\"prior\")\n\n Specify credible interval\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_density([centered, non_centered], var_names=[\"mu\"], credible_interval=.5)\n\n Shade plots and/or remove outlines\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_density([centered, non_centered], var_names=[\"mu\"], outline=False, shade=.8)\n\n Specify binwidth for kernel density estimation\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_density([centered, non_centered], var_names=[\"mu\"], bw=.9)\n \"\"\"\n if transform is not None:\n data = transform(data)\n if not isinstance(data, (list, tuple)):\n datasets = [convert_to_dataset(data, group=group)]\n else:\n datasets = [convert_to_dataset(datum, group=group) for datum in data]\n\n var_names = _var_names(var_names, datasets)\n n_data = len(datasets)\n\n if data_labels is None:\n if n_data > 1:\n data_labels = [\"{}\".format(idx) for idx in range(n_data)]\n else:\n data_labels = [\"\"]\n elif len(data_labels) != n_data:\n raise ValueError(\n \"The number of names for the models ({}) \"\n \"does not match the number of models ({})\".format(len(data_labels), n_data)\n )\n\n if colors == \"cycle\":\n colors = [\n prop\n for _, prop in zip(\n range(n_data), cycle(plt.rcParams[\"axes.prop_cycle\"].by_key()[\"color\"])\n )\n ]\n elif isinstance(colors, str):\n colors = [colors for _ in range(n_data)]\n\n if credible_interval is None:\n credible_interval = rcParams[\"stats.credible_interval\"]\n else:\n if not 1 >= credible_interval > 0:\n raise ValueError(\"The value of credible_interval should be in the interval (0, 1]\")\n\n to_plot = [list(xarray_var_iter(data, var_names, combined=True)) for data in datasets]\n all_labels = []\n length_plotters = []\n for plotters in to_plot:\n length_plotters.append(len(plotters))\n for var_name, selection, _ in plotters:\n label = make_label(var_name, selection)\n if label not in all_labels:\n all_labels.append(label)\n length_plotters = max(length_plotters)\n max_plots = rcParams[\"plot.max_subplots\"]\n max_plots = length_plotters if max_plots is None else max_plots\n if length_plotters > max_plots:\n warnings.warn(\n \"rcParams['plot.max_subplots'] ({max_plots}) is smaller than the number \"\n \"of variables to plot ({len_plotters}) in plot_density, generating only \"\n \"{max_plots} plots\".format(max_plots=max_plots, len_plotters=length_plotters),\n UserWarning,\n )\n all_labels = all_labels[:max_plots]\n to_plot = [\n [\n (var_name, selection, values)\n for var_name, selection, values in plotters\n if make_label(var_name, selection) in all_labels\n ]\n for plotters in to_plot\n ]\n length_plotters = max_plots\n rows, cols = default_grid(length_plotters, max_cols=3)\n\n (figsize, _, titlesize, xt_labelsize, linewidth, markersize) = _scale_fig_size(\n figsize, textsize, rows, cols\n )\n\n plot_density_kwargs = dict(\n ax=ax,\n all_labels=all_labels,\n to_plot=to_plot,\n colors=colors,\n bw=bw,\n figsize=figsize,\n length_plotters=length_plotters,\n rows=rows,\n cols=cols,\n titlesize=titlesize,\n xt_labelsize=xt_labelsize,\n linewidth=linewidth,\n markersize=markersize,\n credible_interval=credible_interval,\n point_estimate=point_estimate,\n hpd_markers=hpd_markers,\n outline=outline,\n shade=shade,\n n_data=n_data,\n data_labels=data_labels,\n backend_kwargs=backend_kwargs,\n show=show,\n )\n\n if backend is None:\n backend = rcParams[\"plot.backend\"]\n backend = backend.lower()\n\n if backend == \"bokeh\":\n\n plot_density_kwargs[\"line_width\"] = plot_density_kwargs.pop(\"linewidth\")\n plot_density_kwargs.pop(\"titlesize\")\n plot_density_kwargs.pop(\"xt_labelsize\")\n plot_density_kwargs.pop(\"n_data\")\n\n # TODO: Add backend kwargs\n plot = get_plotting_function(\"plot_density\", \"densityplot\", backend)\n ax = plot(**plot_density_kwargs)\n return ax\n", "path": "arviz/plots/densityplot.py"}], "after_files": [{"content": "\"\"\"KDE and histogram plots for multiple variables.\"\"\"\nfrom itertools import cycle\nimport warnings\n\nimport matplotlib.pyplot as plt\n\nfrom ..data import convert_to_dataset\nfrom .plot_utils import (\n _scale_fig_size,\n make_label,\n xarray_var_iter,\n default_grid,\n get_plotting_function,\n)\nfrom ..rcparams import rcParams\nfrom ..utils import _var_names\n\n\n# pylint:disable-msg=too-many-function-args\ndef plot_density(\n data,\n group=\"posterior\",\n data_labels=None,\n var_names=None,\n transform=None,\n credible_interval=None,\n point_estimate=\"auto\",\n colors=\"cycle\",\n outline=True,\n hpd_markers=\"\",\n shade=0.0,\n bw=4.5,\n figsize=None,\n textsize=None,\n ax=None,\n backend=None,\n backend_kwargs=None,\n show=None,\n):\n \"\"\"Generate KDE plots for continuous variables and histograms for discrete ones.\n\n Plots are truncated at their 100*(1-alpha)% credible intervals. Plots are grouped per variable\n and colors assigned to models.\n\n Parameters\n ----------\n data : Union[Object, Iterator[Object]]\n Any object that can be converted to an az.InferenceData object, or an Iterator returning\n a sequence of such objects.\n Refer to documentation of az.convert_to_dataset for details about such objects.\n group: Optional[str]\n Specifies which InferenceData group should be plotted. Defaults to 'posterior'.\n Alternative values include 'prior' and any other strings used as dataset keys in the\n InferenceData.\n data_labels : Optional[List[str]]\n List with names for the datasets passed as \"data.\" Useful when plotting more than one\n dataset. Must be the same shape as the data parameter. Defaults to None.\n var_names: Optional[List[str]]\n List of variables to plot. If multiple datasets are supplied and var_names is not None,\n will print the same set of variables for each dataset. Defaults to None, which results in\n all the variables being plotted.\n transform : callable\n Function to transform data (defaults to None i.e. the identity function)\n credible_interval : float\n Credible intervals. Should be in the interval (0, 1]. Defaults to 0.94.\n point_estimate : Optional[str]\n Plot point estimate per variable. Values should be 'mean', 'median', 'mode' or None.\n Defaults to 'auto' i.e. it falls back to default set in rcParams.\n colors : Optional[Union[List[str],str]]\n List with valid matplotlib colors, one color per model. Alternative a string can be passed.\n If the string is `cycle`, it will automatically choose a color per model from matplotlib's\n cycle. If a single color is passed, e.g. 'k', 'C2' or 'red' this color will be used for all\n models. Defaults to `cycle`.\n outline : bool\n Use a line to draw KDEs and histograms. Default to True\n hpd_markers : str\n A valid `matplotlib.markers` like 'v', used to indicate the limits of the hpd interval.\n Defaults to empty string (no marker).\n shade : Optional[float]\n Alpha blending value for the shaded area under the curve, between 0 (no shade) and 1\n (opaque). Defaults to 0.\n bw : Optional[float]\n Bandwidth scaling factor for the KDE. Should be larger than 0. The higher this number the\n smoother the KDE will be. Defaults to 4.5 which is essentially the same as the Scott's rule\n of thumb (the default rule used by SciPy).\n figsize : Optional[Tuple[int, int]]\n Figure size. If None it will be defined automatically.\n textsize: Optional[float]\n Text size scaling factor for labels, titles and lines. If None it will be autoscaled based\n on figsize.\n ax: axes, optional\n Matplotlib axes or Bokeh figures.\n backend: str, optional\n Select plotting backend {\"matplotlib\",\"bokeh\"}. Default \"matplotlib\".\n backend_kwargs: bool, optional\n These are kwargs specific to the backend being used. For additional documentation\n check the plotting method of the backend.\n show : bool, optional\n Call backend show function.\n\n Returns\n -------\n axes : matplotlib axes or bokeh figures\n\n\n Examples\n --------\n Plot default density plot\n\n .. plot::\n :context: close-figs\n\n >>> import arviz as az\n >>> centered = az.load_arviz_data('centered_eight')\n >>> non_centered = az.load_arviz_data('non_centered_eight')\n >>> az.plot_density([centered, non_centered])\n\n Plot subset variables by specifying variable name exactly\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_density([centered, non_centered], var_names=[\"mu\"])\n\n Plot a specific `az.InferenceData` group\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_density([centered, non_centered], var_names=[\"mu\"], group=\"prior\")\n\n Specify credible interval\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_density([centered, non_centered], var_names=[\"mu\"], credible_interval=.5)\n\n Shade plots and/or remove outlines\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_density([centered, non_centered], var_names=[\"mu\"], outline=False, shade=.8)\n\n Specify binwidth for kernel density estimation\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_density([centered, non_centered], var_names=[\"mu\"], bw=.9)\n \"\"\"\n if transform is not None:\n data = transform(data)\n if not isinstance(data, (list, tuple)):\n datasets = [convert_to_dataset(data, group=group)]\n else:\n datasets = [convert_to_dataset(datum, group=group) for datum in data]\n\n var_names = _var_names(var_names, datasets)\n n_data = len(datasets)\n\n if data_labels is None:\n if n_data > 1:\n data_labels = [\"{}\".format(idx) for idx in range(n_data)]\n else:\n data_labels = [\"\"]\n elif len(data_labels) != n_data:\n raise ValueError(\n \"The number of names for the models ({}) \"\n \"does not match the number of models ({})\".format(len(data_labels), n_data)\n )\n\n if colors == \"cycle\":\n colors = [\n prop\n for _, prop in zip(\n range(n_data), cycle(plt.rcParams[\"axes.prop_cycle\"].by_key()[\"color\"])\n )\n ]\n elif isinstance(colors, str):\n colors = [colors for _ in range(n_data)]\n\n if credible_interval is None:\n credible_interval = rcParams[\"stats.credible_interval\"]\n else:\n if not 1 >= credible_interval > 0:\n raise ValueError(\"The value of credible_interval should be in the interval (0, 1]\")\n\n to_plot = [list(xarray_var_iter(data, var_names, combined=True)) for data in datasets]\n all_labels = []\n length_plotters = []\n for plotters in to_plot:\n length_plotters.append(len(plotters))\n for var_name, selection, _ in plotters:\n label = make_label(var_name, selection)\n if label not in all_labels:\n all_labels.append(label)\n length_plotters = len(all_labels)\n max_plots = rcParams[\"plot.max_subplots\"]\n max_plots = length_plotters if max_plots is None else max_plots\n if length_plotters > max_plots:\n warnings.warn(\n \"rcParams['plot.max_subplots'] ({max_plots}) is smaller than the number \"\n \"of variables to plot ({len_plotters}) in plot_density, generating only \"\n \"{max_plots} plots\".format(max_plots=max_plots, len_plotters=length_plotters),\n UserWarning,\n )\n all_labels = all_labels[:max_plots]\n to_plot = [\n [\n (var_name, selection, values)\n for var_name, selection, values in plotters\n if make_label(var_name, selection) in all_labels\n ]\n for plotters in to_plot\n ]\n length_plotters = max_plots\n rows, cols = default_grid(length_plotters, max_cols=3)\n\n (figsize, _, titlesize, xt_labelsize, linewidth, markersize) = _scale_fig_size(\n figsize, textsize, rows, cols\n )\n\n plot_density_kwargs = dict(\n ax=ax,\n all_labels=all_labels,\n to_plot=to_plot,\n colors=colors,\n bw=bw,\n figsize=figsize,\n length_plotters=length_plotters,\n rows=rows,\n cols=cols,\n titlesize=titlesize,\n xt_labelsize=xt_labelsize,\n linewidth=linewidth,\n markersize=markersize,\n credible_interval=credible_interval,\n point_estimate=point_estimate,\n hpd_markers=hpd_markers,\n outline=outline,\n shade=shade,\n n_data=n_data,\n data_labels=data_labels,\n backend_kwargs=backend_kwargs,\n show=show,\n )\n\n if backend is None:\n backend = rcParams[\"plot.backend\"]\n backend = backend.lower()\n\n if backend == \"bokeh\":\n\n plot_density_kwargs[\"line_width\"] = plot_density_kwargs.pop(\"linewidth\")\n plot_density_kwargs.pop(\"titlesize\")\n plot_density_kwargs.pop(\"xt_labelsize\")\n plot_density_kwargs.pop(\"n_data\")\n\n # TODO: Add backend kwargs\n plot = get_plotting_function(\"plot_density\", \"densityplot\", backend)\n ax = plot(**plot_density_kwargs)\n return ax\n", "path": "arviz/plots/densityplot.py"}]}
| 3,597 | 145 |
gh_patches_debug_27967
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-1743
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_K8S_31 failure with DockerDefault configured
**Describe the bug**
CKV_K8S_31 failure when the seccompProfile type is configured as Docker/Default or Runtime/Default
**To Reproduce**
Define security context as below.
```
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: clust3rf8ck
name: clust3rf8ck
namespace: clust3rf8ck
spec:
replicas: 2
selector:
matchLabels:
app: clust3rf8ck
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: clust3rf8ck
annotations:
seccomp.security.alpha.kubernetes.io/pod: "docker/default"
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- clust3rf8ck
topologyKey: "kubernetes.io/hostname"
containers:
- image: eurogig/clust3rf8ck@sha256:a374eb5853e0e17d06bcf37afc2fcb40892aa3477caf362ea3581c71373cb90a
name: clust3rf8ck
imagePullPolicy: Always
resources:
limits:
cpu: "1"
memory: "200Mi"
requests:
cpu: "0.6"
memory: "100Mi"
livenessProbe:
exec:
command:
- /bin/sh
- -c
- "[ -f /var/run/nginx.pid ] && ps -A | grep nginx"
initialDelaySeconds: 10
periodSeconds: 5
readinessProbe:
httpGet:
scheme: HTTP
path: /index.html
port: 8080
initialDelaySeconds: 10
periodSeconds: 5
securityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
- NET_RAW
volumeMounts:
- mountPath: /var/cache/nginx
name: cache-volume
- mountPath: /var/run
name: pid-volume
automountServiceAccountToken: false
securityContext:
runAsNonRoot: true
runAsUser: 10014
runAsGroup: 10014
volumes:
- name: cache-volume
emptyDir: {}
- name: pid-volume
emptyDir: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: clust3rf8ck
name: cflb
namespace: clust3rf8ck
spec:
ports:
- name: 80-80
port: 80
protocol: TCP
targetPort: 8080
selector:
app: clust3rf8ck
type: LoadBalancer
```
2. Run checkov for kubernetes.
```
checkov --framework=kubernetes --quiet -d .
```
**Expected behavior**
CKV_K8S_31 to pass with the following configuration.
```
spec:
replicas: 2
selector:
matchLabels:
app: clust3rf8ck
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: clust3rf8ck
annotations:
seccomp.security.alpha.kubernetes.io/pod: "docker/default"
```
**Actual Behaviour**
```
[terraformpipeline] checkov --framework=kubernetes --quiet -d . 20:52:07 ☁ master ☂ ⚡
kubernetes scan results:
Passed checks: 89, Failed checks: 1, Skipped checks: 0
Check: CKV_K8S_31: "Ensure that the seccomp profile is set to docker/default or runtime/default"
FAILED for resource: Deployment.clust3rf8ck.clust3rf8ck
File: /k8s-sample.yaml:35-114
Guide: https://docs.bridgecrew.io/docs/bc_k8s_29
35 | apiVersion: apps/v1
36 | kind: Deployment
37 | metadata:
38 | labels:
39 | app: clust3rf8ck
40 | name: clust3rf8ck
41 | namespace: clust3rf8ck
42 | spec:
43 | replicas: 2
44 | selector:
45 | matchLabels:
46 | app: clust3rf8ck
47 | strategy: {}
48 | template:
49 | metadata:
50 | creationTimestamp: null
51 | labels:
52 | app: clust3rf8ck
53 | annotations:
54 | seccomp.security.alpha.kubernetes.io/pod: "docker/default"
55 | spec:
56 | affinity:
57 | podAntiAffinity:
58 | requiredDuringSchedulingIgnoredDuringExecution:
59 | - labelSelector:
60 | matchExpressions:
61 | - key: app
62 | operator: In
63 | values:
64 | - clust3rf8ck
65 | topologyKey: "kubernetes.io/hostname"
66 | containers:
67 | - image: eurogig/clust3rf8ck@sha256:a374eb5853e0e17d06bcf37afc2fcb40892aa3477caf362ea3581c71373cb90a
68 | name: clust3rf8ck
69 | imagePullPolicy: Always
70 | resources:
71 | limits:
72 | cpu: "1"
73 | memory: "200Mi"
74 | requests:
75 | cpu: "0.6"
76 | memory: "100Mi"
77 | livenessProbe:
78 | exec:
79 | command:
80 | - /bin/sh
81 | - -c
82 | - "[ -f /var/run/nginx.pid ] && ps -A | grep nginx"
83 | initialDelaySeconds: 10
84 | periodSeconds: 5
85 | readinessProbe:
86 | httpGet:
87 | scheme: HTTP
88 | path: /index.html
89 | port: 8080
90 | initialDelaySeconds: 10
91 | periodSeconds: 5
92 | securityContext:
93 | readOnlyRootFilesystem: true
94 | allowPrivilegeEscalation: false
95 | capabilities:
96 | drop:
97 | - ALL
98 | - NET_RAW
99 | volumeMounts:
100 | - mountPath: /var/cache/nginx
101 | name: cache-volume
102 | - mountPath: /var/run
103 | name: pid-volume
104 | automountServiceAccountToken: false
105 | securityContext:
106 | runAsNonRoot: true
107 | runAsUser: 10014
108 | runAsGroup: 10014
109 | volumes:
110 | - name: cache-volume
111 | emptyDir: {}
112 | - name: pid-volume
113 | emptyDir: {}
114 | status: {}
```
**Desktop (please complete the following information):**
- OS: Big Sur 11.5.2
- Checkov Version 2.0.479
**Additional context**
Took the K8s example from this blog https://bridgecrew.io/blog/creating-a-secure-kubernetes-nginx-deployment-using-checkov/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/kubernetes/checks/Seccomp.py`
Content:
```
1 from checkov.common.models.enums import CheckCategories, CheckResult
2 from checkov.common.util.data_structures_utils import find_in_dict
3 from checkov.kubernetes.base_spec_check import BaseK8Check
4 from checkov.common.util.type_forcers import force_list
5
6
7 class Seccomp(BaseK8Check):
8
9 def __init__(self):
10 # CIS-1.5 5.7.2
11 name = "Ensure that the seccomp profile is set to docker/default or runtime/default"
12 id = "CKV_K8S_31"
13 # Location: Pod.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod
14 # Location: CronJob.spec.jobTemplate.spec.template.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod
15 # Location: *.spec.template.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod
16 # Location: *.spec.securityContext.seccompProfile.type
17 supported_kind = ['Pod', 'Deployment', 'DaemonSet', 'StatefulSet', 'ReplicaSet', 'ReplicationController', 'Job', 'CronJob']
18 categories = [CheckCategories.KUBERNETES]
19 super().__init__(name=name, id=id, categories=categories, supported_entities=supported_kind)
20
21 def get_resource_id(self, conf):
22 if "namespace" in conf["metadata"]:
23 return "{}.{}.{}".format(conf["kind"], conf["metadata"]["name"], conf["metadata"]["namespace"])
24 else:
25 return "{}.{}.default".format(conf["kind"], conf["metadata"]["name"])
26
27 def scan_spec_conf(self, conf):
28 metadata = {}
29
30 if conf['kind'] == 'Pod':
31 security_profile = find_in_dict(conf, 'spec/securityContext/seccompProfile/type')
32 if security_profile:
33 return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED
34 if "metadata" in conf:
35 metadata = conf["metadata"]
36 if conf['kind'] == 'Deployment' or conf['kind'] == 'StatefulSet':
37 security_profile = find_in_dict(conf, 'spec/template/spec/securityContext/seccompProfile/type')
38 if security_profile:
39 return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED
40 if "metadata" in conf:
41 metadata = conf["metadata"]
42 elif conf['kind'] == 'CronJob':
43 if "spec" in conf:
44 if "jobTemplate" in conf["spec"]:
45 if "spec" in conf["spec"]["jobTemplate"]:
46 if "template" in conf["spec"]["jobTemplate"]["spec"]:
47 if "metadata" in conf["spec"]["jobTemplate"]["spec"]["template"]:
48 metadata = conf["spec"]["jobTemplate"]["spec"]["template"]["metadata"]
49 else:
50 inner_metadata = self.get_inner_entry(conf, "metadata")
51 metadata = inner_metadata if inner_metadata else metadata
52
53 if metadata:
54 if metadata.get('annotations'):
55 for annotation in force_list(metadata["annotations"]):
56 for key in annotation:
57 if "seccomp.security.alpha.kubernetes.io/pod" in key:
58 if "docker/default" in annotation[key] or "runtime/default" in annotation[key]:
59 return CheckResult.PASSED
60 return CheckResult.FAILED
61
62
63 check = Seccomp()
64
```
Path: `checkov/kubernetes/base_spec_check.py`
Content:
```
1 from abc import abstractmethod
2
3 from checkov.common.checks.base_check import BaseCheck
4 from checkov.common.multi_signature import multi_signature
5 from checkov.kubernetes.registry import registry
6
7
8 class BaseK8Check(BaseCheck):
9 def __init__(self, name, id, categories, supported_entities):
10 super().__init__(name=name, id=id, categories=categories, supported_entities=supported_entities,
11 block_type="k8")
12 self.supported_specs = supported_entities
13 registry.register(self)
14
15 @abstractmethod
16 def get_resource_id(self, conf):
17 pass
18
19 def scan_entity_conf(self, conf, entity_type):
20 return self.scan_spec_conf(conf, entity_type)
21
22 @multi_signature()
23 @abstractmethod
24 def scan_spec_conf(self, conf, entity_type):
25 raise NotImplementedError()
26
27 @classmethod
28 @scan_spec_conf.add_signature(args=["self", "conf"])
29 def _scan_spec_conf_self_conf(cls, wrapped):
30 def wrapper(self, conf, entity_type=None):
31 # keep default argument for entity_type so old code, that doesn't set it, will work.
32 return wrapped(self, conf)
33
34 return wrapper
35
36 @staticmethod
37 def get_inner_entry(conf, entry_name):
38 spec = {}
39 if conf.get("spec") and conf.get("spec").get("template"):
40 spec = conf.get("spec").get("template").get(entry_name, {})
41 return spec
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/kubernetes/base_spec_check.py b/checkov/kubernetes/base_spec_check.py
--- a/checkov/kubernetes/base_spec_check.py
+++ b/checkov/kubernetes/base_spec_check.py
@@ -1,4 +1,5 @@
from abc import abstractmethod
+from typing import Dict, Any
from checkov.common.checks.base_check import BaseCheck
from checkov.common.multi_signature import multi_signature
@@ -34,7 +35,7 @@
return wrapper
@staticmethod
- def get_inner_entry(conf, entry_name):
+ def get_inner_entry(conf: Dict[str, Any], entry_name: str) -> Dict[str, Any]:
spec = {}
if conf.get("spec") and conf.get("spec").get("template"):
spec = conf.get("spec").get("template").get(entry_name, {})
diff --git a/checkov/kubernetes/checks/Seccomp.py b/checkov/kubernetes/checks/Seccomp.py
--- a/checkov/kubernetes/checks/Seccomp.py
+++ b/checkov/kubernetes/checks/Seccomp.py
@@ -37,7 +37,9 @@
security_profile = find_in_dict(conf, 'spec/template/spec/securityContext/seccompProfile/type')
if security_profile:
return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED
- if "metadata" in conf:
+
+ metadata = self.get_inner_entry(conf, "metadata")
+ if not metadata and "metadata" in conf:
metadata = conf["metadata"]
elif conf['kind'] == 'CronJob':
if "spec" in conf:
|
{"golden_diff": "diff --git a/checkov/kubernetes/base_spec_check.py b/checkov/kubernetes/base_spec_check.py\n--- a/checkov/kubernetes/base_spec_check.py\n+++ b/checkov/kubernetes/base_spec_check.py\n@@ -1,4 +1,5 @@\n from abc import abstractmethod\n+from typing import Dict, Any\n \n from checkov.common.checks.base_check import BaseCheck\n from checkov.common.multi_signature import multi_signature\n@@ -34,7 +35,7 @@\n return wrapper\n \n @staticmethod\n- def get_inner_entry(conf, entry_name):\n+ def get_inner_entry(conf: Dict[str, Any], entry_name: str) -> Dict[str, Any]:\n spec = {}\n if conf.get(\"spec\") and conf.get(\"spec\").get(\"template\"):\n spec = conf.get(\"spec\").get(\"template\").get(entry_name, {})\ndiff --git a/checkov/kubernetes/checks/Seccomp.py b/checkov/kubernetes/checks/Seccomp.py\n--- a/checkov/kubernetes/checks/Seccomp.py\n+++ b/checkov/kubernetes/checks/Seccomp.py\n@@ -37,7 +37,9 @@\n security_profile = find_in_dict(conf, 'spec/template/spec/securityContext/seccompProfile/type')\n if security_profile:\n return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED\n- if \"metadata\" in conf:\n+\n+ metadata = self.get_inner_entry(conf, \"metadata\")\n+ if not metadata and \"metadata\" in conf:\n metadata = conf[\"metadata\"]\n elif conf['kind'] == 'CronJob':\n if \"spec\" in conf:\n", "issue": "CKV_K8S_31 failure with DockerDefault configured\n**Describe the bug**\r\nCKV_K8S_31 failure when the seccompProfile type is configured as Docker/Default or Runtime/Default\r\n\r\n**To Reproduce**\r\nDefine security context as below.\r\n\r\n```\r\napiVersion: apps/v1\r\nkind: Deployment\r\nmetadata:\r\n labels:\r\n app: clust3rf8ck\r\n name: clust3rf8ck\r\n namespace: clust3rf8ck\r\nspec:\r\n replicas: 2\r\n selector:\r\n matchLabels:\r\n app: clust3rf8ck\r\n strategy: {}\r\n template:\r\n metadata:\r\n creationTimestamp: null\r\n labels:\r\n app: clust3rf8ck\r\n annotations:\r\n seccomp.security.alpha.kubernetes.io/pod: \"docker/default\"\r\n spec:\r\n affinity:\r\n podAntiAffinity:\r\n requiredDuringSchedulingIgnoredDuringExecution:\r\n - labelSelector:\r\n matchExpressions:\r\n - key: app\r\n operator: In\r\n values:\r\n - clust3rf8ck\r\n topologyKey: \"kubernetes.io/hostname\"\r\n containers:\r\n - image: eurogig/clust3rf8ck@sha256:a374eb5853e0e17d06bcf37afc2fcb40892aa3477caf362ea3581c71373cb90a\r\n name: clust3rf8ck\r\n imagePullPolicy: Always\r\n resources:\r\n limits:\r\n cpu: \"1\"\r\n memory: \"200Mi\"\r\n requests:\r\n cpu: \"0.6\"\r\n memory: \"100Mi\"\r\n livenessProbe:\r\n exec:\r\n command:\r\n - /bin/sh\r\n - -c\r\n - \"[ -f /var/run/nginx.pid ] && ps -A | grep nginx\"\r\n initialDelaySeconds: 10\r\n periodSeconds: 5\r\n readinessProbe:\r\n httpGet:\r\n scheme: HTTP\r\n path: /index.html\r\n port: 8080\r\n initialDelaySeconds: 10\r\n periodSeconds: 5\r\n securityContext:\r\n readOnlyRootFilesystem: true\r\n allowPrivilegeEscalation: false\r\n capabilities:\r\n drop:\r\n - ALL\r\n - NET_RAW\r\n volumeMounts:\r\n - mountPath: /var/cache/nginx\r\n name: cache-volume\r\n - mountPath: /var/run\r\n name: pid-volume\r\n automountServiceAccountToken: false\r\n securityContext:\r\n runAsNonRoot: true\r\n runAsUser: 10014\r\n runAsGroup: 10014\r\n volumes:\r\n - name: cache-volume\r\n emptyDir: {}\r\n - name: pid-volume\r\n emptyDir: {}\r\nstatus: {}\r\n---\r\napiVersion: v1\r\nkind: Service\r\nmetadata:\r\n creationTimestamp: null\r\n labels:\r\n app: clust3rf8ck\r\n name: cflb\r\n namespace: clust3rf8ck\r\nspec:\r\n ports:\r\n - name: 80-80\r\n port: 80\r\n protocol: TCP\r\n targetPort: 8080\r\n selector:\r\n app: clust3rf8ck\r\n type: LoadBalancer\r\n```\r\n2. Run checkov for kubernetes.\r\n```\r\ncheckov --framework=kubernetes --quiet -d .\r\n```\r\n**Expected behavior**\r\nCKV_K8S_31 to pass with the following configuration.\r\n\r\n```\r\nspec:\r\n replicas: 2\r\n selector:\r\n matchLabels:\r\n app: clust3rf8ck\r\n strategy: {}\r\n template:\r\n metadata:\r\n creationTimestamp: null\r\n labels:\r\n app: clust3rf8ck\r\n annotations:\r\n seccomp.security.alpha.kubernetes.io/pod: \"docker/default\"\r\n```\r\n\r\n**Actual Behaviour**\r\n```\r\n[terraformpipeline] checkov --framework=kubernetes --quiet -d . 20:52:07 \u2601 master \u2602 \u26a1\r\nkubernetes scan results:\r\n\r\nPassed checks: 89, Failed checks: 1, Skipped checks: 0\r\n\r\nCheck: CKV_K8S_31: \"Ensure that the seccomp profile is set to docker/default or runtime/default\"\r\n\tFAILED for resource: Deployment.clust3rf8ck.clust3rf8ck\r\n\tFile: /k8s-sample.yaml:35-114\r\n\tGuide: https://docs.bridgecrew.io/docs/bc_k8s_29\r\n\r\n\t\t35 | apiVersion: apps/v1\r\n\t\t36 | kind: Deployment\r\n\t\t37 | metadata:\r\n\t\t38 | labels:\r\n\t\t39 | app: clust3rf8ck\r\n\t\t40 | name: clust3rf8ck\r\n\t\t41 | namespace: clust3rf8ck\r\n\t\t42 | spec:\r\n\t\t43 | replicas: 2\r\n\t\t44 | selector:\r\n\t\t45 | matchLabels:\r\n\t\t46 | app: clust3rf8ck\r\n\t\t47 | strategy: {}\r\n\t\t48 | template:\r\n\t\t49 | metadata:\r\n\t\t50 | creationTimestamp: null\r\n\t\t51 | labels:\r\n\t\t52 | app: clust3rf8ck\r\n\t\t53 | annotations:\r\n\t\t54 | seccomp.security.alpha.kubernetes.io/pod: \"docker/default\"\r\n\t\t55 | spec:\r\n\t\t56 | affinity:\r\n\t\t57 | podAntiAffinity:\r\n\t\t58 | requiredDuringSchedulingIgnoredDuringExecution:\r\n\t\t59 | - labelSelector:\r\n\t\t60 | matchExpressions:\r\n\t\t61 | - key: app\r\n\t\t62 | operator: In\r\n\t\t63 | values:\r\n\t\t64 | - clust3rf8ck\r\n\t\t65 | topologyKey: \"kubernetes.io/hostname\"\r\n\t\t66 | containers:\r\n\t\t67 | - image: eurogig/clust3rf8ck@sha256:a374eb5853e0e17d06bcf37afc2fcb40892aa3477caf362ea3581c71373cb90a\r\n\t\t68 | name: clust3rf8ck\r\n\t\t69 | imagePullPolicy: Always\r\n\t\t70 | resources:\r\n\t\t71 | limits:\r\n\t\t72 | cpu: \"1\"\r\n\t\t73 | memory: \"200Mi\"\r\n\t\t74 | requests:\r\n\t\t75 | cpu: \"0.6\"\r\n\t\t76 | memory: \"100Mi\"\r\n\t\t77 | livenessProbe:\r\n\t\t78 | exec:\r\n\t\t79 | command:\r\n\t\t80 | - /bin/sh\r\n\t\t81 | - -c\r\n\t\t82 | - \"[ -f /var/run/nginx.pid ] && ps -A | grep nginx\"\r\n\t\t83 | initialDelaySeconds: 10\r\n\t\t84 | periodSeconds: 5\r\n\t\t85 | readinessProbe:\r\n\t\t86 | httpGet:\r\n\t\t87 | scheme: HTTP\r\n\t\t88 | path: /index.html\r\n\t\t89 | port: 8080\r\n\t\t90 | initialDelaySeconds: 10\r\n\t\t91 | periodSeconds: 5\r\n\t\t92 | securityContext:\r\n\t\t93 | readOnlyRootFilesystem: true\r\n\t\t94 | allowPrivilegeEscalation: false\r\n\t\t95 | capabilities:\r\n\t\t96 | drop:\r\n\t\t97 | - ALL\r\n\t\t98 | - NET_RAW\r\n\t\t99 | volumeMounts:\r\n\t\t100 | - mountPath: /var/cache/nginx\r\n\t\t101 | name: cache-volume\r\n\t\t102 | - mountPath: /var/run\r\n\t\t103 | name: pid-volume\r\n\t\t104 | automountServiceAccountToken: false\r\n\t\t105 | securityContext:\r\n\t\t106 | runAsNonRoot: true\r\n\t\t107 | runAsUser: 10014\r\n\t\t108 | runAsGroup: 10014\r\n\t\t109 | volumes:\r\n\t\t110 | - name: cache-volume\r\n\t\t111 | emptyDir: {}\r\n\t\t112 | - name: pid-volume\r\n\t\t113 | emptyDir: {}\r\n\t\t114 | status: {}\r\n```\r\n\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Big Sur 11.5.2 \r\n - Checkov Version 2.0.479\r\n\r\n**Additional context**\r\nTook the K8s example from this blog https://bridgecrew.io/blog/creating-a-secure-kubernetes-nginx-deployment-using-checkov/\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.common.util.data_structures_utils import find_in_dict\nfrom checkov.kubernetes.base_spec_check import BaseK8Check\nfrom checkov.common.util.type_forcers import force_list\n\n\nclass Seccomp(BaseK8Check):\n\n def __init__(self):\n # CIS-1.5 5.7.2\n name = \"Ensure that the seccomp profile is set to docker/default or runtime/default\"\n id = \"CKV_K8S_31\"\n # Location: Pod.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod\n # Location: CronJob.spec.jobTemplate.spec.template.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod\n # Location: *.spec.template.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod\n # Location: *.spec.securityContext.seccompProfile.type\n supported_kind = ['Pod', 'Deployment', 'DaemonSet', 'StatefulSet', 'ReplicaSet', 'ReplicationController', 'Job', 'CronJob']\n categories = [CheckCategories.KUBERNETES]\n super().__init__(name=name, id=id, categories=categories, supported_entities=supported_kind)\n\n def get_resource_id(self, conf):\n if \"namespace\" in conf[\"metadata\"]:\n return \"{}.{}.{}\".format(conf[\"kind\"], conf[\"metadata\"][\"name\"], conf[\"metadata\"][\"namespace\"])\n else:\n return \"{}.{}.default\".format(conf[\"kind\"], conf[\"metadata\"][\"name\"])\n\n def scan_spec_conf(self, conf):\n metadata = {}\n\n if conf['kind'] == 'Pod':\n security_profile = find_in_dict(conf, 'spec/securityContext/seccompProfile/type')\n if security_profile:\n return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED\n if \"metadata\" in conf:\n metadata = conf[\"metadata\"]\n if conf['kind'] == 'Deployment' or conf['kind'] == 'StatefulSet':\n security_profile = find_in_dict(conf, 'spec/template/spec/securityContext/seccompProfile/type')\n if security_profile:\n return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED\n if \"metadata\" in conf:\n metadata = conf[\"metadata\"]\n elif conf['kind'] == 'CronJob':\n if \"spec\" in conf:\n if \"jobTemplate\" in conf[\"spec\"]:\n if \"spec\" in conf[\"spec\"][\"jobTemplate\"]:\n if \"template\" in conf[\"spec\"][\"jobTemplate\"][\"spec\"]:\n if \"metadata\" in conf[\"spec\"][\"jobTemplate\"][\"spec\"][\"template\"]:\n metadata = conf[\"spec\"][\"jobTemplate\"][\"spec\"][\"template\"][\"metadata\"]\n else:\n inner_metadata = self.get_inner_entry(conf, \"metadata\")\n metadata = inner_metadata if inner_metadata else metadata\n\n if metadata:\n if metadata.get('annotations'):\n for annotation in force_list(metadata[\"annotations\"]):\n for key in annotation:\n if \"seccomp.security.alpha.kubernetes.io/pod\" in key:\n if \"docker/default\" in annotation[key] or \"runtime/default\" in annotation[key]:\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = Seccomp()\n", "path": "checkov/kubernetes/checks/Seccomp.py"}, {"content": "from abc import abstractmethod\n\nfrom checkov.common.checks.base_check import BaseCheck\nfrom checkov.common.multi_signature import multi_signature\nfrom checkov.kubernetes.registry import registry\n\n\nclass BaseK8Check(BaseCheck):\n def __init__(self, name, id, categories, supported_entities):\n super().__init__(name=name, id=id, categories=categories, supported_entities=supported_entities,\n block_type=\"k8\")\n self.supported_specs = supported_entities\n registry.register(self)\n\n @abstractmethod\n def get_resource_id(self, conf):\n pass\n\n def scan_entity_conf(self, conf, entity_type):\n return self.scan_spec_conf(conf, entity_type)\n\n @multi_signature()\n @abstractmethod\n def scan_spec_conf(self, conf, entity_type):\n raise NotImplementedError()\n\n @classmethod\n @scan_spec_conf.add_signature(args=[\"self\", \"conf\"])\n def _scan_spec_conf_self_conf(cls, wrapped):\n def wrapper(self, conf, entity_type=None):\n # keep default argument for entity_type so old code, that doesn't set it, will work.\n return wrapped(self, conf)\n\n return wrapper\n\n @staticmethod\n def get_inner_entry(conf, entry_name):\n spec = {}\n if conf.get(\"spec\") and conf.get(\"spec\").get(\"template\"):\n spec = conf.get(\"spec\").get(\"template\").get(entry_name, {})\n return spec\n", "path": "checkov/kubernetes/base_spec_check.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.common.util.data_structures_utils import find_in_dict\nfrom checkov.kubernetes.base_spec_check import BaseK8Check\nfrom checkov.common.util.type_forcers import force_list\n\n\nclass Seccomp(BaseK8Check):\n\n def __init__(self):\n # CIS-1.5 5.7.2\n name = \"Ensure that the seccomp profile is set to docker/default or runtime/default\"\n id = \"CKV_K8S_31\"\n # Location: Pod.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod\n # Location: CronJob.spec.jobTemplate.spec.template.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod\n # Location: *.spec.template.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod\n # Location: *.spec.securityContext.seccompProfile.type\n supported_kind = ['Pod', 'Deployment', 'DaemonSet', 'StatefulSet', 'ReplicaSet', 'ReplicationController', 'Job', 'CronJob']\n categories = [CheckCategories.KUBERNETES]\n super().__init__(name=name, id=id, categories=categories, supported_entities=supported_kind)\n\n def get_resource_id(self, conf):\n if \"namespace\" in conf[\"metadata\"]:\n return \"{}.{}.{}\".format(conf[\"kind\"], conf[\"metadata\"][\"name\"], conf[\"metadata\"][\"namespace\"])\n else:\n return \"{}.{}.default\".format(conf[\"kind\"], conf[\"metadata\"][\"name\"])\n\n def scan_spec_conf(self, conf):\n metadata = {}\n\n if conf['kind'] == 'Pod':\n security_profile = find_in_dict(conf, 'spec/securityContext/seccompProfile/type')\n if security_profile:\n return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED\n if \"metadata\" in conf:\n metadata = conf[\"metadata\"]\n if conf['kind'] == 'Deployment' or conf['kind'] == 'StatefulSet':\n security_profile = find_in_dict(conf, 'spec/template/spec/securityContext/seccompProfile/type')\n if security_profile:\n return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED\n\n metadata = self.get_inner_entry(conf, \"metadata\")\n if not metadata and \"metadata\" in conf:\n metadata = conf[\"metadata\"]\n elif conf['kind'] == 'CronJob':\n if \"spec\" in conf:\n if \"jobTemplate\" in conf[\"spec\"]:\n if \"spec\" in conf[\"spec\"][\"jobTemplate\"]:\n if \"template\" in conf[\"spec\"][\"jobTemplate\"][\"spec\"]:\n if \"metadata\" in conf[\"spec\"][\"jobTemplate\"][\"spec\"][\"template\"]:\n metadata = conf[\"spec\"][\"jobTemplate\"][\"spec\"][\"template\"][\"metadata\"]\n else:\n inner_metadata = self.get_inner_entry(conf, \"metadata\")\n metadata = inner_metadata if inner_metadata else metadata\n\n if metadata:\n if metadata.get('annotations'):\n for annotation in force_list(metadata[\"annotations\"]):\n for key in annotation:\n if \"seccomp.security.alpha.kubernetes.io/pod\" in key:\n if \"docker/default\" in annotation[key] or \"runtime/default\" in annotation[key]:\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = Seccomp()\n", "path": "checkov/kubernetes/checks/Seccomp.py"}, {"content": "from abc import abstractmethod\nfrom typing import Dict, Any\n\nfrom checkov.common.checks.base_check import BaseCheck\nfrom checkov.common.multi_signature import multi_signature\nfrom checkov.kubernetes.registry import registry\n\n\nclass BaseK8Check(BaseCheck):\n def __init__(self, name, id, categories, supported_entities):\n super().__init__(name=name, id=id, categories=categories, supported_entities=supported_entities,\n block_type=\"k8\")\n self.supported_specs = supported_entities\n registry.register(self)\n\n @abstractmethod\n def get_resource_id(self, conf):\n pass\n\n def scan_entity_conf(self, conf, entity_type):\n return self.scan_spec_conf(conf, entity_type)\n\n @multi_signature()\n @abstractmethod\n def scan_spec_conf(self, conf, entity_type):\n raise NotImplementedError()\n\n @classmethod\n @scan_spec_conf.add_signature(args=[\"self\", \"conf\"])\n def _scan_spec_conf_self_conf(cls, wrapped):\n def wrapper(self, conf, entity_type=None):\n # keep default argument for entity_type so old code, that doesn't set it, will work.\n return wrapped(self, conf)\n\n return wrapper\n\n @staticmethod\n def get_inner_entry(conf: Dict[str, Any], entry_name: str) -> Dict[str, Any]:\n spec = {}\n if conf.get(\"spec\") and conf.get(\"spec\").get(\"template\"):\n spec = conf.get(\"spec\").get(\"template\").get(entry_name, {})\n return spec\n", "path": "checkov/kubernetes/base_spec_check.py"}]}
| 3,615 | 353 |
gh_patches_debug_31637
|
rasdani/github-patches
|
git_diff
|
optuna__optuna-3544
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`GridSearchSampler` raises `TypeError` when `choices` of `CategoricalDistribution` are not comparable
### Expected behavior
The folliwing grid search based optimisation works without any error because default sampler works fine.
```python
import optuna
def objective(trial):
x = trial.suggest_float("x", -100, 100)
y = trial.suggest_categorical("y", [None, 1])
if y is None:
y = -1
return x ** 2 + y
# Grid search.
search_space = {"x": [-50, 0, 50], "y": [None, 1]}
study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
study.optimize(objective, n_trials=20)
# Note Random search + TPE works well.
# study.optimize(objective, n_trials=20)
```
### Environment
- Optuna version: 3.0.0b1.dev
- Python version: 3.9.12
- OS: macOS-10.16-x86_64-i386-64bit
- (Optional) Other libraries and their versions:
### Error messages, stack traces, or logs
```shell
TypeError Traceback (most recent call last)
Input In [15], in <cell line: 17>()
14 study.optimize(objective, n_trials=20)
16 # Grid search
---> 17 study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
18 study.optimize(objective, n_trials=20)
File ~/Documents/optuna/optuna/samplers/_grid.py:112, in GridSampler.__init__(self, search_space)
109 for param_name, param_values in sorted(search_space.items(), key=lambda x: x[0]):
110 param_values = cast(SortableParamValueSequenceType, param_values)
--> 112 self._search_space[param_name] = sorted(param_values)
114 self._all_grids = list(itertools.product(*self._search_space.values()))
115 self._param_names = sorted(search_space.keys())
TypeError: '<' not supported between instances of 'int' and 'NoneType'
```
### Steps to reproduce
1. Run the code above
2.
3.
```python
# python code
```
### Additional context (optional)
Since grid search sampler implementation sorts `choices`, the current implementation assumes the choices are sortable.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optuna/samplers/_grid.py`
Content:
```
1 import collections
2 import itertools
3 import random
4 from typing import Any
5 from typing import cast
6 from typing import Dict
7 from typing import List
8 from typing import Mapping
9 from typing import Optional
10 from typing import Sequence
11 from typing import Union
12 import warnings
13
14 from optuna.distributions import BaseDistribution
15 from optuna.logging import get_logger
16 from optuna.samplers import BaseSampler
17 from optuna.study import Study
18 from optuna.trial import FrozenTrial
19 from optuna.trial import TrialState
20
21
22 GridValueType = Union[str, float, int, bool, None]
23 SortableParamValueSequenceType = Union[List[str], List[float], List[int], List[bool]]
24
25
26 _logger = get_logger(__name__)
27
28
29 class GridSampler(BaseSampler):
30 """Sampler using grid search.
31
32 With :class:`~optuna.samplers.GridSampler`, the trials suggest all combinations of parameters
33 in the given search space during the study.
34
35 Example:
36
37 .. testcode::
38
39 import optuna
40
41
42 def objective(trial):
43 x = trial.suggest_float("x", -100, 100)
44 y = trial.suggest_int("y", -100, 100)
45 return x**2 + y**2
46
47
48 search_space = {"x": [-50, 0, 50], "y": [-99, 0, 99]}
49 study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
50 study.optimize(objective)
51
52 Note:
53
54 :class:`~optuna.samplers.GridSampler` automatically stops the optimization if all
55 combinations in the passed ``search_space`` have already been evaluated, internally
56 invoking the :func:`~optuna.study.Study.stop` method.
57
58 Note:
59
60 :class:`~optuna.samplers.GridSampler` does not take care of a parameter's quantization
61 specified by discrete suggest methods but just samples one of values specified in the
62 search space. E.g., in the following code snippet, either of ``-0.5`` or ``0.5`` is
63 sampled as ``x`` instead of an integer point.
64
65 .. testcode::
66
67 import optuna
68
69
70 def objective(trial):
71 # The following suggest method specifies integer points between -5 and 5.
72 x = trial.suggest_float("x", -5, 5, step=1)
73 return x**2
74
75
76 # Non-int points are specified in the grid.
77 search_space = {"x": [-0.5, 0.5]}
78 study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
79 study.optimize(objective, n_trials=2)
80
81 Note:
82 A parameter configuration in the grid is not considered finished until its trial is
83 finished. Therefore, during distributed optimization where trials run concurrently,
84 different workers will occasionally suggest the same parameter configuration.
85 The total number of actual trials may therefore exceed the size of the grid.
86
87 Note:
88 The grid is randomly shuffled and the order in which parameter configurations are
89 suggested may vary. This is to reduce duplicate suggestions during distributed
90 optimization.
91
92 Note:
93 All parameters must be specified when using :class:`~optuna.samplers.GridSampler` with
94 :meth:`~optuna.study.Study.enqueue_trial`.
95
96 Args:
97 search_space:
98 A dictionary whose key and value are a parameter name and the corresponding candidates
99 of values, respectively.
100 """
101
102 def __init__(self, search_space: Mapping[str, Sequence[GridValueType]]) -> None:
103
104 for param_name, param_values in search_space.items():
105 for value in param_values:
106 self._check_value(param_name, value)
107
108 self._search_space = collections.OrderedDict()
109 for param_name, param_values in sorted(search_space.items(), key=lambda x: x[0]):
110 param_values = cast(SortableParamValueSequenceType, param_values)
111
112 self._search_space[param_name] = sorted(param_values)
113
114 self._all_grids = list(itertools.product(*self._search_space.values()))
115 self._param_names = sorted(search_space.keys())
116 self._n_min_trials = len(self._all_grids)
117
118 def infer_relative_search_space(
119 self, study: Study, trial: FrozenTrial
120 ) -> Dict[str, BaseDistribution]:
121
122 return {}
123
124 def sample_relative(
125 self, study: Study, trial: FrozenTrial, search_space: Dict[str, BaseDistribution]
126 ) -> Dict[str, Any]:
127 # Instead of returning param values, GridSampler puts the target grid id as a system attr,
128 # and the values are returned from `sample_independent`. This is because the distribution
129 # object is hard to get at the beginning of trial, while we need the access to the object
130 # to validate the sampled value.
131
132 # When the trial is created by RetryFailedTrialCallback or enqueue_trial, we should not
133 # assign a new grid_id.
134 if "grid_id" in trial.system_attrs or "fixed_params" in trial.system_attrs:
135 return {}
136
137 target_grids = self._get_unvisited_grid_ids(study)
138
139 if len(target_grids) == 0:
140 # This case may occur with distributed optimization or trial queue. If there is no
141 # target grid, `GridSampler` evaluates a visited, duplicated point with the current
142 # trial. After that, the optimization stops.
143
144 _logger.warning(
145 "`GridSampler` is re-evaluating a configuration because the grid has been "
146 "exhausted. This may happen due to a timing issue during distributed optimization "
147 "or when re-running optimizations on already finished studies."
148 )
149
150 # One of all grids is randomly picked up in this case.
151 target_grids = list(range(len(self._all_grids)))
152
153 # In distributed optimization, multiple workers may simultaneously pick up the same grid.
154 # To make the conflict less frequent, the grid is chosen randomly.
155 grid_id = random.choice(target_grids)
156
157 study._storage.set_trial_system_attr(trial._trial_id, "search_space", self._search_space)
158 study._storage.set_trial_system_attr(trial._trial_id, "grid_id", grid_id)
159
160 return {}
161
162 def sample_independent(
163 self,
164 study: Study,
165 trial: FrozenTrial,
166 param_name: str,
167 param_distribution: BaseDistribution,
168 ) -> Any:
169
170 if "grid_id" not in trial.system_attrs:
171 message = "All parameters must be specified when using GridSampler with enqueue_trial."
172 raise ValueError(message)
173
174 if param_name not in self._search_space:
175 message = "The parameter name, {}, is not found in the given grid.".format(param_name)
176 raise ValueError(message)
177
178 # TODO(c-bata): Reduce the number of duplicated evaluations on multiple workers.
179 # Current selection logic may evaluate the same parameters multiple times.
180 # See https://gist.github.com/c-bata/f759f64becb24eea2040f4b2e3afce8f for details.
181 grid_id = trial.system_attrs["grid_id"]
182 param_value = self._all_grids[grid_id][self._param_names.index(param_name)]
183 contains = param_distribution._contains(param_distribution.to_internal_repr(param_value))
184 if not contains:
185 warnings.warn(
186 f"The value `{param_value}` is out of range of the parameter `{param_name}`. "
187 f"The value will be used but the actual distribution is: `{param_distribution}`."
188 )
189
190 return param_value
191
192 def after_trial(
193 self,
194 study: Study,
195 trial: FrozenTrial,
196 state: TrialState,
197 values: Optional[Sequence[float]],
198 ) -> None:
199 target_grids = self._get_unvisited_grid_ids(study)
200
201 if len(target_grids) == 0:
202 study.stop()
203 elif len(target_grids) == 1:
204 grid_id = study._storage.get_trial_system_attrs(trial._trial_id)["grid_id"]
205 if grid_id == target_grids[0]:
206 study.stop()
207
208 @staticmethod
209 def _check_value(param_name: str, param_value: Any) -> None:
210
211 if param_value is None or isinstance(param_value, (str, int, float, bool)):
212 return
213
214 raise ValueError(
215 "{} contains a value with the type of {}, which is not supported by "
216 "`GridSampler`. Please make sure a value is `str`, `int`, `float`, `bool`"
217 " or `None`.".format(param_name, type(param_value))
218 )
219
220 def _get_unvisited_grid_ids(self, study: Study) -> List[int]:
221
222 # List up unvisited grids based on already finished ones.
223 visited_grids = []
224 running_grids = []
225
226 # We directly query the storage to get trials here instead of `study.get_trials`,
227 # since some pruners such as `HyperbandPruner` use the study transformed
228 # to filter trials. See https://github.com/optuna/optuna/issues/2327 for details.
229 trials = study._storage.get_all_trials(study._study_id, deepcopy=False)
230
231 for t in trials:
232 if "grid_id" in t.system_attrs and self._same_search_space(
233 t.system_attrs["search_space"]
234 ):
235 if t.state.is_finished():
236 visited_grids.append(t.system_attrs["grid_id"])
237 elif t.state == TrialState.RUNNING:
238 running_grids.append(t.system_attrs["grid_id"])
239
240 unvisited_grids = set(range(self._n_min_trials)) - set(visited_grids) - set(running_grids)
241
242 # If evaluations for all grids have been started, return grids that have not yet finished
243 # because all grids should be evaluated before stopping the optimization.
244 if len(unvisited_grids) == 0:
245 unvisited_grids = set(range(self._n_min_trials)) - set(visited_grids)
246
247 return list(unvisited_grids)
248
249 def _same_search_space(self, search_space: Mapping[str, Sequence[GridValueType]]) -> bool:
250
251 if set(search_space.keys()) != set(self._search_space.keys()):
252 return False
253
254 for param_name in search_space.keys():
255 if len(search_space[param_name]) != len(self._search_space[param_name]):
256 return False
257
258 param_values = cast(SortableParamValueSequenceType, search_space[param_name])
259 for i, param_value in enumerate(sorted(param_values)):
260 if param_value != self._search_space[param_name][i]:
261 return False
262
263 return True
264
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/optuna/samplers/_grid.py b/optuna/samplers/_grid.py
--- a/optuna/samplers/_grid.py
+++ b/optuna/samplers/_grid.py
@@ -2,7 +2,6 @@
import itertools
import random
from typing import Any
-from typing import cast
from typing import Dict
from typing import List
from typing import Mapping
@@ -20,7 +19,6 @@
GridValueType = Union[str, float, int, bool, None]
-SortableParamValueSequenceType = Union[List[str], List[float], List[int], List[bool]]
_logger = get_logger(__name__)
@@ -106,10 +104,8 @@
self._check_value(param_name, value)
self._search_space = collections.OrderedDict()
- for param_name, param_values in sorted(search_space.items(), key=lambda x: x[0]):
- param_values = cast(SortableParamValueSequenceType, param_values)
-
- self._search_space[param_name] = sorted(param_values)
+ for param_name, param_values in sorted(search_space.items()):
+ self._search_space[param_name] = param_values
self._all_grids = list(itertools.product(*self._search_space.values()))
self._param_names = sorted(search_space.keys())
@@ -255,8 +251,7 @@
if len(search_space[param_name]) != len(self._search_space[param_name]):
return False
- param_values = cast(SortableParamValueSequenceType, search_space[param_name])
- for i, param_value in enumerate(sorted(param_values)):
+ for i, param_value in enumerate(search_space[param_name]):
if param_value != self._search_space[param_name][i]:
return False
|
{"golden_diff": "diff --git a/optuna/samplers/_grid.py b/optuna/samplers/_grid.py\n--- a/optuna/samplers/_grid.py\n+++ b/optuna/samplers/_grid.py\n@@ -2,7 +2,6 @@\n import itertools\n import random\n from typing import Any\n-from typing import cast\n from typing import Dict\n from typing import List\n from typing import Mapping\n@@ -20,7 +19,6 @@\n \n \n GridValueType = Union[str, float, int, bool, None]\n-SortableParamValueSequenceType = Union[List[str], List[float], List[int], List[bool]]\n \n \n _logger = get_logger(__name__)\n@@ -106,10 +104,8 @@\n self._check_value(param_name, value)\n \n self._search_space = collections.OrderedDict()\n- for param_name, param_values in sorted(search_space.items(), key=lambda x: x[0]):\n- param_values = cast(SortableParamValueSequenceType, param_values)\n-\n- self._search_space[param_name] = sorted(param_values)\n+ for param_name, param_values in sorted(search_space.items()):\n+ self._search_space[param_name] = param_values\n \n self._all_grids = list(itertools.product(*self._search_space.values()))\n self._param_names = sorted(search_space.keys())\n@@ -255,8 +251,7 @@\n if len(search_space[param_name]) != len(self._search_space[param_name]):\n return False\n \n- param_values = cast(SortableParamValueSequenceType, search_space[param_name])\n- for i, param_value in enumerate(sorted(param_values)):\n+ for i, param_value in enumerate(search_space[param_name]):\n if param_value != self._search_space[param_name][i]:\n return False\n", "issue": "`GridSearchSampler` raises `TypeError` when `choices` of `CategoricalDistribution` are not comparable\n### Expected behavior\r\n\r\nThe folliwing grid search based optimisation works without any error because default sampler works fine.\r\n```python\r\nimport optuna\r\n\r\n\r\ndef objective(trial):\r\n x = trial.suggest_float(\"x\", -100, 100)\r\n y = trial.suggest_categorical(\"y\", [None, 1])\r\n if y is None:\r\n y = -1\r\n return x ** 2 + y\r\n\r\n# Grid search.\r\nsearch_space = {\"x\": [-50, 0, 50], \"y\": [None, 1]}\r\nstudy = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))\r\nstudy.optimize(objective, n_trials=20)\r\n\r\n\r\n# Note Random search + TPE works well.\r\n# study.optimize(objective, n_trials=20)\r\n```\r\n\r\n### Environment\r\n\r\n- Optuna version: 3.0.0b1.dev\r\n- Python version: 3.9.12\r\n- OS: macOS-10.16-x86_64-i386-64bit\r\n- (Optional) Other libraries and their versions:\r\n\r\n\r\n### Error messages, stack traces, or logs\r\n\r\n```shell\r\nTypeError Traceback (most recent call last)\r\nInput In [15], in <cell line: 17>()\r\n 14 study.optimize(objective, n_trials=20)\r\n 16 # Grid search\r\n---> 17 study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))\r\n 18 study.optimize(objective, n_trials=20)\r\n\r\nFile ~/Documents/optuna/optuna/samplers/_grid.py:112, in GridSampler.__init__(self, search_space)\r\n 109 for param_name, param_values in sorted(search_space.items(), key=lambda x: x[0]):\r\n 110 param_values = cast(SortableParamValueSequenceType, param_values)\r\n--> 112 self._search_space[param_name] = sorted(param_values)\r\n 114 self._all_grids = list(itertools.product(*self._search_space.values()))\r\n 115 self._param_names = sorted(search_space.keys())\r\n\r\nTypeError: '<' not supported between instances of 'int' and 'NoneType'\r\n```\r\n\r\n\r\n\r\n### Steps to reproduce\r\n\r\n1. Run the code above\r\n2.\r\n3.\r\n```python\r\n# python code\r\n```\r\n\r\n\r\n### Additional context (optional)\r\n\r\nSince grid search sampler implementation sorts `choices`, the current implementation assumes the choices are sortable. \n", "before_files": [{"content": "import collections\nimport itertools\nimport random\nfrom typing import Any\nfrom typing import cast\nfrom typing import Dict\nfrom typing import List\nfrom typing import Mapping\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Union\nimport warnings\n\nfrom optuna.distributions import BaseDistribution\nfrom optuna.logging import get_logger\nfrom optuna.samplers import BaseSampler\nfrom optuna.study import Study\nfrom optuna.trial import FrozenTrial\nfrom optuna.trial import TrialState\n\n\nGridValueType = Union[str, float, int, bool, None]\nSortableParamValueSequenceType = Union[List[str], List[float], List[int], List[bool]]\n\n\n_logger = get_logger(__name__)\n\n\nclass GridSampler(BaseSampler):\n \"\"\"Sampler using grid search.\n\n With :class:`~optuna.samplers.GridSampler`, the trials suggest all combinations of parameters\n in the given search space during the study.\n\n Example:\n\n .. testcode::\n\n import optuna\n\n\n def objective(trial):\n x = trial.suggest_float(\"x\", -100, 100)\n y = trial.suggest_int(\"y\", -100, 100)\n return x**2 + y**2\n\n\n search_space = {\"x\": [-50, 0, 50], \"y\": [-99, 0, 99]}\n study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))\n study.optimize(objective)\n\n Note:\n\n :class:`~optuna.samplers.GridSampler` automatically stops the optimization if all\n combinations in the passed ``search_space`` have already been evaluated, internally\n invoking the :func:`~optuna.study.Study.stop` method.\n\n Note:\n\n :class:`~optuna.samplers.GridSampler` does not take care of a parameter's quantization\n specified by discrete suggest methods but just samples one of values specified in the\n search space. E.g., in the following code snippet, either of ``-0.5`` or ``0.5`` is\n sampled as ``x`` instead of an integer point.\n\n .. testcode::\n\n import optuna\n\n\n def objective(trial):\n # The following suggest method specifies integer points between -5 and 5.\n x = trial.suggest_float(\"x\", -5, 5, step=1)\n return x**2\n\n\n # Non-int points are specified in the grid.\n search_space = {\"x\": [-0.5, 0.5]}\n study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))\n study.optimize(objective, n_trials=2)\n\n Note:\n A parameter configuration in the grid is not considered finished until its trial is\n finished. Therefore, during distributed optimization where trials run concurrently,\n different workers will occasionally suggest the same parameter configuration.\n The total number of actual trials may therefore exceed the size of the grid.\n\n Note:\n The grid is randomly shuffled and the order in which parameter configurations are\n suggested may vary. This is to reduce duplicate suggestions during distributed\n optimization.\n\n Note:\n All parameters must be specified when using :class:`~optuna.samplers.GridSampler` with\n :meth:`~optuna.study.Study.enqueue_trial`.\n\n Args:\n search_space:\n A dictionary whose key and value are a parameter name and the corresponding candidates\n of values, respectively.\n \"\"\"\n\n def __init__(self, search_space: Mapping[str, Sequence[GridValueType]]) -> None:\n\n for param_name, param_values in search_space.items():\n for value in param_values:\n self._check_value(param_name, value)\n\n self._search_space = collections.OrderedDict()\n for param_name, param_values in sorted(search_space.items(), key=lambda x: x[0]):\n param_values = cast(SortableParamValueSequenceType, param_values)\n\n self._search_space[param_name] = sorted(param_values)\n\n self._all_grids = list(itertools.product(*self._search_space.values()))\n self._param_names = sorted(search_space.keys())\n self._n_min_trials = len(self._all_grids)\n\n def infer_relative_search_space(\n self, study: Study, trial: FrozenTrial\n ) -> Dict[str, BaseDistribution]:\n\n return {}\n\n def sample_relative(\n self, study: Study, trial: FrozenTrial, search_space: Dict[str, BaseDistribution]\n ) -> Dict[str, Any]:\n # Instead of returning param values, GridSampler puts the target grid id as a system attr,\n # and the values are returned from `sample_independent`. This is because the distribution\n # object is hard to get at the beginning of trial, while we need the access to the object\n # to validate the sampled value.\n\n # When the trial is created by RetryFailedTrialCallback or enqueue_trial, we should not\n # assign a new grid_id.\n if \"grid_id\" in trial.system_attrs or \"fixed_params\" in trial.system_attrs:\n return {}\n\n target_grids = self._get_unvisited_grid_ids(study)\n\n if len(target_grids) == 0:\n # This case may occur with distributed optimization or trial queue. If there is no\n # target grid, `GridSampler` evaluates a visited, duplicated point with the current\n # trial. After that, the optimization stops.\n\n _logger.warning(\n \"`GridSampler` is re-evaluating a configuration because the grid has been \"\n \"exhausted. This may happen due to a timing issue during distributed optimization \"\n \"or when re-running optimizations on already finished studies.\"\n )\n\n # One of all grids is randomly picked up in this case.\n target_grids = list(range(len(self._all_grids)))\n\n # In distributed optimization, multiple workers may simultaneously pick up the same grid.\n # To make the conflict less frequent, the grid is chosen randomly.\n grid_id = random.choice(target_grids)\n\n study._storage.set_trial_system_attr(trial._trial_id, \"search_space\", self._search_space)\n study._storage.set_trial_system_attr(trial._trial_id, \"grid_id\", grid_id)\n\n return {}\n\n def sample_independent(\n self,\n study: Study,\n trial: FrozenTrial,\n param_name: str,\n param_distribution: BaseDistribution,\n ) -> Any:\n\n if \"grid_id\" not in trial.system_attrs:\n message = \"All parameters must be specified when using GridSampler with enqueue_trial.\"\n raise ValueError(message)\n\n if param_name not in self._search_space:\n message = \"The parameter name, {}, is not found in the given grid.\".format(param_name)\n raise ValueError(message)\n\n # TODO(c-bata): Reduce the number of duplicated evaluations on multiple workers.\n # Current selection logic may evaluate the same parameters multiple times.\n # See https://gist.github.com/c-bata/f759f64becb24eea2040f4b2e3afce8f for details.\n grid_id = trial.system_attrs[\"grid_id\"]\n param_value = self._all_grids[grid_id][self._param_names.index(param_name)]\n contains = param_distribution._contains(param_distribution.to_internal_repr(param_value))\n if not contains:\n warnings.warn(\n f\"The value `{param_value}` is out of range of the parameter `{param_name}`. \"\n f\"The value will be used but the actual distribution is: `{param_distribution}`.\"\n )\n\n return param_value\n\n def after_trial(\n self,\n study: Study,\n trial: FrozenTrial,\n state: TrialState,\n values: Optional[Sequence[float]],\n ) -> None:\n target_grids = self._get_unvisited_grid_ids(study)\n\n if len(target_grids) == 0:\n study.stop()\n elif len(target_grids) == 1:\n grid_id = study._storage.get_trial_system_attrs(trial._trial_id)[\"grid_id\"]\n if grid_id == target_grids[0]:\n study.stop()\n\n @staticmethod\n def _check_value(param_name: str, param_value: Any) -> None:\n\n if param_value is None or isinstance(param_value, (str, int, float, bool)):\n return\n\n raise ValueError(\n \"{} contains a value with the type of {}, which is not supported by \"\n \"`GridSampler`. Please make sure a value is `str`, `int`, `float`, `bool`\"\n \" or `None`.\".format(param_name, type(param_value))\n )\n\n def _get_unvisited_grid_ids(self, study: Study) -> List[int]:\n\n # List up unvisited grids based on already finished ones.\n visited_grids = []\n running_grids = []\n\n # We directly query the storage to get trials here instead of `study.get_trials`,\n # since some pruners such as `HyperbandPruner` use the study transformed\n # to filter trials. See https://github.com/optuna/optuna/issues/2327 for details.\n trials = study._storage.get_all_trials(study._study_id, deepcopy=False)\n\n for t in trials:\n if \"grid_id\" in t.system_attrs and self._same_search_space(\n t.system_attrs[\"search_space\"]\n ):\n if t.state.is_finished():\n visited_grids.append(t.system_attrs[\"grid_id\"])\n elif t.state == TrialState.RUNNING:\n running_grids.append(t.system_attrs[\"grid_id\"])\n\n unvisited_grids = set(range(self._n_min_trials)) - set(visited_grids) - set(running_grids)\n\n # If evaluations for all grids have been started, return grids that have not yet finished\n # because all grids should be evaluated before stopping the optimization.\n if len(unvisited_grids) == 0:\n unvisited_grids = set(range(self._n_min_trials)) - set(visited_grids)\n\n return list(unvisited_grids)\n\n def _same_search_space(self, search_space: Mapping[str, Sequence[GridValueType]]) -> bool:\n\n if set(search_space.keys()) != set(self._search_space.keys()):\n return False\n\n for param_name in search_space.keys():\n if len(search_space[param_name]) != len(self._search_space[param_name]):\n return False\n\n param_values = cast(SortableParamValueSequenceType, search_space[param_name])\n for i, param_value in enumerate(sorted(param_values)):\n if param_value != self._search_space[param_name][i]:\n return False\n\n return True\n", "path": "optuna/samplers/_grid.py"}], "after_files": [{"content": "import collections\nimport itertools\nimport random\nfrom typing import Any\nfrom typing import Dict\nfrom typing import List\nfrom typing import Mapping\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Union\nimport warnings\n\nfrom optuna.distributions import BaseDistribution\nfrom optuna.logging import get_logger\nfrom optuna.samplers import BaseSampler\nfrom optuna.study import Study\nfrom optuna.trial import FrozenTrial\nfrom optuna.trial import TrialState\n\n\nGridValueType = Union[str, float, int, bool, None]\n\n\n_logger = get_logger(__name__)\n\n\nclass GridSampler(BaseSampler):\n \"\"\"Sampler using grid search.\n\n With :class:`~optuna.samplers.GridSampler`, the trials suggest all combinations of parameters\n in the given search space during the study.\n\n Example:\n\n .. testcode::\n\n import optuna\n\n\n def objective(trial):\n x = trial.suggest_float(\"x\", -100, 100)\n y = trial.suggest_int(\"y\", -100, 100)\n return x**2 + y**2\n\n\n search_space = {\"x\": [-50, 0, 50], \"y\": [-99, 0, 99]}\n study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))\n study.optimize(objective)\n\n Note:\n\n :class:`~optuna.samplers.GridSampler` automatically stops the optimization if all\n combinations in the passed ``search_space`` have already been evaluated, internally\n invoking the :func:`~optuna.study.Study.stop` method.\n\n Note:\n\n :class:`~optuna.samplers.GridSampler` does not take care of a parameter's quantization\n specified by discrete suggest methods but just samples one of values specified in the\n search space. E.g., in the following code snippet, either of ``-0.5`` or ``0.5`` is\n sampled as ``x`` instead of an integer point.\n\n .. testcode::\n\n import optuna\n\n\n def objective(trial):\n # The following suggest method specifies integer points between -5 and 5.\n x = trial.suggest_float(\"x\", -5, 5, step=1)\n return x**2\n\n\n # Non-int points are specified in the grid.\n search_space = {\"x\": [-0.5, 0.5]}\n study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))\n study.optimize(objective, n_trials=2)\n\n Note:\n A parameter configuration in the grid is not considered finished until its trial is\n finished. Therefore, during distributed optimization where trials run concurrently,\n different workers will occasionally suggest the same parameter configuration.\n The total number of actual trials may therefore exceed the size of the grid.\n\n Note:\n The grid is randomly shuffled and the order in which parameter configurations are\n suggested may vary. This is to reduce duplicate suggestions during distributed\n optimization.\n\n Note:\n All parameters must be specified when using :class:`~optuna.samplers.GridSampler` with\n :meth:`~optuna.study.Study.enqueue_trial`.\n\n Args:\n search_space:\n A dictionary whose key and value are a parameter name and the corresponding candidates\n of values, respectively.\n \"\"\"\n\n def __init__(self, search_space: Mapping[str, Sequence[GridValueType]]) -> None:\n\n for param_name, param_values in search_space.items():\n for value in param_values:\n self._check_value(param_name, value)\n\n self._search_space = collections.OrderedDict()\n for param_name, param_values in sorted(search_space.items()):\n self._search_space[param_name] = param_values\n\n self._all_grids = list(itertools.product(*self._search_space.values()))\n self._param_names = sorted(search_space.keys())\n self._n_min_trials = len(self._all_grids)\n\n def infer_relative_search_space(\n self, study: Study, trial: FrozenTrial\n ) -> Dict[str, BaseDistribution]:\n\n return {}\n\n def sample_relative(\n self, study: Study, trial: FrozenTrial, search_space: Dict[str, BaseDistribution]\n ) -> Dict[str, Any]:\n # Instead of returning param values, GridSampler puts the target grid id as a system attr,\n # and the values are returned from `sample_independent`. This is because the distribution\n # object is hard to get at the beginning of trial, while we need the access to the object\n # to validate the sampled value.\n\n # When the trial is created by RetryFailedTrialCallback or enqueue_trial, we should not\n # assign a new grid_id.\n if \"grid_id\" in trial.system_attrs or \"fixed_params\" in trial.system_attrs:\n return {}\n\n target_grids = self._get_unvisited_grid_ids(study)\n\n if len(target_grids) == 0:\n # This case may occur with distributed optimization or trial queue. If there is no\n # target grid, `GridSampler` evaluates a visited, duplicated point with the current\n # trial. After that, the optimization stops.\n\n _logger.warning(\n \"`GridSampler` is re-evaluating a configuration because the grid has been \"\n \"exhausted. This may happen due to a timing issue during distributed optimization \"\n \"or when re-running optimizations on already finished studies.\"\n )\n\n # One of all grids is randomly picked up in this case.\n target_grids = list(range(len(self._all_grids)))\n\n # In distributed optimization, multiple workers may simultaneously pick up the same grid.\n # To make the conflict less frequent, the grid is chosen randomly.\n grid_id = random.choice(target_grids)\n\n study._storage.set_trial_system_attr(trial._trial_id, \"search_space\", self._search_space)\n study._storage.set_trial_system_attr(trial._trial_id, \"grid_id\", grid_id)\n\n return {}\n\n def sample_independent(\n self,\n study: Study,\n trial: FrozenTrial,\n param_name: str,\n param_distribution: BaseDistribution,\n ) -> Any:\n\n if \"grid_id\" not in trial.system_attrs:\n message = \"All parameters must be specified when using GridSampler with enqueue_trial.\"\n raise ValueError(message)\n\n if param_name not in self._search_space:\n message = \"The parameter name, {}, is not found in the given grid.\".format(param_name)\n raise ValueError(message)\n\n # TODO(c-bata): Reduce the number of duplicated evaluations on multiple workers.\n # Current selection logic may evaluate the same parameters multiple times.\n # See https://gist.github.com/c-bata/f759f64becb24eea2040f4b2e3afce8f for details.\n grid_id = trial.system_attrs[\"grid_id\"]\n param_value = self._all_grids[grid_id][self._param_names.index(param_name)]\n contains = param_distribution._contains(param_distribution.to_internal_repr(param_value))\n if not contains:\n warnings.warn(\n f\"The value `{param_value}` is out of range of the parameter `{param_name}`. \"\n f\"The value will be used but the actual distribution is: `{param_distribution}`.\"\n )\n\n return param_value\n\n def after_trial(\n self,\n study: Study,\n trial: FrozenTrial,\n state: TrialState,\n values: Optional[Sequence[float]],\n ) -> None:\n target_grids = self._get_unvisited_grid_ids(study)\n\n if len(target_grids) == 0:\n study.stop()\n elif len(target_grids) == 1:\n grid_id = study._storage.get_trial_system_attrs(trial._trial_id)[\"grid_id\"]\n if grid_id == target_grids[0]:\n study.stop()\n\n @staticmethod\n def _check_value(param_name: str, param_value: Any) -> None:\n\n if param_value is None or isinstance(param_value, (str, int, float, bool)):\n return\n\n raise ValueError(\n \"{} contains a value with the type of {}, which is not supported by \"\n \"`GridSampler`. Please make sure a value is `str`, `int`, `float`, `bool`\"\n \" or `None`.\".format(param_name, type(param_value))\n )\n\n def _get_unvisited_grid_ids(self, study: Study) -> List[int]:\n\n # List up unvisited grids based on already finished ones.\n visited_grids = []\n running_grids = []\n\n # We directly query the storage to get trials here instead of `study.get_trials`,\n # since some pruners such as `HyperbandPruner` use the study transformed\n # to filter trials. See https://github.com/optuna/optuna/issues/2327 for details.\n trials = study._storage.get_all_trials(study._study_id, deepcopy=False)\n\n for t in trials:\n if \"grid_id\" in t.system_attrs and self._same_search_space(\n t.system_attrs[\"search_space\"]\n ):\n if t.state.is_finished():\n visited_grids.append(t.system_attrs[\"grid_id\"])\n elif t.state == TrialState.RUNNING:\n running_grids.append(t.system_attrs[\"grid_id\"])\n\n unvisited_grids = set(range(self._n_min_trials)) - set(visited_grids) - set(running_grids)\n\n # If evaluations for all grids have been started, return grids that have not yet finished\n # because all grids should be evaluated before stopping the optimization.\n if len(unvisited_grids) == 0:\n unvisited_grids = set(range(self._n_min_trials)) - set(visited_grids)\n\n return list(unvisited_grids)\n\n def _same_search_space(self, search_space: Mapping[str, Sequence[GridValueType]]) -> bool:\n\n if set(search_space.keys()) != set(self._search_space.keys()):\n return False\n\n for param_name in search_space.keys():\n if len(search_space[param_name]) != len(self._search_space[param_name]):\n return False\n\n for i, param_value in enumerate(search_space[param_name]):\n if param_value != self._search_space[param_name][i]:\n return False\n\n return True\n", "path": "optuna/samplers/_grid.py"}]}
| 3,808 | 389 |
gh_patches_debug_4916
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-566
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
colorize average grades on course detail pages
the numbers in the lower right should be css'd like the ones in the upper left.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/evaluation/templatetags/evaluation_templatetags.py`
Content:
```
1 from django.template import Library
2
3 register = Library()
4
5
6 @register.inclusion_tag("user_list_with_links.html")
7 def include_user_list_with_links(users):
8 return dict(users=users)
9
10
11 @register.inclusion_tag("sortable_form_js.html")
12 def include_sortable_form_js():
13 return dict()
14
15 @register.inclusion_tag("progress_bar.html")
16 def include_progress_bar(done, total, large=False):
17 return dict(done=done, total=total, large=large)
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/evap/evaluation/templatetags/evaluation_templatetags.py b/evap/evaluation/templatetags/evaluation_templatetags.py
--- a/evap/evaluation/templatetags/evaluation_templatetags.py
+++ b/evap/evaluation/templatetags/evaluation_templatetags.py
@@ -15,3 +15,7 @@
@register.inclusion_tag("progress_bar.html")
def include_progress_bar(done, total, large=False):
return dict(done=done, total=total, large=large)
+
[email protected]_tag("result_bar.html")
+def include_result_bar(result, show_grades, questionnaire_warning=False):
+ return dict(result=result, show_grades=show_grades, questionnaire_warning=questionnaire_warning)
|
{"golden_diff": "diff --git a/evap/evaluation/templatetags/evaluation_templatetags.py b/evap/evaluation/templatetags/evaluation_templatetags.py\n--- a/evap/evaluation/templatetags/evaluation_templatetags.py\n+++ b/evap/evaluation/templatetags/evaluation_templatetags.py\n@@ -15,3 +15,7 @@\n @register.inclusion_tag(\"progress_bar.html\")\n def include_progress_bar(done, total, large=False):\n return dict(done=done, total=total, large=large)\n+\[email protected]_tag(\"result_bar.html\")\n+def include_result_bar(result, show_grades, questionnaire_warning=False):\n+ return dict(result=result, show_grades=show_grades, questionnaire_warning=questionnaire_warning)\n", "issue": "colorize average grades on course detail pages\nthe numbers in the lower right should be css'd like the ones in the upper left.\n\n\n\n", "before_files": [{"content": "from django.template import Library\n\nregister = Library()\n\n\[email protected]_tag(\"user_list_with_links.html\")\ndef include_user_list_with_links(users):\n return dict(users=users)\n\n\[email protected]_tag(\"sortable_form_js.html\")\ndef include_sortable_form_js():\n return dict()\n\[email protected]_tag(\"progress_bar.html\")\ndef include_progress_bar(done, total, large=False):\n return dict(done=done, total=total, large=large)\n", "path": "evap/evaluation/templatetags/evaluation_templatetags.py"}], "after_files": [{"content": "from django.template import Library\n\nregister = Library()\n\n\[email protected]_tag(\"user_list_with_links.html\")\ndef include_user_list_with_links(users):\n return dict(users=users)\n\n\[email protected]_tag(\"sortable_form_js.html\")\ndef include_sortable_form_js():\n return dict()\n\[email protected]_tag(\"progress_bar.html\")\ndef include_progress_bar(done, total, large=False):\n return dict(done=done, total=total, large=large)\n\[email protected]_tag(\"result_bar.html\")\ndef include_result_bar(result, show_grades, questionnaire_warning=False):\n return dict(result=result, show_grades=show_grades, questionnaire_warning=questionnaire_warning)\n", "path": "evap/evaluation/templatetags/evaluation_templatetags.py"}]}
| 511 | 181 |
gh_patches_debug_32283
|
rasdani/github-patches
|
git_diff
|
zestedesavoir__zds-site-5611
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Date de publication future des News
Bonjour,
Ça ne regarde certainement que moi mais...

C'est pas logique. « Dans le futur » n'apporte **aucune information utile**. L’intérêt des news c'est d'avoir la date future, pour pouvoir géré et vérifier que la publication sera à la bonne date. Là, il faut ouvrir la news, et lire la date dans un format cryptique.

Il y a certainement plus claire comme format.
C'est tellement compliqué qu'au final, j'écris juste tout dans un fichier ou sur un boue de papier.
Quand les 10 prochains contenus sont marqués « Dans le futur » franchement on a le sentiment d'être pris pour un crétin par l'IHM.
______________________
Bref, c'est géré par la fonction `format_date`.
À votre avis comment on pourrait faire pour améliorer ça ?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zds/featured/forms.py`
Content:
```
1 from crispy_forms.bootstrap import StrictButton
2 from crispy_forms.helper import FormHelper
3 from crispy_forms.layout import Layout, Field, ButtonHolder
4 from django import forms
5 from django.urls import reverse
6 from django.utils.translation import ugettext_lazy as _
7
8 from zds.featured.models import FeaturedResource, FeaturedMessage
9
10
11 class FeaturedResourceForm(forms.ModelForm):
12 class Meta:
13 model = FeaturedResource
14
15 fields = ['title', 'type', 'authors', 'image_url', 'url']
16
17 widgets = {
18 'title': forms.TextInput(
19 attrs={
20 'placeholder': _('Titre de la Une')
21 }
22 ),
23
24 'type': forms.TextInput(
25 attrs={
26 'placeholder': _('ex: Un projet, Un article, Un tutoriel...')
27 }
28 ),
29
30 'authors': forms.TextInput(
31 attrs={
32 'placeholder': _('Des auteurs (ou pas) ?')
33 }
34 ),
35
36 'image_url': forms.URLInput(
37 attrs={
38 'placeholder': _('Lien vers l\'image de la Une (dimensions: 228x228px).')
39 }
40 ),
41
42 'url': forms.URLInput(
43 attrs={
44 'placeholder': _('Lien vers la ressource.')
45 }
46 )
47 }
48
49 major_update = forms.BooleanField(
50 label=_('Mise à jour majeure (fera passer la Une en première position lors d\'un changement)'),
51 initial=False,
52 required=False
53 )
54
55 pubdate = forms.DateTimeField(
56 label=_('Date de publication (exemple: 25/12/2015 15:00 ou 2015-12-25T15:00)'),
57 input_formats=[
58 '%d/%m/%Y %H:%M:%S', '%Y-%m-%d %H:%M:%S', # full format with second
59 '%Y-%m-%dT%H:%M', # datetime field format
60 '%Y-%m-%d %H:%M', '%d/%m/%Y %H:%M', # without second
61 '%Y-%m-%d', '%d/%m/%Y' # day only
62 ],
63 widget=forms.DateTimeInput(
64 attrs={'placeholder': _('Exemple : 25/12/2016 10:00'), 'type': 'datetime-local'},
65 format='%Y-%m-%dT%H:%M' # datetime field format
66 )
67 )
68
69 request = forms.IntegerField(widget=forms.HiddenInput(), required=False)
70
71 def __init__(self, *args, **kwargs):
72 hide_major_update_field = kwargs.pop('hide_major_update_field', False)
73
74 super(FeaturedResourceForm, self).__init__(*args, **kwargs)
75 self.helper = FormHelper()
76 self.helper.form_class = 'content-wrapper'
77 self.helper.form_method = 'post'
78 self.helper.form_action = reverse('featured-resource-create')
79
80 fields = [
81 Field('request'),
82 Field('title'),
83 Field('type'),
84 Field('authors'),
85 Field('image_url'),
86 Field('url')
87 ]
88
89 if not hide_major_update_field:
90 fields.append(Field('major_update'))
91
92 fields.extend([
93 Field('pubdate'),
94 ButtonHolder(
95 StrictButton(_('Enregistrer'), type='submit'),
96 )
97 ])
98
99 self.helper.layout = Layout(*fields)
100
101
102 class FeaturedMessageForm(forms.ModelForm):
103 class Meta:
104 model = FeaturedMessage
105
106 fields = ['hook', 'message', 'url']
107
108 widgets = {
109 'hook': forms.TextInput(
110 attrs={
111 'placeholder': _('Mot d\'accroche court ("Nouveau !")')
112 }
113 ),
114
115 'message': forms.TextInput(
116 attrs={
117 'placeholder': _('Message à afficher')
118 }
119 ),
120
121 'url': forms.URLInput(
122 attrs={
123 'placeholder': _('Lien vers la description de la ressource')
124 }
125 )
126 }
127
128 def __init__(self, *args, **kwargs):
129 super(FeaturedMessageForm, self).__init__(*args, **kwargs)
130 self.helper = FormHelper()
131 self.helper.form_class = 'content-wrapper'
132 self.helper.form_method = 'post'
133 self.helper.form_action = reverse('featured-message-create')
134
135 self.helper.layout = Layout(
136 Field('hook'),
137 Field('message'),
138 Field('url'),
139 ButtonHolder(
140 StrictButton(_('Enregistrer'), type='submit'),
141 ),
142 )
143
```
Path: `zds/utils/templatetags/date.py`
Content:
```
1 from datetime import datetime, timedelta
2
3 from django import template
4 from django.contrib.humanize.templatetags.humanize import naturaltime
5 from django.template.defaultfilters import date
6 from django.utils.timezone import get_default_timezone
7 from django.utils.translation import ugettext_lazy as _
8
9 register = template.Library()
10
11 """
12 Define a filter to format date.
13 """
14
15 # Date formatting constants
16
17 __DATE_FMT_FUTUR = _('Dans le futur')
18 __ABS_DATE_FMT_SMALL = _(r'd/m/y à H\hi') # Small format
19 __ABS_DATE_FMT_NORMAL = _(r'l d F Y à H\hi') # Normal format
20 __ABS_HUMAN_TIME_FMT = _('%d %b %Y, %H:%M:%S')
21
22
23 def date_formatter(value, tooltip, small):
24 """
25 Format a date to an human readable string.
26
27 :param value: Date to format.
28 :param bool tooltip: if `True`, format date to a tooltip label.
29 :param bool small: if `True`, create a shorter string.
30 :return:
31 """
32 if not isinstance(value, datetime):
33 return value
34
35 if getattr(value, 'tzinfo', None):
36 now = datetime.now(get_default_timezone())
37 else:
38 now = datetime.now()
39 now = now - timedelta(microseconds=now.microsecond)
40
41 if value > now:
42 return __DATE_FMT_FUTUR
43 else:
44 delta = now - value
45 # Natural time for today, absolute date after.
46 # Reverse if in tooltip
47 if (delta.days == 0) != tooltip:
48 return naturaltime(value)
49 else:
50 return date(value, str(__ABS_DATE_FMT_SMALL if small else __ABS_DATE_FMT_NORMAL))
51
52
53 @register.filter
54 def format_date(value, small=False):
55 """Format a date to an human readable string."""
56 return date_formatter(value, tooltip=False, small=small)
57
58
59 @register.filter
60 def tooltip_date(value):
61 """Format a date to an human readable string. To be used in tooltip."""
62 return date_formatter(value, tooltip=True, small=False)
63
64
65 @register.filter
66 def humane_time(timestamp):
67 """Render time (number of second from epoch) to an human readable string"""
68 return format_date(datetime.fromtimestamp(timestamp))
69
70
71 @register.filter
72 def from_elasticsearch_date(value):
73 try:
74 date = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%f')
75 except ValueError:
76 date = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S')
77 return date
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/zds/featured/forms.py b/zds/featured/forms.py
--- a/zds/featured/forms.py
+++ b/zds/featured/forms.py
@@ -61,8 +61,8 @@
'%Y-%m-%d', '%d/%m/%Y' # day only
],
widget=forms.DateTimeInput(
- attrs={'placeholder': _('Exemple : 25/12/2016 10:00'), 'type': 'datetime-local'},
- format='%Y-%m-%dT%H:%M' # datetime field format
+ attrs={'placeholder': _('Exemple : 25/12/2016 10:00'), 'type': 'text'},
+ format='%d/%m/%Y %H:%M' # datetime field format
)
)
diff --git a/zds/utils/templatetags/date.py b/zds/utils/templatetags/date.py
--- a/zds/utils/templatetags/date.py
+++ b/zds/utils/templatetags/date.py
@@ -20,7 +20,7 @@
__ABS_HUMAN_TIME_FMT = _('%d %b %Y, %H:%M:%S')
-def date_formatter(value, tooltip, small):
+def date_formatter(value, tooltip, small, ignore_future=False):
"""
Format a date to an human readable string.
@@ -38,7 +38,7 @@
now = datetime.now()
now = now - timedelta(microseconds=now.microsecond)
- if value > now:
+ if value > now and not ignore_future:
return __DATE_FMT_FUTUR
else:
delta = now - value
@@ -52,10 +52,22 @@
@register.filter
def format_date(value, small=False):
- """Format a date to an human readable string."""
+ """
+ Format a date to an human readable string.
+ If ``value`` is in future it is replaced by "In the future".
+ """
return date_formatter(value, tooltip=False, small=small)
[email protected]
+def format_date_no_future(value):
+ """
+ Format a date to an human readable string.
+ If ``value`` is in future it is formatted as a normal date.
+ """
+ return date_formatter(value, tooltip=False, small=True, ignore_future=True)
+
+
@register.filter
def tooltip_date(value):
"""Format a date to an human readable string. To be used in tooltip."""
|
{"golden_diff": "diff --git a/zds/featured/forms.py b/zds/featured/forms.py\n--- a/zds/featured/forms.py\n+++ b/zds/featured/forms.py\n@@ -61,8 +61,8 @@\n '%Y-%m-%d', '%d/%m/%Y' # day only\n ],\n widget=forms.DateTimeInput(\n- attrs={'placeholder': _('Exemple : 25/12/2016 10:00'), 'type': 'datetime-local'},\n- format='%Y-%m-%dT%H:%M' # datetime field format\n+ attrs={'placeholder': _('Exemple : 25/12/2016 10:00'), 'type': 'text'},\n+ format='%d/%m/%Y %H:%M' # datetime field format\n )\n )\n \ndiff --git a/zds/utils/templatetags/date.py b/zds/utils/templatetags/date.py\n--- a/zds/utils/templatetags/date.py\n+++ b/zds/utils/templatetags/date.py\n@@ -20,7 +20,7 @@\n __ABS_HUMAN_TIME_FMT = _('%d %b %Y, %H:%M:%S')\n \n \n-def date_formatter(value, tooltip, small):\n+def date_formatter(value, tooltip, small, ignore_future=False):\n \"\"\"\n Format a date to an human readable string.\n \n@@ -38,7 +38,7 @@\n now = datetime.now()\n now = now - timedelta(microseconds=now.microsecond)\n \n- if value > now:\n+ if value > now and not ignore_future:\n return __DATE_FMT_FUTUR\n else:\n delta = now - value\n@@ -52,10 +52,22 @@\n \n @register.filter\n def format_date(value, small=False):\n- \"\"\"Format a date to an human readable string.\"\"\"\n+ \"\"\"\n+ Format a date to an human readable string.\n+ If ``value`` is in future it is replaced by \"In the future\".\n+ \"\"\"\n return date_formatter(value, tooltip=False, small=small)\n \n \[email protected]\n+def format_date_no_future(value):\n+ \"\"\"\n+ Format a date to an human readable string.\n+ If ``value`` is in future it is formatted as a normal date.\n+ \"\"\"\n+ return date_formatter(value, tooltip=False, small=True, ignore_future=True)\n+\n+\n @register.filter\n def tooltip_date(value):\n \"\"\"Format a date to an human readable string. To be used in tooltip.\"\"\"\n", "issue": "Date de publication future des News\nBonjour,\r\n\r\n\u00c7a ne regarde certainement que moi mais...\r\n\r\n\r\n\r\nC'est pas logique. \u00ab Dans le futur \u00bb n'apporte **aucune information utile**. L\u2019int\u00e9r\u00eat des news c'est d'avoir la date future, pour pouvoir g\u00e9r\u00e9 et v\u00e9rifier que la publication sera \u00e0 la bonne date. L\u00e0, il faut ouvrir la news, et lire la date dans un format cryptique.\r\n\r\n\r\n\r\nIl y a certainement plus claire comme format.\r\n\r\nC'est tellement compliqu\u00e9 qu'au final, j'\u00e9cris juste tout dans un fichier ou sur un boue de papier.\r\n\r\nQuand les 10 prochains contenus sont marqu\u00e9s \u00ab Dans le futur \u00bb franchement on a le sentiment d'\u00eatre pris pour un cr\u00e9tin par l'IHM.\r\n\r\n______________________\r\n\r\n\r\nBref, c'est g\u00e9r\u00e9 par la fonction `format_date`. \r\n\u00c0 votre avis comment on pourrait faire pour am\u00e9liorer \u00e7a ?\n", "before_files": [{"content": "from crispy_forms.bootstrap import StrictButton\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Layout, Field, ButtonHolder\nfrom django import forms\nfrom django.urls import reverse\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom zds.featured.models import FeaturedResource, FeaturedMessage\n\n\nclass FeaturedResourceForm(forms.ModelForm):\n class Meta:\n model = FeaturedResource\n\n fields = ['title', 'type', 'authors', 'image_url', 'url']\n\n widgets = {\n 'title': forms.TextInput(\n attrs={\n 'placeholder': _('Titre de la Une')\n }\n ),\n\n 'type': forms.TextInput(\n attrs={\n 'placeholder': _('ex: Un projet, Un article, Un tutoriel...')\n }\n ),\n\n 'authors': forms.TextInput(\n attrs={\n 'placeholder': _('Des auteurs (ou pas)\u00a0?')\n }\n ),\n\n 'image_url': forms.URLInput(\n attrs={\n 'placeholder': _('Lien vers l\\'image de la Une (dimensions: 228x228px).')\n }\n ),\n\n 'url': forms.URLInput(\n attrs={\n 'placeholder': _('Lien vers la ressource.')\n }\n )\n }\n\n major_update = forms.BooleanField(\n label=_('Mise \u00e0 jour majeure (fera passer la Une en premi\u00e8re position lors d\\'un changement)'),\n initial=False,\n required=False\n )\n\n pubdate = forms.DateTimeField(\n label=_('Date de publication (exemple: 25/12/2015 15:00 ou 2015-12-25T15:00)'),\n input_formats=[\n '%d/%m/%Y %H:%M:%S', '%Y-%m-%d %H:%M:%S', # full format with second\n '%Y-%m-%dT%H:%M', # datetime field format\n '%Y-%m-%d %H:%M', '%d/%m/%Y %H:%M', # without second\n '%Y-%m-%d', '%d/%m/%Y' # day only\n ],\n widget=forms.DateTimeInput(\n attrs={'placeholder': _('Exemple : 25/12/2016 10:00'), 'type': 'datetime-local'},\n format='%Y-%m-%dT%H:%M' # datetime field format\n )\n )\n\n request = forms.IntegerField(widget=forms.HiddenInput(), required=False)\n\n def __init__(self, *args, **kwargs):\n hide_major_update_field = kwargs.pop('hide_major_update_field', False)\n\n super(FeaturedResourceForm, self).__init__(*args, **kwargs)\n self.helper = FormHelper()\n self.helper.form_class = 'content-wrapper'\n self.helper.form_method = 'post'\n self.helper.form_action = reverse('featured-resource-create')\n\n fields = [\n Field('request'),\n Field('title'),\n Field('type'),\n Field('authors'),\n Field('image_url'),\n Field('url')\n ]\n\n if not hide_major_update_field:\n fields.append(Field('major_update'))\n\n fields.extend([\n Field('pubdate'),\n ButtonHolder(\n StrictButton(_('Enregistrer'), type='submit'),\n )\n ])\n\n self.helper.layout = Layout(*fields)\n\n\nclass FeaturedMessageForm(forms.ModelForm):\n class Meta:\n model = FeaturedMessage\n\n fields = ['hook', 'message', 'url']\n\n widgets = {\n 'hook': forms.TextInput(\n attrs={\n 'placeholder': _('Mot d\\'accroche court (\"Nouveau\u00a0!\")')\n }\n ),\n\n 'message': forms.TextInput(\n attrs={\n 'placeholder': _('Message \u00e0 afficher')\n }\n ),\n\n 'url': forms.URLInput(\n attrs={\n 'placeholder': _('Lien vers la description de la ressource')\n }\n )\n }\n\n def __init__(self, *args, **kwargs):\n super(FeaturedMessageForm, self).__init__(*args, **kwargs)\n self.helper = FormHelper()\n self.helper.form_class = 'content-wrapper'\n self.helper.form_method = 'post'\n self.helper.form_action = reverse('featured-message-create')\n\n self.helper.layout = Layout(\n Field('hook'),\n Field('message'),\n Field('url'),\n ButtonHolder(\n StrictButton(_('Enregistrer'), type='submit'),\n ),\n )\n", "path": "zds/featured/forms.py"}, {"content": "from datetime import datetime, timedelta\n\nfrom django import template\nfrom django.contrib.humanize.templatetags.humanize import naturaltime\nfrom django.template.defaultfilters import date\nfrom django.utils.timezone import get_default_timezone\nfrom django.utils.translation import ugettext_lazy as _\n\nregister = template.Library()\n\n\"\"\"\nDefine a filter to format date.\n\"\"\"\n\n# Date formatting constants\n\n__DATE_FMT_FUTUR = _('Dans le futur')\n__ABS_DATE_FMT_SMALL = _(r'd/m/y \u00e0 H\\hi') # Small format\n__ABS_DATE_FMT_NORMAL = _(r'l d F Y \u00e0 H\\hi') # Normal format\n__ABS_HUMAN_TIME_FMT = _('%d %b %Y, %H:%M:%S')\n\n\ndef date_formatter(value, tooltip, small):\n \"\"\"\n Format a date to an human readable string.\n\n :param value: Date to format.\n :param bool tooltip: if `True`, format date to a tooltip label.\n :param bool small: if `True`, create a shorter string.\n :return:\n \"\"\"\n if not isinstance(value, datetime):\n return value\n\n if getattr(value, 'tzinfo', None):\n now = datetime.now(get_default_timezone())\n else:\n now = datetime.now()\n now = now - timedelta(microseconds=now.microsecond)\n\n if value > now:\n return __DATE_FMT_FUTUR\n else:\n delta = now - value\n # Natural time for today, absolute date after.\n # Reverse if in tooltip\n if (delta.days == 0) != tooltip:\n return naturaltime(value)\n else:\n return date(value, str(__ABS_DATE_FMT_SMALL if small else __ABS_DATE_FMT_NORMAL))\n\n\[email protected]\ndef format_date(value, small=False):\n \"\"\"Format a date to an human readable string.\"\"\"\n return date_formatter(value, tooltip=False, small=small)\n\n\[email protected]\ndef tooltip_date(value):\n \"\"\"Format a date to an human readable string. To be used in tooltip.\"\"\"\n return date_formatter(value, tooltip=True, small=False)\n\n\[email protected]\ndef humane_time(timestamp):\n \"\"\"Render time (number of second from epoch) to an human readable string\"\"\"\n return format_date(datetime.fromtimestamp(timestamp))\n\n\[email protected]\ndef from_elasticsearch_date(value):\n try:\n date = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%f')\n except ValueError:\n date = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S')\n return date\n", "path": "zds/utils/templatetags/date.py"}], "after_files": [{"content": "from crispy_forms.bootstrap import StrictButton\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Layout, Field, ButtonHolder\nfrom django import forms\nfrom django.urls import reverse\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom zds.featured.models import FeaturedResource, FeaturedMessage\n\n\nclass FeaturedResourceForm(forms.ModelForm):\n class Meta:\n model = FeaturedResource\n\n fields = ['title', 'type', 'authors', 'image_url', 'url']\n\n widgets = {\n 'title': forms.TextInput(\n attrs={\n 'placeholder': _('Titre de la Une')\n }\n ),\n\n 'type': forms.TextInput(\n attrs={\n 'placeholder': _('ex: Un projet, Un article, Un tutoriel...')\n }\n ),\n\n 'authors': forms.TextInput(\n attrs={\n 'placeholder': _('Des auteurs (ou pas)\u00a0?')\n }\n ),\n\n 'image_url': forms.URLInput(\n attrs={\n 'placeholder': _('Lien vers l\\'image de la Une (dimensions: 228x228px).')\n }\n ),\n\n 'url': forms.URLInput(\n attrs={\n 'placeholder': _('Lien vers la ressource.')\n }\n )\n }\n\n major_update = forms.BooleanField(\n label=_('Mise \u00e0 jour majeure (fera passer la Une en premi\u00e8re position lors d\\'un changement)'),\n initial=False,\n required=False\n )\n\n pubdate = forms.DateTimeField(\n label=_('Date de publication (exemple: 25/12/2015 15:00 ou 2015-12-25T15:00)'),\n input_formats=[\n '%d/%m/%Y %H:%M:%S', '%Y-%m-%d %H:%M:%S', # full format with second\n '%Y-%m-%dT%H:%M', # datetime field format\n '%Y-%m-%d %H:%M', '%d/%m/%Y %H:%M', # without second\n '%Y-%m-%d', '%d/%m/%Y' # day only\n ],\n widget=forms.DateTimeInput(\n attrs={'placeholder': _('Exemple : 25/12/2016 10:00'), 'type': 'text'},\n format='%d/%m/%Y %H:%M' # datetime field format\n )\n )\n\n request = forms.IntegerField(widget=forms.HiddenInput(), required=False)\n\n def __init__(self, *args, **kwargs):\n hide_major_update_field = kwargs.pop('hide_major_update_field', False)\n\n super(FeaturedResourceForm, self).__init__(*args, **kwargs)\n self.helper = FormHelper()\n self.helper.form_class = 'content-wrapper'\n self.helper.form_method = 'post'\n self.helper.form_action = reverse('featured-resource-create')\n\n fields = [\n Field('request'),\n Field('title'),\n Field('type'),\n Field('authors'),\n Field('image_url'),\n Field('url')\n ]\n\n if not hide_major_update_field:\n fields.append(Field('major_update'))\n\n fields.extend([\n Field('pubdate'),\n ButtonHolder(\n StrictButton(_('Enregistrer'), type='submit'),\n )\n ])\n\n self.helper.layout = Layout(*fields)\n\n\nclass FeaturedMessageForm(forms.ModelForm):\n class Meta:\n model = FeaturedMessage\n\n fields = ['hook', 'message', 'url']\n\n widgets = {\n 'hook': forms.TextInput(\n attrs={\n 'placeholder': _('Mot d\\'accroche court (\"Nouveau\u00a0!\")')\n }\n ),\n\n 'message': forms.TextInput(\n attrs={\n 'placeholder': _('Message \u00e0 afficher')\n }\n ),\n\n 'url': forms.URLInput(\n attrs={\n 'placeholder': _('Lien vers la description de la ressource')\n }\n )\n }\n\n def __init__(self, *args, **kwargs):\n super(FeaturedMessageForm, self).__init__(*args, **kwargs)\n self.helper = FormHelper()\n self.helper.form_class = 'content-wrapper'\n self.helper.form_method = 'post'\n self.helper.form_action = reverse('featured-message-create')\n\n self.helper.layout = Layout(\n Field('hook'),\n Field('message'),\n Field('url'),\n ButtonHolder(\n StrictButton(_('Enregistrer'), type='submit'),\n ),\n )\n", "path": "zds/featured/forms.py"}, {"content": "from datetime import datetime, timedelta\n\nfrom django import template\nfrom django.contrib.humanize.templatetags.humanize import naturaltime\nfrom django.template.defaultfilters import date\nfrom django.utils.timezone import get_default_timezone\nfrom django.utils.translation import ugettext_lazy as _\n\nregister = template.Library()\n\n\"\"\"\nDefine a filter to format date.\n\"\"\"\n\n# Date formatting constants\n\n__DATE_FMT_FUTUR = _('Dans le futur')\n__ABS_DATE_FMT_SMALL = _(r'd/m/y \u00e0 H\\hi') # Small format\n__ABS_DATE_FMT_NORMAL = _(r'l d F Y \u00e0 H\\hi') # Normal format\n__ABS_HUMAN_TIME_FMT = _('%d %b %Y, %H:%M:%S')\n\n\ndef date_formatter(value, tooltip, small, ignore_future=False):\n \"\"\"\n Format a date to an human readable string.\n\n :param value: Date to format.\n :param bool tooltip: if `True`, format date to a tooltip label.\n :param bool small: if `True`, create a shorter string.\n :return:\n \"\"\"\n if not isinstance(value, datetime):\n return value\n\n if getattr(value, 'tzinfo', None):\n now = datetime.now(get_default_timezone())\n else:\n now = datetime.now()\n now = now - timedelta(microseconds=now.microsecond)\n\n if value > now and not ignore_future:\n return __DATE_FMT_FUTUR\n else:\n delta = now - value\n # Natural time for today, absolute date after.\n # Reverse if in tooltip\n if (delta.days == 0) != tooltip:\n return naturaltime(value)\n else:\n return date(value, str(__ABS_DATE_FMT_SMALL if small else __ABS_DATE_FMT_NORMAL))\n\n\[email protected]\ndef format_date(value, small=False):\n \"\"\"\n Format a date to an human readable string.\n If ``value`` is in future it is replaced by \"In the future\".\n \"\"\"\n return date_formatter(value, tooltip=False, small=small)\n\n\[email protected]\ndef format_date_no_future(value):\n \"\"\"\n Format a date to an human readable string.\n If ``value`` is in future it is formatted as a normal date.\n \"\"\"\n return date_formatter(value, tooltip=False, small=True, ignore_future=True)\n\n\[email protected]\ndef tooltip_date(value):\n \"\"\"Format a date to an human readable string. To be used in tooltip.\"\"\"\n return date_formatter(value, tooltip=True, small=False)\n\n\[email protected]\ndef humane_time(timestamp):\n \"\"\"Render time (number of second from epoch) to an human readable string\"\"\"\n return format_date(datetime.fromtimestamp(timestamp))\n\n\[email protected]\ndef from_elasticsearch_date(value):\n try:\n date = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%f')\n except ValueError:\n date = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S')\n return date\n", "path": "zds/utils/templatetags/date.py"}]}
| 2,648 | 562 |
gh_patches_debug_17091
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-493
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add creation date and current kinto version to generated config file
Using comments for example:
``` ini
# Created at Thu, 03 Mar 2016 17:02:37 +0100
# Using Kinto version 1.11.2
[server:main]
use = egg:waitress#main
host = 0.0.0.0
port = 8888
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/config/__init__.py`
Content:
```
1 import os
2 import codecs
3
4 from cliquet import utils as cliquet_utils
5
6 from kinto import logger
7
8 HERE = os.path.abspath(os.path.dirname(__file__))
9
10
11 def render_template(template, destination, **kwargs):
12 template = os.path.join(HERE, template)
13 folder = os.path.dirname(destination)
14
15 if folder and not os.path.exists(folder):
16 os.makedirs(folder)
17
18 logger.info("Created config {}".format(os.path.abspath(destination)))
19
20 with codecs.open(template, 'r', encoding='utf-8') as f:
21 raw_template = f.read()
22 rendered = raw_template.format(**kwargs)
23 with codecs.open(destination, 'w+', encoding='utf-8') as output:
24 output.write(rendered)
25
26
27 def init(config_file, backend):
28 values = {}
29
30 values['secret'] = cliquet_utils.random_bytes_hex(32)
31
32 values['storage_backend'] = "cliquet.storage.%s" % backend
33 values['cache_backend'] = "cliquet.cache.%s" % backend
34 values['permission_backend'] = "cliquet.permission.%s" % backend
35
36 if backend == 'postgresql':
37 postgresql_url = "postgres://postgres:postgres@localhost/postgres"
38 values['storage_url'] = postgresql_url
39 values['cache_url'] = postgresql_url
40 values['permission_url'] = postgresql_url
41
42 elif backend == 'redis':
43 redis_url = "redis://localhost:6379"
44 values['storage_url'] = redis_url + "/1"
45 values['cache_url'] = redis_url + "/2"
46 values['permission_url'] = redis_url + "/3"
47
48 else:
49 values['storage_url'] = ''
50 values['cache_url'] = ''
51 values['permission_url'] = ''
52
53 render_template("kinto.tpl", config_file, **values)
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kinto/config/__init__.py b/kinto/config/__init__.py
--- a/kinto/config/__init__.py
+++ b/kinto/config/__init__.py
@@ -1,9 +1,11 @@
import os
import codecs
+from time import strftime
from cliquet import utils as cliquet_utils
from kinto import logger
+from kinto import __version__
HERE = os.path.abspath(os.path.dirname(__file__))
@@ -29,6 +31,9 @@
values['secret'] = cliquet_utils.random_bytes_hex(32)
+ values['kinto_version'] = __version__
+ values['config_file_timestamp'] = strftime('%a, %d %b %Y %H:%M:%S %z')
+
values['storage_backend'] = "cliquet.storage.%s" % backend
values['cache_backend'] = "cliquet.cache.%s" % backend
values['permission_backend'] = "cliquet.permission.%s" % backend
|
{"golden_diff": "diff --git a/kinto/config/__init__.py b/kinto/config/__init__.py\n--- a/kinto/config/__init__.py\n+++ b/kinto/config/__init__.py\n@@ -1,9 +1,11 @@\n import os\n import codecs\n+from time import strftime\n \n from cliquet import utils as cliquet_utils\n \n from kinto import logger\n+from kinto import __version__\n \n HERE = os.path.abspath(os.path.dirname(__file__))\n \n@@ -29,6 +31,9 @@\n \n values['secret'] = cliquet_utils.random_bytes_hex(32)\n \n+ values['kinto_version'] = __version__\n+ values['config_file_timestamp'] = strftime('%a, %d %b %Y %H:%M:%S %z')\n+\n values['storage_backend'] = \"cliquet.storage.%s\" % backend\n values['cache_backend'] = \"cliquet.cache.%s\" % backend\n values['permission_backend'] = \"cliquet.permission.%s\" % backend\n", "issue": "Add creation date and current kinto version to generated config file\nUsing comments for example:\n\n``` ini\n# Created at Thu, 03 Mar 2016 17:02:37 +0100\n# Using Kinto version 1.11.2\n\n[server:main]\nuse = egg:waitress#main\nhost = 0.0.0.0\nport = 8888\n\n```\n\n", "before_files": [{"content": "import os\nimport codecs\n\nfrom cliquet import utils as cliquet_utils\n\nfrom kinto import logger\n\nHERE = os.path.abspath(os.path.dirname(__file__))\n\n\ndef render_template(template, destination, **kwargs):\n template = os.path.join(HERE, template)\n folder = os.path.dirname(destination)\n\n if folder and not os.path.exists(folder):\n os.makedirs(folder)\n\n logger.info(\"Created config {}\".format(os.path.abspath(destination)))\n\n with codecs.open(template, 'r', encoding='utf-8') as f:\n raw_template = f.read()\n rendered = raw_template.format(**kwargs)\n with codecs.open(destination, 'w+', encoding='utf-8') as output:\n output.write(rendered)\n\n\ndef init(config_file, backend):\n values = {}\n\n values['secret'] = cliquet_utils.random_bytes_hex(32)\n\n values['storage_backend'] = \"cliquet.storage.%s\" % backend\n values['cache_backend'] = \"cliquet.cache.%s\" % backend\n values['permission_backend'] = \"cliquet.permission.%s\" % backend\n\n if backend == 'postgresql':\n postgresql_url = \"postgres://postgres:postgres@localhost/postgres\"\n values['storage_url'] = postgresql_url\n values['cache_url'] = postgresql_url\n values['permission_url'] = postgresql_url\n\n elif backend == 'redis':\n redis_url = \"redis://localhost:6379\"\n values['storage_url'] = redis_url + \"/1\"\n values['cache_url'] = redis_url + \"/2\"\n values['permission_url'] = redis_url + \"/3\"\n\n else:\n values['storage_url'] = ''\n values['cache_url'] = ''\n values['permission_url'] = ''\n\n render_template(\"kinto.tpl\", config_file, **values)\n", "path": "kinto/config/__init__.py"}], "after_files": [{"content": "import os\nimport codecs\nfrom time import strftime\n\nfrom cliquet import utils as cliquet_utils\n\nfrom kinto import logger\nfrom kinto import __version__\n\nHERE = os.path.abspath(os.path.dirname(__file__))\n\n\ndef render_template(template, destination, **kwargs):\n template = os.path.join(HERE, template)\n folder = os.path.dirname(destination)\n\n if folder and not os.path.exists(folder):\n os.makedirs(folder)\n\n logger.info(\"Created config {}\".format(os.path.abspath(destination)))\n\n with codecs.open(template, 'r', encoding='utf-8') as f:\n raw_template = f.read()\n rendered = raw_template.format(**kwargs)\n with codecs.open(destination, 'w+', encoding='utf-8') as output:\n output.write(rendered)\n\n\ndef init(config_file, backend):\n values = {}\n\n values['secret'] = cliquet_utils.random_bytes_hex(32)\n\n values['kinto_version'] = __version__\n values['config_file_timestamp'] = strftime('%a, %d %b %Y %H:%M:%S %z')\n\n values['storage_backend'] = \"cliquet.storage.%s\" % backend\n values['cache_backend'] = \"cliquet.cache.%s\" % backend\n values['permission_backend'] = \"cliquet.permission.%s\" % backend\n\n if backend == 'postgresql':\n postgresql_url = \"postgres://postgres:postgres@localhost/postgres\"\n values['storage_url'] = postgresql_url\n values['cache_url'] = postgresql_url\n values['permission_url'] = postgresql_url\n\n elif backend == 'redis':\n redis_url = \"redis://localhost:6379\"\n values['storage_url'] = redis_url + \"/1\"\n values['cache_url'] = redis_url + \"/2\"\n values['permission_url'] = redis_url + \"/3\"\n\n else:\n values['storage_url'] = ''\n values['cache_url'] = ''\n values['permission_url'] = ''\n\n render_template(\"kinto.tpl\", config_file, **values)\n", "path": "kinto/config/__init__.py"}]}
| 860 | 225 |
gh_patches_debug_12115
|
rasdani/github-patches
|
git_diff
|
Miserlou__Zappa-2049
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Syntax warning due to comparison of literals using is in Python 3.8
## Context
Syntax warning due to comparison of literals using is.
## Possible Fix
Use == and != as suggested in the warning
## Steps to Reproduce
```
find . -iname '*.py' | xargs -P 4 -I{} python -Walways -m py_compile {}
./zappa/core.py:2026: SyntaxWarning: "is" with a literal. Did you mean "=="?
elif key is 'LambdaConfig':
./zappa/cli.py:1379: SyntaxWarning: "is" with a literal. Did you mean "=="?
if token.count('-') is 4 and token.replace('-', '').isalnum():
./zappa/cli.py:2513: SyntaxWarning: "is" with a literal. Did you mean "=="?
if (token.count('.') is 3 and token.replace('.', '').isnumeric()):
./zappa/cli.py:2548: SyntaxWarning: "is" with a literal. Did you mean "=="?
if token.count('-') is 4 and token.replace('-', '').isalnum():
./zappa/cli.py:2555: SyntaxWarning: "is" with a literal. Did you mean "=="?
if token.count('.') is 3 and token.replace('.', '').isnumeric():
./example/authmodule.py:78: DeprecationWarning: invalid escape sequence \*
pathRegex = "^[/.a-zA-Z0-9-\*]+$"
```
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: master
* Operating System and Python version: Python 3.8
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.json`:
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `example/authmodule.py`
Content:
```
1 """
2 Copyright 2015-2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at
4 http://aws.amazon.com/apache2.0/
5 or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
6 """
7 import re
8 import time
9 import pprint
10 import json
11
12
13 def lambda_handler(event, context):
14 print("Client token: " + event['authorizationToken'])
15 print("Method ARN: " + event['methodArn'])
16 """validate the incoming token"""
17 """and produce the principal user identifier associated with the token"""
18
19 """this could be accomplished in a number of ways:"""
20 """1. Call out to OAuth provider"""
21 """2. Decode a JWT token inline"""
22 """3. Lookup in a self-managed DB"""
23 principalId = "user|a1b2c3d4"
24
25 """you can send a 401 Unauthorized response to the client by failing like so:"""
26 """raise Exception('Unauthorized')"""
27
28 """if the token is valid, a policy must be generated which will allow or deny access to the client"""
29
30 """if access is denied, the client will receive a 403 Access Denied response"""
31 """if access is allowed, API Gateway will proceed with the backend integration configured on the method that was called"""
32
33 """this function must generate a policy that is associated with the recognized principal user identifier."""
34 """depending on your use case, you might store policies in a DB, or generate them on the fly"""
35
36 """keep in mind, the policy is cached for 5 minutes by default (TTL is configurable in the authorizer)"""
37 """and will apply to subsequent calls to any method/resource in the RestApi"""
38 """made with the same token"""
39
40 """the example policy below denies access to all resources in the RestApi"""
41 tmp = event['methodArn'].split(':')
42 apiGatewayArnTmp = tmp[5].split('/')
43 awsAccountId = tmp[4]
44
45 policy = AuthPolicy(principalId, awsAccountId)
46 policy.restApiId = apiGatewayArnTmp[0]
47 policy.region = tmp[3]
48 policy.stage = apiGatewayArnTmp[1]
49
50 # Blueprint denies all methods by default
51 # policy.denyAllMethods()
52
53 # Example allows all methods
54 policy.allowAllMethods()
55
56 """policy.allowMethod(HttpVerb.GET, "/pets/*")"""
57
58 """finally, build the policy and exit the function using return"""
59 return policy.build()
60
61 class HttpVerb:
62 GET = "GET"
63 POST = "POST"
64 PUT = "PUT"
65 PATCH = "PATCH"
66 HEAD = "HEAD"
67 DELETE = "DELETE"
68 OPTIONS = "OPTIONS"
69 ALL = "*"
70
71 class AuthPolicy:
72 awsAccountId = ""
73 """The AWS account id the policy will be generated for. This is used to create the method ARNs."""
74 principalId = ""
75 """The principal used for the policy, this should be a unique identifier for the end user."""
76 version = "2012-10-17"
77 """The policy version used for the evaluation. This should always be '2012-10-17'"""
78 pathRegex = "^[/.a-zA-Z0-9-\*]+$"
79 """The regular expression used to validate resource paths for the policy"""
80
81 """these are the internal lists of allowed and denied methods. These are lists
82 of objects and each object has 2 properties: A resource ARN and a nullable
83 conditions statement.
84 the build method processes these lists and generates the appropriate
85 statements for the final policy"""
86 allowMethods = []
87 denyMethods = []
88
89 restApiId = "*"
90 """The API Gateway API id. By default this is set to '*'"""
91 region = "*"
92 """The region where the API is deployed. By default this is set to '*'"""
93 stage = "*"
94 """The name of the stage used in the policy. By default this is set to '*'"""
95
96 def __init__(self, principal, awsAccountId):
97 self.awsAccountId = awsAccountId
98 self.principalId = principal
99 self.allowMethods = []
100 self.denyMethods = []
101
102 def _addMethod(self, effect, verb, resource, conditions):
103 """Adds a method to the internal lists of allowed or denied methods. Each object in
104 the internal list contains a resource ARN and a condition statement. The condition
105 statement can be null."""
106 if verb != "*" and not hasattr(HttpVerb, verb):
107 raise NameError("Invalid HTTP verb " + verb + ". Allowed verbs in HttpVerb class")
108 resourcePattern = re.compile(self.pathRegex)
109 if not resourcePattern.match(resource):
110 raise NameError("Invalid resource path: " + resource + ". Path should match " + self.pathRegex)
111
112 if resource[:1] == "/":
113 resource = resource[1:]
114
115 resourceArn = ("arn:aws:execute-api:" +
116 self.region + ":" +
117 self.awsAccountId + ":" +
118 self.restApiId + "/" +
119 self.stage + "/" +
120 verb + "/" +
121 resource)
122
123 if effect.lower() == "allow":
124 self.allowMethods.append({
125 'resourceArn' : resourceArn,
126 'conditions' : conditions
127 })
128 elif effect.lower() == "deny":
129 self.denyMethods.append({
130 'resourceArn' : resourceArn,
131 'conditions' : conditions
132 })
133
134 def _getEmptyStatement(self, effect):
135 """Returns an empty statement object prepopulated with the correct action and the
136 desired effect."""
137 statement = {
138 'Action': 'execute-api:Invoke',
139 'Effect': effect[:1].upper() + effect[1:].lower(),
140 'Resource': []
141 }
142
143 return statement
144
145 def _getStatementForEffect(self, effect, methods):
146 """This function loops over an array of objects containing a resourceArn and
147 conditions statement and generates the array of statements for the policy."""
148 statements = []
149
150 if len(methods) > 0:
151 statement = self._getEmptyStatement(effect)
152
153 for curMethod in methods:
154 if curMethod['conditions'] is None or len(curMethod['conditions']) == 0:
155 statement['Resource'].append(curMethod['resourceArn'])
156 else:
157 conditionalStatement = self._getEmptyStatement(effect)
158 conditionalStatement['Resource'].append(curMethod['resourceArn'])
159 conditionalStatement['Condition'] = curMethod['conditions']
160 statements.append(conditionalStatement)
161
162 statements.append(statement)
163
164 return statements
165
166 def allowAllMethods(self):
167 """Adds a '*' allow to the policy to authorize access to all methods of an API"""
168 self._addMethod("Allow", HttpVerb.ALL, "*", [])
169
170 def denyAllMethods(self):
171 """Adds a '*' allow to the policy to deny access to all methods of an API"""
172 self._addMethod("Deny", HttpVerb.ALL, "*", [])
173
174 def allowMethod(self, verb, resource):
175 """Adds an API Gateway method (Http verb + Resource path) to the list of allowed
176 methods for the policy"""
177 self._addMethod("Allow", verb, resource, [])
178
179 def denyMethod(self, verb, resource):
180 """Adds an API Gateway method (Http verb + Resource path) to the list of denied
181 methods for the policy"""
182 self._addMethod("Deny", verb, resource, [])
183
184 def allowMethodWithConditions(self, verb, resource, conditions):
185 """Adds an API Gateway method (Http verb + Resource path) to the list of allowed
186 methods and includes a condition for the policy statement. More on AWS policy
187 conditions here: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Condition"""
188 self._addMethod("Allow", verb, resource, conditions)
189
190 def denyMethodWithConditions(self, verb, resource, conditions):
191 """Adds an API Gateway method (Http verb + Resource path) to the list of denied
192 methods and includes a condition for the policy statement. More on AWS policy
193 conditions here: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Condition"""
194 self._addMethod("Deny", verb, resource, conditions)
195
196 def build(self):
197 """Generates the policy document based on the internal lists of allowed and denied
198 conditions. This will generate a policy with two main statements for the effect:
199 one statement for Allow and one statement for Deny.
200 Methods that includes conditions will have their own statement in the policy."""
201 if ((self.allowMethods is None or len(self.allowMethods) == 0) and
202 (self.denyMethods is None or len(self.denyMethods) == 0)):
203 raise NameError("No statements defined for the policy")
204
205 policy = {
206 'principalId' : self.principalId,
207 'policyDocument' : {
208 'Version' : self.version,
209 'Statement' : []
210 }
211 }
212
213 policy['policyDocument']['Statement'].extend(self._getStatementForEffect("Allow", self.allowMethods))
214 policy['policyDocument']['Statement'].extend(self._getStatementForEffect("Deny", self.denyMethods))
215
216 return policy
217
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/example/authmodule.py b/example/authmodule.py
--- a/example/authmodule.py
+++ b/example/authmodule.py
@@ -75,7 +75,7 @@
"""The principal used for the policy, this should be a unique identifier for the end user."""
version = "2012-10-17"
"""The policy version used for the evaluation. This should always be '2012-10-17'"""
- pathRegex = "^[/.a-zA-Z0-9-\*]+$"
+ pathRegex = r"^[/.a-zA-Z0-9-\*]+$"
"""The regular expression used to validate resource paths for the policy"""
"""these are the internal lists of allowed and denied methods. These are lists
|
{"golden_diff": "diff --git a/example/authmodule.py b/example/authmodule.py\n--- a/example/authmodule.py\n+++ b/example/authmodule.py\n@@ -75,7 +75,7 @@\n \"\"\"The principal used for the policy, this should be a unique identifier for the end user.\"\"\"\n version = \"2012-10-17\"\n \"\"\"The policy version used for the evaluation. This should always be '2012-10-17'\"\"\"\n- pathRegex = \"^[/.a-zA-Z0-9-\\*]+$\"\n+ pathRegex = r\"^[/.a-zA-Z0-9-\\*]+$\"\n \"\"\"The regular expression used to validate resource paths for the policy\"\"\"\n \n \"\"\"these are the internal lists of allowed and denied methods. These are lists\n", "issue": "Syntax warning due to comparison of literals using is in Python 3.8\n## Context\r\n\r\nSyntax warning due to comparison of literals using is.\r\n\r\n## Possible Fix\r\n\r\nUse == and != as suggested in the warning\r\n\r\n## Steps to Reproduce\r\n\r\n```\r\nfind . -iname '*.py' | xargs -P 4 -I{} python -Walways -m py_compile {} \r\n\r\n./zappa/core.py:2026: SyntaxWarning: \"is\" with a literal. Did you mean \"==\"?\r\n elif key is 'LambdaConfig':\r\n./zappa/cli.py:1379: SyntaxWarning: \"is\" with a literal. Did you mean \"==\"?\r\n if token.count('-') is 4 and token.replace('-', '').isalnum():\r\n./zappa/cli.py:2513: SyntaxWarning: \"is\" with a literal. Did you mean \"==\"?\r\n if (token.count('.') is 3 and token.replace('.', '').isnumeric()):\r\n./zappa/cli.py:2548: SyntaxWarning: \"is\" with a literal. Did you mean \"==\"?\r\n if token.count('-') is 4 and token.replace('-', '').isalnum():\r\n./zappa/cli.py:2555: SyntaxWarning: \"is\" with a literal. Did you mean \"==\"?\r\n if token.count('.') is 3 and token.replace('.', '').isnumeric():\r\n./example/authmodule.py:78: DeprecationWarning: invalid escape sequence \\*\r\n pathRegex = \"^[/.a-zA-Z0-9-\\*]+$\"\r\n```\r\n\r\n## Your Environment\r\n<!--- Include as many relevant details about the environment you experienced the bug in -->\r\n* Zappa version used: master\r\n* Operating System and Python version: Python 3.8\r\n* The output of `pip freeze`:\r\n* Link to your project (optional):\r\n* Your `zappa_settings.json`: \r\n\n", "before_files": [{"content": "\"\"\"\nCopyright 2015-2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nLicensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at\n http://aws.amazon.com/apache2.0/\nor in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.\n\"\"\"\nimport re\nimport time\nimport pprint\nimport json\n\n\ndef lambda_handler(event, context):\n print(\"Client token: \" + event['authorizationToken'])\n print(\"Method ARN: \" + event['methodArn'])\n \"\"\"validate the incoming token\"\"\"\n \"\"\"and produce the principal user identifier associated with the token\"\"\"\n\n \"\"\"this could be accomplished in a number of ways:\"\"\"\n \"\"\"1. Call out to OAuth provider\"\"\"\n \"\"\"2. Decode a JWT token inline\"\"\"\n \"\"\"3. Lookup in a self-managed DB\"\"\"\n principalId = \"user|a1b2c3d4\"\n\n \"\"\"you can send a 401 Unauthorized response to the client by failing like so:\"\"\"\n \"\"\"raise Exception('Unauthorized')\"\"\"\n\n \"\"\"if the token is valid, a policy must be generated which will allow or deny access to the client\"\"\"\n\n \"\"\"if access is denied, the client will receive a 403 Access Denied response\"\"\"\n \"\"\"if access is allowed, API Gateway will proceed with the backend integration configured on the method that was called\"\"\"\n\n \"\"\"this function must generate a policy that is associated with the recognized principal user identifier.\"\"\"\n \"\"\"depending on your use case, you might store policies in a DB, or generate them on the fly\"\"\"\n\n \"\"\"keep in mind, the policy is cached for 5 minutes by default (TTL is configurable in the authorizer)\"\"\"\n \"\"\"and will apply to subsequent calls to any method/resource in the RestApi\"\"\"\n \"\"\"made with the same token\"\"\"\n\n \"\"\"the example policy below denies access to all resources in the RestApi\"\"\"\n tmp = event['methodArn'].split(':')\n apiGatewayArnTmp = tmp[5].split('/')\n awsAccountId = tmp[4]\n\n policy = AuthPolicy(principalId, awsAccountId)\n policy.restApiId = apiGatewayArnTmp[0]\n policy.region = tmp[3]\n policy.stage = apiGatewayArnTmp[1]\n\n # Blueprint denies all methods by default\n # policy.denyAllMethods()\n\n # Example allows all methods\n policy.allowAllMethods()\n\n \"\"\"policy.allowMethod(HttpVerb.GET, \"/pets/*\")\"\"\"\n\n \"\"\"finally, build the policy and exit the function using return\"\"\"\n return policy.build()\n\nclass HttpVerb:\n GET = \"GET\"\n POST = \"POST\"\n PUT = \"PUT\"\n PATCH = \"PATCH\"\n HEAD = \"HEAD\"\n DELETE = \"DELETE\"\n OPTIONS = \"OPTIONS\"\n ALL = \"*\"\n\nclass AuthPolicy:\n awsAccountId = \"\"\n \"\"\"The AWS account id the policy will be generated for. This is used to create the method ARNs.\"\"\"\n principalId = \"\"\n \"\"\"The principal used for the policy, this should be a unique identifier for the end user.\"\"\"\n version = \"2012-10-17\"\n \"\"\"The policy version used for the evaluation. This should always be '2012-10-17'\"\"\"\n pathRegex = \"^[/.a-zA-Z0-9-\\*]+$\"\n \"\"\"The regular expression used to validate resource paths for the policy\"\"\"\n\n \"\"\"these are the internal lists of allowed and denied methods. These are lists\n of objects and each object has 2 properties: A resource ARN and a nullable\n conditions statement.\n the build method processes these lists and generates the appropriate\n statements for the final policy\"\"\"\n allowMethods = []\n denyMethods = []\n\n restApiId = \"*\"\n \"\"\"The API Gateway API id. By default this is set to '*'\"\"\"\n region = \"*\"\n \"\"\"The region where the API is deployed. By default this is set to '*'\"\"\"\n stage = \"*\"\n \"\"\"The name of the stage used in the policy. By default this is set to '*'\"\"\"\n\n def __init__(self, principal, awsAccountId):\n self.awsAccountId = awsAccountId\n self.principalId = principal\n self.allowMethods = []\n self.denyMethods = []\n\n def _addMethod(self, effect, verb, resource, conditions):\n \"\"\"Adds a method to the internal lists of allowed or denied methods. Each object in\n the internal list contains a resource ARN and a condition statement. The condition\n statement can be null.\"\"\"\n if verb != \"*\" and not hasattr(HttpVerb, verb):\n raise NameError(\"Invalid HTTP verb \" + verb + \". Allowed verbs in HttpVerb class\")\n resourcePattern = re.compile(self.pathRegex)\n if not resourcePattern.match(resource):\n raise NameError(\"Invalid resource path: \" + resource + \". Path should match \" + self.pathRegex)\n\n if resource[:1] == \"/\":\n resource = resource[1:]\n\n resourceArn = (\"arn:aws:execute-api:\" +\n self.region + \":\" +\n self.awsAccountId + \":\" +\n self.restApiId + \"/\" +\n self.stage + \"/\" +\n verb + \"/\" +\n resource)\n\n if effect.lower() == \"allow\":\n self.allowMethods.append({\n 'resourceArn' : resourceArn,\n 'conditions' : conditions\n })\n elif effect.lower() == \"deny\":\n self.denyMethods.append({\n 'resourceArn' : resourceArn,\n 'conditions' : conditions\n })\n\n def _getEmptyStatement(self, effect):\n \"\"\"Returns an empty statement object prepopulated with the correct action and the\n desired effect.\"\"\"\n statement = {\n 'Action': 'execute-api:Invoke',\n 'Effect': effect[:1].upper() + effect[1:].lower(),\n 'Resource': []\n }\n\n return statement\n\n def _getStatementForEffect(self, effect, methods):\n \"\"\"This function loops over an array of objects containing a resourceArn and\n conditions statement and generates the array of statements for the policy.\"\"\"\n statements = []\n\n if len(methods) > 0:\n statement = self._getEmptyStatement(effect)\n\n for curMethod in methods:\n if curMethod['conditions'] is None or len(curMethod['conditions']) == 0:\n statement['Resource'].append(curMethod['resourceArn'])\n else:\n conditionalStatement = self._getEmptyStatement(effect)\n conditionalStatement['Resource'].append(curMethod['resourceArn'])\n conditionalStatement['Condition'] = curMethod['conditions']\n statements.append(conditionalStatement)\n\n statements.append(statement)\n\n return statements\n\n def allowAllMethods(self):\n \"\"\"Adds a '*' allow to the policy to authorize access to all methods of an API\"\"\"\n self._addMethod(\"Allow\", HttpVerb.ALL, \"*\", [])\n\n def denyAllMethods(self):\n \"\"\"Adds a '*' allow to the policy to deny access to all methods of an API\"\"\"\n self._addMethod(\"Deny\", HttpVerb.ALL, \"*\", [])\n\n def allowMethod(self, verb, resource):\n \"\"\"Adds an API Gateway method (Http verb + Resource path) to the list of allowed\n methods for the policy\"\"\"\n self._addMethod(\"Allow\", verb, resource, [])\n\n def denyMethod(self, verb, resource):\n \"\"\"Adds an API Gateway method (Http verb + Resource path) to the list of denied\n methods for the policy\"\"\"\n self._addMethod(\"Deny\", verb, resource, [])\n\n def allowMethodWithConditions(self, verb, resource, conditions):\n \"\"\"Adds an API Gateway method (Http verb + Resource path) to the list of allowed\n methods and includes a condition for the policy statement. More on AWS policy\n conditions here: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Condition\"\"\"\n self._addMethod(\"Allow\", verb, resource, conditions)\n\n def denyMethodWithConditions(self, verb, resource, conditions):\n \"\"\"Adds an API Gateway method (Http verb + Resource path) to the list of denied\n methods and includes a condition for the policy statement. More on AWS policy\n conditions here: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Condition\"\"\"\n self._addMethod(\"Deny\", verb, resource, conditions)\n\n def build(self):\n \"\"\"Generates the policy document based on the internal lists of allowed and denied\n conditions. This will generate a policy with two main statements for the effect:\n one statement for Allow and one statement for Deny.\n Methods that includes conditions will have their own statement in the policy.\"\"\"\n if ((self.allowMethods is None or len(self.allowMethods) == 0) and\n (self.denyMethods is None or len(self.denyMethods) == 0)):\n raise NameError(\"No statements defined for the policy\")\n\n policy = {\n 'principalId' : self.principalId,\n 'policyDocument' : {\n 'Version' : self.version,\n 'Statement' : []\n }\n }\n\n policy['policyDocument']['Statement'].extend(self._getStatementForEffect(\"Allow\", self.allowMethods))\n policy['policyDocument']['Statement'].extend(self._getStatementForEffect(\"Deny\", self.denyMethods))\n\n return policy\n", "path": "example/authmodule.py"}], "after_files": [{"content": "\"\"\"\nCopyright 2015-2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.\nLicensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at\n http://aws.amazon.com/apache2.0/\nor in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.\n\"\"\"\nimport re\nimport time\nimport pprint\nimport json\n\n\ndef lambda_handler(event, context):\n print(\"Client token: \" + event['authorizationToken'])\n print(\"Method ARN: \" + event['methodArn'])\n \"\"\"validate the incoming token\"\"\"\n \"\"\"and produce the principal user identifier associated with the token\"\"\"\n\n \"\"\"this could be accomplished in a number of ways:\"\"\"\n \"\"\"1. Call out to OAuth provider\"\"\"\n \"\"\"2. Decode a JWT token inline\"\"\"\n \"\"\"3. Lookup in a self-managed DB\"\"\"\n principalId = \"user|a1b2c3d4\"\n\n \"\"\"you can send a 401 Unauthorized response to the client by failing like so:\"\"\"\n \"\"\"raise Exception('Unauthorized')\"\"\"\n\n \"\"\"if the token is valid, a policy must be generated which will allow or deny access to the client\"\"\"\n\n \"\"\"if access is denied, the client will receive a 403 Access Denied response\"\"\"\n \"\"\"if access is allowed, API Gateway will proceed with the backend integration configured on the method that was called\"\"\"\n\n \"\"\"this function must generate a policy that is associated with the recognized principal user identifier.\"\"\"\n \"\"\"depending on your use case, you might store policies in a DB, or generate them on the fly\"\"\"\n\n \"\"\"keep in mind, the policy is cached for 5 minutes by default (TTL is configurable in the authorizer)\"\"\"\n \"\"\"and will apply to subsequent calls to any method/resource in the RestApi\"\"\"\n \"\"\"made with the same token\"\"\"\n\n \"\"\"the example policy below denies access to all resources in the RestApi\"\"\"\n tmp = event['methodArn'].split(':')\n apiGatewayArnTmp = tmp[5].split('/')\n awsAccountId = tmp[4]\n\n policy = AuthPolicy(principalId, awsAccountId)\n policy.restApiId = apiGatewayArnTmp[0]\n policy.region = tmp[3]\n policy.stage = apiGatewayArnTmp[1]\n\n # Blueprint denies all methods by default\n # policy.denyAllMethods()\n\n # Example allows all methods\n policy.allowAllMethods()\n\n \"\"\"policy.allowMethod(HttpVerb.GET, \"/pets/*\")\"\"\"\n\n \"\"\"finally, build the policy and exit the function using return\"\"\"\n return policy.build()\n\nclass HttpVerb:\n GET = \"GET\"\n POST = \"POST\"\n PUT = \"PUT\"\n PATCH = \"PATCH\"\n HEAD = \"HEAD\"\n DELETE = \"DELETE\"\n OPTIONS = \"OPTIONS\"\n ALL = \"*\"\n\nclass AuthPolicy:\n awsAccountId = \"\"\n \"\"\"The AWS account id the policy will be generated for. This is used to create the method ARNs.\"\"\"\n principalId = \"\"\n \"\"\"The principal used for the policy, this should be a unique identifier for the end user.\"\"\"\n version = \"2012-10-17\"\n \"\"\"The policy version used for the evaluation. This should always be '2012-10-17'\"\"\"\n pathRegex = r\"^[/.a-zA-Z0-9-\\*]+$\"\n \"\"\"The regular expression used to validate resource paths for the policy\"\"\"\n\n \"\"\"these are the internal lists of allowed and denied methods. These are lists\n of objects and each object has 2 properties: A resource ARN and a nullable\n conditions statement.\n the build method processes these lists and generates the appropriate\n statements for the final policy\"\"\"\n allowMethods = []\n denyMethods = []\n\n restApiId = \"*\"\n \"\"\"The API Gateway API id. By default this is set to '*'\"\"\"\n region = \"*\"\n \"\"\"The region where the API is deployed. By default this is set to '*'\"\"\"\n stage = \"*\"\n \"\"\"The name of the stage used in the policy. By default this is set to '*'\"\"\"\n\n def __init__(self, principal, awsAccountId):\n self.awsAccountId = awsAccountId\n self.principalId = principal\n self.allowMethods = []\n self.denyMethods = []\n\n def _addMethod(self, effect, verb, resource, conditions):\n \"\"\"Adds a method to the internal lists of allowed or denied methods. Each object in\n the internal list contains a resource ARN and a condition statement. The condition\n statement can be null.\"\"\"\n if verb != \"*\" and not hasattr(HttpVerb, verb):\n raise NameError(\"Invalid HTTP verb \" + verb + \". Allowed verbs in HttpVerb class\")\n resourcePattern = re.compile(self.pathRegex)\n if not resourcePattern.match(resource):\n raise NameError(\"Invalid resource path: \" + resource + \". Path should match \" + self.pathRegex)\n\n if resource[:1] == \"/\":\n resource = resource[1:]\n\n resourceArn = (\"arn:aws:execute-api:\" +\n self.region + \":\" +\n self.awsAccountId + \":\" +\n self.restApiId + \"/\" +\n self.stage + \"/\" +\n verb + \"/\" +\n resource)\n\n if effect.lower() == \"allow\":\n self.allowMethods.append({\n 'resourceArn' : resourceArn,\n 'conditions' : conditions\n })\n elif effect.lower() == \"deny\":\n self.denyMethods.append({\n 'resourceArn' : resourceArn,\n 'conditions' : conditions\n })\n\n def _getEmptyStatement(self, effect):\n \"\"\"Returns an empty statement object prepopulated with the correct action and the\n desired effect.\"\"\"\n statement = {\n 'Action': 'execute-api:Invoke',\n 'Effect': effect[:1].upper() + effect[1:].lower(),\n 'Resource': []\n }\n\n return statement\n\n def _getStatementForEffect(self, effect, methods):\n \"\"\"This function loops over an array of objects containing a resourceArn and\n conditions statement and generates the array of statements for the policy.\"\"\"\n statements = []\n\n if len(methods) > 0:\n statement = self._getEmptyStatement(effect)\n\n for curMethod in methods:\n if curMethod['conditions'] is None or len(curMethod['conditions']) == 0:\n statement['Resource'].append(curMethod['resourceArn'])\n else:\n conditionalStatement = self._getEmptyStatement(effect)\n conditionalStatement['Resource'].append(curMethod['resourceArn'])\n conditionalStatement['Condition'] = curMethod['conditions']\n statements.append(conditionalStatement)\n\n statements.append(statement)\n\n return statements\n\n def allowAllMethods(self):\n \"\"\"Adds a '*' allow to the policy to authorize access to all methods of an API\"\"\"\n self._addMethod(\"Allow\", HttpVerb.ALL, \"*\", [])\n\n def denyAllMethods(self):\n \"\"\"Adds a '*' allow to the policy to deny access to all methods of an API\"\"\"\n self._addMethod(\"Deny\", HttpVerb.ALL, \"*\", [])\n\n def allowMethod(self, verb, resource):\n \"\"\"Adds an API Gateway method (Http verb + Resource path) to the list of allowed\n methods for the policy\"\"\"\n self._addMethod(\"Allow\", verb, resource, [])\n\n def denyMethod(self, verb, resource):\n \"\"\"Adds an API Gateway method (Http verb + Resource path) to the list of denied\n methods for the policy\"\"\"\n self._addMethod(\"Deny\", verb, resource, [])\n\n def allowMethodWithConditions(self, verb, resource, conditions):\n \"\"\"Adds an API Gateway method (Http verb + Resource path) to the list of allowed\n methods and includes a condition for the policy statement. More on AWS policy\n conditions here: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Condition\"\"\"\n self._addMethod(\"Allow\", verb, resource, conditions)\n\n def denyMethodWithConditions(self, verb, resource, conditions):\n \"\"\"Adds an API Gateway method (Http verb + Resource path) to the list of denied\n methods and includes a condition for the policy statement. More on AWS policy\n conditions here: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Condition\"\"\"\n self._addMethod(\"Deny\", verb, resource, conditions)\n\n def build(self):\n \"\"\"Generates the policy document based on the internal lists of allowed and denied\n conditions. This will generate a policy with two main statements for the effect:\n one statement for Allow and one statement for Deny.\n Methods that includes conditions will have their own statement in the policy.\"\"\"\n if ((self.allowMethods is None or len(self.allowMethods) == 0) and\n (self.denyMethods is None or len(self.denyMethods) == 0)):\n raise NameError(\"No statements defined for the policy\")\n\n policy = {\n 'principalId' : self.principalId,\n 'policyDocument' : {\n 'Version' : self.version,\n 'Statement' : []\n }\n }\n\n policy['policyDocument']['Statement'].extend(self._getStatementForEffect(\"Allow\", self.allowMethods))\n policy['policyDocument']['Statement'].extend(self._getStatementForEffect(\"Deny\", self.denyMethods))\n\n return policy\n", "path": "example/authmodule.py"}]}
| 3,257 | 171 |
gh_patches_debug_7807
|
rasdani/github-patches
|
git_diff
|
locustio__locust-2609
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Report][Modern-UI] HTML report is blank
### Prerequisites
- [X] I am using [the latest version of Locust](https://github.com/locustio/locust/releases/)
- [X] I am reporting a bug, not asking a question
### Description
Run a test then open the HTML report -> Noticed that it is blank
Note: This bug occurs from 2.22.0, and did not occur on 2.21.0

### Command line
locust -f SimpleWeb.py -u 100 -r 10 -t 30s --html=samplelocust.html
### Locustfile contents
```python3
from locust import FastHttpUser, HttpUser, between, constant_pacing, events, task
from loguru import logger
class QuickstartUser(FastHttpUser):
wait_time = between(2, 5)
host = "http://127.0.0.1:5000"
# begin = time.time()
@task()
def get_tasks_1(self):
res = None
try:
payload = {}
headers = {"Cache-Control": "max-age=0, no-cache, no-store, must-revalidate"}
res = self.client.get("/api/tasks", headers=headers, data=payload, name="Get Tasks")
except Exception as exception:
logger.error(exception)
@task()
def post_lewin(self):
try:
payload = {}
headers = {"Cache-Control": "max-age=0, no-cache, no-store, must-revalidate"}
self.client.post("/api/lewin", headers=headers, data=payload, name="Post Lewin")
except Exception as exception:
logger.error(exception)
```
### Python version
3.9.18
### Locust version
2.23.1
### Operating system
macOS 14.2.1 (23C71)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/web_ui_auth.py`
Content:
```
1 """
2 Example of implementing authentication for Locust when the --web-login flag is given
3
4 This is only to serve as a starting point, proper authentication should be implemented
5 according to your projects specifications.
6
7 For more information, see https://docs.locust.io/en/stable/extending-locust.html#authentication
8 """
9 from locust import HttpUser, events
10
11 import json
12 import os
13
14 from flask import Blueprint, make_response, redirect, request, session, url_for
15 from flask_login import UserMixin, login_user
16
17
18 class LocustHttpUser(HttpUser):
19 pass
20
21
22 class AuthUser(UserMixin):
23 def __init__(self, username):
24 self.username = username
25
26 def get_id(self):
27 return self.username
28
29
30 auth_blueprint = Blueprint("auth", "web_ui_auth")
31
32
33 def load_user(user_id):
34 return AuthUser(session.get("username"))
35
36
37 @events.init.add_listener
38 def locust_init(environment, **kwargs):
39 if environment.web_ui:
40 environment.web_ui.login_manager.user_loader(load_user)
41
42 environment.web_ui.app.config["SECRET_KEY"] = os.getenv("FLASK_SECRET_KEY")
43
44 environment.web_ui.auth_args = {
45 "username_password_callback": "/login_submit",
46 "auth_providers": [
47 {
48 "label": "Github",
49 "callback_url": "/login/github",
50 "icon_url": "https://static-00.iconduck.com/assets.00/github-icon-1024x994-4h5sdmko.png",
51 },
52 ],
53 }
54
55 @auth_blueprint.route("/login/github")
56 def google_login():
57 # Implement authentication with desired auth provider
58 username = "username"
59 session["username"] = username
60 login_user(AuthUser("username"))
61
62 return redirect(url_for("index"))
63
64 @auth_blueprint.route("/login_submit")
65 def login_submit():
66 username = request.args.get("username")
67 password = request.args.get("password")
68
69 # Implement real password verification here
70 if password:
71 session["username"] = username
72 login_user(AuthUser(username))
73
74 return redirect(url_for("index"))
75
76 environment.web_ui.auth_args = {**environment.web_ui.auth_args, "error": "Invalid username or password"}
77
78 return redirect(url_for("login"))
79
80 environment.web_ui.app.register_blueprint(auth_blueprint)
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/web_ui_auth.py b/examples/web_ui_auth.py
--- a/examples/web_ui_auth.py
+++ b/examples/web_ui_auth.py
@@ -6,7 +6,7 @@
For more information, see https://docs.locust.io/en/stable/extending-locust.html#authentication
"""
-from locust import HttpUser, events
+from locust import HttpUser, events, task
import json
import os
@@ -16,7 +16,9 @@
class LocustHttpUser(HttpUser):
- pass
+ @task
+ def example(self):
+ self.client.get("/")
class AuthUser(UserMixin):
|
{"golden_diff": "diff --git a/examples/web_ui_auth.py b/examples/web_ui_auth.py\n--- a/examples/web_ui_auth.py\n+++ b/examples/web_ui_auth.py\n@@ -6,7 +6,7 @@\n \n For more information, see https://docs.locust.io/en/stable/extending-locust.html#authentication\n \"\"\"\n-from locust import HttpUser, events\n+from locust import HttpUser, events, task\n \n import json\n import os\n@@ -16,7 +16,9 @@\n \n \n class LocustHttpUser(HttpUser):\n- pass\n+ @task\n+ def example(self):\n+ self.client.get(\"/\")\n \n \n class AuthUser(UserMixin):\n", "issue": "[Report][Modern-UI] HTML report is blank\n### Prerequisites\n\n- [X] I am using [the latest version of Locust](https://github.com/locustio/locust/releases/)\n- [X] I am reporting a bug, not asking a question\n\n### Description\n\nRun a test then open the HTML report -> Noticed that it is blank\r\nNote: This bug occurs from 2.22.0, and did not occur on 2.21.0 \r\n\r\n\n\n### Command line\n\nlocust -f SimpleWeb.py -u 100 -r 10 -t 30s --html=samplelocust.html\n\n### Locustfile contents\n\n```python3\nfrom locust import FastHttpUser, HttpUser, between, constant_pacing, events, task\r\nfrom loguru import logger\r\n\r\n\r\nclass QuickstartUser(FastHttpUser):\r\n wait_time = between(2, 5)\r\n\r\n\r\n host = \"http://127.0.0.1:5000\"\r\n # begin = time.time()\r\n\r\n @task()\r\n def get_tasks_1(self):\r\n res = None\r\n try:\r\n payload = {}\r\n headers = {\"Cache-Control\": \"max-age=0, no-cache, no-store, must-revalidate\"}\r\n res = self.client.get(\"/api/tasks\", headers=headers, data=payload, name=\"Get Tasks\")\r\n except Exception as exception:\r\n logger.error(exception)\r\n\r\n @task()\r\n def post_lewin(self):\r\n try:\r\n payload = {}\r\n headers = {\"Cache-Control\": \"max-age=0, no-cache, no-store, must-revalidate\"}\r\n self.client.post(\"/api/lewin\", headers=headers, data=payload, name=\"Post Lewin\")\r\n except Exception as exception:\r\n logger.error(exception)\n```\n\n\n### Python version\n\n3.9.18\n\n### Locust version\n\n2.23.1\n\n### Operating system\n\nmacOS 14.2.1 (23C71)\n", "before_files": [{"content": "\"\"\"\nExample of implementing authentication for Locust when the --web-login flag is given\n\nThis is only to serve as a starting point, proper authentication should be implemented\naccording to your projects specifications.\n\nFor more information, see https://docs.locust.io/en/stable/extending-locust.html#authentication\n\"\"\"\nfrom locust import HttpUser, events\n\nimport json\nimport os\n\nfrom flask import Blueprint, make_response, redirect, request, session, url_for\nfrom flask_login import UserMixin, login_user\n\n\nclass LocustHttpUser(HttpUser):\n pass\n\n\nclass AuthUser(UserMixin):\n def __init__(self, username):\n self.username = username\n\n def get_id(self):\n return self.username\n\n\nauth_blueprint = Blueprint(\"auth\", \"web_ui_auth\")\n\n\ndef load_user(user_id):\n return AuthUser(session.get(\"username\"))\n\n\[email protected]_listener\ndef locust_init(environment, **kwargs):\n if environment.web_ui:\n environment.web_ui.login_manager.user_loader(load_user)\n\n environment.web_ui.app.config[\"SECRET_KEY\"] = os.getenv(\"FLASK_SECRET_KEY\")\n\n environment.web_ui.auth_args = {\n \"username_password_callback\": \"/login_submit\",\n \"auth_providers\": [\n {\n \"label\": \"Github\",\n \"callback_url\": \"/login/github\",\n \"icon_url\": \"https://static-00.iconduck.com/assets.00/github-icon-1024x994-4h5sdmko.png\",\n },\n ],\n }\n\n @auth_blueprint.route(\"/login/github\")\n def google_login():\n # Implement authentication with desired auth provider\n username = \"username\"\n session[\"username\"] = username\n login_user(AuthUser(\"username\"))\n\n return redirect(url_for(\"index\"))\n\n @auth_blueprint.route(\"/login_submit\")\n def login_submit():\n username = request.args.get(\"username\")\n password = request.args.get(\"password\")\n\n # Implement real password verification here\n if password:\n session[\"username\"] = username\n login_user(AuthUser(username))\n\n return redirect(url_for(\"index\"))\n\n environment.web_ui.auth_args = {**environment.web_ui.auth_args, \"error\": \"Invalid username or password\"}\n\n return redirect(url_for(\"login\"))\n\n environment.web_ui.app.register_blueprint(auth_blueprint)\n", "path": "examples/web_ui_auth.py"}], "after_files": [{"content": "\"\"\"\nExample of implementing authentication for Locust when the --web-login flag is given\n\nThis is only to serve as a starting point, proper authentication should be implemented\naccording to your projects specifications.\n\nFor more information, see https://docs.locust.io/en/stable/extending-locust.html#authentication\n\"\"\"\nfrom locust import HttpUser, events, task\n\nimport json\nimport os\n\nfrom flask import Blueprint, make_response, redirect, request, session, url_for\nfrom flask_login import UserMixin, login_user\n\n\nclass LocustHttpUser(HttpUser):\n @task\n def example(self):\n self.client.get(\"/\")\n\n\nclass AuthUser(UserMixin):\n def __init__(self, username):\n self.username = username\n\n def get_id(self):\n return self.username\n\n\nauth_blueprint = Blueprint(\"auth\", \"web_ui_auth\")\n\n\ndef load_user(user_id):\n return AuthUser(session.get(\"username\"))\n\n\[email protected]_listener\ndef locust_init(environment, **kwargs):\n if environment.web_ui:\n environment.web_ui.login_manager.user_loader(load_user)\n\n environment.web_ui.app.config[\"SECRET_KEY\"] = os.getenv(\"FLASK_SECRET_KEY\")\n\n environment.web_ui.auth_args = {\n \"username_password_callback\": \"/login_submit\",\n \"auth_providers\": [\n {\n \"label\": \"Github\",\n \"callback_url\": \"/login/github\",\n \"icon_url\": \"https://static-00.iconduck.com/assets.00/github-icon-1024x994-4h5sdmko.png\",\n },\n ],\n }\n\n @auth_blueprint.route(\"/login/github\")\n def google_login():\n # Implement authentication with desired auth provider\n username = \"username\"\n session[\"username\"] = username\n login_user(AuthUser(\"username\"))\n\n return redirect(url_for(\"index\"))\n\n @auth_blueprint.route(\"/login_submit\")\n def login_submit():\n username = request.args.get(\"username\")\n password = request.args.get(\"password\")\n\n # Implement real password verification here\n if password:\n session[\"username\"] = username\n login_user(AuthUser(username))\n\n return redirect(url_for(\"index\"))\n\n environment.web_ui.auth_args = {**environment.web_ui.auth_args, \"error\": \"Invalid username or password\"}\n\n return redirect(url_for(\"login\"))\n\n environment.web_ui.app.register_blueprint(auth_blueprint)\n", "path": "examples/web_ui_auth.py"}]}
| 1,400 | 143 |
gh_patches_debug_2867
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-15341
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use of relative PATH for docker-tool shims prevents use of credential helpers
**Describe the bug**
I'm trying to set up [tools](https://www.pantsbuild.org/docs/reference-docker#section-tools) in my repo's `docker` subsystem, to plug in the [ECR credential helper](https://github.com/awslabs/amazon-ecr-credential-helper). To do so I added the following to `pants.toml`:
```toml
[docker]
tools = ["docker-credential-ecr-login", "sh"]
```
When I run `./pants package path/to/Dockerfile`, I get the error:
```
failed to solve with frontend dockerfile.v0: failed to create LLB definition: rpc error: code = Unknown desc = error getting credentials - err: docker-credential-ecr-login resolves to executable in current directory (./.shims/bin/docker-credential-ecr-login), out: ``
```
If I run the above with `--no-process-cleanup` and `cd` into the tmpdir, I see:
1. There are shims for both tools under `.shims/bin`
2. The shims behave as expected when I use them directly
3. `__run.sh` sets `PATH=.shims/bin`
If I edit `__run.sh` to instead set `PATH=<absolute-path-to-tmpdir>/.shims/bin`, the build works.
**Pants version**
2.11.0+git9ac327d4
**OS**
MacOS
**Additional info**
Docker Desktop v4.7.1 (77678)
Docker Engine v20.10.14
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/backend/docker/util_rules/docker_binary.py`
Content:
```
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import os
7 from dataclasses import dataclass
8 from typing import Mapping
9
10 from pants.backend.docker.subsystems.docker_options import DockerOptions
11 from pants.backend.docker.util_rules.docker_build_args import DockerBuildArgs
12 from pants.core.util_rules.system_binaries import (
13 BinaryPath,
14 BinaryPathRequest,
15 BinaryPaths,
16 BinaryPathTest,
17 BinaryShims,
18 BinaryShimsRequest,
19 )
20 from pants.engine.environment import Environment, EnvironmentRequest
21 from pants.engine.fs import Digest
22 from pants.engine.process import Process, ProcessCacheScope
23 from pants.engine.rules import Get, collect_rules, rule
24 from pants.util.logging import LogLevel
25 from pants.util.strutil import pluralize
26
27
28 # The base class is decorated with `frozen_after_init`.
29 @dataclass
30 class DockerBinary(BinaryPath):
31 """The `docker` binary."""
32
33 extra_env: Mapping[str, str]
34 extra_input_digests: Mapping[str, Digest] | None
35
36 def __init__(
37 self,
38 path: str,
39 fingerprint: str | None = None,
40 extra_env: Mapping[str, str] | None = None,
41 extra_input_digests: Mapping[str, Digest] | None = None,
42 ) -> None:
43 self.extra_env = {} if extra_env is None else extra_env
44 self.extra_input_digests = extra_input_digests
45 super().__init__(path, fingerprint)
46
47 def _get_process_environment(self, env: Mapping[str, str]) -> Mapping[str, str]:
48 if not self.extra_env:
49 return env
50
51 res = {**self.extra_env, **env}
52
53 # Merge the PATH entries, in case they are present in both `env` and `self.extra_env`.
54 res["PATH"] = os.pathsep.join(
55 p for p in (m.get("PATH") for m in (self.extra_env, env)) if p
56 )
57 return res
58
59 def build_image(
60 self,
61 tags: tuple[str, ...],
62 digest: Digest,
63 dockerfile: str,
64 build_args: DockerBuildArgs,
65 context_root: str,
66 env: Mapping[str, str],
67 extra_args: tuple[str, ...] = (),
68 ) -> Process:
69 args = [self.path, "build", *extra_args]
70
71 for tag in tags:
72 args.extend(["--tag", tag])
73
74 for build_arg in build_args:
75 args.extend(["--build-arg", build_arg])
76
77 args.extend(["--file", dockerfile])
78
79 # Docker context root.
80 args.append(context_root)
81
82 return Process(
83 argv=tuple(args),
84 description=(
85 f"Building docker image {tags[0]}"
86 + (f" +{pluralize(len(tags)-1, 'additional tag')}." if len(tags) > 1 else "")
87 ),
88 env=self._get_process_environment(env),
89 input_digest=digest,
90 immutable_input_digests=self.extra_input_digests,
91 cache_scope=ProcessCacheScope.PER_SESSION,
92 )
93
94 def push_image(self, tag: str, env: Mapping[str, str] | None = None) -> Process:
95 return Process(
96 argv=(self.path, "push", tag),
97 cache_scope=ProcessCacheScope.PER_SESSION,
98 description=f"Pushing docker image {tag}",
99 env=self._get_process_environment(env or {}),
100 immutable_input_digests=self.extra_input_digests,
101 )
102
103 def run_image(
104 self,
105 tag: str,
106 *,
107 docker_run_args: tuple[str, ...] | None = None,
108 image_args: tuple[str, ...] | None = None,
109 env: Mapping[str, str] | None = None,
110 ) -> Process:
111 return Process(
112 argv=(self.path, "run", *(docker_run_args or []), tag, *(image_args or [])),
113 cache_scope=ProcessCacheScope.PER_SESSION,
114 description=f"Running docker image {tag}",
115 env=self._get_process_environment(env or {}),
116 immutable_input_digests=self.extra_input_digests,
117 )
118
119
120 @dataclass(frozen=True)
121 class DockerBinaryRequest:
122 pass
123
124
125 @rule(desc="Finding the `docker` binary and related tooling", level=LogLevel.DEBUG)
126 async def find_docker(
127 docker_request: DockerBinaryRequest, docker_options: DockerOptions
128 ) -> DockerBinary:
129 env = await Get(Environment, EnvironmentRequest(["PATH"]))
130 search_path = docker_options.executable_search_path(env)
131 request = BinaryPathRequest(
132 binary_name="docker",
133 search_path=search_path,
134 test=BinaryPathTest(args=["-v"]),
135 )
136 paths = await Get(BinaryPaths, BinaryPathRequest, request)
137 first_path = paths.first_path_or_raise(request, rationale="interact with the docker daemon")
138
139 if not docker_options.tools:
140 return DockerBinary(first_path.path, first_path.fingerprint)
141
142 tools = await Get(
143 BinaryShims,
144 BinaryShimsRequest,
145 BinaryShimsRequest.for_binaries(
146 *docker_options.tools,
147 rationale="use docker",
148 output_directory="bin",
149 search_path=search_path,
150 ),
151 )
152 tools_path = ".shims"
153 extra_env = {"PATH": os.path.join(tools_path, tools.bin_directory)}
154 extra_input_digests = {tools_path: tools.digest}
155
156 return DockerBinary(
157 first_path.path,
158 first_path.fingerprint,
159 extra_env=extra_env,
160 extra_input_digests=extra_input_digests,
161 )
162
163
164 @rule
165 async def get_docker() -> DockerBinary:
166 return await Get(DockerBinary, DockerBinaryRequest())
167
168
169 def rules():
170 return collect_rules()
171
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/python/pants/backend/docker/util_rules/docker_binary.py b/src/python/pants/backend/docker/util_rules/docker_binary.py
--- a/src/python/pants/backend/docker/util_rules/docker_binary.py
+++ b/src/python/pants/backend/docker/util_rules/docker_binary.py
@@ -150,7 +150,7 @@
),
)
tools_path = ".shims"
- extra_env = {"PATH": os.path.join(tools_path, tools.bin_directory)}
+ extra_env = {"PATH": os.path.join("{chroot}", tools_path, tools.bin_directory)}
extra_input_digests = {tools_path: tools.digest}
return DockerBinary(
|
{"golden_diff": "diff --git a/src/python/pants/backend/docker/util_rules/docker_binary.py b/src/python/pants/backend/docker/util_rules/docker_binary.py\n--- a/src/python/pants/backend/docker/util_rules/docker_binary.py\n+++ b/src/python/pants/backend/docker/util_rules/docker_binary.py\n@@ -150,7 +150,7 @@\n ),\n )\n tools_path = \".shims\"\n- extra_env = {\"PATH\": os.path.join(tools_path, tools.bin_directory)}\n+ extra_env = {\"PATH\": os.path.join(\"{chroot}\", tools_path, tools.bin_directory)}\n extra_input_digests = {tools_path: tools.digest}\n \n return DockerBinary(\n", "issue": "Use of relative PATH for docker-tool shims prevents use of credential helpers\n**Describe the bug**\r\nI'm trying to set up [tools](https://www.pantsbuild.org/docs/reference-docker#section-tools) in my repo's `docker` subsystem, to plug in the [ECR credential helper](https://github.com/awslabs/amazon-ecr-credential-helper). To do so I added the following to `pants.toml`:\r\n```toml\r\n[docker]\r\ntools = [\"docker-credential-ecr-login\", \"sh\"]\r\n```\r\nWhen I run `./pants package path/to/Dockerfile`, I get the error:\r\n```\r\nfailed to solve with frontend dockerfile.v0: failed to create LLB definition: rpc error: code = Unknown desc = error getting credentials - err: docker-credential-ecr-login resolves to executable in current directory (./.shims/bin/docker-credential-ecr-login), out: ``\r\n```\r\nIf I run the above with `--no-process-cleanup` and `cd` into the tmpdir, I see:\r\n1. There are shims for both tools under `.shims/bin`\r\n2. The shims behave as expected when I use them directly\r\n3. `__run.sh` sets `PATH=.shims/bin`\r\n\r\nIf I edit `__run.sh` to instead set `PATH=<absolute-path-to-tmpdir>/.shims/bin`, the build works.\r\n\r\n**Pants version**\r\n2.11.0+git9ac327d4\r\n\r\n**OS**\r\nMacOS\r\n\r\n**Additional info**\r\nDocker Desktop v4.7.1 (77678)\r\nDocker Engine v20.10.14\r\n\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport os\nfrom dataclasses import dataclass\nfrom typing import Mapping\n\nfrom pants.backend.docker.subsystems.docker_options import DockerOptions\nfrom pants.backend.docker.util_rules.docker_build_args import DockerBuildArgs\nfrom pants.core.util_rules.system_binaries import (\n BinaryPath,\n BinaryPathRequest,\n BinaryPaths,\n BinaryPathTest,\n BinaryShims,\n BinaryShimsRequest,\n)\nfrom pants.engine.environment import Environment, EnvironmentRequest\nfrom pants.engine.fs import Digest\nfrom pants.engine.process import Process, ProcessCacheScope\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.util.logging import LogLevel\nfrom pants.util.strutil import pluralize\n\n\n# The base class is decorated with `frozen_after_init`.\n@dataclass\nclass DockerBinary(BinaryPath):\n \"\"\"The `docker` binary.\"\"\"\n\n extra_env: Mapping[str, str]\n extra_input_digests: Mapping[str, Digest] | None\n\n def __init__(\n self,\n path: str,\n fingerprint: str | None = None,\n extra_env: Mapping[str, str] | None = None,\n extra_input_digests: Mapping[str, Digest] | None = None,\n ) -> None:\n self.extra_env = {} if extra_env is None else extra_env\n self.extra_input_digests = extra_input_digests\n super().__init__(path, fingerprint)\n\n def _get_process_environment(self, env: Mapping[str, str]) -> Mapping[str, str]:\n if not self.extra_env:\n return env\n\n res = {**self.extra_env, **env}\n\n # Merge the PATH entries, in case they are present in both `env` and `self.extra_env`.\n res[\"PATH\"] = os.pathsep.join(\n p for p in (m.get(\"PATH\") for m in (self.extra_env, env)) if p\n )\n return res\n\n def build_image(\n self,\n tags: tuple[str, ...],\n digest: Digest,\n dockerfile: str,\n build_args: DockerBuildArgs,\n context_root: str,\n env: Mapping[str, str],\n extra_args: tuple[str, ...] = (),\n ) -> Process:\n args = [self.path, \"build\", *extra_args]\n\n for tag in tags:\n args.extend([\"--tag\", tag])\n\n for build_arg in build_args:\n args.extend([\"--build-arg\", build_arg])\n\n args.extend([\"--file\", dockerfile])\n\n # Docker context root.\n args.append(context_root)\n\n return Process(\n argv=tuple(args),\n description=(\n f\"Building docker image {tags[0]}\"\n + (f\" +{pluralize(len(tags)-1, 'additional tag')}.\" if len(tags) > 1 else \"\")\n ),\n env=self._get_process_environment(env),\n input_digest=digest,\n immutable_input_digests=self.extra_input_digests,\n cache_scope=ProcessCacheScope.PER_SESSION,\n )\n\n def push_image(self, tag: str, env: Mapping[str, str] | None = None) -> Process:\n return Process(\n argv=(self.path, \"push\", tag),\n cache_scope=ProcessCacheScope.PER_SESSION,\n description=f\"Pushing docker image {tag}\",\n env=self._get_process_environment(env or {}),\n immutable_input_digests=self.extra_input_digests,\n )\n\n def run_image(\n self,\n tag: str,\n *,\n docker_run_args: tuple[str, ...] | None = None,\n image_args: tuple[str, ...] | None = None,\n env: Mapping[str, str] | None = None,\n ) -> Process:\n return Process(\n argv=(self.path, \"run\", *(docker_run_args or []), tag, *(image_args or [])),\n cache_scope=ProcessCacheScope.PER_SESSION,\n description=f\"Running docker image {tag}\",\n env=self._get_process_environment(env or {}),\n immutable_input_digests=self.extra_input_digests,\n )\n\n\n@dataclass(frozen=True)\nclass DockerBinaryRequest:\n pass\n\n\n@rule(desc=\"Finding the `docker` binary and related tooling\", level=LogLevel.DEBUG)\nasync def find_docker(\n docker_request: DockerBinaryRequest, docker_options: DockerOptions\n) -> DockerBinary:\n env = await Get(Environment, EnvironmentRequest([\"PATH\"]))\n search_path = docker_options.executable_search_path(env)\n request = BinaryPathRequest(\n binary_name=\"docker\",\n search_path=search_path,\n test=BinaryPathTest(args=[\"-v\"]),\n )\n paths = await Get(BinaryPaths, BinaryPathRequest, request)\n first_path = paths.first_path_or_raise(request, rationale=\"interact with the docker daemon\")\n\n if not docker_options.tools:\n return DockerBinary(first_path.path, first_path.fingerprint)\n\n tools = await Get(\n BinaryShims,\n BinaryShimsRequest,\n BinaryShimsRequest.for_binaries(\n *docker_options.tools,\n rationale=\"use docker\",\n output_directory=\"bin\",\n search_path=search_path,\n ),\n )\n tools_path = \".shims\"\n extra_env = {\"PATH\": os.path.join(tools_path, tools.bin_directory)}\n extra_input_digests = {tools_path: tools.digest}\n\n return DockerBinary(\n first_path.path,\n first_path.fingerprint,\n extra_env=extra_env,\n extra_input_digests=extra_input_digests,\n )\n\n\n@rule\nasync def get_docker() -> DockerBinary:\n return await Get(DockerBinary, DockerBinaryRequest())\n\n\ndef rules():\n return collect_rules()\n", "path": "src/python/pants/backend/docker/util_rules/docker_binary.py"}], "after_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport os\nfrom dataclasses import dataclass\nfrom typing import Mapping\n\nfrom pants.backend.docker.subsystems.docker_options import DockerOptions\nfrom pants.backend.docker.util_rules.docker_build_args import DockerBuildArgs\nfrom pants.core.util_rules.system_binaries import (\n BinaryPath,\n BinaryPathRequest,\n BinaryPaths,\n BinaryPathTest,\n BinaryShims,\n BinaryShimsRequest,\n)\nfrom pants.engine.environment import Environment, EnvironmentRequest\nfrom pants.engine.fs import Digest\nfrom pants.engine.process import Process, ProcessCacheScope\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.util.logging import LogLevel\nfrom pants.util.strutil import pluralize\n\n\n# The base class is decorated with `frozen_after_init`.\n@dataclass\nclass DockerBinary(BinaryPath):\n \"\"\"The `docker` binary.\"\"\"\n\n extra_env: Mapping[str, str]\n extra_input_digests: Mapping[str, Digest] | None\n\n def __init__(\n self,\n path: str,\n fingerprint: str | None = None,\n extra_env: Mapping[str, str] | None = None,\n extra_input_digests: Mapping[str, Digest] | None = None,\n ) -> None:\n self.extra_env = {} if extra_env is None else extra_env\n self.extra_input_digests = extra_input_digests\n super().__init__(path, fingerprint)\n\n def _get_process_environment(self, env: Mapping[str, str]) -> Mapping[str, str]:\n if not self.extra_env:\n return env\n\n res = {**self.extra_env, **env}\n\n # Merge the PATH entries, in case they are present in both `env` and `self.extra_env`.\n res[\"PATH\"] = os.pathsep.join(\n p for p in (m.get(\"PATH\") for m in (self.extra_env, env)) if p\n )\n return res\n\n def build_image(\n self,\n tags: tuple[str, ...],\n digest: Digest,\n dockerfile: str,\n build_args: DockerBuildArgs,\n context_root: str,\n env: Mapping[str, str],\n extra_args: tuple[str, ...] = (),\n ) -> Process:\n args = [self.path, \"build\", *extra_args]\n\n for tag in tags:\n args.extend([\"--tag\", tag])\n\n for build_arg in build_args:\n args.extend([\"--build-arg\", build_arg])\n\n args.extend([\"--file\", dockerfile])\n\n # Docker context root.\n args.append(context_root)\n\n return Process(\n argv=tuple(args),\n description=(\n f\"Building docker image {tags[0]}\"\n + (f\" +{pluralize(len(tags)-1, 'additional tag')}.\" if len(tags) > 1 else \"\")\n ),\n env=self._get_process_environment(env),\n input_digest=digest,\n immutable_input_digests=self.extra_input_digests,\n cache_scope=ProcessCacheScope.PER_SESSION,\n )\n\n def push_image(self, tag: str, env: Mapping[str, str] | None = None) -> Process:\n return Process(\n argv=(self.path, \"push\", tag),\n cache_scope=ProcessCacheScope.PER_SESSION,\n description=f\"Pushing docker image {tag}\",\n env=self._get_process_environment(env or {}),\n immutable_input_digests=self.extra_input_digests,\n )\n\n def run_image(\n self,\n tag: str,\n *,\n docker_run_args: tuple[str, ...] | None = None,\n image_args: tuple[str, ...] | None = None,\n env: Mapping[str, str] | None = None,\n ) -> Process:\n return Process(\n argv=(self.path, \"run\", *(docker_run_args or []), tag, *(image_args or [])),\n cache_scope=ProcessCacheScope.PER_SESSION,\n description=f\"Running docker image {tag}\",\n env=self._get_process_environment(env or {}),\n immutable_input_digests=self.extra_input_digests,\n )\n\n\n@dataclass(frozen=True)\nclass DockerBinaryRequest:\n pass\n\n\n@rule(desc=\"Finding the `docker` binary and related tooling\", level=LogLevel.DEBUG)\nasync def find_docker(\n docker_request: DockerBinaryRequest, docker_options: DockerOptions\n) -> DockerBinary:\n env = await Get(Environment, EnvironmentRequest([\"PATH\"]))\n search_path = docker_options.executable_search_path(env)\n request = BinaryPathRequest(\n binary_name=\"docker\",\n search_path=search_path,\n test=BinaryPathTest(args=[\"-v\"]),\n )\n paths = await Get(BinaryPaths, BinaryPathRequest, request)\n first_path = paths.first_path_or_raise(request, rationale=\"interact with the docker daemon\")\n\n if not docker_options.tools:\n return DockerBinary(first_path.path, first_path.fingerprint)\n\n tools = await Get(\n BinaryShims,\n BinaryShimsRequest,\n BinaryShimsRequest.for_binaries(\n *docker_options.tools,\n rationale=\"use docker\",\n output_directory=\"bin\",\n search_path=search_path,\n ),\n )\n tools_path = \".shims\"\n extra_env = {\"PATH\": os.path.join(\"{chroot}\", tools_path, tools.bin_directory)}\n extra_input_digests = {tools_path: tools.digest}\n\n return DockerBinary(\n first_path.path,\n first_path.fingerprint,\n extra_env=extra_env,\n extra_input_digests=extra_input_digests,\n )\n\n\n@rule\nasync def get_docker() -> DockerBinary:\n return await Get(DockerBinary, DockerBinaryRequest())\n\n\ndef rules():\n return collect_rules()\n", "path": "src/python/pants/backend/docker/util_rules/docker_binary.py"}]}
| 2,268 | 142 |
gh_patches_debug_21237
|
rasdani/github-patches
|
git_diff
|
OpenCTI-Platform__connectors-1121
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error KeyError: 'value' in Shodan-InternetDB connector
## Description
We get the following error in Shodan-InternetDB connector for every IP we try to process:
INFO:root:Reading StixCyberObservable {f14b0557-269b-478c-822d-dd206ce88060}.
ERROR:root:Error in message processing, reporting error to API
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/pycti/connector/opencti_connector_helper.py", line 181, in _data_handler
message = self.callback(json_data["event"])
File "/opt/opencti/connectors/internal-enrichment/shodan-internetdb/src/shodan_internetdb/connector.py", line 103, in _process_message
value = observable["value"]
KeyError: 'value'
INFO:root:Reporting work update_received work_6cbd1a73-9cfb-4825-9554-929cc42df702_2023-04-21T11:35:40.994Z
INFO:root:Message (delivery_tag=1) processed, thread terminated
## Environment
1. OS (where OpenCTI server runs): Ubuntu 22
2. OpenCTI version: 5.7.2
3. OpenCTI client: python
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `internal-enrichment/shodan-internetdb/src/shodan_internetdb/connector.py`
Content:
```
1 """Shodan InternetDB connector"""
2
3 from __future__ import annotations
4
5 import logging
6 from datetime import datetime, timedelta
7 from pathlib import Path
8 from typing import Any, Dict, List, Union
9
10 import pycti
11 import stix2
12 import validators
13 import yaml
14 from pycti.connector.opencti_connector_helper import OpenCTIConnectorHelper
15 from requests.exceptions import RequestException
16
17 from .client import ShodanInternetDbClient, ShodanResult
18 from .config import RootConfig
19
20 __all__ = [
21 "ShodanInternetDBConnector",
22 ]
23
24 log = logging.getLogger(__name__)
25
26
27 class ShodanInternetDBConnector:
28 """Shodan InternetDB connector"""
29
30 def __init__(self):
31 """Constructor"""
32 config_path = Path(__file__).parent.parent.joinpath("config.yml")
33 config = (
34 yaml.load(config_path.open(), Loader=yaml.SafeLoader)
35 if config_path.is_file()
36 else {}
37 )
38
39 self._config = RootConfig.parse_obj(config)
40 self._helper = OpenCTIConnectorHelper(config)
41
42 self._identity = self._helper.api.identity.create(
43 type="Organization",
44 name="Shodan",
45 description="Shodan is a search engine for Internet-connected devices.",
46 )
47 self._identity_id = self._identity["standard_id"]
48 self._object_marking_id = stix2.TLP_WHITE["id"]
49
50 self._client = ShodanInternetDbClient(verify=self._config.shodan.ssl_verify)
51
52 def start(self) -> None:
53 """
54 Start the connector
55 :return: None
56 """
57 self._helper.listen(self._process_message)
58
59 def _process_message(self, data: Dict[str, Any]) -> str:
60 """
61 Process the data message
62 :param data: Entity data
63 :return: None
64 """
65 # Fetch the observable being processed
66 entity_id = data["entity_id"]
67
68 custom_attributes = """
69 id
70 entity_type
71 objectMarking {
72 edges {
73 node {
74 id
75 definition_type
76 definition
77 }
78 }
79 }
80 observable_value
81 """
82 observable = self._helper.api.stix_cyber_observable.read(
83 id=entity_id, customAttributes=custom_attributes
84 )
85
86 if observable is None:
87 log.error("Observable not found with entity_id %s", entity_id)
88 return "Observable not found"
89
90 # Check TLP markings, do not submit higher than the max allowed
91 tlps = ["TLP:CLEAR"]
92 for marking_definition in observable.get("objectMarking", []):
93 if marking_definition["definition_type"] == "TLP":
94 tlps.append(marking_definition["definition"])
95
96 for tlp in tlps:
97 max_tlp_name = self._config.shodan.max_tlp.name
98 if not OpenCTIConnectorHelper.check_max_tlp(tlp, max_tlp_name):
99 log.debug("Skipping observable, TLP is greater than the MAX TLP")
100 return "Skipping observable (TLP)"
101
102 # Process the observable value
103 value = observable["value"]
104 if not validators.ipv4(value):
105 log.error("Observable value is not an IPv4 address")
106 return "Skipping observable (ipv4 validation)"
107
108 try:
109 result = self._client.query(value)
110 except RequestException:
111 log.exception("Shodan API error")
112 return "Skipping observable (Shodan API error)"
113
114 if result is None:
115 log.debug("No information available on %s", value)
116 return "Skipping observable (Shodan 404)"
117
118 # Process the result
119 log.debug("Processing %s", value)
120 self._process_domains(observable, result)
121 self._process_tags(observable, result)
122 self._process_vulns(observable, result)
123 self._process_note(observable, result)
124
125 return "Success"
126
127 def _process_note(
128 self,
129 observable: Dict[str, Any],
130 result: ShodanResult,
131 ) -> None:
132 """
133 Add an enrichment note to the observable
134 :param observable: Observable data
135 :param result: Shodan data
136 :return: None
137 """
138
139 def format_list(alist: List[Union[str, int]]) -> str:
140 """Format a list of primitives into a Markdown list"""
141 return "".join(f"\n- {name}" for name in alist) or "n/a"
142
143 value = observable["value"]
144 abstract = f"Shodan InternetDB enrichment of {value}"
145 content = f"""```
146 Shodan InternetDB:
147 ------------------
148 Hostnames: {format_list(result.hostnames)}
149 ------------------
150 Software: {format_list(result.cpes)}
151 ------------------
152 Vulnerabilities: {format_list(result.vulns)}
153 ------------------
154 Ports: {format_list(result.ports)}
155 ------------------
156 ```
157 """
158
159 self._helper.api.note.create(
160 stix_id=pycti.Note.generate_id(datetime.now().isoformat(), content),
161 createdBy=self._identity_id,
162 objectMarking=[self._object_marking_id],
163 confidence=self._helper.connect_confidence_level,
164 objects=[observable["id"]],
165 authors=[self._identity_id],
166 abstract=abstract,
167 content=content,
168 )
169
170 def _process_domains(
171 self,
172 observable: Dict[str, Any],
173 result: ShodanResult,
174 ) -> None:
175 """
176 Add additional domains to the observable
177 :param observable: Observable data
178 :param result: Shodan data
179 :return: None
180 """
181
182 markings = observable["objectMarkingIds"]
183 for name in result.hostnames:
184 log.debug("Adding domain %s", name)
185 domain = self._helper.api.stix_cyber_observable.create(
186 observableData=dict(
187 type="Domain-Name",
188 value=name,
189 ),
190 objectMarking=markings,
191 createdBy=self._identity_id,
192 update=True,
193 )
194
195 log.debug("Creating domain relationship")
196 self._helper.api.stix_nested_ref_relationship.create(
197 fromId=domain["id"],
198 toId=observable["id"],
199 relationship_type="resolves-to",
200 createdBy=self._identity_id,
201 objectMarking=markings,
202 confidence=self._helper.connect_confidence_level,
203 update=True,
204 )
205
206 def _process_tags(
207 self,
208 observable: Dict[str, Any],
209 result: ShodanResult,
210 ) -> None:
211 """
212 Add additional tags to the observable
213 :param observable: Observable data
214 :param result: Shodan data
215 :return: None
216 """
217
218 for name in result.tags:
219 log.debug("Creating label %s", name)
220 label = self._helper.api.label.create(value=name)
221
222 log.debug("Adding to observable")
223 self._helper.api.stix_cyber_observable.add_label(
224 id=observable["id"],
225 label_id=label["id"],
226 )
227
228 def _process_vulns(
229 self,
230 observable: Dict[str, Any],
231 result: ShodanResult,
232 ) -> None:
233 """
234 Add additional vulnerabilities to the observable
235 :param observable: Observable data
236 :param result: Shodan data
237 :return: None
238 """
239 now = datetime.utcnow()
240 vuln_eol = now + timedelta(days=60)
241
242 for name in result.vulns:
243 log.debug("Creating vulnerability %s", name)
244 vuln = self._helper.api.vulnerability.create(
245 stix_id=pycti.Vulnerability.generate_id(name),
246 name=name,
247 createdBy=self._identity_id,
248 objectMarking=[self._object_marking_id],
249 confidence=self._helper.connect_confidence_level,
250 update=True,
251 )
252
253 log.debug("Creating vulnerability relationship")
254 self._helper.api.stix_core_relationship.create(
255 fromId=observable["id"],
256 toId=vuln["id"],
257 relationship_type="related-to",
258 createdBy=self._identity_id,
259 start_time=now.strftime("%Y-%m-%dT%H:%M:%SZ"),
260 stop_time=vuln_eol.strftime("%Y-%m-%dT%H:%M:%SZ"),
261 confidence=self._helper.connect_confidence_level,
262 objectMarking=[self._object_marking_id],
263 update=True,
264 )
265
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/internal-enrichment/shodan-internetdb/src/shodan_internetdb/connector.py b/internal-enrichment/shodan-internetdb/src/shodan_internetdb/connector.py
--- a/internal-enrichment/shodan-internetdb/src/shodan_internetdb/connector.py
+++ b/internal-enrichment/shodan-internetdb/src/shodan_internetdb/connector.py
@@ -100,7 +100,7 @@
return "Skipping observable (TLP)"
# Process the observable value
- value = observable["value"]
+ value = observable["observable_value"]
if not validators.ipv4(value):
log.error("Observable value is not an IPv4 address")
return "Skipping observable (ipv4 validation)"
@@ -140,7 +140,7 @@
"""Format a list of primitives into a Markdown list"""
return "".join(f"\n- {name}" for name in alist) or "n/a"
- value = observable["value"]
+ value = observable["observable_value"]
abstract = f"Shodan InternetDB enrichment of {value}"
content = f"""```
Shodan InternetDB:
|
{"golden_diff": "diff --git a/internal-enrichment/shodan-internetdb/src/shodan_internetdb/connector.py b/internal-enrichment/shodan-internetdb/src/shodan_internetdb/connector.py\n--- a/internal-enrichment/shodan-internetdb/src/shodan_internetdb/connector.py\n+++ b/internal-enrichment/shodan-internetdb/src/shodan_internetdb/connector.py\n@@ -100,7 +100,7 @@\n return \"Skipping observable (TLP)\"\n \n # Process the observable value\n- value = observable[\"value\"]\n+ value = observable[\"observable_value\"]\n if not validators.ipv4(value):\n log.error(\"Observable value is not an IPv4 address\")\n return \"Skipping observable (ipv4 validation)\"\n@@ -140,7 +140,7 @@\n \"\"\"Format a list of primitives into a Markdown list\"\"\"\n return \"\".join(f\"\\n- {name}\" for name in alist) or \"n/a\"\n \n- value = observable[\"value\"]\n+ value = observable[\"observable_value\"]\n abstract = f\"Shodan InternetDB enrichment of {value}\"\n content = f\"\"\"```\n Shodan InternetDB:\n", "issue": "Error KeyError: 'value' in Shodan-InternetDB connector\n## Description\r\n\r\nWe get the following error in Shodan-InternetDB connector for every IP we try to process:\r\n\r\nINFO:root:Reading StixCyberObservable {f14b0557-269b-478c-822d-dd206ce88060}.\r\nERROR:root:Error in message processing, reporting error to API\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/pycti/connector/opencti_connector_helper.py\", line 181, in _data_handler\r\n message = self.callback(json_data[\"event\"])\r\n File \"/opt/opencti/connectors/internal-enrichment/shodan-internetdb/src/shodan_internetdb/connector.py\", line 103, in _process_message\r\n value = observable[\"value\"]\r\nKeyError: 'value'\r\nINFO:root:Reporting work update_received work_6cbd1a73-9cfb-4825-9554-929cc42df702_2023-04-21T11:35:40.994Z\r\nINFO:root:Message (delivery_tag=1) processed, thread terminated\r\n\r\n## Environment\r\n\r\n1. OS (where OpenCTI server runs): Ubuntu 22\r\n2. OpenCTI version: 5.7.2\r\n3. OpenCTI client: python\r\n\n", "before_files": [{"content": "\"\"\"Shodan InternetDB connector\"\"\"\n\nfrom __future__ import annotations\n\nimport logging\nfrom datetime import datetime, timedelta\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Union\n\nimport pycti\nimport stix2\nimport validators\nimport yaml\nfrom pycti.connector.opencti_connector_helper import OpenCTIConnectorHelper\nfrom requests.exceptions import RequestException\n\nfrom .client import ShodanInternetDbClient, ShodanResult\nfrom .config import RootConfig\n\n__all__ = [\n \"ShodanInternetDBConnector\",\n]\n\nlog = logging.getLogger(__name__)\n\n\nclass ShodanInternetDBConnector:\n \"\"\"Shodan InternetDB connector\"\"\"\n\n def __init__(self):\n \"\"\"Constructor\"\"\"\n config_path = Path(__file__).parent.parent.joinpath(\"config.yml\")\n config = (\n yaml.load(config_path.open(), Loader=yaml.SafeLoader)\n if config_path.is_file()\n else {}\n )\n\n self._config = RootConfig.parse_obj(config)\n self._helper = OpenCTIConnectorHelper(config)\n\n self._identity = self._helper.api.identity.create(\n type=\"Organization\",\n name=\"Shodan\",\n description=\"Shodan is a search engine for Internet-connected devices.\",\n )\n self._identity_id = self._identity[\"standard_id\"]\n self._object_marking_id = stix2.TLP_WHITE[\"id\"]\n\n self._client = ShodanInternetDbClient(verify=self._config.shodan.ssl_verify)\n\n def start(self) -> None:\n \"\"\"\n Start the connector\n :return: None\n \"\"\"\n self._helper.listen(self._process_message)\n\n def _process_message(self, data: Dict[str, Any]) -> str:\n \"\"\"\n Process the data message\n :param data: Entity data\n :return: None\n \"\"\"\n # Fetch the observable being processed\n entity_id = data[\"entity_id\"]\n\n custom_attributes = \"\"\"\n id\n entity_type\n objectMarking {\n edges {\n node {\n id\n definition_type\n definition\n }\n }\n }\n observable_value\n \"\"\"\n observable = self._helper.api.stix_cyber_observable.read(\n id=entity_id, customAttributes=custom_attributes\n )\n\n if observable is None:\n log.error(\"Observable not found with entity_id %s\", entity_id)\n return \"Observable not found\"\n\n # Check TLP markings, do not submit higher than the max allowed\n tlps = [\"TLP:CLEAR\"]\n for marking_definition in observable.get(\"objectMarking\", []):\n if marking_definition[\"definition_type\"] == \"TLP\":\n tlps.append(marking_definition[\"definition\"])\n\n for tlp in tlps:\n max_tlp_name = self._config.shodan.max_tlp.name\n if not OpenCTIConnectorHelper.check_max_tlp(tlp, max_tlp_name):\n log.debug(\"Skipping observable, TLP is greater than the MAX TLP\")\n return \"Skipping observable (TLP)\"\n\n # Process the observable value\n value = observable[\"value\"]\n if not validators.ipv4(value):\n log.error(\"Observable value is not an IPv4 address\")\n return \"Skipping observable (ipv4 validation)\"\n\n try:\n result = self._client.query(value)\n except RequestException:\n log.exception(\"Shodan API error\")\n return \"Skipping observable (Shodan API error)\"\n\n if result is None:\n log.debug(\"No information available on %s\", value)\n return \"Skipping observable (Shodan 404)\"\n\n # Process the result\n log.debug(\"Processing %s\", value)\n self._process_domains(observable, result)\n self._process_tags(observable, result)\n self._process_vulns(observable, result)\n self._process_note(observable, result)\n\n return \"Success\"\n\n def _process_note(\n self,\n observable: Dict[str, Any],\n result: ShodanResult,\n ) -> None:\n \"\"\"\n Add an enrichment note to the observable\n :param observable: Observable data\n :param result: Shodan data\n :return: None\n \"\"\"\n\n def format_list(alist: List[Union[str, int]]) -> str:\n \"\"\"Format a list of primitives into a Markdown list\"\"\"\n return \"\".join(f\"\\n- {name}\" for name in alist) or \"n/a\"\n\n value = observable[\"value\"]\n abstract = f\"Shodan InternetDB enrichment of {value}\"\n content = f\"\"\"```\nShodan InternetDB:\n------------------\nHostnames: {format_list(result.hostnames)}\n------------------\nSoftware: {format_list(result.cpes)}\n------------------\nVulnerabilities: {format_list(result.vulns)}\n------------------\nPorts: {format_list(result.ports)}\n------------------\n```\n\"\"\"\n\n self._helper.api.note.create(\n stix_id=pycti.Note.generate_id(datetime.now().isoformat(), content),\n createdBy=self._identity_id,\n objectMarking=[self._object_marking_id],\n confidence=self._helper.connect_confidence_level,\n objects=[observable[\"id\"]],\n authors=[self._identity_id],\n abstract=abstract,\n content=content,\n )\n\n def _process_domains(\n self,\n observable: Dict[str, Any],\n result: ShodanResult,\n ) -> None:\n \"\"\"\n Add additional domains to the observable\n :param observable: Observable data\n :param result: Shodan data\n :return: None\n \"\"\"\n\n markings = observable[\"objectMarkingIds\"]\n for name in result.hostnames:\n log.debug(\"Adding domain %s\", name)\n domain = self._helper.api.stix_cyber_observable.create(\n observableData=dict(\n type=\"Domain-Name\",\n value=name,\n ),\n objectMarking=markings,\n createdBy=self._identity_id,\n update=True,\n )\n\n log.debug(\"Creating domain relationship\")\n self._helper.api.stix_nested_ref_relationship.create(\n fromId=domain[\"id\"],\n toId=observable[\"id\"],\n relationship_type=\"resolves-to\",\n createdBy=self._identity_id,\n objectMarking=markings,\n confidence=self._helper.connect_confidence_level,\n update=True,\n )\n\n def _process_tags(\n self,\n observable: Dict[str, Any],\n result: ShodanResult,\n ) -> None:\n \"\"\"\n Add additional tags to the observable\n :param observable: Observable data\n :param result: Shodan data\n :return: None\n \"\"\"\n\n for name in result.tags:\n log.debug(\"Creating label %s\", name)\n label = self._helper.api.label.create(value=name)\n\n log.debug(\"Adding to observable\")\n self._helper.api.stix_cyber_observable.add_label(\n id=observable[\"id\"],\n label_id=label[\"id\"],\n )\n\n def _process_vulns(\n self,\n observable: Dict[str, Any],\n result: ShodanResult,\n ) -> None:\n \"\"\"\n Add additional vulnerabilities to the observable\n :param observable: Observable data\n :param result: Shodan data\n :return: None\n \"\"\"\n now = datetime.utcnow()\n vuln_eol = now + timedelta(days=60)\n\n for name in result.vulns:\n log.debug(\"Creating vulnerability %s\", name)\n vuln = self._helper.api.vulnerability.create(\n stix_id=pycti.Vulnerability.generate_id(name),\n name=name,\n createdBy=self._identity_id,\n objectMarking=[self._object_marking_id],\n confidence=self._helper.connect_confidence_level,\n update=True,\n )\n\n log.debug(\"Creating vulnerability relationship\")\n self._helper.api.stix_core_relationship.create(\n fromId=observable[\"id\"],\n toId=vuln[\"id\"],\n relationship_type=\"related-to\",\n createdBy=self._identity_id,\n start_time=now.strftime(\"%Y-%m-%dT%H:%M:%SZ\"),\n stop_time=vuln_eol.strftime(\"%Y-%m-%dT%H:%M:%SZ\"),\n confidence=self._helper.connect_confidence_level,\n objectMarking=[self._object_marking_id],\n update=True,\n )\n", "path": "internal-enrichment/shodan-internetdb/src/shodan_internetdb/connector.py"}], "after_files": [{"content": "\"\"\"Shodan InternetDB connector\"\"\"\n\nfrom __future__ import annotations\n\nimport logging\nfrom datetime import datetime, timedelta\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Union\n\nimport pycti\nimport stix2\nimport validators\nimport yaml\nfrom pycti.connector.opencti_connector_helper import OpenCTIConnectorHelper\nfrom requests.exceptions import RequestException\n\nfrom .client import ShodanInternetDbClient, ShodanResult\nfrom .config import RootConfig\n\n__all__ = [\n \"ShodanInternetDBConnector\",\n]\n\nlog = logging.getLogger(__name__)\n\n\nclass ShodanInternetDBConnector:\n \"\"\"Shodan InternetDB connector\"\"\"\n\n def __init__(self):\n \"\"\"Constructor\"\"\"\n config_path = Path(__file__).parent.parent.joinpath(\"config.yml\")\n config = (\n yaml.load(config_path.open(), Loader=yaml.SafeLoader)\n if config_path.is_file()\n else {}\n )\n\n self._config = RootConfig.parse_obj(config)\n self._helper = OpenCTIConnectorHelper(config)\n\n self._identity = self._helper.api.identity.create(\n type=\"Organization\",\n name=\"Shodan\",\n description=\"Shodan is a search engine for Internet-connected devices.\",\n )\n self._identity_id = self._identity[\"standard_id\"]\n self._object_marking_id = stix2.TLP_WHITE[\"id\"]\n\n self._client = ShodanInternetDbClient(verify=self._config.shodan.ssl_verify)\n\n def start(self) -> None:\n \"\"\"\n Start the connector\n :return: None\n \"\"\"\n self._helper.listen(self._process_message)\n\n def _process_message(self, data: Dict[str, Any]) -> str:\n \"\"\"\n Process the data message\n :param data: Entity data\n :return: None\n \"\"\"\n # Fetch the observable being processed\n entity_id = data[\"entity_id\"]\n\n custom_attributes = \"\"\"\n id\n entity_type\n objectMarking {\n edges {\n node {\n id\n definition_type\n definition\n }\n }\n }\n observable_value\n \"\"\"\n observable = self._helper.api.stix_cyber_observable.read(\n id=entity_id, customAttributes=custom_attributes\n )\n\n if observable is None:\n log.error(\"Observable not found with entity_id %s\", entity_id)\n return \"Observable not found\"\n\n # Check TLP markings, do not submit higher than the max allowed\n tlps = [\"TLP:CLEAR\"]\n for marking_definition in observable.get(\"objectMarking\", []):\n if marking_definition[\"definition_type\"] == \"TLP\":\n tlps.append(marking_definition[\"definition\"])\n\n for tlp in tlps:\n max_tlp_name = self._config.shodan.max_tlp.name\n if not OpenCTIConnectorHelper.check_max_tlp(tlp, max_tlp_name):\n log.debug(\"Skipping observable, TLP is greater than the MAX TLP\")\n return \"Skipping observable (TLP)\"\n\n # Process the observable value\n value = observable[\"observable_value\"]\n if not validators.ipv4(value):\n log.error(\"Observable value is not an IPv4 address\")\n return \"Skipping observable (ipv4 validation)\"\n\n try:\n result = self._client.query(value)\n except RequestException:\n log.exception(\"Shodan API error\")\n return \"Skipping observable (Shodan API error)\"\n\n if result is None:\n log.debug(\"No information available on %s\", value)\n return \"Skipping observable (Shodan 404)\"\n\n # Process the result\n log.debug(\"Processing %s\", value)\n self._process_domains(observable, result)\n self._process_tags(observable, result)\n self._process_vulns(observable, result)\n self._process_note(observable, result)\n\n return \"Success\"\n\n def _process_note(\n self,\n observable: Dict[str, Any],\n result: ShodanResult,\n ) -> None:\n \"\"\"\n Add an enrichment note to the observable\n :param observable: Observable data\n :param result: Shodan data\n :return: None\n \"\"\"\n\n def format_list(alist: List[Union[str, int]]) -> str:\n \"\"\"Format a list of primitives into a Markdown list\"\"\"\n return \"\".join(f\"\\n- {name}\" for name in alist) or \"n/a\"\n\n value = observable[\"observable_value\"]\n abstract = f\"Shodan InternetDB enrichment of {value}\"\n content = f\"\"\"```\nShodan InternetDB:\n------------------\nHostnames: {format_list(result.hostnames)}\n------------------\nSoftware: {format_list(result.cpes)}\n------------------\nVulnerabilities: {format_list(result.vulns)}\n------------------\nPorts: {format_list(result.ports)}\n------------------\n```\n\"\"\"\n\n self._helper.api.note.create(\n stix_id=pycti.Note.generate_id(datetime.now().isoformat(), content),\n createdBy=self._identity_id,\n objectMarking=[self._object_marking_id],\n confidence=self._helper.connect_confidence_level,\n objects=[observable[\"id\"]],\n authors=[self._identity_id],\n abstract=abstract,\n content=content,\n )\n\n def _process_domains(\n self,\n observable: Dict[str, Any],\n result: ShodanResult,\n ) -> None:\n \"\"\"\n Add additional domains to the observable\n :param observable: Observable data\n :param result: Shodan data\n :return: None\n \"\"\"\n\n markings = observable[\"objectMarkingIds\"]\n for name in result.hostnames:\n log.debug(\"Adding domain %s\", name)\n domain = self._helper.api.stix_cyber_observable.create(\n observableData=dict(\n type=\"Domain-Name\",\n value=name,\n ),\n objectMarking=markings,\n createdBy=self._identity_id,\n update=True,\n )\n\n log.debug(\"Creating domain relationship\")\n self._helper.api.stix_nested_ref_relationship.create(\n fromId=domain[\"id\"],\n toId=observable[\"id\"],\n relationship_type=\"resolves-to\",\n createdBy=self._identity_id,\n objectMarking=markings,\n confidence=self._helper.connect_confidence_level,\n update=True,\n )\n\n def _process_tags(\n self,\n observable: Dict[str, Any],\n result: ShodanResult,\n ) -> None:\n \"\"\"\n Add additional tags to the observable\n :param observable: Observable data\n :param result: Shodan data\n :return: None\n \"\"\"\n\n for name in result.tags:\n log.debug(\"Creating label %s\", name)\n label = self._helper.api.label.create(value=name)\n\n log.debug(\"Adding to observable\")\n self._helper.api.stix_cyber_observable.add_label(\n id=observable[\"id\"],\n label_id=label[\"id\"],\n )\n\n def _process_vulns(\n self,\n observable: Dict[str, Any],\n result: ShodanResult,\n ) -> None:\n \"\"\"\n Add additional vulnerabilities to the observable\n :param observable: Observable data\n :param result: Shodan data\n :return: None\n \"\"\"\n now = datetime.utcnow()\n vuln_eol = now + timedelta(days=60)\n\n for name in result.vulns:\n log.debug(\"Creating vulnerability %s\", name)\n vuln = self._helper.api.vulnerability.create(\n stix_id=pycti.Vulnerability.generate_id(name),\n name=name,\n createdBy=self._identity_id,\n objectMarking=[self._object_marking_id],\n confidence=self._helper.connect_confidence_level,\n update=True,\n )\n\n log.debug(\"Creating vulnerability relationship\")\n self._helper.api.stix_core_relationship.create(\n fromId=observable[\"id\"],\n toId=vuln[\"id\"],\n relationship_type=\"related-to\",\n createdBy=self._identity_id,\n start_time=now.strftime(\"%Y-%m-%dT%H:%M:%SZ\"),\n stop_time=vuln_eol.strftime(\"%Y-%m-%dT%H:%M:%SZ\"),\n confidence=self._helper.connect_confidence_level,\n objectMarking=[self._object_marking_id],\n update=True,\n )\n", "path": "internal-enrichment/shodan-internetdb/src/shodan_internetdb/connector.py"}]}
| 3,092 | 264 |
gh_patches_debug_25468
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-2711
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unprocessed Thalia Pay payment without bank account
Apparently it is possible that people either pay with Thalia pay without having a valid bank account, or, it is possible to remove a bank account after a Thalia Pay payment is made but not processed (in which case it should not be possible)
Sentry Issue: [CONCREXIT-HD](https://sentry.io/organizations/thalia/issues/3640470247/?referrer=github_integration)
```
AttributeError: 'NoneType' object has no attribute 'last_used'
(8 additional frame(s) were not displayed)
...
File "django/utils/decorators.py", line 46, in _wrapper
return bound_method(*args, **kwargs)
File "django/contrib/auth/decorators.py", line 23, in _wrapped_view
return view_func(request, *args, **kwargs)
File "django/views/generic/base.py", line 119, in dispatch
return handler(request, *args, **kwargs)
File "payments/admin_views.py", line 107, in post
services.process_batch(batch)
File "payments/services.py", line 151, in process_batch
bank_account.last_used = batch.withdrawal_date
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/payments/services.py`
Content:
```
1 """The services defined by the payments package."""
2 import datetime
3 from typing import Union
4
5 from django.conf import settings
6 from django.db.models import Model, Q, QuerySet, Sum
7 from django.urls import reverse
8 from django.utils import timezone
9 from django.utils.translation import gettext_lazy as _
10
11 from members.models import Member
12 from utils.snippets import send_email
13
14 from .exceptions import PaymentError
15 from .models import BankAccount, Payment, PaymentUser
16 from .payables import Payable, payables
17
18
19 def create_payment(
20 model_payable: Union[Model, Payable],
21 processed_by: Member,
22 pay_type: Union[Payment.CASH, Payment.CARD, Payment.WIRE, Payment.TPAY],
23 ) -> Payment:
24 """Create a new payment from a payable object.
25
26 :param model_payable: Payable or Model object
27 :param processed_by: PaymentUser that processed this payment
28 :param pay_type: Payment type
29 :return: Payment object
30 """
31 if pay_type not in (Payment.CASH, Payment.CARD, Payment.WIRE, Payment.TPAY):
32 raise PaymentError("Invalid payment type")
33
34 if isinstance(model_payable, Payable):
35 payable = model_payable
36 else:
37 payable = payables.get_payable(model_payable)
38
39 payer = (
40 PaymentUser.objects.get(pk=payable.payment_payer.pk)
41 if payable.payment_payer
42 else None
43 )
44
45 if not (
46 (payer and payer == processed_by and pay_type == Payment.TPAY)
47 or (payable.can_manage_payment(processed_by) and pay_type != Payment.TPAY)
48 ):
49 raise PaymentError(
50 _("User processing payment does not have the right permissions")
51 )
52
53 if payable.payment_amount == 0:
54 raise PaymentError(_("Payment amount 0 is not accepted"))
55
56 if pay_type == Payment.TPAY and not payer.tpay_enabled:
57 raise PaymentError(_("This user does not have Thalia Pay enabled"))
58
59 if not payable.paying_allowed:
60 raise PaymentError(_("Payment restricted"))
61
62 if payable.payment is not None:
63 payable.payment.amount = payable.payment_amount
64 payable.payment.notes = payable.payment_notes
65 payable.payment.topic = payable.payment_topic
66 payable.payment.paid_by = payer
67 payable.payment.processed_by = processed_by
68 payable.payment.type = pay_type
69 payable.payment.save()
70 else:
71 payable.payment = Payment.objects.create(
72 processed_by=processed_by,
73 amount=payable.payment_amount,
74 notes=payable.payment_notes,
75 topic=payable.payment_topic,
76 paid_by=payer,
77 type=pay_type,
78 )
79 return payable.payment
80
81
82 def delete_payment(model: Model, member: Member = None, ignore_change_window=False):
83 """Remove a payment from a payable object.
84
85 :param model: Payable or Model object
86 :param member: member deleting the payment
87 :param ignore_change_window: ignore the payment change window
88 :return:
89 """
90 payable = payables.get_payable(model)
91
92 if member and not payable.can_manage_payment(member):
93 raise PaymentError(
94 _("User deleting payment does not have the right permissions.")
95 )
96
97 payment = payable.payment
98 if (
99 payment.created_at
100 < timezone.now() - timezone.timedelta(seconds=settings.PAYMENT_CHANGE_WINDOW)
101 and not ignore_change_window
102 ):
103 raise PaymentError(_("This payment cannot be deleted anymore."))
104 if payment.batch and payment.batch.processed:
105 raise PaymentError(
106 _("This payment has already been processed and hence cannot be deleted.")
107 )
108
109 payable.payment = None
110 payable.model.save()
111 payment.delete()
112
113
114 def update_last_used(queryset: QuerySet, date: datetime.date = None) -> int:
115 """Update the last used field of a BankAccount queryset.
116
117 :param queryset: Queryset of BankAccounts
118 :param date: date to set last_used to
119 :return: number of affected rows
120 """
121 if not date:
122 date = timezone.now().date()
123
124 result = queryset.filter(
125 (Q(valid_from__gte=timezone.now()) & Q(valid_until__lt=timezone.now()))
126 | Q(valid_until=None)
127 ).update(last_used=date)
128 return result
129
130
131 def revoke_old_mandates() -> int:
132 """Revoke all mandates that have not been used for 36 months or more.
133
134 :return: number of affected rows
135 """
136 return BankAccount.objects.filter(
137 last_used__lte=(timezone.now() - timezone.timedelta(days=36 * 30))
138 ).update(valid_until=timezone.now().date())
139
140
141 def process_batch(batch):
142 """Process a Thalia Pay batch.
143
144 :param batch: the batch to be processed
145 :return:
146 """
147 batch.processed = True
148
149 payments = batch.payments_set.select_related("paid_by")
150 for payment in payments:
151 bank_account = payment.paid_by.bank_accounts.last()
152 bank_account.last_used = batch.withdrawal_date
153 bank_account.save()
154
155 batch.save()
156
157 send_tpay_batch_processing_emails(batch)
158
159
160 def derive_next_mandate_no(member) -> str:
161 accounts = (
162 BankAccount.objects.filter(owner=PaymentUser.objects.get(pk=member.pk))
163 .exclude(mandate_no=None)
164 .filter(mandate_no__regex=BankAccount.MANDATE_NO_DEFAULT_REGEX)
165 )
166 new_mandate_no = 1 + max(
167 (int(account.mandate_no.split("-")[1]) for account in accounts), default=0
168 )
169 return f"{member.pk}-{new_mandate_no}"
170
171
172 def send_tpay_batch_processing_emails(batch):
173 """Send withdrawal notice emails to all members in a batch."""
174 member_payments = batch.payments_set.values("paid_by").annotate(total=Sum("amount"))
175 for member_row in member_payments:
176 member = PaymentUser.objects.get(pk=member_row["paid_by"])
177 total_amount = member_row["total"]
178
179 send_email(
180 member.email,
181 _("Thalia Pay withdrawal notice"),
182 "payments/email/tpay_withdrawal_notice_mail.txt",
183 {
184 "name": member.get_full_name(),
185 "batch": batch,
186 "bank_account": member.bank_accounts.filter(
187 mandate_no__isnull=False
188 ).last(),
189 "creditor_id": settings.SEPA_CREDITOR_ID,
190 "payments": batch.payments_set.filter(paid_by=member),
191 "total_amount": total_amount,
192 "payments_url": (
193 settings.BASE_URL
194 + reverse(
195 "payments:payment-list",
196 )
197 ),
198 },
199 )
200 return len(member_payments)
201
202
203 def execute_data_minimisation(dry_run=False):
204 """Anonymizes payments older than 7 years."""
205 # Sometimes years are 366 days of course, but better delete 1 or 2 days early than late
206 payment_deletion_period = timezone.now().date() - timezone.timedelta(days=(365 * 7))
207 bankaccount_deletion_period = timezone.now() - datetime.timedelta(days=(31 * 13))
208
209 queryset_payments = Payment.objects.filter(
210 created_at__lte=payment_deletion_period
211 ).exclude(paid_by__isnull=True)
212
213 # Delete bank accounts that are not valid anymore, and have not been used in the last 13 months
214 # (13 months is the required time we need to keep the mandates for)
215 queryset_bankaccounts = BankAccount.objects.all()
216 queryset_bankaccounts = queryset_bankaccounts.filter(
217 valid_until__lt=timezone.now()
218 ) # Keep valid bank accounts
219 queryset_bankaccounts = queryset_bankaccounts.exclude( # Also keep bank accounts that
220 Q(
221 owner__paid_payment_set__type=Payment.TPAY
222 ), # are used for Thalia Pay payments, AND
223 Q(
224 owner__paid_payment_set__batch__isnull=True
225 ) # have a payment that is in no batch, OR
226 | Q(
227 owner__paid_payment_set__batch__processed=False
228 ) # have an unprocessed batch, OR
229 | Q(
230 owner__paid_payment_set__batch__processing_date__gt=bankaccount_deletion_period # or have a processed batch that is not older than 13 months
231 ),
232 )
233
234 if not dry_run:
235 queryset_payments.update(paid_by=None, processed_by=None)
236 queryset_bankaccounts.delete()
237 return queryset_payments
238
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/website/payments/services.py b/website/payments/services.py
--- a/website/payments/services.py
+++ b/website/payments/services.py
@@ -149,8 +149,13 @@
payments = batch.payments_set.select_related("paid_by")
for payment in payments:
bank_account = payment.paid_by.bank_accounts.last()
- bank_account.last_used = batch.withdrawal_date
- bank_account.save()
+ if not bank_account: # pragma: no cover
+ # This should not happen, cannot haver, does not happen (right... ;p), but if it does, we don't want to crash, but just remove the payment from the batch (make it unprocessed)
+ payment.batch = None
+ payment.save()
+ else:
+ bank_account.last_used = batch.withdrawal_date
+ bank_account.save()
batch.save()
@@ -215,7 +220,7 @@
queryset_bankaccounts = BankAccount.objects.all()
queryset_bankaccounts = queryset_bankaccounts.filter(
valid_until__lt=timezone.now()
- ) # Keep valid bank accounts
+ ) # We must always keep valid bank accounts. so we only select the ones that are not valid anymore (valid_until < now)
queryset_bankaccounts = queryset_bankaccounts.exclude( # Also keep bank accounts that
Q(
owner__paid_payment_set__type=Payment.TPAY
|
{"golden_diff": "diff --git a/website/payments/services.py b/website/payments/services.py\n--- a/website/payments/services.py\n+++ b/website/payments/services.py\n@@ -149,8 +149,13 @@\n payments = batch.payments_set.select_related(\"paid_by\")\n for payment in payments:\n bank_account = payment.paid_by.bank_accounts.last()\n- bank_account.last_used = batch.withdrawal_date\n- bank_account.save()\n+ if not bank_account: # pragma: no cover\n+ # This should not happen, cannot haver, does not happen (right... ;p), but if it does, we don't want to crash, but just remove the payment from the batch (make it unprocessed)\n+ payment.batch = None\n+ payment.save()\n+ else:\n+ bank_account.last_used = batch.withdrawal_date\n+ bank_account.save()\n \n batch.save()\n \n@@ -215,7 +220,7 @@\n queryset_bankaccounts = BankAccount.objects.all()\n queryset_bankaccounts = queryset_bankaccounts.filter(\n valid_until__lt=timezone.now()\n- ) # Keep valid bank accounts\n+ ) # We must always keep valid bank accounts. so we only select the ones that are not valid anymore (valid_until < now)\n queryset_bankaccounts = queryset_bankaccounts.exclude( # Also keep bank accounts that\n Q(\n owner__paid_payment_set__type=Payment.TPAY\n", "issue": "Unprocessed Thalia Pay payment without bank account\nApparently it is possible that people either pay with Thalia pay without having a valid bank account, or, it is possible to remove a bank account after a Thalia Pay payment is made but not processed (in which case it should not be possible)\n\n\nSentry Issue: [CONCREXIT-HD](https://sentry.io/organizations/thalia/issues/3640470247/?referrer=github_integration)\n\n```\nAttributeError: 'NoneType' object has no attribute 'last_used'\n(8 additional frame(s) were not displayed)\n...\n File \"django/utils/decorators.py\", line 46, in _wrapper\n return bound_method(*args, **kwargs)\n File \"django/contrib/auth/decorators.py\", line 23, in _wrapped_view\n return view_func(request, *args, **kwargs)\n File \"django/views/generic/base.py\", line 119, in dispatch\n return handler(request, *args, **kwargs)\n File \"payments/admin_views.py\", line 107, in post\n services.process_batch(batch)\n File \"payments/services.py\", line 151, in process_batch\n bank_account.last_used = batch.withdrawal_date\n```\n", "before_files": [{"content": "\"\"\"The services defined by the payments package.\"\"\"\nimport datetime\nfrom typing import Union\n\nfrom django.conf import settings\nfrom django.db.models import Model, Q, QuerySet, Sum\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom members.models import Member\nfrom utils.snippets import send_email\n\nfrom .exceptions import PaymentError\nfrom .models import BankAccount, Payment, PaymentUser\nfrom .payables import Payable, payables\n\n\ndef create_payment(\n model_payable: Union[Model, Payable],\n processed_by: Member,\n pay_type: Union[Payment.CASH, Payment.CARD, Payment.WIRE, Payment.TPAY],\n) -> Payment:\n \"\"\"Create a new payment from a payable object.\n\n :param model_payable: Payable or Model object\n :param processed_by: PaymentUser that processed this payment\n :param pay_type: Payment type\n :return: Payment object\n \"\"\"\n if pay_type not in (Payment.CASH, Payment.CARD, Payment.WIRE, Payment.TPAY):\n raise PaymentError(\"Invalid payment type\")\n\n if isinstance(model_payable, Payable):\n payable = model_payable\n else:\n payable = payables.get_payable(model_payable)\n\n payer = (\n PaymentUser.objects.get(pk=payable.payment_payer.pk)\n if payable.payment_payer\n else None\n )\n\n if not (\n (payer and payer == processed_by and pay_type == Payment.TPAY)\n or (payable.can_manage_payment(processed_by) and pay_type != Payment.TPAY)\n ):\n raise PaymentError(\n _(\"User processing payment does not have the right permissions\")\n )\n\n if payable.payment_amount == 0:\n raise PaymentError(_(\"Payment amount 0 is not accepted\"))\n\n if pay_type == Payment.TPAY and not payer.tpay_enabled:\n raise PaymentError(_(\"This user does not have Thalia Pay enabled\"))\n\n if not payable.paying_allowed:\n raise PaymentError(_(\"Payment restricted\"))\n\n if payable.payment is not None:\n payable.payment.amount = payable.payment_amount\n payable.payment.notes = payable.payment_notes\n payable.payment.topic = payable.payment_topic\n payable.payment.paid_by = payer\n payable.payment.processed_by = processed_by\n payable.payment.type = pay_type\n payable.payment.save()\n else:\n payable.payment = Payment.objects.create(\n processed_by=processed_by,\n amount=payable.payment_amount,\n notes=payable.payment_notes,\n topic=payable.payment_topic,\n paid_by=payer,\n type=pay_type,\n )\n return payable.payment\n\n\ndef delete_payment(model: Model, member: Member = None, ignore_change_window=False):\n \"\"\"Remove a payment from a payable object.\n\n :param model: Payable or Model object\n :param member: member deleting the payment\n :param ignore_change_window: ignore the payment change window\n :return:\n \"\"\"\n payable = payables.get_payable(model)\n\n if member and not payable.can_manage_payment(member):\n raise PaymentError(\n _(\"User deleting payment does not have the right permissions.\")\n )\n\n payment = payable.payment\n if (\n payment.created_at\n < timezone.now() - timezone.timedelta(seconds=settings.PAYMENT_CHANGE_WINDOW)\n and not ignore_change_window\n ):\n raise PaymentError(_(\"This payment cannot be deleted anymore.\"))\n if payment.batch and payment.batch.processed:\n raise PaymentError(\n _(\"This payment has already been processed and hence cannot be deleted.\")\n )\n\n payable.payment = None\n payable.model.save()\n payment.delete()\n\n\ndef update_last_used(queryset: QuerySet, date: datetime.date = None) -> int:\n \"\"\"Update the last used field of a BankAccount queryset.\n\n :param queryset: Queryset of BankAccounts\n :param date: date to set last_used to\n :return: number of affected rows\n \"\"\"\n if not date:\n date = timezone.now().date()\n\n result = queryset.filter(\n (Q(valid_from__gte=timezone.now()) & Q(valid_until__lt=timezone.now()))\n | Q(valid_until=None)\n ).update(last_used=date)\n return result\n\n\ndef revoke_old_mandates() -> int:\n \"\"\"Revoke all mandates that have not been used for 36 months or more.\n\n :return: number of affected rows\n \"\"\"\n return BankAccount.objects.filter(\n last_used__lte=(timezone.now() - timezone.timedelta(days=36 * 30))\n ).update(valid_until=timezone.now().date())\n\n\ndef process_batch(batch):\n \"\"\"Process a Thalia Pay batch.\n\n :param batch: the batch to be processed\n :return:\n \"\"\"\n batch.processed = True\n\n payments = batch.payments_set.select_related(\"paid_by\")\n for payment in payments:\n bank_account = payment.paid_by.bank_accounts.last()\n bank_account.last_used = batch.withdrawal_date\n bank_account.save()\n\n batch.save()\n\n send_tpay_batch_processing_emails(batch)\n\n\ndef derive_next_mandate_no(member) -> str:\n accounts = (\n BankAccount.objects.filter(owner=PaymentUser.objects.get(pk=member.pk))\n .exclude(mandate_no=None)\n .filter(mandate_no__regex=BankAccount.MANDATE_NO_DEFAULT_REGEX)\n )\n new_mandate_no = 1 + max(\n (int(account.mandate_no.split(\"-\")[1]) for account in accounts), default=0\n )\n return f\"{member.pk}-{new_mandate_no}\"\n\n\ndef send_tpay_batch_processing_emails(batch):\n \"\"\"Send withdrawal notice emails to all members in a batch.\"\"\"\n member_payments = batch.payments_set.values(\"paid_by\").annotate(total=Sum(\"amount\"))\n for member_row in member_payments:\n member = PaymentUser.objects.get(pk=member_row[\"paid_by\"])\n total_amount = member_row[\"total\"]\n\n send_email(\n member.email,\n _(\"Thalia Pay withdrawal notice\"),\n \"payments/email/tpay_withdrawal_notice_mail.txt\",\n {\n \"name\": member.get_full_name(),\n \"batch\": batch,\n \"bank_account\": member.bank_accounts.filter(\n mandate_no__isnull=False\n ).last(),\n \"creditor_id\": settings.SEPA_CREDITOR_ID,\n \"payments\": batch.payments_set.filter(paid_by=member),\n \"total_amount\": total_amount,\n \"payments_url\": (\n settings.BASE_URL\n + reverse(\n \"payments:payment-list\",\n )\n ),\n },\n )\n return len(member_payments)\n\n\ndef execute_data_minimisation(dry_run=False):\n \"\"\"Anonymizes payments older than 7 years.\"\"\"\n # Sometimes years are 366 days of course, but better delete 1 or 2 days early than late\n payment_deletion_period = timezone.now().date() - timezone.timedelta(days=(365 * 7))\n bankaccount_deletion_period = timezone.now() - datetime.timedelta(days=(31 * 13))\n\n queryset_payments = Payment.objects.filter(\n created_at__lte=payment_deletion_period\n ).exclude(paid_by__isnull=True)\n\n # Delete bank accounts that are not valid anymore, and have not been used in the last 13 months\n # (13 months is the required time we need to keep the mandates for)\n queryset_bankaccounts = BankAccount.objects.all()\n queryset_bankaccounts = queryset_bankaccounts.filter(\n valid_until__lt=timezone.now()\n ) # Keep valid bank accounts\n queryset_bankaccounts = queryset_bankaccounts.exclude( # Also keep bank accounts that\n Q(\n owner__paid_payment_set__type=Payment.TPAY\n ), # are used for Thalia Pay payments, AND\n Q(\n owner__paid_payment_set__batch__isnull=True\n ) # have a payment that is in no batch, OR\n | Q(\n owner__paid_payment_set__batch__processed=False\n ) # have an unprocessed batch, OR\n | Q(\n owner__paid_payment_set__batch__processing_date__gt=bankaccount_deletion_period # or have a processed batch that is not older than 13 months\n ),\n )\n\n if not dry_run:\n queryset_payments.update(paid_by=None, processed_by=None)\n queryset_bankaccounts.delete()\n return queryset_payments\n", "path": "website/payments/services.py"}], "after_files": [{"content": "\"\"\"The services defined by the payments package.\"\"\"\nimport datetime\nfrom typing import Union\n\nfrom django.conf import settings\nfrom django.db.models import Model, Q, QuerySet, Sum\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom members.models import Member\nfrom utils.snippets import send_email\n\nfrom .exceptions import PaymentError\nfrom .models import BankAccount, Payment, PaymentUser\nfrom .payables import Payable, payables\n\n\ndef create_payment(\n model_payable: Union[Model, Payable],\n processed_by: Member,\n pay_type: Union[Payment.CASH, Payment.CARD, Payment.WIRE, Payment.TPAY],\n) -> Payment:\n \"\"\"Create a new payment from a payable object.\n\n :param model_payable: Payable or Model object\n :param processed_by: PaymentUser that processed this payment\n :param pay_type: Payment type\n :return: Payment object\n \"\"\"\n if pay_type not in (Payment.CASH, Payment.CARD, Payment.WIRE, Payment.TPAY):\n raise PaymentError(\"Invalid payment type\")\n\n if isinstance(model_payable, Payable):\n payable = model_payable\n else:\n payable = payables.get_payable(model_payable)\n\n payer = (\n PaymentUser.objects.get(pk=payable.payment_payer.pk)\n if payable.payment_payer\n else None\n )\n\n if not (\n (payer and payer == processed_by and pay_type == Payment.TPAY)\n or (payable.can_manage_payment(processed_by) and pay_type != Payment.TPAY)\n ):\n raise PaymentError(\n _(\"User processing payment does not have the right permissions\")\n )\n\n if payable.payment_amount == 0:\n raise PaymentError(_(\"Payment amount 0 is not accepted\"))\n\n if pay_type == Payment.TPAY and not payer.tpay_enabled:\n raise PaymentError(_(\"This user does not have Thalia Pay enabled\"))\n\n if not payable.paying_allowed:\n raise PaymentError(_(\"Payment restricted\"))\n\n if payable.payment is not None:\n payable.payment.amount = payable.payment_amount\n payable.payment.notes = payable.payment_notes\n payable.payment.topic = payable.payment_topic\n payable.payment.paid_by = payer\n payable.payment.processed_by = processed_by\n payable.payment.type = pay_type\n payable.payment.save()\n else:\n payable.payment = Payment.objects.create(\n processed_by=processed_by,\n amount=payable.payment_amount,\n notes=payable.payment_notes,\n topic=payable.payment_topic,\n paid_by=payer,\n type=pay_type,\n )\n return payable.payment\n\n\ndef delete_payment(model: Model, member: Member = None, ignore_change_window=False):\n \"\"\"Remove a payment from a payable object.\n\n :param model: Payable or Model object\n :param member: member deleting the payment\n :param ignore_change_window: ignore the payment change window\n :return:\n \"\"\"\n payable = payables.get_payable(model)\n\n if member and not payable.can_manage_payment(member):\n raise PaymentError(\n _(\"User deleting payment does not have the right permissions.\")\n )\n\n payment = payable.payment\n if (\n payment.created_at\n < timezone.now() - timezone.timedelta(seconds=settings.PAYMENT_CHANGE_WINDOW)\n and not ignore_change_window\n ):\n raise PaymentError(_(\"This payment cannot be deleted anymore.\"))\n if payment.batch and payment.batch.processed:\n raise PaymentError(\n _(\"This payment has already been processed and hence cannot be deleted.\")\n )\n\n payable.payment = None\n payable.model.save()\n payment.delete()\n\n\ndef update_last_used(queryset: QuerySet, date: datetime.date = None) -> int:\n \"\"\"Update the last used field of a BankAccount queryset.\n\n :param queryset: Queryset of BankAccounts\n :param date: date to set last_used to\n :return: number of affected rows\n \"\"\"\n if not date:\n date = timezone.now().date()\n\n result = queryset.filter(\n (Q(valid_from__gte=timezone.now()) & Q(valid_until__lt=timezone.now()))\n | Q(valid_until=None)\n ).update(last_used=date)\n return result\n\n\ndef revoke_old_mandates() -> int:\n \"\"\"Revoke all mandates that have not been used for 36 months or more.\n\n :return: number of affected rows\n \"\"\"\n return BankAccount.objects.filter(\n last_used__lte=(timezone.now() - timezone.timedelta(days=36 * 30))\n ).update(valid_until=timezone.now().date())\n\n\ndef process_batch(batch):\n \"\"\"Process a Thalia Pay batch.\n\n :param batch: the batch to be processed\n :return:\n \"\"\"\n batch.processed = True\n\n payments = batch.payments_set.select_related(\"paid_by\")\n for payment in payments:\n bank_account = payment.paid_by.bank_accounts.last()\n if not bank_account: # pragma: no cover\n # This should not happen, cannot haver, does not happen (right... ;p), but if it does, we don't want to crash, but just remove the payment from the batch (make it unprocessed)\n payment.batch = None\n payment.save()\n else:\n bank_account.last_used = batch.withdrawal_date\n bank_account.save()\n\n batch.save()\n\n send_tpay_batch_processing_emails(batch)\n\n\ndef derive_next_mandate_no(member) -> str:\n accounts = (\n BankAccount.objects.filter(owner=PaymentUser.objects.get(pk=member.pk))\n .exclude(mandate_no=None)\n .filter(mandate_no__regex=BankAccount.MANDATE_NO_DEFAULT_REGEX)\n )\n new_mandate_no = 1 + max(\n (int(account.mandate_no.split(\"-\")[1]) for account in accounts), default=0\n )\n return f\"{member.pk}-{new_mandate_no}\"\n\n\ndef send_tpay_batch_processing_emails(batch):\n \"\"\"Send withdrawal notice emails to all members in a batch.\"\"\"\n member_payments = batch.payments_set.values(\"paid_by\").annotate(total=Sum(\"amount\"))\n for member_row in member_payments:\n member = PaymentUser.objects.get(pk=member_row[\"paid_by\"])\n total_amount = member_row[\"total\"]\n\n send_email(\n member.email,\n _(\"Thalia Pay withdrawal notice\"),\n \"payments/email/tpay_withdrawal_notice_mail.txt\",\n {\n \"name\": member.get_full_name(),\n \"batch\": batch,\n \"bank_account\": member.bank_accounts.filter(\n mandate_no__isnull=False\n ).last(),\n \"creditor_id\": settings.SEPA_CREDITOR_ID,\n \"payments\": batch.payments_set.filter(paid_by=member),\n \"total_amount\": total_amount,\n \"payments_url\": (\n settings.BASE_URL\n + reverse(\n \"payments:payment-list\",\n )\n ),\n },\n )\n return len(member_payments)\n\n\ndef execute_data_minimisation(dry_run=False):\n \"\"\"Anonymizes payments older than 7 years.\"\"\"\n # Sometimes years are 366 days of course, but better delete 1 or 2 days early than late\n payment_deletion_period = timezone.now().date() - timezone.timedelta(days=(365 * 7))\n bankaccount_deletion_period = timezone.now() - datetime.timedelta(days=(31 * 13))\n\n queryset_payments = Payment.objects.filter(\n created_at__lte=payment_deletion_period\n ).exclude(paid_by__isnull=True)\n\n # Delete bank accounts that are not valid anymore, and have not been used in the last 13 months\n # (13 months is the required time we need to keep the mandates for)\n queryset_bankaccounts = BankAccount.objects.all()\n queryset_bankaccounts = queryset_bankaccounts.filter(\n valid_until__lt=timezone.now()\n ) # We must always keep valid bank accounts. so we only select the ones that are not valid anymore (valid_until < now)\n queryset_bankaccounts = queryset_bankaccounts.exclude( # Also keep bank accounts that\n Q(\n owner__paid_payment_set__type=Payment.TPAY\n ), # are used for Thalia Pay payments, AND\n Q(\n owner__paid_payment_set__batch__isnull=True\n ) # have a payment that is in no batch, OR\n | Q(\n owner__paid_payment_set__batch__processed=False\n ) # have an unprocessed batch, OR\n | Q(\n owner__paid_payment_set__batch__processing_date__gt=bankaccount_deletion_period # or have a processed batch that is not older than 13 months\n ),\n )\n\n if not dry_run:\n queryset_payments.update(paid_by=None, processed_by=None)\n queryset_bankaccounts.delete()\n return queryset_payments\n", "path": "website/payments/services.py"}]}
| 2,959 | 320 |
gh_patches_debug_888
|
rasdani/github-patches
|
git_diff
|
helmholtz-analytics__heat-1268
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix Pytorch release tracking workflows
## Due Diligence
<!--- Please address the following points before setting your PR "ready for review".
--->
- General:
- [x] **base branch** must be `main` for new features, latest release branch (e.g. `release/1.3.x`) for bug fixes
- [x] **title** of the PR is suitable to appear in the [Release Notes](https://github.com/helmholtz-analytics/heat/releases/latest)
- Implementation:
- [x] unit tests: all split configurations tested
- [x] unit tests: multiple dtypes tested
- [x] documentation updated where needed
## Description
<!--- Include a summary of the change/s.
Please also include relevant motivation and context. List any dependencies that are required for this change.
--->
Issue/s resolved: #1241
## Changes proposed:
- upgrade to the latest version of checkout action
- delete the token parameter such that the default action token is used
## Type of change
<!--
i.e.
- Bug fix (non-breaking change which fixes an issue)
- New feature (non-breaking change which adds functionality)
- Breaking change (fix or feature that would cause existing functionality to not work as expected)
- Documentation update
--->
## Memory requirements
<!--- Compare memory requirements to previous implementation / relevant torch operations if applicable:
- in distributed and non-distributed mode
- with `split=None` and `split not None`
This can be done using https://github.com/pythonprofilers/memory_profiler for CPU memory measurements,
GPU measurements can be done with https://pytorch.org/docs/master/generated/torch.cuda.max_memory_allocated.html.
These tools only profile the memory used by each process, not the entire function.
--->
## Performance
<!--- Compare performance to previous implementation / relevant torch operations if applicable:
- in distributed and non-distributed mode
- with `split=None` and `split not None`
Python has an embedded profiler: https://docs.python.org/3.9/library/profile.html
Again, this will only profile the performance on each process. Printing the results with many processes
may be illegible. It may be easiest to save the output of each to a file.
--->
#### Does this change modify the behaviour of other functions? If so, which?
no
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `heat/core/version.py`
Content:
```
1 """This module contains Heat's version information."""
2
3
4 major: int = 1
5 """Indicates Heat's main version."""
6 minor: int = 3
7 """Indicates feature extension."""
8 micro: int = 0
9 """Indicates revisions for bugfixes."""
10 extension: str = "dev"
11 """Indicates special builds, e.g. for specific hardware."""
12
13 if not extension:
14 __version__: str = f"{major}.{minor}.{micro}"
15 """The combined version string, consisting out of major, minor, micro and possibly extension."""
16 else:
17 __version__: str = f"{major}.{minor}.{micro}-{extension}"
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/heat/core/version.py b/heat/core/version.py
--- a/heat/core/version.py
+++ b/heat/core/version.py
@@ -3,7 +3,7 @@
major: int = 1
"""Indicates Heat's main version."""
-minor: int = 3
+minor: int = 4
"""Indicates feature extension."""
micro: int = 0
"""Indicates revisions for bugfixes."""
|
{"golden_diff": "diff --git a/heat/core/version.py b/heat/core/version.py\n--- a/heat/core/version.py\n+++ b/heat/core/version.py\n@@ -3,7 +3,7 @@\n \n major: int = 1\n \"\"\"Indicates Heat's main version.\"\"\"\n-minor: int = 3\n+minor: int = 4\n \"\"\"Indicates feature extension.\"\"\"\n micro: int = 0\n \"\"\"Indicates revisions for bugfixes.\"\"\"\n", "issue": "Fix Pytorch release tracking workflows\n## Due Diligence\r\n<!--- Please address the following points before setting your PR \"ready for review\".\r\n--->\r\n- General:\r\n - [x] **base branch** must be `main` for new features, latest release branch (e.g. `release/1.3.x`) for bug fixes\r\n - [x] **title** of the PR is suitable to appear in the [Release Notes](https://github.com/helmholtz-analytics/heat/releases/latest)\r\n- Implementation:\r\n - [x] unit tests: all split configurations tested\r\n - [x] unit tests: multiple dtypes tested\r\n - [x] documentation updated where needed\r\n\r\n## Description\r\n\r\n<!--- Include a summary of the change/s.\r\nPlease also include relevant motivation and context. List any dependencies that are required for this change.\r\n--->\r\n\r\nIssue/s resolved: #1241 \r\n\r\n## Changes proposed:\r\n\r\n- upgrade to the latest version of checkout action\r\n- delete the token parameter such that the default action token is used\r\n\r\n## Type of change\r\n<!--\r\ni.e.\r\n- Bug fix (non-breaking change which fixes an issue)\r\n- New feature (non-breaking change which adds functionality)\r\n- Breaking change (fix or feature that would cause existing functionality to not work as expected)\r\n- Documentation update\r\n--->\r\n\r\n## Memory requirements\r\n<!--- Compare memory requirements to previous implementation / relevant torch operations if applicable:\r\n- in distributed and non-distributed mode\r\n- with `split=None` and `split not None`\r\n\r\nThis can be done using https://github.com/pythonprofilers/memory_profiler for CPU memory measurements,\r\nGPU measurements can be done with https://pytorch.org/docs/master/generated/torch.cuda.max_memory_allocated.html.\r\nThese tools only profile the memory used by each process, not the entire function.\r\n--->\r\n\r\n## Performance\r\n<!--- Compare performance to previous implementation / relevant torch operations if applicable:\r\n- in distributed and non-distributed mode\r\n- with `split=None` and `split not None`\r\n\r\nPython has an embedded profiler: https://docs.python.org/3.9/library/profile.html\r\nAgain, this will only profile the performance on each process. Printing the results with many processes\r\nmay be illegible. It may be easiest to save the output of each to a file.\r\n--->\r\n\r\n#### Does this change modify the behaviour of other functions? If so, which?\r\nno\r\n\n", "before_files": [{"content": "\"\"\"This module contains Heat's version information.\"\"\"\n\n\nmajor: int = 1\n\"\"\"Indicates Heat's main version.\"\"\"\nminor: int = 3\n\"\"\"Indicates feature extension.\"\"\"\nmicro: int = 0\n\"\"\"Indicates revisions for bugfixes.\"\"\"\nextension: str = \"dev\"\n\"\"\"Indicates special builds, e.g. for specific hardware.\"\"\"\n\nif not extension:\n __version__: str = f\"{major}.{minor}.{micro}\"\n \"\"\"The combined version string, consisting out of major, minor, micro and possibly extension.\"\"\"\nelse:\n __version__: str = f\"{major}.{minor}.{micro}-{extension}\"\n", "path": "heat/core/version.py"}], "after_files": [{"content": "\"\"\"This module contains Heat's version information.\"\"\"\n\n\nmajor: int = 1\n\"\"\"Indicates Heat's main version.\"\"\"\nminor: int = 4\n\"\"\"Indicates feature extension.\"\"\"\nmicro: int = 0\n\"\"\"Indicates revisions for bugfixes.\"\"\"\nextension: str = \"dev\"\n\"\"\"Indicates special builds, e.g. for specific hardware.\"\"\"\n\nif not extension:\n __version__: str = f\"{major}.{minor}.{micro}\"\n \"\"\"The combined version string, consisting out of major, minor, micro and possibly extension.\"\"\"\nelse:\n __version__: str = f\"{major}.{minor}.{micro}-{extension}\"\n", "path": "heat/core/version.py"}]}
| 916 | 96 |
gh_patches_debug_23040
|
rasdani/github-patches
|
git_diff
|
ckan__ckan-5737
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKAN 2.9 changes order in which plugins are returned by PluginImplementations
## Summary
I'm porting a big project from CKAN 2.8 to 2.9. My plugin overrides a template from ckanext-scheming to customize the form. After upgrade, my changes weren't reflected because my custom template wasn't loaded.
Only after changing the ordering of the plugins in `ckan.plugins` it was picked up.
So on CKAN <= 2.8, in order for plugin `abc` to override `scheming_datasets` you needed:
```
ckan.plugins = abc scheming_datasets
```
In CKAN 2.9, you need:
```
ckan.plugins = scheming_datasets abc
```
### Why it is important
This is pretty significant change, which AFAICT wasn't mentioned in the changelog.
After initial investigation it looks like the issue is not how we parse the config option or load the plugins, but how the `PluginImplementations` iterator returns them. We use them in all places where we let plugins integrate with CKAN core. For instance in `environment.py` we call:
```python
for plugin in p.PluginImplementations(p.IConfigurer):
plugin.update_config(config)
```
This is one is relevant to my issue, as it registers template directories from plugins and stores them on a list in `config['extra_template_paths']`. Order is important, as the first template path found will be used to render.
At [this point](https://github.com/ckan/ckan/blob/8eec3e27c320baf29e0d99b2ce20ed14ae10b0d3/ckan/config/environment.py#L173) we get the following behaviour:
* On CKAN 2.8:
```python
[plugin for plugin in p.PluginImplementations(p.IConfigurer)]
# [<Plugin AbcPlugin 'abc'>, <Plugin SchemingDatasetsPlugin 'scheming_datasets'>]
config['extra_template_paths'].split(',')
# [
# u'/home/adria/dev/pyenvs/ckan/src/ckanext-abc/ckanext/abc/templates',
# u'/home/adria/dev/pyenvs/ckan/src/ckanext-scheming/ckanext/scheming/templates',
# ]
```
* On CKAN 2.9:
```python
[plugin for plugin in p.PluginImplementations(p.IConfigurer)]
# [<Plugin SchemingDatasetsPlugin 'scheming_datasets'>, <Plugin AbcPlugin 'abc'>]
config['extra_template_paths'].split(',')
# [
# u'/home/adria/dev/pyenvs/ckan/src/ckanext-scheming/ckanext/scheming/templates',
# u'/home/adria/dev/pyenvs/ckan/src/ckanext-abc/ckanext/abc/templates',
# ]
```
Apart from template loading issues this is likely to affect everywhere where the order of plugins is important, eg chained actions, chained auth functions.
### Root cause
After looking at [ckan/plugins/core.py](https://github.com/ckan/ckan/blob/master/ckan/plugins/core.py) my current thinking is that this is *not* related to the loading of the plugins. AFAICT we´ve always loaded them in the order that they are defined in `ckan.plugins`. It´s the actual iterator returned by `PluginImplementations` that changed the order of the returned plugins at some point between the two versions (pyutilib.component.core==4.6.4 in CKAN 2.8, PyUtilib==5.7.1 in CKAN 2.9). We are importing this class directly from Pyutillib. The only work done on this code between these two versions was https://github.com/ckan/ckan/pull/4886, and I don´t think it should affect the ordering (apart from upgrading the library of course)
### What should we do?
My ideas so far:
1. Change nothing and assume this is the new behaviour, *but* documenting it in the relevant places (2.9 Changelog, plugins docs, mail to ckan-dev). I don´t think we can leave a change like this undocumented
2. Create our own `PluginImplementations` wrapper that restores the old ordering (maybe optionally based on a config option). We would need to override the [`__iter__()`](https://github.com/PyUtilib/pyutilib/blob/5.7.3/pyutilib/component/core/core.py#L222) method, not sure how easy that is
Any thoughts or other ideas on what to do? @ckan/core
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckan/plugins/core.py`
Content:
```
1 # encoding: utf-8
2
3 '''
4 Provides plugin services to the CKAN
5 '''
6
7 from contextlib import contextmanager
8 import logging
9 from pkg_resources import iter_entry_points
10 from pyutilib.component.core import PluginGlobals, implements
11 from pyutilib.component.core import ExtensionPoint as PluginImplementations
12 from pyutilib.component.core import SingletonPlugin as _pca_SingletonPlugin
13 from pyutilib.component.core import Plugin as _pca_Plugin
14 from ckan.common import asbool
15 from six import string_types
16
17 from ckan.plugins import interfaces
18
19 from ckan.common import config
20
21
22 __all__ = [
23 'PluginImplementations', 'implements',
24 'PluginNotFoundException', 'Plugin', 'SingletonPlugin',
25 'load', 'load_all', 'unload', 'unload_all',
26 'get_plugin', 'plugins_update',
27 'use_plugin', 'plugin_loaded',
28 ]
29
30 log = logging.getLogger(__name__)
31
32 # Entry point group.
33 PLUGINS_ENTRY_POINT_GROUP = 'ckan.plugins'
34
35 # Entry point group for system plugins (those that are part of core ckan and
36 # do not need to be explicitly enabled by the user)
37 SYSTEM_PLUGINS_ENTRY_POINT_GROUP = 'ckan.system_plugins'
38
39 # Entry point for test plugins.
40 TEST_PLUGINS_ENTRY_POINT_GROUP = 'ckan.test_plugins'
41
42 GROUPS = [
43 PLUGINS_ENTRY_POINT_GROUP,
44 SYSTEM_PLUGINS_ENTRY_POINT_GROUP,
45 TEST_PLUGINS_ENTRY_POINT_GROUP,
46 ]
47 # These lists are used to ensure that the correct extensions are enabled.
48 _PLUGINS = []
49 _PLUGINS_CLASS = []
50
51 # To aid retrieving extensions by name
52 _PLUGINS_SERVICE = {}
53
54
55 @contextmanager
56 def use_plugin(*plugins):
57 '''Load plugin(s) for testing purposes
58
59 e.g.
60 ```
61 import ckan.plugins as p
62 with p.use_plugin('my_plugin') as my_plugin:
63 # run tests with plugin loaded
64 ```
65 '''
66
67 p = load(*plugins)
68 try:
69 yield p
70 finally:
71 unload(*plugins)
72
73
74 class PluginNotFoundException(Exception):
75 '''
76 Raised when a requested plugin cannot be found.
77 '''
78
79
80 class Plugin(_pca_Plugin):
81 '''
82 Base class for plugins which require multiple instances.
83
84 Unless you need multiple instances of your plugin object you should
85 probably use SingletonPlugin.
86 '''
87
88
89 class SingletonPlugin(_pca_SingletonPlugin):
90 '''
91 Base class for plugins which are singletons (ie most of them)
92
93 One singleton instance of this class will be created when the plugin is
94 loaded. Subsequent calls to the class constructor will always return the
95 same singleton instance.
96 '''
97
98
99 def get_plugin(plugin):
100 ''' Get an instance of a active plugin by name. This is helpful for
101 testing. '''
102 if plugin in _PLUGINS_SERVICE:
103 return _PLUGINS_SERVICE[plugin]
104
105
106 def plugins_update():
107 ''' This is run when plugins have been loaded or unloaded and allows us
108 to run any specific code to ensure that the new plugin setting are
109 correctly setup '''
110
111 # It is posible for extra SingletonPlugin extensions to be activated if
112 # the file containing them is imported, for example if two or more
113 # extensions are defined in the same file. Therefore we do a sanity
114 # check and disable any that should not be active.
115 for env in PluginGlobals.env.values():
116 for service, id_ in env.singleton_services.items():
117 if service not in _PLUGINS_CLASS:
118 PluginGlobals.plugin_instances[id_].deactivate()
119
120 # Reset CKAN to reflect the currently enabled extensions.
121 import ckan.config.environment as environment
122 environment.update_config()
123
124
125 def load_all():
126 '''
127 Load all plugins listed in the 'ckan.plugins' config directive.
128 '''
129 # Clear any loaded plugins
130 unload_all()
131
132 plugins = config.get('ckan.plugins', '').split() + find_system_plugins()
133 # Add the synchronous search plugin, unless already loaded or
134 # explicitly disabled
135 if 'synchronous_search' not in plugins and \
136 asbool(config.get('ckan.search.automatic_indexing', True)):
137 log.debug('Loading the synchronous search plugin')
138 plugins.append('synchronous_search')
139
140 load(*plugins)
141
142
143 def load(*plugins):
144 '''
145 Load named plugin(s).
146 '''
147 output = []
148
149 observers = PluginImplementations(interfaces.IPluginObserver)
150 for plugin in plugins:
151 if plugin in _PLUGINS:
152 raise Exception('Plugin `%s` already loaded' % plugin)
153
154 service = _get_service(plugin)
155 for observer_plugin in observers:
156 observer_plugin.before_load(service)
157 service.activate()
158 for observer_plugin in observers:
159 observer_plugin.after_load(service)
160
161 _PLUGINS.append(plugin)
162 _PLUGINS_CLASS.append(service.__class__)
163
164 if isinstance(service, SingletonPlugin):
165 _PLUGINS_SERVICE[plugin] = service
166
167 output.append(service)
168 plugins_update()
169
170 # Return extension instance if only one was loaded. If more that one
171 # has been requested then a list of instances is returned in the order
172 # they were asked for.
173 if len(output) == 1:
174 return output[0]
175 return output
176
177
178 def unload_all():
179 '''
180 Unload (deactivate) all loaded plugins in the reverse order that they
181 were loaded.
182 '''
183 unload(*reversed(_PLUGINS))
184
185
186 def unload(*plugins):
187 '''
188 Unload named plugin(s).
189 '''
190
191 observers = PluginImplementations(interfaces.IPluginObserver)
192
193 for plugin in plugins:
194 if plugin in _PLUGINS:
195 _PLUGINS.remove(plugin)
196 if plugin in _PLUGINS_SERVICE:
197 del _PLUGINS_SERVICE[plugin]
198 else:
199 raise Exception('Cannot unload plugin `%s`' % plugin)
200
201 service = _get_service(plugin)
202 for observer_plugin in observers:
203 observer_plugin.before_unload(service)
204
205 service.deactivate()
206
207 _PLUGINS_CLASS.remove(service.__class__)
208
209 for observer_plugin in observers:
210 observer_plugin.after_unload(service)
211 plugins_update()
212
213
214 def plugin_loaded(name):
215 '''
216 See if a particular plugin is loaded.
217 '''
218 if name in _PLUGINS:
219 return True
220 return False
221
222
223 def find_system_plugins():
224 '''
225 Return all plugins in the ckan.system_plugins entry point group.
226
227 These are essential for operation and therefore cannot be
228 enabled/disabled through the configuration file.
229 '''
230
231 eps = []
232 for ep in iter_entry_points(group=SYSTEM_PLUGINS_ENTRY_POINT_GROUP):
233 ep.load()
234 eps.append(ep.name)
235 return eps
236
237
238 def _get_service(plugin_name):
239 '''
240 Return a service (ie an instance of a plugin class).
241
242 :param plugin_name: the name of a plugin entry point
243 :type plugin_name: string
244
245 :return: the service object
246 '''
247
248 if isinstance(plugin_name, string_types):
249 for group in GROUPS:
250 iterator = iter_entry_points(
251 group=group,
252 name=plugin_name
253 )
254 plugin = next(iterator, None)
255 if plugin:
256 return plugin.load()(name=plugin_name)
257 raise PluginNotFoundException(plugin_name)
258 else:
259 raise TypeError('Expected a plugin name', plugin_name)
260
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckan/plugins/core.py b/ckan/plugins/core.py
--- a/ckan/plugins/core.py
+++ b/ckan/plugins/core.py
@@ -8,7 +8,7 @@
import logging
from pkg_resources import iter_entry_points
from pyutilib.component.core import PluginGlobals, implements
-from pyutilib.component.core import ExtensionPoint as PluginImplementations
+from pyutilib.component.core import ExtensionPoint
from pyutilib.component.core import SingletonPlugin as _pca_SingletonPlugin
from pyutilib.component.core import Plugin as _pca_Plugin
from ckan.common import asbool
@@ -71,6 +71,21 @@
unload(*plugins)
+class PluginImplementations(ExtensionPoint):
+
+ def __iter__(self):
+ '''
+ When we upgraded pyutilib on CKAN 2.9 the order in which
+ plugins were returned by `PluginImplementations` changed
+ so we use this wrapper to maintain the previous order
+ (which is the same as the ckan.plugins config option)
+ '''
+
+ iterator = super(PluginImplementations, self).__iter__()
+
+ return reversed(list(iterator))
+
+
class PluginNotFoundException(Exception):
'''
Raised when a requested plugin cannot be found.
|
{"golden_diff": "diff --git a/ckan/plugins/core.py b/ckan/plugins/core.py\n--- a/ckan/plugins/core.py\n+++ b/ckan/plugins/core.py\n@@ -8,7 +8,7 @@\n import logging\n from pkg_resources import iter_entry_points\n from pyutilib.component.core import PluginGlobals, implements\n-from pyutilib.component.core import ExtensionPoint as PluginImplementations\n+from pyutilib.component.core import ExtensionPoint\n from pyutilib.component.core import SingletonPlugin as _pca_SingletonPlugin\n from pyutilib.component.core import Plugin as _pca_Plugin\n from ckan.common import asbool\n@@ -71,6 +71,21 @@\n unload(*plugins)\n \n \n+class PluginImplementations(ExtensionPoint):\n+\n+ def __iter__(self):\n+ '''\n+ When we upgraded pyutilib on CKAN 2.9 the order in which\n+ plugins were returned by `PluginImplementations` changed\n+ so we use this wrapper to maintain the previous order\n+ (which is the same as the ckan.plugins config option)\n+ '''\n+\n+ iterator = super(PluginImplementations, self).__iter__()\n+\n+ return reversed(list(iterator))\n+\n+\n class PluginNotFoundException(Exception):\n '''\n Raised when a requested plugin cannot be found.\n", "issue": "CKAN 2.9 changes order in which plugins are returned by PluginImplementations\n## Summary\r\nI'm porting a big project from CKAN 2.8 to 2.9. My plugin overrides a template from ckanext-scheming to customize the form. After upgrade, my changes weren't reflected because my custom template wasn't loaded.\r\nOnly after changing the ordering of the plugins in `ckan.plugins` it was picked up.\r\n\r\nSo on CKAN <= 2.8, in order for plugin `abc` to override `scheming_datasets` you needed:\r\n\r\n```\r\nckan.plugins = abc scheming_datasets \r\n```\r\n\r\nIn CKAN 2.9, you need:\r\n```\r\nckan.plugins = scheming_datasets abc\r\n```\r\n\r\n### Why it is important\r\n\r\nThis is pretty significant change, which AFAICT wasn't mentioned in the changelog.\r\n\r\nAfter initial investigation it looks like the issue is not how we parse the config option or load the plugins, but how the `PluginImplementations` iterator returns them. We use them in all places where we let plugins integrate with CKAN core. For instance in `environment.py` we call:\r\n\r\n```python\r\n for plugin in p.PluginImplementations(p.IConfigurer): \r\n plugin.update_config(config) \r\n```\r\nThis is one is relevant to my issue, as it registers template directories from plugins and stores them on a list in `config['extra_template_paths']`. Order is important, as the first template path found will be used to render.\r\n\r\nAt [this point](https://github.com/ckan/ckan/blob/8eec3e27c320baf29e0d99b2ce20ed14ae10b0d3/ckan/config/environment.py#L173) we get the following behaviour:\r\n\r\n* On CKAN 2.8:\r\n\r\n```python \r\n[plugin for plugin in p.PluginImplementations(p.IConfigurer)]\r\n\r\n# [<Plugin AbcPlugin 'abc'>, <Plugin SchemingDatasetsPlugin 'scheming_datasets'>]\r\n\r\nconfig['extra_template_paths'].split(',')\r\n\r\n# [\r\n# u'/home/adria/dev/pyenvs/ckan/src/ckanext-abc/ckanext/abc/templates',\r\n# u'/home/adria/dev/pyenvs/ckan/src/ckanext-scheming/ckanext/scheming/templates',\r\n# ]\r\n```\r\n* On CKAN 2.9:\r\n\r\n```python\r\n[plugin for plugin in p.PluginImplementations(p.IConfigurer)]\r\n\r\n# [<Plugin SchemingDatasetsPlugin 'scheming_datasets'>, <Plugin AbcPlugin 'abc'>]\r\n\r\nconfig['extra_template_paths'].split(',')\r\n\r\n# [\r\n# u'/home/adria/dev/pyenvs/ckan/src/ckanext-scheming/ckanext/scheming/templates',\r\n# u'/home/adria/dev/pyenvs/ckan/src/ckanext-abc/ckanext/abc/templates',\r\n# ]\r\n```\r\n\r\nApart from template loading issues this is likely to affect everywhere where the order of plugins is important, eg chained actions, chained auth functions.\r\n\r\n### Root cause\r\n\r\nAfter looking at [ckan/plugins/core.py](https://github.com/ckan/ckan/blob/master/ckan/plugins/core.py) my current thinking is that this is *not* related to the loading of the plugins. AFAICT we\u00b4ve always loaded them in the order that they are defined in `ckan.plugins`. It\u00b4s the actual iterator returned by `PluginImplementations` that changed the order of the returned plugins at some point between the two versions (pyutilib.component.core==4.6.4 in CKAN 2.8, PyUtilib==5.7.1 in CKAN 2.9). We are importing this class directly from Pyutillib. The only work done on this code between these two versions was https://github.com/ckan/ckan/pull/4886, and I don\u00b4t think it should affect the ordering (apart from upgrading the library of course)\r\n\r\n### What should we do?\r\n\r\nMy ideas so far:\r\n\r\n1. Change nothing and assume this is the new behaviour, *but* documenting it in the relevant places (2.9 Changelog, plugins docs, mail to ckan-dev). I don\u00b4t think we can leave a change like this undocumented\r\n2. Create our own `PluginImplementations` wrapper that restores the old ordering (maybe optionally based on a config option). We would need to override the [`__iter__()`](https://github.com/PyUtilib/pyutilib/blob/5.7.3/pyutilib/component/core/core.py#L222) method, not sure how easy that is\r\n\r\nAny thoughts or other ideas on what to do? @ckan/core \r\n\n", "before_files": [{"content": "# encoding: utf-8\n\n'''\nProvides plugin services to the CKAN\n'''\n\nfrom contextlib import contextmanager\nimport logging\nfrom pkg_resources import iter_entry_points\nfrom pyutilib.component.core import PluginGlobals, implements\nfrom pyutilib.component.core import ExtensionPoint as PluginImplementations\nfrom pyutilib.component.core import SingletonPlugin as _pca_SingletonPlugin\nfrom pyutilib.component.core import Plugin as _pca_Plugin\nfrom ckan.common import asbool\nfrom six import string_types\n\nfrom ckan.plugins import interfaces\n\nfrom ckan.common import config\n\n\n__all__ = [\n 'PluginImplementations', 'implements',\n 'PluginNotFoundException', 'Plugin', 'SingletonPlugin',\n 'load', 'load_all', 'unload', 'unload_all',\n 'get_plugin', 'plugins_update',\n 'use_plugin', 'plugin_loaded',\n]\n\nlog = logging.getLogger(__name__)\n\n# Entry point group.\nPLUGINS_ENTRY_POINT_GROUP = 'ckan.plugins'\n\n# Entry point group for system plugins (those that are part of core ckan and\n# do not need to be explicitly enabled by the user)\nSYSTEM_PLUGINS_ENTRY_POINT_GROUP = 'ckan.system_plugins'\n\n# Entry point for test plugins.\nTEST_PLUGINS_ENTRY_POINT_GROUP = 'ckan.test_plugins'\n\nGROUPS = [\n PLUGINS_ENTRY_POINT_GROUP,\n SYSTEM_PLUGINS_ENTRY_POINT_GROUP,\n TEST_PLUGINS_ENTRY_POINT_GROUP,\n]\n# These lists are used to ensure that the correct extensions are enabled.\n_PLUGINS = []\n_PLUGINS_CLASS = []\n\n# To aid retrieving extensions by name\n_PLUGINS_SERVICE = {}\n\n\n@contextmanager\ndef use_plugin(*plugins):\n '''Load plugin(s) for testing purposes\n\n e.g.\n ```\n import ckan.plugins as p\n with p.use_plugin('my_plugin') as my_plugin:\n # run tests with plugin loaded\n ```\n '''\n\n p = load(*plugins)\n try:\n yield p\n finally:\n unload(*plugins)\n\n\nclass PluginNotFoundException(Exception):\n '''\n Raised when a requested plugin cannot be found.\n '''\n\n\nclass Plugin(_pca_Plugin):\n '''\n Base class for plugins which require multiple instances.\n\n Unless you need multiple instances of your plugin object you should\n probably use SingletonPlugin.\n '''\n\n\nclass SingletonPlugin(_pca_SingletonPlugin):\n '''\n Base class for plugins which are singletons (ie most of them)\n\n One singleton instance of this class will be created when the plugin is\n loaded. Subsequent calls to the class constructor will always return the\n same singleton instance.\n '''\n\n\ndef get_plugin(plugin):\n ''' Get an instance of a active plugin by name. This is helpful for\n testing. '''\n if plugin in _PLUGINS_SERVICE:\n return _PLUGINS_SERVICE[plugin]\n\n\ndef plugins_update():\n ''' This is run when plugins have been loaded or unloaded and allows us\n to run any specific code to ensure that the new plugin setting are\n correctly setup '''\n\n # It is posible for extra SingletonPlugin extensions to be activated if\n # the file containing them is imported, for example if two or more\n # extensions are defined in the same file. Therefore we do a sanity\n # check and disable any that should not be active.\n for env in PluginGlobals.env.values():\n for service, id_ in env.singleton_services.items():\n if service not in _PLUGINS_CLASS:\n PluginGlobals.plugin_instances[id_].deactivate()\n\n # Reset CKAN to reflect the currently enabled extensions.\n import ckan.config.environment as environment\n environment.update_config()\n\n\ndef load_all():\n '''\n Load all plugins listed in the 'ckan.plugins' config directive.\n '''\n # Clear any loaded plugins\n unload_all()\n\n plugins = config.get('ckan.plugins', '').split() + find_system_plugins()\n # Add the synchronous search plugin, unless already loaded or\n # explicitly disabled\n if 'synchronous_search' not in plugins and \\\n asbool(config.get('ckan.search.automatic_indexing', True)):\n log.debug('Loading the synchronous search plugin')\n plugins.append('synchronous_search')\n\n load(*plugins)\n\n\ndef load(*plugins):\n '''\n Load named plugin(s).\n '''\n output = []\n\n observers = PluginImplementations(interfaces.IPluginObserver)\n for plugin in plugins:\n if plugin in _PLUGINS:\n raise Exception('Plugin `%s` already loaded' % plugin)\n\n service = _get_service(plugin)\n for observer_plugin in observers:\n observer_plugin.before_load(service)\n service.activate()\n for observer_plugin in observers:\n observer_plugin.after_load(service)\n\n _PLUGINS.append(plugin)\n _PLUGINS_CLASS.append(service.__class__)\n\n if isinstance(service, SingletonPlugin):\n _PLUGINS_SERVICE[plugin] = service\n\n output.append(service)\n plugins_update()\n\n # Return extension instance if only one was loaded. If more that one\n # has been requested then a list of instances is returned in the order\n # they were asked for.\n if len(output) == 1:\n return output[0]\n return output\n\n\ndef unload_all():\n '''\n Unload (deactivate) all loaded plugins in the reverse order that they\n were loaded.\n '''\n unload(*reversed(_PLUGINS))\n\n\ndef unload(*plugins):\n '''\n Unload named plugin(s).\n '''\n\n observers = PluginImplementations(interfaces.IPluginObserver)\n\n for plugin in plugins:\n if plugin in _PLUGINS:\n _PLUGINS.remove(plugin)\n if plugin in _PLUGINS_SERVICE:\n del _PLUGINS_SERVICE[plugin]\n else:\n raise Exception('Cannot unload plugin `%s`' % plugin)\n\n service = _get_service(plugin)\n for observer_plugin in observers:\n observer_plugin.before_unload(service)\n\n service.deactivate()\n\n _PLUGINS_CLASS.remove(service.__class__)\n\n for observer_plugin in observers:\n observer_plugin.after_unload(service)\n plugins_update()\n\n\ndef plugin_loaded(name):\n '''\n See if a particular plugin is loaded.\n '''\n if name in _PLUGINS:\n return True\n return False\n\n\ndef find_system_plugins():\n '''\n Return all plugins in the ckan.system_plugins entry point group.\n\n These are essential for operation and therefore cannot be\n enabled/disabled through the configuration file.\n '''\n\n eps = []\n for ep in iter_entry_points(group=SYSTEM_PLUGINS_ENTRY_POINT_GROUP):\n ep.load()\n eps.append(ep.name)\n return eps\n\n\ndef _get_service(plugin_name):\n '''\n Return a service (ie an instance of a plugin class).\n\n :param plugin_name: the name of a plugin entry point\n :type plugin_name: string\n\n :return: the service object\n '''\n\n if isinstance(plugin_name, string_types):\n for group in GROUPS:\n iterator = iter_entry_points(\n group=group,\n name=plugin_name\n )\n plugin = next(iterator, None)\n if plugin:\n return plugin.load()(name=plugin_name)\n raise PluginNotFoundException(plugin_name)\n else:\n raise TypeError('Expected a plugin name', plugin_name)\n", "path": "ckan/plugins/core.py"}], "after_files": [{"content": "# encoding: utf-8\n\n'''\nProvides plugin services to the CKAN\n'''\n\nfrom contextlib import contextmanager\nimport logging\nfrom pkg_resources import iter_entry_points\nfrom pyutilib.component.core import PluginGlobals, implements\nfrom pyutilib.component.core import ExtensionPoint\nfrom pyutilib.component.core import SingletonPlugin as _pca_SingletonPlugin\nfrom pyutilib.component.core import Plugin as _pca_Plugin\nfrom ckan.common import asbool\nfrom six import string_types\n\nfrom ckan.plugins import interfaces\n\nfrom ckan.common import config\n\n\n__all__ = [\n 'PluginImplementations', 'implements',\n 'PluginNotFoundException', 'Plugin', 'SingletonPlugin',\n 'load', 'load_all', 'unload', 'unload_all',\n 'get_plugin', 'plugins_update',\n 'use_plugin', 'plugin_loaded',\n]\n\nlog = logging.getLogger(__name__)\n\n# Entry point group.\nPLUGINS_ENTRY_POINT_GROUP = 'ckan.plugins'\n\n# Entry point group for system plugins (those that are part of core ckan and\n# do not need to be explicitly enabled by the user)\nSYSTEM_PLUGINS_ENTRY_POINT_GROUP = 'ckan.system_plugins'\n\n# Entry point for test plugins.\nTEST_PLUGINS_ENTRY_POINT_GROUP = 'ckan.test_plugins'\n\nGROUPS = [\n PLUGINS_ENTRY_POINT_GROUP,\n SYSTEM_PLUGINS_ENTRY_POINT_GROUP,\n TEST_PLUGINS_ENTRY_POINT_GROUP,\n]\n# These lists are used to ensure that the correct extensions are enabled.\n_PLUGINS = []\n_PLUGINS_CLASS = []\n\n# To aid retrieving extensions by name\n_PLUGINS_SERVICE = {}\n\n\n@contextmanager\ndef use_plugin(*plugins):\n '''Load plugin(s) for testing purposes\n\n e.g.\n ```\n import ckan.plugins as p\n with p.use_plugin('my_plugin') as my_plugin:\n # run tests with plugin loaded\n ```\n '''\n\n p = load(*plugins)\n try:\n yield p\n finally:\n unload(*plugins)\n\n\nclass PluginImplementations(ExtensionPoint):\n\n def __iter__(self):\n '''\n When we upgraded pyutilib on CKAN 2.9 the order in which\n plugins were returned by `PluginImplementations` changed\n so we use this wrapper to maintain the previous order\n (which is the same as the ckan.plugins config option)\n '''\n\n iterator = super(PluginImplementations, self).__iter__()\n\n return reversed(list(iterator))\n\n\nclass PluginNotFoundException(Exception):\n '''\n Raised when a requested plugin cannot be found.\n '''\n\n\nclass Plugin(_pca_Plugin):\n '''\n Base class for plugins which require multiple instances.\n\n Unless you need multiple instances of your plugin object you should\n probably use SingletonPlugin.\n '''\n\n\nclass SingletonPlugin(_pca_SingletonPlugin):\n '''\n Base class for plugins which are singletons (ie most of them)\n\n One singleton instance of this class will be created when the plugin is\n loaded. Subsequent calls to the class constructor will always return the\n same singleton instance.\n '''\n\n\ndef get_plugin(plugin):\n ''' Get an instance of a active plugin by name. This is helpful for\n testing. '''\n if plugin in _PLUGINS_SERVICE:\n return _PLUGINS_SERVICE[plugin]\n\n\ndef plugins_update():\n ''' This is run when plugins have been loaded or unloaded and allows us\n to run any specific code to ensure that the new plugin setting are\n correctly setup '''\n\n # It is posible for extra SingletonPlugin extensions to be activated if\n # the file containing them is imported, for example if two or more\n # extensions are defined in the same file. Therefore we do a sanity\n # check and disable any that should not be active.\n for env in PluginGlobals.env.values():\n for service, id_ in env.singleton_services.items():\n if service not in _PLUGINS_CLASS:\n PluginGlobals.plugin_instances[id_].deactivate()\n\n # Reset CKAN to reflect the currently enabled extensions.\n import ckan.config.environment as environment\n environment.update_config()\n\n\ndef load_all():\n '''\n Load all plugins listed in the 'ckan.plugins' config directive.\n '''\n # Clear any loaded plugins\n unload_all()\n\n plugins = config.get('ckan.plugins', '').split() + find_system_plugins()\n # Add the synchronous search plugin, unless already loaded or\n # explicitly disabled\n if 'synchronous_search' not in plugins and \\\n asbool(config.get('ckan.search.automatic_indexing', True)):\n log.debug('Loading the synchronous search plugin')\n plugins.append('synchronous_search')\n\n load(*plugins)\n\n\ndef load(*plugins):\n '''\n Load named plugin(s).\n '''\n output = []\n\n observers = PluginImplementations(interfaces.IPluginObserver)\n for plugin in plugins:\n if plugin in _PLUGINS:\n raise Exception('Plugin `%s` already loaded' % plugin)\n\n service = _get_service(plugin)\n for observer_plugin in observers:\n observer_plugin.before_load(service)\n service.activate()\n for observer_plugin in observers:\n observer_plugin.after_load(service)\n\n _PLUGINS.append(plugin)\n _PLUGINS_CLASS.append(service.__class__)\n\n if isinstance(service, SingletonPlugin):\n _PLUGINS_SERVICE[plugin] = service\n\n output.append(service)\n plugins_update()\n\n # Return extension instance if only one was loaded. If more that one\n # has been requested then a list of instances is returned in the order\n # they were asked for.\n if len(output) == 1:\n return output[0]\n return output\n\n\ndef unload_all():\n '''\n Unload (deactivate) all loaded plugins in the reverse order that they\n were loaded.\n '''\n unload(*reversed(_PLUGINS))\n\n\ndef unload(*plugins):\n '''\n Unload named plugin(s).\n '''\n\n observers = PluginImplementations(interfaces.IPluginObserver)\n\n for plugin in plugins:\n if plugin in _PLUGINS:\n _PLUGINS.remove(plugin)\n if plugin in _PLUGINS_SERVICE:\n del _PLUGINS_SERVICE[plugin]\n else:\n raise Exception('Cannot unload plugin `%s`' % plugin)\n\n service = _get_service(plugin)\n for observer_plugin in observers:\n observer_plugin.before_unload(service)\n\n service.deactivate()\n\n _PLUGINS_CLASS.remove(service.__class__)\n\n for observer_plugin in observers:\n observer_plugin.after_unload(service)\n plugins_update()\n\n\ndef plugin_loaded(name):\n '''\n See if a particular plugin is loaded.\n '''\n if name in _PLUGINS:\n return True\n return False\n\n\ndef find_system_plugins():\n '''\n Return all plugins in the ckan.system_plugins entry point group.\n\n These are essential for operation and therefore cannot be\n enabled/disabled through the configuration file.\n '''\n\n eps = []\n for ep in iter_entry_points(group=SYSTEM_PLUGINS_ENTRY_POINT_GROUP):\n ep.load()\n eps.append(ep.name)\n return eps\n\n\ndef _get_service(plugin_name):\n '''\n Return a service (ie an instance of a plugin class).\n\n :param plugin_name: the name of a plugin entry point\n :type plugin_name: string\n\n :return: the service object\n '''\n\n if isinstance(plugin_name, string_types):\n for group in GROUPS:\n iterator = iter_entry_points(\n group=group,\n name=plugin_name\n )\n plugin = next(iterator, None)\n if plugin:\n return plugin.load()(name=plugin_name)\n raise PluginNotFoundException(plugin_name)\n else:\n raise TypeError('Expected a plugin name', plugin_name)\n", "path": "ckan/plugins/core.py"}]}
| 3,512 | 280 |
gh_patches_debug_26870
|
rasdani/github-patches
|
git_diff
|
qutebrowser__qutebrowser-1939
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
QtWebKit: Handle visibility API
See https://github.com/OtterBrowser/otter-browser/commit/6500972092a562e23271ccf9aff4fdeed21d8290
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qutebrowser/browser/webkit/webview.py`
Content:
```
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2014-2016 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """The main browser widgets."""
21
22 import sys
23
24 from PyQt5.QtCore import pyqtSignal, pyqtSlot, Qt, QUrl
25 from PyQt5.QtGui import QPalette
26 from PyQt5.QtWidgets import QStyleFactory
27 from PyQt5.QtWebKit import QWebSettings
28 from PyQt5.QtWebKitWidgets import QWebView, QWebPage, QWebFrame
29
30 from qutebrowser.config import config
31 from qutebrowser.keyinput import modeman
32 from qutebrowser.utils import log, usertypes, utils, qtutils, objreg, debug
33 from qutebrowser.browser.webkit import webpage
34
35
36 class WebView(QWebView):
37
38 """Custom QWebView subclass with qutebrowser-specific features.
39
40 Attributes:
41 tab: The WebKitTab object for this WebView
42 hintmanager: The HintManager instance for this view.
43 scroll_pos: The current scroll position as (x%, y%) tuple.
44 win_id: The window ID of the view.
45 _tab_id: The tab ID of the view.
46 _old_scroll_pos: The old scroll position.
47
48 Signals:
49 scroll_pos_changed: Scroll percentage of current tab changed.
50 arg 1: x-position in %.
51 arg 2: y-position in %.
52 shutting_down: Emitted when the view is shutting down.
53 """
54
55 scroll_pos_changed = pyqtSignal(int, int)
56 shutting_down = pyqtSignal()
57
58 def __init__(self, win_id, tab_id, tab, parent=None):
59 super().__init__(parent)
60 if sys.platform == 'darwin' and qtutils.version_check('5.4'):
61 # WORKAROUND for https://bugreports.qt.io/browse/QTBUG-42948
62 # See https://github.com/The-Compiler/qutebrowser/issues/462
63 self.setStyle(QStyleFactory.create('Fusion'))
64 # FIXME:qtwebengine this is only used to set the zoom factor from
65 # the QWebPage - we should get rid of it somehow (signals?)
66 self.tab = tab
67 self.win_id = win_id
68 self.scroll_pos = (-1, -1)
69 self._old_scroll_pos = (-1, -1)
70 self._set_bg_color()
71 self._tab_id = tab_id
72
73 page = webpage.BrowserPage(self.win_id, self._tab_id, tab.data,
74 parent=self)
75 self.setPage(page)
76
77 mode_manager = objreg.get('mode-manager', scope='window',
78 window=win_id)
79 mode_manager.entered.connect(self.on_mode_entered)
80 mode_manager.left.connect(self.on_mode_left)
81 objreg.get('config').changed.connect(self._set_bg_color)
82
83 def __repr__(self):
84 url = utils.elide(self.url().toDisplayString(QUrl.EncodeUnicode), 100)
85 return utils.get_repr(self, tab_id=self._tab_id, url=url)
86
87 def __del__(self):
88 # Explicitly releasing the page here seems to prevent some segfaults
89 # when quitting.
90 # Copied from:
91 # https://code.google.com/p/webscraping/source/browse/webkit.py#325
92 try:
93 self.setPage(None)
94 except RuntimeError:
95 # It seems sometimes Qt has already deleted the QWebView and we
96 # get: RuntimeError: wrapped C/C++ object of type WebView has been
97 # deleted
98 pass
99
100 @config.change_filter('colors', 'webpage.bg')
101 def _set_bg_color(self):
102 """Set the webpage background color as configured.
103
104 FIXME:qtwebengine
105 For QtWebEngine, doing the same has no effect, so we do it in here.
106 """
107 col = config.get('colors', 'webpage.bg')
108 palette = self.palette()
109 if col is None:
110 col = self.style().standardPalette().color(QPalette.Base)
111 palette.setColor(QPalette.Base, col)
112 self.setPalette(palette)
113
114 def shutdown(self):
115 """Shut down the webview."""
116 self.shutting_down.emit()
117 # We disable javascript because that prevents some segfaults when
118 # quitting it seems.
119 log.destroy.debug("Shutting down {!r}.".format(self))
120 settings = self.settings()
121 settings.setAttribute(QWebSettings.JavascriptEnabled, False)
122 self.stop()
123 self.page().shutdown()
124
125 def openurl(self, url):
126 """Open a URL in the browser.
127
128 Args:
129 url: The URL to load as QUrl
130 """
131 self.load(url)
132 if url.scheme() == 'qute':
133 frame = self.page().mainFrame()
134 frame.javaScriptWindowObjectCleared.connect(self.add_js_bridge)
135
136 @pyqtSlot()
137 def add_js_bridge(self):
138 """Add the javascript bridge for qute:... pages."""
139 frame = self.sender()
140 if not isinstance(frame, QWebFrame):
141 log.webview.error("Got non-QWebFrame {!r} in "
142 "add_js_bridge!".format(frame))
143 return
144
145 if frame.url().scheme() == 'qute':
146 bridge = objreg.get('js-bridge')
147 frame.addToJavaScriptWindowObject('qute', bridge)
148
149 @pyqtSlot(usertypes.KeyMode)
150 def on_mode_entered(self, mode):
151 """Ignore attempts to focus the widget if in any status-input mode.
152
153 FIXME:qtwebengine
154 For QtWebEngine, doing the same has no effect, so we do it in here.
155 """
156 if mode in [usertypes.KeyMode.command, usertypes.KeyMode.prompt,
157 usertypes.KeyMode.yesno]:
158 log.webview.debug("Ignoring focus because mode {} was "
159 "entered.".format(mode))
160 self.setFocusPolicy(Qt.NoFocus)
161
162 @pyqtSlot(usertypes.KeyMode)
163 def on_mode_left(self, mode):
164 """Restore focus policy if status-input modes were left.
165
166 FIXME:qtwebengine
167 For QtWebEngine, doing the same has no effect, so we do it in here.
168 """
169 if mode in [usertypes.KeyMode.command, usertypes.KeyMode.prompt,
170 usertypes.KeyMode.yesno]:
171 log.webview.debug("Restoring focus policy because mode {} was "
172 "left.".format(mode))
173 self.setFocusPolicy(Qt.WheelFocus)
174
175 def createWindow(self, wintype):
176 """Called by Qt when a page wants to create a new window.
177
178 This function is called from the createWindow() method of the
179 associated QWebPage, each time the page wants to create a new window of
180 the given type. This might be the result, for example, of a JavaScript
181 request to open a document in a new window.
182
183 Args:
184 wintype: This enum describes the types of window that can be
185 created by the createWindow() function.
186
187 QWebPage::WebBrowserWindow: The window is a regular web
188 browser window.
189 QWebPage::WebModalDialog: The window acts as modal dialog.
190
191 Return:
192 The new QWebView object.
193 """
194 debug_type = debug.qenum_key(QWebPage, wintype)
195 log.webview.debug("createWindow with type {}".format(debug_type))
196 if wintype == QWebPage.WebModalDialog:
197 log.webview.warning("WebModalDialog requested, but we don't "
198 "support that!")
199 tabbed_browser = objreg.get('tabbed-browser', scope='window',
200 window=self.win_id)
201 # pylint: disable=protected-access
202 return tabbed_browser.tabopen(background=False)._widget
203
204 def paintEvent(self, e):
205 """Extend paintEvent to emit a signal if the scroll position changed.
206
207 This is a bit of a hack: We listen to repaint requests here, in the
208 hope a repaint will always be requested when scrolling, and if the
209 scroll position actually changed, we emit a signal.
210
211 QtWebEngine has a scrollPositionChanged signal, so it's not needed
212 there.
213
214 Args:
215 e: The QPaintEvent.
216
217 Return:
218 The superclass event return value.
219 """
220 frame = self.page().mainFrame()
221 new_pos = (frame.scrollBarValue(Qt.Horizontal),
222 frame.scrollBarValue(Qt.Vertical))
223 if self._old_scroll_pos != new_pos:
224 self._old_scroll_pos = new_pos
225 m = (frame.scrollBarMaximum(Qt.Horizontal),
226 frame.scrollBarMaximum(Qt.Vertical))
227 perc = (round(100 * new_pos[0] / m[0]) if m[0] != 0 else 0,
228 round(100 * new_pos[1] / m[1]) if m[1] != 0 else 0)
229 self.scroll_pos = perc
230 self.scroll_pos_changed.emit(*perc)
231 # Let superclass handle the event
232 super().paintEvent(e)
233
234 def contextMenuEvent(self, e):
235 """Save a reference to the context menu so we can close it.
236
237 This is not needed for QtWebEngine, so it's in here.
238 """
239 menu = self.page().createStandardContextMenu()
240 self.shutting_down.connect(menu.close)
241 modeman.instance(self.win_id).entered.connect(menu.close)
242 menu.exec_(e.globalPos())
243
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qutebrowser/browser/webkit/webview.py b/qutebrowser/browser/webkit/webview.py
--- a/qutebrowser/browser/webkit/webview.py
+++ b/qutebrowser/browser/webkit/webview.py
@@ -72,6 +72,14 @@
page = webpage.BrowserPage(self.win_id, self._tab_id, tab.data,
parent=self)
+
+ try:
+ page.setVisibilityState(
+ QWebPage.VisibilityStateVisible if self.isVisible()
+ else QWebPage.VisibilityStateHidden)
+ except AttributeError:
+ pass
+
self.setPage(page)
mode_manager = objreg.get('mode-manager', scope='window',
@@ -240,3 +248,35 @@
self.shutting_down.connect(menu.close)
modeman.instance(self.win_id).entered.connect(menu.close)
menu.exec_(e.globalPos())
+
+ def showEvent(self, e):
+ """Extend showEvent to set the page visibility state to visible.
+
+ Args:
+ e: The QShowEvent.
+
+ Return:
+ The superclass event return value.
+ """
+ try:
+ self.page().setVisibilityState(QWebPage.VisibilityStateVisible)
+ except AttributeError:
+ pass
+
+ super().showEvent(e)
+
+ def hideEvent(self, e):
+ """Extend hideEvent to set the page visibility state to hidden.
+
+ Args:
+ e: The QHideEvent.
+
+ Return:
+ The superclass event return value.
+ """
+ try:
+ self.page().setVisibilityState(QWebPage.VisibilityStateHidden)
+ except AttributeError:
+ pass
+
+ super().hideEvent(e)
|
{"golden_diff": "diff --git a/qutebrowser/browser/webkit/webview.py b/qutebrowser/browser/webkit/webview.py\n--- a/qutebrowser/browser/webkit/webview.py\n+++ b/qutebrowser/browser/webkit/webview.py\n@@ -72,6 +72,14 @@\n \n page = webpage.BrowserPage(self.win_id, self._tab_id, tab.data,\n parent=self)\n+\n+ try:\n+ page.setVisibilityState(\n+ QWebPage.VisibilityStateVisible if self.isVisible()\n+ else QWebPage.VisibilityStateHidden)\n+ except AttributeError:\n+ pass\n+\n self.setPage(page)\n \n mode_manager = objreg.get('mode-manager', scope='window',\n@@ -240,3 +248,35 @@\n self.shutting_down.connect(menu.close)\n modeman.instance(self.win_id).entered.connect(menu.close)\n menu.exec_(e.globalPos())\n+\n+ def showEvent(self, e):\n+ \"\"\"Extend showEvent to set the page visibility state to visible.\n+\n+ Args:\n+ e: The QShowEvent.\n+\n+ Return:\n+ The superclass event return value.\n+ \"\"\"\n+ try:\n+ self.page().setVisibilityState(QWebPage.VisibilityStateVisible)\n+ except AttributeError:\n+ pass\n+\n+ super().showEvent(e)\n+\n+ def hideEvent(self, e):\n+ \"\"\"Extend hideEvent to set the page visibility state to hidden.\n+\n+ Args:\n+ e: The QHideEvent.\n+\n+ Return:\n+ The superclass event return value.\n+ \"\"\"\n+ try:\n+ self.page().setVisibilityState(QWebPage.VisibilityStateHidden)\n+ except AttributeError:\n+ pass\n+\n+ super().hideEvent(e)\n", "issue": "QtWebKit: Handle visibility API\nSee https://github.com/OtterBrowser/otter-browser/commit/6500972092a562e23271ccf9aff4fdeed21d8290\n\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2016 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"The main browser widgets.\"\"\"\n\nimport sys\n\nfrom PyQt5.QtCore import pyqtSignal, pyqtSlot, Qt, QUrl\nfrom PyQt5.QtGui import QPalette\nfrom PyQt5.QtWidgets import QStyleFactory\nfrom PyQt5.QtWebKit import QWebSettings\nfrom PyQt5.QtWebKitWidgets import QWebView, QWebPage, QWebFrame\n\nfrom qutebrowser.config import config\nfrom qutebrowser.keyinput import modeman\nfrom qutebrowser.utils import log, usertypes, utils, qtutils, objreg, debug\nfrom qutebrowser.browser.webkit import webpage\n\n\nclass WebView(QWebView):\n\n \"\"\"Custom QWebView subclass with qutebrowser-specific features.\n\n Attributes:\n tab: The WebKitTab object for this WebView\n hintmanager: The HintManager instance for this view.\n scroll_pos: The current scroll position as (x%, y%) tuple.\n win_id: The window ID of the view.\n _tab_id: The tab ID of the view.\n _old_scroll_pos: The old scroll position.\n\n Signals:\n scroll_pos_changed: Scroll percentage of current tab changed.\n arg 1: x-position in %.\n arg 2: y-position in %.\n shutting_down: Emitted when the view is shutting down.\n \"\"\"\n\n scroll_pos_changed = pyqtSignal(int, int)\n shutting_down = pyqtSignal()\n\n def __init__(self, win_id, tab_id, tab, parent=None):\n super().__init__(parent)\n if sys.platform == 'darwin' and qtutils.version_check('5.4'):\n # WORKAROUND for https://bugreports.qt.io/browse/QTBUG-42948\n # See https://github.com/The-Compiler/qutebrowser/issues/462\n self.setStyle(QStyleFactory.create('Fusion'))\n # FIXME:qtwebengine this is only used to set the zoom factor from\n # the QWebPage - we should get rid of it somehow (signals?)\n self.tab = tab\n self.win_id = win_id\n self.scroll_pos = (-1, -1)\n self._old_scroll_pos = (-1, -1)\n self._set_bg_color()\n self._tab_id = tab_id\n\n page = webpage.BrowserPage(self.win_id, self._tab_id, tab.data,\n parent=self)\n self.setPage(page)\n\n mode_manager = objreg.get('mode-manager', scope='window',\n window=win_id)\n mode_manager.entered.connect(self.on_mode_entered)\n mode_manager.left.connect(self.on_mode_left)\n objreg.get('config').changed.connect(self._set_bg_color)\n\n def __repr__(self):\n url = utils.elide(self.url().toDisplayString(QUrl.EncodeUnicode), 100)\n return utils.get_repr(self, tab_id=self._tab_id, url=url)\n\n def __del__(self):\n # Explicitly releasing the page here seems to prevent some segfaults\n # when quitting.\n # Copied from:\n # https://code.google.com/p/webscraping/source/browse/webkit.py#325\n try:\n self.setPage(None)\n except RuntimeError:\n # It seems sometimes Qt has already deleted the QWebView and we\n # get: RuntimeError: wrapped C/C++ object of type WebView has been\n # deleted\n pass\n\n @config.change_filter('colors', 'webpage.bg')\n def _set_bg_color(self):\n \"\"\"Set the webpage background color as configured.\n\n FIXME:qtwebengine\n For QtWebEngine, doing the same has no effect, so we do it in here.\n \"\"\"\n col = config.get('colors', 'webpage.bg')\n palette = self.palette()\n if col is None:\n col = self.style().standardPalette().color(QPalette.Base)\n palette.setColor(QPalette.Base, col)\n self.setPalette(palette)\n\n def shutdown(self):\n \"\"\"Shut down the webview.\"\"\"\n self.shutting_down.emit()\n # We disable javascript because that prevents some segfaults when\n # quitting it seems.\n log.destroy.debug(\"Shutting down {!r}.\".format(self))\n settings = self.settings()\n settings.setAttribute(QWebSettings.JavascriptEnabled, False)\n self.stop()\n self.page().shutdown()\n\n def openurl(self, url):\n \"\"\"Open a URL in the browser.\n\n Args:\n url: The URL to load as QUrl\n \"\"\"\n self.load(url)\n if url.scheme() == 'qute':\n frame = self.page().mainFrame()\n frame.javaScriptWindowObjectCleared.connect(self.add_js_bridge)\n\n @pyqtSlot()\n def add_js_bridge(self):\n \"\"\"Add the javascript bridge for qute:... pages.\"\"\"\n frame = self.sender()\n if not isinstance(frame, QWebFrame):\n log.webview.error(\"Got non-QWebFrame {!r} in \"\n \"add_js_bridge!\".format(frame))\n return\n\n if frame.url().scheme() == 'qute':\n bridge = objreg.get('js-bridge')\n frame.addToJavaScriptWindowObject('qute', bridge)\n\n @pyqtSlot(usertypes.KeyMode)\n def on_mode_entered(self, mode):\n \"\"\"Ignore attempts to focus the widget if in any status-input mode.\n\n FIXME:qtwebengine\n For QtWebEngine, doing the same has no effect, so we do it in here.\n \"\"\"\n if mode in [usertypes.KeyMode.command, usertypes.KeyMode.prompt,\n usertypes.KeyMode.yesno]:\n log.webview.debug(\"Ignoring focus because mode {} was \"\n \"entered.\".format(mode))\n self.setFocusPolicy(Qt.NoFocus)\n\n @pyqtSlot(usertypes.KeyMode)\n def on_mode_left(self, mode):\n \"\"\"Restore focus policy if status-input modes were left.\n\n FIXME:qtwebengine\n For QtWebEngine, doing the same has no effect, so we do it in here.\n \"\"\"\n if mode in [usertypes.KeyMode.command, usertypes.KeyMode.prompt,\n usertypes.KeyMode.yesno]:\n log.webview.debug(\"Restoring focus policy because mode {} was \"\n \"left.\".format(mode))\n self.setFocusPolicy(Qt.WheelFocus)\n\n def createWindow(self, wintype):\n \"\"\"Called by Qt when a page wants to create a new window.\n\n This function is called from the createWindow() method of the\n associated QWebPage, each time the page wants to create a new window of\n the given type. This might be the result, for example, of a JavaScript\n request to open a document in a new window.\n\n Args:\n wintype: This enum describes the types of window that can be\n created by the createWindow() function.\n\n QWebPage::WebBrowserWindow: The window is a regular web\n browser window.\n QWebPage::WebModalDialog: The window acts as modal dialog.\n\n Return:\n The new QWebView object.\n \"\"\"\n debug_type = debug.qenum_key(QWebPage, wintype)\n log.webview.debug(\"createWindow with type {}\".format(debug_type))\n if wintype == QWebPage.WebModalDialog:\n log.webview.warning(\"WebModalDialog requested, but we don't \"\n \"support that!\")\n tabbed_browser = objreg.get('tabbed-browser', scope='window',\n window=self.win_id)\n # pylint: disable=protected-access\n return tabbed_browser.tabopen(background=False)._widget\n\n def paintEvent(self, e):\n \"\"\"Extend paintEvent to emit a signal if the scroll position changed.\n\n This is a bit of a hack: We listen to repaint requests here, in the\n hope a repaint will always be requested when scrolling, and if the\n scroll position actually changed, we emit a signal.\n\n QtWebEngine has a scrollPositionChanged signal, so it's not needed\n there.\n\n Args:\n e: The QPaintEvent.\n\n Return:\n The superclass event return value.\n \"\"\"\n frame = self.page().mainFrame()\n new_pos = (frame.scrollBarValue(Qt.Horizontal),\n frame.scrollBarValue(Qt.Vertical))\n if self._old_scroll_pos != new_pos:\n self._old_scroll_pos = new_pos\n m = (frame.scrollBarMaximum(Qt.Horizontal),\n frame.scrollBarMaximum(Qt.Vertical))\n perc = (round(100 * new_pos[0] / m[0]) if m[0] != 0 else 0,\n round(100 * new_pos[1] / m[1]) if m[1] != 0 else 0)\n self.scroll_pos = perc\n self.scroll_pos_changed.emit(*perc)\n # Let superclass handle the event\n super().paintEvent(e)\n\n def contextMenuEvent(self, e):\n \"\"\"Save a reference to the context menu so we can close it.\n\n This is not needed for QtWebEngine, so it's in here.\n \"\"\"\n menu = self.page().createStandardContextMenu()\n self.shutting_down.connect(menu.close)\n modeman.instance(self.win_id).entered.connect(menu.close)\n menu.exec_(e.globalPos())\n", "path": "qutebrowser/browser/webkit/webview.py"}], "after_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2016 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"The main browser widgets.\"\"\"\n\nimport sys\n\nfrom PyQt5.QtCore import pyqtSignal, pyqtSlot, Qt, QUrl\nfrom PyQt5.QtGui import QPalette\nfrom PyQt5.QtWidgets import QStyleFactory\nfrom PyQt5.QtWebKit import QWebSettings\nfrom PyQt5.QtWebKitWidgets import QWebView, QWebPage, QWebFrame\n\nfrom qutebrowser.config import config\nfrom qutebrowser.keyinput import modeman\nfrom qutebrowser.utils import log, usertypes, utils, qtutils, objreg, debug\nfrom qutebrowser.browser.webkit import webpage\n\n\nclass WebView(QWebView):\n\n \"\"\"Custom QWebView subclass with qutebrowser-specific features.\n\n Attributes:\n tab: The WebKitTab object for this WebView\n hintmanager: The HintManager instance for this view.\n scroll_pos: The current scroll position as (x%, y%) tuple.\n win_id: The window ID of the view.\n _tab_id: The tab ID of the view.\n _old_scroll_pos: The old scroll position.\n\n Signals:\n scroll_pos_changed: Scroll percentage of current tab changed.\n arg 1: x-position in %.\n arg 2: y-position in %.\n shutting_down: Emitted when the view is shutting down.\n \"\"\"\n\n scroll_pos_changed = pyqtSignal(int, int)\n shutting_down = pyqtSignal()\n\n def __init__(self, win_id, tab_id, tab, parent=None):\n super().__init__(parent)\n if sys.platform == 'darwin' and qtutils.version_check('5.4'):\n # WORKAROUND for https://bugreports.qt.io/browse/QTBUG-42948\n # See https://github.com/The-Compiler/qutebrowser/issues/462\n self.setStyle(QStyleFactory.create('Fusion'))\n # FIXME:qtwebengine this is only used to set the zoom factor from\n # the QWebPage - we should get rid of it somehow (signals?)\n self.tab = tab\n self.win_id = win_id\n self.scroll_pos = (-1, -1)\n self._old_scroll_pos = (-1, -1)\n self._set_bg_color()\n self._tab_id = tab_id\n\n page = webpage.BrowserPage(self.win_id, self._tab_id, tab.data,\n parent=self)\n\n try:\n page.setVisibilityState(\n QWebPage.VisibilityStateVisible if self.isVisible()\n else QWebPage.VisibilityStateHidden)\n except AttributeError:\n pass\n\n self.setPage(page)\n\n mode_manager = objreg.get('mode-manager', scope='window',\n window=win_id)\n mode_manager.entered.connect(self.on_mode_entered)\n mode_manager.left.connect(self.on_mode_left)\n objreg.get('config').changed.connect(self._set_bg_color)\n\n def __repr__(self):\n url = utils.elide(self.url().toDisplayString(QUrl.EncodeUnicode), 100)\n return utils.get_repr(self, tab_id=self._tab_id, url=url)\n\n def __del__(self):\n # Explicitly releasing the page here seems to prevent some segfaults\n # when quitting.\n # Copied from:\n # https://code.google.com/p/webscraping/source/browse/webkit.py#325\n try:\n self.setPage(None)\n except RuntimeError:\n # It seems sometimes Qt has already deleted the QWebView and we\n # get: RuntimeError: wrapped C/C++ object of type WebView has been\n # deleted\n pass\n\n @config.change_filter('colors', 'webpage.bg')\n def _set_bg_color(self):\n \"\"\"Set the webpage background color as configured.\n\n FIXME:qtwebengine\n For QtWebEngine, doing the same has no effect, so we do it in here.\n \"\"\"\n col = config.get('colors', 'webpage.bg')\n palette = self.palette()\n if col is None:\n col = self.style().standardPalette().color(QPalette.Base)\n palette.setColor(QPalette.Base, col)\n self.setPalette(palette)\n\n def shutdown(self):\n \"\"\"Shut down the webview.\"\"\"\n self.shutting_down.emit()\n # We disable javascript because that prevents some segfaults when\n # quitting it seems.\n log.destroy.debug(\"Shutting down {!r}.\".format(self))\n settings = self.settings()\n settings.setAttribute(QWebSettings.JavascriptEnabled, False)\n self.stop()\n self.page().shutdown()\n\n def openurl(self, url):\n \"\"\"Open a URL in the browser.\n\n Args:\n url: The URL to load as QUrl\n \"\"\"\n self.load(url)\n if url.scheme() == 'qute':\n frame = self.page().mainFrame()\n frame.javaScriptWindowObjectCleared.connect(self.add_js_bridge)\n\n @pyqtSlot()\n def add_js_bridge(self):\n \"\"\"Add the javascript bridge for qute:... pages.\"\"\"\n frame = self.sender()\n if not isinstance(frame, QWebFrame):\n log.webview.error(\"Got non-QWebFrame {!r} in \"\n \"add_js_bridge!\".format(frame))\n return\n\n if frame.url().scheme() == 'qute':\n bridge = objreg.get('js-bridge')\n frame.addToJavaScriptWindowObject('qute', bridge)\n\n @pyqtSlot(usertypes.KeyMode)\n def on_mode_entered(self, mode):\n \"\"\"Ignore attempts to focus the widget if in any status-input mode.\n\n FIXME:qtwebengine\n For QtWebEngine, doing the same has no effect, so we do it in here.\n \"\"\"\n if mode in [usertypes.KeyMode.command, usertypes.KeyMode.prompt,\n usertypes.KeyMode.yesno]:\n log.webview.debug(\"Ignoring focus because mode {} was \"\n \"entered.\".format(mode))\n self.setFocusPolicy(Qt.NoFocus)\n\n @pyqtSlot(usertypes.KeyMode)\n def on_mode_left(self, mode):\n \"\"\"Restore focus policy if status-input modes were left.\n\n FIXME:qtwebengine\n For QtWebEngine, doing the same has no effect, so we do it in here.\n \"\"\"\n if mode in [usertypes.KeyMode.command, usertypes.KeyMode.prompt,\n usertypes.KeyMode.yesno]:\n log.webview.debug(\"Restoring focus policy because mode {} was \"\n \"left.\".format(mode))\n self.setFocusPolicy(Qt.WheelFocus)\n\n def createWindow(self, wintype):\n \"\"\"Called by Qt when a page wants to create a new window.\n\n This function is called from the createWindow() method of the\n associated QWebPage, each time the page wants to create a new window of\n the given type. This might be the result, for example, of a JavaScript\n request to open a document in a new window.\n\n Args:\n wintype: This enum describes the types of window that can be\n created by the createWindow() function.\n\n QWebPage::WebBrowserWindow: The window is a regular web\n browser window.\n QWebPage::WebModalDialog: The window acts as modal dialog.\n\n Return:\n The new QWebView object.\n \"\"\"\n debug_type = debug.qenum_key(QWebPage, wintype)\n log.webview.debug(\"createWindow with type {}\".format(debug_type))\n if wintype == QWebPage.WebModalDialog:\n log.webview.warning(\"WebModalDialog requested, but we don't \"\n \"support that!\")\n tabbed_browser = objreg.get('tabbed-browser', scope='window',\n window=self.win_id)\n # pylint: disable=protected-access\n return tabbed_browser.tabopen(background=False)._widget\n\n def paintEvent(self, e):\n \"\"\"Extend paintEvent to emit a signal if the scroll position changed.\n\n This is a bit of a hack: We listen to repaint requests here, in the\n hope a repaint will always be requested when scrolling, and if the\n scroll position actually changed, we emit a signal.\n\n QtWebEngine has a scrollPositionChanged signal, so it's not needed\n there.\n\n Args:\n e: The QPaintEvent.\n\n Return:\n The superclass event return value.\n \"\"\"\n frame = self.page().mainFrame()\n new_pos = (frame.scrollBarValue(Qt.Horizontal),\n frame.scrollBarValue(Qt.Vertical))\n if self._old_scroll_pos != new_pos:\n self._old_scroll_pos = new_pos\n m = (frame.scrollBarMaximum(Qt.Horizontal),\n frame.scrollBarMaximum(Qt.Vertical))\n perc = (round(100 * new_pos[0] / m[0]) if m[0] != 0 else 0,\n round(100 * new_pos[1] / m[1]) if m[1] != 0 else 0)\n self.scroll_pos = perc\n self.scroll_pos_changed.emit(*perc)\n # Let superclass handle the event\n super().paintEvent(e)\n\n def contextMenuEvent(self, e):\n \"\"\"Save a reference to the context menu so we can close it.\n\n This is not needed for QtWebEngine, so it's in here.\n \"\"\"\n menu = self.page().createStandardContextMenu()\n self.shutting_down.connect(menu.close)\n modeman.instance(self.win_id).entered.connect(menu.close)\n menu.exec_(e.globalPos())\n\n def showEvent(self, e):\n \"\"\"Extend showEvent to set the page visibility state to visible.\n\n Args:\n e: The QShowEvent.\n\n Return:\n The superclass event return value.\n \"\"\"\n try:\n self.page().setVisibilityState(QWebPage.VisibilityStateVisible)\n except AttributeError:\n pass\n\n super().showEvent(e)\n\n def hideEvent(self, e):\n \"\"\"Extend hideEvent to set the page visibility state to hidden.\n\n Args:\n e: The QHideEvent.\n\n Return:\n The superclass event return value.\n \"\"\"\n try:\n self.page().setVisibilityState(QWebPage.VisibilityStateHidden)\n except AttributeError:\n pass\n\n super().hideEvent(e)\n", "path": "qutebrowser/browser/webkit/webview.py"}]}
| 3,141 | 377 |
gh_patches_debug_28436
|
rasdani/github-patches
|
git_diff
|
pyinstaller__pyinstaller-4749
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
shapely hook doesn't work on windows
Using current develop, the shapely hook fails when it runs `binaries += [(os.path.join(lib_dir, f), '') for f in os.listdir(lib_dir)]`. `lib_dir` here equals `Lib/site-packages/shapely/DLLs`. The actual directory on my conda python 3.6 installation is `Library/bin/`. My old spec file uses the following ugly code to copy these libraries over:
```
lib_dir = sys.executable.replace("python.exe", os.path.join("Library", "bin"))
binaries += [(os.path.join(lib_dir, 'geos_c.dll'), '')]
binaries += [(os.path.join(lib_dir, 'geos.dll'), '')]
binaries += [(os.path.join(lib_dir, 'mkl_*.dll'), '')]
```
Is there a better way to get a hold of this Library directory with some pyinstaller utility function? Does anyone know if other python environments (non-conda) have the directory used in the hook or @durden did you just guess on the Windows path?
Side issue: Shapely 1.6+ doesn't seem to work on at least windows (haven't updated on other platforms). It fails to find the geos libraries mentioned above unless you execute the pyinstaller-made (inno setup packaged) executable from the install directory (`C:\Program Files (x86)\myprgm\bin\`). For now I'm just downgrading to 1.5.17.
shapely hook doesn't work on windows
Using current develop, the shapely hook fails when it runs `binaries += [(os.path.join(lib_dir, f), '') for f in os.listdir(lib_dir)]`. `lib_dir` here equals `Lib/site-packages/shapely/DLLs`. The actual directory on my conda python 3.6 installation is `Library/bin/`. My old spec file uses the following ugly code to copy these libraries over:
```
lib_dir = sys.executable.replace("python.exe", os.path.join("Library", "bin"))
binaries += [(os.path.join(lib_dir, 'geos_c.dll'), '')]
binaries += [(os.path.join(lib_dir, 'geos.dll'), '')]
binaries += [(os.path.join(lib_dir, 'mkl_*.dll'), '')]
```
Is there a better way to get a hold of this Library directory with some pyinstaller utility function? Does anyone know if other python environments (non-conda) have the directory used in the hook or @durden did you just guess on the Windows path?
Side issue: Shapely 1.6+ doesn't seem to work on at least windows (haven't updated on other platforms). It fails to find the geos libraries mentioned above unless you execute the pyinstaller-made (inno setup packaged) executable from the install directory (`C:\Program Files (x86)\myprgm\bin\`). For now I'm just downgrading to 1.5.17.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/hook-shapely.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2017-2020, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11
12 import os
13
14 from PyInstaller.utils.hooks import get_package_paths
15 from PyInstaller.utils.hooks import is_module_satisfies
16 from PyInstaller import compat
17
18 # Necessary when using the vectorized subpackage
19 hiddenimports = ['shapely.prepared']
20
21 pkg_base, pkg_dir = get_package_paths('shapely')
22
23
24 binaries = []
25 if compat.is_win:
26 if compat.is_conda:
27 lib_dir = os.path.join(compat.base_prefix, 'Library', 'bin')
28 else:
29 lib_dir = os.path.join(pkg_dir, 'DLLs')
30 dll_files = ['geos_c.dll', 'geos.dll']
31 binaries += [(os.path.join(lib_dir, f), '.') for f in dll_files]
32 elif compat.is_linux:
33 lib_dir = os.path.join(pkg_dir, '.libs')
34 dest_dir = os.path.join('shapely', '.libs')
35
36 # This duplicates the libgeos*.so* files in the build. PyInstaller will
37 # copy them into the root of the build by default, but shapely cannot load
38 # them from there in linux IF shapely was installed via a whl file. The
39 # whl bundles its' own libgeos with a different name, something like
40 # libgeos_c-*.so.* but shapely tries to load libgeos_c.so if there isn't a
41 # ./libs directory under its' package. There is a proposed fix for this in
42 # shapely but it has not been accepted it:
43 # https://github.com/Toblerity/Shapely/pull/485
44 if is_module_satisfies('shapely <= 1.6'):
45 binaries += [(os.path.join(lib_dir, f), dest_dir) for f in os.listdir(lib_dir)]
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/PyInstaller/hooks/hook-shapely.py b/PyInstaller/hooks/hook-shapely.py
--- a/PyInstaller/hooks/hook-shapely.py
+++ b/PyInstaller/hooks/hook-shapely.py
@@ -10,6 +10,7 @@
#-----------------------------------------------------------------------------
import os
+from ctypes.util import find_library
from PyInstaller.utils.hooks import get_package_paths
from PyInstaller.utils.hooks import is_module_satisfies
@@ -23,12 +24,25 @@
binaries = []
if compat.is_win:
+ # Search conda directory if conda is active, then search standard
+ # directory. This is the same order of precidence used in shapely.
+ standard_path = os.path.join(pkg_dir, 'DLLs')
+ lib_paths = [standard_path, os.environ['PATH']]
if compat.is_conda:
- lib_dir = os.path.join(compat.base_prefix, 'Library', 'bin')
- else:
- lib_dir = os.path.join(pkg_dir, 'DLLs')
- dll_files = ['geos_c.dll', 'geos.dll']
- binaries += [(os.path.join(lib_dir, f), '.') for f in dll_files]
+ conda_path = os.path.join(compat.base_prefix, 'Library', 'bin')
+ lib_paths.insert(0, conda_path)
+ original_path = os.environ['PATH']
+ try:
+ os.environ['PATH'] = os.pathsep.join(lib_paths)
+ dll_path = find_library('geos_c')
+ finally:
+ os.environ['PATH'] = original_path
+ if dll_path is None:
+ raise SystemExit(
+ "Error: geos_c.dll not found, required by hook-shapely.py.\n"
+ "Please check your installation or provide a pull request to "
+ "PyInstaller to update hook-shapely.py.")
+ binaries += [(dll_path, '.')]
elif compat.is_linux:
lib_dir = os.path.join(pkg_dir, '.libs')
dest_dir = os.path.join('shapely', '.libs')
|
{"golden_diff": "diff --git a/PyInstaller/hooks/hook-shapely.py b/PyInstaller/hooks/hook-shapely.py\n--- a/PyInstaller/hooks/hook-shapely.py\n+++ b/PyInstaller/hooks/hook-shapely.py\n@@ -10,6 +10,7 @@\n #-----------------------------------------------------------------------------\n \n import os\n+from ctypes.util import find_library\n \n from PyInstaller.utils.hooks import get_package_paths\n from PyInstaller.utils.hooks import is_module_satisfies\n@@ -23,12 +24,25 @@\n \n binaries = []\n if compat.is_win:\n+ # Search conda directory if conda is active, then search standard\n+ # directory. This is the same order of precidence used in shapely.\n+ standard_path = os.path.join(pkg_dir, 'DLLs')\n+ lib_paths = [standard_path, os.environ['PATH']]\n if compat.is_conda:\n- lib_dir = os.path.join(compat.base_prefix, 'Library', 'bin')\n- else:\n- lib_dir = os.path.join(pkg_dir, 'DLLs')\n- dll_files = ['geos_c.dll', 'geos.dll']\n- binaries += [(os.path.join(lib_dir, f), '.') for f in dll_files]\n+ conda_path = os.path.join(compat.base_prefix, 'Library', 'bin')\n+ lib_paths.insert(0, conda_path)\n+ original_path = os.environ['PATH']\n+ try:\n+ os.environ['PATH'] = os.pathsep.join(lib_paths)\n+ dll_path = find_library('geos_c')\n+ finally:\n+ os.environ['PATH'] = original_path\n+ if dll_path is None:\n+ raise SystemExit(\n+ \"Error: geos_c.dll not found, required by hook-shapely.py.\\n\"\n+ \"Please check your installation or provide a pull request to \"\n+ \"PyInstaller to update hook-shapely.py.\")\n+ binaries += [(dll_path, '.')]\n elif compat.is_linux:\n lib_dir = os.path.join(pkg_dir, '.libs')\n dest_dir = os.path.join('shapely', '.libs')\n", "issue": "shapely hook doesn't work on windows\nUsing current develop, the shapely hook fails when it runs `binaries += [(os.path.join(lib_dir, f), '') for f in os.listdir(lib_dir)]`. `lib_dir` here equals `Lib/site-packages/shapely/DLLs`. The actual directory on my conda python 3.6 installation is `Library/bin/`. My old spec file uses the following ugly code to copy these libraries over:\r\n\r\n```\r\n lib_dir = sys.executable.replace(\"python.exe\", os.path.join(\"Library\", \"bin\"))\r\n binaries += [(os.path.join(lib_dir, 'geos_c.dll'), '')]\r\n binaries += [(os.path.join(lib_dir, 'geos.dll'), '')]\r\n binaries += [(os.path.join(lib_dir, 'mkl_*.dll'), '')]\r\n```\r\n\r\nIs there a better way to get a hold of this Library directory with some pyinstaller utility function? Does anyone know if other python environments (non-conda) have the directory used in the hook or @durden did you just guess on the Windows path?\r\n\r\nSide issue: Shapely 1.6+ doesn't seem to work on at least windows (haven't updated on other platforms). It fails to find the geos libraries mentioned above unless you execute the pyinstaller-made (inno setup packaged) executable from the install directory (`C:\\Program Files (x86)\\myprgm\\bin\\`). For now I'm just downgrading to 1.5.17.\nshapely hook doesn't work on windows\nUsing current develop, the shapely hook fails when it runs `binaries += [(os.path.join(lib_dir, f), '') for f in os.listdir(lib_dir)]`. `lib_dir` here equals `Lib/site-packages/shapely/DLLs`. The actual directory on my conda python 3.6 installation is `Library/bin/`. My old spec file uses the following ugly code to copy these libraries over:\r\n\r\n```\r\n lib_dir = sys.executable.replace(\"python.exe\", os.path.join(\"Library\", \"bin\"))\r\n binaries += [(os.path.join(lib_dir, 'geos_c.dll'), '')]\r\n binaries += [(os.path.join(lib_dir, 'geos.dll'), '')]\r\n binaries += [(os.path.join(lib_dir, 'mkl_*.dll'), '')]\r\n```\r\n\r\nIs there a better way to get a hold of this Library directory with some pyinstaller utility function? Does anyone know if other python environments (non-conda) have the directory used in the hook or @durden did you just guess on the Windows path?\r\n\r\nSide issue: Shapely 1.6+ doesn't seem to work on at least windows (haven't updated on other platforms). It fails to find the geos libraries mentioned above unless you execute the pyinstaller-made (inno setup packaged) executable from the install directory (`C:\\Program Files (x86)\\myprgm\\bin\\`). For now I'm just downgrading to 1.5.17.\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2017-2020, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nimport os\n\nfrom PyInstaller.utils.hooks import get_package_paths\nfrom PyInstaller.utils.hooks import is_module_satisfies\nfrom PyInstaller import compat\n\n# Necessary when using the vectorized subpackage\nhiddenimports = ['shapely.prepared']\n\npkg_base, pkg_dir = get_package_paths('shapely')\n\n\nbinaries = []\nif compat.is_win:\n if compat.is_conda:\n lib_dir = os.path.join(compat.base_prefix, 'Library', 'bin')\n else:\n lib_dir = os.path.join(pkg_dir, 'DLLs')\n dll_files = ['geos_c.dll', 'geos.dll']\n binaries += [(os.path.join(lib_dir, f), '.') for f in dll_files]\nelif compat.is_linux:\n lib_dir = os.path.join(pkg_dir, '.libs')\n dest_dir = os.path.join('shapely', '.libs')\n\n # This duplicates the libgeos*.so* files in the build. PyInstaller will\n # copy them into the root of the build by default, but shapely cannot load\n # them from there in linux IF shapely was installed via a whl file. The\n # whl bundles its' own libgeos with a different name, something like\n # libgeos_c-*.so.* but shapely tries to load libgeos_c.so if there isn't a\n # ./libs directory under its' package. There is a proposed fix for this in\n # shapely but it has not been accepted it:\n # https://github.com/Toblerity/Shapely/pull/485\n if is_module_satisfies('shapely <= 1.6'):\n binaries += [(os.path.join(lib_dir, f), dest_dir) for f in os.listdir(lib_dir)]\n", "path": "PyInstaller/hooks/hook-shapely.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2017-2020, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nimport os\nfrom ctypes.util import find_library\n\nfrom PyInstaller.utils.hooks import get_package_paths\nfrom PyInstaller.utils.hooks import is_module_satisfies\nfrom PyInstaller import compat\n\n# Necessary when using the vectorized subpackage\nhiddenimports = ['shapely.prepared']\n\npkg_base, pkg_dir = get_package_paths('shapely')\n\n\nbinaries = []\nif compat.is_win:\n # Search conda directory if conda is active, then search standard\n # directory. This is the same order of precidence used in shapely.\n standard_path = os.path.join(pkg_dir, 'DLLs')\n lib_paths = [standard_path, os.environ['PATH']]\n if compat.is_conda:\n conda_path = os.path.join(compat.base_prefix, 'Library', 'bin')\n lib_paths.insert(0, conda_path)\n original_path = os.environ['PATH']\n try:\n os.environ['PATH'] = os.pathsep.join(lib_paths)\n dll_path = find_library('geos_c')\n finally:\n os.environ['PATH'] = original_path\n if dll_path is None:\n raise SystemExit(\n \"Error: geos_c.dll not found, required by hook-shapely.py.\\n\"\n \"Please check your installation or provide a pull request to \"\n \"PyInstaller to update hook-shapely.py.\")\n binaries += [(dll_path, '.')]\nelif compat.is_linux:\n lib_dir = os.path.join(pkg_dir, '.libs')\n dest_dir = os.path.join('shapely', '.libs')\n\n # This duplicates the libgeos*.so* files in the build. PyInstaller will\n # copy them into the root of the build by default, but shapely cannot load\n # them from there in linux IF shapely was installed via a whl file. The\n # whl bundles its' own libgeos with a different name, something like\n # libgeos_c-*.so.* but shapely tries to load libgeos_c.so if there isn't a\n # ./libs directory under its' package. There is a proposed fix for this in\n # shapely but it has not been accepted it:\n # https://github.com/Toblerity/Shapely/pull/485\n if is_module_satisfies('shapely <= 1.6'):\n binaries += [(os.path.join(lib_dir, f), dest_dir) for f in os.listdir(lib_dir)]\n", "path": "PyInstaller/hooks/hook-shapely.py"}]}
| 1,467 | 466 |
gh_patches_debug_30430
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-1716
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Move C code for padding into it's own .c and .h files
See `constant_time` for the same idea.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/hazmat/primitives/padding.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import abc
8
9 import six
10
11 from cryptography import utils
12 from cryptography.exceptions import AlreadyFinalized
13 from cryptography.hazmat.bindings.utils import LazyLibrary, build_ffi
14
15
16 TYPES = """
17 uint8_t Cryptography_check_pkcs7_padding(const uint8_t *, uint8_t);
18 """
19
20 FUNCTIONS = """
21 /* Returns the value of the input with the most-significant-bit copied to all
22 of the bits. */
23 static uint8_t Cryptography_DUPLICATE_MSB_TO_ALL(uint8_t a) {
24 return (1 - (a >> (sizeof(uint8_t) * 8 - 1))) - 1;
25 }
26
27 /* This returns 0xFF if a < b else 0x00, but does so in a constant time
28 fashion */
29 static uint8_t Cryptography_constant_time_lt(uint8_t a, uint8_t b) {
30 a -= b;
31 return Cryptography_DUPLICATE_MSB_TO_ALL(a);
32 }
33
34 uint8_t Cryptography_check_pkcs7_padding(const uint8_t *data,
35 uint8_t block_len) {
36 uint8_t i;
37 uint8_t pad_size = data[block_len - 1];
38 uint8_t mismatch = 0;
39 for (i = 0; i < block_len; i++) {
40 unsigned int mask = Cryptography_constant_time_lt(i, pad_size);
41 uint8_t b = data[block_len - 1 - i];
42 mismatch |= (mask & (pad_size ^ b));
43 }
44
45 /* Check to make sure the pad_size was within the valid range. */
46 mismatch |= ~Cryptography_constant_time_lt(0, pad_size);
47 mismatch |= Cryptography_constant_time_lt(block_len, pad_size);
48
49 /* Make sure any bits set are copied to the lowest bit */
50 mismatch |= mismatch >> 4;
51 mismatch |= mismatch >> 2;
52 mismatch |= mismatch >> 1;
53 /* Now check the low bit to see if it's set */
54 return (mismatch & 1) == 0;
55 }
56 """
57
58
59 _ffi = build_ffi(cdef_source=TYPES, verify_source=FUNCTIONS)
60 _lib = LazyLibrary(_ffi)
61
62
63 @six.add_metaclass(abc.ABCMeta)
64 class PaddingContext(object):
65 @abc.abstractmethod
66 def update(self, data):
67 """
68 Pads the provided bytes and returns any available data as bytes.
69 """
70
71 @abc.abstractmethod
72 def finalize(self):
73 """
74 Finalize the padding, returns bytes.
75 """
76
77
78 class PKCS7(object):
79 def __init__(self, block_size):
80 if not (0 <= block_size < 256):
81 raise ValueError("block_size must be in range(0, 256).")
82
83 if block_size % 8 != 0:
84 raise ValueError("block_size must be a multiple of 8.")
85
86 self.block_size = block_size
87
88 def padder(self):
89 return _PKCS7PaddingContext(self.block_size)
90
91 def unpadder(self):
92 return _PKCS7UnpaddingContext(self.block_size)
93
94
95 @utils.register_interface(PaddingContext)
96 class _PKCS7PaddingContext(object):
97 def __init__(self, block_size):
98 self.block_size = block_size
99 # TODO: more copies than necessary, we should use zero-buffer (#193)
100 self._buffer = b""
101
102 def update(self, data):
103 if self._buffer is None:
104 raise AlreadyFinalized("Context was already finalized.")
105
106 if not isinstance(data, bytes):
107 raise TypeError("data must be bytes.")
108
109 self._buffer += data
110
111 finished_blocks = len(self._buffer) // (self.block_size // 8)
112
113 result = self._buffer[:finished_blocks * (self.block_size // 8)]
114 self._buffer = self._buffer[finished_blocks * (self.block_size // 8):]
115
116 return result
117
118 def finalize(self):
119 if self._buffer is None:
120 raise AlreadyFinalized("Context was already finalized.")
121
122 pad_size = self.block_size // 8 - len(self._buffer)
123 result = self._buffer + six.int2byte(pad_size) * pad_size
124 self._buffer = None
125 return result
126
127
128 @utils.register_interface(PaddingContext)
129 class _PKCS7UnpaddingContext(object):
130 def __init__(self, block_size):
131 self.block_size = block_size
132 # TODO: more copies than necessary, we should use zero-buffer (#193)
133 self._buffer = b""
134
135 def update(self, data):
136 if self._buffer is None:
137 raise AlreadyFinalized("Context was already finalized.")
138
139 if not isinstance(data, bytes):
140 raise TypeError("data must be bytes.")
141
142 self._buffer += data
143
144 finished_blocks = max(
145 len(self._buffer) // (self.block_size // 8) - 1,
146 0
147 )
148
149 result = self._buffer[:finished_blocks * (self.block_size // 8)]
150 self._buffer = self._buffer[finished_blocks * (self.block_size // 8):]
151
152 return result
153
154 def finalize(self):
155 if self._buffer is None:
156 raise AlreadyFinalized("Context was already finalized.")
157
158 if len(self._buffer) != self.block_size // 8:
159 raise ValueError("Invalid padding bytes.")
160
161 valid = _lib.Cryptography_check_pkcs7_padding(
162 self._buffer, self.block_size // 8
163 )
164
165 if not valid:
166 raise ValueError("Invalid padding bytes.")
167
168 pad_size = six.indexbytes(self._buffer, -1)
169 res = self._buffer[:-pad_size]
170 self._buffer = None
171 return res
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/cryptography/hazmat/primitives/padding.py b/src/cryptography/hazmat/primitives/padding.py
--- a/src/cryptography/hazmat/primitives/padding.py
+++ b/src/cryptography/hazmat/primitives/padding.py
@@ -6,6 +6,8 @@
import abc
+import os
+
import six
from cryptography import utils
@@ -13,47 +15,11 @@
from cryptography.hazmat.bindings.utils import LazyLibrary, build_ffi
-TYPES = """
-uint8_t Cryptography_check_pkcs7_padding(const uint8_t *, uint8_t);
-"""
-
-FUNCTIONS = """
-/* Returns the value of the input with the most-significant-bit copied to all
- of the bits. */
-static uint8_t Cryptography_DUPLICATE_MSB_TO_ALL(uint8_t a) {
- return (1 - (a >> (sizeof(uint8_t) * 8 - 1))) - 1;
-}
-
-/* This returns 0xFF if a < b else 0x00, but does so in a constant time
- fashion */
-static uint8_t Cryptography_constant_time_lt(uint8_t a, uint8_t b) {
- a -= b;
- return Cryptography_DUPLICATE_MSB_TO_ALL(a);
-}
-
-uint8_t Cryptography_check_pkcs7_padding(const uint8_t *data,
- uint8_t block_len) {
- uint8_t i;
- uint8_t pad_size = data[block_len - 1];
- uint8_t mismatch = 0;
- for (i = 0; i < block_len; i++) {
- unsigned int mask = Cryptography_constant_time_lt(i, pad_size);
- uint8_t b = data[block_len - 1 - i];
- mismatch |= (mask & (pad_size ^ b));
- }
-
- /* Check to make sure the pad_size was within the valid range. */
- mismatch |= ~Cryptography_constant_time_lt(0, pad_size);
- mismatch |= Cryptography_constant_time_lt(block_len, pad_size);
-
- /* Make sure any bits set are copied to the lowest bit */
- mismatch |= mismatch >> 4;
- mismatch |= mismatch >> 2;
- mismatch |= mismatch >> 1;
- /* Now check the low bit to see if it's set */
- return (mismatch & 1) == 0;
-}
-"""
+with open(os.path.join(os.path.dirname(__file__), "src/padding.h")) as f:
+ TYPES = f.read()
+
+with open(os.path.join(os.path.dirname(__file__), "src/padding.c")) as f:
+ FUNCTIONS = f.read()
_ffi = build_ffi(cdef_source=TYPES, verify_source=FUNCTIONS)
|
{"golden_diff": "diff --git a/src/cryptography/hazmat/primitives/padding.py b/src/cryptography/hazmat/primitives/padding.py\n--- a/src/cryptography/hazmat/primitives/padding.py\n+++ b/src/cryptography/hazmat/primitives/padding.py\n@@ -6,6 +6,8 @@\n \n import abc\n \n+import os\n+\n import six\n \n from cryptography import utils\n@@ -13,47 +15,11 @@\n from cryptography.hazmat.bindings.utils import LazyLibrary, build_ffi\n \n \n-TYPES = \"\"\"\n-uint8_t Cryptography_check_pkcs7_padding(const uint8_t *, uint8_t);\n-\"\"\"\n-\n-FUNCTIONS = \"\"\"\n-/* Returns the value of the input with the most-significant-bit copied to all\n- of the bits. */\n-static uint8_t Cryptography_DUPLICATE_MSB_TO_ALL(uint8_t a) {\n- return (1 - (a >> (sizeof(uint8_t) * 8 - 1))) - 1;\n-}\n-\n-/* This returns 0xFF if a < b else 0x00, but does so in a constant time\n- fashion */\n-static uint8_t Cryptography_constant_time_lt(uint8_t a, uint8_t b) {\n- a -= b;\n- return Cryptography_DUPLICATE_MSB_TO_ALL(a);\n-}\n-\n-uint8_t Cryptography_check_pkcs7_padding(const uint8_t *data,\n- uint8_t block_len) {\n- uint8_t i;\n- uint8_t pad_size = data[block_len - 1];\n- uint8_t mismatch = 0;\n- for (i = 0; i < block_len; i++) {\n- unsigned int mask = Cryptography_constant_time_lt(i, pad_size);\n- uint8_t b = data[block_len - 1 - i];\n- mismatch |= (mask & (pad_size ^ b));\n- }\n-\n- /* Check to make sure the pad_size was within the valid range. */\n- mismatch |= ~Cryptography_constant_time_lt(0, pad_size);\n- mismatch |= Cryptography_constant_time_lt(block_len, pad_size);\n-\n- /* Make sure any bits set are copied to the lowest bit */\n- mismatch |= mismatch >> 4;\n- mismatch |= mismatch >> 2;\n- mismatch |= mismatch >> 1;\n- /* Now check the low bit to see if it's set */\n- return (mismatch & 1) == 0;\n-}\n-\"\"\"\n+with open(os.path.join(os.path.dirname(__file__), \"src/padding.h\")) as f:\n+ TYPES = f.read()\n+\n+with open(os.path.join(os.path.dirname(__file__), \"src/padding.c\")) as f:\n+ FUNCTIONS = f.read()\n \n \n _ffi = build_ffi(cdef_source=TYPES, verify_source=FUNCTIONS)\n", "issue": "Move C code for padding into it's own .c and .h files\nSee `constant_time` for the same idea.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport abc\n\nimport six\n\nfrom cryptography import utils\nfrom cryptography.exceptions import AlreadyFinalized\nfrom cryptography.hazmat.bindings.utils import LazyLibrary, build_ffi\n\n\nTYPES = \"\"\"\nuint8_t Cryptography_check_pkcs7_padding(const uint8_t *, uint8_t);\n\"\"\"\n\nFUNCTIONS = \"\"\"\n/* Returns the value of the input with the most-significant-bit copied to all\n of the bits. */\nstatic uint8_t Cryptography_DUPLICATE_MSB_TO_ALL(uint8_t a) {\n return (1 - (a >> (sizeof(uint8_t) * 8 - 1))) - 1;\n}\n\n/* This returns 0xFF if a < b else 0x00, but does so in a constant time\n fashion */\nstatic uint8_t Cryptography_constant_time_lt(uint8_t a, uint8_t b) {\n a -= b;\n return Cryptography_DUPLICATE_MSB_TO_ALL(a);\n}\n\nuint8_t Cryptography_check_pkcs7_padding(const uint8_t *data,\n uint8_t block_len) {\n uint8_t i;\n uint8_t pad_size = data[block_len - 1];\n uint8_t mismatch = 0;\n for (i = 0; i < block_len; i++) {\n unsigned int mask = Cryptography_constant_time_lt(i, pad_size);\n uint8_t b = data[block_len - 1 - i];\n mismatch |= (mask & (pad_size ^ b));\n }\n\n /* Check to make sure the pad_size was within the valid range. */\n mismatch |= ~Cryptography_constant_time_lt(0, pad_size);\n mismatch |= Cryptography_constant_time_lt(block_len, pad_size);\n\n /* Make sure any bits set are copied to the lowest bit */\n mismatch |= mismatch >> 4;\n mismatch |= mismatch >> 2;\n mismatch |= mismatch >> 1;\n /* Now check the low bit to see if it's set */\n return (mismatch & 1) == 0;\n}\n\"\"\"\n\n\n_ffi = build_ffi(cdef_source=TYPES, verify_source=FUNCTIONS)\n_lib = LazyLibrary(_ffi)\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass PaddingContext(object):\n @abc.abstractmethod\n def update(self, data):\n \"\"\"\n Pads the provided bytes and returns any available data as bytes.\n \"\"\"\n\n @abc.abstractmethod\n def finalize(self):\n \"\"\"\n Finalize the padding, returns bytes.\n \"\"\"\n\n\nclass PKCS7(object):\n def __init__(self, block_size):\n if not (0 <= block_size < 256):\n raise ValueError(\"block_size must be in range(0, 256).\")\n\n if block_size % 8 != 0:\n raise ValueError(\"block_size must be a multiple of 8.\")\n\n self.block_size = block_size\n\n def padder(self):\n return _PKCS7PaddingContext(self.block_size)\n\n def unpadder(self):\n return _PKCS7UnpaddingContext(self.block_size)\n\n\[email protected]_interface(PaddingContext)\nclass _PKCS7PaddingContext(object):\n def __init__(self, block_size):\n self.block_size = block_size\n # TODO: more copies than necessary, we should use zero-buffer (#193)\n self._buffer = b\"\"\n\n def update(self, data):\n if self._buffer is None:\n raise AlreadyFinalized(\"Context was already finalized.\")\n\n if not isinstance(data, bytes):\n raise TypeError(\"data must be bytes.\")\n\n self._buffer += data\n\n finished_blocks = len(self._buffer) // (self.block_size // 8)\n\n result = self._buffer[:finished_blocks * (self.block_size // 8)]\n self._buffer = self._buffer[finished_blocks * (self.block_size // 8):]\n\n return result\n\n def finalize(self):\n if self._buffer is None:\n raise AlreadyFinalized(\"Context was already finalized.\")\n\n pad_size = self.block_size // 8 - len(self._buffer)\n result = self._buffer + six.int2byte(pad_size) * pad_size\n self._buffer = None\n return result\n\n\[email protected]_interface(PaddingContext)\nclass _PKCS7UnpaddingContext(object):\n def __init__(self, block_size):\n self.block_size = block_size\n # TODO: more copies than necessary, we should use zero-buffer (#193)\n self._buffer = b\"\"\n\n def update(self, data):\n if self._buffer is None:\n raise AlreadyFinalized(\"Context was already finalized.\")\n\n if not isinstance(data, bytes):\n raise TypeError(\"data must be bytes.\")\n\n self._buffer += data\n\n finished_blocks = max(\n len(self._buffer) // (self.block_size // 8) - 1,\n 0\n )\n\n result = self._buffer[:finished_blocks * (self.block_size // 8)]\n self._buffer = self._buffer[finished_blocks * (self.block_size // 8):]\n\n return result\n\n def finalize(self):\n if self._buffer is None:\n raise AlreadyFinalized(\"Context was already finalized.\")\n\n if len(self._buffer) != self.block_size // 8:\n raise ValueError(\"Invalid padding bytes.\")\n\n valid = _lib.Cryptography_check_pkcs7_padding(\n self._buffer, self.block_size // 8\n )\n\n if not valid:\n raise ValueError(\"Invalid padding bytes.\")\n\n pad_size = six.indexbytes(self._buffer, -1)\n res = self._buffer[:-pad_size]\n self._buffer = None\n return res\n", "path": "src/cryptography/hazmat/primitives/padding.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport abc\n\nimport os\n\nimport six\n\nfrom cryptography import utils\nfrom cryptography.exceptions import AlreadyFinalized\nfrom cryptography.hazmat.bindings.utils import LazyLibrary, build_ffi\n\n\nwith open(os.path.join(os.path.dirname(__file__), \"src/padding.h\")) as f:\n TYPES = f.read()\n\nwith open(os.path.join(os.path.dirname(__file__), \"src/padding.c\")) as f:\n FUNCTIONS = f.read()\n\n\n_ffi = build_ffi(cdef_source=TYPES, verify_source=FUNCTIONS)\n_lib = LazyLibrary(_ffi)\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass PaddingContext(object):\n @abc.abstractmethod\n def update(self, data):\n \"\"\"\n Pads the provided bytes and returns any available data as bytes.\n \"\"\"\n\n @abc.abstractmethod\n def finalize(self):\n \"\"\"\n Finalize the padding, returns bytes.\n \"\"\"\n\n\nclass PKCS7(object):\n def __init__(self, block_size):\n if not (0 <= block_size < 256):\n raise ValueError(\"block_size must be in range(0, 256).\")\n\n if block_size % 8 != 0:\n raise ValueError(\"block_size must be a multiple of 8.\")\n\n self.block_size = block_size\n\n def padder(self):\n return _PKCS7PaddingContext(self.block_size)\n\n def unpadder(self):\n return _PKCS7UnpaddingContext(self.block_size)\n\n\[email protected]_interface(PaddingContext)\nclass _PKCS7PaddingContext(object):\n def __init__(self, block_size):\n self.block_size = block_size\n # TODO: more copies than necessary, we should use zero-buffer (#193)\n self._buffer = b\"\"\n\n def update(self, data):\n if self._buffer is None:\n raise AlreadyFinalized(\"Context was already finalized.\")\n\n if not isinstance(data, bytes):\n raise TypeError(\"data must be bytes.\")\n\n self._buffer += data\n\n finished_blocks = len(self._buffer) // (self.block_size // 8)\n\n result = self._buffer[:finished_blocks * (self.block_size // 8)]\n self._buffer = self._buffer[finished_blocks * (self.block_size // 8):]\n\n return result\n\n def finalize(self):\n if self._buffer is None:\n raise AlreadyFinalized(\"Context was already finalized.\")\n\n pad_size = self.block_size // 8 - len(self._buffer)\n result = self._buffer + six.int2byte(pad_size) * pad_size\n self._buffer = None\n return result\n\n\[email protected]_interface(PaddingContext)\nclass _PKCS7UnpaddingContext(object):\n def __init__(self, block_size):\n self.block_size = block_size\n # TODO: more copies than necessary, we should use zero-buffer (#193)\n self._buffer = b\"\"\n\n def update(self, data):\n if self._buffer is None:\n raise AlreadyFinalized(\"Context was already finalized.\")\n\n if not isinstance(data, bytes):\n raise TypeError(\"data must be bytes.\")\n\n self._buffer += data\n\n finished_blocks = max(\n len(self._buffer) // (self.block_size // 8) - 1,\n 0\n )\n\n result = self._buffer[:finished_blocks * (self.block_size // 8)]\n self._buffer = self._buffer[finished_blocks * (self.block_size // 8):]\n\n return result\n\n def finalize(self):\n if self._buffer is None:\n raise AlreadyFinalized(\"Context was already finalized.\")\n\n if len(self._buffer) != self.block_size // 8:\n raise ValueError(\"Invalid padding bytes.\")\n\n valid = _lib.Cryptography_check_pkcs7_padding(\n self._buffer, self.block_size // 8\n )\n\n if not valid:\n raise ValueError(\"Invalid padding bytes.\")\n\n pad_size = six.indexbytes(self._buffer, -1)\n res = self._buffer[:-pad_size]\n self._buffer = None\n return res\n", "path": "src/cryptography/hazmat/primitives/padding.py"}]}
| 1,982 | 613 |
gh_patches_debug_4112
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-12417
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Google Pubsub push messages mis-identified as crawler
## Important Details
How are you running Sentry?
* [ ] On-Premise docker [Version xyz]
* [x] Saas (sentry.io)
* [ ] Other [briefly describe your environment]
## Description
We get the Sentry API error `Sentry responded with an API error: APIError(Event dropped due to filter: web-crawlers)` when there's an exception in a [Google Pubsub push](https://cloud.google.com/pubsub/docs/push) handler.
Apparently the user agent is `APIs-Google`.
## Steps to Reproduce
1. Set up a Google Pubsub push HTTP event handler
2. Have an exception in the message handler code
3. Not get report in Sentry
### What you expected to happen
`APIs-Google` isn't identified as a web crawler.
### Possible Solution
Improve the regex? 😸
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/filters/web_crawlers.py`
Content:
```
1 from __future__ import absolute_import
2
3 import re
4
5 from .base import Filter
6 from sentry.utils.data_filters import FilterStatKeys
7 from sentry.utils.safe import get_path
8
9 # not all of these agents are guaranteed to execute JavaScript, but to avoid
10 # overhead of identifying which ones do, and which ones will over time we simply
11 # target all of the major ones
12 CRAWLERS = re.compile(
13 r'|'.join(
14 (
15 # various Google services
16 r'AdsBot',
17 # Google Adsense
18 r'Mediapartners',
19 # Google+ and Google web search
20 r'Google',
21 # Bing search
22 r'BingBot',
23 r'BingPreview',
24 # Baidu search
25 r'Baiduspider',
26 # Yahoo
27 r'Slurp',
28 # Sogou
29 r'Sogou',
30 # facebook
31 r'facebook',
32 # Alexa
33 r'ia_archiver',
34 # Generic bot
35 r'bots?[\/\s\)\;]',
36 # Generic spider
37 r'spider[\/\s\)\;]',
38 # Slack - see https://api.slack.com/robots
39 r'Slack',
40 # Google indexing bot
41 r'Calypso AppCrawler',
42 )
43 ),
44 re.I
45 )
46
47
48 class WebCrawlersFilter(Filter):
49 id = FilterStatKeys.WEB_CRAWLER
50 name = 'Filter out known web crawlers'
51 description = 'Some crawlers may execute pages in incompatible ways which then cause errors that are unlikely to be seen by a normal user.'
52 default = True
53
54 def get_user_agent(self, data):
55 try:
56 for key, value in get_path(data, 'request', 'headers', filter=True) or ():
57 if key.lower() == 'user-agent':
58 return value
59 except LookupError:
60 return ''
61
62 def test(self, data):
63 # TODO(dcramer): we could also look at UA parser and use the 'Spider'
64 # device type
65 user_agent = self.get_user_agent(data)
66 if not user_agent:
67 return False
68 return bool(CRAWLERS.search(user_agent))
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/sentry/filters/web_crawlers.py b/src/sentry/filters/web_crawlers.py
--- a/src/sentry/filters/web_crawlers.py
+++ b/src/sentry/filters/web_crawlers.py
@@ -16,8 +16,8 @@
r'AdsBot',
# Google Adsense
r'Mediapartners',
- # Google+ and Google web search
- r'Google',
+ # Google+ and Google web search, but not apis-google
+ r'(?<!APIs-)Google',
# Bing search
r'BingBot',
r'BingPreview',
|
{"golden_diff": "diff --git a/src/sentry/filters/web_crawlers.py b/src/sentry/filters/web_crawlers.py\n--- a/src/sentry/filters/web_crawlers.py\n+++ b/src/sentry/filters/web_crawlers.py\n@@ -16,8 +16,8 @@\n r'AdsBot',\n # Google Adsense\n r'Mediapartners',\n- # Google+ and Google web search\n- r'Google',\n+ # Google+ and Google web search, but not apis-google\n+ r'(?<!APIs-)Google',\n # Bing search\n r'BingBot',\n r'BingPreview',\n", "issue": "Google Pubsub push messages mis-identified as crawler\n## Important Details\r\n\r\nHow are you running Sentry?\r\n\r\n* [ ] On-Premise docker [Version xyz]\r\n* [x] Saas (sentry.io)\r\n* [ ] Other [briefly describe your environment]\r\n\r\n## Description\r\n\r\nWe get the Sentry API error `Sentry responded with an API error: APIError(Event dropped due to filter: web-crawlers)` when there's an exception in a [Google Pubsub push](https://cloud.google.com/pubsub/docs/push) handler.\r\n\r\nApparently the user agent is `APIs-Google`.\r\n\r\n## Steps to Reproduce\r\n\r\n1. Set up a Google Pubsub push HTTP event handler\r\n2. Have an exception in the message handler code\r\n3. Not get report in Sentry\r\n\r\n### What you expected to happen\r\n\r\n`APIs-Google` isn't identified as a web crawler.\r\n\r\n### Possible Solution\r\n\r\nImprove the regex? \ud83d\ude38 \r\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport re\n\nfrom .base import Filter\nfrom sentry.utils.data_filters import FilterStatKeys\nfrom sentry.utils.safe import get_path\n\n# not all of these agents are guaranteed to execute JavaScript, but to avoid\n# overhead of identifying which ones do, and which ones will over time we simply\n# target all of the major ones\nCRAWLERS = re.compile(\n r'|'.join(\n (\n # various Google services\n r'AdsBot',\n # Google Adsense\n r'Mediapartners',\n # Google+ and Google web search\n r'Google',\n # Bing search\n r'BingBot',\n r'BingPreview',\n # Baidu search\n r'Baiduspider',\n # Yahoo\n r'Slurp',\n # Sogou\n r'Sogou',\n # facebook\n r'facebook',\n # Alexa\n r'ia_archiver',\n # Generic bot\n r'bots?[\\/\\s\\)\\;]',\n # Generic spider\n r'spider[\\/\\s\\)\\;]',\n # Slack - see https://api.slack.com/robots\n r'Slack',\n # Google indexing bot\n r'Calypso AppCrawler',\n )\n ),\n re.I\n)\n\n\nclass WebCrawlersFilter(Filter):\n id = FilterStatKeys.WEB_CRAWLER\n name = 'Filter out known web crawlers'\n description = 'Some crawlers may execute pages in incompatible ways which then cause errors that are unlikely to be seen by a normal user.'\n default = True\n\n def get_user_agent(self, data):\n try:\n for key, value in get_path(data, 'request', 'headers', filter=True) or ():\n if key.lower() == 'user-agent':\n return value\n except LookupError:\n return ''\n\n def test(self, data):\n # TODO(dcramer): we could also look at UA parser and use the 'Spider'\n # device type\n user_agent = self.get_user_agent(data)\n if not user_agent:\n return False\n return bool(CRAWLERS.search(user_agent))\n", "path": "src/sentry/filters/web_crawlers.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport re\n\nfrom .base import Filter\nfrom sentry.utils.data_filters import FilterStatKeys\nfrom sentry.utils.safe import get_path\n\n# not all of these agents are guaranteed to execute JavaScript, but to avoid\n# overhead of identifying which ones do, and which ones will over time we simply\n# target all of the major ones\nCRAWLERS = re.compile(\n r'|'.join(\n (\n # various Google services\n r'AdsBot',\n # Google Adsense\n r'Mediapartners',\n # Google+ and Google web search, but not apis-google\n r'(?<!APIs-)Google',\n # Bing search\n r'BingBot',\n r'BingPreview',\n # Baidu search\n r'Baiduspider',\n # Yahoo\n r'Slurp',\n # Sogou\n r'Sogou',\n # facebook\n r'facebook',\n # Alexa\n r'ia_archiver',\n # Generic bot\n r'bots?[\\/\\s\\)\\;]',\n # Generic spider\n r'spider[\\/\\s\\)\\;]',\n # Slack - see https://api.slack.com/robots\n r'Slack',\n # Google indexing bot\n r'Calypso AppCrawler',\n )\n ),\n re.I\n)\n\n\nclass WebCrawlersFilter(Filter):\n id = FilterStatKeys.WEB_CRAWLER\n name = 'Filter out known web crawlers'\n description = 'Some crawlers may execute pages in incompatible ways which then cause errors that are unlikely to be seen by a normal user.'\n default = True\n\n def get_user_agent(self, data):\n try:\n for key, value in get_path(data, 'request', 'headers', filter=True) or ():\n if key.lower() == 'user-agent':\n return value\n except LookupError:\n return ''\n\n def test(self, data):\n # TODO(dcramer): we could also look at UA parser and use the 'Spider'\n # device type\n user_agent = self.get_user_agent(data)\n if not user_agent:\n return False\n return bool(CRAWLERS.search(user_agent))\n", "path": "src/sentry/filters/web_crawlers.py"}]}
| 1,067 | 143 |
gh_patches_debug_36807
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-2491
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bulk_update() in content-stages can cause (very rare) deadlock
**Version**
3.14
**Describe the bug**
In high-concurrency environments, with overlapping content, calling bulk_update() can cause a deadlock. Specifically, this call:
https://github.com/pulp/pulpcore/blob/main/pulpcore/plugin/stages/content_stages.py#L158-L164
Ordering the list-to-be-updated does not, alas, protect us - because Postgres doesn't guarantee order when doing an update like this.
**To Reproduce**
We have only seen this "in the wild" once, syncing 8-10 repos with similar content at the same time with 10 workers available.
**Expected behavior**
Don't deadlock.
**Additional context**
This is the traceback from the initial description for
https://bugzilla.redhat.com/show_bug.cgi?id=2062526
We fixed the deadlock noted in https://bugzilla.redhat.com/show_bug.cgi?id=2062526#c2 under #2420
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/plugin/stages/content_stages.py`
Content:
```
1 from collections import defaultdict
2
3 from asgiref.sync import sync_to_async
4 from django.core.exceptions import ObjectDoesNotExist
5 from django.db import IntegrityError, transaction
6 from django.db.models import Q
7
8 from pulpcore.plugin.sync import sync_to_async_iterable
9
10 from pulpcore.plugin.models import Content, ContentArtifact, ProgressReport
11
12 from .api import Stage
13
14
15 class QueryExistingContents(Stage):
16 """
17 A Stages API stage that saves :attr:`DeclarativeContent.content` objects and saves its related
18 :class:`~pulpcore.plugin.models.ContentArtifact` objects too.
19
20 This stage expects :class:`~pulpcore.plugin.stages.DeclarativeContent` units from `self._in_q`
21 and inspects their associated :class:`~pulpcore.plugin.stages.DeclarativeArtifact` objects. Each
22 :class:`~pulpcore.plugin.stages.DeclarativeArtifact` object stores one
23 :class:`~pulpcore.plugin.models.Artifact`.
24
25 This stage inspects any "unsaved" Content unit objects and searches for existing saved Content
26 units inside Pulp with the same unit key. Any existing Content objects found, replace their
27 "unsaved" counterpart in the :class:`~pulpcore.plugin.stages.DeclarativeContent` object.
28
29 Each :class:`~pulpcore.plugin.stages.DeclarativeContent` is sent to `self._out_q` after it has
30 been handled.
31
32 This stage drains all available items from `self._in_q` and batches everything into one large
33 call to the db for efficiency.
34 """
35
36 async def run(self):
37 """
38 The coroutine for this stage.
39
40 Returns:
41 The coroutine for this stage.
42 """
43 async for batch in self.batches():
44 content_q_by_type = defaultdict(lambda: Q(pk__in=[]))
45 d_content_by_nat_key = defaultdict(list)
46 for d_content in batch:
47 if d_content.content._state.adding:
48 model_type = type(d_content.content)
49 unit_q = d_content.content.q()
50 content_q_by_type[model_type] = content_q_by_type[model_type] | unit_q
51 d_content_by_nat_key[d_content.content.natural_key()].append(d_content)
52
53 for model_type, content_q in content_q_by_type.items():
54 try:
55 await sync_to_async(model_type.objects.filter(content_q).touch)()
56 except AttributeError:
57 raise TypeError(
58 "Plugins which declare custom ORM managers on their content classes "
59 "should have those managers inherit from "
60 "pulpcore.plugin.models.ContentManager."
61 )
62 async for result in sync_to_async_iterable(
63 model_type.objects.filter(content_q).iterator()
64 ):
65 for d_content in d_content_by_nat_key[result.natural_key()]:
66 d_content.content = result
67
68 for d_content in batch:
69 await self.put(d_content)
70
71
72 class ContentSaver(Stage):
73 """
74 A Stages API stage that saves :attr:`DeclarativeContent.content` objects and saves its related
75 :class:`~pulpcore.plugin.models.ContentArtifact` objects too.
76
77 This stage expects :class:`~pulpcore.plugin.stages.DeclarativeContent` units from `self._in_q`
78 and inspects their associated :class:`~pulpcore.plugin.stages.DeclarativeArtifact` objects. Each
79 :class:`~pulpcore.plugin.stages.DeclarativeArtifact` object stores one
80 :class:`~pulpcore.plugin.models.Artifact`.
81
82 Each "unsaved" Content objects is saved and a :class:`~pulpcore.plugin.models.ContentArtifact`
83 objects too.
84
85 Each :class:`~pulpcore.plugin.stages.DeclarativeContent` is sent to after it has been handled.
86
87 This stage drains all available items from `self._in_q` and batches everything into one large
88 call to the db for efficiency.
89 """
90
91 async def run(self):
92 """
93 The coroutine for this stage.
94
95 Returns:
96 The coroutine for this stage.
97 """
98 async for batch in self.batches():
99
100 def process_batch():
101 content_artifact_bulk = []
102 to_update_ca_query = ContentArtifact.objects.none()
103 to_update_ca_bulk = []
104 to_update_ca_artifact = {}
105 with transaction.atomic():
106 self._pre_save(batch)
107 # Process the batch in dc.content.natural_keys order.
108 # This prevents deadlocks when we're processing the same/similar content
109 # in concurrent workers.
110 batch.sort(key=lambda x: "".join(map(str, x.content.natural_key())))
111 for d_content in batch:
112 # Are we saving to the database for the first time?
113 content_already_saved = not d_content.content._state.adding
114 if not content_already_saved:
115 try:
116 with transaction.atomic():
117 d_content.content.save()
118 except IntegrityError as e:
119 try:
120 d_content.content = d_content.content.__class__.objects.get(
121 d_content.content.q()
122 )
123 except ObjectDoesNotExist:
124 raise e
125 else:
126 for d_artifact in d_content.d_artifacts:
127 if not d_artifact.artifact._state.adding:
128 artifact = d_artifact.artifact
129 else:
130 # set to None for on-demand synced artifacts
131 artifact = None
132 content_artifact = ContentArtifact(
133 content=d_content.content,
134 artifact=artifact,
135 relative_path=d_artifact.relative_path,
136 )
137 content_artifact_bulk.append(content_artifact)
138 continue
139 # When the Content already exists, check if ContentArtifacts need to be
140 # updated
141 for d_artifact in d_content.d_artifacts:
142 if not d_artifact.artifact._state.adding:
143 # the artifact is already present in the database; update references
144 # Creating one large query and one large dictionary
145 to_update_ca_query |= ContentArtifact.objects.filter(
146 content=d_content.content,
147 relative_path=d_artifact.relative_path,
148 )
149 key = (d_content.content.pk, d_artifact.relative_path)
150 to_update_ca_artifact[key] = d_artifact.artifact
151 # Query db once and update each object in memory for bulk_update call
152 for content_artifact in to_update_ca_query.iterator():
153 key = (content_artifact.content_id, content_artifact.relative_path)
154 # Maybe remove dict elements after to reduce memory?
155 content_artifact.artifact = to_update_ca_artifact[key]
156 to_update_ca_bulk.append(content_artifact)
157
158 # Sort the lists we're about to do bulk updates/creates on.
159 # We know to_update_ca_bulk entries already are in the DB, so we can enforce
160 # order just using pulp_id.
161 to_update_ca_bulk.sort(key=lambda x: x.pulp_id)
162 content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))
163
164 ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
165 ContentArtifact.objects.bulk_get_or_create(content_artifact_bulk)
166 self._post_save(batch)
167
168 await sync_to_async(process_batch)()
169 for declarative_content in batch:
170 await self.put(declarative_content)
171
172 def _pre_save(self, batch):
173 """
174 A hook plugin-writers can override to save related objects prior to content unit saving.
175
176 This is run within the same transaction as the content unit saving.
177
178 Args:
179 batch (list of :class:`~pulpcore.plugin.stages.DeclarativeContent`): The batch of
180 :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be saved.
181
182 """
183 pass
184
185 def _post_save(self, batch):
186 """
187 A hook plugin-writers can override to save related objects after content unit saving.
188
189 This is run within the same transaction as the content unit saving.
190
191 Args:
192 batch (list of :class:`~pulpcore.plugin.stages.DeclarativeContent`): The batch of
193 :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be saved.
194
195 """
196 pass
197
198
199 class ResolveContentFutures(Stage):
200 """
201 This stage resolves the futures in :class:`~pulpcore.plugin.stages.DeclarativeContent`.
202
203 Futures results are set to the found/created :class:`~pulpcore.plugin.models.Content`.
204
205 This is useful when data downloaded from the plugin API needs to be parsed by FirstStage to
206 create additional :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be send down
207 the pipeline. Consider an example where content type `Foo` references additional instances of a
208 different content type `Bar`. Consider this code in FirstStage::
209
210 # Create d_content and d_artifact for a `foo_a`
211 foo_a = DeclarativeContent(...)
212 # Send it in the pipeline
213 await self.put(foo_a)
214
215 ...
216
217 foo_a_content = await foo_a.resolution() # awaits until the foo_a reaches this stage
218
219 This creates a "looping" pattern, of sorts, where downloaded content at the end of the pipeline
220 can introduce new additional to-be-downloaded content at the beginning of the pipeline.
221 On the other hand, it can impose a substantial performance decrement of batching content in the
222 earlier stages.
223 If you want to drop a declarative content prematurely from the pipeline, use the function
224 `resolve()` to unblock the coroutines awaiting the attached future and do not hand the content
225 to the next stage.
226 As a rule of thumb, sending more items into the pipeline first and awaiting their resolution
227 later is better.
228 """
229
230 async def run(self):
231 """
232 The coroutine for this stage.
233
234 Returns:
235 The coroutine for this stage.
236 """
237 async for d_content in self.items():
238 d_content.resolve()
239 await self.put(d_content)
240
241
242 class ContentAssociation(Stage):
243 """
244 A Stages API stage that associates content units with `new_version`.
245
246 This stage stores all content unit primary keys in memory before running. This is done to
247 compute the units already associated but not received from `self._in_q`. These units are passed
248 via `self._out_q` to the next stage as a :class:`django.db.models.query.QuerySet`.
249
250 This stage creates a ProgressReport named 'Associating Content' that counts the number of units
251 associated. Since it's a stream the total count isn't known until it's finished.
252
253 If `mirror` was enabled, then content units may also be un-assocated (removed) from
254 `new_version`. A ProgressReport named 'Un-Associating Content' is created that counts the number
255 of units un-associated.
256
257 Args:
258 new_version (:class:`~pulpcore.plugin.models.RepositoryVersion`): The repo version this
259 stage associates content with.
260 mirror (bool): Whether or not to "mirror" the stream of DeclarativeContent - whether content
261 not in the stream should be removed from the repository.
262 args: unused positional arguments passed along to :class:`~pulpcore.plugin.stages.Stage`.
263 kwargs: unused keyword arguments passed along to :class:`~pulpcore.plugin.stages.Stage`.
264 """
265
266 def __init__(self, new_version, mirror, *args, **kwargs):
267 super().__init__(*args, **kwargs)
268 self.new_version = new_version
269 self.allow_delete = mirror
270
271 async def run(self):
272 """
273 The coroutine for this stage.
274
275 Returns:
276 The coroutine for this stage.
277 """
278 async with ProgressReport(message="Associating Content", code="associating.content") as pb:
279 to_delete = {
280 i
281 async for i in sync_to_async_iterable(
282 self.new_version.content.values_list("pk", flat=True)
283 )
284 }
285
286 async for batch in self.batches():
287 to_add = set()
288 for d_content in batch:
289 try:
290 to_delete.remove(d_content.content.pk)
291 except KeyError:
292 to_add.add(d_content.content.pk)
293 await self.put(d_content)
294
295 if to_add:
296 await sync_to_async(self.new_version.add_content)(
297 Content.objects.filter(pk__in=to_add)
298 )
299 await pb.aincrease_by(len(to_add))
300
301 if self.allow_delete:
302 async with ProgressReport(
303 message="Un-Associating Content", code="unassociating.content"
304 ) as pb:
305 if to_delete:
306 await sync_to_async(self.new_version.remove_content)(
307 Content.objects.filter(pk__in=to_delete)
308 )
309 await pb.aincrease_by(len(to_delete))
310
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -155,13 +155,39 @@
content_artifact.artifact = to_update_ca_artifact[key]
to_update_ca_bulk.append(content_artifact)
- # Sort the lists we're about to do bulk updates/creates on.
- # We know to_update_ca_bulk entries already are in the DB, so we can enforce
- # order just using pulp_id.
- to_update_ca_bulk.sort(key=lambda x: x.pulp_id)
- content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))
+ # to_update_ca_bulk are the CAs that we know are already persisted.
+ # We need to update their artifact_ids, and wish to do it in bulk to
+ # avoid hundreds of round-trips to the database.
+ #
+ # To avoid deadlocks in high-concurrency environments with overlapping
+ # content, we need to update the rows in some defined order. Unfortunately,
+ # postgres doesn't support order-on-update - but it *does* support ordering
+ # on select-for-update. So, we select-for-update, in pulp_id order, the
+ # rows we're about to update as one db-call, and then do the update in a
+ # second.
+ ids = [k.pulp_id for k in to_update_ca_bulk]
+ with transaction.atomic():
+ # "len()" forces the QA to be evaluated. Using exist() or count() won't
+ # work for us - Django is smart enough to either not-order, or even
+ # not-emit, a select-for-update in these cases.
+ #
+ # To maximize performance, we make sure to only ask for pulp_ids, and
+ # avoid instantiating a python-object for the affected CAs by using
+ # values_list()
+ len(
+ ContentArtifact.objects.filter(pulp_id__in=ids)
+ .only("pulp_id")
+ .order_by("pulp_id")
+ .select_for_update()
+ .values_list()
+ )
+ ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
- ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
+ # To avoid a similar deadlock issue when calling get_or_create, we sort the
+ # "new" CAs to make sure inserts happen in a defined order. Since we can't
+ # trust the pulp_id (by the time we go to create a CA, it may already exist,
+ # and be replaced by the 'real' one), we sort by their "natural key".
+ content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))
ContentArtifact.objects.bulk_get_or_create(content_artifact_bulk)
self._post_save(batch)
|
{"golden_diff": "diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py\n--- a/pulpcore/plugin/stages/content_stages.py\n+++ b/pulpcore/plugin/stages/content_stages.py\n@@ -155,13 +155,39 @@\n content_artifact.artifact = to_update_ca_artifact[key]\n to_update_ca_bulk.append(content_artifact)\n \n- # Sort the lists we're about to do bulk updates/creates on.\n- # We know to_update_ca_bulk entries already are in the DB, so we can enforce\n- # order just using pulp_id.\n- to_update_ca_bulk.sort(key=lambda x: x.pulp_id)\n- content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))\n+ # to_update_ca_bulk are the CAs that we know are already persisted.\n+ # We need to update their artifact_ids, and wish to do it in bulk to\n+ # avoid hundreds of round-trips to the database.\n+ #\n+ # To avoid deadlocks in high-concurrency environments with overlapping\n+ # content, we need to update the rows in some defined order. Unfortunately,\n+ # postgres doesn't support order-on-update - but it *does* support ordering\n+ # on select-for-update. So, we select-for-update, in pulp_id order, the\n+ # rows we're about to update as one db-call, and then do the update in a\n+ # second.\n+ ids = [k.pulp_id for k in to_update_ca_bulk]\n+ with transaction.atomic():\n+ # \"len()\" forces the QA to be evaluated. Using exist() or count() won't\n+ # work for us - Django is smart enough to either not-order, or even\n+ # not-emit, a select-for-update in these cases.\n+ #\n+ # To maximize performance, we make sure to only ask for pulp_ids, and\n+ # avoid instantiating a python-object for the affected CAs by using\n+ # values_list()\n+ len(\n+ ContentArtifact.objects.filter(pulp_id__in=ids)\n+ .only(\"pulp_id\")\n+ .order_by(\"pulp_id\")\n+ .select_for_update()\n+ .values_list()\n+ )\n+ ContentArtifact.objects.bulk_update(to_update_ca_bulk, [\"artifact\"])\n \n- ContentArtifact.objects.bulk_update(to_update_ca_bulk, [\"artifact\"])\n+ # To avoid a similar deadlock issue when calling get_or_create, we sort the\n+ # \"new\" CAs to make sure inserts happen in a defined order. Since we can't\n+ # trust the pulp_id (by the time we go to create a CA, it may already exist,\n+ # and be replaced by the 'real' one), we sort by their \"natural key\".\n+ content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))\n ContentArtifact.objects.bulk_get_or_create(content_artifact_bulk)\n self._post_save(batch)\n", "issue": "bulk_update() in content-stages can cause (very rare) deadlock\n**Version**\r\n3.14\r\n\r\n**Describe the bug**\r\nIn high-concurrency environments, with overlapping content, calling bulk_update() can cause a deadlock. Specifically, this call:\r\n\r\nhttps://github.com/pulp/pulpcore/blob/main/pulpcore/plugin/stages/content_stages.py#L158-L164\r\n\r\nOrdering the list-to-be-updated does not, alas, protect us - because Postgres doesn't guarantee order when doing an update like this.\r\n\r\n**To Reproduce**\r\nWe have only seen this \"in the wild\" once, syncing 8-10 repos with similar content at the same time with 10 workers available.\r\n\r\n**Expected behavior**\r\nDon't deadlock.\r\n\r\n**Additional context**\r\nThis is the traceback from the initial description for\r\n\r\nhttps://bugzilla.redhat.com/show_bug.cgi?id=2062526\r\n\r\nWe fixed the deadlock noted in https://bugzilla.redhat.com/show_bug.cgi?id=2062526#c2 under #2420 \r\n\r\n\n", "before_files": [{"content": "from collections import defaultdict\n\nfrom asgiref.sync import sync_to_async\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.db import IntegrityError, transaction\nfrom django.db.models import Q\n\nfrom pulpcore.plugin.sync import sync_to_async_iterable\n\nfrom pulpcore.plugin.models import Content, ContentArtifact, ProgressReport\n\nfrom .api import Stage\n\n\nclass QueryExistingContents(Stage):\n \"\"\"\n A Stages API stage that saves :attr:`DeclarativeContent.content` objects and saves its related\n :class:`~pulpcore.plugin.models.ContentArtifact` objects too.\n\n This stage expects :class:`~pulpcore.plugin.stages.DeclarativeContent` units from `self._in_q`\n and inspects their associated :class:`~pulpcore.plugin.stages.DeclarativeArtifact` objects. Each\n :class:`~pulpcore.plugin.stages.DeclarativeArtifact` object stores one\n :class:`~pulpcore.plugin.models.Artifact`.\n\n This stage inspects any \"unsaved\" Content unit objects and searches for existing saved Content\n units inside Pulp with the same unit key. Any existing Content objects found, replace their\n \"unsaved\" counterpart in the :class:`~pulpcore.plugin.stages.DeclarativeContent` object.\n\n Each :class:`~pulpcore.plugin.stages.DeclarativeContent` is sent to `self._out_q` after it has\n been handled.\n\n This stage drains all available items from `self._in_q` and batches everything into one large\n call to the db for efficiency.\n \"\"\"\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n async for batch in self.batches():\n content_q_by_type = defaultdict(lambda: Q(pk__in=[]))\n d_content_by_nat_key = defaultdict(list)\n for d_content in batch:\n if d_content.content._state.adding:\n model_type = type(d_content.content)\n unit_q = d_content.content.q()\n content_q_by_type[model_type] = content_q_by_type[model_type] | unit_q\n d_content_by_nat_key[d_content.content.natural_key()].append(d_content)\n\n for model_type, content_q in content_q_by_type.items():\n try:\n await sync_to_async(model_type.objects.filter(content_q).touch)()\n except AttributeError:\n raise TypeError(\n \"Plugins which declare custom ORM managers on their content classes \"\n \"should have those managers inherit from \"\n \"pulpcore.plugin.models.ContentManager.\"\n )\n async for result in sync_to_async_iterable(\n model_type.objects.filter(content_q).iterator()\n ):\n for d_content in d_content_by_nat_key[result.natural_key()]:\n d_content.content = result\n\n for d_content in batch:\n await self.put(d_content)\n\n\nclass ContentSaver(Stage):\n \"\"\"\n A Stages API stage that saves :attr:`DeclarativeContent.content` objects and saves its related\n :class:`~pulpcore.plugin.models.ContentArtifact` objects too.\n\n This stage expects :class:`~pulpcore.plugin.stages.DeclarativeContent` units from `self._in_q`\n and inspects their associated :class:`~pulpcore.plugin.stages.DeclarativeArtifact` objects. Each\n :class:`~pulpcore.plugin.stages.DeclarativeArtifact` object stores one\n :class:`~pulpcore.plugin.models.Artifact`.\n\n Each \"unsaved\" Content objects is saved and a :class:`~pulpcore.plugin.models.ContentArtifact`\n objects too.\n\n Each :class:`~pulpcore.plugin.stages.DeclarativeContent` is sent to after it has been handled.\n\n This stage drains all available items from `self._in_q` and batches everything into one large\n call to the db for efficiency.\n \"\"\"\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n async for batch in self.batches():\n\n def process_batch():\n content_artifact_bulk = []\n to_update_ca_query = ContentArtifact.objects.none()\n to_update_ca_bulk = []\n to_update_ca_artifact = {}\n with transaction.atomic():\n self._pre_save(batch)\n # Process the batch in dc.content.natural_keys order.\n # This prevents deadlocks when we're processing the same/similar content\n # in concurrent workers.\n batch.sort(key=lambda x: \"\".join(map(str, x.content.natural_key())))\n for d_content in batch:\n # Are we saving to the database for the first time?\n content_already_saved = not d_content.content._state.adding\n if not content_already_saved:\n try:\n with transaction.atomic():\n d_content.content.save()\n except IntegrityError as e:\n try:\n d_content.content = d_content.content.__class__.objects.get(\n d_content.content.q()\n )\n except ObjectDoesNotExist:\n raise e\n else:\n for d_artifact in d_content.d_artifacts:\n if not d_artifact.artifact._state.adding:\n artifact = d_artifact.artifact\n else:\n # set to None for on-demand synced artifacts\n artifact = None\n content_artifact = ContentArtifact(\n content=d_content.content,\n artifact=artifact,\n relative_path=d_artifact.relative_path,\n )\n content_artifact_bulk.append(content_artifact)\n continue\n # When the Content already exists, check if ContentArtifacts need to be\n # updated\n for d_artifact in d_content.d_artifacts:\n if not d_artifact.artifact._state.adding:\n # the artifact is already present in the database; update references\n # Creating one large query and one large dictionary\n to_update_ca_query |= ContentArtifact.objects.filter(\n content=d_content.content,\n relative_path=d_artifact.relative_path,\n )\n key = (d_content.content.pk, d_artifact.relative_path)\n to_update_ca_artifact[key] = d_artifact.artifact\n # Query db once and update each object in memory for bulk_update call\n for content_artifact in to_update_ca_query.iterator():\n key = (content_artifact.content_id, content_artifact.relative_path)\n # Maybe remove dict elements after to reduce memory?\n content_artifact.artifact = to_update_ca_artifact[key]\n to_update_ca_bulk.append(content_artifact)\n\n # Sort the lists we're about to do bulk updates/creates on.\n # We know to_update_ca_bulk entries already are in the DB, so we can enforce\n # order just using pulp_id.\n to_update_ca_bulk.sort(key=lambda x: x.pulp_id)\n content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))\n\n ContentArtifact.objects.bulk_update(to_update_ca_bulk, [\"artifact\"])\n ContentArtifact.objects.bulk_get_or_create(content_artifact_bulk)\n self._post_save(batch)\n\n await sync_to_async(process_batch)()\n for declarative_content in batch:\n await self.put(declarative_content)\n\n def _pre_save(self, batch):\n \"\"\"\n A hook plugin-writers can override to save related objects prior to content unit saving.\n\n This is run within the same transaction as the content unit saving.\n\n Args:\n batch (list of :class:`~pulpcore.plugin.stages.DeclarativeContent`): The batch of\n :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be saved.\n\n \"\"\"\n pass\n\n def _post_save(self, batch):\n \"\"\"\n A hook plugin-writers can override to save related objects after content unit saving.\n\n This is run within the same transaction as the content unit saving.\n\n Args:\n batch (list of :class:`~pulpcore.plugin.stages.DeclarativeContent`): The batch of\n :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be saved.\n\n \"\"\"\n pass\n\n\nclass ResolveContentFutures(Stage):\n \"\"\"\n This stage resolves the futures in :class:`~pulpcore.plugin.stages.DeclarativeContent`.\n\n Futures results are set to the found/created :class:`~pulpcore.plugin.models.Content`.\n\n This is useful when data downloaded from the plugin API needs to be parsed by FirstStage to\n create additional :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be send down\n the pipeline. Consider an example where content type `Foo` references additional instances of a\n different content type `Bar`. Consider this code in FirstStage::\n\n # Create d_content and d_artifact for a `foo_a`\n foo_a = DeclarativeContent(...)\n # Send it in the pipeline\n await self.put(foo_a)\n\n ...\n\n foo_a_content = await foo_a.resolution() # awaits until the foo_a reaches this stage\n\n This creates a \"looping\" pattern, of sorts, where downloaded content at the end of the pipeline\n can introduce new additional to-be-downloaded content at the beginning of the pipeline.\n On the other hand, it can impose a substantial performance decrement of batching content in the\n earlier stages.\n If you want to drop a declarative content prematurely from the pipeline, use the function\n `resolve()` to unblock the coroutines awaiting the attached future and do not hand the content\n to the next stage.\n As a rule of thumb, sending more items into the pipeline first and awaiting their resolution\n later is better.\n \"\"\"\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n async for d_content in self.items():\n d_content.resolve()\n await self.put(d_content)\n\n\nclass ContentAssociation(Stage):\n \"\"\"\n A Stages API stage that associates content units with `new_version`.\n\n This stage stores all content unit primary keys in memory before running. This is done to\n compute the units already associated but not received from `self._in_q`. These units are passed\n via `self._out_q` to the next stage as a :class:`django.db.models.query.QuerySet`.\n\n This stage creates a ProgressReport named 'Associating Content' that counts the number of units\n associated. Since it's a stream the total count isn't known until it's finished.\n\n If `mirror` was enabled, then content units may also be un-assocated (removed) from\n `new_version`. A ProgressReport named 'Un-Associating Content' is created that counts the number\n of units un-associated.\n\n Args:\n new_version (:class:`~pulpcore.plugin.models.RepositoryVersion`): The repo version this\n stage associates content with.\n mirror (bool): Whether or not to \"mirror\" the stream of DeclarativeContent - whether content\n not in the stream should be removed from the repository.\n args: unused positional arguments passed along to :class:`~pulpcore.plugin.stages.Stage`.\n kwargs: unused keyword arguments passed along to :class:`~pulpcore.plugin.stages.Stage`.\n \"\"\"\n\n def __init__(self, new_version, mirror, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.new_version = new_version\n self.allow_delete = mirror\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n async with ProgressReport(message=\"Associating Content\", code=\"associating.content\") as pb:\n to_delete = {\n i\n async for i in sync_to_async_iterable(\n self.new_version.content.values_list(\"pk\", flat=True)\n )\n }\n\n async for batch in self.batches():\n to_add = set()\n for d_content in batch:\n try:\n to_delete.remove(d_content.content.pk)\n except KeyError:\n to_add.add(d_content.content.pk)\n await self.put(d_content)\n\n if to_add:\n await sync_to_async(self.new_version.add_content)(\n Content.objects.filter(pk__in=to_add)\n )\n await pb.aincrease_by(len(to_add))\n\n if self.allow_delete:\n async with ProgressReport(\n message=\"Un-Associating Content\", code=\"unassociating.content\"\n ) as pb:\n if to_delete:\n await sync_to_async(self.new_version.remove_content)(\n Content.objects.filter(pk__in=to_delete)\n )\n await pb.aincrease_by(len(to_delete))\n", "path": "pulpcore/plugin/stages/content_stages.py"}], "after_files": [{"content": "from collections import defaultdict\n\nfrom asgiref.sync import sync_to_async\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.db import IntegrityError, transaction\nfrom django.db.models import Q\n\nfrom pulpcore.plugin.sync import sync_to_async_iterable\n\nfrom pulpcore.plugin.models import Content, ContentArtifact, ProgressReport\n\nfrom .api import Stage\n\n\nclass QueryExistingContents(Stage):\n \"\"\"\n A Stages API stage that saves :attr:`DeclarativeContent.content` objects and saves its related\n :class:`~pulpcore.plugin.models.ContentArtifact` objects too.\n\n This stage expects :class:`~pulpcore.plugin.stages.DeclarativeContent` units from `self._in_q`\n and inspects their associated :class:`~pulpcore.plugin.stages.DeclarativeArtifact` objects. Each\n :class:`~pulpcore.plugin.stages.DeclarativeArtifact` object stores one\n :class:`~pulpcore.plugin.models.Artifact`.\n\n This stage inspects any \"unsaved\" Content unit objects and searches for existing saved Content\n units inside Pulp with the same unit key. Any existing Content objects found, replace their\n \"unsaved\" counterpart in the :class:`~pulpcore.plugin.stages.DeclarativeContent` object.\n\n Each :class:`~pulpcore.plugin.stages.DeclarativeContent` is sent to `self._out_q` after it has\n been handled.\n\n This stage drains all available items from `self._in_q` and batches everything into one large\n call to the db for efficiency.\n \"\"\"\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n async for batch in self.batches():\n content_q_by_type = defaultdict(lambda: Q(pk__in=[]))\n d_content_by_nat_key = defaultdict(list)\n for d_content in batch:\n if d_content.content._state.adding:\n model_type = type(d_content.content)\n unit_q = d_content.content.q()\n content_q_by_type[model_type] = content_q_by_type[model_type] | unit_q\n d_content_by_nat_key[d_content.content.natural_key()].append(d_content)\n\n for model_type, content_q in content_q_by_type.items():\n try:\n await sync_to_async(model_type.objects.filter(content_q).touch)()\n except AttributeError:\n raise TypeError(\n \"Plugins which declare custom ORM managers on their content classes \"\n \"should have those managers inherit from \"\n \"pulpcore.plugin.models.ContentManager.\"\n )\n async for result in sync_to_async_iterable(\n model_type.objects.filter(content_q).iterator()\n ):\n for d_content in d_content_by_nat_key[result.natural_key()]:\n d_content.content = result\n\n for d_content in batch:\n await self.put(d_content)\n\n\nclass ContentSaver(Stage):\n \"\"\"\n A Stages API stage that saves :attr:`DeclarativeContent.content` objects and saves its related\n :class:`~pulpcore.plugin.models.ContentArtifact` objects too.\n\n This stage expects :class:`~pulpcore.plugin.stages.DeclarativeContent` units from `self._in_q`\n and inspects their associated :class:`~pulpcore.plugin.stages.DeclarativeArtifact` objects. Each\n :class:`~pulpcore.plugin.stages.DeclarativeArtifact` object stores one\n :class:`~pulpcore.plugin.models.Artifact`.\n\n Each \"unsaved\" Content objects is saved and a :class:`~pulpcore.plugin.models.ContentArtifact`\n objects too.\n\n Each :class:`~pulpcore.plugin.stages.DeclarativeContent` is sent to after it has been handled.\n\n This stage drains all available items from `self._in_q` and batches everything into one large\n call to the db for efficiency.\n \"\"\"\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n async for batch in self.batches():\n\n def process_batch():\n content_artifact_bulk = []\n to_update_ca_query = ContentArtifact.objects.none()\n to_update_ca_bulk = []\n to_update_ca_artifact = {}\n with transaction.atomic():\n self._pre_save(batch)\n # Process the batch in dc.content.natural_keys order.\n # This prevents deadlocks when we're processing the same/similar content\n # in concurrent workers.\n batch.sort(key=lambda x: \"\".join(map(str, x.content.natural_key())))\n for d_content in batch:\n # Are we saving to the database for the first time?\n content_already_saved = not d_content.content._state.adding\n if not content_already_saved:\n try:\n with transaction.atomic():\n d_content.content.save()\n except IntegrityError as e:\n try:\n d_content.content = d_content.content.__class__.objects.get(\n d_content.content.q()\n )\n except ObjectDoesNotExist:\n raise e\n else:\n for d_artifact in d_content.d_artifacts:\n if not d_artifact.artifact._state.adding:\n artifact = d_artifact.artifact\n else:\n # set to None for on-demand synced artifacts\n artifact = None\n content_artifact = ContentArtifact(\n content=d_content.content,\n artifact=artifact,\n relative_path=d_artifact.relative_path,\n )\n content_artifact_bulk.append(content_artifact)\n continue\n # When the Content already exists, check if ContentArtifacts need to be\n # updated\n for d_artifact in d_content.d_artifacts:\n if not d_artifact.artifact._state.adding:\n # the artifact is already present in the database; update references\n # Creating one large query and one large dictionary\n to_update_ca_query |= ContentArtifact.objects.filter(\n content=d_content.content,\n relative_path=d_artifact.relative_path,\n )\n key = (d_content.content.pk, d_artifact.relative_path)\n to_update_ca_artifact[key] = d_artifact.artifact\n # Query db once and update each object in memory for bulk_update call\n for content_artifact in to_update_ca_query.iterator():\n key = (content_artifact.content_id, content_artifact.relative_path)\n # Maybe remove dict elements after to reduce memory?\n content_artifact.artifact = to_update_ca_artifact[key]\n to_update_ca_bulk.append(content_artifact)\n\n # to_update_ca_bulk are the CAs that we know are already persisted.\n # We need to update their artifact_ids, and wish to do it in bulk to\n # avoid hundreds of round-trips to the database.\n #\n # To avoid deadlocks in high-concurrency environments with overlapping\n # content, we need to update the rows in some defined order. Unfortunately,\n # postgres doesn't support order-on-update - but it *does* support ordering\n # on select-for-update. So, we select-for-update, in pulp_id order, the\n # rows we're about to update as one db-call, and then do the update in a\n # second.\n ids = [k.pulp_id for k in to_update_ca_bulk]\n with transaction.atomic():\n # \"len()\" forces the QA to be evaluated. Using exist() or count() won't\n # work for us - Django is smart enough to either not-order, or even\n # not-emit, a select-for-update in these cases.\n #\n # To maximize performance, we make sure to only ask for pulp_ids, and\n # avoid instantiating a python-object for the affected CAs by using\n # values_list()\n len(\n ContentArtifact.objects.filter(pulp_id__in=ids)\n .only(\"pulp_id\")\n .order_by(\"pulp_id\")\n .select_for_update()\n .values_list()\n )\n ContentArtifact.objects.bulk_update(to_update_ca_bulk, [\"artifact\"])\n\n # To avoid a similar deadlock issue when calling get_or_create, we sort the\n # \"new\" CAs to make sure inserts happen in a defined order. Since we can't\n # trust the pulp_id (by the time we go to create a CA, it may already exist,\n # and be replaced by the 'real' one), we sort by their \"natural key\".\n content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))\n ContentArtifact.objects.bulk_get_or_create(content_artifact_bulk)\n self._post_save(batch)\n\n await sync_to_async(process_batch)()\n for declarative_content in batch:\n await self.put(declarative_content)\n\n def _pre_save(self, batch):\n \"\"\"\n A hook plugin-writers can override to save related objects prior to content unit saving.\n\n This is run within the same transaction as the content unit saving.\n\n Args:\n batch (list of :class:`~pulpcore.plugin.stages.DeclarativeContent`): The batch of\n :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be saved.\n\n \"\"\"\n pass\n\n def _post_save(self, batch):\n \"\"\"\n A hook plugin-writers can override to save related objects after content unit saving.\n\n This is run within the same transaction as the content unit saving.\n\n Args:\n batch (list of :class:`~pulpcore.plugin.stages.DeclarativeContent`): The batch of\n :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be saved.\n\n \"\"\"\n pass\n\n\nclass ResolveContentFutures(Stage):\n \"\"\"\n This stage resolves the futures in :class:`~pulpcore.plugin.stages.DeclarativeContent`.\n\n Futures results are set to the found/created :class:`~pulpcore.plugin.models.Content`.\n\n This is useful when data downloaded from the plugin API needs to be parsed by FirstStage to\n create additional :class:`~pulpcore.plugin.stages.DeclarativeContent` objects to be send down\n the pipeline. Consider an example where content type `Foo` references additional instances of a\n different content type `Bar`. Consider this code in FirstStage::\n\n # Create d_content and d_artifact for a `foo_a`\n foo_a = DeclarativeContent(...)\n # Send it in the pipeline\n await self.put(foo_a)\n\n ...\n\n foo_a_content = await foo_a.resolution() # awaits until the foo_a reaches this stage\n\n This creates a \"looping\" pattern, of sorts, where downloaded content at the end of the pipeline\n can introduce new additional to-be-downloaded content at the beginning of the pipeline.\n On the other hand, it can impose a substantial performance decrement of batching content in the\n earlier stages.\n If you want to drop a declarative content prematurely from the pipeline, use the function\n `resolve()` to unblock the coroutines awaiting the attached future and do not hand the content\n to the next stage.\n As a rule of thumb, sending more items into the pipeline first and awaiting their resolution\n later is better.\n \"\"\"\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n async for d_content in self.items():\n d_content.resolve()\n await self.put(d_content)\n\n\nclass ContentAssociation(Stage):\n \"\"\"\n A Stages API stage that associates content units with `new_version`.\n\n This stage stores all content unit primary keys in memory before running. This is done to\n compute the units already associated but not received from `self._in_q`. These units are passed\n via `self._out_q` to the next stage as a :class:`django.db.models.query.QuerySet`.\n\n This stage creates a ProgressReport named 'Associating Content' that counts the number of units\n associated. Since it's a stream the total count isn't known until it's finished.\n\n If `mirror` was enabled, then content units may also be un-assocated (removed) from\n `new_version`. A ProgressReport named 'Un-Associating Content' is created that counts the number\n of units un-associated.\n\n Args:\n new_version (:class:`~pulpcore.plugin.models.RepositoryVersion`): The repo version this\n stage associates content with.\n mirror (bool): Whether or not to \"mirror\" the stream of DeclarativeContent - whether content\n not in the stream should be removed from the repository.\n args: unused positional arguments passed along to :class:`~pulpcore.plugin.stages.Stage`.\n kwargs: unused keyword arguments passed along to :class:`~pulpcore.plugin.stages.Stage`.\n \"\"\"\n\n def __init__(self, new_version, mirror, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.new_version = new_version\n self.allow_delete = mirror\n\n async def run(self):\n \"\"\"\n The coroutine for this stage.\n\n Returns:\n The coroutine for this stage.\n \"\"\"\n async with ProgressReport(message=\"Associating Content\", code=\"associating.content\") as pb:\n to_delete = {\n i\n async for i in sync_to_async_iterable(\n self.new_version.content.values_list(\"pk\", flat=True)\n )\n }\n\n async for batch in self.batches():\n to_add = set()\n for d_content in batch:\n try:\n to_delete.remove(d_content.content.pk)\n except KeyError:\n to_add.add(d_content.content.pk)\n await self.put(d_content)\n\n if to_add:\n await sync_to_async(self.new_version.add_content)(\n Content.objects.filter(pk__in=to_add)\n )\n await pb.aincrease_by(len(to_add))\n\n if self.allow_delete:\n async with ProgressReport(\n message=\"Un-Associating Content\", code=\"unassociating.content\"\n ) as pb:\n if to_delete:\n await sync_to_async(self.new_version.remove_content)(\n Content.objects.filter(pk__in=to_delete)\n )\n await pb.aincrease_by(len(to_delete))\n", "path": "pulpcore/plugin/stages/content_stages.py"}]}
| 3,994 | 659 |
gh_patches_debug_18931
|
rasdani/github-patches
|
git_diff
|
scikit-hep__pyhf-2310
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
weakrefs to dead objects occuring when changing backends in pyhf benchmark
### Summary
I try to perform a benchmark of `pyhf` using `pytest-benchmark` quite similarly to the benchmark in the `tests/benchmarks` directory.
A short example of such a simple benchmark is given below. To reproduce this bug, the python code needs to be saved in a file of the format `test_<name>.py` and executed via `pytest test_<name>.py`.
The bug occurs only sometimes when the backend is changed between different benchmarking cases. Since the occurence of the bug includes some amount of randomness, it may happen that it doesn't occur on the first try but that the benchmark must be executed multiple times. The full benchmark takes around 1 min on my machine.
The suspected origin of this bug is that every time the backend is changed, an event called `tensorlib_changed` is triggered that in turn leads to the execution of some `_precompute()` functions on different objects (e.g. a `TensorViewer` object as in the error message). The problem occurs, when the referenced object no longer exists, as all references to it have been removed. The reference used to call the function does not change this as it is a weakref.
A proposed solution can be found in PR #2310.
### OS / Environment
```console
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
```
### Steps to Reproduce
<!--- Paste your minimal failing Python example code between the quotes below -->
```python (paste below)
# content of test_benchmark.py
import pytest
import pyhf
@pytest.fixture(
scope='function',
params=[
(pyhf.tensor.numpy_backend(), None),
(pyhf.tensor.pytorch_backend(), None),
(pyhf.tensor.pytorch_backend(precision='64b'), None),
(pyhf.tensor.tensorflow_backend(), None),
(pyhf.tensor.jax_backend(), None),
(
pyhf.tensor.numpy_backend(poisson_from_normal=True),
pyhf.optimize.minuit_optimizer(),
),
],
ids=['numpy', 'pytorch', 'pytorch64',
'tensorflow',
'jax', 'numpy_minuit'],
)
def backend(request):
# get the ids of all the backends
param_ids = request._fixturedef.ids
# the backend we're using: numpy, tensorflow, etc...
param_id = param_ids[request.param_index]
# name of function being called (with params), the original name is .originalname
func_name = request._pyfuncitem.name
pyhf.set_backend(*request.param)
yield request.param
def hypotest(data, pdf):
return pyhf.infer.hypotest(1.0, data, pdf, test_stat="qtilde", return_expected=True)
bins = [1, 2, 4, 8, 16, 32]
bin_ids = [f'{n_bins}_bins' for n_bins in bins]
@pytest.mark.parametrize('n_bins', bins, ids=bin_ids)
def test_hypotest(benchmark, backend, n_bins):
model = pyhf.simplemodels.uncorrelated_background(signal=[12.0]*n_bins, bkg=[50.0]*n_bins, bkg_uncertainty=[5.0]*n_bins)
data = [51.0]*n_bins + model.config.auxdata
assert benchmark(hypotest, data, model)
```
<!--- ...or if you have a failing CLI command paste it between the quotes below -->
```console (paste below)
pytest test_benchmark.py
```
### File Upload (optional)
_No response_
### Expected Results
The expected behavior is to output the benchmarking results for all considered cases as it can be observed when executing `pytest` in `pyhf/tests/benchmarks/`.
This output should not show any "test failures" as no normal tests are performed but only functions that run without an error, when called outside of the benchmark.
### Actual Results
```console
_________________________ ERROR at setup of test_hypotest[jax-1_bins] _________________________
request = <SubRequest 'backend' for <Function test_hypotest[jax-1_bins]>>
@pytest.fixture(
scope='function',
params=[
(pyhf.tensor.numpy_backend(), None),
(pyhf.tensor.pytorch_backend(), None),
(pyhf.tensor.pytorch_backend(precision='64b'), None),
(pyhf.tensor.tensorflow_backend(), None),
(pyhf.tensor.jax_backend(), None),
(
pyhf.tensor.numpy_backend(poisson_from_normal=True),
pyhf.optimize.minuit_optimizer(),
),
],
ids=['numpy', 'pytorch', 'pytorch64',
'tensorflow',
'jax', 'numpy_minuit'],
)
def backend(request):
# get the ids of all the backends
param_ids = request._fixturedef.ids
# the backend we're using: numpy, tensorflow, etc...
param_id = param_ids[request.param_index]
# name of function being called (with params), the original name is .originalname
func_name = request._pyfuncitem.name
> pyhf.set_backend(*request.param)
test_hypo_pyhf.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pyhfDev/pyhf/src/pyhf/events.py:161: in register_wrapper
result = func(*args, **kwargs)
../../pyhfDev/pyhf/src/pyhf/tensor/manager.py:193: in set_backend
events.trigger("tensorlib_changed")()
../../pyhfDev/pyhf/src/pyhf/events.py:70: in __call__
func()(arg(), *args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = None
def _precompute(self):
tensorlib, _ = get_backend()
> self.sorted_indices = tensorlib.astensor(self._sorted_indices, dtype='int')
E AttributeError: 'NoneType' object has no attribute '_sorted_indices'
../../pyhfDev/pyhf/src/pyhf/tensor/common.py:33: AttributeError
```
### pyhf Version
```console
pyhf, version 0.7.1.dev116
```
### Code of Conduct
- [X] I agree to follow the Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pyhf/events.py`
Content:
```
1 from __future__ import annotations
2
3 import weakref
4 from functools import wraps
5 from typing import Callable, TypeVar, cast
6
7 # See https://mypy.readthedocs.io/en/stable/generics.html#declaring-decorators
8 TCallable = TypeVar("TCallable", bound=Callable)
9
10
11 __events = {}
12 __disabled_events = set()
13
14 __all__ = ["Callables", "disable", "enable", "noop", "register", "subscribe", "trigger"]
15
16
17 def __dir__():
18 return __all__
19
20
21 def noop(*args, **kwargs):
22 pass
23
24
25 class Callables:
26 def __init__(self):
27 self._callbacks = []
28
29 @property
30 def callbacks(self):
31 """
32 Get the current list of living callbacks.
33 """
34 self._flush()
35 return self._callbacks
36
37 def append(self, callback):
38 """
39 Append a new bound method as a callback to the list of callables.
40 """
41 try:
42 # methods
43 callback_ref = weakref.ref(callback.__func__), weakref.ref(
44 callback.__self__
45 )
46 except AttributeError:
47 callback_ref = weakref.ref(callback), None
48 self._callbacks.append(callback_ref)
49
50 def _flush(self):
51 """
52 Flush the list of callbacks with those who are weakly-referencing deleted objects.
53
54 Note: must interact with the self._callbacks directly, and not
55 self.callbacks, to avoid infinite recursion.
56 """
57 _callbacks = []
58 for func, arg in self._callbacks:
59 if arg is not None:
60 arg_ref = arg()
61 if arg_ref is None:
62 continue
63 _callbacks.append((func, arg))
64 self._callbacks = _callbacks
65
66 def __call__(self, *args, **kwargs):
67 for func, arg in self.callbacks:
68 # weakref: needs to be de-ref'd first before calling
69 if arg is not None:
70 func()(arg(), *args, **kwargs)
71 else:
72 func()(*args, **kwargs)
73
74 def __iter__(self):
75 return iter(self.callbacks)
76
77 def __getitem__(self, index):
78 return self.callbacks[index]
79
80 def __len__(self):
81 return len(self.callbacks)
82
83 def __repr__(self):
84 return f"Callables({self.callbacks})"
85
86
87 def subscribe(event: str):
88 """
89 Subscribe a function or object method as a callback to an event.
90
91 .. note::
92
93 This is meant to be used as a decorator.
94
95 Args:
96 event (:obj:`str`): The name of the event to subscribe to.
97
98 Returns:
99 :obj:`function`: Decorated function.
100
101 Example:
102 >>> import pyhf
103 >>> @pyhf.events.subscribe("myevent")
104 ... def test(a, b):
105 ... print(a + b)
106 ...
107 >>> pyhf.events.trigger("myevent")(1, 2)
108 3
109
110 """
111
112 global __events
113
114 def __decorator(func: TCallable) -> TCallable:
115 __events.setdefault(event, Callables()).append(func)
116 return func
117
118 return cast(TCallable, __decorator)
119
120
121 def register(event: str) -> Callable[[TCallable], TCallable]:
122 """
123 Register a function or object method to trigger an event. This creates two
124 events: ``{event_name}::before`` and ``{event_name}::after``.
125
126 .. note::
127
128 This is meant to be used as a decorator.
129
130 Args:
131 event (:obj:`str`): The name of the event to subscribe to.
132
133 Returns:
134 :obj:`function`: Decorated function.
135
136 Example:
137 >>> import pyhf
138 >>> @pyhf.events.register("test_func")
139 ... def test(a, b):
140 ... print(a + b)
141 ...
142 >>> @pyhf.events.subscribe("test_func::before")
143 ... def precall():
144 ... print("before call")
145 ...
146 >>> @pyhf.events.subscribe("test_func::after")
147 ... def postcall():
148 ... print("after call")
149 ...
150 >>> test(1, 2)
151 before call
152 3
153 after call
154
155 """
156
157 def _register(func: TCallable) -> TCallable:
158 @wraps(func)
159 def register_wrapper(*args, **kwargs): # type: ignore
160 trigger(f"{event:s}::before")()
161 result = func(*args, **kwargs)
162 trigger(f"{event:s}::after")()
163 return result
164
165 return register_wrapper
166
167 return cast(TCallable, _register)
168
169
170 def trigger(event: str) -> Callables:
171 """
172 Trigger an event if not disabled.
173 """
174 global __events, __disabled_events, noop
175 is_noop = bool(event in __disabled_events or event not in __events)
176 return noop if is_noop else __events.get(event)
177
178
179 def disable(event: str):
180 """
181 Disable an event from firing.
182 """
183 global __disabled_events
184 __disabled_events.add(event)
185
186
187 def enable(event: str):
188 """
189 Enable an event to be fired if disabled.
190 """
191 global __disabled_events
192 __disabled_events.remove(event)
193
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pyhf/events.py b/src/pyhf/events.py
--- a/src/pyhf/events.py
+++ b/src/pyhf/events.py
@@ -64,12 +64,20 @@
self._callbacks = _callbacks
def __call__(self, *args, **kwargs):
- for func, arg in self.callbacks:
+ for func, arg in self._callbacks:
# weakref: needs to be de-ref'd first before calling
if arg is not None:
- func()(arg(), *args, **kwargs)
+ arg_ref = arg()
+ if arg_ref is not None:
+ func()(arg_ref, *args, **kwargs)
else:
func()(*args, **kwargs)
+ # Flush after calling all the callbacks, not before, as callbacks in the
+ # beginning of the iteration might cause new dead arg weakrefs in
+ # callbacks that are iterated over later.
+ # Checking for dead weakrefs in each iteration and flushing at the end
+ # avoids redundant dead weakref checking in subsequent calls.
+ self._flush()
def __iter__(self):
return iter(self.callbacks)
|
{"golden_diff": "diff --git a/src/pyhf/events.py b/src/pyhf/events.py\n--- a/src/pyhf/events.py\n+++ b/src/pyhf/events.py\n@@ -64,12 +64,20 @@\n self._callbacks = _callbacks\n \n def __call__(self, *args, **kwargs):\n- for func, arg in self.callbacks:\n+ for func, arg in self._callbacks:\n # weakref: needs to be de-ref'd first before calling\n if arg is not None:\n- func()(arg(), *args, **kwargs)\n+ arg_ref = arg()\n+ if arg_ref is not None:\n+ func()(arg_ref, *args, **kwargs)\n else:\n func()(*args, **kwargs)\n+ # Flush after calling all the callbacks, not before, as callbacks in the\n+ # beginning of the iteration might cause new dead arg weakrefs in\n+ # callbacks that are iterated over later.\n+ # Checking for dead weakrefs in each iteration and flushing at the end\n+ # avoids redundant dead weakref checking in subsequent calls.\n+ self._flush()\n \n def __iter__(self):\n return iter(self.callbacks)\n", "issue": "weakrefs to dead objects occuring when changing backends in pyhf benchmark\n### Summary\n\nI try to perform a benchmark of `pyhf` using `pytest-benchmark` quite similarly to the benchmark in the `tests/benchmarks` directory.\r\nA short example of such a simple benchmark is given below. To reproduce this bug, the python code needs to be saved in a file of the format `test_<name>.py` and executed via `pytest test_<name>.py`.\r\nThe bug occurs only sometimes when the backend is changed between different benchmarking cases. Since the occurence of the bug includes some amount of randomness, it may happen that it doesn't occur on the first try but that the benchmark must be executed multiple times. The full benchmark takes around 1 min on my machine.\r\n\r\nThe suspected origin of this bug is that every time the backend is changed, an event called `tensorlib_changed` is triggered that in turn leads to the execution of some `_precompute()` functions on different objects (e.g. a `TensorViewer` object as in the error message). The problem occurs, when the referenced object no longer exists, as all references to it have been removed. The reference used to call the function does not change this as it is a weakref.\r\n\r\nA proposed solution can be found in PR #2310. \n\n### OS / Environment\n\n```console\nPRETTY_NAME=\"Ubuntu 22.04.3 LTS\"\r\nNAME=\"Ubuntu\"\r\nVERSION_ID=\"22.04\"\r\nVERSION=\"22.04.3 LTS (Jammy Jellyfish)\"\r\nVERSION_CODENAME=jammy\r\nID=ubuntu\r\nID_LIKE=debian\r\nHOME_URL=\"https://www.ubuntu.com/\"\r\nSUPPORT_URL=\"https://help.ubuntu.com/\"\r\nBUG_REPORT_URL=\"https://bugs.launchpad.net/ubuntu/\"\r\nPRIVACY_POLICY_URL=\"https://www.ubuntu.com/legal/terms-and-policies/privacy-policy\"\r\nUBUNTU_CODENAME=jammy\n```\n\n\n### Steps to Reproduce\n\n<!--- Paste your minimal failing Python example code between the quotes below -->\r\n```python (paste below)\r\n# content of test_benchmark.py\r\nimport pytest\r\nimport pyhf\r\n\r\[email protected](\r\n scope='function',\r\n params=[\r\n (pyhf.tensor.numpy_backend(), None),\r\n (pyhf.tensor.pytorch_backend(), None),\r\n (pyhf.tensor.pytorch_backend(precision='64b'), None),\r\n (pyhf.tensor.tensorflow_backend(), None),\r\n (pyhf.tensor.jax_backend(), None),\r\n (\r\n pyhf.tensor.numpy_backend(poisson_from_normal=True),\r\n pyhf.optimize.minuit_optimizer(),\r\n ),\r\n ],\r\n ids=['numpy', 'pytorch', 'pytorch64',\r\n 'tensorflow',\r\n 'jax', 'numpy_minuit'],\r\n)\r\ndef backend(request):\r\n # get the ids of all the backends\r\n param_ids = request._fixturedef.ids\r\n # the backend we're using: numpy, tensorflow, etc...\r\n param_id = param_ids[request.param_index]\r\n # name of function being called (with params), the original name is .originalname\r\n func_name = request._pyfuncitem.name\r\n\r\n pyhf.set_backend(*request.param)\r\n\r\n yield request.param\r\n\r\ndef hypotest(data, pdf):\r\n return pyhf.infer.hypotest(1.0, data, pdf, test_stat=\"qtilde\", return_expected=True)\r\n\r\nbins = [1, 2, 4, 8, 16, 32]\r\nbin_ids = [f'{n_bins}_bins' for n_bins in bins]\r\n\r\[email protected]('n_bins', bins, ids=bin_ids)\r\ndef test_hypotest(benchmark, backend, n_bins):\r\n model = pyhf.simplemodels.uncorrelated_background(signal=[12.0]*n_bins, bkg=[50.0]*n_bins, bkg_uncertainty=[5.0]*n_bins)\r\n data = [51.0]*n_bins + model.config.auxdata\r\n\r\n assert benchmark(hypotest, data, model)\r\n```\r\n\r\n<!--- ...or if you have a failing CLI command paste it between the quotes below -->\r\n```console (paste below)\r\npytest test_benchmark.py\r\n```\r\n\n\n### File Upload (optional)\n\n_No response_\n\n### Expected Results\n\nThe expected behavior is to output the benchmarking results for all considered cases as it can be observed when executing `pytest` in `pyhf/tests/benchmarks/`.\r\nThis output should not show any \"test failures\" as no normal tests are performed but only functions that run without an error, when called outside of the benchmark.\n\n### Actual Results\n\n```console\n_________________________ ERROR at setup of test_hypotest[jax-1_bins] _________________________\r\n\r\nrequest = <SubRequest 'backend' for <Function test_hypotest[jax-1_bins]>>\r\n\r\n @pytest.fixture(\r\n scope='function',\r\n params=[\r\n (pyhf.tensor.numpy_backend(), None),\r\n (pyhf.tensor.pytorch_backend(), None),\r\n (pyhf.tensor.pytorch_backend(precision='64b'), None),\r\n (pyhf.tensor.tensorflow_backend(), None),\r\n (pyhf.tensor.jax_backend(), None),\r\n (\r\n pyhf.tensor.numpy_backend(poisson_from_normal=True),\r\n pyhf.optimize.minuit_optimizer(),\r\n ),\r\n ],\r\n ids=['numpy', 'pytorch', 'pytorch64',\r\n 'tensorflow',\r\n 'jax', 'numpy_minuit'],\r\n )\r\n def backend(request):\r\n # get the ids of all the backends\r\n param_ids = request._fixturedef.ids\r\n # the backend we're using: numpy, tensorflow, etc...\r\n param_id = param_ids[request.param_index]\r\n # name of function being called (with params), the original name is .originalname\r\n func_name = request._pyfuncitem.name\r\n \r\n> pyhf.set_backend(*request.param)\r\n\r\ntest_hypo_pyhf.py:29: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n../../pyhfDev/pyhf/src/pyhf/events.py:161: in register_wrapper\r\n result = func(*args, **kwargs)\r\n../../pyhfDev/pyhf/src/pyhf/tensor/manager.py:193: in set_backend\r\n events.trigger(\"tensorlib_changed\")()\r\n../../pyhfDev/pyhf/src/pyhf/events.py:70: in __call__\r\n func()(arg(), *args, **kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = None\r\n\r\n def _precompute(self):\r\n tensorlib, _ = get_backend()\r\n> self.sorted_indices = tensorlib.astensor(self._sorted_indices, dtype='int')\r\nE AttributeError: 'NoneType' object has no attribute '_sorted_indices'\r\n\r\n../../pyhfDev/pyhf/src/pyhf/tensor/common.py:33: AttributeError\n```\n\n\n### pyhf Version\n\n```console\npyhf, version 0.7.1.dev116\n```\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Code of Conduct\n", "before_files": [{"content": "from __future__ import annotations\n\nimport weakref\nfrom functools import wraps\nfrom typing import Callable, TypeVar, cast\n\n# See https://mypy.readthedocs.io/en/stable/generics.html#declaring-decorators\nTCallable = TypeVar(\"TCallable\", bound=Callable)\n\n\n__events = {}\n__disabled_events = set()\n\n__all__ = [\"Callables\", \"disable\", \"enable\", \"noop\", \"register\", \"subscribe\", \"trigger\"]\n\n\ndef __dir__():\n return __all__\n\n\ndef noop(*args, **kwargs):\n pass\n\n\nclass Callables:\n def __init__(self):\n self._callbacks = []\n\n @property\n def callbacks(self):\n \"\"\"\n Get the current list of living callbacks.\n \"\"\"\n self._flush()\n return self._callbacks\n\n def append(self, callback):\n \"\"\"\n Append a new bound method as a callback to the list of callables.\n \"\"\"\n try:\n # methods\n callback_ref = weakref.ref(callback.__func__), weakref.ref(\n callback.__self__\n )\n except AttributeError:\n callback_ref = weakref.ref(callback), None\n self._callbacks.append(callback_ref)\n\n def _flush(self):\n \"\"\"\n Flush the list of callbacks with those who are weakly-referencing deleted objects.\n\n Note: must interact with the self._callbacks directly, and not\n self.callbacks, to avoid infinite recursion.\n \"\"\"\n _callbacks = []\n for func, arg in self._callbacks:\n if arg is not None:\n arg_ref = arg()\n if arg_ref is None:\n continue\n _callbacks.append((func, arg))\n self._callbacks = _callbacks\n\n def __call__(self, *args, **kwargs):\n for func, arg in self.callbacks:\n # weakref: needs to be de-ref'd first before calling\n if arg is not None:\n func()(arg(), *args, **kwargs)\n else:\n func()(*args, **kwargs)\n\n def __iter__(self):\n return iter(self.callbacks)\n\n def __getitem__(self, index):\n return self.callbacks[index]\n\n def __len__(self):\n return len(self.callbacks)\n\n def __repr__(self):\n return f\"Callables({self.callbacks})\"\n\n\ndef subscribe(event: str):\n \"\"\"\n Subscribe a function or object method as a callback to an event.\n\n .. note::\n\n This is meant to be used as a decorator.\n\n Args:\n event (:obj:`str`): The name of the event to subscribe to.\n\n Returns:\n :obj:`function`: Decorated function.\n\n Example:\n >>> import pyhf\n >>> @pyhf.events.subscribe(\"myevent\")\n ... def test(a, b):\n ... print(a + b)\n ...\n >>> pyhf.events.trigger(\"myevent\")(1, 2)\n 3\n\n \"\"\"\n\n global __events\n\n def __decorator(func: TCallable) -> TCallable:\n __events.setdefault(event, Callables()).append(func)\n return func\n\n return cast(TCallable, __decorator)\n\n\ndef register(event: str) -> Callable[[TCallable], TCallable]:\n \"\"\"\n Register a function or object method to trigger an event. This creates two\n events: ``{event_name}::before`` and ``{event_name}::after``.\n\n .. note::\n\n This is meant to be used as a decorator.\n\n Args:\n event (:obj:`str`): The name of the event to subscribe to.\n\n Returns:\n :obj:`function`: Decorated function.\n\n Example:\n >>> import pyhf\n >>> @pyhf.events.register(\"test_func\")\n ... def test(a, b):\n ... print(a + b)\n ...\n >>> @pyhf.events.subscribe(\"test_func::before\")\n ... def precall():\n ... print(\"before call\")\n ...\n >>> @pyhf.events.subscribe(\"test_func::after\")\n ... def postcall():\n ... print(\"after call\")\n ...\n >>> test(1, 2)\n before call\n 3\n after call\n\n \"\"\"\n\n def _register(func: TCallable) -> TCallable:\n @wraps(func)\n def register_wrapper(*args, **kwargs): # type: ignore\n trigger(f\"{event:s}::before\")()\n result = func(*args, **kwargs)\n trigger(f\"{event:s}::after\")()\n return result\n\n return register_wrapper\n\n return cast(TCallable, _register)\n\n\ndef trigger(event: str) -> Callables:\n \"\"\"\n Trigger an event if not disabled.\n \"\"\"\n global __events, __disabled_events, noop\n is_noop = bool(event in __disabled_events or event not in __events)\n return noop if is_noop else __events.get(event)\n\n\ndef disable(event: str):\n \"\"\"\n Disable an event from firing.\n \"\"\"\n global __disabled_events\n __disabled_events.add(event)\n\n\ndef enable(event: str):\n \"\"\"\n Enable an event to be fired if disabled.\n \"\"\"\n global __disabled_events\n __disabled_events.remove(event)\n", "path": "src/pyhf/events.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport weakref\nfrom functools import wraps\nfrom typing import Callable, TypeVar, cast\n\n# See https://mypy.readthedocs.io/en/stable/generics.html#declaring-decorators\nTCallable = TypeVar(\"TCallable\", bound=Callable)\n\n\n__events = {}\n__disabled_events = set()\n\n__all__ = [\"Callables\", \"disable\", \"enable\", \"noop\", \"register\", \"subscribe\", \"trigger\"]\n\n\ndef __dir__():\n return __all__\n\n\ndef noop(*args, **kwargs):\n pass\n\n\nclass Callables:\n def __init__(self):\n self._callbacks = []\n\n @property\n def callbacks(self):\n \"\"\"\n Get the current list of living callbacks.\n \"\"\"\n self._flush()\n return self._callbacks\n\n def append(self, callback):\n \"\"\"\n Append a new bound method as a callback to the list of callables.\n \"\"\"\n try:\n # methods\n callback_ref = weakref.ref(callback.__func__), weakref.ref(\n callback.__self__\n )\n except AttributeError:\n callback_ref = weakref.ref(callback), None\n self._callbacks.append(callback_ref)\n\n def _flush(self):\n \"\"\"\n Flush the list of callbacks with those who are weakly-referencing deleted objects.\n\n Note: must interact with the self._callbacks directly, and not\n self.callbacks, to avoid infinite recursion.\n \"\"\"\n _callbacks = []\n for func, arg in self._callbacks:\n if arg is not None:\n arg_ref = arg()\n if arg_ref is None:\n continue\n _callbacks.append((func, arg))\n self._callbacks = _callbacks\n\n def __call__(self, *args, **kwargs):\n for func, arg in self._callbacks:\n # weakref: needs to be de-ref'd first before calling\n if arg is not None:\n arg_ref = arg()\n if arg_ref is not None:\n func()(arg_ref, *args, **kwargs)\n else:\n func()(*args, **kwargs)\n # Flush after calling all the callbacks, not before, as callbacks in the\n # beginning of the iteration might cause new dead arg weakrefs in\n # callbacks that are iterated over later.\n # Checking for dead weakrefs in each iteration and flushing at the end\n # avoids redundant dead weakref checking in subsequent calls.\n self._flush()\n\n def __iter__(self):\n return iter(self.callbacks)\n\n def __getitem__(self, index):\n return self.callbacks[index]\n\n def __len__(self):\n return len(self.callbacks)\n\n def __repr__(self):\n return f\"Callables({self.callbacks})\"\n\n\ndef subscribe(event: str):\n \"\"\"\n Subscribe a function or object method as a callback to an event.\n\n .. note::\n\n This is meant to be used as a decorator.\n\n Args:\n event (:obj:`str`): The name of the event to subscribe to.\n\n Returns:\n :obj:`function`: Decorated function.\n\n Example:\n >>> import pyhf\n >>> @pyhf.events.subscribe(\"myevent\")\n ... def test(a, b):\n ... print(a + b)\n ...\n >>> pyhf.events.trigger(\"myevent\")(1, 2)\n 3\n\n \"\"\"\n\n global __events\n\n def __decorator(func: TCallable) -> TCallable:\n __events.setdefault(event, Callables()).append(func)\n return func\n\n return cast(TCallable, __decorator)\n\n\ndef register(event: str) -> Callable[[TCallable], TCallable]:\n \"\"\"\n Register a function or object method to trigger an event. This creates two\n events: ``{event_name}::before`` and ``{event_name}::after``.\n\n .. note::\n\n This is meant to be used as a decorator.\n\n Args:\n event (:obj:`str`): The name of the event to subscribe to.\n\n Returns:\n :obj:`function`: Decorated function.\n\n Example:\n >>> import pyhf\n >>> @pyhf.events.register(\"test_func\")\n ... def test(a, b):\n ... print(a + b)\n ...\n >>> @pyhf.events.subscribe(\"test_func::before\")\n ... def precall():\n ... print(\"before call\")\n ...\n >>> @pyhf.events.subscribe(\"test_func::after\")\n ... def postcall():\n ... print(\"after call\")\n ...\n >>> test(1, 2)\n before call\n 3\n after call\n\n \"\"\"\n\n def _register(func: TCallable) -> TCallable:\n @wraps(func)\n def register_wrapper(*args, **kwargs): # type: ignore\n trigger(f\"{event:s}::before\")()\n result = func(*args, **kwargs)\n trigger(f\"{event:s}::after\")()\n return result\n\n return register_wrapper\n\n return cast(TCallable, _register)\n\n\ndef trigger(event: str) -> Callables:\n \"\"\"\n Trigger an event if not disabled.\n \"\"\"\n global __events, __disabled_events, noop\n is_noop = bool(event in __disabled_events or event not in __events)\n return noop if is_noop else __events.get(event)\n\n\ndef disable(event: str):\n \"\"\"\n Disable an event from firing.\n \"\"\"\n global __disabled_events\n __disabled_events.add(event)\n\n\ndef enable(event: str):\n \"\"\"\n Enable an event to be fired if disabled.\n \"\"\"\n global __disabled_events\n __disabled_events.remove(event)\n", "path": "src/pyhf/events.py"}]}
| 3,430 | 258 |
gh_patches_debug_30232
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-5315
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `colossalai/kernel/triton/rms_layernorm.py`
Content:
```
1 import torch
2
3 try:
4 import triton
5 import triton.language as tl
6
7 HAS_TRITON = True
8 except ImportError:
9 HAS_TRITON = False
10 print("please install triton from https://github.com/openai/triton")
11
12 if HAS_TRITON:
13 # CREDITS: These functions are adapted from the Triton tutorial
14 # https://triton-lang.org/main/getting-started/tutorials/05-layer-norm.html
15
16 @triton.jit
17 def _rmsnorm_kernel(
18 X, # pointer to the input
19 Y, # pointer to the output
20 W, # pointer to the weights
21 stride, # how much to increase the pointer when moving by 1 row
22 N, # number of columns in X
23 eps, # epsilon to avoid division by zero
24 BLOCK_SIZE: tl.constexpr,
25 ):
26
27 # This triton kernel implements Root Mean Square Layer Norm (RMSNorm).
28
29 # Map the program id to the row of X and Y it should compute.
30 row = tl.program_id(0)
31 Y += row * stride
32 X += row * stride
33 # Compute variance
34 _var = tl.zeros([BLOCK_SIZE], dtype=tl.float32)
35 for off in range(0, N, BLOCK_SIZE):
36 cols = off + tl.arange(0, BLOCK_SIZE)
37 x = tl.load(X + cols, mask=cols < N, other=0.0).to(tl.float32)
38 x = tl.where(cols < N, x, 0.0)
39 _var += x * x
40 var = tl.sum(_var, axis=0) / N
41 rstd = 1 / tl.sqrt(var + eps)
42 # Normalize and apply linear transformation
43 for off in range(0, N, BLOCK_SIZE):
44 cols = off + tl.arange(0, BLOCK_SIZE)
45 mask = cols < N
46 w = tl.load(W + cols, mask=mask)
47 x = tl.load(X + cols, mask=mask, other=0.0).to(tl.float32)
48 x_hat = x * rstd
49 y = x_hat * w
50 # Write output
51 tl.store(Y + cols, y.to(tl.float16), mask=mask)
52
53 @torch.no_grad()
54 def rms_layernorm(x, weight, eps):
55 # allocate output
56 y = torch.empty_like(x)
57 # reshape input data into 2D tensor
58 x_arg = x.reshape(-1, x.shape[-1])
59 M, N = x_arg.shape
60 # Less than 64KB per feature: enqueue fused kernel
61 MAX_FUSED_SIZE = 65536 // x.element_size()
62 BLOCK_SIZE = min(MAX_FUSED_SIZE, triton.next_power_of_2(N))
63 if N > BLOCK_SIZE:
64 raise RuntimeError("This layer norm doesn't support feature dim >= 64KB.")
65 # heuristics for number of warps
66 num_warps = min(max(BLOCK_SIZE // 256, 1), 8)
67 # enqueue kernel
68 _rmsnorm_kernel[(M,)](
69 x_arg, y, weight, x_arg.stride(0), N, eps, BLOCK_SIZE=BLOCK_SIZE, num_warps=num_warps
70 )
71 return y
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/colossalai/kernel/triton/rms_layernorm.py b/colossalai/kernel/triton/rms_layernorm.py
--- a/colossalai/kernel/triton/rms_layernorm.py
+++ b/colossalai/kernel/triton/rms_layernorm.py
@@ -23,7 +23,6 @@
eps, # epsilon to avoid division by zero
BLOCK_SIZE: tl.constexpr,
):
-
# This triton kernel implements Root Mean Square Layer Norm (RMSNorm).
# Map the program id to the row of X and Y it should compute.
@@ -54,18 +53,19 @@
def rms_layernorm(x, weight, eps):
# allocate output
y = torch.empty_like(x)
- # reshape input data into 2D tensor
+ # reshape input data into 2D tensor, (total token, hidden_size)
x_arg = x.reshape(-1, x.shape[-1])
M, N = x_arg.shape
# Less than 64KB per feature: enqueue fused kernel
MAX_FUSED_SIZE = 65536 // x.element_size()
+
BLOCK_SIZE = min(MAX_FUSED_SIZE, triton.next_power_of_2(N))
- if N > BLOCK_SIZE:
+ if N > MAX_FUSED_SIZE:
raise RuntimeError("This layer norm doesn't support feature dim >= 64KB.")
+
# heuristics for number of warps
- num_warps = min(max(BLOCK_SIZE // 256, 1), 8)
+ num_warps = min(max(triton.next_power_of_2(N) // 256, 8), 32)
+
# enqueue kernel
- _rmsnorm_kernel[(M,)](
- x_arg, y, weight, x_arg.stride(0), N, eps, BLOCK_SIZE=BLOCK_SIZE, num_warps=num_warps
- )
+ _rmsnorm_kernel[(M,)](x_arg, y, weight, x_arg.stride(0), N, eps, BLOCK_SIZE=BLOCK_SIZE, num_warps=num_warps)
return y
|
{"golden_diff": "diff --git a/colossalai/kernel/triton/rms_layernorm.py b/colossalai/kernel/triton/rms_layernorm.py\n--- a/colossalai/kernel/triton/rms_layernorm.py\n+++ b/colossalai/kernel/triton/rms_layernorm.py\n@@ -23,7 +23,6 @@\n eps, # epsilon to avoid division by zero\n BLOCK_SIZE: tl.constexpr,\n ):\n-\n # This triton kernel implements Root Mean Square Layer Norm (RMSNorm).\n \n # Map the program id to the row of X and Y it should compute.\n@@ -54,18 +53,19 @@\n def rms_layernorm(x, weight, eps):\n # allocate output\n y = torch.empty_like(x)\n- # reshape input data into 2D tensor\n+ # reshape input data into 2D tensor, (total token, hidden_size)\n x_arg = x.reshape(-1, x.shape[-1])\n M, N = x_arg.shape\n # Less than 64KB per feature: enqueue fused kernel\n MAX_FUSED_SIZE = 65536 // x.element_size()\n+\n BLOCK_SIZE = min(MAX_FUSED_SIZE, triton.next_power_of_2(N))\n- if N > BLOCK_SIZE:\n+ if N > MAX_FUSED_SIZE:\n raise RuntimeError(\"This layer norm doesn't support feature dim >= 64KB.\")\n+\n # heuristics for number of warps\n- num_warps = min(max(BLOCK_SIZE // 256, 1), 8)\n+ num_warps = min(max(triton.next_power_of_2(N) // 256, 8), 32)\n+\n # enqueue kernel\n- _rmsnorm_kernel[(M,)](\n- x_arg, y, weight, x_arg.stride(0), N, eps, BLOCK_SIZE=BLOCK_SIZE, num_warps=num_warps\n- )\n+ _rmsnorm_kernel[(M,)](x_arg, y, weight, x_arg.stride(0), N, eps, BLOCK_SIZE=BLOCK_SIZE, num_warps=num_warps)\n return y\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import torch\n\ntry:\n import triton\n import triton.language as tl\n\n HAS_TRITON = True\nexcept ImportError:\n HAS_TRITON = False\n print(\"please install triton from https://github.com/openai/triton\")\n\nif HAS_TRITON:\n # CREDITS: These functions are adapted from the Triton tutorial\n # https://triton-lang.org/main/getting-started/tutorials/05-layer-norm.html\n\n @triton.jit\n def _rmsnorm_kernel(\n X, # pointer to the input\n Y, # pointer to the output\n W, # pointer to the weights\n stride, # how much to increase the pointer when moving by 1 row\n N, # number of columns in X\n eps, # epsilon to avoid division by zero\n BLOCK_SIZE: tl.constexpr,\n ):\n\n # This triton kernel implements Root Mean Square Layer Norm (RMSNorm).\n\n # Map the program id to the row of X and Y it should compute.\n row = tl.program_id(0)\n Y += row * stride\n X += row * stride\n # Compute variance\n _var = tl.zeros([BLOCK_SIZE], dtype=tl.float32)\n for off in range(0, N, BLOCK_SIZE):\n cols = off + tl.arange(0, BLOCK_SIZE)\n x = tl.load(X + cols, mask=cols < N, other=0.0).to(tl.float32)\n x = tl.where(cols < N, x, 0.0)\n _var += x * x\n var = tl.sum(_var, axis=0) / N\n rstd = 1 / tl.sqrt(var + eps)\n # Normalize and apply linear transformation\n for off in range(0, N, BLOCK_SIZE):\n cols = off + tl.arange(0, BLOCK_SIZE)\n mask = cols < N\n w = tl.load(W + cols, mask=mask)\n x = tl.load(X + cols, mask=mask, other=0.0).to(tl.float32)\n x_hat = x * rstd\n y = x_hat * w\n # Write output\n tl.store(Y + cols, y.to(tl.float16), mask=mask)\n\n @torch.no_grad()\n def rms_layernorm(x, weight, eps):\n # allocate output\n y = torch.empty_like(x)\n # reshape input data into 2D tensor\n x_arg = x.reshape(-1, x.shape[-1])\n M, N = x_arg.shape\n # Less than 64KB per feature: enqueue fused kernel\n MAX_FUSED_SIZE = 65536 // x.element_size()\n BLOCK_SIZE = min(MAX_FUSED_SIZE, triton.next_power_of_2(N))\n if N > BLOCK_SIZE:\n raise RuntimeError(\"This layer norm doesn't support feature dim >= 64KB.\")\n # heuristics for number of warps\n num_warps = min(max(BLOCK_SIZE // 256, 1), 8)\n # enqueue kernel\n _rmsnorm_kernel[(M,)](\n x_arg, y, weight, x_arg.stride(0), N, eps, BLOCK_SIZE=BLOCK_SIZE, num_warps=num_warps\n )\n return y\n", "path": "colossalai/kernel/triton/rms_layernorm.py"}], "after_files": [{"content": "import torch\n\ntry:\n import triton\n import triton.language as tl\n\n HAS_TRITON = True\nexcept ImportError:\n HAS_TRITON = False\n print(\"please install triton from https://github.com/openai/triton\")\n\nif HAS_TRITON:\n # CREDITS: These functions are adapted from the Triton tutorial\n # https://triton-lang.org/main/getting-started/tutorials/05-layer-norm.html\n\n @triton.jit\n def _rmsnorm_kernel(\n X, # pointer to the input\n Y, # pointer to the output\n W, # pointer to the weights\n stride, # how much to increase the pointer when moving by 1 row\n N, # number of columns in X\n eps, # epsilon to avoid division by zero\n BLOCK_SIZE: tl.constexpr,\n ):\n # This triton kernel implements Root Mean Square Layer Norm (RMSNorm).\n\n # Map the program id to the row of X and Y it should compute.\n row = tl.program_id(0)\n Y += row * stride\n X += row * stride\n # Compute variance\n _var = tl.zeros([BLOCK_SIZE], dtype=tl.float32)\n for off in range(0, N, BLOCK_SIZE):\n cols = off + tl.arange(0, BLOCK_SIZE)\n x = tl.load(X + cols, mask=cols < N, other=0.0).to(tl.float32)\n x = tl.where(cols < N, x, 0.0)\n _var += x * x\n var = tl.sum(_var, axis=0) / N\n rstd = 1 / tl.sqrt(var + eps)\n # Normalize and apply linear transformation\n for off in range(0, N, BLOCK_SIZE):\n cols = off + tl.arange(0, BLOCK_SIZE)\n mask = cols < N\n w = tl.load(W + cols, mask=mask)\n x = tl.load(X + cols, mask=mask, other=0.0).to(tl.float32)\n x_hat = x * rstd\n y = x_hat * w\n # Write output\n tl.store(Y + cols, y.to(tl.float16), mask=mask)\n\n @torch.no_grad()\n def rms_layernorm(x, weight, eps):\n # allocate output\n y = torch.empty_like(x)\n # reshape input data into 2D tensor, (total token, hidden_size)\n x_arg = x.reshape(-1, x.shape[-1])\n M, N = x_arg.shape\n # Less than 64KB per feature: enqueue fused kernel\n MAX_FUSED_SIZE = 65536 // x.element_size()\n\n BLOCK_SIZE = min(MAX_FUSED_SIZE, triton.next_power_of_2(N))\n if N > MAX_FUSED_SIZE:\n raise RuntimeError(\"This layer norm doesn't support feature dim >= 64KB.\")\n\n # heuristics for number of warps\n num_warps = min(max(triton.next_power_of_2(N) // 256, 8), 32)\n\n # enqueue kernel\n _rmsnorm_kernel[(M,)](x_arg, y, weight, x_arg.stride(0), N, eps, BLOCK_SIZE=BLOCK_SIZE, num_warps=num_warps)\n return y\n", "path": "colossalai/kernel/triton/rms_layernorm.py"}]}
| 1,160 | 482 |
gh_patches_debug_25017
|
rasdani/github-patches
|
git_diff
|
pydantic__pydantic-1272
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
@validate_arguments on instance methods
# Bug
Hello, I tried using the new `@validate_arguments` decorator and it doesn't work when used on instance methods.
I didn't see it on the ToDo in #1205 and it seems like an oversight, maybe due to the special treatment of `self`.
Output of `python -c "import pydantic.utils; print(pydantic.utils.version_info())"`:
```
$ python3 -c "import pydantic.utils; print(pydantic.utils.version_info())"
pydantic version: 1.4a1
pydantic compiled: False
install path: /home/[user]/git/pydantic/pydantic
python version: 3.7.5 (default, Nov 20 2019, 09:21:52) [GCC 9.2.1 20191008]
platform: Linux-5.3.0-29-generic-x86_64-with-Ubuntu-19.10-eoan
optional deps. installed: []
```
```py
from pydantic import validate_arguments
class SomeObject:
@validate_arguments
def some_function(self, i: int):
print(type(self), self)
print(type(i), i)
o = SomeObject()
o.some_function(1) # doesn't work, instead of `i` `self` becomes 1
#pydantic.error_wrappers.ValidationError: 1 validation error for SomeFunction
#i
# field required (type=value_error.missing)
o.some_function(o, 1) # works, but not the way instance methods are meant to be used
#<class '__main__.SomeObject'> <__main__.SomeObject object at 0x7f32911af3d0>
#<class 'int'> 1
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pydantic/decorator.py`
Content:
```
1 from functools import update_wrapper
2 from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Tuple, TypeVar, cast, get_type_hints
3
4 from . import validator
5 from .errors import ConfigError
6 from .main import BaseModel, Extra, create_model
7 from .utils import to_camel
8
9 __all__ = ('validate_arguments',)
10
11 if TYPE_CHECKING:
12 from .typing import AnyCallable
13
14 Callable = TypeVar('Callable', bound=AnyCallable)
15
16
17 def validate_arguments(function: 'Callable') -> 'Callable':
18 """
19 Decorator to validate the arguments passed to a function.
20 """
21 vd = ValidatedFunction(function)
22 vd = update_wrapper(vd, function) # type: ignore
23 return cast('Callable', vd)
24
25
26 ALT_V_ARGS = 'v__args'
27 ALT_V_KWARGS = 'v__kwargs'
28 V_POSITIONAL_ONLY_NAME = 'v__positional_only'
29
30
31 class ValidatedFunction:
32 def __init__(self, function: 'Callable'):
33 from inspect import signature, Parameter
34
35 parameters: Mapping[str, Parameter] = signature(function).parameters
36
37 if parameters.keys() & {ALT_V_ARGS, ALT_V_KWARGS, V_POSITIONAL_ONLY_NAME}:
38 raise ConfigError(
39 f'"{ALT_V_ARGS}", "{ALT_V_KWARGS}" and "{V_POSITIONAL_ONLY_NAME}" are not permitted as argument '
40 f'names when using the "{validate_arguments.__name__}" decorator'
41 )
42
43 self.raw_function = function
44 self.arg_mapping: Dict[int, str] = {}
45 self.positional_only_args = set()
46 self.v_args_name = 'args'
47 self.v_kwargs_name = 'kwargs'
48
49 type_hints = get_type_hints(function)
50 takes_args = False
51 takes_kwargs = False
52 fields: Dict[str, Tuple[Any, Any]] = {}
53 for i, (name, p) in enumerate(parameters.items()):
54 if p.annotation == p.empty:
55 annotation = Any
56 else:
57 annotation = type_hints[name]
58
59 default = ... if p.default == p.empty else p.default
60 if p.kind == Parameter.POSITIONAL_ONLY:
61 self.arg_mapping[i] = name
62 fields[name] = annotation, default
63 fields[V_POSITIONAL_ONLY_NAME] = List[str], None
64 self.positional_only_args.add(name)
65 elif p.kind == Parameter.POSITIONAL_OR_KEYWORD:
66 self.arg_mapping[i] = name
67 fields[name] = annotation, default
68 elif p.kind == Parameter.KEYWORD_ONLY:
69 fields[name] = annotation, default
70 elif p.kind == Parameter.VAR_POSITIONAL:
71 self.v_args_name = name
72 fields[name] = Tuple[annotation, ...], None
73 takes_args = True
74 else:
75 assert p.kind == Parameter.VAR_KEYWORD, p.kind
76 self.v_kwargs_name = name
77 fields[name] = Dict[str, annotation], None # type: ignore
78 takes_kwargs = True
79
80 # these checks avoid a clash between "args" and a field with that name
81 if not takes_args and self.v_args_name in fields:
82 self.v_args_name = ALT_V_ARGS
83
84 # same with "kwargs"
85 if not takes_kwargs and self.v_kwargs_name in fields:
86 self.v_kwargs_name = ALT_V_KWARGS
87
88 if not takes_args:
89 # we add the field so validation below can raise the correct exception
90 fields[self.v_args_name] = List[Any], None
91
92 if not takes_kwargs:
93 # same with kwargs
94 fields[self.v_kwargs_name] = Dict[Any, Any], None
95
96 self.create_model(fields, takes_args, takes_kwargs)
97
98 def __call__(self, *args: Any, **kwargs: Any) -> Any:
99 values = self.build_values(args, kwargs)
100 m = self.model(**values)
101 return self.execute(m)
102
103 def build_values(self, args: Tuple[Any, ...], kwargs: Dict[str, Any]) -> Dict[str, Any]:
104 values: Dict[str, Any] = {}
105 if args:
106 arg_iter = enumerate(args)
107 while True:
108 try:
109 i, a = next(arg_iter)
110 except StopIteration:
111 break
112 arg_name = self.arg_mapping.get(i)
113 if arg_name is not None:
114 values[arg_name] = a
115 else:
116 values[self.v_args_name] = [a] + [a for _, a in arg_iter]
117 break
118
119 var_kwargs = {}
120 wrong_positional_args = []
121 for k, v in kwargs.items():
122 if k in self.model.__fields__:
123 if k in self.positional_only_args:
124 wrong_positional_args.append(k)
125 values[k] = v
126 else:
127 var_kwargs[k] = v
128
129 if var_kwargs:
130 values[self.v_kwargs_name] = var_kwargs
131 if wrong_positional_args:
132 values[V_POSITIONAL_ONLY_NAME] = wrong_positional_args
133 return values
134
135 def execute(self, m: BaseModel) -> Any:
136 d = {k: v for k, v in m._iter() if k in m.__fields_set__}
137 kwargs = d.pop(self.v_kwargs_name, None)
138 if kwargs:
139 d.update(kwargs)
140
141 if self.v_args_name in d:
142 args_: List[Any] = []
143 in_kwargs = False
144 kwargs = {}
145 for name, value in d.items():
146 if in_kwargs:
147 kwargs[name] = value
148 elif name == self.v_args_name:
149 args_ += value
150 in_kwargs = True
151 else:
152 args_.append(value)
153 return self.raw_function(*args_, **kwargs)
154 elif self.positional_only_args:
155 args_ = []
156 kwargs = {}
157 for name, value in d.items():
158 if name in self.positional_only_args:
159 args_.append(value)
160 else:
161 kwargs[name] = value
162 return self.raw_function(*args_, **kwargs)
163 else:
164 return self.raw_function(**d)
165
166 def create_model(self, fields: Dict[str, Any], takes_args: bool, takes_kwargs: bool) -> None:
167 pos_args = len(self.arg_mapping)
168
169 class DecoratorBaseModel(BaseModel):
170 @validator(self.v_args_name, check_fields=False, allow_reuse=True)
171 def check_args(cls, v: List[Any]) -> List[Any]:
172 if takes_args:
173 return v
174
175 raise TypeError(f'{pos_args} positional arguments expected but {pos_args + len(v)} given')
176
177 @validator(self.v_kwargs_name, check_fields=False, allow_reuse=True)
178 def check_kwargs(cls, v: Dict[str, Any]) -> Dict[str, Any]:
179 if takes_kwargs:
180 return v
181
182 plural = '' if len(v) == 1 else 's'
183 keys = ', '.join(map(repr, v.keys()))
184 raise TypeError(f'unexpected keyword argument{plural}: {keys}')
185
186 @validator(V_POSITIONAL_ONLY_NAME, check_fields=False, allow_reuse=True)
187 def check_positional_only(cls, v: List[str]) -> None:
188 plural = '' if len(v) == 1 else 's'
189 keys = ', '.join(map(repr, v))
190 raise TypeError(f'positional-only argument{plural} passed as keyword argument{plural}: {keys}')
191
192 class Config:
193 extra = Extra.forbid
194
195 self.model = create_model(to_camel(self.raw_function.__name__), __base__=DecoratorBaseModel, **fields)
196
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pydantic/decorator.py b/pydantic/decorator.py
--- a/pydantic/decorator.py
+++ b/pydantic/decorator.py
@@ -1,4 +1,4 @@
-from functools import update_wrapper
+from functools import wraps
from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Tuple, TypeVar, cast, get_type_hints
from . import validator
@@ -19,8 +19,15 @@
Decorator to validate the arguments passed to a function.
"""
vd = ValidatedFunction(function)
- vd = update_wrapper(vd, function) # type: ignore
- return cast('Callable', vd)
+
+ @wraps(function)
+ def wrapper_function(*args: Any, **kwargs: Any) -> Any:
+ return vd.call(*args, **kwargs)
+
+ wrapper_function.vd = vd # type: ignore
+ wrapper_function.raw_function = vd.raw_function # type: ignore
+ wrapper_function.model = vd.model # type: ignore
+ return cast('Callable', wrapper_function)
ALT_V_ARGS = 'v__args'
@@ -95,7 +102,7 @@
self.create_model(fields, takes_args, takes_kwargs)
- def __call__(self, *args: Any, **kwargs: Any) -> Any:
+ def call(self, *args: Any, **kwargs: Any) -> Any:
values = self.build_values(args, kwargs)
m = self.model(**values)
return self.execute(m)
|
{"golden_diff": "diff --git a/pydantic/decorator.py b/pydantic/decorator.py\n--- a/pydantic/decorator.py\n+++ b/pydantic/decorator.py\n@@ -1,4 +1,4 @@\n-from functools import update_wrapper\n+from functools import wraps\n from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Tuple, TypeVar, cast, get_type_hints\n \n from . import validator\n@@ -19,8 +19,15 @@\n Decorator to validate the arguments passed to a function.\n \"\"\"\n vd = ValidatedFunction(function)\n- vd = update_wrapper(vd, function) # type: ignore\n- return cast('Callable', vd)\n+\n+ @wraps(function)\n+ def wrapper_function(*args: Any, **kwargs: Any) -> Any:\n+ return vd.call(*args, **kwargs)\n+\n+ wrapper_function.vd = vd # type: ignore\n+ wrapper_function.raw_function = vd.raw_function # type: ignore\n+ wrapper_function.model = vd.model # type: ignore\n+ return cast('Callable', wrapper_function)\n \n \n ALT_V_ARGS = 'v__args'\n@@ -95,7 +102,7 @@\n \n self.create_model(fields, takes_args, takes_kwargs)\n \n- def __call__(self, *args: Any, **kwargs: Any) -> Any:\n+ def call(self, *args: Any, **kwargs: Any) -> Any:\n values = self.build_values(args, kwargs)\n m = self.model(**values)\n return self.execute(m)\n", "issue": "@validate_arguments on instance methods\n# Bug\r\n\r\nHello, I tried using the new `@validate_arguments` decorator and it doesn't work when used on instance methods.\r\n\r\nI didn't see it on the ToDo in #1205 and it seems like an oversight, maybe due to the special treatment of `self`.\r\n\r\nOutput of `python -c \"import pydantic.utils; print(pydantic.utils.version_info())\"`:\r\n```\r\n$ python3 -c \"import pydantic.utils; print(pydantic.utils.version_info())\"\r\n pydantic version: 1.4a1\r\n pydantic compiled: False\r\n install path: /home/[user]/git/pydantic/pydantic\r\n python version: 3.7.5 (default, Nov 20 2019, 09:21:52) [GCC 9.2.1 20191008]\r\n platform: Linux-5.3.0-29-generic-x86_64-with-Ubuntu-19.10-eoan\r\n optional deps. installed: []\r\n```\r\n\r\n\r\n```py\r\nfrom pydantic import validate_arguments\r\n\r\n\r\nclass SomeObject:\r\n @validate_arguments\r\n def some_function(self, i: int):\r\n print(type(self), self)\r\n print(type(i), i)\r\n\r\no = SomeObject()\r\no.some_function(1) # doesn't work, instead of `i` `self` becomes 1\r\n#pydantic.error_wrappers.ValidationError: 1 validation error for SomeFunction\r\n#i\r\n# field required (type=value_error.missing)\r\n\r\no.some_function(o, 1) # works, but not the way instance methods are meant to be used\r\n#<class '__main__.SomeObject'> <__main__.SomeObject object at 0x7f32911af3d0>\r\n#<class 'int'> 1\r\n```\n", "before_files": [{"content": "from functools import update_wrapper\nfrom typing import TYPE_CHECKING, Any, Dict, List, Mapping, Tuple, TypeVar, cast, get_type_hints\n\nfrom . import validator\nfrom .errors import ConfigError\nfrom .main import BaseModel, Extra, create_model\nfrom .utils import to_camel\n\n__all__ = ('validate_arguments',)\n\nif TYPE_CHECKING:\n from .typing import AnyCallable\n\n Callable = TypeVar('Callable', bound=AnyCallable)\n\n\ndef validate_arguments(function: 'Callable') -> 'Callable':\n \"\"\"\n Decorator to validate the arguments passed to a function.\n \"\"\"\n vd = ValidatedFunction(function)\n vd = update_wrapper(vd, function) # type: ignore\n return cast('Callable', vd)\n\n\nALT_V_ARGS = 'v__args'\nALT_V_KWARGS = 'v__kwargs'\nV_POSITIONAL_ONLY_NAME = 'v__positional_only'\n\n\nclass ValidatedFunction:\n def __init__(self, function: 'Callable'):\n from inspect import signature, Parameter\n\n parameters: Mapping[str, Parameter] = signature(function).parameters\n\n if parameters.keys() & {ALT_V_ARGS, ALT_V_KWARGS, V_POSITIONAL_ONLY_NAME}:\n raise ConfigError(\n f'\"{ALT_V_ARGS}\", \"{ALT_V_KWARGS}\" and \"{V_POSITIONAL_ONLY_NAME}\" are not permitted as argument '\n f'names when using the \"{validate_arguments.__name__}\" decorator'\n )\n\n self.raw_function = function\n self.arg_mapping: Dict[int, str] = {}\n self.positional_only_args = set()\n self.v_args_name = 'args'\n self.v_kwargs_name = 'kwargs'\n\n type_hints = get_type_hints(function)\n takes_args = False\n takes_kwargs = False\n fields: Dict[str, Tuple[Any, Any]] = {}\n for i, (name, p) in enumerate(parameters.items()):\n if p.annotation == p.empty:\n annotation = Any\n else:\n annotation = type_hints[name]\n\n default = ... if p.default == p.empty else p.default\n if p.kind == Parameter.POSITIONAL_ONLY:\n self.arg_mapping[i] = name\n fields[name] = annotation, default\n fields[V_POSITIONAL_ONLY_NAME] = List[str], None\n self.positional_only_args.add(name)\n elif p.kind == Parameter.POSITIONAL_OR_KEYWORD:\n self.arg_mapping[i] = name\n fields[name] = annotation, default\n elif p.kind == Parameter.KEYWORD_ONLY:\n fields[name] = annotation, default\n elif p.kind == Parameter.VAR_POSITIONAL:\n self.v_args_name = name\n fields[name] = Tuple[annotation, ...], None\n takes_args = True\n else:\n assert p.kind == Parameter.VAR_KEYWORD, p.kind\n self.v_kwargs_name = name\n fields[name] = Dict[str, annotation], None # type: ignore\n takes_kwargs = True\n\n # these checks avoid a clash between \"args\" and a field with that name\n if not takes_args and self.v_args_name in fields:\n self.v_args_name = ALT_V_ARGS\n\n # same with \"kwargs\"\n if not takes_kwargs and self.v_kwargs_name in fields:\n self.v_kwargs_name = ALT_V_KWARGS\n\n if not takes_args:\n # we add the field so validation below can raise the correct exception\n fields[self.v_args_name] = List[Any], None\n\n if not takes_kwargs:\n # same with kwargs\n fields[self.v_kwargs_name] = Dict[Any, Any], None\n\n self.create_model(fields, takes_args, takes_kwargs)\n\n def __call__(self, *args: Any, **kwargs: Any) -> Any:\n values = self.build_values(args, kwargs)\n m = self.model(**values)\n return self.execute(m)\n\n def build_values(self, args: Tuple[Any, ...], kwargs: Dict[str, Any]) -> Dict[str, Any]:\n values: Dict[str, Any] = {}\n if args:\n arg_iter = enumerate(args)\n while True:\n try:\n i, a = next(arg_iter)\n except StopIteration:\n break\n arg_name = self.arg_mapping.get(i)\n if arg_name is not None:\n values[arg_name] = a\n else:\n values[self.v_args_name] = [a] + [a for _, a in arg_iter]\n break\n\n var_kwargs = {}\n wrong_positional_args = []\n for k, v in kwargs.items():\n if k in self.model.__fields__:\n if k in self.positional_only_args:\n wrong_positional_args.append(k)\n values[k] = v\n else:\n var_kwargs[k] = v\n\n if var_kwargs:\n values[self.v_kwargs_name] = var_kwargs\n if wrong_positional_args:\n values[V_POSITIONAL_ONLY_NAME] = wrong_positional_args\n return values\n\n def execute(self, m: BaseModel) -> Any:\n d = {k: v for k, v in m._iter() if k in m.__fields_set__}\n kwargs = d.pop(self.v_kwargs_name, None)\n if kwargs:\n d.update(kwargs)\n\n if self.v_args_name in d:\n args_: List[Any] = []\n in_kwargs = False\n kwargs = {}\n for name, value in d.items():\n if in_kwargs:\n kwargs[name] = value\n elif name == self.v_args_name:\n args_ += value\n in_kwargs = True\n else:\n args_.append(value)\n return self.raw_function(*args_, **kwargs)\n elif self.positional_only_args:\n args_ = []\n kwargs = {}\n for name, value in d.items():\n if name in self.positional_only_args:\n args_.append(value)\n else:\n kwargs[name] = value\n return self.raw_function(*args_, **kwargs)\n else:\n return self.raw_function(**d)\n\n def create_model(self, fields: Dict[str, Any], takes_args: bool, takes_kwargs: bool) -> None:\n pos_args = len(self.arg_mapping)\n\n class DecoratorBaseModel(BaseModel):\n @validator(self.v_args_name, check_fields=False, allow_reuse=True)\n def check_args(cls, v: List[Any]) -> List[Any]:\n if takes_args:\n return v\n\n raise TypeError(f'{pos_args} positional arguments expected but {pos_args + len(v)} given')\n\n @validator(self.v_kwargs_name, check_fields=False, allow_reuse=True)\n def check_kwargs(cls, v: Dict[str, Any]) -> Dict[str, Any]:\n if takes_kwargs:\n return v\n\n plural = '' if len(v) == 1 else 's'\n keys = ', '.join(map(repr, v.keys()))\n raise TypeError(f'unexpected keyword argument{plural}: {keys}')\n\n @validator(V_POSITIONAL_ONLY_NAME, check_fields=False, allow_reuse=True)\n def check_positional_only(cls, v: List[str]) -> None:\n plural = '' if len(v) == 1 else 's'\n keys = ', '.join(map(repr, v))\n raise TypeError(f'positional-only argument{plural} passed as keyword argument{plural}: {keys}')\n\n class Config:\n extra = Extra.forbid\n\n self.model = create_model(to_camel(self.raw_function.__name__), __base__=DecoratorBaseModel, **fields)\n", "path": "pydantic/decorator.py"}], "after_files": [{"content": "from functools import wraps\nfrom typing import TYPE_CHECKING, Any, Dict, List, Mapping, Tuple, TypeVar, cast, get_type_hints\n\nfrom . import validator\nfrom .errors import ConfigError\nfrom .main import BaseModel, Extra, create_model\nfrom .utils import to_camel\n\n__all__ = ('validate_arguments',)\n\nif TYPE_CHECKING:\n from .typing import AnyCallable\n\n Callable = TypeVar('Callable', bound=AnyCallable)\n\n\ndef validate_arguments(function: 'Callable') -> 'Callable':\n \"\"\"\n Decorator to validate the arguments passed to a function.\n \"\"\"\n vd = ValidatedFunction(function)\n\n @wraps(function)\n def wrapper_function(*args: Any, **kwargs: Any) -> Any:\n return vd.call(*args, **kwargs)\n\n wrapper_function.vd = vd # type: ignore\n wrapper_function.raw_function = vd.raw_function # type: ignore\n wrapper_function.model = vd.model # type: ignore\n return cast('Callable', wrapper_function)\n\n\nALT_V_ARGS = 'v__args'\nALT_V_KWARGS = 'v__kwargs'\nV_POSITIONAL_ONLY_NAME = 'v__positional_only'\n\n\nclass ValidatedFunction:\n def __init__(self, function: 'Callable'):\n from inspect import signature, Parameter\n\n parameters: Mapping[str, Parameter] = signature(function).parameters\n\n if parameters.keys() & {ALT_V_ARGS, ALT_V_KWARGS, V_POSITIONAL_ONLY_NAME}:\n raise ConfigError(\n f'\"{ALT_V_ARGS}\", \"{ALT_V_KWARGS}\" and \"{V_POSITIONAL_ONLY_NAME}\" are not permitted as argument '\n f'names when using the \"{validate_arguments.__name__}\" decorator'\n )\n\n self.raw_function = function\n self.arg_mapping: Dict[int, str] = {}\n self.positional_only_args = set()\n self.v_args_name = 'args'\n self.v_kwargs_name = 'kwargs'\n\n type_hints = get_type_hints(function)\n takes_args = False\n takes_kwargs = False\n fields: Dict[str, Tuple[Any, Any]] = {}\n for i, (name, p) in enumerate(parameters.items()):\n if p.annotation == p.empty:\n annotation = Any\n else:\n annotation = type_hints[name]\n\n default = ... if p.default == p.empty else p.default\n if p.kind == Parameter.POSITIONAL_ONLY:\n self.arg_mapping[i] = name\n fields[name] = annotation, default\n fields[V_POSITIONAL_ONLY_NAME] = List[str], None\n self.positional_only_args.add(name)\n elif p.kind == Parameter.POSITIONAL_OR_KEYWORD:\n self.arg_mapping[i] = name\n fields[name] = annotation, default\n elif p.kind == Parameter.KEYWORD_ONLY:\n fields[name] = annotation, default\n elif p.kind == Parameter.VAR_POSITIONAL:\n self.v_args_name = name\n fields[name] = Tuple[annotation, ...], None\n takes_args = True\n else:\n assert p.kind == Parameter.VAR_KEYWORD, p.kind\n self.v_kwargs_name = name\n fields[name] = Dict[str, annotation], None # type: ignore\n takes_kwargs = True\n\n # these checks avoid a clash between \"args\" and a field with that name\n if not takes_args and self.v_args_name in fields:\n self.v_args_name = ALT_V_ARGS\n\n # same with \"kwargs\"\n if not takes_kwargs and self.v_kwargs_name in fields:\n self.v_kwargs_name = ALT_V_KWARGS\n\n if not takes_args:\n # we add the field so validation below can raise the correct exception\n fields[self.v_args_name] = List[Any], None\n\n if not takes_kwargs:\n # same with kwargs\n fields[self.v_kwargs_name] = Dict[Any, Any], None\n\n self.create_model(fields, takes_args, takes_kwargs)\n\n def call(self, *args: Any, **kwargs: Any) -> Any:\n values = self.build_values(args, kwargs)\n m = self.model(**values)\n return self.execute(m)\n\n def build_values(self, args: Tuple[Any, ...], kwargs: Dict[str, Any]) -> Dict[str, Any]:\n values: Dict[str, Any] = {}\n if args:\n arg_iter = enumerate(args)\n while True:\n try:\n i, a = next(arg_iter)\n except StopIteration:\n break\n arg_name = self.arg_mapping.get(i)\n if arg_name is not None:\n values[arg_name] = a\n else:\n values[self.v_args_name] = [a] + [a for _, a in arg_iter]\n break\n\n var_kwargs = {}\n wrong_positional_args = []\n for k, v in kwargs.items():\n if k in self.model.__fields__:\n if k in self.positional_only_args:\n wrong_positional_args.append(k)\n values[k] = v\n else:\n var_kwargs[k] = v\n\n if var_kwargs:\n values[self.v_kwargs_name] = var_kwargs\n if wrong_positional_args:\n values[V_POSITIONAL_ONLY_NAME] = wrong_positional_args\n return values\n\n def execute(self, m: BaseModel) -> Any:\n d = {k: v for k, v in m._iter() if k in m.__fields_set__}\n kwargs = d.pop(self.v_kwargs_name, None)\n if kwargs:\n d.update(kwargs)\n\n if self.v_args_name in d:\n args_: List[Any] = []\n in_kwargs = False\n kwargs = {}\n for name, value in d.items():\n if in_kwargs:\n kwargs[name] = value\n elif name == self.v_args_name:\n args_ += value\n in_kwargs = True\n else:\n args_.append(value)\n return self.raw_function(*args_, **kwargs)\n elif self.positional_only_args:\n args_ = []\n kwargs = {}\n for name, value in d.items():\n if name in self.positional_only_args:\n args_.append(value)\n else:\n kwargs[name] = value\n return self.raw_function(*args_, **kwargs)\n else:\n return self.raw_function(**d)\n\n def create_model(self, fields: Dict[str, Any], takes_args: bool, takes_kwargs: bool) -> None:\n pos_args = len(self.arg_mapping)\n\n class DecoratorBaseModel(BaseModel):\n @validator(self.v_args_name, check_fields=False, allow_reuse=True)\n def check_args(cls, v: List[Any]) -> List[Any]:\n if takes_args:\n return v\n\n raise TypeError(f'{pos_args} positional arguments expected but {pos_args + len(v)} given')\n\n @validator(self.v_kwargs_name, check_fields=False, allow_reuse=True)\n def check_kwargs(cls, v: Dict[str, Any]) -> Dict[str, Any]:\n if takes_kwargs:\n return v\n\n plural = '' if len(v) == 1 else 's'\n keys = ', '.join(map(repr, v.keys()))\n raise TypeError(f'unexpected keyword argument{plural}: {keys}')\n\n @validator(V_POSITIONAL_ONLY_NAME, check_fields=False, allow_reuse=True)\n def check_positional_only(cls, v: List[str]) -> None:\n plural = '' if len(v) == 1 else 's'\n keys = ', '.join(map(repr, v))\n raise TypeError(f'positional-only argument{plural} passed as keyword argument{plural}: {keys}')\n\n class Config:\n extra = Extra.forbid\n\n self.model = create_model(to_camel(self.raw_function.__name__), __base__=DecoratorBaseModel, **fields)\n", "path": "pydantic/decorator.py"}]}
| 2,771 | 348 |
gh_patches_debug_12269
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-513
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement schema list page
**Problem**
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
Users should be able to create a new schema, edit schema names, and delete schemas.
**Proposed solution**
<!-- A clear and concise description of your proposed solution or feature. -->
We should provide a way to do these actions from the UI using the schema list page introduced in the [design spec](https://wiki.mathesar.org/en/design/specs/schemas).
**Additional context**
<!-- Add any other context or screenshots about the feature request here.-->
- #166
- #168
- #170
- #393
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/urls.py`
Content:
```
1 from django.urls import include, path
2 from rest_framework_nested import routers
3
4 from mathesar.views import api, frontend
5
6
7 router = routers.DefaultRouter()
8 router.register(r'tables', api.TableViewSet, basename='table')
9 router.register(r'schemas', api.SchemaViewSet, basename='schema')
10 router.register(r'database_keys', api.DatabaseKeyViewSet, basename='database-key')
11 router.register(r'databases', api.DatabaseViewSet, basename='database')
12 router.register(r'data_files', api.DataFileViewSet, basename='data-file')
13
14 table_router = routers.NestedSimpleRouter(router, r'tables', lookup='table')
15 table_router.register(r'records', api.RecordViewSet, basename='table-record')
16 table_router.register(r'columns', api.ColumnViewSet, basename='table-column')
17
18 urlpatterns = [
19 path('', frontend.index, name="index"),
20 path('api/v0/', include(router.urls)),
21 path('api/v0/', include(table_router.urls)),
22 # TODO: Handle known urls like /favicon.ico etc.,
23 # Currenty, this catches all
24 path('<dbname>', frontend.index, name="index"),
25 ]
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mathesar/urls.py b/mathesar/urls.py
--- a/mathesar/urls.py
+++ b/mathesar/urls.py
@@ -1,4 +1,4 @@
-from django.urls import include, path
+from django.urls import include, path, re_path
from rest_framework_nested import routers
from mathesar.views import api, frontend
@@ -20,6 +20,6 @@
path('api/v0/', include(router.urls)),
path('api/v0/', include(table_router.urls)),
# TODO: Handle known urls like /favicon.ico etc.,
- # Currenty, this catches all
- path('<dbname>', frontend.index, name="index"),
+ # Currently, this catches all
+ re_path(r'(?P<dbname>\w+)/.*$', frontend.index, name="index"),
]
|
{"golden_diff": "diff --git a/mathesar/urls.py b/mathesar/urls.py\n--- a/mathesar/urls.py\n+++ b/mathesar/urls.py\n@@ -1,4 +1,4 @@\n-from django.urls import include, path\n+from django.urls import include, path, re_path\n from rest_framework_nested import routers\n \n from mathesar.views import api, frontend\n@@ -20,6 +20,6 @@\n path('api/v0/', include(router.urls)),\n path('api/v0/', include(table_router.urls)),\n # TODO: Handle known urls like /favicon.ico etc.,\n- # Currenty, this catches all\n- path('<dbname>', frontend.index, name=\"index\"),\n+ # Currently, this catches all\n+ re_path(r'(?P<dbname>\\w+)/.*$', frontend.index, name=\"index\"),\n ]\n", "issue": "Implement schema list page\n**Problem**\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nUsers should be able to create a new schema, edit schema names, and delete schemas.\r\n\r\n**Proposed solution**\r\n<!-- A clear and concise description of your proposed solution or feature. -->\r\nWe should provide a way to do these actions from the UI using the schema list page introduced in the [design spec](https://wiki.mathesar.org/en/design/specs/schemas).\r\n\r\n**Additional context**\r\n<!-- Add any other context or screenshots about the feature request here.-->\r\n- #166\r\n- #168 \r\n- #170\r\n- #393\n", "before_files": [{"content": "from django.urls import include, path\nfrom rest_framework_nested import routers\n\nfrom mathesar.views import api, frontend\n\n\nrouter = routers.DefaultRouter()\nrouter.register(r'tables', api.TableViewSet, basename='table')\nrouter.register(r'schemas', api.SchemaViewSet, basename='schema')\nrouter.register(r'database_keys', api.DatabaseKeyViewSet, basename='database-key')\nrouter.register(r'databases', api.DatabaseViewSet, basename='database')\nrouter.register(r'data_files', api.DataFileViewSet, basename='data-file')\n\ntable_router = routers.NestedSimpleRouter(router, r'tables', lookup='table')\ntable_router.register(r'records', api.RecordViewSet, basename='table-record')\ntable_router.register(r'columns', api.ColumnViewSet, basename='table-column')\n\nurlpatterns = [\n path('', frontend.index, name=\"index\"),\n path('api/v0/', include(router.urls)),\n path('api/v0/', include(table_router.urls)),\n # TODO: Handle known urls like /favicon.ico etc.,\n # Currenty, this catches all\n path('<dbname>', frontend.index, name=\"index\"),\n]\n", "path": "mathesar/urls.py"}], "after_files": [{"content": "from django.urls import include, path, re_path\nfrom rest_framework_nested import routers\n\nfrom mathesar.views import api, frontend\n\n\nrouter = routers.DefaultRouter()\nrouter.register(r'tables', api.TableViewSet, basename='table')\nrouter.register(r'schemas', api.SchemaViewSet, basename='schema')\nrouter.register(r'database_keys', api.DatabaseKeyViewSet, basename='database-key')\nrouter.register(r'databases', api.DatabaseViewSet, basename='database')\nrouter.register(r'data_files', api.DataFileViewSet, basename='data-file')\n\ntable_router = routers.NestedSimpleRouter(router, r'tables', lookup='table')\ntable_router.register(r'records', api.RecordViewSet, basename='table-record')\ntable_router.register(r'columns', api.ColumnViewSet, basename='table-column')\n\nurlpatterns = [\n path('', frontend.index, name=\"index\"),\n path('api/v0/', include(router.urls)),\n path('api/v0/', include(table_router.urls)),\n # TODO: Handle known urls like /favicon.ico etc.,\n # Currently, this catches all\n re_path(r'(?P<dbname>\\w+)/.*$', frontend.index, name=\"index\"),\n]\n", "path": "mathesar/urls.py"}]}
| 682 | 181 |
gh_patches_debug_41643
|
rasdani/github-patches
|
git_diff
|
microsoft__Qcodes-1171
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Keithley 2400 does not get added to the station cleanly
The ":read:" command and possibly others does not work when output is off but fails with an error. This is called when getting volt and current are snapshotted
We should wrap these calls in checking that output is off
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qcodes/instrument_drivers/tektronix/Keithley_2400.py`
Content:
```
1 from qcodes import VisaInstrument
2 from qcodes.utils.validators import Strings, Enum
3
4
5 class Keithley_2400(VisaInstrument):
6 """
7 QCoDeS driver for the Keithley 2400 voltage source.
8 """
9 def __init__(self, name, address, **kwargs):
10 super().__init__(name, address, terminator='\n', **kwargs)
11
12 self.add_parameter('rangev',
13 get_cmd='SENS:VOLT:RANG?',
14 get_parser=float,
15 set_cmd='SOUR:VOLT:RANG {:f}',
16 label='Voltage range')
17
18 self.add_parameter('rangei',
19 get_cmd='SENS:CURR:RANG?',
20 get_parser=float,
21 set_cmd='SOUR:CURR:RANG {:f}',
22 label='Current range')
23
24 self.add_parameter('compliancev',
25 get_cmd='SENS:VOLT:PROT?',
26 get_parser=float,
27 set_cmd='SENS:VOLT:PROT {:f}',
28 label='Voltage Compliance')
29
30 self.add_parameter('compliancei',
31 get_cmd='SENS:CURR:PROT?',
32 get_parser=float,
33 set_cmd='SENS:CURR:PROT {:f}',
34 label='Current Compliance')
35
36 self.add_parameter('volt',
37 get_cmd=':READ?',
38 get_parser=self._volt_parser,
39 set_cmd=':SOUR:VOLT:LEV {:.8f}',
40 label='Voltage',
41 unit='V')
42
43 self.add_parameter('curr',
44 get_cmd=':READ?',
45 get_parser=self._curr_parser,
46 set_cmd=':SOUR:CURR:LEV {:.8f}',
47 label='Current',
48 unit='A')
49
50 self.add_parameter('mode',
51 vals=Enum('VOLT', 'CURR'),
52 get_cmd=':SOUR:FUNC?',
53 set_cmd=self._set_mode_and_sense,
54 label='Mode')
55
56 self.add_parameter('sense',
57 vals=Strings(),
58 get_cmd=':SENS:FUNC?',
59 set_cmd=':SENS:FUNC "{:s}"',
60 label='Sense mode')
61
62 self.add_parameter('output',
63 get_parser=int,
64 set_cmd=':OUTP:STAT {:d}',
65 get_cmd=':OUTP:STAT?')
66
67 self.add_parameter('nplcv',
68 get_cmd='SENS:VOLT:NPLC?',
69 get_parser=float,
70 set_cmd='SENS:VOLT:NPLC {:f}',
71 label='Voltage integration time')
72
73 self.add_parameter('nplci',
74 get_cmd='SENS:CURR:NPLC?',
75 get_parser=float,
76 set_cmd='SENS:CURR:NPLC {:f}',
77 label='Current integration time')
78
79 self.add_parameter('resistance',
80 get_cmd=':READ?',
81 get_parser=self._resistance_parser,
82 label='Resistance',
83 unit='Ohm')
84
85 def _set_mode_and_sense(self, msg):
86 # This helps set the correct read out curr/volt
87 if msg == 'VOLT':
88 self.sense('CURR')
89 elif msg == 'CURR':
90 self.sense('VOLT')
91 else:
92 raise AttributeError('Mode does not exist')
93 self.write(':SOUR:FUNC {:s}'.format(msg))
94
95 def reset(self):
96 """
97 Reset the instrument. When the instrument is reset, it performs the
98 following actions.
99
100 Returns the SourceMeter to the GPIB default conditions.
101
102 Cancels all pending commands.
103
104 Cancels all previously send `*OPC` and `*OPC?`
105 """
106 self.write(':*RST')
107
108 def _volt_parser(self, msg):
109 fields = [float(x) for x in msg.split(',')]
110 return fields[0]
111
112 def _curr_parser(self, msg):
113 fields = [float(x) for x in msg.split(',')]
114 return fields[1]
115
116 def _resistance_parser(self, msg):
117 fields = [float(x) for x in msg.split(',')]
118 return fields[0]/fields[1]
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qcodes/instrument_drivers/tektronix/Keithley_2400.py b/qcodes/instrument_drivers/tektronix/Keithley_2400.py
--- a/qcodes/instrument_drivers/tektronix/Keithley_2400.py
+++ b/qcodes/instrument_drivers/tektronix/Keithley_2400.py
@@ -34,18 +34,31 @@
label='Current Compliance')
self.add_parameter('volt',
- get_cmd=':READ?',
+ get_cmd=self._get_read_output_protected,
get_parser=self._volt_parser,
set_cmd=':SOUR:VOLT:LEV {:.8f}',
label='Voltage',
- unit='V')
+ unit='V',
+ docstring="Sets voltage in 'VOLT' mode. "
+ "Get returns measured voltage if "
+ "sensing 'VOLT' otherwise it returns "
+ "setpoint value. "
+ "Note that it is an error to read voltage with "
+ "output off")
self.add_parameter('curr',
- get_cmd=':READ?',
+ get_cmd=self._get_read_output_protected,
get_parser=self._curr_parser,
set_cmd=':SOUR:CURR:LEV {:.8f}',
label='Current',
- unit='A')
+ unit='A',
+ docstring = "Sets current in 'CURR' mode. "
+ "Get returns measured current if "
+ "sensing 'CURR' otherwise it returns "
+ "setpoint value. "
+ "Note that it is an error to read current with "
+ "output off")
+
self.add_parameter('mode',
vals=Enum('VOLT', 'CURR'),
@@ -77,10 +90,32 @@
label='Current integration time')
self.add_parameter('resistance',
- get_cmd=':READ?',
+ get_cmd=self._get_read_output_protected,
get_parser=self._resistance_parser,
label='Resistance',
- unit='Ohm')
+ unit='Ohm',
+ docstring="Measure resistance from current and voltage "
+ "Note that it is an error to read current "
+ "and voltage with output off")
+
+ def _get_read_output_protected(self) -> str:
+ """
+ This wrapper function around ":READ?" exists because calling
+ ":READ?" on an instrument with output disabled is an error.
+ So first we check that output is on and if not we return
+ nan for volt, curr etc.
+ """
+ output = self.output.get_latest()
+ if output is None:
+ # if get_latest returns None we have
+ # to ask the instrument for the status of output
+ output = self.output.get()
+
+ if output == 1:
+ msg = self.ask(':READ?')
+ else:
+ raise RuntimeError("Cannot perform read with output off")
+ return msg
def _set_mode_and_sense(self, msg):
# This helps set the correct read out curr/volt
@@ -115,4 +150,5 @@
def _resistance_parser(self, msg):
fields = [float(x) for x in msg.split(',')]
- return fields[0]/fields[1]
+ res = fields[0] / fields[1]
+ return res
|
{"golden_diff": "diff --git a/qcodes/instrument_drivers/tektronix/Keithley_2400.py b/qcodes/instrument_drivers/tektronix/Keithley_2400.py\n--- a/qcodes/instrument_drivers/tektronix/Keithley_2400.py\n+++ b/qcodes/instrument_drivers/tektronix/Keithley_2400.py\n@@ -34,18 +34,31 @@\n label='Current Compliance')\n \n self.add_parameter('volt',\n- get_cmd=':READ?',\n+ get_cmd=self._get_read_output_protected,\n get_parser=self._volt_parser,\n set_cmd=':SOUR:VOLT:LEV {:.8f}',\n label='Voltage',\n- unit='V')\n+ unit='V',\n+ docstring=\"Sets voltage in 'VOLT' mode. \"\n+ \"Get returns measured voltage if \"\n+ \"sensing 'VOLT' otherwise it returns \"\n+ \"setpoint value. \"\n+ \"Note that it is an error to read voltage with \"\n+ \"output off\")\n \n self.add_parameter('curr',\n- get_cmd=':READ?',\n+ get_cmd=self._get_read_output_protected,\n get_parser=self._curr_parser,\n set_cmd=':SOUR:CURR:LEV {:.8f}',\n label='Current',\n- unit='A')\n+ unit='A',\n+ docstring = \"Sets current in 'CURR' mode. \"\n+ \"Get returns measured current if \"\n+ \"sensing 'CURR' otherwise it returns \"\n+ \"setpoint value. \"\n+ \"Note that it is an error to read current with \"\n+ \"output off\")\n+\n \n self.add_parameter('mode',\n vals=Enum('VOLT', 'CURR'),\n@@ -77,10 +90,32 @@\n label='Current integration time')\n \n self.add_parameter('resistance',\n- get_cmd=':READ?',\n+ get_cmd=self._get_read_output_protected,\n get_parser=self._resistance_parser,\n label='Resistance',\n- unit='Ohm')\n+ unit='Ohm',\n+ docstring=\"Measure resistance from current and voltage \"\n+ \"Note that it is an error to read current \"\n+ \"and voltage with output off\")\n+\n+ def _get_read_output_protected(self) -> str:\n+ \"\"\"\n+ This wrapper function around \":READ?\" exists because calling\n+ \":READ?\" on an instrument with output disabled is an error.\n+ So first we check that output is on and if not we return\n+ nan for volt, curr etc.\n+ \"\"\"\n+ output = self.output.get_latest()\n+ if output is None:\n+ # if get_latest returns None we have\n+ # to ask the instrument for the status of output\n+ output = self.output.get()\n+\n+ if output == 1:\n+ msg = self.ask(':READ?')\n+ else:\n+ raise RuntimeError(\"Cannot perform read with output off\")\n+ return msg\n \n def _set_mode_and_sense(self, msg):\n # This helps set the correct read out curr/volt\n@@ -115,4 +150,5 @@\n \n def _resistance_parser(self, msg):\n fields = [float(x) for x in msg.split(',')]\n- return fields[0]/fields[1]\n+ res = fields[0] / fields[1]\n+ return res\n", "issue": "Keithley 2400 does not get added to the station cleanly\nThe \":read:\" command and possibly others does not work when output is off but fails with an error. This is called when getting volt and current are snapshotted \r\n\r\nWe should wrap these calls in checking that output is off\n", "before_files": [{"content": "from qcodes import VisaInstrument\nfrom qcodes.utils.validators import Strings, Enum\n\n\nclass Keithley_2400(VisaInstrument):\n \"\"\"\n QCoDeS driver for the Keithley 2400 voltage source.\n \"\"\"\n def __init__(self, name, address, **kwargs):\n super().__init__(name, address, terminator='\\n', **kwargs)\n\n self.add_parameter('rangev',\n get_cmd='SENS:VOLT:RANG?',\n get_parser=float,\n set_cmd='SOUR:VOLT:RANG {:f}',\n label='Voltage range')\n\n self.add_parameter('rangei',\n get_cmd='SENS:CURR:RANG?',\n get_parser=float,\n set_cmd='SOUR:CURR:RANG {:f}',\n label='Current range')\n\n self.add_parameter('compliancev',\n get_cmd='SENS:VOLT:PROT?',\n get_parser=float,\n set_cmd='SENS:VOLT:PROT {:f}',\n label='Voltage Compliance')\n\n self.add_parameter('compliancei',\n get_cmd='SENS:CURR:PROT?',\n get_parser=float,\n set_cmd='SENS:CURR:PROT {:f}',\n label='Current Compliance')\n\n self.add_parameter('volt',\n get_cmd=':READ?',\n get_parser=self._volt_parser,\n set_cmd=':SOUR:VOLT:LEV {:.8f}',\n label='Voltage',\n unit='V')\n\n self.add_parameter('curr',\n get_cmd=':READ?',\n get_parser=self._curr_parser,\n set_cmd=':SOUR:CURR:LEV {:.8f}',\n label='Current',\n unit='A')\n\n self.add_parameter('mode',\n vals=Enum('VOLT', 'CURR'),\n get_cmd=':SOUR:FUNC?',\n set_cmd=self._set_mode_and_sense,\n label='Mode')\n\n self.add_parameter('sense',\n vals=Strings(),\n get_cmd=':SENS:FUNC?',\n set_cmd=':SENS:FUNC \"{:s}\"',\n label='Sense mode')\n\n self.add_parameter('output',\n get_parser=int,\n set_cmd=':OUTP:STAT {:d}',\n get_cmd=':OUTP:STAT?')\n\n self.add_parameter('nplcv',\n get_cmd='SENS:VOLT:NPLC?',\n get_parser=float,\n set_cmd='SENS:VOLT:NPLC {:f}',\n label='Voltage integration time')\n\n self.add_parameter('nplci',\n get_cmd='SENS:CURR:NPLC?',\n get_parser=float,\n set_cmd='SENS:CURR:NPLC {:f}',\n label='Current integration time')\n\n self.add_parameter('resistance',\n get_cmd=':READ?',\n get_parser=self._resistance_parser,\n label='Resistance',\n unit='Ohm')\n\n def _set_mode_and_sense(self, msg):\n # This helps set the correct read out curr/volt\n if msg == 'VOLT':\n self.sense('CURR')\n elif msg == 'CURR':\n self.sense('VOLT')\n else:\n raise AttributeError('Mode does not exist')\n self.write(':SOUR:FUNC {:s}'.format(msg))\n\n def reset(self):\n \"\"\"\n Reset the instrument. When the instrument is reset, it performs the\n following actions.\n\n Returns the SourceMeter to the GPIB default conditions.\n\n Cancels all pending commands.\n\n Cancels all previously send `*OPC` and `*OPC?`\n \"\"\"\n self.write(':*RST')\n\n def _volt_parser(self, msg):\n fields = [float(x) for x in msg.split(',')]\n return fields[0]\n\n def _curr_parser(self, msg):\n fields = [float(x) for x in msg.split(',')]\n return fields[1]\n\n def _resistance_parser(self, msg):\n fields = [float(x) for x in msg.split(',')]\n return fields[0]/fields[1]\n", "path": "qcodes/instrument_drivers/tektronix/Keithley_2400.py"}], "after_files": [{"content": "from qcodes import VisaInstrument\nfrom qcodes.utils.validators import Strings, Enum\n\n\nclass Keithley_2400(VisaInstrument):\n \"\"\"\n QCoDeS driver for the Keithley 2400 voltage source.\n \"\"\"\n def __init__(self, name, address, **kwargs):\n super().__init__(name, address, terminator='\\n', **kwargs)\n\n self.add_parameter('rangev',\n get_cmd='SENS:VOLT:RANG?',\n get_parser=float,\n set_cmd='SOUR:VOLT:RANG {:f}',\n label='Voltage range')\n\n self.add_parameter('rangei',\n get_cmd='SENS:CURR:RANG?',\n get_parser=float,\n set_cmd='SOUR:CURR:RANG {:f}',\n label='Current range')\n\n self.add_parameter('compliancev',\n get_cmd='SENS:VOLT:PROT?',\n get_parser=float,\n set_cmd='SENS:VOLT:PROT {:f}',\n label='Voltage Compliance')\n\n self.add_parameter('compliancei',\n get_cmd='SENS:CURR:PROT?',\n get_parser=float,\n set_cmd='SENS:CURR:PROT {:f}',\n label='Current Compliance')\n\n self.add_parameter('volt',\n get_cmd=self._get_read_output_protected,\n get_parser=self._volt_parser,\n set_cmd=':SOUR:VOLT:LEV {:.8f}',\n label='Voltage',\n unit='V',\n docstring=\"Sets voltage in 'VOLT' mode. \"\n \"Get returns measured voltage if \"\n \"sensing 'VOLT' otherwise it returns \"\n \"setpoint value. \"\n \"Note that it is an error to read voltage with \"\n \"output off\")\n\n self.add_parameter('curr',\n get_cmd=self._get_read_output_protected,\n get_parser=self._curr_parser,\n set_cmd=':SOUR:CURR:LEV {:.8f}',\n label='Current',\n unit='A',\n docstring = \"Sets current in 'CURR' mode. \"\n \"Get returns measured current if \"\n \"sensing 'CURR' otherwise it returns \"\n \"setpoint value. \"\n \"Note that it is an error to read current with \"\n \"output off\")\n\n\n self.add_parameter('mode',\n vals=Enum('VOLT', 'CURR'),\n get_cmd=':SOUR:FUNC?',\n set_cmd=self._set_mode_and_sense,\n label='Mode')\n\n self.add_parameter('sense',\n vals=Strings(),\n get_cmd=':SENS:FUNC?',\n set_cmd=':SENS:FUNC \"{:s}\"',\n label='Sense mode')\n\n self.add_parameter('output',\n get_parser=int,\n set_cmd=':OUTP:STAT {:d}',\n get_cmd=':OUTP:STAT?')\n\n self.add_parameter('nplcv',\n get_cmd='SENS:VOLT:NPLC?',\n get_parser=float,\n set_cmd='SENS:VOLT:NPLC {:f}',\n label='Voltage integration time')\n\n self.add_parameter('nplci',\n get_cmd='SENS:CURR:NPLC?',\n get_parser=float,\n set_cmd='SENS:CURR:NPLC {:f}',\n label='Current integration time')\n\n self.add_parameter('resistance',\n get_cmd=self._get_read_output_protected,\n get_parser=self._resistance_parser,\n label='Resistance',\n unit='Ohm',\n docstring=\"Measure resistance from current and voltage \"\n \"Note that it is an error to read current \"\n \"and voltage with output off\")\n\n def _get_read_output_protected(self) -> str:\n \"\"\"\n This wrapper function around \":READ?\" exists because calling\n \":READ?\" on an instrument with output disabled is an error.\n So first we check that output is on and if not we return\n nan for volt, curr etc.\n \"\"\"\n output = self.output.get_latest()\n if output is None:\n # if get_latest returns None we have\n # to ask the instrument for the status of output\n output = self.output.get()\n\n if output == 1:\n msg = self.ask(':READ?')\n else:\n raise RuntimeError(\"Cannot perform read with output off\")\n return msg\n\n def _set_mode_and_sense(self, msg):\n # This helps set the correct read out curr/volt\n if msg == 'VOLT':\n self.sense('CURR')\n elif msg == 'CURR':\n self.sense('VOLT')\n else:\n raise AttributeError('Mode does not exist')\n self.write(':SOUR:FUNC {:s}'.format(msg))\n\n def reset(self):\n \"\"\"\n Reset the instrument. When the instrument is reset, it performs the\n following actions.\n\n Returns the SourceMeter to the GPIB default conditions.\n\n Cancels all pending commands.\n\n Cancels all previously send `*OPC` and `*OPC?`\n \"\"\"\n self.write(':*RST')\n\n def _volt_parser(self, msg):\n fields = [float(x) for x in msg.split(',')]\n return fields[0]\n\n def _curr_parser(self, msg):\n fields = [float(x) for x in msg.split(',')]\n return fields[1]\n\n def _resistance_parser(self, msg):\n fields = [float(x) for x in msg.split(',')]\n res = fields[0] / fields[1]\n return res\n", "path": "qcodes/instrument_drivers/tektronix/Keithley_2400.py"}]}
| 1,465 | 765 |
gh_patches_debug_17250
|
rasdani/github-patches
|
git_diff
|
ibis-project__ibis-7472
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bug(oracle): Failing metadata query
### What happened?
Metadata query for Oracle is failing due to filtering the nullable column between SELECT and FROM statements, which is only possible after WHERE.
### What version of ibis are you using?
7.0.0
### What backend(s) are you using, if any?
Oracle
### Relevant log output
```sh
sqlalchemy.exc.DatabaseError: (oracledb.exceptions.DatabaseError) ORA-00923: FROM keyword not found where expected
[SQL: SELECT all_tab_columns.column_name, all_tab_columns.data_type, all_tab_columns.data_precision, all_tab_columns.data_scale, all_tab_columns.nullable = :nullable_1 AS nullable
FROM all_tab_columns
WHERE all_tab_columns.table_name = :table_name_1]
[parameters: {'nullable_1': 'Y', 'table_name_1': '_ibis_oracle_metadata_7djjvezdl5bnrmqkf6grsevvjq'}]
```
```
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ibis/backends/oracle/__init__.py`
Content:
```
1 """The Oracle backend."""
2
3 from __future__ import annotations
4
5 import atexit
6 import contextlib
7 import sys
8 import warnings
9 from typing import TYPE_CHECKING, Any
10
11 import oracledb
12 import sqlglot as sg
13
14 from ibis import util
15
16 # Wow, this is truly horrible
17 # Get out your clippers, it's time to shave a yak.
18 #
19 # 1. snowflake-sqlalchemy doesn't support sqlalchemy 2.0
20 # 2. oracledb is only supported in sqlalchemy 2.0
21 # 3. Ergo, module hacking is required to avoid doing a silly amount of work
22 # to create multiple lockfiles or port snowflake away from sqlalchemy
23 # 4. Also the version needs to be spoofed to be >= 7 or else the cx_Oracle
24 # dialect barfs
25 oracledb.__version__ = oracledb.version = "7"
26
27 sys.modules["cx_Oracle"] = oracledb
28
29 import sqlalchemy as sa # noqa: E402
30
31 import ibis.common.exceptions as exc # noqa: E402
32 import ibis.expr.datatypes as dt # noqa: E402
33 import ibis.expr.operations as ops # noqa: E402
34 import ibis.expr.schema as sch # noqa: E402
35 from ibis.backends.base.sql.alchemy import ( # noqa: E402
36 AlchemyCompiler,
37 AlchemyExprTranslator,
38 BaseAlchemyBackend,
39 )
40 from ibis.backends.oracle.datatypes import OracleType # noqa: E402
41 from ibis.backends.oracle.registry import operation_registry # noqa: E402
42 from ibis.expr.rewrites import rewrite_sample # noqa: E402
43
44 if TYPE_CHECKING:
45 from collections.abc import Iterable
46
47
48 class OracleExprTranslator(AlchemyExprTranslator):
49 _registry = operation_registry.copy()
50 _rewrites = AlchemyExprTranslator._rewrites.copy()
51 _dialect_name = "oracle"
52 _has_reduction_filter_syntax = False
53 _require_order_by = (
54 *AlchemyExprTranslator._require_order_by,
55 ops.Reduction,
56 ops.Lag,
57 ops.Lead,
58 )
59
60 _forbids_frame_clause = (
61 *AlchemyExprTranslator._forbids_frame_clause,
62 ops.Lag,
63 ops.Lead,
64 )
65
66 _quote_column_names = True
67 _quote_table_names = True
68
69 type_mapper = OracleType
70
71
72 class OracleCompiler(AlchemyCompiler):
73 translator_class = OracleExprTranslator
74 support_values_syntax_in_select = False
75 supports_indexed_grouping_keys = False
76 null_limit = None
77 rewrites = AlchemyCompiler.rewrites | rewrite_sample
78
79
80 class Backend(BaseAlchemyBackend):
81 name = "oracle"
82 compiler = OracleCompiler
83 supports_create_or_replace = False
84 supports_temporary_tables = True
85 _temporary_prefix = "GLOBAL TEMPORARY"
86
87 def do_connect(
88 self,
89 *,
90 user: str,
91 password: str,
92 host: str = "localhost",
93 port: int = 1521,
94 database: str | None = None,
95 sid: str | None = None,
96 service_name: str | None = None,
97 dsn: str | None = None,
98 **_: Any,
99 ) -> None:
100 """Create an Ibis client using the passed connection parameters.
101
102 Parameters
103 ----------
104 user
105 Username
106 password
107 Password
108 host
109 Hostname
110 port
111 Port
112 database
113 Used as an Oracle service name if provided.
114 sid
115 Unique name of an Oracle Instance, used to construct a DSN if
116 provided.
117 service_name
118 Oracle service name, used to construct a DSN if provided. Only one
119 of database and service_name should be provided.
120 dsn
121 An Oracle Data Source Name. If provided, overrides all other
122 connection arguments except username and password.
123 """
124 # SID: unique name of an INSTANCE running an oracle process (a single, identifiable machine)
125 # service name: an ALIAS to one (or many) individual instances that can
126 # be hotswapped without the client knowing / caring
127 if dsn is not None and (
128 database is not None or sid is not None or service_name is not None
129 ):
130 warnings.warn(
131 "Oracle DSN provided, overriding additional provided connection arguments"
132 )
133
134 if service_name is not None and database is not None:
135 raise exc.IbisInputError(
136 "Values provided for both service_name and database. "
137 "Both of these values map to an Oracle service_name, "
138 "please provide only one of them."
139 )
140
141 if service_name is None and database is not None:
142 service_name = database
143
144 if dsn is None:
145 dsn = oracledb.makedsn(host, port, service_name=service_name, sid=sid)
146 url = sa.engine.url.make_url(f"oracle://{user}:{password}@{dsn}")
147
148 engine = sa.create_engine(
149 url,
150 poolclass=sa.pool.StaticPool,
151 # We set the statement cache size to 0 because Oracle will otherwise
152 # attempt to reuse prepared statements even if the type of the bound variable
153 # has changed.
154 # This is apparently accepted behavior.
155 # https://python-oracledb.readthedocs.io/en/latest/user_guide/appendix_b.html#statement-caching-in-thin-and-thick-modes
156 connect_args={"stmtcachesize": 0},
157 )
158
159 super().do_connect(engine)
160
161 def normalize_name(name):
162 if name is None:
163 return None
164 elif not name:
165 return ""
166 elif name.lower() == name:
167 return sa.sql.quoted_name(name, quote=True)
168 else:
169 return name
170
171 self.con.dialect.normalize_name = normalize_name
172
173 def _from_url(self, url: str, **kwargs):
174 return self.do_connect(user=url.username, password=url.password, dsn=url.host)
175
176 @property
177 def current_database(self) -> str:
178 return self._scalar_query("SELECT * FROM global_name")
179
180 def list_tables(self, like=None, database=None, schema=None):
181 """List the tables in the database.
182
183 Parameters
184 ----------
185 like
186 A pattern to use for listing tables.
187 database
188 (deprecated) The database to perform the list against.
189 schema
190 The schema to perform the list against.
191
192 ::: {.callout-warning}
193 ## `schema` refers to database hierarchy
194
195 The `schema` parameter does **not** refer to the column names and
196 types of `table`.
197 :::
198 """
199 if database is not None:
200 util.warn_deprecated(
201 "database",
202 instead="Use the `schema` keyword argument instead",
203 as_of="7.1",
204 removed_in="8.0",
205 )
206 schema = schema or database
207 tables = self.inspector.get_table_names(schema=schema)
208 views = self.inspector.get_view_names(schema=schema)
209 return self._filter_with_like(tables + views, like)
210
211 def _metadata(self, query: str) -> Iterable[tuple[str, dt.DataType]]:
212 from sqlalchemy_views import CreateView, DropView
213
214 name = util.gen_name("oracle_metadata")
215
216 try:
217 sg_expr = sg.parse_one(query, into=sg.exp.Table, dialect="oracle")
218 except sg.ParseError:
219 sg_expr = sg.parse_one(query, dialect="oracle")
220
221 # If query is a table, adjust the query accordingly
222 if isinstance(sg_expr, sg.exp.Table):
223 sg_expr = sg.select("*").from_(sg_expr)
224
225 query = sg_expr.sql(dialect="oracle")
226
227 view = sa.table(name)
228 create_view = CreateView(view, sa.text(query))
229 drop_view = DropView(view, if_exists=False)
230
231 t = sa.table(
232 "all_tab_columns",
233 sa.column("table_name"),
234 sa.column("column_name"),
235 sa.column("data_type"),
236 sa.column("data_precision"),
237 sa.column("data_scale"),
238 sa.column("nullable"),
239 )
240 metadata_query = sa.select(
241 t.c.column_name,
242 t.c.data_type,
243 t.c.data_precision,
244 t.c.data_scale,
245 (t.c.nullable == "Y").label("nullable"),
246 ).where(t.c.table_name == name)
247
248 with self.begin() as con:
249 con.execute(create_view)
250 try:
251 results = con.execute(metadata_query).fetchall()
252 finally:
253 # drop the view no matter what
254 con.execute(drop_view)
255
256 for name, type_string, precision, scale, nullable in results:
257 if precision is not None and scale is not None and precision != 0:
258 typ = dt.Decimal(precision=precision, scale=scale, nullable=nullable)
259 elif precision == 0:
260 # TODO: how to disambiguate between int and float here without inspecting the value?
261 typ = dt.float
262 else:
263 typ = OracleType.from_string(type_string, nullable=nullable)
264 yield name, typ
265
266 def _table_from_schema(
267 self,
268 name: str,
269 schema: sch.Schema,
270 temp: bool = False,
271 database: str | None = None,
272 **kwargs: Any,
273 ) -> sa.Table:
274 if temp:
275 kwargs["oracle_on_commit"] = "PRESERVE ROWS"
276 t = super()._table_from_schema(name, schema, temp, database, **kwargs)
277 if temp:
278 atexit.register(self._clean_up_tmp_table, t)
279 return t
280
281 def _clean_up_tmp_table(self, name: str) -> None:
282 tmptable = self._get_sqla_table(name, autoload=False)
283 with self.begin() as bind:
284 # global temporary tables cannot be dropped without first truncating them
285 #
286 # https://stackoverflow.com/questions/32423397/force-oracle-drop-global-temp-table
287 #
288 # ignore DatabaseError exceptions because the table may not exist
289 # because it's already been deleted
290 with contextlib.suppress(sa.exc.DatabaseError):
291 bind.exec_driver_sql(f'TRUNCATE TABLE "{tmptable.name}"')
292 with contextlib.suppress(sa.exc.DatabaseError):
293 tmptable.drop(bind=bind)
294
295 def _clean_up_cached_table(self, op):
296 self._clean_up_tmp_table(op.name)
297
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ibis/backends/oracle/__init__.py b/ibis/backends/oracle/__init__.py
--- a/ibis/backends/oracle/__init__.py
+++ b/ibis/backends/oracle/__init__.py
@@ -209,6 +209,7 @@
return self._filter_with_like(tables + views, like)
def _metadata(self, query: str) -> Iterable[tuple[str, dt.DataType]]:
+ from sqlalchemy import case
from sqlalchemy_views import CreateView, DropView
name = util.gen_name("oracle_metadata")
@@ -242,7 +243,7 @@
t.c.data_type,
t.c.data_precision,
t.c.data_scale,
- (t.c.nullable == "Y").label("nullable"),
+ case((t.c.nullable == "Y", True), else_=False).label("nullable"),
).where(t.c.table_name == name)
with self.begin() as con:
|
{"golden_diff": "diff --git a/ibis/backends/oracle/__init__.py b/ibis/backends/oracle/__init__.py\n--- a/ibis/backends/oracle/__init__.py\n+++ b/ibis/backends/oracle/__init__.py\n@@ -209,6 +209,7 @@\n return self._filter_with_like(tables + views, like)\n \n def _metadata(self, query: str) -> Iterable[tuple[str, dt.DataType]]:\n+ from sqlalchemy import case\n from sqlalchemy_views import CreateView, DropView\n \n name = util.gen_name(\"oracle_metadata\")\n@@ -242,7 +243,7 @@\n t.c.data_type,\n t.c.data_precision,\n t.c.data_scale,\n- (t.c.nullable == \"Y\").label(\"nullable\"),\n+ case((t.c.nullable == \"Y\", True), else_=False).label(\"nullable\"),\n ).where(t.c.table_name == name)\n \n with self.begin() as con:\n", "issue": "bug(oracle): Failing metadata query\n### What happened?\n\nMetadata query for Oracle is failing due to filtering the nullable column between SELECT and FROM statements, which is only possible after WHERE.\n\n### What version of ibis are you using?\n\n7.0.0\n\n### What backend(s) are you using, if any?\n\nOracle\n\n### Relevant log output\n\n```sh\nsqlalchemy.exc.DatabaseError: (oracledb.exceptions.DatabaseError) ORA-00923: FROM keyword not found where expected\r\n[SQL: SELECT all_tab_columns.column_name, all_tab_columns.data_type, all_tab_columns.data_precision, all_tab_columns.data_scale, all_tab_columns.nullable = :nullable_1 AS nullable\r\nFROM all_tab_columns\r\nWHERE all_tab_columns.table_name = :table_name_1]\r\n[parameters: {'nullable_1': 'Y', 'table_name_1': '_ibis_oracle_metadata_7djjvezdl5bnrmqkf6grsevvjq'}]\r\n```\n```\n\n\n### Code of Conduct\n\n- [X] I agree to follow this project's Code of Conduct\n", "before_files": [{"content": "\"\"\"The Oracle backend.\"\"\"\n\nfrom __future__ import annotations\n\nimport atexit\nimport contextlib\nimport sys\nimport warnings\nfrom typing import TYPE_CHECKING, Any\n\nimport oracledb\nimport sqlglot as sg\n\nfrom ibis import util\n\n# Wow, this is truly horrible\n# Get out your clippers, it's time to shave a yak.\n#\n# 1. snowflake-sqlalchemy doesn't support sqlalchemy 2.0\n# 2. oracledb is only supported in sqlalchemy 2.0\n# 3. Ergo, module hacking is required to avoid doing a silly amount of work\n# to create multiple lockfiles or port snowflake away from sqlalchemy\n# 4. Also the version needs to be spoofed to be >= 7 or else the cx_Oracle\n# dialect barfs\noracledb.__version__ = oracledb.version = \"7\"\n\nsys.modules[\"cx_Oracle\"] = oracledb\n\nimport sqlalchemy as sa # noqa: E402\n\nimport ibis.common.exceptions as exc # noqa: E402\nimport ibis.expr.datatypes as dt # noqa: E402\nimport ibis.expr.operations as ops # noqa: E402\nimport ibis.expr.schema as sch # noqa: E402\nfrom ibis.backends.base.sql.alchemy import ( # noqa: E402\n AlchemyCompiler,\n AlchemyExprTranslator,\n BaseAlchemyBackend,\n)\nfrom ibis.backends.oracle.datatypes import OracleType # noqa: E402\nfrom ibis.backends.oracle.registry import operation_registry # noqa: E402\nfrom ibis.expr.rewrites import rewrite_sample # noqa: E402\n\nif TYPE_CHECKING:\n from collections.abc import Iterable\n\n\nclass OracleExprTranslator(AlchemyExprTranslator):\n _registry = operation_registry.copy()\n _rewrites = AlchemyExprTranslator._rewrites.copy()\n _dialect_name = \"oracle\"\n _has_reduction_filter_syntax = False\n _require_order_by = (\n *AlchemyExprTranslator._require_order_by,\n ops.Reduction,\n ops.Lag,\n ops.Lead,\n )\n\n _forbids_frame_clause = (\n *AlchemyExprTranslator._forbids_frame_clause,\n ops.Lag,\n ops.Lead,\n )\n\n _quote_column_names = True\n _quote_table_names = True\n\n type_mapper = OracleType\n\n\nclass OracleCompiler(AlchemyCompiler):\n translator_class = OracleExprTranslator\n support_values_syntax_in_select = False\n supports_indexed_grouping_keys = False\n null_limit = None\n rewrites = AlchemyCompiler.rewrites | rewrite_sample\n\n\nclass Backend(BaseAlchemyBackend):\n name = \"oracle\"\n compiler = OracleCompiler\n supports_create_or_replace = False\n supports_temporary_tables = True\n _temporary_prefix = \"GLOBAL TEMPORARY\"\n\n def do_connect(\n self,\n *,\n user: str,\n password: str,\n host: str = \"localhost\",\n port: int = 1521,\n database: str | None = None,\n sid: str | None = None,\n service_name: str | None = None,\n dsn: str | None = None,\n **_: Any,\n ) -> None:\n \"\"\"Create an Ibis client using the passed connection parameters.\n\n Parameters\n ----------\n user\n Username\n password\n Password\n host\n Hostname\n port\n Port\n database\n Used as an Oracle service name if provided.\n sid\n Unique name of an Oracle Instance, used to construct a DSN if\n provided.\n service_name\n Oracle service name, used to construct a DSN if provided. Only one\n of database and service_name should be provided.\n dsn\n An Oracle Data Source Name. If provided, overrides all other\n connection arguments except username and password.\n \"\"\"\n # SID: unique name of an INSTANCE running an oracle process (a single, identifiable machine)\n # service name: an ALIAS to one (or many) individual instances that can\n # be hotswapped without the client knowing / caring\n if dsn is not None and (\n database is not None or sid is not None or service_name is not None\n ):\n warnings.warn(\n \"Oracle DSN provided, overriding additional provided connection arguments\"\n )\n\n if service_name is not None and database is not None:\n raise exc.IbisInputError(\n \"Values provided for both service_name and database. \"\n \"Both of these values map to an Oracle service_name, \"\n \"please provide only one of them.\"\n )\n\n if service_name is None and database is not None:\n service_name = database\n\n if dsn is None:\n dsn = oracledb.makedsn(host, port, service_name=service_name, sid=sid)\n url = sa.engine.url.make_url(f\"oracle://{user}:{password}@{dsn}\")\n\n engine = sa.create_engine(\n url,\n poolclass=sa.pool.StaticPool,\n # We set the statement cache size to 0 because Oracle will otherwise\n # attempt to reuse prepared statements even if the type of the bound variable\n # has changed.\n # This is apparently accepted behavior.\n # https://python-oracledb.readthedocs.io/en/latest/user_guide/appendix_b.html#statement-caching-in-thin-and-thick-modes\n connect_args={\"stmtcachesize\": 0},\n )\n\n super().do_connect(engine)\n\n def normalize_name(name):\n if name is None:\n return None\n elif not name:\n return \"\"\n elif name.lower() == name:\n return sa.sql.quoted_name(name, quote=True)\n else:\n return name\n\n self.con.dialect.normalize_name = normalize_name\n\n def _from_url(self, url: str, **kwargs):\n return self.do_connect(user=url.username, password=url.password, dsn=url.host)\n\n @property\n def current_database(self) -> str:\n return self._scalar_query(\"SELECT * FROM global_name\")\n\n def list_tables(self, like=None, database=None, schema=None):\n \"\"\"List the tables in the database.\n\n Parameters\n ----------\n like\n A pattern to use for listing tables.\n database\n (deprecated) The database to perform the list against.\n schema\n The schema to perform the list against.\n\n ::: {.callout-warning}\n ## `schema` refers to database hierarchy\n\n The `schema` parameter does **not** refer to the column names and\n types of `table`.\n :::\n \"\"\"\n if database is not None:\n util.warn_deprecated(\n \"database\",\n instead=\"Use the `schema` keyword argument instead\",\n as_of=\"7.1\",\n removed_in=\"8.0\",\n )\n schema = schema or database\n tables = self.inspector.get_table_names(schema=schema)\n views = self.inspector.get_view_names(schema=schema)\n return self._filter_with_like(tables + views, like)\n\n def _metadata(self, query: str) -> Iterable[tuple[str, dt.DataType]]:\n from sqlalchemy_views import CreateView, DropView\n\n name = util.gen_name(\"oracle_metadata\")\n\n try:\n sg_expr = sg.parse_one(query, into=sg.exp.Table, dialect=\"oracle\")\n except sg.ParseError:\n sg_expr = sg.parse_one(query, dialect=\"oracle\")\n\n # If query is a table, adjust the query accordingly\n if isinstance(sg_expr, sg.exp.Table):\n sg_expr = sg.select(\"*\").from_(sg_expr)\n\n query = sg_expr.sql(dialect=\"oracle\")\n\n view = sa.table(name)\n create_view = CreateView(view, sa.text(query))\n drop_view = DropView(view, if_exists=False)\n\n t = sa.table(\n \"all_tab_columns\",\n sa.column(\"table_name\"),\n sa.column(\"column_name\"),\n sa.column(\"data_type\"),\n sa.column(\"data_precision\"),\n sa.column(\"data_scale\"),\n sa.column(\"nullable\"),\n )\n metadata_query = sa.select(\n t.c.column_name,\n t.c.data_type,\n t.c.data_precision,\n t.c.data_scale,\n (t.c.nullable == \"Y\").label(\"nullable\"),\n ).where(t.c.table_name == name)\n\n with self.begin() as con:\n con.execute(create_view)\n try:\n results = con.execute(metadata_query).fetchall()\n finally:\n # drop the view no matter what\n con.execute(drop_view)\n\n for name, type_string, precision, scale, nullable in results:\n if precision is not None and scale is not None and precision != 0:\n typ = dt.Decimal(precision=precision, scale=scale, nullable=nullable)\n elif precision == 0:\n # TODO: how to disambiguate between int and float here without inspecting the value?\n typ = dt.float\n else:\n typ = OracleType.from_string(type_string, nullable=nullable)\n yield name, typ\n\n def _table_from_schema(\n self,\n name: str,\n schema: sch.Schema,\n temp: bool = False,\n database: str | None = None,\n **kwargs: Any,\n ) -> sa.Table:\n if temp:\n kwargs[\"oracle_on_commit\"] = \"PRESERVE ROWS\"\n t = super()._table_from_schema(name, schema, temp, database, **kwargs)\n if temp:\n atexit.register(self._clean_up_tmp_table, t)\n return t\n\n def _clean_up_tmp_table(self, name: str) -> None:\n tmptable = self._get_sqla_table(name, autoload=False)\n with self.begin() as bind:\n # global temporary tables cannot be dropped without first truncating them\n #\n # https://stackoverflow.com/questions/32423397/force-oracle-drop-global-temp-table\n #\n # ignore DatabaseError exceptions because the table may not exist\n # because it's already been deleted\n with contextlib.suppress(sa.exc.DatabaseError):\n bind.exec_driver_sql(f'TRUNCATE TABLE \"{tmptable.name}\"')\n with contextlib.suppress(sa.exc.DatabaseError):\n tmptable.drop(bind=bind)\n\n def _clean_up_cached_table(self, op):\n self._clean_up_tmp_table(op.name)\n", "path": "ibis/backends/oracle/__init__.py"}], "after_files": [{"content": "\"\"\"The Oracle backend.\"\"\"\n\nfrom __future__ import annotations\n\nimport atexit\nimport contextlib\nimport sys\nimport warnings\nfrom typing import TYPE_CHECKING, Any\n\nimport oracledb\nimport sqlglot as sg\n\nfrom ibis import util\n\n# Wow, this is truly horrible\n# Get out your clippers, it's time to shave a yak.\n#\n# 1. snowflake-sqlalchemy doesn't support sqlalchemy 2.0\n# 2. oracledb is only supported in sqlalchemy 2.0\n# 3. Ergo, module hacking is required to avoid doing a silly amount of work\n# to create multiple lockfiles or port snowflake away from sqlalchemy\n# 4. Also the version needs to be spoofed to be >= 7 or else the cx_Oracle\n# dialect barfs\noracledb.__version__ = oracledb.version = \"7\"\n\nsys.modules[\"cx_Oracle\"] = oracledb\n\nimport sqlalchemy as sa # noqa: E402\n\nimport ibis.common.exceptions as exc # noqa: E402\nimport ibis.expr.datatypes as dt # noqa: E402\nimport ibis.expr.operations as ops # noqa: E402\nimport ibis.expr.schema as sch # noqa: E402\nfrom ibis.backends.base.sql.alchemy import ( # noqa: E402\n AlchemyCompiler,\n AlchemyExprTranslator,\n BaseAlchemyBackend,\n)\nfrom ibis.backends.oracle.datatypes import OracleType # noqa: E402\nfrom ibis.backends.oracle.registry import operation_registry # noqa: E402\nfrom ibis.expr.rewrites import rewrite_sample # noqa: E402\n\nif TYPE_CHECKING:\n from collections.abc import Iterable\n\n\nclass OracleExprTranslator(AlchemyExprTranslator):\n _registry = operation_registry.copy()\n _rewrites = AlchemyExprTranslator._rewrites.copy()\n _dialect_name = \"oracle\"\n _has_reduction_filter_syntax = False\n _require_order_by = (\n *AlchemyExprTranslator._require_order_by,\n ops.Reduction,\n ops.Lag,\n ops.Lead,\n )\n\n _forbids_frame_clause = (\n *AlchemyExprTranslator._forbids_frame_clause,\n ops.Lag,\n ops.Lead,\n )\n\n _quote_column_names = True\n _quote_table_names = True\n\n type_mapper = OracleType\n\n\nclass OracleCompiler(AlchemyCompiler):\n translator_class = OracleExprTranslator\n support_values_syntax_in_select = False\n supports_indexed_grouping_keys = False\n null_limit = None\n rewrites = AlchemyCompiler.rewrites | rewrite_sample\n\n\nclass Backend(BaseAlchemyBackend):\n name = \"oracle\"\n compiler = OracleCompiler\n supports_create_or_replace = False\n supports_temporary_tables = True\n _temporary_prefix = \"GLOBAL TEMPORARY\"\n\n def do_connect(\n self,\n *,\n user: str,\n password: str,\n host: str = \"localhost\",\n port: int = 1521,\n database: str | None = None,\n sid: str | None = None,\n service_name: str | None = None,\n dsn: str | None = None,\n **_: Any,\n ) -> None:\n \"\"\"Create an Ibis client using the passed connection parameters.\n\n Parameters\n ----------\n user\n Username\n password\n Password\n host\n Hostname\n port\n Port\n database\n Used as an Oracle service name if provided.\n sid\n Unique name of an Oracle Instance, used to construct a DSN if\n provided.\n service_name\n Oracle service name, used to construct a DSN if provided. Only one\n of database and service_name should be provided.\n dsn\n An Oracle Data Source Name. If provided, overrides all other\n connection arguments except username and password.\n \"\"\"\n # SID: unique name of an INSTANCE running an oracle process (a single, identifiable machine)\n # service name: an ALIAS to one (or many) individual instances that can\n # be hotswapped without the client knowing / caring\n if dsn is not None and (\n database is not None or sid is not None or service_name is not None\n ):\n warnings.warn(\n \"Oracle DSN provided, overriding additional provided connection arguments\"\n )\n\n if service_name is not None and database is not None:\n raise exc.IbisInputError(\n \"Values provided for both service_name and database. \"\n \"Both of these values map to an Oracle service_name, \"\n \"please provide only one of them.\"\n )\n\n if service_name is None and database is not None:\n service_name = database\n\n if dsn is None:\n dsn = oracledb.makedsn(host, port, service_name=service_name, sid=sid)\n url = sa.engine.url.make_url(f\"oracle://{user}:{password}@{dsn}\")\n\n engine = sa.create_engine(\n url,\n poolclass=sa.pool.StaticPool,\n # We set the statement cache size to 0 because Oracle will otherwise\n # attempt to reuse prepared statements even if the type of the bound variable\n # has changed.\n # This is apparently accepted behavior.\n # https://python-oracledb.readthedocs.io/en/latest/user_guide/appendix_b.html#statement-caching-in-thin-and-thick-modes\n connect_args={\"stmtcachesize\": 0},\n )\n\n super().do_connect(engine)\n\n def normalize_name(name):\n if name is None:\n return None\n elif not name:\n return \"\"\n elif name.lower() == name:\n return sa.sql.quoted_name(name, quote=True)\n else:\n return name\n\n self.con.dialect.normalize_name = normalize_name\n\n def _from_url(self, url: str, **kwargs):\n return self.do_connect(user=url.username, password=url.password, dsn=url.host)\n\n @property\n def current_database(self) -> str:\n return self._scalar_query(\"SELECT * FROM global_name\")\n\n def list_tables(self, like=None, database=None, schema=None):\n \"\"\"List the tables in the database.\n\n Parameters\n ----------\n like\n A pattern to use for listing tables.\n database\n (deprecated) The database to perform the list against.\n schema\n The schema to perform the list against.\n\n ::: {.callout-warning}\n ## `schema` refers to database hierarchy\n\n The `schema` parameter does **not** refer to the column names and\n types of `table`.\n :::\n \"\"\"\n if database is not None:\n util.warn_deprecated(\n \"database\",\n instead=\"Use the `schema` keyword argument instead\",\n as_of=\"7.1\",\n removed_in=\"8.0\",\n )\n schema = schema or database\n tables = self.inspector.get_table_names(schema=schema)\n views = self.inspector.get_view_names(schema=schema)\n return self._filter_with_like(tables + views, like)\n\n def _metadata(self, query: str) -> Iterable[tuple[str, dt.DataType]]:\n from sqlalchemy import case\n from sqlalchemy_views import CreateView, DropView\n\n name = util.gen_name(\"oracle_metadata\")\n\n try:\n sg_expr = sg.parse_one(query, into=sg.exp.Table, dialect=\"oracle\")\n except sg.ParseError:\n sg_expr = sg.parse_one(query, dialect=\"oracle\")\n\n # If query is a table, adjust the query accordingly\n if isinstance(sg_expr, sg.exp.Table):\n sg_expr = sg.select(\"*\").from_(sg_expr)\n\n query = sg_expr.sql(dialect=\"oracle\")\n\n view = sa.table(name)\n create_view = CreateView(view, sa.text(query))\n drop_view = DropView(view, if_exists=False)\n\n t = sa.table(\n \"all_tab_columns\",\n sa.column(\"table_name\"),\n sa.column(\"column_name\"),\n sa.column(\"data_type\"),\n sa.column(\"data_precision\"),\n sa.column(\"data_scale\"),\n sa.column(\"nullable\"),\n )\n metadata_query = sa.select(\n t.c.column_name,\n t.c.data_type,\n t.c.data_precision,\n t.c.data_scale,\n case((t.c.nullable == \"Y\", True), else_=False).label(\"nullable\"),\n ).where(t.c.table_name == name)\n\n with self.begin() as con:\n con.execute(create_view)\n try:\n results = con.execute(metadata_query).fetchall()\n finally:\n # drop the view no matter what\n con.execute(drop_view)\n\n for name, type_string, precision, scale, nullable in results:\n if precision is not None and scale is not None and precision != 0:\n typ = dt.Decimal(precision=precision, scale=scale, nullable=nullable)\n elif precision == 0:\n # TODO: how to disambiguate between int and float here without inspecting the value?\n typ = dt.float\n else:\n typ = OracleType.from_string(type_string, nullable=nullable)\n yield name, typ\n\n def _table_from_schema(\n self,\n name: str,\n schema: sch.Schema,\n temp: bool = False,\n database: str | None = None,\n **kwargs: Any,\n ) -> sa.Table:\n if temp:\n kwargs[\"oracle_on_commit\"] = \"PRESERVE ROWS\"\n t = super()._table_from_schema(name, schema, temp, database, **kwargs)\n if temp:\n atexit.register(self._clean_up_tmp_table, t)\n return t\n\n def _clean_up_tmp_table(self, name: str) -> None:\n tmptable = self._get_sqla_table(name, autoload=False)\n with self.begin() as bind:\n # global temporary tables cannot be dropped without first truncating them\n #\n # https://stackoverflow.com/questions/32423397/force-oracle-drop-global-temp-table\n #\n # ignore DatabaseError exceptions because the table may not exist\n # because it's already been deleted\n with contextlib.suppress(sa.exc.DatabaseError):\n bind.exec_driver_sql(f'TRUNCATE TABLE \"{tmptable.name}\"')\n with contextlib.suppress(sa.exc.DatabaseError):\n tmptable.drop(bind=bind)\n\n def _clean_up_cached_table(self, op):\n self._clean_up_tmp_table(op.name)\n", "path": "ibis/backends/oracle/__init__.py"}]}
| 3,557 | 220 |
gh_patches_debug_20675
|
rasdani/github-patches
|
git_diff
|
ansible-collections__amazon.aws-425
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add new example for `amazon.aws.aws_secret` that includes use of `region=` and `aws_profile=` parameters
### Summary
In order to get my `amazon.aws.aws_secret` lookup to work, I had to add the `region=` and `aws_profile=` parameters. I found these through Google searches and guesswork... Would it be worth having a new example that shows them in use?
For example, to take one of the existing examples and modify it a little:
```
- name: lookup secretsmanager secret in the current region using the nested feature - specify region and AWS Profile
debug: msg="{{ lookup('amazon.aws.aws_secret', 'secrets.environments.production.password', nested=true, region=us-east-1, aws_profile=dev-profile) }}"
# The secret can be queried using the following syntax: `aws_secret_object_name.key1.key2.key3`.
# If an object is of the form `{"key1":{"key2":{"key3":1}}}` the query would return the value `1`.
# region= should be changed to reflect what region AWS Secret is stored
# aws_profile= should reflect what AWS Profile to use, that has access to the AWS Secret
```
Just a thought, if you feel this isn't needed, please ignore.
Thanks!
### Issue Type
Documentation Report
### Component Name
amazon.aws.aws_secret
### Ansible Version
```console (paste below)
$ ansible --version
```
### Collection Versions
```console (paste below)
$ ansible-galaxy collection list
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
### OS / Environment
_No response_
### Additional Information
_No response_
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/lookup/aws_secret.py`
Content:
```
1 # Copyright: (c) 2018, Aaron Smith <[email protected]>
2 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
3
4 from __future__ import (absolute_import, division, print_function)
5 __metaclass__ = type
6
7 DOCUMENTATION = r'''
8 lookup: aws_secret
9 author:
10 - Aaron Smith <[email protected]>
11 requirements:
12 - python >= 3.6
13 - boto3
14 - botocore >= 1.16.0
15 extends_documentation_fragment:
16 - amazon.aws.aws_credentials
17 - amazon.aws.aws_region
18
19 short_description: Look up secrets stored in AWS Secrets Manager.
20 description:
21 - Look up secrets stored in AWS Secrets Manager provided the caller
22 has the appropriate permissions to read the secret.
23 - Lookup is based on the secret's I(Name) value.
24 - Optional parameters can be passed into this lookup; I(version_id) and I(version_stage)
25 options:
26 _terms:
27 description: Name of the secret to look up in AWS Secrets Manager.
28 required: True
29 bypath:
30 description: A boolean to indicate whether the parameter is provided as a hierarchy.
31 default: false
32 type: boolean
33 version_added: 1.4.0
34 nested:
35 description: A boolean to indicate the secret contains nested values.
36 type: boolean
37 default: false
38 version_added: 1.4.0
39 version_id:
40 description: Version of the secret(s).
41 required: False
42 version_stage:
43 description: Stage of the secret version.
44 required: False
45 join:
46 description:
47 - Join two or more entries to form an extended secret.
48 - This is useful for overcoming the 4096 character limit imposed by AWS.
49 - No effect when used with I(bypath).
50 type: boolean
51 default: false
52 on_missing:
53 description:
54 - Action to take if the secret is missing.
55 - C(error) will raise a fatal error when the secret is missing.
56 - C(skip) will silently ignore the missing secret.
57 - C(warn) will skip over the missing secret but issue a warning.
58 default: error
59 type: string
60 choices: ['error', 'skip', 'warn']
61 on_denied:
62 description:
63 - Action to take if access to the secret is denied.
64 - C(error) will raise a fatal error when access to the secret is denied.
65 - C(skip) will silently ignore the denied secret.
66 - C(warn) will skip over the denied secret but issue a warning.
67 default: error
68 type: string
69 choices: ['error', 'skip', 'warn']
70 '''
71
72 EXAMPLES = r"""
73 - name: lookup secretsmanager secret in the current region
74 debug: msg="{{ lookup('amazon.aws.aws_secret', '/path/to/secrets', bypath=true) }}"
75
76 - name: Create RDS instance with aws_secret lookup for password param
77 rds:
78 command: create
79 instance_name: app-db
80 db_engine: MySQL
81 size: 10
82 instance_type: db.m1.small
83 username: dbadmin
84 password: "{{ lookup('amazon.aws.aws_secret', 'DbSecret') }}"
85 tags:
86 Environment: staging
87
88 - name: skip if secret does not exist
89 debug: msg="{{ lookup('amazon.aws.aws_secret', 'secret-not-exist', on_missing='skip')}}"
90
91 - name: warn if access to the secret is denied
92 debug: msg="{{ lookup('amazon.aws.aws_secret', 'secret-denied', on_denied='warn')}}"
93
94 - name: lookup secretsmanager secret in the current region using the nested feature
95 debug: msg="{{ lookup('amazon.aws.aws_secret', 'secrets.environments.production.password', nested=true) }}"
96 # The secret can be queried using the following syntax: `aws_secret_object_name.key1.key2.key3`.
97 # If an object is of the form `{"key1":{"key2":{"key3":1}}}` the query would return the value `1`.
98 """
99
100 RETURN = r"""
101 _raw:
102 description:
103 Returns the value of the secret stored in AWS Secrets Manager.
104 """
105
106 import json
107
108 try:
109 import boto3
110 import botocore
111 except ImportError:
112 pass # will be captured by imported HAS_BOTO3
113
114 from ansible.errors import AnsibleError
115 from ansible.module_utils.six import string_types
116 from ansible.module_utils._text import to_native
117 from ansible.plugins.lookup import LookupBase
118
119 from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
120 from ansible_collections.amazon.aws.plugins.module_utils.ec2 import HAS_BOTO3
121
122
123 def _boto3_conn(region, credentials):
124 boto_profile = credentials.pop('aws_profile', None)
125
126 try:
127 connection = boto3.session.Session(profile_name=boto_profile).client('secretsmanager', region, **credentials)
128 except (botocore.exceptions.ProfileNotFound, botocore.exceptions.PartialCredentialsError) as e:
129 if boto_profile:
130 try:
131 connection = boto3.session.Session(profile_name=boto_profile).client('secretsmanager', region)
132 except (botocore.exceptions.ProfileNotFound, botocore.exceptions.PartialCredentialsError) as e:
133 raise AnsibleError("Insufficient credentials found.")
134 else:
135 raise AnsibleError("Insufficient credentials found.")
136 return connection
137
138
139 class LookupModule(LookupBase):
140 def run(self, terms, variables=None, boto_profile=None, aws_profile=None,
141 aws_secret_key=None, aws_access_key=None, aws_security_token=None, region=None,
142 bypath=False, nested=False, join=False, version_stage=None, version_id=None, on_missing='error',
143 on_denied='error'):
144 '''
145 :arg terms: a list of lookups to run.
146 e.g. ['parameter_name', 'parameter_name_too' ]
147 :kwarg variables: ansible variables active at the time of the lookup
148 :kwarg aws_secret_key: identity of the AWS key to use
149 :kwarg aws_access_key: AWS secret key (matching identity)
150 :kwarg aws_security_token: AWS session key if using STS
151 :kwarg decrypt: Set to True to get decrypted parameters
152 :kwarg region: AWS region in which to do the lookup
153 :kwarg bypath: Set to True to do a lookup of variables under a path
154 :kwarg nested: Set to True to do a lookup of nested secrets
155 :kwarg join: Join two or more entries to form an extended secret
156 :kwarg version_stage: Stage of the secret version
157 :kwarg version_id: Version of the secret(s)
158 :kwarg on_missing: Action to take if the secret is missing
159 :kwarg on_denied: Action to take if access to the secret is denied
160 :returns: A list of parameter values or a list of dictionaries if bypath=True.
161 '''
162 if not HAS_BOTO3:
163 raise AnsibleError('botocore and boto3 are required for aws_ssm lookup.')
164
165 missing = on_missing.lower()
166 if not isinstance(missing, string_types) or missing not in ['error', 'warn', 'skip']:
167 raise AnsibleError('"on_missing" must be a string and one of "error", "warn" or "skip", not %s' % missing)
168
169 denied = on_denied.lower()
170 if not isinstance(denied, string_types) or denied not in ['error', 'warn', 'skip']:
171 raise AnsibleError('"on_denied" must be a string and one of "error", "warn" or "skip", not %s' % denied)
172
173 credentials = {}
174 if aws_profile:
175 credentials['aws_profile'] = aws_profile
176 else:
177 credentials['aws_profile'] = boto_profile
178 credentials['aws_secret_access_key'] = aws_secret_key
179 credentials['aws_access_key_id'] = aws_access_key
180 credentials['aws_session_token'] = aws_security_token
181
182 # fallback to IAM role credentials
183 if not credentials['aws_profile'] and not (
184 credentials['aws_access_key_id'] and credentials['aws_secret_access_key']):
185 session = botocore.session.get_session()
186 if session.get_credentials() is not None:
187 credentials['aws_access_key_id'] = session.get_credentials().access_key
188 credentials['aws_secret_access_key'] = session.get_credentials().secret_key
189 credentials['aws_session_token'] = session.get_credentials().token
190
191 client = _boto3_conn(region, credentials)
192
193 if bypath:
194 secrets = {}
195 for term in terms:
196 try:
197 response = client.list_secrets(Filters=[{'Key': 'name', 'Values': [term]}])
198
199 if 'SecretList' in response:
200 for secret in response['SecretList']:
201 secrets.update({secret['Name']: self.get_secret_value(secret['Name'], client,
202 on_missing=missing,
203 on_denied=denied)})
204 except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
205 raise AnsibleError("Failed to retrieve secret: %s" % to_native(e))
206 secrets = [secrets]
207 else:
208 secrets = []
209 for term in terms:
210 value = self.get_secret_value(term, client,
211 version_stage=version_stage, version_id=version_id,
212 on_missing=missing, on_denied=denied, nested=nested)
213 if value:
214 secrets.append(value)
215 if join:
216 joined_secret = []
217 joined_secret.append(''.join(secrets))
218 return joined_secret
219
220 return secrets
221
222 def get_secret_value(self, term, client, version_stage=None, version_id=None, on_missing=None, on_denied=None, nested=False):
223 params = {}
224 params['SecretId'] = term
225 if version_id:
226 params['VersionId'] = version_id
227 if version_stage:
228 params['VersionStage'] = version_stage
229 if nested:
230 if len(term.split('.')) < 2:
231 raise AnsibleError("Nested query must use the following syntax: `aws_secret_name.<key_name>.<key_name>")
232 secret_name = term.split('.')[0]
233 params['SecretId'] = secret_name
234
235 try:
236 response = client.get_secret_value(**params)
237 if 'SecretBinary' in response:
238 return response['SecretBinary']
239 if 'SecretString' in response:
240 if nested:
241 secrets = []
242 query = term.split('.')[1:]
243 secret_string = json.loads(response['SecretString'])
244 ret_val = secret_string
245 for key in query:
246 if key in ret_val:
247 ret_val = ret_val[key]
248 else:
249 raise AnsibleError("Successfully retrieved secret but there exists no key {0} in the secret".format(key))
250 return str(ret_val)
251 else:
252 return response['SecretString']
253 except is_boto3_error_code('ResourceNotFoundException'):
254 if on_missing == 'error':
255 raise AnsibleError("Failed to find secret %s (ResourceNotFound)" % term)
256 elif on_missing == 'warn':
257 self._display.warning('Skipping, did not find secret %s' % term)
258 except is_boto3_error_code('AccessDeniedException'): # pylint: disable=duplicate-except
259 if on_denied == 'error':
260 raise AnsibleError("Failed to access secret %s (AccessDenied)" % term)
261 elif on_denied == 'warn':
262 self._display.warning('Skipping, access denied for secret %s' % term)
263 except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except
264 raise AnsibleError("Failed to retrieve secret: %s" % to_native(e))
265
266 return None
267
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugins/lookup/aws_secret.py b/plugins/lookup/aws_secret.py
--- a/plugins/lookup/aws_secret.py
+++ b/plugins/lookup/aws_secret.py
@@ -95,6 +95,14 @@
debug: msg="{{ lookup('amazon.aws.aws_secret', 'secrets.environments.production.password', nested=true) }}"
# The secret can be queried using the following syntax: `aws_secret_object_name.key1.key2.key3`.
# If an object is of the form `{"key1":{"key2":{"key3":1}}}` the query would return the value `1`.
+ - name: lookup secretsmanager secret in a specific region using specified region and aws profile using nested feature
+ debug: >
+ msg="{{ lookup('amazon.aws.aws_secret', 'secrets.environments.production.password', region=region, aws_profile=aws_profile,
+ aws_access_key=aws_access_key, aws_secret_key=aws_secret_key, nested=true) }}"
+ # The secret can be queried using the following syntax: `aws_secret_object_name.key1.key2.key3`.
+ # If an object is of the form `{"key1":{"key2":{"key3":1}}}` the query would return the value `1`.
+ # Region is the AWS region where the AWS secret is stored.
+ # AWS_profile is the aws profile to use, that has access to the AWS secret.
"""
RETURN = r"""
|
{"golden_diff": "diff --git a/plugins/lookup/aws_secret.py b/plugins/lookup/aws_secret.py\n--- a/plugins/lookup/aws_secret.py\n+++ b/plugins/lookup/aws_secret.py\n@@ -95,6 +95,14 @@\n debug: msg=\"{{ lookup('amazon.aws.aws_secret', 'secrets.environments.production.password', nested=true) }}\"\n # The secret can be queried using the following syntax: `aws_secret_object_name.key1.key2.key3`.\n # If an object is of the form `{\"key1\":{\"key2\":{\"key3\":1}}}` the query would return the value `1`.\n+ - name: lookup secretsmanager secret in a specific region using specified region and aws profile using nested feature\n+ debug: >\n+ msg=\"{{ lookup('amazon.aws.aws_secret', 'secrets.environments.production.password', region=region, aws_profile=aws_profile,\n+ aws_access_key=aws_access_key, aws_secret_key=aws_secret_key, nested=true) }}\"\n+ # The secret can be queried using the following syntax: `aws_secret_object_name.key1.key2.key3`.\n+ # If an object is of the form `{\"key1\":{\"key2\":{\"key3\":1}}}` the query would return the value `1`.\n+ # Region is the AWS region where the AWS secret is stored.\n+ # AWS_profile is the aws profile to use, that has access to the AWS secret.\n \"\"\"\n \n RETURN = r\"\"\"\n", "issue": "Add new example for `amazon.aws.aws_secret` that includes use of `region=` and `aws_profile=` parameters\n### Summary\r\n\r\nIn order to get my `amazon.aws.aws_secret` lookup to work, I had to add the `region=` and `aws_profile=` parameters. I found these through Google searches and guesswork... Would it be worth having a new example that shows them in use?\r\n\r\nFor example, to take one of the existing examples and modify it a little:\r\n```\r\n- name: lookup secretsmanager secret in the current region using the nested feature - specify region and AWS Profile\r\n debug: msg=\"{{ lookup('amazon.aws.aws_secret', 'secrets.environments.production.password', nested=true, region=us-east-1, aws_profile=dev-profile) }}\"\r\n # The secret can be queried using the following syntax: `aws_secret_object_name.key1.key2.key3`.\r\n # If an object is of the form `{\"key1\":{\"key2\":{\"key3\":1}}}` the query would return the value `1`.\r\n # region= should be changed to reflect what region AWS Secret is stored\r\n # aws_profile= should reflect what AWS Profile to use, that has access to the AWS Secret\r\n```\r\nJust a thought, if you feel this isn't needed, please ignore.\r\n\r\nThanks!\r\n\r\n### Issue Type\r\n\r\nDocumentation Report\r\n\r\n### Component Name\r\n\r\namazon.aws.aws_secret\r\n\r\n### Ansible Version\r\n\r\n```console (paste below)\r\n$ ansible --version\r\n\r\n```\r\n\r\n\r\n### Collection Versions\r\n\r\n```console (paste below)\r\n$ ansible-galaxy collection list\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\n\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\n_No response_\r\n\r\n### Additional Information\r\n\r\n_No response_\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "# Copyright: (c) 2018, Aaron Smith <[email protected]>\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nDOCUMENTATION = r'''\nlookup: aws_secret\nauthor:\n - Aaron Smith <[email protected]>\nrequirements:\n - python >= 3.6\n - boto3\n - botocore >= 1.16.0\nextends_documentation_fragment:\n- amazon.aws.aws_credentials\n- amazon.aws.aws_region\n\nshort_description: Look up secrets stored in AWS Secrets Manager.\ndescription:\n - Look up secrets stored in AWS Secrets Manager provided the caller\n has the appropriate permissions to read the secret.\n - Lookup is based on the secret's I(Name) value.\n - Optional parameters can be passed into this lookup; I(version_id) and I(version_stage)\noptions:\n _terms:\n description: Name of the secret to look up in AWS Secrets Manager.\n required: True\n bypath:\n description: A boolean to indicate whether the parameter is provided as a hierarchy.\n default: false\n type: boolean\n version_added: 1.4.0\n nested:\n description: A boolean to indicate the secret contains nested values.\n type: boolean\n default: false\n version_added: 1.4.0\n version_id:\n description: Version of the secret(s).\n required: False\n version_stage:\n description: Stage of the secret version.\n required: False\n join:\n description:\n - Join two or more entries to form an extended secret.\n - This is useful for overcoming the 4096 character limit imposed by AWS.\n - No effect when used with I(bypath).\n type: boolean\n default: false\n on_missing:\n description:\n - Action to take if the secret is missing.\n - C(error) will raise a fatal error when the secret is missing.\n - C(skip) will silently ignore the missing secret.\n - C(warn) will skip over the missing secret but issue a warning.\n default: error\n type: string\n choices: ['error', 'skip', 'warn']\n on_denied:\n description:\n - Action to take if access to the secret is denied.\n - C(error) will raise a fatal error when access to the secret is denied.\n - C(skip) will silently ignore the denied secret.\n - C(warn) will skip over the denied secret but issue a warning.\n default: error\n type: string\n choices: ['error', 'skip', 'warn']\n'''\n\nEXAMPLES = r\"\"\"\n - name: lookup secretsmanager secret in the current region\n debug: msg=\"{{ lookup('amazon.aws.aws_secret', '/path/to/secrets', bypath=true) }}\"\n\n - name: Create RDS instance with aws_secret lookup for password param\n rds:\n command: create\n instance_name: app-db\n db_engine: MySQL\n size: 10\n instance_type: db.m1.small\n username: dbadmin\n password: \"{{ lookup('amazon.aws.aws_secret', 'DbSecret') }}\"\n tags:\n Environment: staging\n\n - name: skip if secret does not exist\n debug: msg=\"{{ lookup('amazon.aws.aws_secret', 'secret-not-exist', on_missing='skip')}}\"\n\n - name: warn if access to the secret is denied\n debug: msg=\"{{ lookup('amazon.aws.aws_secret', 'secret-denied', on_denied='warn')}}\"\n\n - name: lookup secretsmanager secret in the current region using the nested feature\n debug: msg=\"{{ lookup('amazon.aws.aws_secret', 'secrets.environments.production.password', nested=true) }}\"\n # The secret can be queried using the following syntax: `aws_secret_object_name.key1.key2.key3`.\n # If an object is of the form `{\"key1\":{\"key2\":{\"key3\":1}}}` the query would return the value `1`.\n\"\"\"\n\nRETURN = r\"\"\"\n_raw:\n description:\n Returns the value of the secret stored in AWS Secrets Manager.\n\"\"\"\n\nimport json\n\ntry:\n import boto3\n import botocore\nexcept ImportError:\n pass # will be captured by imported HAS_BOTO3\n\nfrom ansible.errors import AnsibleError\nfrom ansible.module_utils.six import string_types\nfrom ansible.module_utils._text import to_native\nfrom ansible.plugins.lookup import LookupBase\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import HAS_BOTO3\n\n\ndef _boto3_conn(region, credentials):\n boto_profile = credentials.pop('aws_profile', None)\n\n try:\n connection = boto3.session.Session(profile_name=boto_profile).client('secretsmanager', region, **credentials)\n except (botocore.exceptions.ProfileNotFound, botocore.exceptions.PartialCredentialsError) as e:\n if boto_profile:\n try:\n connection = boto3.session.Session(profile_name=boto_profile).client('secretsmanager', region)\n except (botocore.exceptions.ProfileNotFound, botocore.exceptions.PartialCredentialsError) as e:\n raise AnsibleError(\"Insufficient credentials found.\")\n else:\n raise AnsibleError(\"Insufficient credentials found.\")\n return connection\n\n\nclass LookupModule(LookupBase):\n def run(self, terms, variables=None, boto_profile=None, aws_profile=None,\n aws_secret_key=None, aws_access_key=None, aws_security_token=None, region=None,\n bypath=False, nested=False, join=False, version_stage=None, version_id=None, on_missing='error',\n on_denied='error'):\n '''\n :arg terms: a list of lookups to run.\n e.g. ['parameter_name', 'parameter_name_too' ]\n :kwarg variables: ansible variables active at the time of the lookup\n :kwarg aws_secret_key: identity of the AWS key to use\n :kwarg aws_access_key: AWS secret key (matching identity)\n :kwarg aws_security_token: AWS session key if using STS\n :kwarg decrypt: Set to True to get decrypted parameters\n :kwarg region: AWS region in which to do the lookup\n :kwarg bypath: Set to True to do a lookup of variables under a path\n :kwarg nested: Set to True to do a lookup of nested secrets\n :kwarg join: Join two or more entries to form an extended secret\n :kwarg version_stage: Stage of the secret version\n :kwarg version_id: Version of the secret(s)\n :kwarg on_missing: Action to take if the secret is missing\n :kwarg on_denied: Action to take if access to the secret is denied\n :returns: A list of parameter values or a list of dictionaries if bypath=True.\n '''\n if not HAS_BOTO3:\n raise AnsibleError('botocore and boto3 are required for aws_ssm lookup.')\n\n missing = on_missing.lower()\n if not isinstance(missing, string_types) or missing not in ['error', 'warn', 'skip']:\n raise AnsibleError('\"on_missing\" must be a string and one of \"error\", \"warn\" or \"skip\", not %s' % missing)\n\n denied = on_denied.lower()\n if not isinstance(denied, string_types) or denied not in ['error', 'warn', 'skip']:\n raise AnsibleError('\"on_denied\" must be a string and one of \"error\", \"warn\" or \"skip\", not %s' % denied)\n\n credentials = {}\n if aws_profile:\n credentials['aws_profile'] = aws_profile\n else:\n credentials['aws_profile'] = boto_profile\n credentials['aws_secret_access_key'] = aws_secret_key\n credentials['aws_access_key_id'] = aws_access_key\n credentials['aws_session_token'] = aws_security_token\n\n # fallback to IAM role credentials\n if not credentials['aws_profile'] and not (\n credentials['aws_access_key_id'] and credentials['aws_secret_access_key']):\n session = botocore.session.get_session()\n if session.get_credentials() is not None:\n credentials['aws_access_key_id'] = session.get_credentials().access_key\n credentials['aws_secret_access_key'] = session.get_credentials().secret_key\n credentials['aws_session_token'] = session.get_credentials().token\n\n client = _boto3_conn(region, credentials)\n\n if bypath:\n secrets = {}\n for term in terms:\n try:\n response = client.list_secrets(Filters=[{'Key': 'name', 'Values': [term]}])\n\n if 'SecretList' in response:\n for secret in response['SecretList']:\n secrets.update({secret['Name']: self.get_secret_value(secret['Name'], client,\n on_missing=missing,\n on_denied=denied)})\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n raise AnsibleError(\"Failed to retrieve secret: %s\" % to_native(e))\n secrets = [secrets]\n else:\n secrets = []\n for term in terms:\n value = self.get_secret_value(term, client,\n version_stage=version_stage, version_id=version_id,\n on_missing=missing, on_denied=denied, nested=nested)\n if value:\n secrets.append(value)\n if join:\n joined_secret = []\n joined_secret.append(''.join(secrets))\n return joined_secret\n\n return secrets\n\n def get_secret_value(self, term, client, version_stage=None, version_id=None, on_missing=None, on_denied=None, nested=False):\n params = {}\n params['SecretId'] = term\n if version_id:\n params['VersionId'] = version_id\n if version_stage:\n params['VersionStage'] = version_stage\n if nested:\n if len(term.split('.')) < 2:\n raise AnsibleError(\"Nested query must use the following syntax: `aws_secret_name.<key_name>.<key_name>\")\n secret_name = term.split('.')[0]\n params['SecretId'] = secret_name\n\n try:\n response = client.get_secret_value(**params)\n if 'SecretBinary' in response:\n return response['SecretBinary']\n if 'SecretString' in response:\n if nested:\n secrets = []\n query = term.split('.')[1:]\n secret_string = json.loads(response['SecretString'])\n ret_val = secret_string\n for key in query:\n if key in ret_val:\n ret_val = ret_val[key]\n else:\n raise AnsibleError(\"Successfully retrieved secret but there exists no key {0} in the secret\".format(key))\n return str(ret_val)\n else:\n return response['SecretString']\n except is_boto3_error_code('ResourceNotFoundException'):\n if on_missing == 'error':\n raise AnsibleError(\"Failed to find secret %s (ResourceNotFound)\" % term)\n elif on_missing == 'warn':\n self._display.warning('Skipping, did not find secret %s' % term)\n except is_boto3_error_code('AccessDeniedException'): # pylint: disable=duplicate-except\n if on_denied == 'error':\n raise AnsibleError(\"Failed to access secret %s (AccessDenied)\" % term)\n elif on_denied == 'warn':\n self._display.warning('Skipping, access denied for secret %s' % term)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except\n raise AnsibleError(\"Failed to retrieve secret: %s\" % to_native(e))\n\n return None\n", "path": "plugins/lookup/aws_secret.py"}], "after_files": [{"content": "# Copyright: (c) 2018, Aaron Smith <[email protected]>\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nDOCUMENTATION = r'''\nlookup: aws_secret\nauthor:\n - Aaron Smith <[email protected]>\nrequirements:\n - python >= 3.6\n - boto3\n - botocore >= 1.16.0\nextends_documentation_fragment:\n- amazon.aws.aws_credentials\n- amazon.aws.aws_region\n\nshort_description: Look up secrets stored in AWS Secrets Manager.\ndescription:\n - Look up secrets stored in AWS Secrets Manager provided the caller\n has the appropriate permissions to read the secret.\n - Lookup is based on the secret's I(Name) value.\n - Optional parameters can be passed into this lookup; I(version_id) and I(version_stage)\noptions:\n _terms:\n description: Name of the secret to look up in AWS Secrets Manager.\n required: True\n bypath:\n description: A boolean to indicate whether the parameter is provided as a hierarchy.\n default: false\n type: boolean\n version_added: 1.4.0\n nested:\n description: A boolean to indicate the secret contains nested values.\n type: boolean\n default: false\n version_added: 1.4.0\n version_id:\n description: Version of the secret(s).\n required: False\n version_stage:\n description: Stage of the secret version.\n required: False\n join:\n description:\n - Join two or more entries to form an extended secret.\n - This is useful for overcoming the 4096 character limit imposed by AWS.\n - No effect when used with I(bypath).\n type: boolean\n default: false\n on_missing:\n description:\n - Action to take if the secret is missing.\n - C(error) will raise a fatal error when the secret is missing.\n - C(skip) will silently ignore the missing secret.\n - C(warn) will skip over the missing secret but issue a warning.\n default: error\n type: string\n choices: ['error', 'skip', 'warn']\n on_denied:\n description:\n - Action to take if access to the secret is denied.\n - C(error) will raise a fatal error when access to the secret is denied.\n - C(skip) will silently ignore the denied secret.\n - C(warn) will skip over the denied secret but issue a warning.\n default: error\n type: string\n choices: ['error', 'skip', 'warn']\n'''\n\nEXAMPLES = r\"\"\"\n - name: lookup secretsmanager secret in the current region\n debug: msg=\"{{ lookup('amazon.aws.aws_secret', '/path/to/secrets', bypath=true) }}\"\n\n - name: Create RDS instance with aws_secret lookup for password param\n rds:\n command: create\n instance_name: app-db\n db_engine: MySQL\n size: 10\n instance_type: db.m1.small\n username: dbadmin\n password: \"{{ lookup('amazon.aws.aws_secret', 'DbSecret') }}\"\n tags:\n Environment: staging\n\n - name: skip if secret does not exist\n debug: msg=\"{{ lookup('amazon.aws.aws_secret', 'secret-not-exist', on_missing='skip')}}\"\n\n - name: warn if access to the secret is denied\n debug: msg=\"{{ lookup('amazon.aws.aws_secret', 'secret-denied', on_denied='warn')}}\"\n\n - name: lookup secretsmanager secret in the current region using the nested feature\n debug: msg=\"{{ lookup('amazon.aws.aws_secret', 'secrets.environments.production.password', nested=true) }}\"\n # The secret can be queried using the following syntax: `aws_secret_object_name.key1.key2.key3`.\n # If an object is of the form `{\"key1\":{\"key2\":{\"key3\":1}}}` the query would return the value `1`.\n - name: lookup secretsmanager secret in a specific region using specified region and aws profile using nested feature\n debug: >\n msg=\"{{ lookup('amazon.aws.aws_secret', 'secrets.environments.production.password', region=region, aws_profile=aws_profile,\n aws_access_key=aws_access_key, aws_secret_key=aws_secret_key, nested=true) }}\"\n # The secret can be queried using the following syntax: `aws_secret_object_name.key1.key2.key3`.\n # If an object is of the form `{\"key1\":{\"key2\":{\"key3\":1}}}` the query would return the value `1`.\n # Region is the AWS region where the AWS secret is stored.\n # AWS_profile is the aws profile to use, that has access to the AWS secret.\n\"\"\"\n\nRETURN = r\"\"\"\n_raw:\n description:\n Returns the value of the secret stored in AWS Secrets Manager.\n\"\"\"\n\nimport json\n\ntry:\n import boto3\n import botocore\nexcept ImportError:\n pass # will be captured by imported HAS_BOTO3\n\nfrom ansible.errors import AnsibleError\nfrom ansible.module_utils.six import string_types\nfrom ansible.module_utils._text import to_native\nfrom ansible.plugins.lookup import LookupBase\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import HAS_BOTO3\n\n\ndef _boto3_conn(region, credentials):\n boto_profile = credentials.pop('aws_profile', None)\n\n try:\n connection = boto3.session.Session(profile_name=boto_profile).client('secretsmanager', region, **credentials)\n except (botocore.exceptions.ProfileNotFound, botocore.exceptions.PartialCredentialsError) as e:\n if boto_profile:\n try:\n connection = boto3.session.Session(profile_name=boto_profile).client('secretsmanager', region)\n except (botocore.exceptions.ProfileNotFound, botocore.exceptions.PartialCredentialsError) as e:\n raise AnsibleError(\"Insufficient credentials found.\")\n else:\n raise AnsibleError(\"Insufficient credentials found.\")\n return connection\n\n\nclass LookupModule(LookupBase):\n def run(self, terms, variables=None, boto_profile=None, aws_profile=None,\n aws_secret_key=None, aws_access_key=None, aws_security_token=None, region=None,\n bypath=False, nested=False, join=False, version_stage=None, version_id=None, on_missing='error',\n on_denied='error'):\n '''\n :arg terms: a list of lookups to run.\n e.g. ['parameter_name', 'parameter_name_too' ]\n :kwarg variables: ansible variables active at the time of the lookup\n :kwarg aws_secret_key: identity of the AWS key to use\n :kwarg aws_access_key: AWS secret key (matching identity)\n :kwarg aws_security_token: AWS session key if using STS\n :kwarg decrypt: Set to True to get decrypted parameters\n :kwarg region: AWS region in which to do the lookup\n :kwarg bypath: Set to True to do a lookup of variables under a path\n :kwarg nested: Set to True to do a lookup of nested secrets\n :kwarg join: Join two or more entries to form an extended secret\n :kwarg version_stage: Stage of the secret version\n :kwarg version_id: Version of the secret(s)\n :kwarg on_missing: Action to take if the secret is missing\n :kwarg on_denied: Action to take if access to the secret is denied\n :returns: A list of parameter values or a list of dictionaries if bypath=True.\n '''\n if not HAS_BOTO3:\n raise AnsibleError('botocore and boto3 are required for aws_ssm lookup.')\n\n missing = on_missing.lower()\n if not isinstance(missing, string_types) or missing not in ['error', 'warn', 'skip']:\n raise AnsibleError('\"on_missing\" must be a string and one of \"error\", \"warn\" or \"skip\", not %s' % missing)\n\n denied = on_denied.lower()\n if not isinstance(denied, string_types) or denied not in ['error', 'warn', 'skip']:\n raise AnsibleError('\"on_denied\" must be a string and one of \"error\", \"warn\" or \"skip\", not %s' % denied)\n\n credentials = {}\n if aws_profile:\n credentials['aws_profile'] = aws_profile\n else:\n credentials['aws_profile'] = boto_profile\n credentials['aws_secret_access_key'] = aws_secret_key\n credentials['aws_access_key_id'] = aws_access_key\n credentials['aws_session_token'] = aws_security_token\n\n # fallback to IAM role credentials\n if not credentials['aws_profile'] and not (\n credentials['aws_access_key_id'] and credentials['aws_secret_access_key']):\n session = botocore.session.get_session()\n if session.get_credentials() is not None:\n credentials['aws_access_key_id'] = session.get_credentials().access_key\n credentials['aws_secret_access_key'] = session.get_credentials().secret_key\n credentials['aws_session_token'] = session.get_credentials().token\n\n client = _boto3_conn(region, credentials)\n\n if bypath:\n secrets = {}\n for term in terms:\n try:\n response = client.list_secrets(Filters=[{'Key': 'name', 'Values': [term]}])\n\n if 'SecretList' in response:\n for secret in response['SecretList']:\n secrets.update({secret['Name']: self.get_secret_value(secret['Name'], client,\n on_missing=missing,\n on_denied=denied)})\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n raise AnsibleError(\"Failed to retrieve secret: %s\" % to_native(e))\n secrets = [secrets]\n else:\n secrets = []\n for term in terms:\n value = self.get_secret_value(term, client,\n version_stage=version_stage, version_id=version_id,\n on_missing=missing, on_denied=denied, nested=nested)\n if value:\n secrets.append(value)\n if join:\n joined_secret = []\n joined_secret.append(''.join(secrets))\n return joined_secret\n\n return secrets\n\n def get_secret_value(self, term, client, version_stage=None, version_id=None, on_missing=None, on_denied=None, nested=False):\n params = {}\n params['SecretId'] = term\n if version_id:\n params['VersionId'] = version_id\n if version_stage:\n params['VersionStage'] = version_stage\n if nested:\n if len(term.split('.')) < 2:\n raise AnsibleError(\"Nested query must use the following syntax: `aws_secret_name.<key_name>.<key_name>\")\n secret_name = term.split('.')[0]\n params['SecretId'] = secret_name\n\n try:\n response = client.get_secret_value(**params)\n if 'SecretBinary' in response:\n return response['SecretBinary']\n if 'SecretString' in response:\n if nested:\n secrets = []\n query = term.split('.')[1:]\n secret_string = json.loads(response['SecretString'])\n ret_val = secret_string\n for key in query:\n if key in ret_val:\n ret_val = ret_val[key]\n else:\n raise AnsibleError(\"Successfully retrieved secret but there exists no key {0} in the secret\".format(key))\n return str(ret_val)\n else:\n return response['SecretString']\n except is_boto3_error_code('ResourceNotFoundException'):\n if on_missing == 'error':\n raise AnsibleError(\"Failed to find secret %s (ResourceNotFound)\" % term)\n elif on_missing == 'warn':\n self._display.warning('Skipping, did not find secret %s' % term)\n except is_boto3_error_code('AccessDeniedException'): # pylint: disable=duplicate-except\n if on_denied == 'error':\n raise AnsibleError(\"Failed to access secret %s (AccessDenied)\" % term)\n elif on_denied == 'warn':\n self._display.warning('Skipping, access denied for secret %s' % term)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except\n raise AnsibleError(\"Failed to retrieve secret: %s\" % to_native(e))\n\n return None\n", "path": "plugins/lookup/aws_secret.py"}]}
| 3,907 | 306 |
gh_patches_debug_17736
|
rasdani/github-patches
|
git_diff
|
beeware__toga-31
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"ImportError: cannot import name WebKit" on Ubuntu 14.04
Installed toga via global `sudo pip install toga`. Then, tried to import it:
```
>>> import toga
ERROR:root:Could not find any typelib for WebKit
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/toga/__init__.py", line 86, in <module>
from .platform.gtk.app import *
File "/usr/local/lib/python2.7/dist-packages/toga/platform/gtk/app.py", line 7, in <module>
from .window import Window
File "/usr/local/lib/python2.7/dist-packages/toga/platform/gtk/window.py", line 6, in <module>
from .command import SEPARATOR, SPACER, EXPANDING_SPACER
File "/usr/local/lib/python2.7/dist-packages/toga/platform/gtk/command.py", line 1, in <module>
from .widgets import Icon
File "/usr/local/lib/python2.7/dist-packages/toga/platform/gtk/widgets/__init__.py", line 17, in <module>
from .webview import WebView
File "/usr/local/lib/python2.7/dist-packages/toga/platform/gtk/widgets/webview.py", line 3, in <module>
from gi.repository import Gtk, WebKit
ImportError: cannot import name WebKit
```
Did a `sudo apt-get install python-webkit`, but still getting the same import error. I'm running Ubuntu under Crouton on a Chromebook, which doesn't always contain the full set of packages.
Since the application I aim to create (a GUI launcher for [KA Lite](https://github.com/learningequality/ka-lite/)) would rely on toga's awesome dedication to being pure Python and not needing any extra packages to be installed to work cross-platform, and since we wouldn't be needing the WebView, would it be possible to have it handle a lack of WebKit more gracefully, only erroring out if a WebView was actually used? Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `toga/platform/gtk/widgets/webview.py`
Content:
```
1 from __future__ import print_function, absolute_import, division
2
3 from gi.repository import Gtk, WebKit
4
5 from .base import Widget
6
7
8 class WebView(Widget):
9 def __init__(self, url=None):
10 super(WebView, self).__init__()
11 self._url = url
12
13 self._webview = None
14
15 def _startup(self):
16 self._impl = Gtk.ScrolledWindow()
17 self._impl.set_policy(Gtk.PolicyType.AUTOMATIC, Gtk.PolicyType.AUTOMATIC)
18
19 self._webview = WebKit.WebView()
20
21 if self._url:
22 self._webview.load_uri(self._url)
23
24 self._impl.add(self._webview)
25 self._impl.set_min_content_width(200)
26 self._impl.set_min_content_height(200)
27
28 @property
29 def url(self):
30 return self._url
31
32 @url.setter
33 def url(self, value):
34 self._url = value
35 if self._impl:
36 self._webview.load_uri(self._url)
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/toga/platform/gtk/widgets/webview.py b/toga/platform/gtk/widgets/webview.py
--- a/toga/platform/gtk/widgets/webview.py
+++ b/toga/platform/gtk/widgets/webview.py
@@ -1,6 +1,13 @@
from __future__ import print_function, absolute_import, division
-from gi.repository import Gtk, WebKit
+from gi.repository import Gtk
+
+# The following import sometimes fails; handle failure gracefully
+# (see https://github.com/pybee/toga/issues/26)
+try:
+ from gi.repository import WebKit
+except ImportError:
+ WebKit = None
from .base import Widget
@@ -13,6 +20,12 @@
self._webview = None
def _startup(self):
+
+ if WebKit is None:
+ raise RuntimeError(
+ "Import 'from gi.repository import WebKit' failed;" +
+ " may need to install gir1.2-webkit-3.0 or similar.")
+
self._impl = Gtk.ScrolledWindow()
self._impl.set_policy(Gtk.PolicyType.AUTOMATIC, Gtk.PolicyType.AUTOMATIC)
|
{"golden_diff": "diff --git a/toga/platform/gtk/widgets/webview.py b/toga/platform/gtk/widgets/webview.py\n--- a/toga/platform/gtk/widgets/webview.py\n+++ b/toga/platform/gtk/widgets/webview.py\n@@ -1,6 +1,13 @@\n from __future__ import print_function, absolute_import, division\n \n-from gi.repository import Gtk, WebKit\n+from gi.repository import Gtk\n+\n+# The following import sometimes fails; handle failure gracefully\n+# (see https://github.com/pybee/toga/issues/26)\n+try:\n+ from gi.repository import WebKit\n+except ImportError:\n+ WebKit = None\n \n from .base import Widget\n \n@@ -13,6 +20,12 @@\n self._webview = None\n \n def _startup(self):\n+\n+ if WebKit is None:\n+ raise RuntimeError(\n+ \"Import 'from gi.repository import WebKit' failed;\" +\n+ \" may need to install gir1.2-webkit-3.0 or similar.\")\n+\n self._impl = Gtk.ScrolledWindow()\n self._impl.set_policy(Gtk.PolicyType.AUTOMATIC, Gtk.PolicyType.AUTOMATIC)\n", "issue": "\"ImportError: cannot import name WebKit\" on Ubuntu 14.04\nInstalled toga via global `sudo pip install toga`. Then, tried to import it:\n\n```\n>>> import toga\nERROR:root:Could not find any typelib for WebKit\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/usr/local/lib/python2.7/dist-packages/toga/__init__.py\", line 86, in <module>\n from .platform.gtk.app import *\n File \"/usr/local/lib/python2.7/dist-packages/toga/platform/gtk/app.py\", line 7, in <module>\n from .window import Window\n File \"/usr/local/lib/python2.7/dist-packages/toga/platform/gtk/window.py\", line 6, in <module>\n from .command import SEPARATOR, SPACER, EXPANDING_SPACER\n File \"/usr/local/lib/python2.7/dist-packages/toga/platform/gtk/command.py\", line 1, in <module>\n from .widgets import Icon\n File \"/usr/local/lib/python2.7/dist-packages/toga/platform/gtk/widgets/__init__.py\", line 17, in <module>\n from .webview import WebView\n File \"/usr/local/lib/python2.7/dist-packages/toga/platform/gtk/widgets/webview.py\", line 3, in <module>\n from gi.repository import Gtk, WebKit\nImportError: cannot import name WebKit\n```\n\nDid a `sudo apt-get install python-webkit`, but still getting the same import error. I'm running Ubuntu under Crouton on a Chromebook, which doesn't always contain the full set of packages.\n\nSince the application I aim to create (a GUI launcher for [KA Lite](https://github.com/learningequality/ka-lite/)) would rely on toga's awesome dedication to being pure Python and not needing any extra packages to be installed to work cross-platform, and since we wouldn't be needing the WebView, would it be possible to have it handle a lack of WebKit more gracefully, only erroring out if a WebView was actually used? Thanks!\n\n", "before_files": [{"content": "from __future__ import print_function, absolute_import, division\n\nfrom gi.repository import Gtk, WebKit\n\nfrom .base import Widget\n\n\nclass WebView(Widget):\n def __init__(self, url=None):\n super(WebView, self).__init__()\n self._url = url\n\n self._webview = None\n\n def _startup(self):\n self._impl = Gtk.ScrolledWindow()\n self._impl.set_policy(Gtk.PolicyType.AUTOMATIC, Gtk.PolicyType.AUTOMATIC)\n\n self._webview = WebKit.WebView()\n\n if self._url:\n self._webview.load_uri(self._url)\n\n self._impl.add(self._webview)\n self._impl.set_min_content_width(200)\n self._impl.set_min_content_height(200)\n\n @property\n def url(self):\n return self._url\n\n @url.setter\n def url(self, value):\n self._url = value\n if self._impl:\n self._webview.load_uri(self._url)\n", "path": "toga/platform/gtk/widgets/webview.py"}], "after_files": [{"content": "from __future__ import print_function, absolute_import, division\n\nfrom gi.repository import Gtk\n\n# The following import sometimes fails; handle failure gracefully\n# (see https://github.com/pybee/toga/issues/26)\ntry:\n from gi.repository import WebKit\nexcept ImportError:\n WebKit = None\n\nfrom .base import Widget\n\n\nclass WebView(Widget):\n def __init__(self, url=None):\n super(WebView, self).__init__()\n self._url = url\n\n self._webview = None\n\n def _startup(self):\n\n if WebKit is None:\n raise RuntimeError(\n \"Import 'from gi.repository import WebKit' failed;\" +\n \" may need to install gir1.2-webkit-3.0 or similar.\")\n\n self._impl = Gtk.ScrolledWindow()\n self._impl.set_policy(Gtk.PolicyType.AUTOMATIC, Gtk.PolicyType.AUTOMATIC)\n\n self._webview = WebKit.WebView()\n\n if self._url:\n self._webview.load_uri(self._url)\n\n self._impl.add(self._webview)\n self._impl.set_min_content_width(200)\n self._impl.set_min_content_height(200)\n\n @property\n def url(self):\n return self._url\n\n @url.setter\n def url(self, value):\n self._url = value\n if self._impl:\n self._webview.load_uri(self._url)\n", "path": "toga/platform/gtk/widgets/webview.py"}]}
| 1,019 | 254 |
gh_patches_debug_6994
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-2146
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Count of results displayed on the challenge card does not match leaderboard count
**Describe the bug**
The card for the node21 challenge currently notes there are 21 results. Clicking on this brings you to the leaderboard where only 2 results are present. It seems that the count is including submissions which failed and/or submissions where the evaluation failed, which is misleading.
**To Reproduce**
Steps to reproduce the behavior:
- Choose a challenge where the database includes many failed submissions or failed evaluations (e.g. node21 at present)
- View the card for this challenge (currently it is on the GC front page)
- Verify that the number of results shown on the card does not match the number of results on the leaderboard (click the number shown on the card).
**Expected behavior**
The number of reported results should match the number of results on the leaderboard
**Screenshots**

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/challenges/tasks.py`
Content:
```
1 from celery import shared_task
2 from django.contrib.auth import get_user_model
3 from django.core.mail import mail_managers
4 from django.db.models import Count, Max
5 from requests import exceptions, get
6
7 from grandchallenge.challenges.models import Challenge, ExternalChallenge
8 from grandchallenge.evaluation.models import Evaluation
9 from grandchallenge.subdomains.utils import reverse
10
11
12 @shared_task
13 def update_challenge_results_cache():
14 challenges = Challenge.objects.all()
15 evaluation_info = (
16 Evaluation.objects.filter(published=True)
17 .values("submission__phase__challenge_id")
18 .annotate(
19 cached_num_results=Count("submission__phase__challenge_id"),
20 cached_latest_result=Max("created"),
21 )
22 )
23 evaluation_info_by_challenge = {
24 str(v["submission__phase__challenge_id"]): v for v in evaluation_info
25 }
26 participant_counts = (
27 get_user_model()
28 .objects.values("groups__participants_of_challenge")
29 .annotate(cached_num_participants=Count("pk"))
30 )
31 participant_counts_by_challenge = {
32 str(v["groups__participants_of_challenge"]): v
33 for v in participant_counts
34 }
35
36 for c in challenges:
37 c.cached_num_results = evaluation_info_by_challenge.get(
38 str(c.pk), {}
39 ).get("cached_num_results", 0)
40 c.cached_latest_result = evaluation_info_by_challenge.get(
41 str(c.pk), {}
42 ).get("cached_latest_result", None)
43 c.cached_num_participants = participant_counts_by_challenge.get(
44 str(c.pk), {}
45 ).get("cached_num_participants", 0)
46
47 Challenge.objects.bulk_update(
48 challenges,
49 [
50 "cached_num_results",
51 "cached_num_participants",
52 "cached_latest_result",
53 ],
54 )
55
56
57 @shared_task
58 def check_external_challenge_urls():
59 """
60 Checks that all external challenge urls are reachable.
61
62 Emails the managers if any of the challenges are not.
63 """
64 challenges = ExternalChallenge.objects.filter(hidden=False)
65 errors = []
66
67 for challenge in challenges:
68 try:
69 url = challenge.homepage
70 if not url.startswith("http"):
71 url = "http://" + url
72 r = get(url, timeout=60)
73 # raise an exception when we receive a http error (e.g., 404)
74 r.raise_for_status()
75 except exceptions.RequestException as err:
76 update_url = reverse(
77 "challenges:external-update",
78 kwargs={"short_name": challenge.short_name},
79 )
80 errors.append(
81 f"Error when trying to access '{challenge}': {err}. You can "
82 f"update it here: {update_url}"
83 )
84
85 if errors:
86 mail_managers(
87 subject=f"Unreachable external challenges ({len(errors)})",
88 message="\n\n".join(errors),
89 )
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/grandchallenge/challenges/tasks.py b/app/grandchallenge/challenges/tasks.py
--- a/app/grandchallenge/challenges/tasks.py
+++ b/app/grandchallenge/challenges/tasks.py
@@ -13,7 +13,7 @@
def update_challenge_results_cache():
challenges = Challenge.objects.all()
evaluation_info = (
- Evaluation.objects.filter(published=True)
+ Evaluation.objects.filter(published=True, rank__gt=0)
.values("submission__phase__challenge_id")
.annotate(
cached_num_results=Count("submission__phase__challenge_id"),
|
{"golden_diff": "diff --git a/app/grandchallenge/challenges/tasks.py b/app/grandchallenge/challenges/tasks.py\n--- a/app/grandchallenge/challenges/tasks.py\n+++ b/app/grandchallenge/challenges/tasks.py\n@@ -13,7 +13,7 @@\n def update_challenge_results_cache():\n challenges = Challenge.objects.all()\n evaluation_info = (\n- Evaluation.objects.filter(published=True)\n+ Evaluation.objects.filter(published=True, rank__gt=0)\n .values(\"submission__phase__challenge_id\")\n .annotate(\n cached_num_results=Count(\"submission__phase__challenge_id\"),\n", "issue": "Count of results displayed on the challenge card does not match leaderboard count\n**Describe the bug**\r\nThe card for the node21 challenge currently notes there are 21 results. Clicking on this brings you to the leaderboard where only 2 results are present. It seems that the count is including submissions which failed and/or submissions where the evaluation failed, which is misleading. \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n - Choose a challenge where the database includes many failed submissions or failed evaluations (e.g. node21 at present)\r\n- View the card for this challenge (currently it is on the GC front page)\r\n - Verify that the number of results shown on the card does not match the number of results on the leaderboard (click the number shown on the card).\r\n\r\n**Expected behavior**\r\nThe number of reported results should match the number of results on the leaderboard\r\n\r\n**Screenshots**\r\n\r\n\r\n\n", "before_files": [{"content": "from celery import shared_task\nfrom django.contrib.auth import get_user_model\nfrom django.core.mail import mail_managers\nfrom django.db.models import Count, Max\nfrom requests import exceptions, get\n\nfrom grandchallenge.challenges.models import Challenge, ExternalChallenge\nfrom grandchallenge.evaluation.models import Evaluation\nfrom grandchallenge.subdomains.utils import reverse\n\n\n@shared_task\ndef update_challenge_results_cache():\n challenges = Challenge.objects.all()\n evaluation_info = (\n Evaluation.objects.filter(published=True)\n .values(\"submission__phase__challenge_id\")\n .annotate(\n cached_num_results=Count(\"submission__phase__challenge_id\"),\n cached_latest_result=Max(\"created\"),\n )\n )\n evaluation_info_by_challenge = {\n str(v[\"submission__phase__challenge_id\"]): v for v in evaluation_info\n }\n participant_counts = (\n get_user_model()\n .objects.values(\"groups__participants_of_challenge\")\n .annotate(cached_num_participants=Count(\"pk\"))\n )\n participant_counts_by_challenge = {\n str(v[\"groups__participants_of_challenge\"]): v\n for v in participant_counts\n }\n\n for c in challenges:\n c.cached_num_results = evaluation_info_by_challenge.get(\n str(c.pk), {}\n ).get(\"cached_num_results\", 0)\n c.cached_latest_result = evaluation_info_by_challenge.get(\n str(c.pk), {}\n ).get(\"cached_latest_result\", None)\n c.cached_num_participants = participant_counts_by_challenge.get(\n str(c.pk), {}\n ).get(\"cached_num_participants\", 0)\n\n Challenge.objects.bulk_update(\n challenges,\n [\n \"cached_num_results\",\n \"cached_num_participants\",\n \"cached_latest_result\",\n ],\n )\n\n\n@shared_task\ndef check_external_challenge_urls():\n \"\"\"\n Checks that all external challenge urls are reachable.\n\n Emails the managers if any of the challenges are not.\n \"\"\"\n challenges = ExternalChallenge.objects.filter(hidden=False)\n errors = []\n\n for challenge in challenges:\n try:\n url = challenge.homepage\n if not url.startswith(\"http\"):\n url = \"http://\" + url\n r = get(url, timeout=60)\n # raise an exception when we receive a http error (e.g., 404)\n r.raise_for_status()\n except exceptions.RequestException as err:\n update_url = reverse(\n \"challenges:external-update\",\n kwargs={\"short_name\": challenge.short_name},\n )\n errors.append(\n f\"Error when trying to access '{challenge}': {err}. You can \"\n f\"update it here: {update_url}\"\n )\n\n if errors:\n mail_managers(\n subject=f\"Unreachable external challenges ({len(errors)})\",\n message=\"\\n\\n\".join(errors),\n )\n", "path": "app/grandchallenge/challenges/tasks.py"}], "after_files": [{"content": "from celery import shared_task\nfrom django.contrib.auth import get_user_model\nfrom django.core.mail import mail_managers\nfrom django.db.models import Count, Max\nfrom requests import exceptions, get\n\nfrom grandchallenge.challenges.models import Challenge, ExternalChallenge\nfrom grandchallenge.evaluation.models import Evaluation\nfrom grandchallenge.subdomains.utils import reverse\n\n\n@shared_task\ndef update_challenge_results_cache():\n challenges = Challenge.objects.all()\n evaluation_info = (\n Evaluation.objects.filter(published=True, rank__gt=0)\n .values(\"submission__phase__challenge_id\")\n .annotate(\n cached_num_results=Count(\"submission__phase__challenge_id\"),\n cached_latest_result=Max(\"created\"),\n )\n )\n evaluation_info_by_challenge = {\n str(v[\"submission__phase__challenge_id\"]): v for v in evaluation_info\n }\n participant_counts = (\n get_user_model()\n .objects.values(\"groups__participants_of_challenge\")\n .annotate(cached_num_participants=Count(\"pk\"))\n )\n participant_counts_by_challenge = {\n str(v[\"groups__participants_of_challenge\"]): v\n for v in participant_counts\n }\n\n for c in challenges:\n c.cached_num_results = evaluation_info_by_challenge.get(\n str(c.pk), {}\n ).get(\"cached_num_results\", 0)\n c.cached_latest_result = evaluation_info_by_challenge.get(\n str(c.pk), {}\n ).get(\"cached_latest_result\", None)\n c.cached_num_participants = participant_counts_by_challenge.get(\n str(c.pk), {}\n ).get(\"cached_num_participants\", 0)\n\n Challenge.objects.bulk_update(\n challenges,\n [\n \"cached_num_results\",\n \"cached_num_participants\",\n \"cached_latest_result\",\n ],\n )\n\n\n@shared_task\ndef check_external_challenge_urls():\n \"\"\"\n Checks that all external challenge urls are reachable.\n\n Emails the managers if any of the challenges are not.\n \"\"\"\n challenges = ExternalChallenge.objects.filter(hidden=False)\n errors = []\n\n for challenge in challenges:\n try:\n url = challenge.homepage\n if not url.startswith(\"http\"):\n url = \"http://\" + url\n r = get(url, timeout=60)\n # raise an exception when we receive a http error (e.g., 404)\n r.raise_for_status()\n except exceptions.RequestException as err:\n update_url = reverse(\n \"challenges:external-update\",\n kwargs={\"short_name\": challenge.short_name},\n )\n errors.append(\n f\"Error when trying to access '{challenge}': {err}. You can \"\n f\"update it here: {update_url}\"\n )\n\n if errors:\n mail_managers(\n subject=f\"Unreachable external challenges ({len(errors)})\",\n message=\"\\n\\n\".join(errors),\n )\n", "path": "app/grandchallenge/challenges/tasks.py"}]}
| 1,278 | 127 |
gh_patches_debug_29843
|
rasdani/github-patches
|
git_diff
|
pwndbg__pwndbg-2017
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pwndbg might fail to show the state of the GOT of i386 libc
```console
$ cat a.c
#include <stdio.h>
int main(){puts("hello world");return 0;}
$ gcc -m32 a.c
$ gdb -q a.out -ex 'break main' -ex 'run' -ex 'got -p libc'
```
<img width="1514" alt="image" src="https://github.com/pwndbg/pwndbg/assets/61896187/e0492360-8c33-495a-aad1-99e0a91ad4c8">
The above error was triggered with i386 libc with version: `2.35-0ubuntu3.6`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/commands/got.py`
Content:
```
1 from __future__ import annotations
2
3 import argparse
4 from typing import Dict
5 from typing import List
6 from typing import Union
7
8 from elftools.elf.elffile import ELFFile
9
10 import pwndbg.chain
11 import pwndbg.color.memory as M
12 import pwndbg.commands
13 import pwndbg.enhance
14 import pwndbg.gdblib.arch
15 import pwndbg.gdblib.file
16 import pwndbg.gdblib.info
17 import pwndbg.gdblib.proc
18 import pwndbg.gdblib.qemu
19 import pwndbg.gdblib.vmmap
20 import pwndbg.wrappers.checksec
21 import pwndbg.wrappers.readelf
22 from pwndbg.color import message
23 from pwndbg.commands import CommandCategory
24 from pwndbg.wrappers.readelf import RelocationType
25
26 parser = argparse.ArgumentParser(
27 formatter_class=argparse.RawTextHelpFormatter,
28 description="""Show the state of the Global Offset Table.
29
30 Examples:
31 got
32 got puts
33 got -p libc
34 got -a
35 """,
36 )
37 group = parser.add_mutually_exclusive_group()
38 group.add_argument(
39 "-p",
40 "--path",
41 help="Filter results by library/objfile path.",
42 type=str,
43 default="",
44 dest="path_filter",
45 )
46 group.add_argument(
47 "-a",
48 "--all",
49 help="Process all libs/obfjiles including the target executable.",
50 action="store_true",
51 default=False,
52 dest="all_",
53 )
54 parser.add_argument(
55 "-r",
56 "--show-readonly",
57 help="Also display read-only entries (which are filtered out by default).",
58 action="store_true",
59 default=False,
60 dest="accept_readonly",
61 )
62 parser.add_argument(
63 "symbol_filter", help="Filter results by symbol name.", type=str, nargs="?", default=""
64 )
65
66
67 @pwndbg.commands.ArgparsedCommand(parser, category=CommandCategory.LINUX)
68 @pwndbg.commands.OnlyWhenRunning
69 def got(path_filter: str, all_: bool, accept_readonly: bool, symbol_filter: str) -> None:
70 if pwndbg.gdblib.qemu.is_qemu_usermode():
71 print(
72 "QEMU target detected - the result might not be accurate when checking if the entry is writable and getting the information for libraries/objfiles"
73 )
74 print()
75 # Show the filters we are using
76 if path_filter:
77 print("Filtering by lib/objfile path: " + message.hint(path_filter))
78 if symbol_filter:
79 print("Filtering by symbol name: " + message.hint(symbol_filter))
80 if not accept_readonly:
81 print("Filtering out read-only entries (display them with -r or --show-readonly)")
82
83 if path_filter or not accept_readonly or symbol_filter:
84 print()
85
86 # Calculate the base address
87 if not path_filter:
88 first_print = False
89 _got(pwndbg.gdblib.proc.exe, accept_readonly, symbol_filter)
90 else:
91 first_print = True
92
93 if not all_ and not path_filter:
94 return
95 # TODO: We might fail to find shared libraries if GDB can't find them (can't show them in `info sharedlibrary`)
96 paths = pwndbg.gdblib.info.sharedlibrary_paths()
97 for path in paths:
98 if path_filter not in path:
99 continue
100 if not first_print:
101 print()
102 first_print = False
103 _got(path, accept_readonly, symbol_filter)
104
105 # Maybe user have a typo or something in the path filter, show the available shared libraries
106 if first_print and path_filter:
107 print(message.error("No shared library matching the path filter found."))
108 if paths:
109 print(message.notice("Available shared libraries:"))
110 for path in paths:
111 print(" " + path)
112
113
114 def _got(path: str, accept_readonly: bool, symbol_filter: str) -> None:
115 # Maybe download the file from remote
116 local_path = pwndbg.gdblib.file.get_file(path, try_local_path=True)
117
118 relro_status = pwndbg.wrappers.checksec.relro_status(local_path)
119 pie_status = pwndbg.wrappers.checksec.pie_status(local_path)
120 got_entry = pwndbg.wrappers.readelf.get_got_entry(local_path)
121
122 # The following code is inspired by the "got" command of https://github.com/bata24/gef/blob/dev/gef.py by @bata24, thank you!
123 # TODO/FIXME: Maybe a -v option to show more information will be better
124 outputs: List[Dict[str, Union[str, int]]] = []
125 if path == pwndbg.gdblib.proc.exe:
126 bin_base_offset = pwndbg.gdblib.proc.binary_base_addr if "PIE enabled" in pie_status else 0
127 else:
128 # TODO/FIXME: Is there a better way to get the base address of the loaded shared library?
129 # I guess parsing the vmmap result might also work, but what if it's not reliable or not available? (e.g. debugging with qemu-user)
130 text_section_addr = pwndbg.gdblib.info.parsed_sharedlibrary()[path][0]
131 with open(local_path, "rb") as f:
132 bin_base_offset = (
133 text_section_addr - ELFFile(f).get_section_by_name(".text").header["sh_addr"]
134 )
135
136 # Parse the output of readelf line by line
137 for category, lines in got_entry.items():
138 for line in lines:
139 # line might be something like:
140 # 00000000001ec018 0000000000000025 R_X86_64_IRELATIVE a0480
141 # or something like:
142 # 00000000001ec030 0000020a00000007 R_X86_64_JUMP_SLOT 000000000009ae80 realloc@@GLIBC_2.2.5 + 0
143 offset, _, rtype, *rest = line.split()[:5]
144 if len(rest) == 1:
145 value = rest[0]
146 name = ""
147 else:
148 value, name = rest
149 address = int(offset, 16) + bin_base_offset
150 # TODO/FIXME: This check might not work correctly if we failed to get the correct vmmap result
151 if not accept_readonly and not pwndbg.gdblib.vmmap.find(address).write:
152 continue
153 if not name and category == RelocationType.IRELATIVE:
154 # TODO/FIXME: I don't know the naming logic behind this yet, I'm just modifying @bata24's code here :p
155 # We might need to add some comments here to explain the logic in the future, and also fix it if something wrong
156 if pwndbg.gdblib.arch.name == "i386":
157 name = "*ABS*"
158 else:
159 name = f"*ABS*+0x{int(value, 16):x}"
160 if symbol_filter not in name:
161 continue
162 outputs.append(
163 {
164 "name": name or "????",
165 "address": address,
166 }
167 )
168 # By sorting the outputs by address, we can get a more intuitive output
169 outputs.sort(key=lambda x: x["address"])
170 relro_color = message.off
171 if "Partial" in relro_status:
172 relro_color = message.warn
173 elif "Full" in relro_status:
174 relro_color = message.on
175 print(f"State of the GOT of {message.notice(path)}:")
176 print(
177 f"GOT protection: {relro_color(relro_status)} | Found {message.hint(len(outputs))} GOT entries passing the filter"
178 )
179 for output in outputs:
180 print(
181 f"[{M.get(output['address'])}] {message.hint(output['name'])} -> {pwndbg.chain.format(pwndbg.gdblib.memory.pvoid(output['address']))}" # type: ignore[arg-type]
182 )
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pwndbg/commands/got.py b/pwndbg/commands/got.py
--- a/pwndbg/commands/got.py
+++ b/pwndbg/commands/got.py
@@ -136,15 +136,23 @@
# Parse the output of readelf line by line
for category, lines in got_entry.items():
for line in lines:
- # line might be something like:
- # 00000000001ec018 0000000000000025 R_X86_64_IRELATIVE a0480
- # or something like:
- # 00000000001ec030 0000020a00000007 R_X86_64_JUMP_SLOT 000000000009ae80 realloc@@GLIBC_2.2.5 + 0
- offset, _, rtype, *rest = line.split()[:5]
- if len(rest) == 1:
- value = rest[0]
+ # There are 5 fields in the output of readelf:
+ # "Offset", "Info", "Type", "Sym. Value", and "Symbol's Name"
+ # We only care about "Offset", "Sym. Value" and "Symbol's Name" here
+ offset, _, _, *rest = line.split()[:5]
+ if len(rest) < 2:
+ # "Sym. Value" or "Symbol's Name" are not present in this case
+ # The output of readelf might look like this (missing both value and name):
+ # 00004e88 00000008 R_386_RELATIVE
+ # or something like this (only missing name):
+ # 00000000001ec018 0000000000000025 R_X86_64_IRELATIVE a0480
+ # TODO: Is it possible that we are missing the value but not the name?
+ value = rest[0] if rest else ""
name = ""
else:
+ # Every fields are present in this case
+ # The output of readelf might look like this:
+ # 00000000001ec030 0000020a00000007 R_X86_64_JUMP_SLOT 000000000009ae80 realloc@@GLIBC_2.2.5 + 0
value, name = rest
address = int(offset, 16) + bin_base_offset
# TODO/FIXME: This check might not work correctly if we failed to get the correct vmmap result
|
{"golden_diff": "diff --git a/pwndbg/commands/got.py b/pwndbg/commands/got.py\n--- a/pwndbg/commands/got.py\n+++ b/pwndbg/commands/got.py\n@@ -136,15 +136,23 @@\n # Parse the output of readelf line by line\n for category, lines in got_entry.items():\n for line in lines:\n- # line might be something like:\n- # 00000000001ec018 0000000000000025 R_X86_64_IRELATIVE a0480\n- # or something like:\n- # 00000000001ec030 0000020a00000007 R_X86_64_JUMP_SLOT 000000000009ae80 realloc@@GLIBC_2.2.5 + 0\n- offset, _, rtype, *rest = line.split()[:5]\n- if len(rest) == 1:\n- value = rest[0]\n+ # There are 5 fields in the output of readelf:\n+ # \"Offset\", \"Info\", \"Type\", \"Sym. Value\", and \"Symbol's Name\"\n+ # We only care about \"Offset\", \"Sym. Value\" and \"Symbol's Name\" here\n+ offset, _, _, *rest = line.split()[:5]\n+ if len(rest) < 2:\n+ # \"Sym. Value\" or \"Symbol's Name\" are not present in this case\n+ # The output of readelf might look like this (missing both value and name):\n+ # 00004e88 00000008 R_386_RELATIVE\n+ # or something like this (only missing name):\n+ # 00000000001ec018 0000000000000025 R_X86_64_IRELATIVE a0480\n+ # TODO: Is it possible that we are missing the value but not the name?\n+ value = rest[0] if rest else \"\"\n name = \"\"\n else:\n+ # Every fields are present in this case\n+ # The output of readelf might look like this:\n+ # 00000000001ec030 0000020a00000007 R_X86_64_JUMP_SLOT 000000000009ae80 realloc@@GLIBC_2.2.5 + 0\n value, name = rest\n address = int(offset, 16) + bin_base_offset\n # TODO/FIXME: This check might not work correctly if we failed to get the correct vmmap result\n", "issue": "pwndbg might fail to show the state of the GOT of i386 libc\n```console\r\n$ cat a.c\r\n#include <stdio.h>\r\nint main(){puts(\"hello world\");return 0;}\r\n$ gcc -m32 a.c\r\n$ gdb -q a.out -ex 'break main' -ex 'run' -ex 'got -p libc'\r\n```\r\n\r\n<img width=\"1514\" alt=\"image\" src=\"https://github.com/pwndbg/pwndbg/assets/61896187/e0492360-8c33-495a-aad1-99e0a91ad4c8\">\r\n\r\nThe above error was triggered with i386 libc with version: `2.35-0ubuntu3.6`.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport argparse\nfrom typing import Dict\nfrom typing import List\nfrom typing import Union\n\nfrom elftools.elf.elffile import ELFFile\n\nimport pwndbg.chain\nimport pwndbg.color.memory as M\nimport pwndbg.commands\nimport pwndbg.enhance\nimport pwndbg.gdblib.arch\nimport pwndbg.gdblib.file\nimport pwndbg.gdblib.info\nimport pwndbg.gdblib.proc\nimport pwndbg.gdblib.qemu\nimport pwndbg.gdblib.vmmap\nimport pwndbg.wrappers.checksec\nimport pwndbg.wrappers.readelf\nfrom pwndbg.color import message\nfrom pwndbg.commands import CommandCategory\nfrom pwndbg.wrappers.readelf import RelocationType\n\nparser = argparse.ArgumentParser(\n formatter_class=argparse.RawTextHelpFormatter,\n description=\"\"\"Show the state of the Global Offset Table.\n\nExamples:\n got\n got puts\n got -p libc\n got -a\n\"\"\",\n)\ngroup = parser.add_mutually_exclusive_group()\ngroup.add_argument(\n \"-p\",\n \"--path\",\n help=\"Filter results by library/objfile path.\",\n type=str,\n default=\"\",\n dest=\"path_filter\",\n)\ngroup.add_argument(\n \"-a\",\n \"--all\",\n help=\"Process all libs/obfjiles including the target executable.\",\n action=\"store_true\",\n default=False,\n dest=\"all_\",\n)\nparser.add_argument(\n \"-r\",\n \"--show-readonly\",\n help=\"Also display read-only entries (which are filtered out by default).\",\n action=\"store_true\",\n default=False,\n dest=\"accept_readonly\",\n)\nparser.add_argument(\n \"symbol_filter\", help=\"Filter results by symbol name.\", type=str, nargs=\"?\", default=\"\"\n)\n\n\[email protected](parser, category=CommandCategory.LINUX)\[email protected]\ndef got(path_filter: str, all_: bool, accept_readonly: bool, symbol_filter: str) -> None:\n if pwndbg.gdblib.qemu.is_qemu_usermode():\n print(\n \"QEMU target detected - the result might not be accurate when checking if the entry is writable and getting the information for libraries/objfiles\"\n )\n print()\n # Show the filters we are using\n if path_filter:\n print(\"Filtering by lib/objfile path: \" + message.hint(path_filter))\n if symbol_filter:\n print(\"Filtering by symbol name: \" + message.hint(symbol_filter))\n if not accept_readonly:\n print(\"Filtering out read-only entries (display them with -r or --show-readonly)\")\n\n if path_filter or not accept_readonly or symbol_filter:\n print()\n\n # Calculate the base address\n if not path_filter:\n first_print = False\n _got(pwndbg.gdblib.proc.exe, accept_readonly, symbol_filter)\n else:\n first_print = True\n\n if not all_ and not path_filter:\n return\n # TODO: We might fail to find shared libraries if GDB can't find them (can't show them in `info sharedlibrary`)\n paths = pwndbg.gdblib.info.sharedlibrary_paths()\n for path in paths:\n if path_filter not in path:\n continue\n if not first_print:\n print()\n first_print = False\n _got(path, accept_readonly, symbol_filter)\n\n # Maybe user have a typo or something in the path filter, show the available shared libraries\n if first_print and path_filter:\n print(message.error(\"No shared library matching the path filter found.\"))\n if paths:\n print(message.notice(\"Available shared libraries:\"))\n for path in paths:\n print(\" \" + path)\n\n\ndef _got(path: str, accept_readonly: bool, symbol_filter: str) -> None:\n # Maybe download the file from remote\n local_path = pwndbg.gdblib.file.get_file(path, try_local_path=True)\n\n relro_status = pwndbg.wrappers.checksec.relro_status(local_path)\n pie_status = pwndbg.wrappers.checksec.pie_status(local_path)\n got_entry = pwndbg.wrappers.readelf.get_got_entry(local_path)\n\n # The following code is inspired by the \"got\" command of https://github.com/bata24/gef/blob/dev/gef.py by @bata24, thank you!\n # TODO/FIXME: Maybe a -v option to show more information will be better\n outputs: List[Dict[str, Union[str, int]]] = []\n if path == pwndbg.gdblib.proc.exe:\n bin_base_offset = pwndbg.gdblib.proc.binary_base_addr if \"PIE enabled\" in pie_status else 0\n else:\n # TODO/FIXME: Is there a better way to get the base address of the loaded shared library?\n # I guess parsing the vmmap result might also work, but what if it's not reliable or not available? (e.g. debugging with qemu-user)\n text_section_addr = pwndbg.gdblib.info.parsed_sharedlibrary()[path][0]\n with open(local_path, \"rb\") as f:\n bin_base_offset = (\n text_section_addr - ELFFile(f).get_section_by_name(\".text\").header[\"sh_addr\"]\n )\n\n # Parse the output of readelf line by line\n for category, lines in got_entry.items():\n for line in lines:\n # line might be something like:\n # 00000000001ec018 0000000000000025 R_X86_64_IRELATIVE a0480\n # or something like:\n # 00000000001ec030 0000020a00000007 R_X86_64_JUMP_SLOT 000000000009ae80 realloc@@GLIBC_2.2.5 + 0\n offset, _, rtype, *rest = line.split()[:5]\n if len(rest) == 1:\n value = rest[0]\n name = \"\"\n else:\n value, name = rest\n address = int(offset, 16) + bin_base_offset\n # TODO/FIXME: This check might not work correctly if we failed to get the correct vmmap result\n if not accept_readonly and not pwndbg.gdblib.vmmap.find(address).write:\n continue\n if not name and category == RelocationType.IRELATIVE:\n # TODO/FIXME: I don't know the naming logic behind this yet, I'm just modifying @bata24's code here :p\n # We might need to add some comments here to explain the logic in the future, and also fix it if something wrong\n if pwndbg.gdblib.arch.name == \"i386\":\n name = \"*ABS*\"\n else:\n name = f\"*ABS*+0x{int(value, 16):x}\"\n if symbol_filter not in name:\n continue\n outputs.append(\n {\n \"name\": name or \"????\",\n \"address\": address,\n }\n )\n # By sorting the outputs by address, we can get a more intuitive output\n outputs.sort(key=lambda x: x[\"address\"])\n relro_color = message.off\n if \"Partial\" in relro_status:\n relro_color = message.warn\n elif \"Full\" in relro_status:\n relro_color = message.on\n print(f\"State of the GOT of {message.notice(path)}:\")\n print(\n f\"GOT protection: {relro_color(relro_status)} | Found {message.hint(len(outputs))} GOT entries passing the filter\"\n )\n for output in outputs:\n print(\n f\"[{M.get(output['address'])}] {message.hint(output['name'])} -> {pwndbg.chain.format(pwndbg.gdblib.memory.pvoid(output['address']))}\" # type: ignore[arg-type]\n )\n", "path": "pwndbg/commands/got.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport argparse\nfrom typing import Dict\nfrom typing import List\nfrom typing import Union\n\nfrom elftools.elf.elffile import ELFFile\n\nimport pwndbg.chain\nimport pwndbg.color.memory as M\nimport pwndbg.commands\nimport pwndbg.enhance\nimport pwndbg.gdblib.arch\nimport pwndbg.gdblib.file\nimport pwndbg.gdblib.info\nimport pwndbg.gdblib.proc\nimport pwndbg.gdblib.qemu\nimport pwndbg.gdblib.vmmap\nimport pwndbg.wrappers.checksec\nimport pwndbg.wrappers.readelf\nfrom pwndbg.color import message\nfrom pwndbg.commands import CommandCategory\nfrom pwndbg.wrappers.readelf import RelocationType\n\nparser = argparse.ArgumentParser(\n formatter_class=argparse.RawTextHelpFormatter,\n description=\"\"\"Show the state of the Global Offset Table.\n\nExamples:\n got\n got puts\n got -p libc\n got -a\n\"\"\",\n)\ngroup = parser.add_mutually_exclusive_group()\ngroup.add_argument(\n \"-p\",\n \"--path\",\n help=\"Filter results by library/objfile path.\",\n type=str,\n default=\"\",\n dest=\"path_filter\",\n)\ngroup.add_argument(\n \"-a\",\n \"--all\",\n help=\"Process all libs/obfjiles including the target executable.\",\n action=\"store_true\",\n default=False,\n dest=\"all_\",\n)\nparser.add_argument(\n \"-r\",\n \"--show-readonly\",\n help=\"Also display read-only entries (which are filtered out by default).\",\n action=\"store_true\",\n default=False,\n dest=\"accept_readonly\",\n)\nparser.add_argument(\n \"symbol_filter\", help=\"Filter results by symbol name.\", type=str, nargs=\"?\", default=\"\"\n)\n\n\[email protected](parser, category=CommandCategory.LINUX)\[email protected]\ndef got(path_filter: str, all_: bool, accept_readonly: bool, symbol_filter: str) -> None:\n if pwndbg.gdblib.qemu.is_qemu_usermode():\n print(\n \"QEMU target detected - the result might not be accurate when checking if the entry is writable and getting the information for libraries/objfiles\"\n )\n print()\n # Show the filters we are using\n if path_filter:\n print(\"Filtering by lib/objfile path: \" + message.hint(path_filter))\n if symbol_filter:\n print(\"Filtering by symbol name: \" + message.hint(symbol_filter))\n if not accept_readonly:\n print(\"Filtering out read-only entries (display them with -r or --show-readonly)\")\n\n if path_filter or not accept_readonly or symbol_filter:\n print()\n\n # Calculate the base address\n if not path_filter:\n first_print = False\n _got(pwndbg.gdblib.proc.exe, accept_readonly, symbol_filter)\n else:\n first_print = True\n\n if not all_ and not path_filter:\n return\n # TODO: We might fail to find shared libraries if GDB can't find them (can't show them in `info sharedlibrary`)\n paths = pwndbg.gdblib.info.sharedlibrary_paths()\n for path in paths:\n if path_filter not in path:\n continue\n if not first_print:\n print()\n first_print = False\n _got(path, accept_readonly, symbol_filter)\n\n # Maybe user have a typo or something in the path filter, show the available shared libraries\n if first_print and path_filter:\n print(message.error(\"No shared library matching the path filter found.\"))\n if paths:\n print(message.notice(\"Available shared libraries:\"))\n for path in paths:\n print(\" \" + path)\n\n\ndef _got(path: str, accept_readonly: bool, symbol_filter: str) -> None:\n # Maybe download the file from remote\n local_path = pwndbg.gdblib.file.get_file(path, try_local_path=True)\n\n relro_status = pwndbg.wrappers.checksec.relro_status(local_path)\n pie_status = pwndbg.wrappers.checksec.pie_status(local_path)\n got_entry = pwndbg.wrappers.readelf.get_got_entry(local_path)\n\n # The following code is inspired by the \"got\" command of https://github.com/bata24/gef/blob/dev/gef.py by @bata24, thank you!\n # TODO/FIXME: Maybe a -v option to show more information will be better\n outputs: List[Dict[str, Union[str, int]]] = []\n if path == pwndbg.gdblib.proc.exe:\n bin_base_offset = pwndbg.gdblib.proc.binary_base_addr if \"PIE enabled\" in pie_status else 0\n else:\n # TODO/FIXME: Is there a better way to get the base address of the loaded shared library?\n # I guess parsing the vmmap result might also work, but what if it's not reliable or not available? (e.g. debugging with qemu-user)\n text_section_addr = pwndbg.gdblib.info.parsed_sharedlibrary()[path][0]\n with open(local_path, \"rb\") as f:\n bin_base_offset = (\n text_section_addr - ELFFile(f).get_section_by_name(\".text\").header[\"sh_addr\"]\n )\n\n # Parse the output of readelf line by line\n for category, lines in got_entry.items():\n for line in lines:\n # There are 5 fields in the output of readelf:\n # \"Offset\", \"Info\", \"Type\", \"Sym. Value\", and \"Symbol's Name\"\n # We only care about \"Offset\", \"Sym. Value\" and \"Symbol's Name\" here\n offset, _, _, *rest = line.split()[:5]\n if len(rest) < 2:\n # \"Sym. Value\" or \"Symbol's Name\" are not present in this case\n # The output of readelf might look like this (missing both value and name):\n # 00004e88 00000008 R_386_RELATIVE\n # or something like this (only missing name):\n # 00000000001ec018 0000000000000025 R_X86_64_IRELATIVE a0480\n # TODO: Is it possible that we are missing the value but not the name?\n value = rest[0] if rest else \"\"\n name = \"\"\n else:\n # Every fields are present in this case\n # The output of readelf might look like this:\n # 00000000001ec030 0000020a00000007 R_X86_64_JUMP_SLOT 000000000009ae80 realloc@@GLIBC_2.2.5 + 0\n value, name = rest\n address = int(offset, 16) + bin_base_offset\n # TODO/FIXME: This check might not work correctly if we failed to get the correct vmmap result\n if not accept_readonly and not pwndbg.gdblib.vmmap.find(address).write:\n continue\n if not name and category == RelocationType.IRELATIVE:\n # TODO/FIXME: I don't know the naming logic behind this yet, I'm just modifying @bata24's code here :p\n # We might need to add some comments here to explain the logic in the future, and also fix it if something wrong\n if pwndbg.gdblib.arch.name == \"i386\":\n name = \"*ABS*\"\n else:\n name = f\"*ABS*+0x{int(value, 16):x}\"\n if symbol_filter not in name:\n continue\n outputs.append(\n {\n \"name\": name or \"????\",\n \"address\": address,\n }\n )\n # By sorting the outputs by address, we can get a more intuitive output\n outputs.sort(key=lambda x: x[\"address\"])\n relro_color = message.off\n if \"Partial\" in relro_status:\n relro_color = message.warn\n elif \"Full\" in relro_status:\n relro_color = message.on\n print(f\"State of the GOT of {message.notice(path)}:\")\n print(\n f\"GOT protection: {relro_color(relro_status)} | Found {message.hint(len(outputs))} GOT entries passing the filter\"\n )\n for output in outputs:\n print(\n f\"[{M.get(output['address'])}] {message.hint(output['name'])} -> {pwndbg.chain.format(pwndbg.gdblib.memory.pvoid(output['address']))}\" # type: ignore[arg-type]\n )\n", "path": "pwndbg/commands/got.py"}]}
| 2,666 | 679 |
gh_patches_debug_11634
|
rasdani/github-patches
|
git_diff
|
encode__uvicorn-623
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Duplicate logs when using root logger with 'gunicorn -k uvicorn.workers.UvicornWorker ...'
Here is a small test file with minimal gunicorn and uvicorn apps. But my real interest is the log statements at the top of the file.
```
import logging
logging.error('TEST 1 -- LOGGING ERROR')
logging.getLogger().error('TEST 2 -- ROOT LOGGER ERROR')
logging.getLogger('foo').error('TEST 3 -- FOO LOGGER ERROR')
# minimal gunicorn app
def appG(environ, start_response):
data = b'Hello, World!\n'
status = '200 OK'
response_headers = [
('Content-type', 'text/plain'),
('Content-Length', str(len(data)))
]
start_response(status, response_headers)
return iter([data])
# minimal uvicorn app
async def appU(scope, receive, send):
assert scope['type'] == 'http'
await send({
'type': 'http.response.start',
'status': 200,
'headers': [
[b'content-type', b'text/plain'],
]
})
await send({
'type': 'http.response.body',
'body': b'Hello, world!',
})
```
The logs "work" when the file is run by gunicorn or uvicorn individually.
But when I use gunicorn and uvicorn **together**, I get doubled uvicorn logs.
```
$ gunicorn -k uvicorn.workers.UvicornWorker test3:appU
[2020-04-07 22:47:53 -0400] [16015] [INFO] Starting gunicorn 20.0.4
[2020-04-07 22:47:53 -0400] [16015] [INFO] Listening at: http://127.0.0.1:8000 (16015)
[2020-04-07 22:47:53 -0400] [16015] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2020-04-07 22:47:53 -0400] [16018] [INFO] Booting worker with pid: 16018
ERROR:root:TEST 1 -- LOGGING ERROR
ERROR:root:TEST 2 -- ROOT LOGGER ERROR
ERROR:foo:TEST 3 -- FOO LOGGER ERROR
[2020-04-07 22:47:53 -0400] [16018] [INFO] Started server process [16018]
INFO:uvicorn.error:Started server process [16018]
[2020-04-07 22:47:53 -0400] [16018] [INFO] Waiting for application startup.
INFO:uvicorn.error:Waiting for application startup.
[2020-04-07 22:47:53 -0400] [16018] [INFO] ASGI 'lifespan' protocol appears unsupported.
INFO:uvicorn.error:ASGI 'lifespan' protocol appears unsupported.
[2020-04-07 22:47:53 -0400] [16018] [INFO] Application startup complete.
INFO:uvicorn.error:Application startup complete.
```
Note the last several lines are double logged with different formats. (Two handlers?)
FYI,
```
$ pip freeze |grep corn
gunicorn==20.0.4
uvicorn==0.11.3
```
I'd love a work around for **both** `gunicorn -k uvicorn.workers.UvicornWorker ...` and `uvicorn ...` that has an inheritable root logger.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `uvicorn/workers.py`
Content:
```
1 import asyncio
2 import logging
3
4 from gunicorn.workers.base import Worker
5 from uvicorn.config import Config
6 from uvicorn.main import Server
7
8
9 class UvicornWorker(Worker):
10 """
11 A worker class for Gunicorn that interfaces with an ASGI consumer callable,
12 rather than a WSGI callable.
13 """
14
15 CONFIG_KWARGS = {"loop": "uvloop", "http": "httptools"}
16
17 def __init__(self, *args, **kwargs):
18 super(UvicornWorker, self).__init__(*args, **kwargs)
19
20 logger = logging.getLogger("uvicorn.error")
21 logger.handlers = self.log.error_log.handlers
22 logger.setLevel(self.log.error_log.level)
23
24 logger = logging.getLogger("uvicorn.access")
25 logger.handlers = self.log.access_log.handlers
26 logger.setLevel(self.log.access_log.level)
27
28 config_kwargs = {
29 "app": None,
30 "log_config": None,
31 "timeout_keep_alive": self.cfg.keepalive,
32 "timeout_notify": self.timeout,
33 "callback_notify": self.callback_notify,
34 "limit_max_requests": self.max_requests,
35 }
36
37 if self.cfg.is_ssl:
38 ssl_kwargs = {
39 "ssl_keyfile": self.cfg.ssl_options.get("keyfile"),
40 "ssl_certfile": self.cfg.ssl_options.get("certfile"),
41 "ssl_version": self.cfg.ssl_options.get("ssl_version"),
42 "ssl_cert_reqs": self.cfg.ssl_options.get("cert_reqs"),
43 "ssl_ca_certs": self.cfg.ssl_options.get("ca_certs"),
44 "ssl_ciphers": self.cfg.ssl_options.get("ciphers"),
45 }
46 config_kwargs.update(ssl_kwargs)
47
48 if self.cfg.settings["backlog"].value:
49 config_kwargs["backlog"] = self.cfg.settings["backlog"].value
50
51 config_kwargs.update(self.CONFIG_KWARGS)
52
53 self.config = Config(**config_kwargs)
54
55 def init_process(self):
56 self.config.setup_event_loop()
57 super(UvicornWorker, self).init_process()
58
59 def init_signals(self):
60 pass
61
62 def run(self):
63 self.config.app = self.wsgi
64 server = Server(config=self.config)
65 loop = asyncio.get_event_loop()
66 loop.run_until_complete(server.serve(sockets=self.sockets))
67
68 async def callback_notify(self):
69 self.notify()
70
71
72 class UvicornH11Worker(UvicornWorker):
73 CONFIG_KWARGS = {"loop": "asyncio", "http": "h11"}
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/uvicorn/workers.py b/uvicorn/workers.py
--- a/uvicorn/workers.py
+++ b/uvicorn/workers.py
@@ -20,10 +20,12 @@
logger = logging.getLogger("uvicorn.error")
logger.handlers = self.log.error_log.handlers
logger.setLevel(self.log.error_log.level)
+ logger.propagate = False
logger = logging.getLogger("uvicorn.access")
logger.handlers = self.log.access_log.handlers
logger.setLevel(self.log.access_log.level)
+ logger.propagate = False
config_kwargs = {
"app": None,
|
{"golden_diff": "diff --git a/uvicorn/workers.py b/uvicorn/workers.py\n--- a/uvicorn/workers.py\n+++ b/uvicorn/workers.py\n@@ -20,10 +20,12 @@\n logger = logging.getLogger(\"uvicorn.error\")\n logger.handlers = self.log.error_log.handlers\n logger.setLevel(self.log.error_log.level)\n+ logger.propagate = False\n \n logger = logging.getLogger(\"uvicorn.access\")\n logger.handlers = self.log.access_log.handlers\n logger.setLevel(self.log.access_log.level)\n+ logger.propagate = False\n \n config_kwargs = {\n \"app\": None,\n", "issue": "Duplicate logs when using root logger with 'gunicorn -k uvicorn.workers.UvicornWorker ...'\nHere is a small test file with minimal gunicorn and uvicorn apps. But my real interest is the log statements at the top of the file.\r\n\r\n```\r\nimport logging\r\n\r\nlogging.error('TEST 1 -- LOGGING ERROR')\r\nlogging.getLogger().error('TEST 2 -- ROOT LOGGER ERROR')\r\nlogging.getLogger('foo').error('TEST 3 -- FOO LOGGER ERROR')\r\n\r\n\r\n# minimal gunicorn app\r\ndef appG(environ, start_response):\r\n data = b'Hello, World!\\n'\r\n status = '200 OK'\r\n response_headers = [\r\n ('Content-type', 'text/plain'),\r\n ('Content-Length', str(len(data)))\r\n ]\r\n start_response(status, response_headers)\r\n return iter([data])\r\n\r\n\r\n# minimal uvicorn app\r\nasync def appU(scope, receive, send):\r\n assert scope['type'] == 'http'\r\n await send({\r\n 'type': 'http.response.start',\r\n 'status': 200,\r\n 'headers': [\r\n [b'content-type', b'text/plain'],\r\n ]\r\n })\r\n await send({\r\n 'type': 'http.response.body',\r\n 'body': b'Hello, world!',\r\n })\r\n```\r\n\r\nThe logs \"work\" when the file is run by gunicorn or uvicorn individually.\r\n\r\nBut when I use gunicorn and uvicorn **together**, I get doubled uvicorn logs.\r\n\r\n```\r\n$ gunicorn -k uvicorn.workers.UvicornWorker test3:appU\r\n[2020-04-07 22:47:53 -0400] [16015] [INFO] Starting gunicorn 20.0.4\r\n[2020-04-07 22:47:53 -0400] [16015] [INFO] Listening at: http://127.0.0.1:8000 (16015)\r\n[2020-04-07 22:47:53 -0400] [16015] [INFO] Using worker: uvicorn.workers.UvicornWorker\r\n[2020-04-07 22:47:53 -0400] [16018] [INFO] Booting worker with pid: 16018\r\nERROR:root:TEST 1 -- LOGGING ERROR\r\nERROR:root:TEST 2 -- ROOT LOGGER ERROR\r\nERROR:foo:TEST 3 -- FOO LOGGER ERROR\r\n[2020-04-07 22:47:53 -0400] [16018] [INFO] Started server process [16018]\r\nINFO:uvicorn.error:Started server process [16018]\r\n[2020-04-07 22:47:53 -0400] [16018] [INFO] Waiting for application startup.\r\nINFO:uvicorn.error:Waiting for application startup.\r\n[2020-04-07 22:47:53 -0400] [16018] [INFO] ASGI 'lifespan' protocol appears unsupported.\r\nINFO:uvicorn.error:ASGI 'lifespan' protocol appears unsupported.\r\n[2020-04-07 22:47:53 -0400] [16018] [INFO] Application startup complete.\r\nINFO:uvicorn.error:Application startup complete.\r\n```\r\nNote the last several lines are double logged with different formats. (Two handlers?)\r\n\r\nFYI,\r\n```\r\n$ pip freeze |grep corn\r\ngunicorn==20.0.4\r\nuvicorn==0.11.3\r\n```\r\n\r\nI'd love a work around for **both** `gunicorn -k uvicorn.workers.UvicornWorker ...` and `uvicorn ...` that has an inheritable root logger.\r\n\n", "before_files": [{"content": "import asyncio\nimport logging\n\nfrom gunicorn.workers.base import Worker\nfrom uvicorn.config import Config\nfrom uvicorn.main import Server\n\n\nclass UvicornWorker(Worker):\n \"\"\"\n A worker class for Gunicorn that interfaces with an ASGI consumer callable,\n rather than a WSGI callable.\n \"\"\"\n\n CONFIG_KWARGS = {\"loop\": \"uvloop\", \"http\": \"httptools\"}\n\n def __init__(self, *args, **kwargs):\n super(UvicornWorker, self).__init__(*args, **kwargs)\n\n logger = logging.getLogger(\"uvicorn.error\")\n logger.handlers = self.log.error_log.handlers\n logger.setLevel(self.log.error_log.level)\n\n logger = logging.getLogger(\"uvicorn.access\")\n logger.handlers = self.log.access_log.handlers\n logger.setLevel(self.log.access_log.level)\n\n config_kwargs = {\n \"app\": None,\n \"log_config\": None,\n \"timeout_keep_alive\": self.cfg.keepalive,\n \"timeout_notify\": self.timeout,\n \"callback_notify\": self.callback_notify,\n \"limit_max_requests\": self.max_requests,\n }\n\n if self.cfg.is_ssl:\n ssl_kwargs = {\n \"ssl_keyfile\": self.cfg.ssl_options.get(\"keyfile\"),\n \"ssl_certfile\": self.cfg.ssl_options.get(\"certfile\"),\n \"ssl_version\": self.cfg.ssl_options.get(\"ssl_version\"),\n \"ssl_cert_reqs\": self.cfg.ssl_options.get(\"cert_reqs\"),\n \"ssl_ca_certs\": self.cfg.ssl_options.get(\"ca_certs\"),\n \"ssl_ciphers\": self.cfg.ssl_options.get(\"ciphers\"),\n }\n config_kwargs.update(ssl_kwargs)\n\n if self.cfg.settings[\"backlog\"].value:\n config_kwargs[\"backlog\"] = self.cfg.settings[\"backlog\"].value\n\n config_kwargs.update(self.CONFIG_KWARGS)\n\n self.config = Config(**config_kwargs)\n\n def init_process(self):\n self.config.setup_event_loop()\n super(UvicornWorker, self).init_process()\n\n def init_signals(self):\n pass\n\n def run(self):\n self.config.app = self.wsgi\n server = Server(config=self.config)\n loop = asyncio.get_event_loop()\n loop.run_until_complete(server.serve(sockets=self.sockets))\n\n async def callback_notify(self):\n self.notify()\n\n\nclass UvicornH11Worker(UvicornWorker):\n CONFIG_KWARGS = {\"loop\": \"asyncio\", \"http\": \"h11\"}\n", "path": "uvicorn/workers.py"}], "after_files": [{"content": "import asyncio\nimport logging\n\nfrom gunicorn.workers.base import Worker\nfrom uvicorn.config import Config\nfrom uvicorn.main import Server\n\n\nclass UvicornWorker(Worker):\n \"\"\"\n A worker class for Gunicorn that interfaces with an ASGI consumer callable,\n rather than a WSGI callable.\n \"\"\"\n\n CONFIG_KWARGS = {\"loop\": \"uvloop\", \"http\": \"httptools\"}\n\n def __init__(self, *args, **kwargs):\n super(UvicornWorker, self).__init__(*args, **kwargs)\n\n logger = logging.getLogger(\"uvicorn.error\")\n logger.handlers = self.log.error_log.handlers\n logger.setLevel(self.log.error_log.level)\n logger.propagate = False\n\n logger = logging.getLogger(\"uvicorn.access\")\n logger.handlers = self.log.access_log.handlers\n logger.setLevel(self.log.access_log.level)\n logger.propagate = False\n\n config_kwargs = {\n \"app\": None,\n \"log_config\": None,\n \"timeout_keep_alive\": self.cfg.keepalive,\n \"timeout_notify\": self.timeout,\n \"callback_notify\": self.callback_notify,\n \"limit_max_requests\": self.max_requests,\n }\n\n if self.cfg.is_ssl:\n ssl_kwargs = {\n \"ssl_keyfile\": self.cfg.ssl_options.get(\"keyfile\"),\n \"ssl_certfile\": self.cfg.ssl_options.get(\"certfile\"),\n \"ssl_version\": self.cfg.ssl_options.get(\"ssl_version\"),\n \"ssl_cert_reqs\": self.cfg.ssl_options.get(\"cert_reqs\"),\n \"ssl_ca_certs\": self.cfg.ssl_options.get(\"ca_certs\"),\n \"ssl_ciphers\": self.cfg.ssl_options.get(\"ciphers\"),\n }\n config_kwargs.update(ssl_kwargs)\n\n if self.cfg.settings[\"backlog\"].value:\n config_kwargs[\"backlog\"] = self.cfg.settings[\"backlog\"].value\n\n config_kwargs.update(self.CONFIG_KWARGS)\n\n self.config = Config(**config_kwargs)\n\n def init_process(self):\n self.config.setup_event_loop()\n super(UvicornWorker, self).init_process()\n\n def init_signals(self):\n pass\n\n def run(self):\n self.config.app = self.wsgi\n server = Server(config=self.config)\n loop = asyncio.get_event_loop()\n loop.run_until_complete(server.serve(sockets=self.sockets))\n\n async def callback_notify(self):\n self.notify()\n\n\nclass UvicornH11Worker(UvicornWorker):\n CONFIG_KWARGS = {\"loop\": \"asyncio\", \"http\": \"h11\"}\n", "path": "uvicorn/workers.py"}]}
| 1,829 | 135 |
gh_patches_debug_1153
|
rasdani/github-patches
|
git_diff
|
scverse__scanpy-997
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`datasets.pbmc68k_reduced` isn't contained in the pypi package anymore
This still works in `1.4.4.post1`. It's very likely caused by changes to `setup.py`. I experienced similar problems before and fixed them via `package_data`. But this got removed. It's probably only a problem for the source-based installs.
https://github.com/theislab/scanpy/commit/881f0bef31cdfe0df7333641dc847a60894b5c41#diff-2eeaed663bd0d25b7e608891384b7298
```
>>> import scanpy
>>> scanpy.__version__
<Version('1.4.5.post2')>
>>> scanpy.datasets.pbmc68k_reduced()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/alexwolf/miniconda3/lib/python3.6/site-packages/scanpy/datasets/__init__.py", line 239, in pbmc68k_reduced
return read(filename)
File "/Users/alexwolf/miniconda3/lib/python3.6/site-packages/scanpy/readwrite.py", line 114, in read
**kwargs,
File "/Users/alexwolf/miniconda3/lib/python3.6/site-packages/scanpy/readwrite.py", line 524, in _read
return read_h5ad(filename, backed=backed)
File "/Users/alexwolf/miniconda3/lib/python3.6/site-packages/anndata/readwrite/read.py", line 447, in read_h5ad
constructor_args = _read_args_from_h5ad(filename=filename, chunk_size=chunk_size)
File "/Users/alexwolf/miniconda3/lib/python3.6/site-packages/anndata/readwrite/read.py", line 481, in _read_args_from_h5ad
f = h5py.File(filename, 'r')
File "/Users/alexwolf/miniconda3/lib/python3.6/site-packages/anndata/h5py/h5sparse.py", line 162, in __init__
**kwds,
File "/Users/alexwolf/miniconda3/lib/python3.6/site-packages/h5py/_hl/files.py", line 312, in __init__
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/Users/alexwolf/miniconda3/lib/python3.6/site-packages/h5py/_hl/files.py", line 142, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import sys
2
3 if sys.version_info < (3, 6):
4 sys.exit('scanpy requires Python >= 3.6')
5 from pathlib import Path
6
7 from setuptools import setup, find_packages
8
9
10 try:
11 from scanpy import __author__, __email__
12 except ImportError: # Deps not yet installed
13 __author__ = __email__ = ''
14
15 setup(
16 name='scanpy',
17 use_scm_version=True,
18 setup_requires=['setuptools_scm'],
19 description='Single-Cell Analysis in Python.',
20 long_description=Path('README.rst').read_text('utf-8'),
21 url='http://github.com/theislab/scanpy',
22 author=__author__,
23 author_email=__email__,
24 license='BSD',
25 python_requires='>=3.6',
26 install_requires=[
27 l.strip() for l in Path('requirements.txt').read_text('utf-8').splitlines()
28 ],
29 extras_require=dict(
30 louvain=['python-igraph', 'louvain>=0.6'],
31 leiden=['python-igraph', 'leidenalg'],
32 bbknn=['bbknn'],
33 rapids=['cudf', 'cuml', 'cugraph'],
34 magic=['magic-impute>=2.0'],
35 doc=[
36 'sphinx',
37 'sphinx_rtd_theme',
38 'sphinx_autodoc_typehints',
39 'scanpydoc>=0.4.3',
40 'typing_extensions; python_version < "3.8"', # for `Literal`
41 ],
42 test=[
43 'pytest>=4.4',
44 'dask[array]',
45 'fsspec',
46 'zappy',
47 'zarr',
48 'black',
49 'profimp',
50 ],
51 ),
52 packages=find_packages(),
53 entry_points=dict(console_scripts=['scanpy=scanpy.cli:console_main']),
54 zip_safe=False,
55 classifiers=[
56 'Development Status :: 5 - Production/Stable',
57 'Environment :: Console',
58 'Framework :: Jupyter',
59 'Intended Audience :: Developers',
60 'Intended Audience :: Science/Research',
61 'Natural Language :: English',
62 'Operating System :: MacOS :: MacOS X',
63 'Operating System :: Microsoft :: Windows',
64 'Operating System :: POSIX :: Linux',
65 'Programming Language :: Python :: 3',
66 'Programming Language :: Python :: 3.5',
67 'Programming Language :: Python :: 3.6',
68 'Programming Language :: Python :: 3.7',
69 'Topic :: Scientific/Engineering :: Bio-Informatics',
70 'Topic :: Scientific/Engineering :: Visualization',
71 ],
72 )
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -50,6 +50,7 @@
],
),
packages=find_packages(),
+ include_package_data=True,
entry_points=dict(console_scripts=['scanpy=scanpy.cli:console_main']),
zip_safe=False,
classifiers=[
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -50,6 +50,7 @@\n ],\n ),\n packages=find_packages(),\n+ include_package_data=True,\n entry_points=dict(console_scripts=['scanpy=scanpy.cli:console_main']),\n zip_safe=False,\n classifiers=[\n", "issue": "`datasets.pbmc68k_reduced` isn't contained in the pypi package anymore\nThis still works in `1.4.4.post1`. It's very likely caused by changes to `setup.py`. I experienced similar problems before and fixed them via `package_data`. But this got removed. It's probably only a problem for the source-based installs.\r\n\r\nhttps://github.com/theislab/scanpy/commit/881f0bef31cdfe0df7333641dc847a60894b5c41#diff-2eeaed663bd0d25b7e608891384b7298\r\n\r\n```\r\n>>> import scanpy\r\n>>> scanpy.__version__\r\n<Version('1.4.5.post2')>\r\n>>> scanpy.datasets.pbmc68k_reduced()\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/alexwolf/miniconda3/lib/python3.6/site-packages/scanpy/datasets/__init__.py\", line 239, in pbmc68k_reduced\r\n return read(filename)\r\n File \"/Users/alexwolf/miniconda3/lib/python3.6/site-packages/scanpy/readwrite.py\", line 114, in read\r\n **kwargs,\r\n File \"/Users/alexwolf/miniconda3/lib/python3.6/site-packages/scanpy/readwrite.py\", line 524, in _read\r\n return read_h5ad(filename, backed=backed)\r\n File \"/Users/alexwolf/miniconda3/lib/python3.6/site-packages/anndata/readwrite/read.py\", line 447, in read_h5ad\r\n constructor_args = _read_args_from_h5ad(filename=filename, chunk_size=chunk_size)\r\n File \"/Users/alexwolf/miniconda3/lib/python3.6/site-packages/anndata/readwrite/read.py\", line 481, in _read_args_from_h5ad\r\n f = h5py.File(filename, 'r')\r\n File \"/Users/alexwolf/miniconda3/lib/python3.6/site-packages/anndata/h5py/h5sparse.py\", line 162, in __init__\r\n **kwds,\r\n File \"/Users/alexwolf/miniconda3/lib/python3.6/site-packages/h5py/_hl/files.py\", line 312, in __init__\r\n fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)\r\n File \"/Users/alexwolf/miniconda3/lib/python3.6/site-packages/h5py/_hl/files.py\", line 142, in make_fid\r\n fid = h5f.open(name, flags, fapl=fapl)\r\n File \"h5py/_objects.pyx\", line 54, in h5py._objects.with_phil.wrapper\r\n File \"h5py/_objects.pyx\", line 55, in h5py._objects.with_phil.wrapper\r\n File \"h5py/h5f.pyx\", line 78, in h5py.h5f.open\r\n```\n", "before_files": [{"content": "import sys\n\nif sys.version_info < (3, 6):\n sys.exit('scanpy requires Python >= 3.6')\nfrom pathlib import Path\n\nfrom setuptools import setup, find_packages\n\n\ntry:\n from scanpy import __author__, __email__\nexcept ImportError: # Deps not yet installed\n __author__ = __email__ = ''\n\nsetup(\n name='scanpy',\n use_scm_version=True,\n setup_requires=['setuptools_scm'],\n description='Single-Cell Analysis in Python.',\n long_description=Path('README.rst').read_text('utf-8'),\n url='http://github.com/theislab/scanpy',\n author=__author__,\n author_email=__email__,\n license='BSD',\n python_requires='>=3.6',\n install_requires=[\n l.strip() for l in Path('requirements.txt').read_text('utf-8').splitlines()\n ],\n extras_require=dict(\n louvain=['python-igraph', 'louvain>=0.6'],\n leiden=['python-igraph', 'leidenalg'],\n bbknn=['bbknn'],\n rapids=['cudf', 'cuml', 'cugraph'],\n magic=['magic-impute>=2.0'],\n doc=[\n 'sphinx',\n 'sphinx_rtd_theme',\n 'sphinx_autodoc_typehints',\n 'scanpydoc>=0.4.3',\n 'typing_extensions; python_version < \"3.8\"', # for `Literal`\n ],\n test=[\n 'pytest>=4.4',\n 'dask[array]',\n 'fsspec',\n 'zappy',\n 'zarr',\n 'black',\n 'profimp',\n ],\n ),\n packages=find_packages(),\n entry_points=dict(console_scripts=['scanpy=scanpy.cli:console_main']),\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Framework :: Jupyter',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'Natural Language :: English',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Bio-Informatics',\n 'Topic :: Scientific/Engineering :: Visualization',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "import sys\n\nif sys.version_info < (3, 6):\n sys.exit('scanpy requires Python >= 3.6')\nfrom pathlib import Path\n\nfrom setuptools import setup, find_packages\n\n\ntry:\n from scanpy import __author__, __email__\nexcept ImportError: # Deps not yet installed\n __author__ = __email__ = ''\n\nsetup(\n name='scanpy',\n use_scm_version=True,\n setup_requires=['setuptools_scm'],\n description='Single-Cell Analysis in Python.',\n long_description=Path('README.rst').read_text('utf-8'),\n url='http://github.com/theislab/scanpy',\n author=__author__,\n author_email=__email__,\n license='BSD',\n python_requires='>=3.6',\n install_requires=[\n l.strip() for l in Path('requirements.txt').read_text('utf-8').splitlines()\n ],\n extras_require=dict(\n louvain=['python-igraph', 'louvain>=0.6'],\n leiden=['python-igraph', 'leidenalg'],\n bbknn=['bbknn'],\n rapids=['cudf', 'cuml', 'cugraph'],\n magic=['magic-impute>=2.0'],\n doc=[\n 'sphinx',\n 'sphinx_rtd_theme',\n 'sphinx_autodoc_typehints',\n 'scanpydoc>=0.4.3',\n 'typing_extensions; python_version < \"3.8\"', # for `Literal`\n ],\n test=[\n 'pytest>=4.4',\n 'dask[array]',\n 'fsspec',\n 'zappy',\n 'zarr',\n 'black',\n 'profimp',\n ],\n ),\n packages=find_packages(),\n include_package_data=True,\n entry_points=dict(console_scripts=['scanpy=scanpy.cli:console_main']),\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Framework :: Jupyter',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'Natural Language :: English',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Bio-Informatics',\n 'Topic :: Scientific/Engineering :: Visualization',\n ],\n)\n", "path": "setup.py"}]}
| 1,639 | 73 |
gh_patches_debug_30970
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-1512
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Check if ThreadLocalRuntimeContext can be removed since python3.4 support is dropped
https://github.com/open-telemetry/opentelemetry-python/blob/master/opentelemetry-api/src/opentelemetry/context/threadlocal_context.py#L21
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-api/src/opentelemetry/context/threadlocal_context.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import threading
16
17 from opentelemetry.context.context import Context, RuntimeContext
18
19
20 class ThreadLocalRuntimeContext(RuntimeContext):
21 """An implementation of the RuntimeContext interface
22 which uses thread-local storage under the hood. This
23 implementation is available for usage with Python 3.4.
24 """
25
26 class Token:
27 def __init__(self, context: Context) -> None:
28 self._context = context
29
30 _CONTEXT_KEY = "current_context"
31
32 def __init__(self) -> None:
33 self._current_context = threading.local()
34
35 def attach(self, context: Context) -> object:
36 """See `opentelemetry.context.RuntimeContext.attach`."""
37 current = self.get_current()
38 setattr(self._current_context, self._CONTEXT_KEY, context)
39 return self.Token(current)
40
41 def get_current(self) -> Context:
42 """See `opentelemetry.context.RuntimeContext.get_current`."""
43 if not hasattr(self._current_context, self._CONTEXT_KEY):
44 setattr(
45 self._current_context, self._CONTEXT_KEY, Context(),
46 )
47 context = getattr(
48 self._current_context, self._CONTEXT_KEY
49 ) # type: Context
50 return context
51
52 def detach(self, token: object) -> None:
53 """See `opentelemetry.context.RuntimeContext.detach`."""
54 if not isinstance(token, self.Token):
55 raise ValueError("invalid token")
56 # pylint: disable=protected-access
57 setattr(self._current_context, self._CONTEXT_KEY, token._context)
58
59
60 __all__ = ["ThreadLocalRuntimeContext"]
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opentelemetry-api/src/opentelemetry/context/threadlocal_context.py b/opentelemetry-api/src/opentelemetry/context/threadlocal_context.py
deleted file mode 100644
--- a/opentelemetry-api/src/opentelemetry/context/threadlocal_context.py
+++ /dev/null
@@ -1,60 +0,0 @@
-# Copyright The OpenTelemetry Authors
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import threading
-
-from opentelemetry.context.context import Context, RuntimeContext
-
-
-class ThreadLocalRuntimeContext(RuntimeContext):
- """An implementation of the RuntimeContext interface
- which uses thread-local storage under the hood. This
- implementation is available for usage with Python 3.4.
- """
-
- class Token:
- def __init__(self, context: Context) -> None:
- self._context = context
-
- _CONTEXT_KEY = "current_context"
-
- def __init__(self) -> None:
- self._current_context = threading.local()
-
- def attach(self, context: Context) -> object:
- """See `opentelemetry.context.RuntimeContext.attach`."""
- current = self.get_current()
- setattr(self._current_context, self._CONTEXT_KEY, context)
- return self.Token(current)
-
- def get_current(self) -> Context:
- """See `opentelemetry.context.RuntimeContext.get_current`."""
- if not hasattr(self._current_context, self._CONTEXT_KEY):
- setattr(
- self._current_context, self._CONTEXT_KEY, Context(),
- )
- context = getattr(
- self._current_context, self._CONTEXT_KEY
- ) # type: Context
- return context
-
- def detach(self, token: object) -> None:
- """See `opentelemetry.context.RuntimeContext.detach`."""
- if not isinstance(token, self.Token):
- raise ValueError("invalid token")
- # pylint: disable=protected-access
- setattr(self._current_context, self._CONTEXT_KEY, token._context)
-
-
-__all__ = ["ThreadLocalRuntimeContext"]
|
{"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/context/threadlocal_context.py b/opentelemetry-api/src/opentelemetry/context/threadlocal_context.py\ndeleted file mode 100644\n--- a/opentelemetry-api/src/opentelemetry/context/threadlocal_context.py\n+++ /dev/null\n@@ -1,60 +0,0 @@\n-# Copyright The OpenTelemetry Authors\n-#\n-# Licensed under the Apache License, Version 2.0 (the \"License\");\n-# you may not use this file except in compliance with the License.\n-# You may obtain a copy of the License at\n-#\n-# http://www.apache.org/licenses/LICENSE-2.0\n-#\n-# Unless required by applicable law or agreed to in writing, software\n-# distributed under the License is distributed on an \"AS IS\" BASIS,\n-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n-# See the License for the specific language governing permissions and\n-# limitations under the License.\n-\n-import threading\n-\n-from opentelemetry.context.context import Context, RuntimeContext\n-\n-\n-class ThreadLocalRuntimeContext(RuntimeContext):\n- \"\"\"An implementation of the RuntimeContext interface\n- which uses thread-local storage under the hood. This\n- implementation is available for usage with Python 3.4.\n- \"\"\"\n-\n- class Token:\n- def __init__(self, context: Context) -> None:\n- self._context = context\n-\n- _CONTEXT_KEY = \"current_context\"\n-\n- def __init__(self) -> None:\n- self._current_context = threading.local()\n-\n- def attach(self, context: Context) -> object:\n- \"\"\"See `opentelemetry.context.RuntimeContext.attach`.\"\"\"\n- current = self.get_current()\n- setattr(self._current_context, self._CONTEXT_KEY, context)\n- return self.Token(current)\n-\n- def get_current(self) -> Context:\n- \"\"\"See `opentelemetry.context.RuntimeContext.get_current`.\"\"\"\n- if not hasattr(self._current_context, self._CONTEXT_KEY):\n- setattr(\n- self._current_context, self._CONTEXT_KEY, Context(),\n- )\n- context = getattr(\n- self._current_context, self._CONTEXT_KEY\n- ) # type: Context\n- return context\n-\n- def detach(self, token: object) -> None:\n- \"\"\"See `opentelemetry.context.RuntimeContext.detach`.\"\"\"\n- if not isinstance(token, self.Token):\n- raise ValueError(\"invalid token\")\n- # pylint: disable=protected-access\n- setattr(self._current_context, self._CONTEXT_KEY, token._context)\n-\n-\n-__all__ = [\"ThreadLocalRuntimeContext\"]\n", "issue": "Check if ThreadLocalRuntimeContext can be removed since python3.4 support is dropped\nhttps://github.com/open-telemetry/opentelemetry-python/blob/master/opentelemetry-api/src/opentelemetry/context/threadlocal_context.py#L21\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport threading\n\nfrom opentelemetry.context.context import Context, RuntimeContext\n\n\nclass ThreadLocalRuntimeContext(RuntimeContext):\n \"\"\"An implementation of the RuntimeContext interface\n which uses thread-local storage under the hood. This\n implementation is available for usage with Python 3.4.\n \"\"\"\n\n class Token:\n def __init__(self, context: Context) -> None:\n self._context = context\n\n _CONTEXT_KEY = \"current_context\"\n\n def __init__(self) -> None:\n self._current_context = threading.local()\n\n def attach(self, context: Context) -> object:\n \"\"\"See `opentelemetry.context.RuntimeContext.attach`.\"\"\"\n current = self.get_current()\n setattr(self._current_context, self._CONTEXT_KEY, context)\n return self.Token(current)\n\n def get_current(self) -> Context:\n \"\"\"See `opentelemetry.context.RuntimeContext.get_current`.\"\"\"\n if not hasattr(self._current_context, self._CONTEXT_KEY):\n setattr(\n self._current_context, self._CONTEXT_KEY, Context(),\n )\n context = getattr(\n self._current_context, self._CONTEXT_KEY\n ) # type: Context\n return context\n\n def detach(self, token: object) -> None:\n \"\"\"See `opentelemetry.context.RuntimeContext.detach`.\"\"\"\n if not isinstance(token, self.Token):\n raise ValueError(\"invalid token\")\n # pylint: disable=protected-access\n setattr(self._current_context, self._CONTEXT_KEY, token._context)\n\n\n__all__ = [\"ThreadLocalRuntimeContext\"]\n", "path": "opentelemetry-api/src/opentelemetry/context/threadlocal_context.py"}], "after_files": [{"content": null, "path": "opentelemetry-api/src/opentelemetry/context/threadlocal_context.py"}]}
| 893 | 584 |
gh_patches_debug_2310
|
rasdani/github-patches
|
git_diff
|
frappe__frappe-4935
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Auto Email Report Should be fetched from site_config
#### Expected Behaviour
Adding the `max_reports_per_user` field in site_config should be fetched for Auto Email Report limit instead of the static 3 used currently.
Reference: https://discuss.erpnext.com/t/auto-email-report-why-there-is-a-limit-of-3-user-field/23296/4
Frappé version: 10.0.16
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/email/doctype/auto_email_report/auto_email_report.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2015, Frappe Technologies and contributors
3 # For license information, please see license.txt
4
5 from __future__ import unicode_literals
6 import frappe, json
7 from frappe import _
8 from frappe.model.document import Document
9 from datetime import timedelta
10 import frappe.utils
11 from frappe.utils import now, global_date_format, format_time
12 from frappe.utils.xlsxutils import make_xlsx
13 from frappe.utils.csvutils import to_csv
14
15 max_reports_per_user = 3
16
17 class AutoEmailReport(Document):
18 def autoname(self):
19 self.name = _(self.report)
20
21 def validate(self):
22 self.validate_report_count()
23 self.validate_emails()
24 self.validate_report_format()
25
26 def validate_emails(self):
27 '''Cleanup list of emails'''
28 if ',' in self.email_to:
29 self.email_to.replace(',', '\n')
30
31 valid = []
32 for email in self.email_to.split():
33 if email:
34 frappe.utils.validate_email_add(email, True)
35 valid.append(email)
36
37 self.email_to = '\n'.join(valid)
38
39 def validate_report_count(self):
40 '''check that there are only 3 enabled reports per user'''
41 count = frappe.db.sql('select count(*) from `tabAuto Email Report` where user=%s and enabled=1', self.user)[0][0]
42 if count > max_reports_per_user + (-1 if self.flags.in_insert else 0):
43 frappe.throw(_('Only {0} emailed reports are allowed per user').format(max_reports_per_user))
44
45 def validate_report_format(self):
46 """ check if user has select correct report format """
47 valid_report_formats = ["HTML", "XLSX", "CSV"]
48 if self.format not in valid_report_formats:
49 frappe.throw(_("%s is not a valid report format. Report format should \
50 one of the following %s"%(frappe.bold(self.format), frappe.bold(", ".join(valid_report_formats)))))
51
52 def get_report_content(self):
53 '''Returns file in for the report in given format'''
54 report = frappe.get_doc('Report', self.report)
55
56 if self.report_type=='Report Builder' and self.data_modified_till:
57 self.filters = json.loads(self.filters) if self.filters else {}
58 self.filters['modified'] = ('>', frappe.utils.now_datetime() - timedelta(hours=self.data_modified_till))
59
60 columns, data = report.get_data(limit=self.no_of_rows or 100, user = self.user,
61 filters = self.filters, as_dict=True)
62
63 # add serial numbers
64 columns.insert(0, frappe._dict(fieldname='idx', label='', width='30px'))
65 for i in range(len(data)):
66 data[i]['idx'] = i+1
67
68 if len(data)==0 and self.send_if_data:
69 return None
70
71 if self.format == 'HTML':
72 return self.get_html_table(columns, data)
73
74 elif self.format == 'XLSX':
75 spreadsheet_data = self.get_spreadsheet_data(columns, data)
76 xlsx_file = make_xlsx(spreadsheet_data, "Auto Email Report")
77 return xlsx_file.getvalue()
78
79 elif self.format == 'CSV':
80 spreadsheet_data = self.get_spreadsheet_data(columns, data)
81 return to_csv(spreadsheet_data)
82
83 else:
84 frappe.throw(_('Invalid Output Format'))
85
86 def get_html_table(self, columns=None, data=None):
87
88 date_time = global_date_format(now()) + ' ' + format_time(now())
89 report_doctype = frappe.db.get_value('Report', self.report, 'ref_doctype')
90
91 return frappe.render_template('frappe/templates/emails/auto_email_report.html', {
92 'title': self.name,
93 'description': self.description,
94 'date_time': date_time,
95 'columns': columns,
96 'data': data,
97 'report_url': frappe.utils.get_url_to_report(self.report,
98 self.report_type, report_doctype),
99 'report_name': self.report,
100 'edit_report_settings': frappe.utils.get_link_to_form('Auto Email Report',
101 self.name)
102 })
103
104 @staticmethod
105 def get_spreadsheet_data(columns, data):
106 out = [[_(df.label) for df in columns], ]
107 for row in data:
108 new_row = []
109 out.append(new_row)
110 for df in columns:
111 new_row.append(frappe.format(row[df.fieldname], df, row))
112
113 return out
114
115 def get_file_name(self):
116 return "{0}.{1}".format(self.report.replace(" ", "-").replace("/", "-"), self.format.lower())
117
118 def send(self):
119 if self.filter_meta and not self.filters:
120 frappe.throw(_("Please set filters value in Report Filter table."))
121
122 data = self.get_report_content()
123 if not data:
124 return
125
126 attachments = None
127 if self.format == "HTML":
128 message = data
129 else:
130 message = self.get_html_table()
131
132 if not self.format=='HTML':
133 attachments = [{
134 'fname': self.get_file_name(),
135 'fcontent': data
136 }]
137
138 frappe.sendmail(
139 recipients = self.email_to.split(),
140 subject = self.name,
141 message = message,
142 attachments = attachments,
143 reference_doctype = self.doctype,
144 reference_name = self.name
145 )
146
147 @frappe.whitelist()
148 def download(name):
149 '''Download report locally'''
150 auto_email_report = frappe.get_doc('Auto Email Report', name)
151 auto_email_report.check_permission()
152 data = auto_email_report.get_report_content()
153
154 if not data:
155 frappe.msgprint(_('No Data'))
156 return
157
158 frappe.local.response.filecontent = data
159 frappe.local.response.type = "download"
160 frappe.local.response.filename = auto_email_report.get_file_name()
161
162 @frappe.whitelist()
163 def send_now(name):
164 '''Send Auto Email report now'''
165 auto_email_report = frappe.get_doc('Auto Email Report', name)
166 auto_email_report.check_permission()
167 auto_email_report.send()
168
169 def send_daily():
170 '''Check reports to be sent daily'''
171 now = frappe.utils.now_datetime()
172 for report in frappe.get_all('Auto Email Report',
173 {'enabled': 1, 'frequency': ('in', ('Daily', 'Weekly'))}):
174 auto_email_report = frappe.get_doc('Auto Email Report', report.name)
175
176 # if not correct weekday, skip
177 if auto_email_report.frequency=='Weekly':
178 if now.weekday()!={'Monday':0,'Tuesday':1,'Wednesday':2,
179 'Thursday':3,'Friday':4,'Saturday':5,'Sunday':6}[auto_email_report.day_of_week]:
180 continue
181
182 auto_email_report.send()
183
184
185 def send_monthly():
186 '''Check reports to be sent monthly'''
187 for report in frappe.get_all('Auto Email Report', {'enabled': 1, 'frequency': 'Monthly'}):
188 frappe.get_doc('Auto Email Report', report.name).send()
189
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/frappe/email/doctype/auto_email_report/auto_email_report.py b/frappe/email/doctype/auto_email_report/auto_email_report.py
--- a/frappe/email/doctype/auto_email_report/auto_email_report.py
+++ b/frappe/email/doctype/auto_email_report/auto_email_report.py
@@ -12,7 +12,7 @@
from frappe.utils.xlsxutils import make_xlsx
from frappe.utils.csvutils import to_csv
-max_reports_per_user = 3
+max_reports_per_user = frappe.local.conf.max_reports_per_user or 3
class AutoEmailReport(Document):
def autoname(self):
|
{"golden_diff": "diff --git a/frappe/email/doctype/auto_email_report/auto_email_report.py b/frappe/email/doctype/auto_email_report/auto_email_report.py\n--- a/frappe/email/doctype/auto_email_report/auto_email_report.py\n+++ b/frappe/email/doctype/auto_email_report/auto_email_report.py\n@@ -12,7 +12,7 @@\n from frappe.utils.xlsxutils import make_xlsx\n from frappe.utils.csvutils import to_csv\n \n-max_reports_per_user = 3\n+max_reports_per_user = frappe.local.conf.max_reports_per_user or 3\n \n class AutoEmailReport(Document):\n \tdef autoname(self):\n", "issue": "Auto Email Report Should be fetched from site_config\n#### Expected Behaviour\r\nAdding the `max_reports_per_user` field in site_config should be fetched for Auto Email Report limit instead of the static 3 used currently.\r\n\r\nReference: https://discuss.erpnext.com/t/auto-email-report-why-there-is-a-limit-of-3-user-field/23296/4\r\n\r\nFrapp\u00e9 version: 10.0.16\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2015, Frappe Technologies and contributors\n# For license information, please see license.txt\n\nfrom __future__ import unicode_literals\nimport frappe, json\nfrom frappe import _\nfrom frappe.model.document import Document\nfrom datetime import timedelta\nimport frappe.utils\nfrom frappe.utils import now, global_date_format, format_time\nfrom frappe.utils.xlsxutils import make_xlsx\nfrom frappe.utils.csvutils import to_csv\n\nmax_reports_per_user = 3\n\nclass AutoEmailReport(Document):\n\tdef autoname(self):\n\t\tself.name = _(self.report)\n\n\tdef validate(self):\n\t\tself.validate_report_count()\n\t\tself.validate_emails()\n\t\tself.validate_report_format()\n\n\tdef validate_emails(self):\n\t\t'''Cleanup list of emails'''\n\t\tif ',' in self.email_to:\n\t\t\tself.email_to.replace(',', '\\n')\n\n\t\tvalid = []\n\t\tfor email in self.email_to.split():\n\t\t\tif email:\n\t\t\t\tfrappe.utils.validate_email_add(email, True)\n\t\t\t\tvalid.append(email)\n\n\t\tself.email_to = '\\n'.join(valid)\n\n\tdef validate_report_count(self):\n\t\t'''check that there are only 3 enabled reports per user'''\n\t\tcount = frappe.db.sql('select count(*) from `tabAuto Email Report` where user=%s and enabled=1', self.user)[0][0]\n\t\tif count > max_reports_per_user + (-1 if self.flags.in_insert else 0):\n\t\t\tfrappe.throw(_('Only {0} emailed reports are allowed per user').format(max_reports_per_user))\n\n\tdef validate_report_format(self):\n\t\t\"\"\" check if user has select correct report format \"\"\"\n\t\tvalid_report_formats = [\"HTML\", \"XLSX\", \"CSV\"]\n\t\tif self.format not in valid_report_formats:\n\t\t\tfrappe.throw(_(\"%s is not a valid report format. Report format should \\\n\t\t\t\tone of the following %s\"%(frappe.bold(self.format), frappe.bold(\", \".join(valid_report_formats)))))\n\n\tdef get_report_content(self):\n\t\t'''Returns file in for the report in given format'''\n\t\treport = frappe.get_doc('Report', self.report)\n\n\t\tif self.report_type=='Report Builder' and self.data_modified_till:\n\t\t\tself.filters = json.loads(self.filters) if self.filters else {}\n\t\t\tself.filters['modified'] = ('>', frappe.utils.now_datetime() - timedelta(hours=self.data_modified_till))\n\n\t\tcolumns, data = report.get_data(limit=self.no_of_rows or 100, user = self.user,\n\t\t\tfilters = self.filters, as_dict=True)\n\n\t\t# add serial numbers\n\t\tcolumns.insert(0, frappe._dict(fieldname='idx', label='', width='30px'))\n\t\tfor i in range(len(data)):\n\t\t\tdata[i]['idx'] = i+1\n\n\t\tif len(data)==0 and self.send_if_data:\n\t\t\treturn None\n\n\t\tif self.format == 'HTML':\n\t\t\treturn self.get_html_table(columns, data)\n\n\t\telif self.format == 'XLSX':\n\t\t\tspreadsheet_data = self.get_spreadsheet_data(columns, data)\n\t\t\txlsx_file = make_xlsx(spreadsheet_data, \"Auto Email Report\")\n\t\t\treturn xlsx_file.getvalue()\n\n\t\telif self.format == 'CSV':\n\t\t\tspreadsheet_data = self.get_spreadsheet_data(columns, data)\n\t\t\treturn to_csv(spreadsheet_data)\n\n\t\telse:\n\t\t\tfrappe.throw(_('Invalid Output Format'))\n\n\tdef get_html_table(self, columns=None, data=None):\n\n\t\tdate_time = global_date_format(now()) + ' ' + format_time(now())\n\t\treport_doctype = frappe.db.get_value('Report', self.report, 'ref_doctype')\n\n\t\treturn frappe.render_template('frappe/templates/emails/auto_email_report.html', {\n\t\t\t'title': self.name,\n\t\t\t'description': self.description,\n\t\t\t'date_time': date_time,\n\t\t\t'columns': columns,\n\t\t\t'data': data,\n\t\t\t'report_url': frappe.utils.get_url_to_report(self.report,\n\t\t\t\tself.report_type, report_doctype),\n\t\t\t'report_name': self.report,\n\t\t\t'edit_report_settings': frappe.utils.get_link_to_form('Auto Email Report',\n\t\t\t\tself.name)\n\t\t})\n\n\t@staticmethod\n\tdef get_spreadsheet_data(columns, data):\n\t\tout = [[_(df.label) for df in columns], ]\n\t\tfor row in data:\n\t\t\tnew_row = []\n\t\t\tout.append(new_row)\n\t\t\tfor df in columns:\n\t\t\t\tnew_row.append(frappe.format(row[df.fieldname], df, row))\n\n\t\treturn out\n\n\tdef get_file_name(self):\n\t\treturn \"{0}.{1}\".format(self.report.replace(\" \", \"-\").replace(\"/\", \"-\"), self.format.lower())\n\n\tdef send(self):\n\t\tif self.filter_meta and not self.filters:\n\t\t\tfrappe.throw(_(\"Please set filters value in Report Filter table.\"))\n\n\t\tdata = self.get_report_content()\n\t\tif not data:\n\t\t\treturn\n\n\t\tattachments = None\n\t\tif self.format == \"HTML\":\n\t\t\tmessage = data\n\t\telse:\n\t\t\tmessage = self.get_html_table()\n\n\t\tif not self.format=='HTML':\n\t\t\tattachments = [{\n\t\t\t\t'fname': self.get_file_name(),\n\t\t\t\t'fcontent': data\n\t\t\t}]\n\n\t\tfrappe.sendmail(\n\t\t\trecipients = self.email_to.split(),\n\t\t\tsubject = self.name,\n\t\t\tmessage = message,\n\t\t\tattachments = attachments,\n\t\t\treference_doctype = self.doctype,\n\t\t\treference_name = self.name\n\t\t)\n\[email protected]()\ndef download(name):\n\t'''Download report locally'''\n\tauto_email_report = frappe.get_doc('Auto Email Report', name)\n\tauto_email_report.check_permission()\n\tdata = auto_email_report.get_report_content()\n\n\tif not data:\n\t\tfrappe.msgprint(_('No Data'))\n\t\treturn\n\n\tfrappe.local.response.filecontent = data\n\tfrappe.local.response.type = \"download\"\n\tfrappe.local.response.filename = auto_email_report.get_file_name()\n\[email protected]()\ndef send_now(name):\n\t'''Send Auto Email report now'''\n\tauto_email_report = frappe.get_doc('Auto Email Report', name)\n\tauto_email_report.check_permission()\n\tauto_email_report.send()\n\ndef send_daily():\n\t'''Check reports to be sent daily'''\n\tnow = frappe.utils.now_datetime()\n\tfor report in frappe.get_all('Auto Email Report',\n\t\t{'enabled': 1, 'frequency': ('in', ('Daily', 'Weekly'))}):\n\t\tauto_email_report = frappe.get_doc('Auto Email Report', report.name)\n\n\t\t# if not correct weekday, skip\n\t\tif auto_email_report.frequency=='Weekly':\n\t\t\tif now.weekday()!={'Monday':0,'Tuesday':1,'Wednesday':2,\n\t\t\t\t'Thursday':3,'Friday':4,'Saturday':5,'Sunday':6}[auto_email_report.day_of_week]:\n\t\t\t\tcontinue\n\n\t\tauto_email_report.send()\n\n\ndef send_monthly():\n\t'''Check reports to be sent monthly'''\n\tfor report in frappe.get_all('Auto Email Report', {'enabled': 1, 'frequency': 'Monthly'}):\n\t\tfrappe.get_doc('Auto Email Report', report.name).send()\n", "path": "frappe/email/doctype/auto_email_report/auto_email_report.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2015, Frappe Technologies and contributors\n# For license information, please see license.txt\n\nfrom __future__ import unicode_literals\nimport frappe, json\nfrom frappe import _\nfrom frappe.model.document import Document\nfrom datetime import timedelta\nimport frappe.utils\nfrom frappe.utils import now, global_date_format, format_time\nfrom frappe.utils.xlsxutils import make_xlsx\nfrom frappe.utils.csvutils import to_csv\n\nmax_reports_per_user = frappe.local.conf.max_reports_per_user or 3\n\nclass AutoEmailReport(Document):\n\tdef autoname(self):\n\t\tself.name = _(self.report)\n\n\tdef validate(self):\n\t\tself.validate_report_count()\n\t\tself.validate_emails()\n\t\tself.validate_report_format()\n\n\tdef validate_emails(self):\n\t\t'''Cleanup list of emails'''\n\t\tif ',' in self.email_to:\n\t\t\tself.email_to.replace(',', '\\n')\n\n\t\tvalid = []\n\t\tfor email in self.email_to.split():\n\t\t\tif email:\n\t\t\t\tfrappe.utils.validate_email_add(email, True)\n\t\t\t\tvalid.append(email)\n\n\t\tself.email_to = '\\n'.join(valid)\n\n\tdef validate_report_count(self):\n\t\t'''check that there are only 3 enabled reports per user'''\n\t\tcount = frappe.db.sql('select count(*) from `tabAuto Email Report` where user=%s and enabled=1', self.user)[0][0]\n\t\tif count > max_reports_per_user + (-1 if self.flags.in_insert else 0):\n\t\t\tfrappe.throw(_('Only {0} emailed reports are allowed per user').format(max_reports_per_user))\n\n\tdef validate_report_format(self):\n\t\t\"\"\" check if user has select correct report format \"\"\"\n\t\tvalid_report_formats = [\"HTML\", \"XLSX\", \"CSV\"]\n\t\tif self.format not in valid_report_formats:\n\t\t\tfrappe.throw(_(\"%s is not a valid report format. Report format should \\\n\t\t\t\tone of the following %s\"%(frappe.bold(self.format), frappe.bold(\", \".join(valid_report_formats)))))\n\n\tdef get_report_content(self):\n\t\t'''Returns file in for the report in given format'''\n\t\treport = frappe.get_doc('Report', self.report)\n\n\t\tif self.report_type=='Report Builder' and self.data_modified_till:\n\t\t\tself.filters = json.loads(self.filters) if self.filters else {}\n\t\t\tself.filters['modified'] = ('>', frappe.utils.now_datetime() - timedelta(hours=self.data_modified_till))\n\n\t\tcolumns, data = report.get_data(limit=self.no_of_rows or 100, user = self.user,\n\t\t\tfilters = self.filters, as_dict=True)\n\n\t\t# add serial numbers\n\t\tcolumns.insert(0, frappe._dict(fieldname='idx', label='', width='30px'))\n\t\tfor i in range(len(data)):\n\t\t\tdata[i]['idx'] = i+1\n\n\t\tif len(data)==0 and self.send_if_data:\n\t\t\treturn None\n\n\t\tif self.format == 'HTML':\n\t\t\treturn self.get_html_table(columns, data)\n\n\t\telif self.format == 'XLSX':\n\t\t\tspreadsheet_data = self.get_spreadsheet_data(columns, data)\n\t\t\txlsx_file = make_xlsx(spreadsheet_data, \"Auto Email Report\")\n\t\t\treturn xlsx_file.getvalue()\n\n\t\telif self.format == 'CSV':\n\t\t\tspreadsheet_data = self.get_spreadsheet_data(columns, data)\n\t\t\treturn to_csv(spreadsheet_data)\n\n\t\telse:\n\t\t\tfrappe.throw(_('Invalid Output Format'))\n\n\tdef get_html_table(self, columns=None, data=None):\n\n\t\tdate_time = global_date_format(now()) + ' ' + format_time(now())\n\t\treport_doctype = frappe.db.get_value('Report', self.report, 'ref_doctype')\n\n\t\treturn frappe.render_template('frappe/templates/emails/auto_email_report.html', {\n\t\t\t'title': self.name,\n\t\t\t'description': self.description,\n\t\t\t'date_time': date_time,\n\t\t\t'columns': columns,\n\t\t\t'data': data,\n\t\t\t'report_url': frappe.utils.get_url_to_report(self.report,\n\t\t\t\tself.report_type, report_doctype),\n\t\t\t'report_name': self.report,\n\t\t\t'edit_report_settings': frappe.utils.get_link_to_form('Auto Email Report',\n\t\t\t\tself.name)\n\t\t})\n\n\t@staticmethod\n\tdef get_spreadsheet_data(columns, data):\n\t\tout = [[_(df.label) for df in columns], ]\n\t\tfor row in data:\n\t\t\tnew_row = []\n\t\t\tout.append(new_row)\n\t\t\tfor df in columns:\n\t\t\t\tnew_row.append(frappe.format(row[df.fieldname], df, row))\n\n\t\treturn out\n\n\tdef get_file_name(self):\n\t\treturn \"{0}.{1}\".format(self.report.replace(\" \", \"-\").replace(\"/\", \"-\"), self.format.lower())\n\n\tdef send(self):\n\t\tif self.filter_meta and not self.filters:\n\t\t\tfrappe.throw(_(\"Please set filters value in Report Filter table.\"))\n\n\t\tdata = self.get_report_content()\n\t\tif not data:\n\t\t\treturn\n\n\t\tattachments = None\n\t\tif self.format == \"HTML\":\n\t\t\tmessage = data\n\t\telse:\n\t\t\tmessage = self.get_html_table()\n\n\t\tif not self.format=='HTML':\n\t\t\tattachments = [{\n\t\t\t\t'fname': self.get_file_name(),\n\t\t\t\t'fcontent': data\n\t\t\t}]\n\n\t\tfrappe.sendmail(\n\t\t\trecipients = self.email_to.split(),\n\t\t\tsubject = self.name,\n\t\t\tmessage = message,\n\t\t\tattachments = attachments,\n\t\t\treference_doctype = self.doctype,\n\t\t\treference_name = self.name\n\t\t)\n\[email protected]()\ndef download(name):\n\t'''Download report locally'''\n\tauto_email_report = frappe.get_doc('Auto Email Report', name)\n\tauto_email_report.check_permission()\n\tdata = auto_email_report.get_report_content()\n\n\tif not data:\n\t\tfrappe.msgprint(_('No Data'))\n\t\treturn\n\n\tfrappe.local.response.filecontent = data\n\tfrappe.local.response.type = \"download\"\n\tfrappe.local.response.filename = auto_email_report.get_file_name()\n\[email protected]()\ndef send_now(name):\n\t'''Send Auto Email report now'''\n\tauto_email_report = frappe.get_doc('Auto Email Report', name)\n\tauto_email_report.check_permission()\n\tauto_email_report.send()\n\ndef send_daily():\n\t'''Check reports to be sent daily'''\n\tnow = frappe.utils.now_datetime()\n\tfor report in frappe.get_all('Auto Email Report',\n\t\t{'enabled': 1, 'frequency': ('in', ('Daily', 'Weekly'))}):\n\t\tauto_email_report = frappe.get_doc('Auto Email Report', report.name)\n\n\t\t# if not correct weekday, skip\n\t\tif auto_email_report.frequency=='Weekly':\n\t\t\tif now.weekday()!={'Monday':0,'Tuesday':1,'Wednesday':2,\n\t\t\t\t'Thursday':3,'Friday':4,'Saturday':5,'Sunday':6}[auto_email_report.day_of_week]:\n\t\t\t\tcontinue\n\n\t\tauto_email_report.send()\n\n\ndef send_monthly():\n\t'''Check reports to be sent monthly'''\n\tfor report in frappe.get_all('Auto Email Report', {'enabled': 1, 'frequency': 'Monthly'}):\n\t\tfrappe.get_doc('Auto Email Report', report.name).send()\n", "path": "frappe/email/doctype/auto_email_report/auto_email_report.py"}]}
| 2,381 | 132 |
gh_patches_debug_30159
|
rasdani/github-patches
|
git_diff
|
elastic__apm-agent-python-1129
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Redis: Support publish and subscribe methods
While running the examples from the blog post [How to instrument a polyglot microservices application with Elastic APM](https://www.elastic.co/blog/how-to-instrument-a-polyglot-microservices-application-with-elastic-apm), I noticed Redis doesn't show up on the service map as being connected to the Python service in this example.
It looks like that's because [according to our documentation we don't have the `publish` and `subscribe` methods instrumented](https://www.elastic.co/guide/en/apm/agent/python/5.x/supported-technologies.html#automatic-instrumentation-db-redis).
If these methods were instrumented we would be able to see Redis on service maps for applications that are using it for pub/sub.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/instrumentation/packages/asyncio/aioredis.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from __future__ import absolute_import
32
33 from elasticapm.contrib.asyncio.traces import async_capture_span
34 from elasticapm.instrumentation.packages.base import AbstractInstrumentedModule
35 from elasticapm.traces import execution_context
36
37
38 class RedisConnectionPoolInstrumentation(AbstractInstrumentedModule):
39 name = "aioredis"
40
41 instrument_list = [("aioredis.pool", "ConnectionsPool.execute")]
42
43 def call(self, module, method, wrapped, instance, args, kwargs):
44 if len(args) > 0:
45 wrapped_name = args[0].decode()
46 else:
47 wrapped_name = self.get_wrapped_name(wrapped, instance, method)
48
49 with async_capture_span(
50 wrapped_name, span_type="db", span_subtype="redis", span_action="query", leaf=True
51 ) as span:
52 span.context["destination"] = _get_destination_info(instance)
53
54 return wrapped(*args, **kwargs)
55
56
57 class RedisPipelineInstrumentation(AbstractInstrumentedModule):
58 name = "aioredis"
59
60 instrument_list = [("aioredis.commands.transaction", "Pipeline.execute")]
61
62 def call(self, module, method, wrapped, instance, args, kwargs):
63 wrapped_name = self.get_wrapped_name(wrapped, instance, method)
64
65 with async_capture_span(
66 wrapped_name, span_type="db", span_subtype="redis", span_action="query", leaf=True
67 ) as span:
68 span.context["destination"] = _get_destination_info(instance)
69
70 return wrapped(*args, **kwargs)
71
72
73 class RedisConnectionInstrumentation(AbstractInstrumentedModule):
74 name = "aioredis"
75
76 instrument_list = (("aioredis.connection", "RedisConnection.execute"),)
77
78 def call(self, module, method, wrapped, instance, args, kwargs):
79 span = execution_context.get_span()
80 if span and span.subtype == "aioredis":
81 span.context["destination"] = _get_destination_info(instance)
82 return wrapped(*args, **kwargs)
83
84
85 def _get_destination_info(connection):
86 destination_info = {"service": {"name": "aioredis", "resource": "redis", "type": "db"}}
87
88 if hasattr(connection, "_pool_or_conn"):
89 destination_info["port"] = connection._pool_or_conn.address[1]
90 destination_info["address"] = connection._pool_or_conn.address[0]
91 else:
92 destination_info["port"] = connection.address[1]
93 destination_info["address"] = connection.address[0]
94
95 return destination_info
96
```
Path: `elasticapm/instrumentation/packages/redis.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from __future__ import absolute_import
32
33 from elasticapm.instrumentation.packages.base import AbstractInstrumentedModule
34 from elasticapm.traces import capture_span, execution_context
35
36
37 class Redis3CheckMixin(object):
38 instrument_list_3 = []
39 instrument_list = []
40
41 def get_instrument_list(self):
42 try:
43 from redis import VERSION
44
45 if VERSION[0] >= 3:
46 return self.instrument_list_3
47 return self.instrument_list
48 except ImportError:
49 return self.instrument_list
50
51
52 class RedisInstrumentation(Redis3CheckMixin, AbstractInstrumentedModule):
53 name = "redis"
54
55 # no need to instrument StrictRedis in redis-py >= 3.0
56 instrument_list_3 = [("redis.client", "Redis.execute_command")]
57 instrument_list = [("redis.client", "Redis.execute_command"), ("redis.client", "StrictRedis.execute_command")]
58
59 def call(self, module, method, wrapped, instance, args, kwargs):
60 if len(args) > 0:
61 wrapped_name = str(args[0])
62 else:
63 wrapped_name = self.get_wrapped_name(wrapped, instance, method)
64
65 with capture_span(wrapped_name, span_type="db", span_subtype="redis", span_action="query", leaf=True):
66 return wrapped(*args, **kwargs)
67
68
69 class RedisPipelineInstrumentation(Redis3CheckMixin, AbstractInstrumentedModule):
70 name = "redis"
71
72 # BasePipeline has been renamed to Pipeline in redis-py 3
73 instrument_list_3 = [("redis.client", "Pipeline.execute")]
74 instrument_list = [("redis.client", "BasePipeline.execute")]
75
76 def call(self, module, method, wrapped, instance, args, kwargs):
77 wrapped_name = self.get_wrapped_name(wrapped, instance, method)
78 with capture_span(wrapped_name, span_type="db", span_subtype="redis", span_action="query", leaf=True):
79 return wrapped(*args, **kwargs)
80
81
82 class RedisConnectionInstrumentation(AbstractInstrumentedModule):
83 name = "redis"
84
85 instrument_list = (("redis.connection", "Connection.send_packed_command"),)
86
87 def call(self, module, method, wrapped, instance, args, kwargs):
88 span = execution_context.get_span()
89 if span and span.subtype == "redis":
90 span.context["destination"] = get_destination_info(instance)
91 return wrapped(*args, **kwargs)
92
93
94 def get_destination_info(connection):
95 destination_info = {"service": {"name": "redis", "resource": "redis", "type": "db"}}
96 if hasattr(connection, "port"):
97 destination_info["port"] = connection.port
98 destination_info["address"] = connection.host
99 elif hasattr(connection, "path"):
100 destination_info["port"] = None
101 destination_info["address"] = "unix://" + connection.path
102 return destination_info
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/elasticapm/instrumentation/packages/asyncio/aioredis.py b/elasticapm/instrumentation/packages/asyncio/aioredis.py
--- a/elasticapm/instrumentation/packages/asyncio/aioredis.py
+++ b/elasticapm/instrumentation/packages/asyncio/aioredis.py
@@ -38,7 +38,8 @@
class RedisConnectionPoolInstrumentation(AbstractInstrumentedModule):
name = "aioredis"
- instrument_list = [("aioredis.pool", "ConnectionsPool.execute")]
+ instrument_list = [("aioredis.pool", "ConnectionsPool.execute"),
+ ("aioredis.pool", "ConnectionsPool.execute_pubsub")]
def call(self, module, method, wrapped, instance, args, kwargs):
if len(args) > 0:
@@ -73,7 +74,8 @@
class RedisConnectionInstrumentation(AbstractInstrumentedModule):
name = "aioredis"
- instrument_list = (("aioredis.connection", "RedisConnection.execute"),)
+ instrument_list = (("aioredis.connection", "RedisConnection.execute"),
+ ("aioredis.pool", "ConnectionsPool.execute_pubsub"))
def call(self, module, method, wrapped, instance, args, kwargs):
span = execution_context.get_span()
diff --git a/elasticapm/instrumentation/packages/redis.py b/elasticapm/instrumentation/packages/redis.py
--- a/elasticapm/instrumentation/packages/redis.py
+++ b/elasticapm/instrumentation/packages/redis.py
@@ -53,7 +53,7 @@
name = "redis"
# no need to instrument StrictRedis in redis-py >= 3.0
- instrument_list_3 = [("redis.client", "Redis.execute_command")]
+ instrument_list_3 = [("redis.client", "Redis.execute_command"), ("redis.client", "PubSub.execute_command")]
instrument_list = [("redis.client", "Redis.execute_command"), ("redis.client", "StrictRedis.execute_command")]
def call(self, module, method, wrapped, instance, args, kwargs):
|
{"golden_diff": "diff --git a/elasticapm/instrumentation/packages/asyncio/aioredis.py b/elasticapm/instrumentation/packages/asyncio/aioredis.py\n--- a/elasticapm/instrumentation/packages/asyncio/aioredis.py\n+++ b/elasticapm/instrumentation/packages/asyncio/aioredis.py\n@@ -38,7 +38,8 @@\n class RedisConnectionPoolInstrumentation(AbstractInstrumentedModule):\n name = \"aioredis\"\n \n- instrument_list = [(\"aioredis.pool\", \"ConnectionsPool.execute\")]\n+ instrument_list = [(\"aioredis.pool\", \"ConnectionsPool.execute\"),\n+ (\"aioredis.pool\", \"ConnectionsPool.execute_pubsub\")]\n \n def call(self, module, method, wrapped, instance, args, kwargs):\n if len(args) > 0:\n@@ -73,7 +74,8 @@\n class RedisConnectionInstrumentation(AbstractInstrumentedModule):\n name = \"aioredis\"\n \n- instrument_list = ((\"aioredis.connection\", \"RedisConnection.execute\"),)\n+ instrument_list = ((\"aioredis.connection\", \"RedisConnection.execute\"),\n+ (\"aioredis.pool\", \"ConnectionsPool.execute_pubsub\"))\n \n def call(self, module, method, wrapped, instance, args, kwargs):\n span = execution_context.get_span()\ndiff --git a/elasticapm/instrumentation/packages/redis.py b/elasticapm/instrumentation/packages/redis.py\n--- a/elasticapm/instrumentation/packages/redis.py\n+++ b/elasticapm/instrumentation/packages/redis.py\n@@ -53,7 +53,7 @@\n name = \"redis\"\n \n # no need to instrument StrictRedis in redis-py >= 3.0\n- instrument_list_3 = [(\"redis.client\", \"Redis.execute_command\")]\n+ instrument_list_3 = [(\"redis.client\", \"Redis.execute_command\"), (\"redis.client\", \"PubSub.execute_command\")]\n instrument_list = [(\"redis.client\", \"Redis.execute_command\"), (\"redis.client\", \"StrictRedis.execute_command\")]\n \n def call(self, module, method, wrapped, instance, args, kwargs):\n", "issue": "Redis: Support publish and subscribe methods\nWhile running the examples from the blog post [How to instrument a polyglot microservices application with Elastic APM](https://www.elastic.co/blog/how-to-instrument-a-polyglot-microservices-application-with-elastic-apm), I noticed Redis doesn't show up on the service map as being connected to the Python service in this example.\r\n\r\nIt looks like that's because [according to our documentation we don't have the `publish` and `subscribe` methods instrumented](https://www.elastic.co/guide/en/apm/agent/python/5.x/supported-technologies.html#automatic-instrumentation-db-redis).\r\n\r\nIf these methods were instrumented we would be able to see Redis on service maps for applications that are using it for pub/sub.\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom __future__ import absolute_import\n\nfrom elasticapm.contrib.asyncio.traces import async_capture_span\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import execution_context\n\n\nclass RedisConnectionPoolInstrumentation(AbstractInstrumentedModule):\n name = \"aioredis\"\n\n instrument_list = [(\"aioredis.pool\", \"ConnectionsPool.execute\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if len(args) > 0:\n wrapped_name = args[0].decode()\n else:\n wrapped_name = self.get_wrapped_name(wrapped, instance, method)\n\n with async_capture_span(\n wrapped_name, span_type=\"db\", span_subtype=\"redis\", span_action=\"query\", leaf=True\n ) as span:\n span.context[\"destination\"] = _get_destination_info(instance)\n\n return wrapped(*args, **kwargs)\n\n\nclass RedisPipelineInstrumentation(AbstractInstrumentedModule):\n name = \"aioredis\"\n\n instrument_list = [(\"aioredis.commands.transaction\", \"Pipeline.execute\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n wrapped_name = self.get_wrapped_name(wrapped, instance, method)\n\n with async_capture_span(\n wrapped_name, span_type=\"db\", span_subtype=\"redis\", span_action=\"query\", leaf=True\n ) as span:\n span.context[\"destination\"] = _get_destination_info(instance)\n\n return wrapped(*args, **kwargs)\n\n\nclass RedisConnectionInstrumentation(AbstractInstrumentedModule):\n name = \"aioredis\"\n\n instrument_list = ((\"aioredis.connection\", \"RedisConnection.execute\"),)\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n span = execution_context.get_span()\n if span and span.subtype == \"aioredis\":\n span.context[\"destination\"] = _get_destination_info(instance)\n return wrapped(*args, **kwargs)\n\n\ndef _get_destination_info(connection):\n destination_info = {\"service\": {\"name\": \"aioredis\", \"resource\": \"redis\", \"type\": \"db\"}}\n\n if hasattr(connection, \"_pool_or_conn\"):\n destination_info[\"port\"] = connection._pool_or_conn.address[1]\n destination_info[\"address\"] = connection._pool_or_conn.address[0]\n else:\n destination_info[\"port\"] = connection.address[1]\n destination_info[\"address\"] = connection.address[0]\n\n return destination_info\n", "path": "elasticapm/instrumentation/packages/asyncio/aioredis.py"}, {"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom __future__ import absolute_import\n\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import capture_span, execution_context\n\n\nclass Redis3CheckMixin(object):\n instrument_list_3 = []\n instrument_list = []\n\n def get_instrument_list(self):\n try:\n from redis import VERSION\n\n if VERSION[0] >= 3:\n return self.instrument_list_3\n return self.instrument_list\n except ImportError:\n return self.instrument_list\n\n\nclass RedisInstrumentation(Redis3CheckMixin, AbstractInstrumentedModule):\n name = \"redis\"\n\n # no need to instrument StrictRedis in redis-py >= 3.0\n instrument_list_3 = [(\"redis.client\", \"Redis.execute_command\")]\n instrument_list = [(\"redis.client\", \"Redis.execute_command\"), (\"redis.client\", \"StrictRedis.execute_command\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if len(args) > 0:\n wrapped_name = str(args[0])\n else:\n wrapped_name = self.get_wrapped_name(wrapped, instance, method)\n\n with capture_span(wrapped_name, span_type=\"db\", span_subtype=\"redis\", span_action=\"query\", leaf=True):\n return wrapped(*args, **kwargs)\n\n\nclass RedisPipelineInstrumentation(Redis3CheckMixin, AbstractInstrumentedModule):\n name = \"redis\"\n\n # BasePipeline has been renamed to Pipeline in redis-py 3\n instrument_list_3 = [(\"redis.client\", \"Pipeline.execute\")]\n instrument_list = [(\"redis.client\", \"BasePipeline.execute\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n wrapped_name = self.get_wrapped_name(wrapped, instance, method)\n with capture_span(wrapped_name, span_type=\"db\", span_subtype=\"redis\", span_action=\"query\", leaf=True):\n return wrapped(*args, **kwargs)\n\n\nclass RedisConnectionInstrumentation(AbstractInstrumentedModule):\n name = \"redis\"\n\n instrument_list = ((\"redis.connection\", \"Connection.send_packed_command\"),)\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n span = execution_context.get_span()\n if span and span.subtype == \"redis\":\n span.context[\"destination\"] = get_destination_info(instance)\n return wrapped(*args, **kwargs)\n\n\ndef get_destination_info(connection):\n destination_info = {\"service\": {\"name\": \"redis\", \"resource\": \"redis\", \"type\": \"db\"}}\n if hasattr(connection, \"port\"):\n destination_info[\"port\"] = connection.port\n destination_info[\"address\"] = connection.host\n elif hasattr(connection, \"path\"):\n destination_info[\"port\"] = None\n destination_info[\"address\"] = \"unix://\" + connection.path\n return destination_info\n", "path": "elasticapm/instrumentation/packages/redis.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom __future__ import absolute_import\n\nfrom elasticapm.contrib.asyncio.traces import async_capture_span\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import execution_context\n\n\nclass RedisConnectionPoolInstrumentation(AbstractInstrumentedModule):\n name = \"aioredis\"\n\n instrument_list = [(\"aioredis.pool\", \"ConnectionsPool.execute\"),\n (\"aioredis.pool\", \"ConnectionsPool.execute_pubsub\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if len(args) > 0:\n wrapped_name = args[0].decode()\n else:\n wrapped_name = self.get_wrapped_name(wrapped, instance, method)\n\n with async_capture_span(\n wrapped_name, span_type=\"db\", span_subtype=\"redis\", span_action=\"query\", leaf=True\n ) as span:\n span.context[\"destination\"] = _get_destination_info(instance)\n\n return wrapped(*args, **kwargs)\n\n\nclass RedisPipelineInstrumentation(AbstractInstrumentedModule):\n name = \"aioredis\"\n\n instrument_list = [(\"aioredis.commands.transaction\", \"Pipeline.execute\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n wrapped_name = self.get_wrapped_name(wrapped, instance, method)\n\n with async_capture_span(\n wrapped_name, span_type=\"db\", span_subtype=\"redis\", span_action=\"query\", leaf=True\n ) as span:\n span.context[\"destination\"] = _get_destination_info(instance)\n\n return wrapped(*args, **kwargs)\n\n\nclass RedisConnectionInstrumentation(AbstractInstrumentedModule):\n name = \"aioredis\"\n\n instrument_list = ((\"aioredis.connection\", \"RedisConnection.execute\"),\n (\"aioredis.pool\", \"ConnectionsPool.execute_pubsub\"))\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n span = execution_context.get_span()\n if span and span.subtype == \"aioredis\":\n span.context[\"destination\"] = _get_destination_info(instance)\n return wrapped(*args, **kwargs)\n\n\ndef _get_destination_info(connection):\n destination_info = {\"service\": {\"name\": \"aioredis\", \"resource\": \"redis\", \"type\": \"db\"}}\n\n if hasattr(connection, \"_pool_or_conn\"):\n destination_info[\"port\"] = connection._pool_or_conn.address[1]\n destination_info[\"address\"] = connection._pool_or_conn.address[0]\n else:\n destination_info[\"port\"] = connection.address[1]\n destination_info[\"address\"] = connection.address[0]\n\n return destination_info\n", "path": "elasticapm/instrumentation/packages/asyncio/aioredis.py"}, {"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom __future__ import absolute_import\n\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import capture_span, execution_context\n\n\nclass Redis3CheckMixin(object):\n instrument_list_3 = []\n instrument_list = []\n\n def get_instrument_list(self):\n try:\n from redis import VERSION\n\n if VERSION[0] >= 3:\n return self.instrument_list_3\n return self.instrument_list\n except ImportError:\n return self.instrument_list\n\n\nclass RedisInstrumentation(Redis3CheckMixin, AbstractInstrumentedModule):\n name = \"redis\"\n\n # no need to instrument StrictRedis in redis-py >= 3.0\n instrument_list_3 = [(\"redis.client\", \"Redis.execute_command\"), (\"redis.client\", \"PubSub.execute_command\")]\n instrument_list = [(\"redis.client\", \"Redis.execute_command\"), (\"redis.client\", \"StrictRedis.execute_command\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if len(args) > 0:\n wrapped_name = str(args[0])\n else:\n wrapped_name = self.get_wrapped_name(wrapped, instance, method)\n\n with capture_span(wrapped_name, span_type=\"db\", span_subtype=\"redis\", span_action=\"query\", leaf=True):\n return wrapped(*args, **kwargs)\n\n\nclass RedisPipelineInstrumentation(Redis3CheckMixin, AbstractInstrumentedModule):\n name = \"redis\"\n\n # BasePipeline has been renamed to Pipeline in redis-py 3\n instrument_list_3 = [(\"redis.client\", \"Pipeline.execute\")]\n instrument_list = [(\"redis.client\", \"BasePipeline.execute\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n wrapped_name = self.get_wrapped_name(wrapped, instance, method)\n with capture_span(wrapped_name, span_type=\"db\", span_subtype=\"redis\", span_action=\"query\", leaf=True):\n return wrapped(*args, **kwargs)\n\n\nclass RedisConnectionInstrumentation(AbstractInstrumentedModule):\n name = \"redis\"\n\n instrument_list = ((\"redis.connection\", \"Connection.send_packed_command\"),)\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n span = execution_context.get_span()\n if span and span.subtype == \"redis\":\n span.context[\"destination\"] = get_destination_info(instance)\n return wrapped(*args, **kwargs)\n\n\ndef get_destination_info(connection):\n destination_info = {\"service\": {\"name\": \"redis\", \"resource\": \"redis\", \"type\": \"db\"}}\n if hasattr(connection, \"port\"):\n destination_info[\"port\"] = connection.port\n destination_info[\"address\"] = connection.host\n elif hasattr(connection, \"path\"):\n destination_info[\"port\"] = None\n destination_info[\"address\"] = \"unix://\" + connection.path\n return destination_info\n", "path": "elasticapm/instrumentation/packages/redis.py"}]}
| 2,662 | 463 |
gh_patches_debug_35742
|
rasdani/github-patches
|
git_diff
|
electricitymaps__electricitymaps-contrib-1123
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Generation from "Pumped storage" in France FR
At the moment, the bar charts on the map only show pumped storage consumption for France. But RTE also has data for pumped storage generation. This is currently not displayed on the map, because the "hydro" category of RTE includes all three types "hydro storage+run of river+pumped storage". But there is a seperate "pumping" category for consumption of the pumped storages (pumping).
http://www.rte-france.com/en/eco2mix/eco2mix-mix-energetique-en

After selecting the hydro category, you'll see "details" below it. Selecting "details" you will see this, incuding the breakdown by hydro type:

The most recent dataset for France can also be downloaded here:
http://www.rte-france.com/en/eco2mix/eco2mix-telechargement-en
The FR.py parser seems to use this URL http://www.rte-france.com/getEco2MixXml.php?type=donneesMarche&dateDeb={}&dateFin={}&mode=NORM for getting the data. Maybe there is a similar one for the hydro breakdown by type to seperate pumped storage generation from it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/FR.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import arrow
4 import requests
5 import xml.etree.ElementTree as ET
6
7 MAP_GENERATION = {
8 u'Nucl\xe9aire': 'nuclear',
9 'Charbon': 'coal',
10 'Gaz': 'gas',
11 'Fioul': 'oil',
12 'Hydraulique': 'hydro',
13 'Eolien': 'wind',
14 'Solaire': 'solar',
15 'Autres': 'biomass'
16 }
17 MAP_STORAGE = {
18 'Pompage': 'hydro',
19 }
20
21
22 def fetch_production(country_code='FR', session=None):
23 r = session or requests.session()
24 formatted_date = arrow.now(tz='Europe/Paris').format('DD/MM/YYYY')
25 url = 'http://www.rte-france.com/getEco2MixXml.php?type=mix&&dateDeb={}&dateFin={}&mode=NORM'.format(formatted_date, formatted_date)
26 response = r.get(url)
27 obj = ET.fromstring(response.content)
28 mixtr = obj[7]
29 data = {
30 'countryCode': country_code,
31 'production': {},
32 'storage': {},
33 'source': 'rte-france.com',
34 }
35 for item in mixtr.getchildren():
36 if item.get('granularite') != 'Global':
37 continue
38 key = item.get('v')
39 value = None
40 for value in item.getchildren():
41 pass
42 if key in MAP_GENERATION:
43 data['production'][MAP_GENERATION[key]] = float(value.text)
44 elif key in MAP_STORAGE:
45 data['storage'][MAP_STORAGE[key]] = -1 * float(value.text)
46
47 data['datetime'] = arrow.get(arrow.get(obj[1].text).datetime,
48 'Europe/Paris').replace(minutes=+(int(value.attrib['periode']) * 15.0)).datetime
49
50 # Fetch imports
51 # url = 'http://www.rte-france.com/getEco2MixXml.php?type=echcom&&dateDeb={}&dateFin={}&mode=NORM'.format(formatted_date, formatted_date)
52 # response = r.get(url)
53 # obj = ET.fromstring(response.content)
54 # parsed = {}
55 # for item in obj[7].getchildren():
56 # value = None
57 # for value in item: pass
58 # parsed[item.get('v')] = float(value.text)
59
60 # data['exchange'] = {
61 # 'CH': parsed['CH'],
62 # 'GB': parsed['GB'],
63 # 'ES': parsed['ES'],
64 # 'IT': parsed['IT'],
65 # 'DE': parsed['DB'] # Germany + Belgium redirected to Germany
66 # }
67
68 return data
69
70
71 def fetch_price(country_code, session=None, from_date=None, to_date=None):
72 r = session or requests.session()
73 dt_now = arrow.now(tz='Europe/Paris')
74 formatted_from = from_date or dt_now.format('DD/MM/YYYY')
75 formatted_to = to_date or dt_now.format('DD/MM/YYYY')
76
77 url = 'http://www.rte-france.com/getEco2MixXml.php?type=donneesMarche&dateDeb={}&dateFin={}&mode=NORM'.format(formatted_from, formatted_to)
78 response = r.get(url)
79 obj = ET.fromstring(response.content)
80 mixtr = obj[5]
81
82 prices = []
83 datetimes = []
84
85 date_str = mixtr.get('date')
86 date = arrow.get(arrow.get(date_str).datetime, 'Europe/Paris')
87 for country_item in mixtr.getchildren():
88 if country_item.get('granularite') != 'Global':
89 continue
90 country_c = country_item.get('perimetre')
91 if country_code != country_c:
92 continue
93 value = None
94 for value in country_item.getchildren():
95 if value.text == 'ND':
96 continue
97 datetime = date.replace(hours=+int(value.attrib['periode'])).datetime
98 if datetime > dt_now:
99 continue
100 datetimes.append(datetime)
101 prices.append(float(value.text))
102
103 data = {
104 'countryCode': country_code,
105 'currency': 'EUR',
106 'datetime': datetimes[-1],
107 'price': prices[-1],
108 'source': 'rte-france.com',
109 }
110 return data
111
112
113 if __name__ == '__main__':
114 print(fetch_production())
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsers/FR.py b/parsers/FR.py
--- a/parsers/FR.py
+++ b/parsers/FR.py
@@ -16,6 +16,7 @@
}
MAP_STORAGE = {
'Pompage': 'hydro',
+ 'Hydraulique': 'hydro',
}
@@ -33,38 +34,36 @@
'source': 'rte-france.com',
}
for item in mixtr.getchildren():
- if item.get('granularite') != 'Global':
- continue
key = item.get('v')
+ granularite = item.get('granularite')
value = None
for value in item.getchildren():
pass
- if key in MAP_GENERATION:
- data['production'][MAP_GENERATION[key]] = float(value.text)
- elif key in MAP_STORAGE:
- data['storage'][MAP_STORAGE[key]] = -1 * float(value.text)
+ if key == 'Hydraulique':
+ # Hydro is a special case!
+ if granularite == 'Global':
+ continue
+ elif granularite in ['FEE', 'LAC']:
+ if not MAP_GENERATION[key] in data['production']:
+ data['production'][MAP_GENERATION[key]] = 0
+ # Run of the river or conventional
+ data['production'][MAP_GENERATION[key]] += float(value.text)
+ elif granularite == 'STT':
+ if not MAP_STORAGE[key] in data['storage']:
+ data['storage'][MAP_STORAGE[key]] = 0
+ # Pumped storage generation
+ data['storage'][MAP_STORAGE[key]] += -1 * float(value.text)
+ elif granularite == 'Global':
+ if key in MAP_GENERATION:
+ data['production'][MAP_GENERATION[key]] = float(value.text)
+ elif key in MAP_STORAGE:
+ if not MAP_STORAGE[key] in data['storage']:
+ data['storage'][MAP_STORAGE[key]] = 0
+ data['storage'][MAP_STORAGE[key]] += -1 * float(value.text)
data['datetime'] = arrow.get(arrow.get(obj[1].text).datetime,
'Europe/Paris').replace(minutes=+(int(value.attrib['periode']) * 15.0)).datetime
- # Fetch imports
- # url = 'http://www.rte-france.com/getEco2MixXml.php?type=echcom&&dateDeb={}&dateFin={}&mode=NORM'.format(formatted_date, formatted_date)
- # response = r.get(url)
- # obj = ET.fromstring(response.content)
- # parsed = {}
- # for item in obj[7].getchildren():
- # value = None
- # for value in item: pass
- # parsed[item.get('v')] = float(value.text)
-
- # data['exchange'] = {
- # 'CH': parsed['CH'],
- # 'GB': parsed['GB'],
- # 'ES': parsed['ES'],
- # 'IT': parsed['IT'],
- # 'DE': parsed['DB'] # Germany + Belgium redirected to Germany
- # }
-
return data
|
{"golden_diff": "diff --git a/parsers/FR.py b/parsers/FR.py\n--- a/parsers/FR.py\n+++ b/parsers/FR.py\n@@ -16,6 +16,7 @@\n }\n MAP_STORAGE = {\n 'Pompage': 'hydro',\n+ 'Hydraulique': 'hydro',\n }\n \n \n@@ -33,38 +34,36 @@\n 'source': 'rte-france.com',\n }\n for item in mixtr.getchildren():\n- if item.get('granularite') != 'Global':\n- continue\n key = item.get('v')\n+ granularite = item.get('granularite')\n value = None\n for value in item.getchildren():\n pass\n- if key in MAP_GENERATION:\n- data['production'][MAP_GENERATION[key]] = float(value.text)\n- elif key in MAP_STORAGE:\n- data['storage'][MAP_STORAGE[key]] = -1 * float(value.text)\n+ if key == 'Hydraulique':\n+ # Hydro is a special case!\n+ if granularite == 'Global':\n+ continue\n+ elif granularite in ['FEE', 'LAC']:\n+ if not MAP_GENERATION[key] in data['production']:\n+ data['production'][MAP_GENERATION[key]] = 0\n+ # Run of the river or conventional\n+ data['production'][MAP_GENERATION[key]] += float(value.text)\n+ elif granularite == 'STT':\n+ if not MAP_STORAGE[key] in data['storage']:\n+ data['storage'][MAP_STORAGE[key]] = 0\n+ # Pumped storage generation\n+ data['storage'][MAP_STORAGE[key]] += -1 * float(value.text)\n+ elif granularite == 'Global':\n+ if key in MAP_GENERATION:\n+ data['production'][MAP_GENERATION[key]] = float(value.text)\n+ elif key in MAP_STORAGE:\n+ if not MAP_STORAGE[key] in data['storage']:\n+ data['storage'][MAP_STORAGE[key]] = 0\n+ data['storage'][MAP_STORAGE[key]] += -1 * float(value.text)\n \n data['datetime'] = arrow.get(arrow.get(obj[1].text).datetime,\n 'Europe/Paris').replace(minutes=+(int(value.attrib['periode']) * 15.0)).datetime\n \n- # Fetch imports\n- # url = 'http://www.rte-france.com/getEco2MixXml.php?type=echcom&&dateDeb={}&dateFin={}&mode=NORM'.format(formatted_date, formatted_date)\n- # response = r.get(url)\n- # obj = ET.fromstring(response.content)\n- # parsed = {}\n- # for item in obj[7].getchildren():\n- # value = None\n- # for value in item: pass\n- # parsed[item.get('v')] = float(value.text)\n-\n- # data['exchange'] = {\n- # 'CH': parsed['CH'],\n- # 'GB': parsed['GB'],\n- # 'ES': parsed['ES'],\n- # 'IT': parsed['IT'],\n- # 'DE': parsed['DB'] # Germany + Belgium redirected to Germany\n- # }\n-\n return data\n", "issue": "Generation from \"Pumped storage\" in France FR\nAt the moment, the bar charts on the map only show pumped storage consumption for France. But RTE also has data for pumped storage generation. This is currently not displayed on the map, because the \"hydro\" category of RTE includes all three types \"hydro storage+run of river+pumped storage\". But there is a seperate \"pumping\" category for consumption of the pumped storages (pumping).\r\nhttp://www.rte-france.com/en/eco2mix/eco2mix-mix-energetique-en\r\n\r\n\r\n\r\nAfter selecting the hydro category, you'll see \"details\" below it. Selecting \"details\" you will see this, incuding the breakdown by hydro type:\r\n\r\n\r\nThe most recent dataset for France can also be downloaded here:\r\nhttp://www.rte-france.com/en/eco2mix/eco2mix-telechargement-en\r\n\r\nThe FR.py parser seems to use this URL http://www.rte-france.com/getEco2MixXml.php?type=donneesMarche&dateDeb={}&dateFin={}&mode=NORM for getting the data. Maybe there is a similar one for the hydro breakdown by type to seperate pumped storage generation from it.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport arrow\nimport requests\nimport xml.etree.ElementTree as ET\n\nMAP_GENERATION = {\n u'Nucl\\xe9aire': 'nuclear',\n 'Charbon': 'coal',\n 'Gaz': 'gas',\n 'Fioul': 'oil',\n 'Hydraulique': 'hydro',\n 'Eolien': 'wind',\n 'Solaire': 'solar',\n 'Autres': 'biomass'\n}\nMAP_STORAGE = {\n 'Pompage': 'hydro',\n}\n\n\ndef fetch_production(country_code='FR', session=None):\n r = session or requests.session()\n formatted_date = arrow.now(tz='Europe/Paris').format('DD/MM/YYYY')\n url = 'http://www.rte-france.com/getEco2MixXml.php?type=mix&&dateDeb={}&dateFin={}&mode=NORM'.format(formatted_date, formatted_date)\n response = r.get(url)\n obj = ET.fromstring(response.content)\n mixtr = obj[7]\n data = {\n 'countryCode': country_code,\n 'production': {},\n 'storage': {},\n 'source': 'rte-france.com',\n }\n for item in mixtr.getchildren():\n if item.get('granularite') != 'Global':\n continue\n key = item.get('v')\n value = None\n for value in item.getchildren():\n pass\n if key in MAP_GENERATION:\n data['production'][MAP_GENERATION[key]] = float(value.text)\n elif key in MAP_STORAGE:\n data['storage'][MAP_STORAGE[key]] = -1 * float(value.text)\n\n data['datetime'] = arrow.get(arrow.get(obj[1].text).datetime,\n 'Europe/Paris').replace(minutes=+(int(value.attrib['periode']) * 15.0)).datetime\n\n # Fetch imports\n # url = 'http://www.rte-france.com/getEco2MixXml.php?type=echcom&&dateDeb={}&dateFin={}&mode=NORM'.format(formatted_date, formatted_date)\n # response = r.get(url)\n # obj = ET.fromstring(response.content)\n # parsed = {}\n # for item in obj[7].getchildren():\n # value = None\n # for value in item: pass\n # parsed[item.get('v')] = float(value.text)\n\n # data['exchange'] = {\n # 'CH': parsed['CH'],\n # 'GB': parsed['GB'],\n # 'ES': parsed['ES'],\n # 'IT': parsed['IT'],\n # 'DE': parsed['DB'] # Germany + Belgium redirected to Germany\n # }\n\n return data\n\n\ndef fetch_price(country_code, session=None, from_date=None, to_date=None):\n r = session or requests.session()\n dt_now = arrow.now(tz='Europe/Paris')\n formatted_from = from_date or dt_now.format('DD/MM/YYYY')\n formatted_to = to_date or dt_now.format('DD/MM/YYYY')\n\n url = 'http://www.rte-france.com/getEco2MixXml.php?type=donneesMarche&dateDeb={}&dateFin={}&mode=NORM'.format(formatted_from, formatted_to)\n response = r.get(url)\n obj = ET.fromstring(response.content)\n mixtr = obj[5]\n\n prices = []\n datetimes = []\n\n date_str = mixtr.get('date')\n date = arrow.get(arrow.get(date_str).datetime, 'Europe/Paris')\n for country_item in mixtr.getchildren():\n if country_item.get('granularite') != 'Global':\n continue\n country_c = country_item.get('perimetre')\n if country_code != country_c:\n continue\n value = None\n for value in country_item.getchildren():\n if value.text == 'ND':\n continue\n datetime = date.replace(hours=+int(value.attrib['periode'])).datetime\n if datetime > dt_now:\n continue\n datetimes.append(datetime)\n prices.append(float(value.text))\n\n data = {\n 'countryCode': country_code,\n 'currency': 'EUR',\n 'datetime': datetimes[-1],\n 'price': prices[-1],\n 'source': 'rte-france.com',\n }\n return data\n\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/FR.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport arrow\nimport requests\nimport xml.etree.ElementTree as ET\n\nMAP_GENERATION = {\n u'Nucl\\xe9aire': 'nuclear',\n 'Charbon': 'coal',\n 'Gaz': 'gas',\n 'Fioul': 'oil',\n 'Hydraulique': 'hydro',\n 'Eolien': 'wind',\n 'Solaire': 'solar',\n 'Autres': 'biomass'\n}\nMAP_STORAGE = {\n 'Pompage': 'hydro',\n 'Hydraulique': 'hydro',\n}\n\n\ndef fetch_production(country_code='FR', session=None):\n r = session or requests.session()\n formatted_date = arrow.now(tz='Europe/Paris').format('DD/MM/YYYY')\n url = 'http://www.rte-france.com/getEco2MixXml.php?type=mix&&dateDeb={}&dateFin={}&mode=NORM'.format(formatted_date, formatted_date)\n response = r.get(url)\n obj = ET.fromstring(response.content)\n mixtr = obj[7]\n data = {\n 'countryCode': country_code,\n 'production': {},\n 'storage': {},\n 'source': 'rte-france.com',\n }\n for item in mixtr.getchildren():\n key = item.get('v')\n granularite = item.get('granularite')\n value = None\n for value in item.getchildren():\n pass\n if key == 'Hydraulique':\n # Hydro is a special case!\n if granularite == 'Global':\n continue\n elif granularite in ['FEE', 'LAC']:\n if not MAP_GENERATION[key] in data['production']:\n data['production'][MAP_GENERATION[key]] = 0\n # Run of the river or conventional\n data['production'][MAP_GENERATION[key]] += float(value.text)\n elif granularite == 'STT':\n if not MAP_STORAGE[key] in data['storage']:\n data['storage'][MAP_STORAGE[key]] = 0\n # Pumped storage generation\n data['storage'][MAP_STORAGE[key]] += -1 * float(value.text)\n elif granularite == 'Global':\n if key in MAP_GENERATION:\n data['production'][MAP_GENERATION[key]] = float(value.text)\n elif key in MAP_STORAGE:\n if not MAP_STORAGE[key] in data['storage']:\n data['storage'][MAP_STORAGE[key]] = 0\n data['storage'][MAP_STORAGE[key]] += -1 * float(value.text)\n\n data['datetime'] = arrow.get(arrow.get(obj[1].text).datetime,\n 'Europe/Paris').replace(minutes=+(int(value.attrib['periode']) * 15.0)).datetime\n\n return data\n\n\ndef fetch_price(country_code, session=None, from_date=None, to_date=None):\n r = session or requests.session()\n dt_now = arrow.now(tz='Europe/Paris')\n formatted_from = from_date or dt_now.format('DD/MM/YYYY')\n formatted_to = to_date or dt_now.format('DD/MM/YYYY')\n\n url = 'http://www.rte-france.com/getEco2MixXml.php?type=donneesMarche&dateDeb={}&dateFin={}&mode=NORM'.format(formatted_from, formatted_to)\n response = r.get(url)\n obj = ET.fromstring(response.content)\n mixtr = obj[5]\n\n prices = []\n datetimes = []\n\n date_str = mixtr.get('date')\n date = arrow.get(arrow.get(date_str).datetime, 'Europe/Paris')\n for country_item in mixtr.getchildren():\n if country_item.get('granularite') != 'Global':\n continue\n country_c = country_item.get('perimetre')\n if country_code != country_c:\n continue\n value = None\n for value in country_item.getchildren():\n if value.text == 'ND':\n continue\n datetime = date.replace(hours=+int(value.attrib['periode'])).datetime\n if datetime > dt_now:\n continue\n datetimes.append(datetime)\n prices.append(float(value.text))\n\n data = {\n 'countryCode': country_code,\n 'currency': 'EUR',\n 'datetime': datetimes[-1],\n 'price': prices[-1],\n 'source': 'rte-france.com',\n }\n return data\n\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/FR.py"}]}
| 1,832 | 721 |
gh_patches_debug_14512
|
rasdani/github-patches
|
git_diff
|
safe-global__safe-config-service-698
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
500 Error on unsanitized URL query params
**Describe the bug**
Error response with 500 Internal server Error is returned to the clients when a unsanitized URL query param is sent to the service.
**To Reproduce**
Steps to reproduce the behavior:
- Check: https://safe-config.safe.global/api/v1/safe-apps/?url=%00
**Expected behavior**
URL input is sanitized beforehand.
**Environment**
- Staging & production
- All chains
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/safe_apps/views.py`
Content:
```
1 from typing import Any
2
3 from django.db.models import Q, QuerySet
4 from django.utils.decorators import method_decorator
5 from django.views.decorators.cache import cache_page
6 from drf_yasg import openapi
7 from drf_yasg.utils import swagger_auto_schema
8 from rest_framework.generics import ListAPIView
9 from rest_framework.request import Request
10 from rest_framework.response import Response
11
12 from .models import SafeApp
13 from .serializers import SafeAppsResponseSerializer
14
15
16 class SafeAppsListView(ListAPIView):
17 serializer_class = SafeAppsResponseSerializer
18 pagination_class = None
19
20 _swagger_chain_id_param = openapi.Parameter(
21 "chainId",
22 openapi.IN_QUERY,
23 description="Used to filter Safe Apps that are available on `chainId`",
24 type=openapi.TYPE_INTEGER,
25 )
26 _swagger_client_url_param = openapi.Parameter(
27 "clientUrl",
28 openapi.IN_QUERY,
29 description="Used to filter Safe Apps that are available on `clientUrl`",
30 type=openapi.TYPE_STRING,
31 )
32 _swagger_url_param = openapi.Parameter(
33 "url",
34 openapi.IN_QUERY,
35 description="Filter Safe Apps available from `url`. `url` needs to be an exact match",
36 type=openapi.TYPE_STRING,
37 )
38
39 @method_decorator(cache_page(60 * 10, cache="safe-apps")) # Cache 10 minutes
40 @swagger_auto_schema(
41 manual_parameters=[
42 _swagger_chain_id_param,
43 _swagger_client_url_param,
44 _swagger_url_param,
45 ]
46 ) # type: ignore[misc]
47 def get(self, request: Request, *args: Any, **kwargs: Any) -> Response:
48 """
49 Returns a collection of Safe Apps (across different chains).
50 Each Safe App can optionally include the information about the `Provider`
51 """
52 return super().get(request, *args, **kwargs)
53
54 def get_queryset(self) -> QuerySet[SafeApp]:
55 queryset = SafeApp.objects.filter(visible=True)
56
57 chain_id = self.request.query_params.get("chainId")
58 if chain_id is not None and chain_id.isdigit():
59 queryset = queryset.filter(chain_ids__contains=[chain_id])
60
61 client_url = self.request.query_params.get("clientUrl")
62 if client_url:
63 queryset = queryset.filter(
64 Q(exclusive_clients__url=client_url) | Q(exclusive_clients__isnull=True)
65 )
66
67 url = self.request.query_params.get("url")
68 if url:
69 queryset = queryset.filter(url=url)
70
71 return queryset
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/safe_apps/views.py b/src/safe_apps/views.py
--- a/src/safe_apps/views.py
+++ b/src/safe_apps/views.py
@@ -59,13 +59,13 @@
queryset = queryset.filter(chain_ids__contains=[chain_id])
client_url = self.request.query_params.get("clientUrl")
- if client_url:
+ if client_url and "\0" not in client_url:
queryset = queryset.filter(
Q(exclusive_clients__url=client_url) | Q(exclusive_clients__isnull=True)
)
url = self.request.query_params.get("url")
- if url:
+ if url and "\0" not in url:
queryset = queryset.filter(url=url)
return queryset
|
{"golden_diff": "diff --git a/src/safe_apps/views.py b/src/safe_apps/views.py\n--- a/src/safe_apps/views.py\n+++ b/src/safe_apps/views.py\n@@ -59,13 +59,13 @@\n queryset = queryset.filter(chain_ids__contains=[chain_id])\n \n client_url = self.request.query_params.get(\"clientUrl\")\n- if client_url:\n+ if client_url and \"\\0\" not in client_url:\n queryset = queryset.filter(\n Q(exclusive_clients__url=client_url) | Q(exclusive_clients__isnull=True)\n )\n \n url = self.request.query_params.get(\"url\")\n- if url:\n+ if url and \"\\0\" not in url:\n queryset = queryset.filter(url=url)\n \n return queryset\n", "issue": "500 Error on unsanitized URL query params \n**Describe the bug**\r\nError response with 500 Internal server Error is returned to the clients when a unsanitized URL query param is sent to the service.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n- Check: https://safe-config.safe.global/api/v1/safe-apps/?url=%00\r\n\r\n**Expected behavior**\r\nURL input is sanitized beforehand.\r\n\r\n**Environment**\r\n - Staging & production\r\n - All chains\r\n\n", "before_files": [{"content": "from typing import Any\n\nfrom django.db.models import Q, QuerySet\nfrom django.utils.decorators import method_decorator\nfrom django.views.decorators.cache import cache_page\nfrom drf_yasg import openapi\nfrom drf_yasg.utils import swagger_auto_schema\nfrom rest_framework.generics import ListAPIView\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\n\nfrom .models import SafeApp\nfrom .serializers import SafeAppsResponseSerializer\n\n\nclass SafeAppsListView(ListAPIView):\n serializer_class = SafeAppsResponseSerializer\n pagination_class = None\n\n _swagger_chain_id_param = openapi.Parameter(\n \"chainId\",\n openapi.IN_QUERY,\n description=\"Used to filter Safe Apps that are available on `chainId`\",\n type=openapi.TYPE_INTEGER,\n )\n _swagger_client_url_param = openapi.Parameter(\n \"clientUrl\",\n openapi.IN_QUERY,\n description=\"Used to filter Safe Apps that are available on `clientUrl`\",\n type=openapi.TYPE_STRING,\n )\n _swagger_url_param = openapi.Parameter(\n \"url\",\n openapi.IN_QUERY,\n description=\"Filter Safe Apps available from `url`. `url` needs to be an exact match\",\n type=openapi.TYPE_STRING,\n )\n\n @method_decorator(cache_page(60 * 10, cache=\"safe-apps\")) # Cache 10 minutes\n @swagger_auto_schema(\n manual_parameters=[\n _swagger_chain_id_param,\n _swagger_client_url_param,\n _swagger_url_param,\n ]\n ) # type: ignore[misc]\n def get(self, request: Request, *args: Any, **kwargs: Any) -> Response:\n \"\"\"\n Returns a collection of Safe Apps (across different chains).\n Each Safe App can optionally include the information about the `Provider`\n \"\"\"\n return super().get(request, *args, **kwargs)\n\n def get_queryset(self) -> QuerySet[SafeApp]:\n queryset = SafeApp.objects.filter(visible=True)\n\n chain_id = self.request.query_params.get(\"chainId\")\n if chain_id is not None and chain_id.isdigit():\n queryset = queryset.filter(chain_ids__contains=[chain_id])\n\n client_url = self.request.query_params.get(\"clientUrl\")\n if client_url:\n queryset = queryset.filter(\n Q(exclusive_clients__url=client_url) | Q(exclusive_clients__isnull=True)\n )\n\n url = self.request.query_params.get(\"url\")\n if url:\n queryset = queryset.filter(url=url)\n\n return queryset\n", "path": "src/safe_apps/views.py"}], "after_files": [{"content": "from typing import Any\n\nfrom django.db.models import Q, QuerySet\nfrom django.utils.decorators import method_decorator\nfrom django.views.decorators.cache import cache_page\nfrom drf_yasg import openapi\nfrom drf_yasg.utils import swagger_auto_schema\nfrom rest_framework.generics import ListAPIView\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\n\nfrom .models import SafeApp\nfrom .serializers import SafeAppsResponseSerializer\n\n\nclass SafeAppsListView(ListAPIView):\n serializer_class = SafeAppsResponseSerializer\n pagination_class = None\n\n _swagger_chain_id_param = openapi.Parameter(\n \"chainId\",\n openapi.IN_QUERY,\n description=\"Used to filter Safe Apps that are available on `chainId`\",\n type=openapi.TYPE_INTEGER,\n )\n _swagger_client_url_param = openapi.Parameter(\n \"clientUrl\",\n openapi.IN_QUERY,\n description=\"Used to filter Safe Apps that are available on `clientUrl`\",\n type=openapi.TYPE_STRING,\n )\n _swagger_url_param = openapi.Parameter(\n \"url\",\n openapi.IN_QUERY,\n description=\"Filter Safe Apps available from `url`. `url` needs to be an exact match\",\n type=openapi.TYPE_STRING,\n )\n\n @method_decorator(cache_page(60 * 10, cache=\"safe-apps\")) # Cache 10 minutes\n @swagger_auto_schema(\n manual_parameters=[\n _swagger_chain_id_param,\n _swagger_client_url_param,\n _swagger_url_param,\n ]\n ) # type: ignore[misc]\n def get(self, request: Request, *args: Any, **kwargs: Any) -> Response:\n \"\"\"\n Returns a collection of Safe Apps (across different chains).\n Each Safe App can optionally include the information about the `Provider`\n \"\"\"\n return super().get(request, *args, **kwargs)\n\n def get_queryset(self) -> QuerySet[SafeApp]:\n queryset = SafeApp.objects.filter(visible=True)\n\n chain_id = self.request.query_params.get(\"chainId\")\n if chain_id is not None and chain_id.isdigit():\n queryset = queryset.filter(chain_ids__contains=[chain_id])\n\n client_url = self.request.query_params.get(\"clientUrl\")\n if client_url and \"\\0\" not in client_url:\n queryset = queryset.filter(\n Q(exclusive_clients__url=client_url) | Q(exclusive_clients__isnull=True)\n )\n\n url = self.request.query_params.get(\"url\")\n if url and \"\\0\" not in url:\n queryset = queryset.filter(url=url)\n\n return queryset\n", "path": "src/safe_apps/views.py"}]}
| 1,042 | 168 |
gh_patches_debug_25253
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-2368
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
take out secret dev notes visible in frontend :-)

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/projects/templatetags/meinberlin_project_tags.py`
Content:
```
1 from django import template
2
3 from adhocracy4.comments.models import Comment
4 from meinberlin.apps.budgeting.models import Proposal as budget_proposal
5 from meinberlin.apps.ideas.models import Idea
6 from meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal
7 from meinberlin.apps.mapideas.models import MapIdea
8 from meinberlin.apps.projects import get_project_type
9
10 register = template.Library()
11
12
13 @register.filter
14 def project_url(project):
15 if get_project_type(project) in ('external', 'bplan'):
16 return project.externalproject.url
17 return project.get_absolute_url()
18
19
20 @register.filter
21 def project_type(project):
22 return get_project_type(project)
23
24
25 @register.filter
26 def is_external(project):
27 return get_project_type(project) in ('external', 'bplan')
28
29
30 @register.filter
31 def is_container(project):
32 return get_project_type(project) == 'container'
33
34
35 @register.simple_tag
36 def to_class_name(value):
37 return value.__class__.__name__
38
39
40 @register.simple_tag
41 def get_num_entries(module):
42 """Count all user-generated items."""
43 item_count = Idea.objects.filter(module=module).count() \
44 + MapIdea.objects.filter(module=module).count() \
45 + budget_proposal.objects.filter(module=module).count() \
46 + kiezkasse_proposal.objects.filter(module=module).count() \
47 + Comment.objects.filter(idea__module=module).count() \
48 + Comment.objects.filter(mapidea__module=module).count() \
49 + Comment.objects.filter(budget_proposal__module=module).count() \
50 + Comment.objects.filter(kiezkasse_proposal__module=module).count()
51 return item_count
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
--- a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
+++ b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
@@ -40,12 +40,18 @@
@register.simple_tag
def get_num_entries(module):
"""Count all user-generated items."""
- item_count = Idea.objects.filter(module=module).count() \
+ item_count = \
+ Idea.objects.filter(module=module).count() \
+ MapIdea.objects.filter(module=module).count() \
+ budget_proposal.objects.filter(module=module).count() \
+ kiezkasse_proposal.objects.filter(module=module).count() \
+ Comment.objects.filter(idea__module=module).count() \
+ Comment.objects.filter(mapidea__module=module).count() \
+ Comment.objects.filter(budget_proposal__module=module).count() \
- + Comment.objects.filter(kiezkasse_proposal__module=module).count()
+ + Comment.objects.filter(kiezkasse_proposal__module=module).count() \
+ + Comment.objects.filter(topic__module=module).count() \
+ + Comment.objects.filter(maptopic__module=module).count() \
+ + Comment.objects.filter(paragraph__chapter__module=module).count() \
+ + Comment.objects.filter(chapter__module=module).count() \
+ + Comment.objects.filter(poll__module=module).count()
return item_count
|
{"golden_diff": "diff --git a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n--- a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n+++ b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n@@ -40,12 +40,18 @@\n @register.simple_tag\n def get_num_entries(module):\n \"\"\"Count all user-generated items.\"\"\"\n- item_count = Idea.objects.filter(module=module).count() \\\n+ item_count = \\\n+ Idea.objects.filter(module=module).count() \\\n + MapIdea.objects.filter(module=module).count() \\\n + budget_proposal.objects.filter(module=module).count() \\\n + kiezkasse_proposal.objects.filter(module=module).count() \\\n + Comment.objects.filter(idea__module=module).count() \\\n + Comment.objects.filter(mapidea__module=module).count() \\\n + Comment.objects.filter(budget_proposal__module=module).count() \\\n- + Comment.objects.filter(kiezkasse_proposal__module=module).count()\n+ + Comment.objects.filter(kiezkasse_proposal__module=module).count() \\\n+ + Comment.objects.filter(topic__module=module).count() \\\n+ + Comment.objects.filter(maptopic__module=module).count() \\\n+ + Comment.objects.filter(paragraph__chapter__module=module).count() \\\n+ + Comment.objects.filter(chapter__module=module).count() \\\n+ + Comment.objects.filter(poll__module=module).count()\n return item_count\n", "issue": "take out secret dev notes visible in frontend :-)\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from django import template\n\nfrom adhocracy4.comments.models import Comment\nfrom meinberlin.apps.budgeting.models import Proposal as budget_proposal\nfrom meinberlin.apps.ideas.models import Idea\nfrom meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal\nfrom meinberlin.apps.mapideas.models import MapIdea\nfrom meinberlin.apps.projects import get_project_type\n\nregister = template.Library()\n\n\[email protected]\ndef project_url(project):\n if get_project_type(project) in ('external', 'bplan'):\n return project.externalproject.url\n return project.get_absolute_url()\n\n\[email protected]\ndef project_type(project):\n return get_project_type(project)\n\n\[email protected]\ndef is_external(project):\n return get_project_type(project) in ('external', 'bplan')\n\n\[email protected]\ndef is_container(project):\n return get_project_type(project) == 'container'\n\n\[email protected]_tag\ndef to_class_name(value):\n return value.__class__.__name__\n\n\[email protected]_tag\ndef get_num_entries(module):\n \"\"\"Count all user-generated items.\"\"\"\n item_count = Idea.objects.filter(module=module).count() \\\n + MapIdea.objects.filter(module=module).count() \\\n + budget_proposal.objects.filter(module=module).count() \\\n + kiezkasse_proposal.objects.filter(module=module).count() \\\n + Comment.objects.filter(idea__module=module).count() \\\n + Comment.objects.filter(mapidea__module=module).count() \\\n + Comment.objects.filter(budget_proposal__module=module).count() \\\n + Comment.objects.filter(kiezkasse_proposal__module=module).count()\n return item_count\n", "path": "meinberlin/apps/projects/templatetags/meinberlin_project_tags.py"}], "after_files": [{"content": "from django import template\n\nfrom adhocracy4.comments.models import Comment\nfrom meinberlin.apps.budgeting.models import Proposal as budget_proposal\nfrom meinberlin.apps.ideas.models import Idea\nfrom meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal\nfrom meinberlin.apps.mapideas.models import MapIdea\nfrom meinberlin.apps.projects import get_project_type\n\nregister = template.Library()\n\n\[email protected]\ndef project_url(project):\n if get_project_type(project) in ('external', 'bplan'):\n return project.externalproject.url\n return project.get_absolute_url()\n\n\[email protected]\ndef project_type(project):\n return get_project_type(project)\n\n\[email protected]\ndef is_external(project):\n return get_project_type(project) in ('external', 'bplan')\n\n\[email protected]\ndef is_container(project):\n return get_project_type(project) == 'container'\n\n\[email protected]_tag\ndef to_class_name(value):\n return value.__class__.__name__\n\n\[email protected]_tag\ndef get_num_entries(module):\n \"\"\"Count all user-generated items.\"\"\"\n item_count = \\\n Idea.objects.filter(module=module).count() \\\n + MapIdea.objects.filter(module=module).count() \\\n + budget_proposal.objects.filter(module=module).count() \\\n + kiezkasse_proposal.objects.filter(module=module).count() \\\n + Comment.objects.filter(idea__module=module).count() \\\n + Comment.objects.filter(mapidea__module=module).count() \\\n + Comment.objects.filter(budget_proposal__module=module).count() \\\n + Comment.objects.filter(kiezkasse_proposal__module=module).count() \\\n + Comment.objects.filter(topic__module=module).count() \\\n + Comment.objects.filter(maptopic__module=module).count() \\\n + Comment.objects.filter(paragraph__chapter__module=module).count() \\\n + Comment.objects.filter(chapter__module=module).count() \\\n + Comment.objects.filter(poll__module=module).count()\n return item_count\n", "path": "meinberlin/apps/projects/templatetags/meinberlin_project_tags.py"}]}
| 836 | 367 |
gh_patches_debug_6788
|
rasdani/github-patches
|
git_diff
|
learningequality__kolibri-1733
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Login ID and Password fields for a learner/user should not be case sensitive.
## Summary
Login ID and Password fields for a learner/user should not be case sensitive, this is especially for young learners and they struggle a lot to login itself.
Please consider this change for Nalanda branch.
## System information
- Version: Kolibri 0.4.0beta9
- Operating system: Ubuntu 14.04 LTS
- Browser: Chrome
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/auth/backends.py`
Content:
```
1 """
2 Implements custom auth backends as described in the Django docs, for our custom user classes -- FacilityUser and
3 DeviceOwner. The appropriate classes should be listed in the AUTHENTICATION_BACKENDS. Note that authentication
4 backends are checked in the order they're listed.
5 """
6
7 from kolibri.auth.models import DeviceOwner, FacilityUser
8
9
10 class FacilityUserBackend(object):
11 """
12 A class that implements authentication for FacilityUsers.
13 """
14
15 def authenticate(self, username=None, password=None, facility=None):
16 """
17 Authenticates the user if the credentials correspond to a FacilityUser for the specified Facility.
18
19 :param username: a string
20 :param password: a string
21 :param facility: a Facility
22 :return: A FacilityUser instance if successful, or None if authentication failed.
23 """
24 users = FacilityUser.objects.filter(username=username)
25 if facility:
26 users = users.filter(facility=facility)
27 for user in users:
28 if user.check_password(password):
29 return user
30 # Allow login without password for learners for facilities that allow this.
31 # Must specify the facility, to prevent accidental logins
32 elif facility and user.dataset.learner_can_login_with_no_password and not user.roles.count():
33 return user
34 return None
35
36 def get_user(self, user_id):
37 """
38 Gets a user. Auth backends are required to implement this.
39
40 :param user_id: A FacilityUser pk
41 :return: A FacilityUser instance if a BaseUser with that pk is found, else None.
42 """
43 try:
44 return FacilityUser.objects.get(pk=user_id)
45 except FacilityUser.DoesNotExist:
46 return None
47
48
49 class DeviceOwnerBackend(object):
50 """
51 A class that implements authentication for DeviceOwners.
52 """
53
54 def authenticate(self, username=None, password=None, **kwargs):
55 """
56 Authenticates the user if the credentials correspond to a DeviceOwner.
57
58 :param username: a string
59 :param password: a string
60 :return: A DeviceOwner instance if successful, or None if authentication failed.
61 """
62 try:
63 user = DeviceOwner.objects.get(username=username)
64 if user.check_password(password):
65 return user
66 else:
67 return None
68 except DeviceOwner.DoesNotExist:
69 return None
70
71 def get_user(self, user_id):
72 """
73 Gets a user. Auth backends are required to implement this.
74
75 :param user_id: A BaseUser pk
76 :return: A DeviceOwner instance if a BaseUser with that pk is found, else None.
77 """
78 try:
79 return DeviceOwner.objects.get(pk=user_id)
80 except DeviceOwner.DoesNotExist:
81 return None
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kolibri/auth/backends.py b/kolibri/auth/backends.py
--- a/kolibri/auth/backends.py
+++ b/kolibri/auth/backends.py
@@ -21,7 +21,7 @@
:param facility: a Facility
:return: A FacilityUser instance if successful, or None if authentication failed.
"""
- users = FacilityUser.objects.filter(username=username)
+ users = FacilityUser.objects.filter(username__iexact=username)
if facility:
users = users.filter(facility=facility)
for user in users:
|
{"golden_diff": "diff --git a/kolibri/auth/backends.py b/kolibri/auth/backends.py\n--- a/kolibri/auth/backends.py\n+++ b/kolibri/auth/backends.py\n@@ -21,7 +21,7 @@\n :param facility: a Facility\n :return: A FacilityUser instance if successful, or None if authentication failed.\n \"\"\"\n- users = FacilityUser.objects.filter(username=username)\n+ users = FacilityUser.objects.filter(username__iexact=username)\n if facility:\n users = users.filter(facility=facility)\n for user in users:\n", "issue": "Login ID and Password fields for a learner/user should not be case sensitive.\n## Summary\r\n\r\nLogin ID and Password fields for a learner/user should not be case sensitive, this is especially for young learners and they struggle a lot to login itself.\r\n\r\nPlease consider this change for Nalanda branch.\r\n\r\n## System information\r\n - Version: Kolibri 0.4.0beta9\r\n - Operating system: Ubuntu 14.04 LTS\r\n - Browser: Chrome\r\n\n", "before_files": [{"content": "\"\"\"\nImplements custom auth backends as described in the Django docs, for our custom user classes -- FacilityUser and\nDeviceOwner. The appropriate classes should be listed in the AUTHENTICATION_BACKENDS. Note that authentication\nbackends are checked in the order they're listed.\n\"\"\"\n\nfrom kolibri.auth.models import DeviceOwner, FacilityUser\n\n\nclass FacilityUserBackend(object):\n \"\"\"\n A class that implements authentication for FacilityUsers.\n \"\"\"\n\n def authenticate(self, username=None, password=None, facility=None):\n \"\"\"\n Authenticates the user if the credentials correspond to a FacilityUser for the specified Facility.\n\n :param username: a string\n :param password: a string\n :param facility: a Facility\n :return: A FacilityUser instance if successful, or None if authentication failed.\n \"\"\"\n users = FacilityUser.objects.filter(username=username)\n if facility:\n users = users.filter(facility=facility)\n for user in users:\n if user.check_password(password):\n return user\n # Allow login without password for learners for facilities that allow this.\n # Must specify the facility, to prevent accidental logins\n elif facility and user.dataset.learner_can_login_with_no_password and not user.roles.count():\n return user\n return None\n\n def get_user(self, user_id):\n \"\"\"\n Gets a user. Auth backends are required to implement this.\n\n :param user_id: A FacilityUser pk\n :return: A FacilityUser instance if a BaseUser with that pk is found, else None.\n \"\"\"\n try:\n return FacilityUser.objects.get(pk=user_id)\n except FacilityUser.DoesNotExist:\n return None\n\n\nclass DeviceOwnerBackend(object):\n \"\"\"\n A class that implements authentication for DeviceOwners.\n \"\"\"\n\n def authenticate(self, username=None, password=None, **kwargs):\n \"\"\"\n Authenticates the user if the credentials correspond to a DeviceOwner.\n\n :param username: a string\n :param password: a string\n :return: A DeviceOwner instance if successful, or None if authentication failed.\n \"\"\"\n try:\n user = DeviceOwner.objects.get(username=username)\n if user.check_password(password):\n return user\n else:\n return None\n except DeviceOwner.DoesNotExist:\n return None\n\n def get_user(self, user_id):\n \"\"\"\n Gets a user. Auth backends are required to implement this.\n\n :param user_id: A BaseUser pk\n :return: A DeviceOwner instance if a BaseUser with that pk is found, else None.\n \"\"\"\n try:\n return DeviceOwner.objects.get(pk=user_id)\n except DeviceOwner.DoesNotExist:\n return None\n", "path": "kolibri/auth/backends.py"}], "after_files": [{"content": "\"\"\"\nImplements custom auth backends as described in the Django docs, for our custom user classes -- FacilityUser and\nDeviceOwner. The appropriate classes should be listed in the AUTHENTICATION_BACKENDS. Note that authentication\nbackends are checked in the order they're listed.\n\"\"\"\n\nfrom kolibri.auth.models import DeviceOwner, FacilityUser\n\n\nclass FacilityUserBackend(object):\n \"\"\"\n A class that implements authentication for FacilityUsers.\n \"\"\"\n\n def authenticate(self, username=None, password=None, facility=None):\n \"\"\"\n Authenticates the user if the credentials correspond to a FacilityUser for the specified Facility.\n\n :param username: a string\n :param password: a string\n :param facility: a Facility\n :return: A FacilityUser instance if successful, or None if authentication failed.\n \"\"\"\n users = FacilityUser.objects.filter(username__iexact=username)\n if facility:\n users = users.filter(facility=facility)\n for user in users:\n if user.check_password(password):\n return user\n # Allow login without password for learners for facilities that allow this.\n # Must specify the facility, to prevent accidental logins\n elif facility and user.dataset.learner_can_login_with_no_password and not user.roles.count():\n return user\n return None\n\n def get_user(self, user_id):\n \"\"\"\n Gets a user. Auth backends are required to implement this.\n\n :param user_id: A FacilityUser pk\n :return: A FacilityUser instance if a BaseUser with that pk is found, else None.\n \"\"\"\n try:\n return FacilityUser.objects.get(pk=user_id)\n except FacilityUser.DoesNotExist:\n return None\n\n\nclass DeviceOwnerBackend(object):\n \"\"\"\n A class that implements authentication for DeviceOwners.\n \"\"\"\n\n def authenticate(self, username=None, password=None, **kwargs):\n \"\"\"\n Authenticates the user if the credentials correspond to a DeviceOwner.\n\n :param username: a string\n :param password: a string\n :return: A DeviceOwner instance if successful, or None if authentication failed.\n \"\"\"\n try:\n user = DeviceOwner.objects.get(username=username)\n if user.check_password(password):\n return user\n else:\n return None\n except DeviceOwner.DoesNotExist:\n return None\n\n def get_user(self, user_id):\n \"\"\"\n Gets a user. Auth backends are required to implement this.\n\n :param user_id: A BaseUser pk\n :return: A DeviceOwner instance if a BaseUser with that pk is found, else None.\n \"\"\"\n try:\n return DeviceOwner.objects.get(pk=user_id)\n except DeviceOwner.DoesNotExist:\n return None\n", "path": "kolibri/auth/backends.py"}]}
| 1,074 | 126 |
gh_patches_debug_63150
|
rasdani/github-patches
|
git_diff
|
frappe__frappe-15449
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pymysql.err.ProgrammingError: ('DocType', 'Webhook')
```
> bench --site all migrate --skip-failing
...
Migrating my-site
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/frappe/frappe-bench/apps/frappe/frappe/utils/bench_helper.py", line 104, in <module>
main()
File "/home/frappe/frappe-bench/apps/frappe/frappe/utils/bench_helper.py", line 19, in main
click.Group(commands=commands)(prog_name='bench')
File "/home/frappe/frappe-bench/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/frappe/frappe-bench/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/home/frappe/frappe-bench/env/lib/python3.6/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/frappe/frappe-bench/env/lib/python3.6/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/frappe/frappe-bench/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/frappe/frappe-bench/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/home/frappe/frappe-bench/env/lib/python3.6/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/frappe/frappe-bench/apps/frappe/frappe/commands/__init__.py", line 27, in _func
ret = f(frappe._dict(ctx.obj), *args, **kwargs)
File "/home/frappe/frappe-bench/apps/frappe/frappe/commands/site.py", line 309, in migrate
skip_search_index=skip_search_index
File "/home/frappe/frappe-bench/apps/frappe/frappe/migrate.py", line 78, in migrate
skip_search_index=skip_search_index
File "/home/frappe/frappe-bench/apps/frappe/frappe/migrate.py", line 78, in migrate
sync_languages()
File "/home/frappe/frappe-bench/apps/frappe/frappe/core/doctype/language/language.py", line 43, in sync_languages
'language_name': l['name']
File "/home/frappe/frappe-bench/apps/frappe/frappe/model/document.py", line 231, in insert
self.run_method("before_insert")
File "/home/frappe/frappe-bench/apps/frappe/frappe/model/document.py", line 870, in run_method
run_webhooks(self, method)
File "/home/frappe/frappe-bench/apps/frappe/frappe/integrations/doctype/webhook/__init__.py", line 25, in run_webhooks
filters={"enabled": True}
File "/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py", line 1469, in get_all
return get_list(doctype, *args, **kwargs)
File "/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py", line 1442, in get_list
return frappe.model.db_query.DatabaseQuery(doctype).execute(*args, **kwargs)
File "/home/frappe/frappe-bench/apps/frappe/frappe/model/db_query.py", line 102, in execute
self.columns = self.get_table_columns()
File "/home/frappe/frappe-bench/apps/frappe/frappe/model/db_query.py", line 339, in get_table_columns
return get_table_columns(self.doctype)
File "/home/frappe/frappe-bench/apps/frappe/frappe/model/meta.py", line 49, in get_table_columns
return frappe.db.get_table_columns(doctype)
File "/home/frappe/frappe-bench/apps/frappe/frappe/database/database.py", line 902, in get_table_columns
raise self.TableMissingError('DocType', doctype)
pymysql.err.ProgrammingError: ('DocType', 'Webhook')
```
Migrating from `version-13-beta` to `version-13` (13.17)
### Versions
```
> bench version
erpnext 13.17.0
frappe 13.17.1
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/integrations/doctype/webhook/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2017, Frappe Technologies and contributors
3 # License: MIT. See LICENSE
4
5 import frappe
6
7
8 def run_webhooks(doc, method):
9 '''Run webhooks for this method'''
10 if frappe.flags.in_import or frappe.flags.in_patch or frappe.flags.in_install:
11 return
12
13 if frappe.flags.webhooks_executed is None:
14 frappe.flags.webhooks_executed = {}
15
16 if frappe.flags.webhooks is None:
17 # load webhooks from cache
18 webhooks = frappe.cache().get_value('webhooks')
19 if webhooks is None:
20 # query webhooks
21 webhooks_list = frappe.get_all('Webhook',
22 fields=["name", "`condition`", "webhook_docevent", "webhook_doctype"],
23 filters={"enabled": True}
24 )
25
26 # make webhooks map for cache
27 webhooks = {}
28 for w in webhooks_list:
29 webhooks.setdefault(w.webhook_doctype, []).append(w)
30 frappe.cache().set_value('webhooks', webhooks)
31
32 frappe.flags.webhooks = webhooks
33
34 # get webhooks for this doctype
35 webhooks_for_doc = frappe.flags.webhooks.get(doc.doctype, None)
36
37 if not webhooks_for_doc:
38 # no webhooks, quit
39 return
40
41 def _webhook_request(webhook):
42 if webhook.name not in frappe.flags.webhooks_executed.get(doc.name, []):
43 frappe.enqueue("frappe.integrations.doctype.webhook.webhook.enqueue_webhook",
44 enqueue_after_commit=True, doc=doc, webhook=webhook)
45
46 # keep list of webhooks executed for this doc in this request
47 # so that we don't run the same webhook for the same document multiple times
48 # in one request
49 frappe.flags.webhooks_executed.setdefault(doc.name, []).append(webhook.name)
50
51 event_list = ["on_update", "after_insert", "on_submit", "on_cancel", "on_trash"]
52
53 if not doc.flags.in_insert:
54 # value change is not applicable in insert
55 event_list.append('on_change')
56 event_list.append('before_update_after_submit')
57
58 from frappe.integrations.doctype.webhook.webhook import get_context
59
60 for webhook in webhooks_for_doc:
61 trigger_webhook = False
62 event = method if method in event_list else None
63 if not webhook.condition:
64 trigger_webhook = True
65 elif frappe.safe_eval(webhook.condition, eval_locals=get_context(doc)):
66 trigger_webhook = True
67
68 if trigger_webhook and event and webhook.webhook_docevent == event:
69 _webhook_request(webhook)
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/frappe/integrations/doctype/webhook/__init__.py b/frappe/integrations/doctype/webhook/__init__.py
--- a/frappe/integrations/doctype/webhook/__init__.py
+++ b/frappe/integrations/doctype/webhook/__init__.py
@@ -7,7 +7,7 @@
def run_webhooks(doc, method):
'''Run webhooks for this method'''
- if frappe.flags.in_import or frappe.flags.in_patch or frappe.flags.in_install:
+ if frappe.flags.in_import or frappe.flags.in_patch or frappe.flags.in_install or frappe.flags.in_migrate:
return
if frappe.flags.webhooks_executed is None:
|
{"golden_diff": "diff --git a/frappe/integrations/doctype/webhook/__init__.py b/frappe/integrations/doctype/webhook/__init__.py\n--- a/frappe/integrations/doctype/webhook/__init__.py\n+++ b/frappe/integrations/doctype/webhook/__init__.py\n@@ -7,7 +7,7 @@\n \n def run_webhooks(doc, method):\n \t'''Run webhooks for this method'''\n-\tif frappe.flags.in_import or frappe.flags.in_patch or frappe.flags.in_install:\n+\tif frappe.flags.in_import or frappe.flags.in_patch or frappe.flags.in_install or frappe.flags.in_migrate:\n \t\treturn\n \n \tif frappe.flags.webhooks_executed is None:\n", "issue": "pymysql.err.ProgrammingError: ('DocType', 'Webhook')\n```\r\n> bench --site all migrate --skip-failing \r\n...\r\nMigrating my-site\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/utils/bench_helper.py\", line 104, in <module>\r\n main()\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/utils/bench_helper.py\", line 19, in main\r\n click.Group(commands=commands)(prog_name='bench')\r\n File \"/home/frappe/frappe-bench/env/lib/python3.6/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/frappe/frappe-bench/env/lib/python3.6/site-packages/click/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/frappe/frappe-bench/env/lib/python3.6/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/frappe/frappe-bench/env/lib/python3.6/site-packages/click/core.py\", line 1259, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/frappe/frappe-bench/env/lib/python3.6/site-packages/click/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/frappe/frappe-bench/env/lib/python3.6/site-packages/click/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/home/frappe/frappe-bench/env/lib/python3.6/site-packages/click/decorators.py\", line 21, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/commands/__init__.py\", line 27, in _func\r\n ret = f(frappe._dict(ctx.obj), *args, **kwargs)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/commands/site.py\", line 309, in migrate\r\n skip_search_index=skip_search_index\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/migrate.py\", line 78, in migrate\r\n skip_search_index=skip_search_index\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/migrate.py\", line 78, in migrate\r\n sync_languages()\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/core/doctype/language/language.py\", line 43, in sync_languages\r\n 'language_name': l['name']\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/model/document.py\", line 231, in insert\r\n self.run_method(\"before_insert\")\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/model/document.py\", line 870, in run_method\r\n run_webhooks(self, method)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/integrations/doctype/webhook/__init__.py\", line 25, in run_webhooks\r\n filters={\"enabled\": True}\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py\", line 1469, in get_all\r\n return get_list(doctype, *args, **kwargs)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py\", line 1442, in get_list\r\n return frappe.model.db_query.DatabaseQuery(doctype).execute(*args, **kwargs)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/model/db_query.py\", line 102, in execute\r\n self.columns = self.get_table_columns()\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/model/db_query.py\", line 339, in get_table_columns\r\n return get_table_columns(self.doctype)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/model/meta.py\", line 49, in get_table_columns\r\n return frappe.db.get_table_columns(doctype)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/database/database.py\", line 902, in get_table_columns\r\n raise self.TableMissingError('DocType', doctype)\r\npymysql.err.ProgrammingError: ('DocType', 'Webhook')\r\n```\r\n\r\nMigrating from `version-13-beta` to `version-13` (13.17)\r\n\r\n### Versions\r\n\r\n```\r\n> bench version\r\nerpnext 13.17.0\r\nfrappe 13.17.1\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2017, Frappe Technologies and contributors\n# License: MIT. See LICENSE\n\nimport frappe\n\n\ndef run_webhooks(doc, method):\n\t'''Run webhooks for this method'''\n\tif frappe.flags.in_import or frappe.flags.in_patch or frappe.flags.in_install:\n\t\treturn\n\n\tif frappe.flags.webhooks_executed is None:\n\t\tfrappe.flags.webhooks_executed = {}\n\n\tif frappe.flags.webhooks is None:\n\t\t# load webhooks from cache\n\t\twebhooks = frappe.cache().get_value('webhooks')\n\t\tif webhooks is None:\n\t\t\t# query webhooks\n\t\t\twebhooks_list = frappe.get_all('Webhook',\n\t\t\t\t\t\tfields=[\"name\", \"`condition`\", \"webhook_docevent\", \"webhook_doctype\"],\n\t\t\t\t\t\tfilters={\"enabled\": True}\n\t\t\t\t\t)\n\n\t\t\t# make webhooks map for cache\n\t\t\twebhooks = {}\n\t\t\tfor w in webhooks_list:\n\t\t\t\twebhooks.setdefault(w.webhook_doctype, []).append(w)\n\t\t\tfrappe.cache().set_value('webhooks', webhooks)\n\n\t\tfrappe.flags.webhooks = webhooks\n\n\t# get webhooks for this doctype\n\twebhooks_for_doc = frappe.flags.webhooks.get(doc.doctype, None)\n\n\tif not webhooks_for_doc:\n\t\t# no webhooks, quit\n\t\treturn\n\n\tdef _webhook_request(webhook):\n\t\tif webhook.name not in frappe.flags.webhooks_executed.get(doc.name, []):\n\t\t\tfrappe.enqueue(\"frappe.integrations.doctype.webhook.webhook.enqueue_webhook\",\n\t\t\t\tenqueue_after_commit=True, doc=doc, webhook=webhook)\n\n\t\t\t# keep list of webhooks executed for this doc in this request\n\t\t\t# so that we don't run the same webhook for the same document multiple times\n\t\t\t# in one request\n\t\t\tfrappe.flags.webhooks_executed.setdefault(doc.name, []).append(webhook.name)\n\n\tevent_list = [\"on_update\", \"after_insert\", \"on_submit\", \"on_cancel\", \"on_trash\"]\n\n\tif not doc.flags.in_insert:\n\t\t# value change is not applicable in insert\n\t\tevent_list.append('on_change')\n\t\tevent_list.append('before_update_after_submit')\n\n\tfrom frappe.integrations.doctype.webhook.webhook import get_context\n\n\tfor webhook in webhooks_for_doc:\n\t\ttrigger_webhook = False\n\t\tevent = method if method in event_list else None\n\t\tif not webhook.condition:\n\t\t\ttrigger_webhook = True\n\t\telif frappe.safe_eval(webhook.condition, eval_locals=get_context(doc)):\n\t\t\ttrigger_webhook = True\n\n\t\tif trigger_webhook and event and webhook.webhook_docevent == event:\n\t\t\t_webhook_request(webhook)\n", "path": "frappe/integrations/doctype/webhook/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2017, Frappe Technologies and contributors\n# License: MIT. See LICENSE\n\nimport frappe\n\n\ndef run_webhooks(doc, method):\n\t'''Run webhooks for this method'''\n\tif frappe.flags.in_import or frappe.flags.in_patch or frappe.flags.in_install or frappe.flags.in_migrate:\n\t\treturn\n\n\tif frappe.flags.webhooks_executed is None:\n\t\tfrappe.flags.webhooks_executed = {}\n\n\tif frappe.flags.webhooks is None:\n\t\t# load webhooks from cache\n\t\twebhooks = frappe.cache().get_value('webhooks')\n\t\tif webhooks is None:\n\t\t\t# query webhooks\n\t\t\twebhooks_list = frappe.get_all('Webhook',\n\t\t\t\t\t\tfields=[\"name\", \"`condition`\", \"webhook_docevent\", \"webhook_doctype\"],\n\t\t\t\t\t\tfilters={\"enabled\": True}\n\t\t\t\t\t)\n\n\t\t\t# make webhooks map for cache\n\t\t\twebhooks = {}\n\t\t\tfor w in webhooks_list:\n\t\t\t\twebhooks.setdefault(w.webhook_doctype, []).append(w)\n\t\t\tfrappe.cache().set_value('webhooks', webhooks)\n\n\t\tfrappe.flags.webhooks = webhooks\n\n\t# get webhooks for this doctype\n\twebhooks_for_doc = frappe.flags.webhooks.get(doc.doctype, None)\n\n\tif not webhooks_for_doc:\n\t\t# no webhooks, quit\n\t\treturn\n\n\tdef _webhook_request(webhook):\n\t\tif webhook.name not in frappe.flags.webhooks_executed.get(doc.name, []):\n\t\t\tfrappe.enqueue(\"frappe.integrations.doctype.webhook.webhook.enqueue_webhook\",\n\t\t\t\tenqueue_after_commit=True, doc=doc, webhook=webhook)\n\n\t\t\t# keep list of webhooks executed for this doc in this request\n\t\t\t# so that we don't run the same webhook for the same document multiple times\n\t\t\t# in one request\n\t\t\tfrappe.flags.webhooks_executed.setdefault(doc.name, []).append(webhook.name)\n\n\tevent_list = [\"on_update\", \"after_insert\", \"on_submit\", \"on_cancel\", \"on_trash\"]\n\n\tif not doc.flags.in_insert:\n\t\t# value change is not applicable in insert\n\t\tevent_list.append('on_change')\n\t\tevent_list.append('before_update_after_submit')\n\n\tfrom frappe.integrations.doctype.webhook.webhook import get_context\n\n\tfor webhook in webhooks_for_doc:\n\t\ttrigger_webhook = False\n\t\tevent = method if method in event_list else None\n\t\tif not webhook.condition:\n\t\t\ttrigger_webhook = True\n\t\telif frappe.safe_eval(webhook.condition, eval_locals=get_context(doc)):\n\t\t\ttrigger_webhook = True\n\n\t\tif trigger_webhook and event and webhook.webhook_docevent == event:\n\t\t\t_webhook_request(webhook)\n", "path": "frappe/integrations/doctype/webhook/__init__.py"}]}
| 2,143 | 155 |
gh_patches_debug_39006
|
rasdani/github-patches
|
git_diff
|
MycroftAI__mycroft-core-2538
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Skills and Enclosure background services fail to stop and are killed...
## Be clear about the software, hardware and version you are running
For example:
in CLI
>> what version are you running
>> I am running mycroft-core version 20 oh 2, release 0
>> You are on the latest version.
Opensuse Leap 15.1
## Try to provide steps that we can use to replicate the Issue
For example:
1. CTRL+C in CLI
2. Enter ./stop_mycroft.sh
3. Skills and Enclosure services are eventually killed.
4. Takes about 30 seconds total
## Be as specific as possible about the expected condition, and the deviation from expected condition.
user@LinuxOS:~/mycroft-core> ./stop-mycroft.sh skills
Stopping skills (5579)...stopped.
user@LinuxOS:~/mycroft-core> ./stop-mycroft.sh enclosure
Stopping enclosure (5588)...failed to stop.
Killing enclosure (5588)...killed.
user@LinuxOS:~/mycroft-core> ./stop-mycroft.sh
Stopping all mycroft-core services
Stopping messagebus.service (5576)...stopped.
Stopping audio (5582)...stopped.
Stopping speech (5585)...stopped.
...
user@LinuxOS:~/mycroft-core> ./stop-mycroft.sh
Stopping all mycroft-core services
Stopping messagebus.service (18995)...stopped.
Stopping skills (18998)...failed to stop.
Killing skills (18998)...killed.
Stopping audio (19001)...stopped.
Stopping speech (19004)...stopped.
Stopping enclosure (19007)...failed to stop.
Killing enclosure (19007)...killed.
user@LinuxOS:~/mycroft-core>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mycroft/client/enclosure/__main__.py`
Content:
```
1 # Copyright 2017 Mycroft AI Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 import sys
16
17 from mycroft.util.log import LOG
18 from mycroft.messagebus.client import MessageBusClient
19 from mycroft.configuration import Configuration, LocalConf, SYSTEM_CONFIG
20
21
22 def main():
23 # Read the system configuration
24 system_config = LocalConf(SYSTEM_CONFIG)
25 platform = system_config.get("enclosure", {}).get("platform")
26
27 if platform == "mycroft_mark_1":
28 LOG.debug("Creating Mark I Enclosure")
29 from mycroft.client.enclosure.mark1 import EnclosureMark1
30 enclosure = EnclosureMark1()
31 elif platform == "mycroft_mark_2":
32 LOG.debug("Creating Mark II Enclosure")
33 from mycroft.client.enclosure.mark2 import EnclosureMark2
34 enclosure = EnclosureMark2()
35 else:
36 LOG.debug("Creating generic enclosure, platform='{}'".format(platform))
37
38 # TODO: Mechanism to load from elsewhere. E.g. read a script path from
39 # the mycroft.conf, then load/launch that script.
40 from mycroft.client.enclosure.generic import EnclosureGeneric
41 enclosure = EnclosureGeneric()
42
43 if enclosure:
44 try:
45 LOG.debug("Enclosure started!")
46 enclosure.run()
47 except Exception as e:
48 print(e)
49 finally:
50 sys.exit()
51 else:
52 LOG.debug("No enclosure available for this hardware, running headless")
53
54
55 if __name__ == "__main__":
56 main()
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mycroft/client/enclosure/__main__.py b/mycroft/client/enclosure/__main__.py
--- a/mycroft/client/enclosure/__main__.py
+++ b/mycroft/client/enclosure/__main__.py
@@ -12,44 +12,67 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
-import sys
+"""Entrypoint for enclosure service.
+This provides any "enclosure" specific functionality, for example GUI or
+control over the Mark-1 Faceplate.
+"""
+from mycroft.configuration import LocalConf, SYSTEM_CONFIG
from mycroft.util.log import LOG
-from mycroft.messagebus.client import MessageBusClient
-from mycroft.configuration import Configuration, LocalConf, SYSTEM_CONFIG
+from mycroft.util import (create_daemon, wait_for_exit_signal,
+ reset_sigint_handler)
-def main():
- # Read the system configuration
- system_config = LocalConf(SYSTEM_CONFIG)
- platform = system_config.get("enclosure", {}).get("platform")
+def create_enclosure(platform):
+ """Create an enclosure based on the provided platform string.
+ Arguments:
+ platform (str): platform name string
+
+ Returns:
+ Enclosure object
+ """
if platform == "mycroft_mark_1":
- LOG.debug("Creating Mark I Enclosure")
+ LOG.info("Creating Mark I Enclosure")
from mycroft.client.enclosure.mark1 import EnclosureMark1
enclosure = EnclosureMark1()
elif platform == "mycroft_mark_2":
- LOG.debug("Creating Mark II Enclosure")
+ LOG.info("Creating Mark II Enclosure")
from mycroft.client.enclosure.mark2 import EnclosureMark2
enclosure = EnclosureMark2()
else:
- LOG.debug("Creating generic enclosure, platform='{}'".format(platform))
+ LOG.info("Creating generic enclosure, platform='{}'".format(platform))
# TODO: Mechanism to load from elsewhere. E.g. read a script path from
# the mycroft.conf, then load/launch that script.
from mycroft.client.enclosure.generic import EnclosureGeneric
enclosure = EnclosureGeneric()
+ return enclosure
+
+
+def main():
+ """Launch one of the available enclosure implementations.
+
+ This depends on the configured platform and can currently either be
+ mycroft_mark_1 or mycroft_mark_2, if unconfigured a generic enclosure with
+ only the GUI bus will be started.
+ """
+ # Read the system configuration
+ system_config = LocalConf(SYSTEM_CONFIG)
+ platform = system_config.get("enclosure", {}).get("platform")
+
+ enclosure = create_enclosure(platform)
if enclosure:
try:
LOG.debug("Enclosure started!")
- enclosure.run()
+ reset_sigint_handler()
+ create_daemon(enclosure.run)
+ wait_for_exit_signal()
except Exception as e:
print(e)
- finally:
- sys.exit()
else:
- LOG.debug("No enclosure available for this hardware, running headless")
+ LOG.info("No enclosure available for this hardware, running headless")
if __name__ == "__main__":
|
{"golden_diff": "diff --git a/mycroft/client/enclosure/__main__.py b/mycroft/client/enclosure/__main__.py\n--- a/mycroft/client/enclosure/__main__.py\n+++ b/mycroft/client/enclosure/__main__.py\n@@ -12,44 +12,67 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n #\n-import sys\n+\"\"\"Entrypoint for enclosure service.\n \n+This provides any \"enclosure\" specific functionality, for example GUI or\n+control over the Mark-1 Faceplate.\n+\"\"\"\n+from mycroft.configuration import LocalConf, SYSTEM_CONFIG\n from mycroft.util.log import LOG\n-from mycroft.messagebus.client import MessageBusClient\n-from mycroft.configuration import Configuration, LocalConf, SYSTEM_CONFIG\n+from mycroft.util import (create_daemon, wait_for_exit_signal,\n+ reset_sigint_handler)\n \n \n-def main():\n- # Read the system configuration\n- system_config = LocalConf(SYSTEM_CONFIG)\n- platform = system_config.get(\"enclosure\", {}).get(\"platform\")\n+def create_enclosure(platform):\n+ \"\"\"Create an enclosure based on the provided platform string.\n \n+ Arguments:\n+ platform (str): platform name string\n+\n+ Returns:\n+ Enclosure object\n+ \"\"\"\n if platform == \"mycroft_mark_1\":\n- LOG.debug(\"Creating Mark I Enclosure\")\n+ LOG.info(\"Creating Mark I Enclosure\")\n from mycroft.client.enclosure.mark1 import EnclosureMark1\n enclosure = EnclosureMark1()\n elif platform == \"mycroft_mark_2\":\n- LOG.debug(\"Creating Mark II Enclosure\")\n+ LOG.info(\"Creating Mark II Enclosure\")\n from mycroft.client.enclosure.mark2 import EnclosureMark2\n enclosure = EnclosureMark2()\n else:\n- LOG.debug(\"Creating generic enclosure, platform='{}'\".format(platform))\n+ LOG.info(\"Creating generic enclosure, platform='{}'\".format(platform))\n \n # TODO: Mechanism to load from elsewhere. E.g. read a script path from\n # the mycroft.conf, then load/launch that script.\n from mycroft.client.enclosure.generic import EnclosureGeneric\n enclosure = EnclosureGeneric()\n \n+ return enclosure\n+\n+\n+def main():\n+ \"\"\"Launch one of the available enclosure implementations.\n+\n+ This depends on the configured platform and can currently either be\n+ mycroft_mark_1 or mycroft_mark_2, if unconfigured a generic enclosure with\n+ only the GUI bus will be started.\n+ \"\"\"\n+ # Read the system configuration\n+ system_config = LocalConf(SYSTEM_CONFIG)\n+ platform = system_config.get(\"enclosure\", {}).get(\"platform\")\n+\n+ enclosure = create_enclosure(platform)\n if enclosure:\n try:\n LOG.debug(\"Enclosure started!\")\n- enclosure.run()\n+ reset_sigint_handler()\n+ create_daemon(enclosure.run)\n+ wait_for_exit_signal()\n except Exception as e:\n print(e)\n- finally:\n- sys.exit()\n else:\n- LOG.debug(\"No enclosure available for this hardware, running headless\")\n+ LOG.info(\"No enclosure available for this hardware, running headless\")\n \n \n if __name__ == \"__main__\":\n", "issue": "Skills and Enclosure background services fail to stop and are killed...\n## Be clear about the software, hardware and version you are running\r\n\r\nFor example: \r\n\r\nin CLI\r\n >> what version are you running \r\n >> I am running mycroft-core version 20 oh 2, release 0 \r\n >> You are on the latest version.\r\n\r\nOpensuse Leap 15.1\r\n## Try to provide steps that we can use to replicate the Issue\r\n\r\nFor example: \r\n\r\n1. CTRL+C in CLI\r\n2. Enter ./stop_mycroft.sh \r\n3. Skills and Enclosure services are eventually killed.\r\n4. Takes about 30 seconds total\r\n\r\n## Be as specific as possible about the expected condition, and the deviation from expected condition. \r\n\r\nuser@LinuxOS:~/mycroft-core> ./stop-mycroft.sh skills\r\nStopping skills (5579)...stopped.\r\nuser@LinuxOS:~/mycroft-core> ./stop-mycroft.sh enclosure\r\nStopping enclosure (5588)...failed to stop.\r\n Killing enclosure (5588)...killed.\r\nuser@LinuxOS:~/mycroft-core> ./stop-mycroft.sh\r\nStopping all mycroft-core services\r\nStopping messagebus.service (5576)...stopped.\r\nStopping audio (5582)...stopped.\r\nStopping speech (5585)...stopped.\r\n...\r\nuser@LinuxOS:~/mycroft-core> ./stop-mycroft.sh\r\nStopping all mycroft-core services\r\nStopping messagebus.service (18995)...stopped.\r\nStopping skills (18998)...failed to stop.\r\n Killing skills (18998)...killed.\r\nStopping audio (19001)...stopped.\r\nStopping speech (19004)...stopped.\r\nStopping enclosure (19007)...failed to stop.\r\n Killing enclosure (19007)...killed.\r\nuser@LinuxOS:~/mycroft-core> \r\n\r\n\n", "before_files": [{"content": "# Copyright 2017 Mycroft AI Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport sys\n\nfrom mycroft.util.log import LOG\nfrom mycroft.messagebus.client import MessageBusClient\nfrom mycroft.configuration import Configuration, LocalConf, SYSTEM_CONFIG\n\n\ndef main():\n # Read the system configuration\n system_config = LocalConf(SYSTEM_CONFIG)\n platform = system_config.get(\"enclosure\", {}).get(\"platform\")\n\n if platform == \"mycroft_mark_1\":\n LOG.debug(\"Creating Mark I Enclosure\")\n from mycroft.client.enclosure.mark1 import EnclosureMark1\n enclosure = EnclosureMark1()\n elif platform == \"mycroft_mark_2\":\n LOG.debug(\"Creating Mark II Enclosure\")\n from mycroft.client.enclosure.mark2 import EnclosureMark2\n enclosure = EnclosureMark2()\n else:\n LOG.debug(\"Creating generic enclosure, platform='{}'\".format(platform))\n\n # TODO: Mechanism to load from elsewhere. E.g. read a script path from\n # the mycroft.conf, then load/launch that script.\n from mycroft.client.enclosure.generic import EnclosureGeneric\n enclosure = EnclosureGeneric()\n\n if enclosure:\n try:\n LOG.debug(\"Enclosure started!\")\n enclosure.run()\n except Exception as e:\n print(e)\n finally:\n sys.exit()\n else:\n LOG.debug(\"No enclosure available for this hardware, running headless\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "mycroft/client/enclosure/__main__.py"}], "after_files": [{"content": "# Copyright 2017 Mycroft AI Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\"\"\"Entrypoint for enclosure service.\n\nThis provides any \"enclosure\" specific functionality, for example GUI or\ncontrol over the Mark-1 Faceplate.\n\"\"\"\nfrom mycroft.configuration import LocalConf, SYSTEM_CONFIG\nfrom mycroft.util.log import LOG\nfrom mycroft.util import (create_daemon, wait_for_exit_signal,\n reset_sigint_handler)\n\n\ndef create_enclosure(platform):\n \"\"\"Create an enclosure based on the provided platform string.\n\n Arguments:\n platform (str): platform name string\n\n Returns:\n Enclosure object\n \"\"\"\n if platform == \"mycroft_mark_1\":\n LOG.info(\"Creating Mark I Enclosure\")\n from mycroft.client.enclosure.mark1 import EnclosureMark1\n enclosure = EnclosureMark1()\n elif platform == \"mycroft_mark_2\":\n LOG.info(\"Creating Mark II Enclosure\")\n from mycroft.client.enclosure.mark2 import EnclosureMark2\n enclosure = EnclosureMark2()\n else:\n LOG.info(\"Creating generic enclosure, platform='{}'\".format(platform))\n\n # TODO: Mechanism to load from elsewhere. E.g. read a script path from\n # the mycroft.conf, then load/launch that script.\n from mycroft.client.enclosure.generic import EnclosureGeneric\n enclosure = EnclosureGeneric()\n\n return enclosure\n\n\ndef main():\n \"\"\"Launch one of the available enclosure implementations.\n\n This depends on the configured platform and can currently either be\n mycroft_mark_1 or mycroft_mark_2, if unconfigured a generic enclosure with\n only the GUI bus will be started.\n \"\"\"\n # Read the system configuration\n system_config = LocalConf(SYSTEM_CONFIG)\n platform = system_config.get(\"enclosure\", {}).get(\"platform\")\n\n enclosure = create_enclosure(platform)\n if enclosure:\n try:\n LOG.debug(\"Enclosure started!\")\n reset_sigint_handler()\n create_daemon(enclosure.run)\n wait_for_exit_signal()\n except Exception as e:\n print(e)\n else:\n LOG.info(\"No enclosure available for this hardware, running headless\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "mycroft/client/enclosure/__main__.py"}]}
| 1,219 | 710 |
gh_patches_debug_29429
|
rasdani/github-patches
|
git_diff
|
encode__starlette-109
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
scope["server"] can be None
From https://asgi.readthedocs.io/en/latest/specs/www.html#connection-scope
> server: A two-item iterable of [host, port], where host is the listening address for this server as a unicode string, and port is the integer listening port. Optional, defaults to None.
https://github.com/encode/starlette/blob/master/starlette/datastructures.py#L11 doesn't handle that option, it assumes scope["server"] is always a two-pair
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/datastructures.py`
Content:
```
1 import typing
2 from starlette.types import Scope
3 from urllib.parse import parse_qsl, unquote, urlparse, ParseResult
4
5
6 class URL:
7 def __init__(self, url: str = "", scope: Scope = None) -> None:
8 if scope is not None:
9 assert not url, 'Cannot set both "url" and "scope".'
10 scheme = scope["scheme"]
11 host, port = scope["server"]
12 path = scope.get("root_path", "") + scope["path"]
13 query_string = scope["query_string"]
14
15 default_port = {"http": 80, "https": 443, "ws": 80, "wss": 443}[scheme]
16 if port == default_port:
17 url = "%s://%s%s" % (scheme, host, path)
18 else:
19 url = "%s://%s:%s%s" % (scheme, host, port, path)
20
21 if query_string:
22 url += "?" + unquote(query_string.decode())
23 self._url = url
24
25 @property
26 def components(self) -> ParseResult:
27 if not hasattr(self, "_components"):
28 self._components = urlparse(self._url)
29 return self._components
30
31 @property
32 def scheme(self) -> str:
33 return self.components.scheme
34
35 @property
36 def netloc(self) -> str:
37 return self.components.netloc
38
39 @property
40 def path(self) -> str:
41 return self.components.path
42
43 @property
44 def params(self) -> str:
45 return self.components.params
46
47 @property
48 def query(self) -> str:
49 return self.components.query
50
51 @property
52 def fragment(self) -> str:
53 return self.components.fragment
54
55 @property
56 def username(self) -> typing.Union[None, str]:
57 return self.components.username
58
59 @property
60 def password(self) -> typing.Union[None, str]:
61 return self.components.password
62
63 @property
64 def hostname(self) -> typing.Union[None, str]:
65 return self.components.hostname
66
67 @property
68 def port(self) -> typing.Optional[int]:
69 return self.components.port
70
71 def replace(self, **kwargs: typing.Any) -> "URL":
72 if "hostname" in kwargs or "port" in kwargs:
73 hostname = kwargs.pop("hostname", self.hostname)
74 port = kwargs.pop("port", self.port)
75 if port is None:
76 kwargs["netloc"] = hostname
77 else:
78 kwargs["netloc"] = "%s:%d" % (hostname, port)
79 components = self.components._replace(**kwargs)
80 return URL(components.geturl())
81
82 def __eq__(self, other):
83 return str(self) == str(other)
84
85 def __str__(self):
86 return self._url
87
88
89 # Type annotations for valid `__init__` values to QueryParams and Headers.
90 StrPairs = typing.Sequence[typing.Tuple[str, str]]
91 BytesPairs = typing.List[typing.Tuple[bytes, bytes]]
92 StrDict = typing.Mapping[str, str]
93
94
95 class QueryParams(StrDict):
96 """
97 An immutable multidict.
98 """
99
100 def __init__(
101 self, value: typing.Union[str, typing.Union[StrDict, StrPairs]] = None
102 ) -> None:
103 if value is None:
104 value = []
105 elif isinstance(value, str):
106 value = parse_qsl(value)
107
108 if hasattr(value, "items"):
109 items = list(typing.cast(StrDict, value).items())
110 else:
111 items = list(typing.cast(StrPairs, value))
112 self._dict = {k: v for k, v in reversed(items)}
113 self._list = items
114
115 def getlist(self, key: typing.Any) -> typing.List[str]:
116 return [item_value for item_key, item_value in self._list if item_key == key]
117
118 def keys(self) -> typing.List[str]: # type: ignore
119 return [key for key, value in self._list]
120
121 def values(self) -> typing.List[str]: # type: ignore
122 return [value for key, value in self._list]
123
124 def items(self) -> StrPairs: # type: ignore
125 return list(self._list)
126
127 def get(self, key: typing.Any, default: typing.Any = None) -> typing.Any:
128 if key in self._dict:
129 return self._dict[key]
130 else:
131 return default
132
133 def __getitem__(self, key: typing.Any) -> str:
134 return self._dict[key]
135
136 def __contains__(self, key: typing.Any) -> bool:
137 return key in self._dict
138
139 def __iter__(self) -> typing.Iterator[typing.Any]:
140 return iter(self._list)
141
142 def __len__(self) -> int:
143 return len(self._list)
144
145 def __eq__(self, other: typing.Any) -> bool:
146 if not isinstance(other, QueryParams):
147 other = QueryParams(other)
148 return sorted(self._list) == sorted(other._list)
149
150 def __repr__(self) -> str:
151 return "QueryParams(%s)" % repr(self._list)
152
153
154 class Headers(typing.Mapping[str, str]):
155 """
156 An immutable, case-insensitive multidict.
157 """
158
159 def __init__(self, raw_headers: typing.Optional[BytesPairs] = None) -> None:
160 if raw_headers is None:
161 self._list = [] # type: BytesPairs
162 else:
163 for header_key, header_value in raw_headers:
164 assert isinstance(header_key, bytes)
165 assert isinstance(header_value, bytes)
166 assert header_key == header_key.lower()
167 self._list = raw_headers
168
169 def keys(self) -> typing.List[str]: # type: ignore
170 return [key.decode("latin-1") for key, value in self._list]
171
172 def values(self) -> typing.List[str]: # type: ignore
173 return [value.decode("latin-1") for key, value in self._list]
174
175 def items(self) -> StrPairs: # type: ignore
176 return [
177 (key.decode("latin-1"), value.decode("latin-1"))
178 for key, value in self._list
179 ]
180
181 def get(self, key: str, default: typing.Any = None) -> typing.Any:
182 try:
183 return self[key]
184 except KeyError:
185 return default
186
187 def getlist(self, key: str) -> typing.List[str]:
188 get_header_key = key.lower().encode("latin-1")
189 return [
190 item_value.decode("latin-1")
191 for item_key, item_value in self._list
192 if item_key == get_header_key
193 ]
194
195 def mutablecopy(self) -> "MutableHeaders":
196 return MutableHeaders(self._list[:])
197
198 def __getitem__(self, key: str) -> str:
199 get_header_key = key.lower().encode("latin-1")
200 for header_key, header_value in self._list:
201 if header_key == get_header_key:
202 return header_value.decode("latin-1")
203 raise KeyError(key)
204
205 def __contains__(self, key: typing.Any) -> bool:
206 get_header_key = key.lower().encode("latin-1")
207 for header_key, header_value in self._list:
208 if header_key == get_header_key:
209 return True
210 return False
211
212 def __iter__(self) -> typing.Iterator[typing.Any]:
213 return iter(self.items())
214
215 def __len__(self) -> int:
216 return len(self._list)
217
218 def __eq__(self, other: typing.Any) -> bool:
219 if not isinstance(other, Headers):
220 return False
221 return sorted(self._list) == sorted(other._list)
222
223 def __repr__(self) -> str:
224 return "%s(%s)" % (self.__class__.__name__, repr(self.items()))
225
226
227 class MutableHeaders(Headers):
228 def __setitem__(self, key: str, value: str) -> None:
229 """
230 Set the header `key` to `value`, removing any duplicate entries.
231 Retains insertion order.
232 """
233 set_key = key.lower().encode("latin-1")
234 set_value = value.encode("latin-1")
235
236 found_indexes = []
237 for idx, (item_key, item_value) in enumerate(self._list):
238 if item_key == set_key:
239 found_indexes.append(idx)
240
241 for idx in reversed(found_indexes[1:]):
242 del self._list[idx]
243
244 if found_indexes:
245 idx = found_indexes[0]
246 self._list[idx] = (set_key, set_value)
247 else:
248 self._list.append((set_key, set_value))
249
250 def __delitem__(self, key: str) -> None:
251 """
252 Remove the header `key`.
253 """
254 del_key = key.lower().encode("latin-1")
255
256 pop_indexes = []
257 for idx, (item_key, item_value) in enumerate(self._list):
258 if item_key == del_key:
259 pop_indexes.append(idx)
260
261 for idx in reversed(pop_indexes):
262 del (self._list[idx])
263
264 def setdefault(self, key: str, value: str) -> str:
265 """
266 If the header `key` does not exist, then set it to `value`.
267 Returns the header value.
268 """
269 set_key = key.lower().encode("latin-1")
270 set_value = value.encode("latin-1")
271
272 for idx, (item_key, item_value) in enumerate(self._list):
273 if item_key == set_key:
274 return item_value.decode("latin-1")
275 self._list.append((set_key, set_value))
276 return value
277
278 def update(self, other: dict):
279 for key, val in other.items():
280 self[key] = val
281
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/starlette/datastructures.py b/starlette/datastructures.py
--- a/starlette/datastructures.py
+++ b/starlette/datastructures.py
@@ -7,16 +7,20 @@
def __init__(self, url: str = "", scope: Scope = None) -> None:
if scope is not None:
assert not url, 'Cannot set both "url" and "scope".'
- scheme = scope["scheme"]
- host, port = scope["server"]
+ scheme = scope.get("scheme", "http")
+ server = scope.get("server", None)
path = scope.get("root_path", "") + scope["path"]
query_string = scope["query_string"]
- default_port = {"http": 80, "https": 443, "ws": 80, "wss": 443}[scheme]
- if port == default_port:
- url = "%s://%s%s" % (scheme, host, path)
+ if server is None:
+ url = path
else:
- url = "%s://%s:%s%s" % (scheme, host, port, path)
+ host, port = server
+ default_port = {"http": 80, "https": 443, "ws": 80, "wss": 443}[scheme]
+ if port == default_port:
+ url = "%s://%s%s" % (scheme, host, path)
+ else:
+ url = "%s://%s:%s%s" % (scheme, host, port, path)
if query_string:
url += "?" + unquote(query_string.decode())
@@ -85,6 +89,9 @@
def __str__(self):
return self._url
+ def __repr__(self):
+ return "%s(%s)" % (self.__class__.__name__, repr(self._url))
+
# Type annotations for valid `__init__` values to QueryParams and Headers.
StrPairs = typing.Sequence[typing.Tuple[str, str]]
|
{"golden_diff": "diff --git a/starlette/datastructures.py b/starlette/datastructures.py\n--- a/starlette/datastructures.py\n+++ b/starlette/datastructures.py\n@@ -7,16 +7,20 @@\n def __init__(self, url: str = \"\", scope: Scope = None) -> None:\n if scope is not None:\n assert not url, 'Cannot set both \"url\" and \"scope\".'\n- scheme = scope[\"scheme\"]\n- host, port = scope[\"server\"]\n+ scheme = scope.get(\"scheme\", \"http\")\n+ server = scope.get(\"server\", None)\n path = scope.get(\"root_path\", \"\") + scope[\"path\"]\n query_string = scope[\"query_string\"]\n \n- default_port = {\"http\": 80, \"https\": 443, \"ws\": 80, \"wss\": 443}[scheme]\n- if port == default_port:\n- url = \"%s://%s%s\" % (scheme, host, path)\n+ if server is None:\n+ url = path\n else:\n- url = \"%s://%s:%s%s\" % (scheme, host, port, path)\n+ host, port = server\n+ default_port = {\"http\": 80, \"https\": 443, \"ws\": 80, \"wss\": 443}[scheme]\n+ if port == default_port:\n+ url = \"%s://%s%s\" % (scheme, host, path)\n+ else:\n+ url = \"%s://%s:%s%s\" % (scheme, host, port, path)\n \n if query_string:\n url += \"?\" + unquote(query_string.decode())\n@@ -85,6 +89,9 @@\n def __str__(self):\n return self._url\n \n+ def __repr__(self):\n+ return \"%s(%s)\" % (self.__class__.__name__, repr(self._url))\n+\n \n # Type annotations for valid `__init__` values to QueryParams and Headers.\n StrPairs = typing.Sequence[typing.Tuple[str, str]]\n", "issue": "scope[\"server\"] can be None\nFrom https://asgi.readthedocs.io/en/latest/specs/www.html#connection-scope\r\n\r\n> server: A two-item iterable of [host, port], where host is the listening address for this server as a unicode string, and port is the integer listening port. Optional, defaults to None.\r\n\r\nhttps://github.com/encode/starlette/blob/master/starlette/datastructures.py#L11 doesn't handle that option, it assumes scope[\"server\"] is always a two-pair\r\n\r\n\n", "before_files": [{"content": "import typing\nfrom starlette.types import Scope\nfrom urllib.parse import parse_qsl, unquote, urlparse, ParseResult\n\n\nclass URL:\n def __init__(self, url: str = \"\", scope: Scope = None) -> None:\n if scope is not None:\n assert not url, 'Cannot set both \"url\" and \"scope\".'\n scheme = scope[\"scheme\"]\n host, port = scope[\"server\"]\n path = scope.get(\"root_path\", \"\") + scope[\"path\"]\n query_string = scope[\"query_string\"]\n\n default_port = {\"http\": 80, \"https\": 443, \"ws\": 80, \"wss\": 443}[scheme]\n if port == default_port:\n url = \"%s://%s%s\" % (scheme, host, path)\n else:\n url = \"%s://%s:%s%s\" % (scheme, host, port, path)\n\n if query_string:\n url += \"?\" + unquote(query_string.decode())\n self._url = url\n\n @property\n def components(self) -> ParseResult:\n if not hasattr(self, \"_components\"):\n self._components = urlparse(self._url)\n return self._components\n\n @property\n def scheme(self) -> str:\n return self.components.scheme\n\n @property\n def netloc(self) -> str:\n return self.components.netloc\n\n @property\n def path(self) -> str:\n return self.components.path\n\n @property\n def params(self) -> str:\n return self.components.params\n\n @property\n def query(self) -> str:\n return self.components.query\n\n @property\n def fragment(self) -> str:\n return self.components.fragment\n\n @property\n def username(self) -> typing.Union[None, str]:\n return self.components.username\n\n @property\n def password(self) -> typing.Union[None, str]:\n return self.components.password\n\n @property\n def hostname(self) -> typing.Union[None, str]:\n return self.components.hostname\n\n @property\n def port(self) -> typing.Optional[int]:\n return self.components.port\n\n def replace(self, **kwargs: typing.Any) -> \"URL\":\n if \"hostname\" in kwargs or \"port\" in kwargs:\n hostname = kwargs.pop(\"hostname\", self.hostname)\n port = kwargs.pop(\"port\", self.port)\n if port is None:\n kwargs[\"netloc\"] = hostname\n else:\n kwargs[\"netloc\"] = \"%s:%d\" % (hostname, port)\n components = self.components._replace(**kwargs)\n return URL(components.geturl())\n\n def __eq__(self, other):\n return str(self) == str(other)\n\n def __str__(self):\n return self._url\n\n\n# Type annotations for valid `__init__` values to QueryParams and Headers.\nStrPairs = typing.Sequence[typing.Tuple[str, str]]\nBytesPairs = typing.List[typing.Tuple[bytes, bytes]]\nStrDict = typing.Mapping[str, str]\n\n\nclass QueryParams(StrDict):\n \"\"\"\n An immutable multidict.\n \"\"\"\n\n def __init__(\n self, value: typing.Union[str, typing.Union[StrDict, StrPairs]] = None\n ) -> None:\n if value is None:\n value = []\n elif isinstance(value, str):\n value = parse_qsl(value)\n\n if hasattr(value, \"items\"):\n items = list(typing.cast(StrDict, value).items())\n else:\n items = list(typing.cast(StrPairs, value))\n self._dict = {k: v for k, v in reversed(items)}\n self._list = items\n\n def getlist(self, key: typing.Any) -> typing.List[str]:\n return [item_value for item_key, item_value in self._list if item_key == key]\n\n def keys(self) -> typing.List[str]: # type: ignore\n return [key for key, value in self._list]\n\n def values(self) -> typing.List[str]: # type: ignore\n return [value for key, value in self._list]\n\n def items(self) -> StrPairs: # type: ignore\n return list(self._list)\n\n def get(self, key: typing.Any, default: typing.Any = None) -> typing.Any:\n if key in self._dict:\n return self._dict[key]\n else:\n return default\n\n def __getitem__(self, key: typing.Any) -> str:\n return self._dict[key]\n\n def __contains__(self, key: typing.Any) -> bool:\n return key in self._dict\n\n def __iter__(self) -> typing.Iterator[typing.Any]:\n return iter(self._list)\n\n def __len__(self) -> int:\n return len(self._list)\n\n def __eq__(self, other: typing.Any) -> bool:\n if not isinstance(other, QueryParams):\n other = QueryParams(other)\n return sorted(self._list) == sorted(other._list)\n\n def __repr__(self) -> str:\n return \"QueryParams(%s)\" % repr(self._list)\n\n\nclass Headers(typing.Mapping[str, str]):\n \"\"\"\n An immutable, case-insensitive multidict.\n \"\"\"\n\n def __init__(self, raw_headers: typing.Optional[BytesPairs] = None) -> None:\n if raw_headers is None:\n self._list = [] # type: BytesPairs\n else:\n for header_key, header_value in raw_headers:\n assert isinstance(header_key, bytes)\n assert isinstance(header_value, bytes)\n assert header_key == header_key.lower()\n self._list = raw_headers\n\n def keys(self) -> typing.List[str]: # type: ignore\n return [key.decode(\"latin-1\") for key, value in self._list]\n\n def values(self) -> typing.List[str]: # type: ignore\n return [value.decode(\"latin-1\") for key, value in self._list]\n\n def items(self) -> StrPairs: # type: ignore\n return [\n (key.decode(\"latin-1\"), value.decode(\"latin-1\"))\n for key, value in self._list\n ]\n\n def get(self, key: str, default: typing.Any = None) -> typing.Any:\n try:\n return self[key]\n except KeyError:\n return default\n\n def getlist(self, key: str) -> typing.List[str]:\n get_header_key = key.lower().encode(\"latin-1\")\n return [\n item_value.decode(\"latin-1\")\n for item_key, item_value in self._list\n if item_key == get_header_key\n ]\n\n def mutablecopy(self) -> \"MutableHeaders\":\n return MutableHeaders(self._list[:])\n\n def __getitem__(self, key: str) -> str:\n get_header_key = key.lower().encode(\"latin-1\")\n for header_key, header_value in self._list:\n if header_key == get_header_key:\n return header_value.decode(\"latin-1\")\n raise KeyError(key)\n\n def __contains__(self, key: typing.Any) -> bool:\n get_header_key = key.lower().encode(\"latin-1\")\n for header_key, header_value in self._list:\n if header_key == get_header_key:\n return True\n return False\n\n def __iter__(self) -> typing.Iterator[typing.Any]:\n return iter(self.items())\n\n def __len__(self) -> int:\n return len(self._list)\n\n def __eq__(self, other: typing.Any) -> bool:\n if not isinstance(other, Headers):\n return False\n return sorted(self._list) == sorted(other._list)\n\n def __repr__(self) -> str:\n return \"%s(%s)\" % (self.__class__.__name__, repr(self.items()))\n\n\nclass MutableHeaders(Headers):\n def __setitem__(self, key: str, value: str) -> None:\n \"\"\"\n Set the header `key` to `value`, removing any duplicate entries.\n Retains insertion order.\n \"\"\"\n set_key = key.lower().encode(\"latin-1\")\n set_value = value.encode(\"latin-1\")\n\n found_indexes = []\n for idx, (item_key, item_value) in enumerate(self._list):\n if item_key == set_key:\n found_indexes.append(idx)\n\n for idx in reversed(found_indexes[1:]):\n del self._list[idx]\n\n if found_indexes:\n idx = found_indexes[0]\n self._list[idx] = (set_key, set_value)\n else:\n self._list.append((set_key, set_value))\n\n def __delitem__(self, key: str) -> None:\n \"\"\"\n Remove the header `key`.\n \"\"\"\n del_key = key.lower().encode(\"latin-1\")\n\n pop_indexes = []\n for idx, (item_key, item_value) in enumerate(self._list):\n if item_key == del_key:\n pop_indexes.append(idx)\n\n for idx in reversed(pop_indexes):\n del (self._list[idx])\n\n def setdefault(self, key: str, value: str) -> str:\n \"\"\"\n If the header `key` does not exist, then set it to `value`.\n Returns the header value.\n \"\"\"\n set_key = key.lower().encode(\"latin-1\")\n set_value = value.encode(\"latin-1\")\n\n for idx, (item_key, item_value) in enumerate(self._list):\n if item_key == set_key:\n return item_value.decode(\"latin-1\")\n self._list.append((set_key, set_value))\n return value\n\n def update(self, other: dict):\n for key, val in other.items():\n self[key] = val\n", "path": "starlette/datastructures.py"}], "after_files": [{"content": "import typing\nfrom starlette.types import Scope\nfrom urllib.parse import parse_qsl, unquote, urlparse, ParseResult\n\n\nclass URL:\n def __init__(self, url: str = \"\", scope: Scope = None) -> None:\n if scope is not None:\n assert not url, 'Cannot set both \"url\" and \"scope\".'\n scheme = scope.get(\"scheme\", \"http\")\n server = scope.get(\"server\", None)\n path = scope.get(\"root_path\", \"\") + scope[\"path\"]\n query_string = scope[\"query_string\"]\n\n if server is None:\n url = path\n else:\n host, port = server\n default_port = {\"http\": 80, \"https\": 443, \"ws\": 80, \"wss\": 443}[scheme]\n if port == default_port:\n url = \"%s://%s%s\" % (scheme, host, path)\n else:\n url = \"%s://%s:%s%s\" % (scheme, host, port, path)\n\n if query_string:\n url += \"?\" + unquote(query_string.decode())\n self._url = url\n\n @property\n def components(self) -> ParseResult:\n if not hasattr(self, \"_components\"):\n self._components = urlparse(self._url)\n return self._components\n\n @property\n def scheme(self) -> str:\n return self.components.scheme\n\n @property\n def netloc(self) -> str:\n return self.components.netloc\n\n @property\n def path(self) -> str:\n return self.components.path\n\n @property\n def params(self) -> str:\n return self.components.params\n\n @property\n def query(self) -> str:\n return self.components.query\n\n @property\n def fragment(self) -> str:\n return self.components.fragment\n\n @property\n def username(self) -> typing.Union[None, str]:\n return self.components.username\n\n @property\n def password(self) -> typing.Union[None, str]:\n return self.components.password\n\n @property\n def hostname(self) -> typing.Union[None, str]:\n return self.components.hostname\n\n @property\n def port(self) -> typing.Optional[int]:\n return self.components.port\n\n def replace(self, **kwargs: typing.Any) -> \"URL\":\n if \"hostname\" in kwargs or \"port\" in kwargs:\n hostname = kwargs.pop(\"hostname\", self.hostname)\n port = kwargs.pop(\"port\", self.port)\n if port is None:\n kwargs[\"netloc\"] = hostname\n else:\n kwargs[\"netloc\"] = \"%s:%d\" % (hostname, port)\n components = self.components._replace(**kwargs)\n return URL(components.geturl())\n\n def __eq__(self, other):\n return str(self) == str(other)\n\n def __str__(self):\n return self._url\n\n def __repr__(self):\n return \"%s(%s)\" % (self.__class__.__name__, repr(self._url))\n\n\n# Type annotations for valid `__init__` values to QueryParams and Headers.\nStrPairs = typing.Sequence[typing.Tuple[str, str]]\nBytesPairs = typing.List[typing.Tuple[bytes, bytes]]\nStrDict = typing.Mapping[str, str]\n\n\nclass QueryParams(StrDict):\n \"\"\"\n An immutable multidict.\n \"\"\"\n\n def __init__(\n self, value: typing.Union[str, typing.Union[StrDict, StrPairs]] = None\n ) -> None:\n if value is None:\n value = []\n elif isinstance(value, str):\n value = parse_qsl(value)\n\n if hasattr(value, \"items\"):\n items = list(typing.cast(StrDict, value).items())\n else:\n items = list(typing.cast(StrPairs, value))\n self._dict = {k: v for k, v in reversed(items)}\n self._list = items\n\n def getlist(self, key: typing.Any) -> typing.List[str]:\n return [item_value for item_key, item_value in self._list if item_key == key]\n\n def keys(self) -> typing.List[str]: # type: ignore\n return [key for key, value in self._list]\n\n def values(self) -> typing.List[str]: # type: ignore\n return [value for key, value in self._list]\n\n def items(self) -> StrPairs: # type: ignore\n return list(self._list)\n\n def get(self, key: typing.Any, default: typing.Any = None) -> typing.Any:\n if key in self._dict:\n return self._dict[key]\n else:\n return default\n\n def __getitem__(self, key: typing.Any) -> str:\n return self._dict[key]\n\n def __contains__(self, key: typing.Any) -> bool:\n return key in self._dict\n\n def __iter__(self) -> typing.Iterator[typing.Any]:\n return iter(self._list)\n\n def __len__(self) -> int:\n return len(self._list)\n\n def __eq__(self, other: typing.Any) -> bool:\n if not isinstance(other, QueryParams):\n other = QueryParams(other)\n return sorted(self._list) == sorted(other._list)\n\n def __repr__(self) -> str:\n return \"QueryParams(%s)\" % repr(self._list)\n\n\nclass Headers(typing.Mapping[str, str]):\n \"\"\"\n An immutable, case-insensitive multidict.\n \"\"\"\n\n def __init__(self, raw_headers: typing.Optional[BytesPairs] = None) -> None:\n if raw_headers is None:\n self._list = [] # type: BytesPairs\n else:\n for header_key, header_value in raw_headers:\n assert isinstance(header_key, bytes)\n assert isinstance(header_value, bytes)\n assert header_key == header_key.lower()\n self._list = raw_headers\n\n def keys(self) -> typing.List[str]: # type: ignore\n return [key.decode(\"latin-1\") for key, value in self._list]\n\n def values(self) -> typing.List[str]: # type: ignore\n return [value.decode(\"latin-1\") for key, value in self._list]\n\n def items(self) -> StrPairs: # type: ignore\n return [\n (key.decode(\"latin-1\"), value.decode(\"latin-1\"))\n for key, value in self._list\n ]\n\n def get(self, key: str, default: typing.Any = None) -> typing.Any:\n try:\n return self[key]\n except KeyError:\n return default\n\n def getlist(self, key: str) -> typing.List[str]:\n get_header_key = key.lower().encode(\"latin-1\")\n return [\n item_value.decode(\"latin-1\")\n for item_key, item_value in self._list\n if item_key == get_header_key\n ]\n\n def mutablecopy(self) -> \"MutableHeaders\":\n return MutableHeaders(self._list[:])\n\n def __getitem__(self, key: str) -> str:\n get_header_key = key.lower().encode(\"latin-1\")\n for header_key, header_value in self._list:\n if header_key == get_header_key:\n return header_value.decode(\"latin-1\")\n raise KeyError(key)\n\n def __contains__(self, key: typing.Any) -> bool:\n get_header_key = key.lower().encode(\"latin-1\")\n for header_key, header_value in self._list:\n if header_key == get_header_key:\n return True\n return False\n\n def __iter__(self) -> typing.Iterator[typing.Any]:\n return iter(self.items())\n\n def __len__(self) -> int:\n return len(self._list)\n\n def __eq__(self, other: typing.Any) -> bool:\n if not isinstance(other, Headers):\n return False\n return sorted(self._list) == sorted(other._list)\n\n def __repr__(self) -> str:\n return \"%s(%s)\" % (self.__class__.__name__, repr(self.items()))\n\n\nclass MutableHeaders(Headers):\n def __setitem__(self, key: str, value: str) -> None:\n \"\"\"\n Set the header `key` to `value`, removing any duplicate entries.\n Retains insertion order.\n \"\"\"\n set_key = key.lower().encode(\"latin-1\")\n set_value = value.encode(\"latin-1\")\n\n found_indexes = []\n for idx, (item_key, item_value) in enumerate(self._list):\n if item_key == set_key:\n found_indexes.append(idx)\n\n for idx in reversed(found_indexes[1:]):\n del self._list[idx]\n\n if found_indexes:\n idx = found_indexes[0]\n self._list[idx] = (set_key, set_value)\n else:\n self._list.append((set_key, set_value))\n\n def __delitem__(self, key: str) -> None:\n \"\"\"\n Remove the header `key`.\n \"\"\"\n del_key = key.lower().encode(\"latin-1\")\n\n pop_indexes = []\n for idx, (item_key, item_value) in enumerate(self._list):\n if item_key == del_key:\n pop_indexes.append(idx)\n\n for idx in reversed(pop_indexes):\n del (self._list[idx])\n\n def setdefault(self, key: str, value: str) -> str:\n \"\"\"\n If the header `key` does not exist, then set it to `value`.\n Returns the header value.\n \"\"\"\n set_key = key.lower().encode(\"latin-1\")\n set_value = value.encode(\"latin-1\")\n\n for idx, (item_key, item_value) in enumerate(self._list):\n if item_key == set_key:\n return item_value.decode(\"latin-1\")\n self._list.append((set_key, set_value))\n return value\n\n def update(self, other: dict):\n for key, val in other.items():\n self[key] = val\n", "path": "starlette/datastructures.py"}]}
| 3,243 | 458 |
gh_patches_debug_14965
|
rasdani/github-patches
|
git_diff
|
huggingface__transformers-7248
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[example/glue] run_glue compute metrics fail for bart like models
This PR #7126 introduced multiple predictions for trainer. This breaks the `compute_metrics_fn` of `run_glue.py` for `bart` like models which return multiple predictions.
For `BartForSequenceClassfication` `p.predictions` is a `tuple`, so following code fails
https://github.com/huggingface/transformers/blob/1d90d0f386af2af52017d51c421e71a51ec94de0/examples/text-classification/run_glue.py#L154
@sgugger
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/text-classification/run_glue.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
3 # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 """ Finetuning the library models for sequence classification on GLUE."""
17
18
19 import dataclasses
20 import logging
21 import os
22 import sys
23 from dataclasses import dataclass, field
24 from typing import Callable, Dict, Optional
25
26 import numpy as np
27
28 from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer, EvalPrediction, GlueDataset
29 from transformers import GlueDataTrainingArguments as DataTrainingArguments
30 from transformers import (
31 HfArgumentParser,
32 Trainer,
33 TrainingArguments,
34 glue_compute_metrics,
35 glue_output_modes,
36 glue_tasks_num_labels,
37 set_seed,
38 )
39
40
41 logger = logging.getLogger(__name__)
42
43
44 @dataclass
45 class ModelArguments:
46 """
47 Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
48 """
49
50 model_name_or_path: str = field(
51 metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
52 )
53 config_name: Optional[str] = field(
54 default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
55 )
56 tokenizer_name: Optional[str] = field(
57 default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
58 )
59 cache_dir: Optional[str] = field(
60 default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
61 )
62
63
64 def main():
65 # See all possible arguments in src/transformers/training_args.py
66 # or by passing the --help flag to this script.
67 # We now keep distinct sets of args, for a cleaner separation of concerns.
68
69 parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
70
71 if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
72 # If we pass only one argument to the script and it's the path to a json file,
73 # let's parse it to get our arguments.
74 model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
75 else:
76 model_args, data_args, training_args = parser.parse_args_into_dataclasses()
77
78 if (
79 os.path.exists(training_args.output_dir)
80 and os.listdir(training_args.output_dir)
81 and training_args.do_train
82 and not training_args.overwrite_output_dir
83 ):
84 raise ValueError(
85 f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
86 )
87
88 # Setup logging
89 logging.basicConfig(
90 format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
91 datefmt="%m/%d/%Y %H:%M:%S",
92 level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,
93 )
94 logger.warning(
95 "Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
96 training_args.local_rank,
97 training_args.device,
98 training_args.n_gpu,
99 bool(training_args.local_rank != -1),
100 training_args.fp16,
101 )
102 logger.info("Training/evaluation parameters %s", training_args)
103
104 # Set seed
105 set_seed(training_args.seed)
106
107 try:
108 num_labels = glue_tasks_num_labels[data_args.task_name]
109 output_mode = glue_output_modes[data_args.task_name]
110 except KeyError:
111 raise ValueError("Task not found: %s" % (data_args.task_name))
112
113 # Load pretrained model and tokenizer
114 #
115 # Distributed training:
116 # The .from_pretrained methods guarantee that only one local process can concurrently
117 # download model & vocab.
118
119 config = AutoConfig.from_pretrained(
120 model_args.config_name if model_args.config_name else model_args.model_name_or_path,
121 num_labels=num_labels,
122 finetuning_task=data_args.task_name,
123 cache_dir=model_args.cache_dir,
124 )
125 tokenizer = AutoTokenizer.from_pretrained(
126 model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
127 cache_dir=model_args.cache_dir,
128 )
129 model = AutoModelForSequenceClassification.from_pretrained(
130 model_args.model_name_or_path,
131 from_tf=bool(".ckpt" in model_args.model_name_or_path),
132 config=config,
133 cache_dir=model_args.cache_dir,
134 )
135
136 # Get datasets
137 train_dataset = (
138 GlueDataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
139 )
140 eval_dataset = (
141 GlueDataset(data_args, tokenizer=tokenizer, mode="dev", cache_dir=model_args.cache_dir)
142 if training_args.do_eval
143 else None
144 )
145 test_dataset = (
146 GlueDataset(data_args, tokenizer=tokenizer, mode="test", cache_dir=model_args.cache_dir)
147 if training_args.do_predict
148 else None
149 )
150
151 def build_compute_metrics_fn(task_name: str) -> Callable[[EvalPrediction], Dict]:
152 def compute_metrics_fn(p: EvalPrediction):
153 if output_mode == "classification":
154 preds = np.argmax(p.predictions, axis=1)
155 elif output_mode == "regression":
156 preds = np.squeeze(p.predictions)
157 return glue_compute_metrics(task_name, preds, p.label_ids)
158
159 return compute_metrics_fn
160
161 # Initialize our Trainer
162 trainer = Trainer(
163 model=model,
164 args=training_args,
165 train_dataset=train_dataset,
166 eval_dataset=eval_dataset,
167 compute_metrics=build_compute_metrics_fn(data_args.task_name),
168 )
169
170 # Training
171 if training_args.do_train:
172 trainer.train(
173 model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
174 )
175 trainer.save_model()
176 # For convenience, we also re-save the tokenizer to the same directory,
177 # so that you can share your model easily on huggingface.co/models =)
178 if trainer.is_world_master():
179 tokenizer.save_pretrained(training_args.output_dir)
180
181 # Evaluation
182 eval_results = {}
183 if training_args.do_eval:
184 logger.info("*** Evaluate ***")
185
186 # Loop to handle MNLI double evaluation (matched, mis-matched)
187 eval_datasets = [eval_dataset]
188 if data_args.task_name == "mnli":
189 mnli_mm_data_args = dataclasses.replace(data_args, task_name="mnli-mm")
190 eval_datasets.append(
191 GlueDataset(mnli_mm_data_args, tokenizer=tokenizer, mode="dev", cache_dir=model_args.cache_dir)
192 )
193
194 for eval_dataset in eval_datasets:
195 trainer.compute_metrics = build_compute_metrics_fn(eval_dataset.args.task_name)
196 eval_result = trainer.evaluate(eval_dataset=eval_dataset)
197
198 output_eval_file = os.path.join(
199 training_args.output_dir, f"eval_results_{eval_dataset.args.task_name}.txt"
200 )
201 if trainer.is_world_master():
202 with open(output_eval_file, "w") as writer:
203 logger.info("***** Eval results {} *****".format(eval_dataset.args.task_name))
204 for key, value in eval_result.items():
205 logger.info(" %s = %s", key, value)
206 writer.write("%s = %s\n" % (key, value))
207
208 eval_results.update(eval_result)
209
210 if training_args.do_predict:
211 logging.info("*** Test ***")
212 test_datasets = [test_dataset]
213 if data_args.task_name == "mnli":
214 mnli_mm_data_args = dataclasses.replace(data_args, task_name="mnli-mm")
215 test_datasets.append(
216 GlueDataset(mnli_mm_data_args, tokenizer=tokenizer, mode="test", cache_dir=model_args.cache_dir)
217 )
218
219 for test_dataset in test_datasets:
220 predictions = trainer.predict(test_dataset=test_dataset).predictions
221 if output_mode == "classification":
222 predictions = np.argmax(predictions, axis=1)
223
224 output_test_file = os.path.join(
225 training_args.output_dir, f"test_results_{test_dataset.args.task_name}.txt"
226 )
227 if trainer.is_world_master():
228 with open(output_test_file, "w") as writer:
229 logger.info("***** Test results {} *****".format(test_dataset.args.task_name))
230 writer.write("index\tprediction\n")
231 for index, item in enumerate(predictions):
232 if output_mode == "regression":
233 writer.write("%d\t%3.3f\n" % (index, item))
234 else:
235 item = test_dataset.get_labels()[item]
236 writer.write("%d\t%s\n" % (index, item))
237 return eval_results
238
239
240 def _mp_fn(index):
241 # For xla_spawn (TPUs)
242 main()
243
244
245 if __name__ == "__main__":
246 main()
247
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/text-classification/run_glue.py b/examples/text-classification/run_glue.py
--- a/examples/text-classification/run_glue.py
+++ b/examples/text-classification/run_glue.py
@@ -150,10 +150,11 @@
def build_compute_metrics_fn(task_name: str) -> Callable[[EvalPrediction], Dict]:
def compute_metrics_fn(p: EvalPrediction):
+ preds = p.predictions[0] if type(p.predictions) == tuple else p.predictions
if output_mode == "classification":
- preds = np.argmax(p.predictions, axis=1)
- elif output_mode == "regression":
- preds = np.squeeze(p.predictions)
+ preds = np.argmax(preds, axis=1)
+ else: # regression
+ preds = np.squeeze(preds)
return glue_compute_metrics(task_name, preds, p.label_ids)
return compute_metrics_fn
|
{"golden_diff": "diff --git a/examples/text-classification/run_glue.py b/examples/text-classification/run_glue.py\n--- a/examples/text-classification/run_glue.py\n+++ b/examples/text-classification/run_glue.py\n@@ -150,10 +150,11 @@\n \n def build_compute_metrics_fn(task_name: str) -> Callable[[EvalPrediction], Dict]:\n def compute_metrics_fn(p: EvalPrediction):\n+ preds = p.predictions[0] if type(p.predictions) == tuple else p.predictions\n if output_mode == \"classification\":\n- preds = np.argmax(p.predictions, axis=1)\n- elif output_mode == \"regression\":\n- preds = np.squeeze(p.predictions)\n+ preds = np.argmax(preds, axis=1)\n+ else: # regression\n+ preds = np.squeeze(preds)\n return glue_compute_metrics(task_name, preds, p.label_ids)\n \n return compute_metrics_fn\n", "issue": "[example/glue] run_glue compute metrics fail for bart like models\nThis PR #7126 introduced multiple predictions for trainer. This breaks the `compute_metrics_fn` of `run_glue.py` for `bart` like models which return multiple predictions.\r\n\r\nFor `BartForSequenceClassfication` `p.predictions` is a `tuple`, so following code fails\r\nhttps://github.com/huggingface/transformers/blob/1d90d0f386af2af52017d51c421e71a51ec94de0/examples/text-classification/run_glue.py#L154\r\n\r\n@sgugger \r\n\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.\n# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\" Finetuning the library models for sequence classification on GLUE.\"\"\"\n\n\nimport dataclasses\nimport logging\nimport os\nimport sys\nfrom dataclasses import dataclass, field\nfrom typing import Callable, Dict, Optional\n\nimport numpy as np\n\nfrom transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer, EvalPrediction, GlueDataset\nfrom transformers import GlueDataTrainingArguments as DataTrainingArguments\nfrom transformers import (\n HfArgumentParser,\n Trainer,\n TrainingArguments,\n glue_compute_metrics,\n glue_output_modes,\n glue_tasks_num_labels,\n set_seed,\n)\n\n\nlogger = logging.getLogger(__name__)\n\n\n@dataclass\nclass ModelArguments:\n \"\"\"\n Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.\n \"\"\"\n\n model_name_or_path: str = field(\n metadata={\"help\": \"Path to pretrained model or model identifier from huggingface.co/models\"}\n )\n config_name: Optional[str] = field(\n default=None, metadata={\"help\": \"Pretrained config name or path if not the same as model_name\"}\n )\n tokenizer_name: Optional[str] = field(\n default=None, metadata={\"help\": \"Pretrained tokenizer name or path if not the same as model_name\"}\n )\n cache_dir: Optional[str] = field(\n default=None, metadata={\"help\": \"Where do you want to store the pretrained models downloaded from s3\"}\n )\n\n\ndef main():\n # See all possible arguments in src/transformers/training_args.py\n # or by passing the --help flag to this script.\n # We now keep distinct sets of args, for a cleaner separation of concerns.\n\n parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))\n\n if len(sys.argv) == 2 and sys.argv[1].endswith(\".json\"):\n # If we pass only one argument to the script and it's the path to a json file,\n # let's parse it to get our arguments.\n model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))\n else:\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\n\n if (\n os.path.exists(training_args.output_dir)\n and os.listdir(training_args.output_dir)\n and training_args.do_train\n and not training_args.overwrite_output_dir\n ):\n raise ValueError(\n f\"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome.\"\n )\n\n # Setup logging\n logging.basicConfig(\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\n datefmt=\"%m/%d/%Y %H:%M:%S\",\n level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,\n )\n logger.warning(\n \"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s\",\n training_args.local_rank,\n training_args.device,\n training_args.n_gpu,\n bool(training_args.local_rank != -1),\n training_args.fp16,\n )\n logger.info(\"Training/evaluation parameters %s\", training_args)\n\n # Set seed\n set_seed(training_args.seed)\n\n try:\n num_labels = glue_tasks_num_labels[data_args.task_name]\n output_mode = glue_output_modes[data_args.task_name]\n except KeyError:\n raise ValueError(\"Task not found: %s\" % (data_args.task_name))\n\n # Load pretrained model and tokenizer\n #\n # Distributed training:\n # The .from_pretrained methods guarantee that only one local process can concurrently\n # download model & vocab.\n\n config = AutoConfig.from_pretrained(\n model_args.config_name if model_args.config_name else model_args.model_name_or_path,\n num_labels=num_labels,\n finetuning_task=data_args.task_name,\n cache_dir=model_args.cache_dir,\n )\n tokenizer = AutoTokenizer.from_pretrained(\n model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\n cache_dir=model_args.cache_dir,\n )\n model = AutoModelForSequenceClassification.from_pretrained(\n model_args.model_name_or_path,\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\n config=config,\n cache_dir=model_args.cache_dir,\n )\n\n # Get datasets\n train_dataset = (\n GlueDataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None\n )\n eval_dataset = (\n GlueDataset(data_args, tokenizer=tokenizer, mode=\"dev\", cache_dir=model_args.cache_dir)\n if training_args.do_eval\n else None\n )\n test_dataset = (\n GlueDataset(data_args, tokenizer=tokenizer, mode=\"test\", cache_dir=model_args.cache_dir)\n if training_args.do_predict\n else None\n )\n\n def build_compute_metrics_fn(task_name: str) -> Callable[[EvalPrediction], Dict]:\n def compute_metrics_fn(p: EvalPrediction):\n if output_mode == \"classification\":\n preds = np.argmax(p.predictions, axis=1)\n elif output_mode == \"regression\":\n preds = np.squeeze(p.predictions)\n return glue_compute_metrics(task_name, preds, p.label_ids)\n\n return compute_metrics_fn\n\n # Initialize our Trainer\n trainer = Trainer(\n model=model,\n args=training_args,\n train_dataset=train_dataset,\n eval_dataset=eval_dataset,\n compute_metrics=build_compute_metrics_fn(data_args.task_name),\n )\n\n # Training\n if training_args.do_train:\n trainer.train(\n model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None\n )\n trainer.save_model()\n # For convenience, we also re-save the tokenizer to the same directory,\n # so that you can share your model easily on huggingface.co/models =)\n if trainer.is_world_master():\n tokenizer.save_pretrained(training_args.output_dir)\n\n # Evaluation\n eval_results = {}\n if training_args.do_eval:\n logger.info(\"*** Evaluate ***\")\n\n # Loop to handle MNLI double evaluation (matched, mis-matched)\n eval_datasets = [eval_dataset]\n if data_args.task_name == \"mnli\":\n mnli_mm_data_args = dataclasses.replace(data_args, task_name=\"mnli-mm\")\n eval_datasets.append(\n GlueDataset(mnli_mm_data_args, tokenizer=tokenizer, mode=\"dev\", cache_dir=model_args.cache_dir)\n )\n\n for eval_dataset in eval_datasets:\n trainer.compute_metrics = build_compute_metrics_fn(eval_dataset.args.task_name)\n eval_result = trainer.evaluate(eval_dataset=eval_dataset)\n\n output_eval_file = os.path.join(\n training_args.output_dir, f\"eval_results_{eval_dataset.args.task_name}.txt\"\n )\n if trainer.is_world_master():\n with open(output_eval_file, \"w\") as writer:\n logger.info(\"***** Eval results {} *****\".format(eval_dataset.args.task_name))\n for key, value in eval_result.items():\n logger.info(\" %s = %s\", key, value)\n writer.write(\"%s = %s\\n\" % (key, value))\n\n eval_results.update(eval_result)\n\n if training_args.do_predict:\n logging.info(\"*** Test ***\")\n test_datasets = [test_dataset]\n if data_args.task_name == \"mnli\":\n mnli_mm_data_args = dataclasses.replace(data_args, task_name=\"mnli-mm\")\n test_datasets.append(\n GlueDataset(mnli_mm_data_args, tokenizer=tokenizer, mode=\"test\", cache_dir=model_args.cache_dir)\n )\n\n for test_dataset in test_datasets:\n predictions = trainer.predict(test_dataset=test_dataset).predictions\n if output_mode == \"classification\":\n predictions = np.argmax(predictions, axis=1)\n\n output_test_file = os.path.join(\n training_args.output_dir, f\"test_results_{test_dataset.args.task_name}.txt\"\n )\n if trainer.is_world_master():\n with open(output_test_file, \"w\") as writer:\n logger.info(\"***** Test results {} *****\".format(test_dataset.args.task_name))\n writer.write(\"index\\tprediction\\n\")\n for index, item in enumerate(predictions):\n if output_mode == \"regression\":\n writer.write(\"%d\\t%3.3f\\n\" % (index, item))\n else:\n item = test_dataset.get_labels()[item]\n writer.write(\"%d\\t%s\\n\" % (index, item))\n return eval_results\n\n\ndef _mp_fn(index):\n # For xla_spawn (TPUs)\n main()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "examples/text-classification/run_glue.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.\n# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\" Finetuning the library models for sequence classification on GLUE.\"\"\"\n\n\nimport dataclasses\nimport logging\nimport os\nimport sys\nfrom dataclasses import dataclass, field\nfrom typing import Callable, Dict, Optional\n\nimport numpy as np\n\nfrom transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer, EvalPrediction, GlueDataset\nfrom transformers import GlueDataTrainingArguments as DataTrainingArguments\nfrom transformers import (\n HfArgumentParser,\n Trainer,\n TrainingArguments,\n glue_compute_metrics,\n glue_output_modes,\n glue_tasks_num_labels,\n set_seed,\n)\n\n\nlogger = logging.getLogger(__name__)\n\n\n@dataclass\nclass ModelArguments:\n \"\"\"\n Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.\n \"\"\"\n\n model_name_or_path: str = field(\n metadata={\"help\": \"Path to pretrained model or model identifier from huggingface.co/models\"}\n )\n config_name: Optional[str] = field(\n default=None, metadata={\"help\": \"Pretrained config name or path if not the same as model_name\"}\n )\n tokenizer_name: Optional[str] = field(\n default=None, metadata={\"help\": \"Pretrained tokenizer name or path if not the same as model_name\"}\n )\n cache_dir: Optional[str] = field(\n default=None, metadata={\"help\": \"Where do you want to store the pretrained models downloaded from s3\"}\n )\n\n\ndef main():\n # See all possible arguments in src/transformers/training_args.py\n # or by passing the --help flag to this script.\n # We now keep distinct sets of args, for a cleaner separation of concerns.\n\n parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))\n\n if len(sys.argv) == 2 and sys.argv[1].endswith(\".json\"):\n # If we pass only one argument to the script and it's the path to a json file,\n # let's parse it to get our arguments.\n model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))\n else:\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\n\n if (\n os.path.exists(training_args.output_dir)\n and os.listdir(training_args.output_dir)\n and training_args.do_train\n and not training_args.overwrite_output_dir\n ):\n raise ValueError(\n f\"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome.\"\n )\n\n # Setup logging\n logging.basicConfig(\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\n datefmt=\"%m/%d/%Y %H:%M:%S\",\n level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,\n )\n logger.warning(\n \"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s\",\n training_args.local_rank,\n training_args.device,\n training_args.n_gpu,\n bool(training_args.local_rank != -1),\n training_args.fp16,\n )\n logger.info(\"Training/evaluation parameters %s\", training_args)\n\n # Set seed\n set_seed(training_args.seed)\n\n try:\n num_labels = glue_tasks_num_labels[data_args.task_name]\n output_mode = glue_output_modes[data_args.task_name]\n except KeyError:\n raise ValueError(\"Task not found: %s\" % (data_args.task_name))\n\n # Load pretrained model and tokenizer\n #\n # Distributed training:\n # The .from_pretrained methods guarantee that only one local process can concurrently\n # download model & vocab.\n\n config = AutoConfig.from_pretrained(\n model_args.config_name if model_args.config_name else model_args.model_name_or_path,\n num_labels=num_labels,\n finetuning_task=data_args.task_name,\n cache_dir=model_args.cache_dir,\n )\n tokenizer = AutoTokenizer.from_pretrained(\n model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\n cache_dir=model_args.cache_dir,\n )\n model = AutoModelForSequenceClassification.from_pretrained(\n model_args.model_name_or_path,\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\n config=config,\n cache_dir=model_args.cache_dir,\n )\n\n # Get datasets\n train_dataset = (\n GlueDataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None\n )\n eval_dataset = (\n GlueDataset(data_args, tokenizer=tokenizer, mode=\"dev\", cache_dir=model_args.cache_dir)\n if training_args.do_eval\n else None\n )\n test_dataset = (\n GlueDataset(data_args, tokenizer=tokenizer, mode=\"test\", cache_dir=model_args.cache_dir)\n if training_args.do_predict\n else None\n )\n\n def build_compute_metrics_fn(task_name: str) -> Callable[[EvalPrediction], Dict]:\n def compute_metrics_fn(p: EvalPrediction):\n preds = p.predictions[0] if type(p.predictions) == tuple else p.predictions\n if output_mode == \"classification\":\n preds = np.argmax(preds, axis=1)\n else: # regression\n preds = np.squeeze(preds)\n return glue_compute_metrics(task_name, preds, p.label_ids)\n\n return compute_metrics_fn\n\n # Initialize our Trainer\n trainer = Trainer(\n model=model,\n args=training_args,\n train_dataset=train_dataset,\n eval_dataset=eval_dataset,\n compute_metrics=build_compute_metrics_fn(data_args.task_name),\n )\n\n # Training\n if training_args.do_train:\n trainer.train(\n model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None\n )\n trainer.save_model()\n # For convenience, we also re-save the tokenizer to the same directory,\n # so that you can share your model easily on huggingface.co/models =)\n if trainer.is_world_master():\n tokenizer.save_pretrained(training_args.output_dir)\n\n # Evaluation\n eval_results = {}\n if training_args.do_eval:\n logger.info(\"*** Evaluate ***\")\n\n # Loop to handle MNLI double evaluation (matched, mis-matched)\n eval_datasets = [eval_dataset]\n if data_args.task_name == \"mnli\":\n mnli_mm_data_args = dataclasses.replace(data_args, task_name=\"mnli-mm\")\n eval_datasets.append(\n GlueDataset(mnli_mm_data_args, tokenizer=tokenizer, mode=\"dev\", cache_dir=model_args.cache_dir)\n )\n\n for eval_dataset in eval_datasets:\n trainer.compute_metrics = build_compute_metrics_fn(eval_dataset.args.task_name)\n eval_result = trainer.evaluate(eval_dataset=eval_dataset)\n\n output_eval_file = os.path.join(\n training_args.output_dir, f\"eval_results_{eval_dataset.args.task_name}.txt\"\n )\n if trainer.is_world_master():\n with open(output_eval_file, \"w\") as writer:\n logger.info(\"***** Eval results {} *****\".format(eval_dataset.args.task_name))\n for key, value in eval_result.items():\n logger.info(\" %s = %s\", key, value)\n writer.write(\"%s = %s\\n\" % (key, value))\n\n eval_results.update(eval_result)\n\n if training_args.do_predict:\n logging.info(\"*** Test ***\")\n test_datasets = [test_dataset]\n if data_args.task_name == \"mnli\":\n mnli_mm_data_args = dataclasses.replace(data_args, task_name=\"mnli-mm\")\n test_datasets.append(\n GlueDataset(mnli_mm_data_args, tokenizer=tokenizer, mode=\"test\", cache_dir=model_args.cache_dir)\n )\n\n for test_dataset in test_datasets:\n predictions = trainer.predict(test_dataset=test_dataset).predictions\n if output_mode == \"classification\":\n predictions = np.argmax(predictions, axis=1)\n\n output_test_file = os.path.join(\n training_args.output_dir, f\"test_results_{test_dataset.args.task_name}.txt\"\n )\n if trainer.is_world_master():\n with open(output_test_file, \"w\") as writer:\n logger.info(\"***** Test results {} *****\".format(test_dataset.args.task_name))\n writer.write(\"index\\tprediction\\n\")\n for index, item in enumerate(predictions):\n if output_mode == \"regression\":\n writer.write(\"%d\\t%3.3f\\n\" % (index, item))\n else:\n item = test_dataset.get_labels()[item]\n writer.write(\"%d\\t%s\\n\" % (index, item))\n return eval_results\n\n\ndef _mp_fn(index):\n # For xla_spawn (TPUs)\n main()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "examples/text-classification/run_glue.py"}]}
| 3,095 | 206 |
gh_patches_debug_12965
|
rasdani/github-patches
|
git_diff
|
getredash__redash-5812
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Timing out when connecting to a MSSQL database on non-default port using ODBC driver
I had to use "Microsoft SQL Server (ODBC)" data source because the "Microsoft SQL Server" one does not currently support using SSL. However, when trying to connect to my server on a port different than 1433, connection timed out.
After a bit of digging, I found this:
> Microsoft's ODBC drivers for SQL Server do not use a PORT= parameter. The port number, if any, is appended to the server name/IP with a comma
source: https://stackoverflow.com/a/50051708/1277401
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/query_runner/mssql_odbc.py`
Content:
```
1 import logging
2 import sys
3 import uuid
4
5 from redash.query_runner import *
6 from redash.query_runner.mssql import types_map
7 from redash.utils import json_dumps, json_loads
8
9 logger = logging.getLogger(__name__)
10
11 try:
12 import pyodbc
13
14 enabled = True
15 except ImportError:
16 enabled = False
17
18
19 class SQLServerODBC(BaseSQLQueryRunner):
20 should_annotate_query = False
21 noop_query = "SELECT 1"
22
23 @classmethod
24 def configuration_schema(cls):
25 return {
26 "type": "object",
27 "properties": {
28 "server": {"type": "string"},
29 "port": {"type": "number", "default": 1433},
30 "user": {"type": "string"},
31 "password": {"type": "string"},
32 "db": {"type": "string", "title": "Database Name"},
33 "charset": {
34 "type": "string",
35 "default": "UTF-8",
36 "title": "Character Set",
37 },
38 "use_ssl": {"type": "boolean", "title": "Use SSL", "default": False,},
39 "verify_ssl": {
40 "type": "boolean",
41 "title": "Verify SSL certificate",
42 "default": True,
43 },
44 },
45 "order": [
46 "server",
47 "port",
48 "user",
49 "password",
50 "db",
51 "charset",
52 "use_ssl",
53 "verify_ssl",
54 ],
55 "required": ["server", "user", "password", "db"],
56 "secret": ["password"],
57 "extra_options": ["verify_ssl", "use_ssl"],
58 }
59
60 @classmethod
61 def enabled(cls):
62 return enabled
63
64 @classmethod
65 def name(cls):
66 return "Microsoft SQL Server (ODBC)"
67
68 @classmethod
69 def type(cls):
70 return "mssql_odbc"
71
72 @property
73 def supports_auto_limit(self):
74 return False
75
76 def _get_tables(self, schema):
77 query = """
78 SELECT table_schema, table_name, column_name
79 FROM INFORMATION_SCHEMA.COLUMNS
80 WHERE table_schema NOT IN ('guest','INFORMATION_SCHEMA','sys','db_owner','db_accessadmin'
81 ,'db_securityadmin','db_ddladmin','db_backupoperator','db_datareader'
82 ,'db_datawriter','db_denydatareader','db_denydatawriter'
83 );
84 """
85
86 results, error = self.run_query(query, None)
87
88 if error is not None:
89 self._handle_run_query_error(error)
90
91 results = json_loads(results)
92
93 for row in results["rows"]:
94 if row["table_schema"] != self.configuration["db"]:
95 table_name = "{}.{}".format(row["table_schema"], row["table_name"])
96 else:
97 table_name = row["table_name"]
98
99 if table_name not in schema:
100 schema[table_name] = {"name": table_name, "columns": []}
101
102 schema[table_name]["columns"].append(row["column_name"])
103
104 return list(schema.values())
105
106 def run_query(self, query, user):
107 connection = None
108
109 try:
110 server = self.configuration.get("server")
111 user = self.configuration.get("user", "")
112 password = self.configuration.get("password", "")
113 db = self.configuration["db"]
114 port = self.configuration.get("port", 1433)
115 charset = self.configuration.get("charset", "UTF-8")
116
117 connection_string_fmt = "DRIVER={{ODBC Driver 17 for SQL Server}};PORT={};SERVER={};DATABASE={};UID={};PWD={}"
118 connection_string = connection_string_fmt.format(
119 port, server, db, user, password
120 )
121
122 if self.configuration.get("use_ssl", False):
123 connection_string += ";Encrypt=YES"
124
125 if not self.configuration.get("verify_ssl"):
126 connection_string += ";TrustServerCertificate=YES"
127
128 connection = pyodbc.connect(connection_string)
129 cursor = connection.cursor()
130 logger.debug("SQLServerODBC running query: %s", query)
131 cursor.execute(query)
132 data = cursor.fetchall()
133
134 if cursor.description is not None:
135 columns = self.fetch_columns(
136 [(i[0], types_map.get(i[1], None)) for i in cursor.description]
137 )
138 rows = [
139 dict(zip((column["name"] for column in columns), row))
140 for row in data
141 ]
142
143 data = {"columns": columns, "rows": rows}
144 json_data = json_dumps(data)
145 error = None
146 else:
147 error = "No data was returned."
148 json_data = None
149
150 cursor.close()
151 except pyodbc.Error as e:
152 try:
153 # Query errors are at `args[1]`
154 error = e.args[1]
155 except IndexError:
156 # Connection errors are `args[0][1]`
157 error = e.args[0][1]
158 json_data = None
159 except (KeyboardInterrupt, JobTimeoutException):
160 connection.cancel()
161 raise
162 finally:
163 if connection:
164 connection.close()
165
166 return json_data, error
167
168
169 register(SQLServerODBC)
170
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/redash/query_runner/mssql_odbc.py b/redash/query_runner/mssql_odbc.py
--- a/redash/query_runner/mssql_odbc.py
+++ b/redash/query_runner/mssql_odbc.py
@@ -114,9 +114,9 @@
port = self.configuration.get("port", 1433)
charset = self.configuration.get("charset", "UTF-8")
- connection_string_fmt = "DRIVER={{ODBC Driver 17 for SQL Server}};PORT={};SERVER={};DATABASE={};UID={};PWD={}"
+ connection_string_fmt = "DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={},{};DATABASE={};UID={};PWD={}"
connection_string = connection_string_fmt.format(
- port, server, db, user, password
+ server, port, db, user, password
)
if self.configuration.get("use_ssl", False):
|
{"golden_diff": "diff --git a/redash/query_runner/mssql_odbc.py b/redash/query_runner/mssql_odbc.py\n--- a/redash/query_runner/mssql_odbc.py\n+++ b/redash/query_runner/mssql_odbc.py\n@@ -114,9 +114,9 @@\n port = self.configuration.get(\"port\", 1433)\n charset = self.configuration.get(\"charset\", \"UTF-8\")\n \n- connection_string_fmt = \"DRIVER={{ODBC Driver 17 for SQL Server}};PORT={};SERVER={};DATABASE={};UID={};PWD={}\"\n+ connection_string_fmt = \"DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={},{};DATABASE={};UID={};PWD={}\"\n connection_string = connection_string_fmt.format(\n- port, server, db, user, password\n+ server, port, db, user, password\n )\n \n if self.configuration.get(\"use_ssl\", False):\n", "issue": "Timing out when connecting to a MSSQL database on non-default port using ODBC driver\nI had to use \"Microsoft SQL Server (ODBC)\" data source because the \"Microsoft SQL Server\" one does not currently support using SSL. However, when trying to connect to my server on a port different than 1433, connection timed out.\r\n\r\nAfter a bit of digging, I found this:\r\n> Microsoft's ODBC drivers for SQL Server do not use a PORT= parameter. The port number, if any, is appended to the server name/IP with a comma\r\n\r\nsource: https://stackoverflow.com/a/50051708/1277401\n", "before_files": [{"content": "import logging\nimport sys\nimport uuid\n\nfrom redash.query_runner import *\nfrom redash.query_runner.mssql import types_map\nfrom redash.utils import json_dumps, json_loads\n\nlogger = logging.getLogger(__name__)\n\ntry:\n import pyodbc\n\n enabled = True\nexcept ImportError:\n enabled = False\n\n\nclass SQLServerODBC(BaseSQLQueryRunner):\n should_annotate_query = False\n noop_query = \"SELECT 1\"\n\n @classmethod\n def configuration_schema(cls):\n return {\n \"type\": \"object\",\n \"properties\": {\n \"server\": {\"type\": \"string\"},\n \"port\": {\"type\": \"number\", \"default\": 1433},\n \"user\": {\"type\": \"string\"},\n \"password\": {\"type\": \"string\"},\n \"db\": {\"type\": \"string\", \"title\": \"Database Name\"},\n \"charset\": {\n \"type\": \"string\",\n \"default\": \"UTF-8\",\n \"title\": \"Character Set\",\n },\n \"use_ssl\": {\"type\": \"boolean\", \"title\": \"Use SSL\", \"default\": False,},\n \"verify_ssl\": {\n \"type\": \"boolean\",\n \"title\": \"Verify SSL certificate\",\n \"default\": True,\n },\n },\n \"order\": [\n \"server\",\n \"port\",\n \"user\",\n \"password\",\n \"db\",\n \"charset\",\n \"use_ssl\",\n \"verify_ssl\",\n ],\n \"required\": [\"server\", \"user\", \"password\", \"db\"],\n \"secret\": [\"password\"],\n \"extra_options\": [\"verify_ssl\", \"use_ssl\"],\n }\n\n @classmethod\n def enabled(cls):\n return enabled\n\n @classmethod\n def name(cls):\n return \"Microsoft SQL Server (ODBC)\"\n\n @classmethod\n def type(cls):\n return \"mssql_odbc\"\n\n @property\n def supports_auto_limit(self):\n return False\n\n def _get_tables(self, schema):\n query = \"\"\"\n SELECT table_schema, table_name, column_name\n FROM INFORMATION_SCHEMA.COLUMNS\n WHERE table_schema NOT IN ('guest','INFORMATION_SCHEMA','sys','db_owner','db_accessadmin'\n ,'db_securityadmin','db_ddladmin','db_backupoperator','db_datareader'\n ,'db_datawriter','db_denydatareader','db_denydatawriter'\n );\n \"\"\"\n\n results, error = self.run_query(query, None)\n\n if error is not None:\n self._handle_run_query_error(error)\n\n results = json_loads(results)\n\n for row in results[\"rows\"]:\n if row[\"table_schema\"] != self.configuration[\"db\"]:\n table_name = \"{}.{}\".format(row[\"table_schema\"], row[\"table_name\"])\n else:\n table_name = row[\"table_name\"]\n\n if table_name not in schema:\n schema[table_name] = {\"name\": table_name, \"columns\": []}\n\n schema[table_name][\"columns\"].append(row[\"column_name\"])\n\n return list(schema.values())\n\n def run_query(self, query, user):\n connection = None\n\n try:\n server = self.configuration.get(\"server\")\n user = self.configuration.get(\"user\", \"\")\n password = self.configuration.get(\"password\", \"\")\n db = self.configuration[\"db\"]\n port = self.configuration.get(\"port\", 1433)\n charset = self.configuration.get(\"charset\", \"UTF-8\")\n\n connection_string_fmt = \"DRIVER={{ODBC Driver 17 for SQL Server}};PORT={};SERVER={};DATABASE={};UID={};PWD={}\"\n connection_string = connection_string_fmt.format(\n port, server, db, user, password\n )\n\n if self.configuration.get(\"use_ssl\", False):\n connection_string += \";Encrypt=YES\"\n\n if not self.configuration.get(\"verify_ssl\"):\n connection_string += \";TrustServerCertificate=YES\"\n\n connection = pyodbc.connect(connection_string)\n cursor = connection.cursor()\n logger.debug(\"SQLServerODBC running query: %s\", query)\n cursor.execute(query)\n data = cursor.fetchall()\n\n if cursor.description is not None:\n columns = self.fetch_columns(\n [(i[0], types_map.get(i[1], None)) for i in cursor.description]\n )\n rows = [\n dict(zip((column[\"name\"] for column in columns), row))\n for row in data\n ]\n\n data = {\"columns\": columns, \"rows\": rows}\n json_data = json_dumps(data)\n error = None\n else:\n error = \"No data was returned.\"\n json_data = None\n\n cursor.close()\n except pyodbc.Error as e:\n try:\n # Query errors are at `args[1]`\n error = e.args[1]\n except IndexError:\n # Connection errors are `args[0][1]`\n error = e.args[0][1]\n json_data = None\n except (KeyboardInterrupt, JobTimeoutException):\n connection.cancel()\n raise\n finally:\n if connection:\n connection.close()\n\n return json_data, error\n\n\nregister(SQLServerODBC)\n", "path": "redash/query_runner/mssql_odbc.py"}], "after_files": [{"content": "import logging\nimport sys\nimport uuid\n\nfrom redash.query_runner import *\nfrom redash.query_runner.mssql import types_map\nfrom redash.utils import json_dumps, json_loads\n\nlogger = logging.getLogger(__name__)\n\ntry:\n import pyodbc\n\n enabled = True\nexcept ImportError:\n enabled = False\n\n\nclass SQLServerODBC(BaseSQLQueryRunner):\n should_annotate_query = False\n noop_query = \"SELECT 1\"\n\n @classmethod\n def configuration_schema(cls):\n return {\n \"type\": \"object\",\n \"properties\": {\n \"server\": {\"type\": \"string\"},\n \"port\": {\"type\": \"number\", \"default\": 1433},\n \"user\": {\"type\": \"string\"},\n \"password\": {\"type\": \"string\"},\n \"db\": {\"type\": \"string\", \"title\": \"Database Name\"},\n \"charset\": {\n \"type\": \"string\",\n \"default\": \"UTF-8\",\n \"title\": \"Character Set\",\n },\n \"use_ssl\": {\"type\": \"boolean\", \"title\": \"Use SSL\", \"default\": False,},\n \"verify_ssl\": {\n \"type\": \"boolean\",\n \"title\": \"Verify SSL certificate\",\n \"default\": True,\n },\n },\n \"order\": [\n \"server\",\n \"port\",\n \"user\",\n \"password\",\n \"db\",\n \"charset\",\n \"use_ssl\",\n \"verify_ssl\",\n ],\n \"required\": [\"server\", \"user\", \"password\", \"db\"],\n \"secret\": [\"password\"],\n \"extra_options\": [\"verify_ssl\", \"use_ssl\"],\n }\n\n @classmethod\n def enabled(cls):\n return enabled\n\n @classmethod\n def name(cls):\n return \"Microsoft SQL Server (ODBC)\"\n\n @classmethod\n def type(cls):\n return \"mssql_odbc\"\n\n @property\n def supports_auto_limit(self):\n return False\n\n def _get_tables(self, schema):\n query = \"\"\"\n SELECT table_schema, table_name, column_name\n FROM INFORMATION_SCHEMA.COLUMNS\n WHERE table_schema NOT IN ('guest','INFORMATION_SCHEMA','sys','db_owner','db_accessadmin'\n ,'db_securityadmin','db_ddladmin','db_backupoperator','db_datareader'\n ,'db_datawriter','db_denydatareader','db_denydatawriter'\n );\n \"\"\"\n\n results, error = self.run_query(query, None)\n\n if error is not None:\n self._handle_run_query_error(error)\n\n results = json_loads(results)\n\n for row in results[\"rows\"]:\n if row[\"table_schema\"] != self.configuration[\"db\"]:\n table_name = \"{}.{}\".format(row[\"table_schema\"], row[\"table_name\"])\n else:\n table_name = row[\"table_name\"]\n\n if table_name not in schema:\n schema[table_name] = {\"name\": table_name, \"columns\": []}\n\n schema[table_name][\"columns\"].append(row[\"column_name\"])\n\n return list(schema.values())\n\n def run_query(self, query, user):\n connection = None\n\n try:\n server = self.configuration.get(\"server\")\n user = self.configuration.get(\"user\", \"\")\n password = self.configuration.get(\"password\", \"\")\n db = self.configuration[\"db\"]\n port = self.configuration.get(\"port\", 1433)\n charset = self.configuration.get(\"charset\", \"UTF-8\")\n\n connection_string_fmt = \"DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={},{};DATABASE={};UID={};PWD={}\"\n connection_string = connection_string_fmt.format(\n server, port, db, user, password\n )\n\n if self.configuration.get(\"use_ssl\", False):\n connection_string += \";Encrypt=YES\"\n\n if not self.configuration.get(\"verify_ssl\"):\n connection_string += \";TrustServerCertificate=YES\"\n\n connection = pyodbc.connect(connection_string)\n cursor = connection.cursor()\n logger.debug(\"SQLServerODBC running query: %s\", query)\n cursor.execute(query)\n data = cursor.fetchall()\n\n if cursor.description is not None:\n columns = self.fetch_columns(\n [(i[0], types_map.get(i[1], None)) for i in cursor.description]\n )\n rows = [\n dict(zip((column[\"name\"] for column in columns), row))\n for row in data\n ]\n\n data = {\"columns\": columns, \"rows\": rows}\n json_data = json_dumps(data)\n error = None\n else:\n error = \"No data was returned.\"\n json_data = None\n\n cursor.close()\n except pyodbc.Error as e:\n try:\n # Query errors are at `args[1]`\n error = e.args[1]\n except IndexError:\n # Connection errors are `args[0][1]`\n error = e.args[0][1]\n json_data = None\n except (KeyboardInterrupt, JobTimeoutException):\n connection.cancel()\n raise\n finally:\n if connection:\n connection.close()\n\n return json_data, error\n\n\nregister(SQLServerODBC)\n", "path": "redash/query_runner/mssql_odbc.py"}]}
| 1,920 | 209 |
gh_patches_debug_5342
|
rasdani/github-patches
|
git_diff
|
googleapis__google-api-python-client-1185
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
HttpError error_details isn't populated unless __repr__ is called first
I was trying to handle an HttpError by looking at the contents of the `error_details` attribute. I noticed the attribute is a null-string unless I trigger the `__repr__` function first. For example, this does not work as the error_details is always `""`. I made a simple test that demonstrates the error:
```
from googleapiclient import discovery, errors
client = discovery.build(
"discovery", "v1"
)
req = client.apis().getRest(api='fake_api', version='v1')
try:
resp = req.execute()
except errors.HttpError as err:
print(f'Error details are currently: "{err.error_details}"')
print(f'Exception string representation is: "{err}"')
print(f'Error details are currently: "{err.error_details}"')
```
The output of the above code:
```
Error details are currently: ""
Exception string representation is: "<HttpError 404 when requesting https://www.googleapis.com/discovery/v1/apis/fake_api/v1/rest?alt=json returned "Requested entity was not found.". Details: "Requested entity was not found.">"
Error details are currently: "Requested entity was not found."
```
I tested and the behavior is the same on both `google-api-python-client-1.12.8` and `google-api-python-client-2.0.2`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `googleapiclient/errors.py`
Content:
```
1 # Copyright 2014 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Errors for the library.
16
17 All exceptions defined by the library
18 should be defined in this file.
19 """
20 from __future__ import absolute_import
21
22 __author__ = "[email protected] (Joe Gregorio)"
23
24 import json
25
26 from googleapiclient import _helpers as util
27
28
29 class Error(Exception):
30 """Base error for this module."""
31
32 pass
33
34
35 class HttpError(Error):
36 """HTTP data was invalid or unexpected."""
37
38 @util.positional(3)
39 def __init__(self, resp, content, uri=None):
40 self.resp = resp
41 if not isinstance(content, bytes):
42 raise TypeError("HTTP content should be bytes")
43 self.content = content
44 self.uri = uri
45 self.error_details = ""
46
47 def _get_reason(self):
48 """Calculate the reason for the error from the response content."""
49 reason = self.resp.reason
50 try:
51 try:
52 data = json.loads(self.content.decode("utf-8"))
53 except json.JSONDecodeError:
54 # In case it is not json
55 data = self.content.decode("utf-8")
56 if isinstance(data, dict):
57 reason = data["error"]["message"]
58 error_detail_keyword = next((kw for kw in ["detail", "details", "message"] if kw in data["error"]), "")
59 if error_detail_keyword:
60 self.error_details = data["error"][error_detail_keyword]
61 elif isinstance(data, list) and len(data) > 0:
62 first_error = data[0]
63 reason = first_error["error"]["message"]
64 if "details" in first_error["error"]:
65 self.error_details = first_error["error"]["details"]
66 else:
67 self.error_details = data
68 except (ValueError, KeyError, TypeError):
69 pass
70 if reason is None:
71 reason = ""
72 return reason
73
74 def __repr__(self):
75 reason = self._get_reason()
76 if self.error_details:
77 return '<HttpError %s when requesting %s returned "%s". Details: "%s">' % (
78 self.resp.status,
79 self.uri,
80 reason.strip(),
81 self.error_details,
82 )
83 elif self.uri:
84 return '<HttpError %s when requesting %s returned "%s">' % (
85 self.resp.status,
86 self.uri,
87 self._get_reason().strip(),
88 )
89 else:
90 return '<HttpError %s "%s">' % (self.resp.status, self._get_reason())
91
92 __str__ = __repr__
93
94
95 class InvalidJsonError(Error):
96 """The JSON returned could not be parsed."""
97
98 pass
99
100
101 class UnknownFileType(Error):
102 """File type unknown or unexpected."""
103
104 pass
105
106
107 class UnknownLinkType(Error):
108 """Link type unknown or unexpected."""
109
110 pass
111
112
113 class UnknownApiNameOrVersion(Error):
114 """No API with that name and version exists."""
115
116 pass
117
118
119 class UnacceptableMimeTypeError(Error):
120 """That is an unacceptable mimetype for this operation."""
121
122 pass
123
124
125 class MediaUploadSizeError(Error):
126 """Media is larger than the method can accept."""
127
128 pass
129
130
131 class ResumableUploadError(HttpError):
132 """Error occurred during resumable upload."""
133
134 pass
135
136
137 class InvalidChunkSizeError(Error):
138 """The given chunksize is not valid."""
139
140 pass
141
142
143 class InvalidNotificationError(Error):
144 """The channel Notification is invalid."""
145
146 pass
147
148
149 class BatchError(HttpError):
150 """Error occurred during batch operations."""
151
152 @util.positional(2)
153 def __init__(self, reason, resp=None, content=None):
154 self.resp = resp
155 self.content = content
156 self.reason = reason
157
158 def __repr__(self):
159 if getattr(self.resp, "status", None) is None:
160 return '<BatchError "%s">' % (self.reason)
161 else:
162 return '<BatchError %s "%s">' % (self.resp.status, self.reason)
163
164 __str__ = __repr__
165
166
167 class UnexpectedMethodError(Error):
168 """Exception raised by RequestMockBuilder on unexpected calls."""
169
170 @util.positional(1)
171 def __init__(self, methodId=None):
172 """Constructor for an UnexpectedMethodError."""
173 super(UnexpectedMethodError, self).__init__(
174 "Received unexpected call %s" % methodId
175 )
176
177
178 class UnexpectedBodyError(Error):
179 """Exception raised by RequestMockBuilder on unexpected bodies."""
180
181 def __init__(self, expected, provided):
182 """Constructor for an UnexpectedMethodError."""
183 super(UnexpectedBodyError, self).__init__(
184 "Expected: [%s] - Provided: [%s]" % (expected, provided)
185 )
186
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/googleapiclient/errors.py b/googleapiclient/errors.py
--- a/googleapiclient/errors.py
+++ b/googleapiclient/errors.py
@@ -43,6 +43,12 @@
self.content = content
self.uri = uri
self.error_details = ""
+ self._get_reason()
+
+ @property
+ def status_code(self):
+ """Return the HTTP status code from the response content."""
+ return self.resp.status
def _get_reason(self):
"""Calculate the reason for the error from the response content."""
|
{"golden_diff": "diff --git a/googleapiclient/errors.py b/googleapiclient/errors.py\n--- a/googleapiclient/errors.py\n+++ b/googleapiclient/errors.py\n@@ -43,6 +43,12 @@\n self.content = content\n self.uri = uri\n self.error_details = \"\"\n+ self._get_reason()\n+\n+ @property\n+ def status_code(self):\n+ \"\"\"Return the HTTP status code from the response content.\"\"\"\n+ return self.resp.status\n \n def _get_reason(self):\n \"\"\"Calculate the reason for the error from the response content.\"\"\"\n", "issue": "HttpError error_details isn't populated unless __repr__ is called first\nI was trying to handle an HttpError by looking at the contents of the `error_details` attribute. I noticed the attribute is a null-string unless I trigger the `__repr__` function first. For example, this does not work as the error_details is always `\"\"`. I made a simple test that demonstrates the error:\r\n\r\n```\r\nfrom googleapiclient import discovery, errors\r\n\r\nclient = discovery.build(\r\n \"discovery\", \"v1\"\r\n)\r\n\r\nreq = client.apis().getRest(api='fake_api', version='v1')\r\n\r\ntry:\r\n resp = req.execute()\r\nexcept errors.HttpError as err:\r\n print(f'Error details are currently: \"{err.error_details}\"')\r\n print(f'Exception string representation is: \"{err}\"')\r\n print(f'Error details are currently: \"{err.error_details}\"')\r\n```\r\n\r\nThe output of the above code:\r\n\r\n```\r\nError details are currently: \"\"\r\nException string representation is: \"<HttpError 404 when requesting https://www.googleapis.com/discovery/v1/apis/fake_api/v1/rest?alt=json returned \"Requested entity was not found.\". Details: \"Requested entity was not found.\">\"\r\nError details are currently: \"Requested entity was not found.\"\r\n```\r\n\r\nI tested and the behavior is the same on both `google-api-python-client-1.12.8` and `google-api-python-client-2.0.2`\n", "before_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Errors for the library.\n\nAll exceptions defined by the library\nshould be defined in this file.\n\"\"\"\nfrom __future__ import absolute_import\n\n__author__ = \"[email protected] (Joe Gregorio)\"\n\nimport json\n\nfrom googleapiclient import _helpers as util\n\n\nclass Error(Exception):\n \"\"\"Base error for this module.\"\"\"\n\n pass\n\n\nclass HttpError(Error):\n \"\"\"HTTP data was invalid or unexpected.\"\"\"\n\n @util.positional(3)\n def __init__(self, resp, content, uri=None):\n self.resp = resp\n if not isinstance(content, bytes):\n raise TypeError(\"HTTP content should be bytes\")\n self.content = content\n self.uri = uri\n self.error_details = \"\"\n\n def _get_reason(self):\n \"\"\"Calculate the reason for the error from the response content.\"\"\"\n reason = self.resp.reason\n try:\n try:\n data = json.loads(self.content.decode(\"utf-8\"))\n except json.JSONDecodeError:\n # In case it is not json\n data = self.content.decode(\"utf-8\")\n if isinstance(data, dict):\n reason = data[\"error\"][\"message\"]\n error_detail_keyword = next((kw for kw in [\"detail\", \"details\", \"message\"] if kw in data[\"error\"]), \"\")\n if error_detail_keyword:\n self.error_details = data[\"error\"][error_detail_keyword]\n elif isinstance(data, list) and len(data) > 0:\n first_error = data[0]\n reason = first_error[\"error\"][\"message\"]\n if \"details\" in first_error[\"error\"]:\n self.error_details = first_error[\"error\"][\"details\"]\n else:\n self.error_details = data\n except (ValueError, KeyError, TypeError):\n pass\n if reason is None:\n reason = \"\"\n return reason\n\n def __repr__(self):\n reason = self._get_reason()\n if self.error_details:\n return '<HttpError %s when requesting %s returned \"%s\". Details: \"%s\">' % (\n self.resp.status,\n self.uri,\n reason.strip(),\n self.error_details,\n )\n elif self.uri:\n return '<HttpError %s when requesting %s returned \"%s\">' % (\n self.resp.status,\n self.uri,\n self._get_reason().strip(),\n )\n else:\n return '<HttpError %s \"%s\">' % (self.resp.status, self._get_reason())\n\n __str__ = __repr__\n\n\nclass InvalidJsonError(Error):\n \"\"\"The JSON returned could not be parsed.\"\"\"\n\n pass\n\n\nclass UnknownFileType(Error):\n \"\"\"File type unknown or unexpected.\"\"\"\n\n pass\n\n\nclass UnknownLinkType(Error):\n \"\"\"Link type unknown or unexpected.\"\"\"\n\n pass\n\n\nclass UnknownApiNameOrVersion(Error):\n \"\"\"No API with that name and version exists.\"\"\"\n\n pass\n\n\nclass UnacceptableMimeTypeError(Error):\n \"\"\"That is an unacceptable mimetype for this operation.\"\"\"\n\n pass\n\n\nclass MediaUploadSizeError(Error):\n \"\"\"Media is larger than the method can accept.\"\"\"\n\n pass\n\n\nclass ResumableUploadError(HttpError):\n \"\"\"Error occurred during resumable upload.\"\"\"\n\n pass\n\n\nclass InvalidChunkSizeError(Error):\n \"\"\"The given chunksize is not valid.\"\"\"\n\n pass\n\n\nclass InvalidNotificationError(Error):\n \"\"\"The channel Notification is invalid.\"\"\"\n\n pass\n\n\nclass BatchError(HttpError):\n \"\"\"Error occurred during batch operations.\"\"\"\n\n @util.positional(2)\n def __init__(self, reason, resp=None, content=None):\n self.resp = resp\n self.content = content\n self.reason = reason\n\n def __repr__(self):\n if getattr(self.resp, \"status\", None) is None:\n return '<BatchError \"%s\">' % (self.reason)\n else:\n return '<BatchError %s \"%s\">' % (self.resp.status, self.reason)\n\n __str__ = __repr__\n\n\nclass UnexpectedMethodError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected calls.\"\"\"\n\n @util.positional(1)\n def __init__(self, methodId=None):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedMethodError, self).__init__(\n \"Received unexpected call %s\" % methodId\n )\n\n\nclass UnexpectedBodyError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected bodies.\"\"\"\n\n def __init__(self, expected, provided):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedBodyError, self).__init__(\n \"Expected: [%s] - Provided: [%s]\" % (expected, provided)\n )\n", "path": "googleapiclient/errors.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Errors for the library.\n\nAll exceptions defined by the library\nshould be defined in this file.\n\"\"\"\nfrom __future__ import absolute_import\n\n__author__ = \"[email protected] (Joe Gregorio)\"\n\nimport json\n\nfrom googleapiclient import _helpers as util\n\n\nclass Error(Exception):\n \"\"\"Base error for this module.\"\"\"\n\n pass\n\n\nclass HttpError(Error):\n \"\"\"HTTP data was invalid or unexpected.\"\"\"\n\n @util.positional(3)\n def __init__(self, resp, content, uri=None):\n self.resp = resp\n if not isinstance(content, bytes):\n raise TypeError(\"HTTP content should be bytes\")\n self.content = content\n self.uri = uri\n self.error_details = \"\"\n self._get_reason()\n\n @property\n def status_code(self):\n \"\"\"Return the HTTP status code from the response content.\"\"\"\n return self.resp.status\n\n def _get_reason(self):\n \"\"\"Calculate the reason for the error from the response content.\"\"\"\n reason = self.resp.reason\n try:\n try:\n data = json.loads(self.content.decode(\"utf-8\"))\n except json.JSONDecodeError:\n # In case it is not json\n data = self.content.decode(\"utf-8\")\n if isinstance(data, dict):\n reason = data[\"error\"][\"message\"]\n error_detail_keyword = next((kw for kw in [\"detail\", \"details\", \"message\"] if kw in data[\"error\"]), \"\")\n if error_detail_keyword:\n self.error_details = data[\"error\"][error_detail_keyword]\n elif isinstance(data, list) and len(data) > 0:\n first_error = data[0]\n reason = first_error[\"error\"][\"message\"]\n if \"details\" in first_error[\"error\"]:\n self.error_details = first_error[\"error\"][\"details\"]\n else:\n self.error_details = data\n except (ValueError, KeyError, TypeError):\n pass\n if reason is None:\n reason = \"\"\n return reason\n\n def __repr__(self):\n reason = self._get_reason()\n if self.error_details:\n return '<HttpError %s when requesting %s returned \"%s\". Details: \"%s\">' % (\n self.resp.status,\n self.uri,\n reason.strip(),\n self.error_details,\n )\n elif self.uri:\n return '<HttpError %s when requesting %s returned \"%s\">' % (\n self.resp.status,\n self.uri,\n self._get_reason().strip(),\n )\n else:\n return '<HttpError %s \"%s\">' % (self.resp.status, self._get_reason())\n\n __str__ = __repr__\n\n\nclass InvalidJsonError(Error):\n \"\"\"The JSON returned could not be parsed.\"\"\"\n\n pass\n\n\nclass UnknownFileType(Error):\n \"\"\"File type unknown or unexpected.\"\"\"\n\n pass\n\n\nclass UnknownLinkType(Error):\n \"\"\"Link type unknown or unexpected.\"\"\"\n\n pass\n\n\nclass UnknownApiNameOrVersion(Error):\n \"\"\"No API with that name and version exists.\"\"\"\n\n pass\n\n\nclass UnacceptableMimeTypeError(Error):\n \"\"\"That is an unacceptable mimetype for this operation.\"\"\"\n\n pass\n\n\nclass MediaUploadSizeError(Error):\n \"\"\"Media is larger than the method can accept.\"\"\"\n\n pass\n\n\nclass ResumableUploadError(HttpError):\n \"\"\"Error occurred during resumable upload.\"\"\"\n\n pass\n\n\nclass InvalidChunkSizeError(Error):\n \"\"\"The given chunksize is not valid.\"\"\"\n\n pass\n\n\nclass InvalidNotificationError(Error):\n \"\"\"The channel Notification is invalid.\"\"\"\n\n pass\n\n\nclass BatchError(HttpError):\n \"\"\"Error occurred during batch operations.\"\"\"\n\n @util.positional(2)\n def __init__(self, reason, resp=None, content=None):\n self.resp = resp\n self.content = content\n self.reason = reason\n\n def __repr__(self):\n if getattr(self.resp, \"status\", None) is None:\n return '<BatchError \"%s\">' % (self.reason)\n else:\n return '<BatchError %s \"%s\">' % (self.resp.status, self.reason)\n\n __str__ = __repr__\n\n\nclass UnexpectedMethodError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected calls.\"\"\"\n\n @util.positional(1)\n def __init__(self, methodId=None):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedMethodError, self).__init__(\n \"Received unexpected call %s\" % methodId\n )\n\n\nclass UnexpectedBodyError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected bodies.\"\"\"\n\n def __init__(self, expected, provided):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedBodyError, self).__init__(\n \"Expected: [%s] - Provided: [%s]\" % (expected, provided)\n )\n", "path": "googleapiclient/errors.py"}]}
| 2,152 | 127 |
gh_patches_debug_29205
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-2119
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
update pylint to 2.11.0
The version of pylint in this repo is falling behind. I tried running it w/ pylint 2.11.0 and came across a bunch of warnings in the following categories:
- [x] #2130
- [x] #2125
- [x] #2126
- [x] #2132
- [x] #2134
I will submit separate PRs for each of those, before submitting a PR to bump pylint to 2.11.0.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-api/src/opentelemetry/util/_time.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from logging import getLogger
16 from sys import version_info
17
18 if version_info.minor < 7:
19 getLogger(__name__).warning( # pylint: disable=logging-not-lazy
20 "You are using Python 3.%s. This version does not support timestamps "
21 "with nanosecond precision and the OpenTelemetry SDK will use "
22 "millisecond precision instead. Please refer to PEP 564 for more "
23 "information. Please upgrade to Python 3.7 or newer to use nanosecond "
24 "precision." % version_info.minor
25 )
26 from time import time
27
28 def _time_ns() -> int:
29 return int(time() * 1e9)
30
31
32 else:
33 from time import time_ns
34
35 _time_ns = time_ns
36
```
Path: `opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright The OpenTelemetry Authors
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 import argparse
18 import logging
19 import subprocess
20 import sys
21
22 import pkg_resources
23
24 from opentelemetry.instrumentation.bootstrap_gen import (
25 default_instrumentations,
26 libraries,
27 )
28
29 logger = logging.getLogger(__file__)
30
31
32 def _syscall(func):
33 def wrapper(package=None):
34 try:
35 if package:
36 return func(package)
37 return func()
38 except subprocess.SubprocessError as exp:
39 cmd = getattr(exp, "cmd", None)
40 if cmd:
41 msg = f'Error calling system command "{" ".join(cmd)}"'
42 if package:
43 msg = f'{msg} for package "{package}"'
44 raise RuntimeError(msg)
45
46 return wrapper
47
48
49 @_syscall
50 def _sys_pip_install(package):
51 # explicit upgrade strategy to override potential pip config
52 subprocess.check_call(
53 [
54 sys.executable,
55 "-m",
56 "pip",
57 "install",
58 "-U",
59 "--upgrade-strategy",
60 "only-if-needed",
61 package,
62 ]
63 )
64
65
66 def _pip_check():
67 """Ensures none of the instrumentations have dependency conflicts.
68 Clean check reported as:
69 'No broken requirements found.'
70 Dependency conflicts are reported as:
71 'opentelemetry-instrumentation-flask 1.0.1 has requirement opentelemetry-sdk<2.0,>=1.0, but you have opentelemetry-sdk 0.5.'
72 To not be too restrictive, we'll only check for relevant packages.
73 """
74 check_pipe = subprocess.Popen(
75 [sys.executable, "-m", "pip", "check"], stdout=subprocess.PIPE
76 )
77 pip_check = check_pipe.communicate()[0].decode()
78 pip_check_lower = pip_check.lower()
79 for package_tup in libraries.values():
80 for package in package_tup:
81 if package.lower() in pip_check_lower:
82 raise RuntimeError(f"Dependency conflict found: {pip_check}")
83
84
85 def _is_installed(req):
86 if req in sys.modules:
87 return True
88
89 try:
90 pkg_resources.get_distribution(req)
91 except pkg_resources.DistributionNotFound:
92 return False
93 except pkg_resources.VersionConflict as exc:
94 logger.warning(
95 "instrumentation for package %s is available but version %s is installed. Skipping.",
96 exc.req,
97 exc.dist.as_requirement(), # pylint: disable=no-member
98 )
99 return False
100 return True
101
102
103 def _find_installed_libraries():
104 libs = default_instrumentations[:]
105 libs.extend(
106 [
107 v["instrumentation"]
108 for _, v in libraries.items()
109 if _is_installed(v["library"])
110 ]
111 )
112 return libs
113
114
115 def _run_requirements():
116 logger.setLevel(logging.ERROR)
117 print("\n".join(_find_installed_libraries()), end="")
118
119
120 def _run_install():
121 for lib in _find_installed_libraries():
122 _sys_pip_install(lib)
123 _pip_check()
124
125
126 def run() -> None:
127 action_install = "install"
128 action_requirements = "requirements"
129
130 parser = argparse.ArgumentParser(
131 description="""
132 opentelemetry-bootstrap detects installed libraries and automatically
133 installs the relevant instrumentation packages for them.
134 """
135 )
136 parser.add_argument(
137 "-a",
138 "--action",
139 choices=[action_install, action_requirements],
140 default=action_requirements,
141 help="""
142 install - uses pip to install the new requirements using to the
143 currently active site-package.
144 requirements - prints out the new requirements to stdout. Action can
145 be piped and appended to a requirements.txt file.
146 """,
147 )
148 args = parser.parse_args()
149
150 cmd = {
151 action_install: _run_install,
152 action_requirements: _run_requirements,
153 }[args.action]
154 cmd()
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opentelemetry-api/src/opentelemetry/util/_time.py b/opentelemetry-api/src/opentelemetry/util/_time.py
--- a/opentelemetry-api/src/opentelemetry/util/_time.py
+++ b/opentelemetry-api/src/opentelemetry/util/_time.py
@@ -17,7 +17,7 @@
if version_info.minor < 7:
getLogger(__name__).warning( # pylint: disable=logging-not-lazy
- "You are using Python 3.%s. This version does not support timestamps "
+ "You are using Python 3.%s. This version does not support timestamps " # pylint: disable=C0209
"with nanosecond precision and the OpenTelemetry SDK will use "
"millisecond precision instead. Please refer to PEP 564 for more "
"information. Please upgrade to Python 3.7 or newer to use nanosecond "
diff --git a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap.py b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap.py
--- a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap.py
+++ b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap.py
@@ -71,11 +71,11 @@
'opentelemetry-instrumentation-flask 1.0.1 has requirement opentelemetry-sdk<2.0,>=1.0, but you have opentelemetry-sdk 0.5.'
To not be too restrictive, we'll only check for relevant packages.
"""
- check_pipe = subprocess.Popen(
+ with subprocess.Popen(
[sys.executable, "-m", "pip", "check"], stdout=subprocess.PIPE
- )
- pip_check = check_pipe.communicate()[0].decode()
- pip_check_lower = pip_check.lower()
+ ) as check_pipe:
+ pip_check = check_pipe.communicate()[0].decode()
+ pip_check_lower = pip_check.lower()
for package_tup in libraries.values():
for package in package_tup:
if package.lower() in pip_check_lower:
|
{"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/util/_time.py b/opentelemetry-api/src/opentelemetry/util/_time.py\n--- a/opentelemetry-api/src/opentelemetry/util/_time.py\n+++ b/opentelemetry-api/src/opentelemetry/util/_time.py\n@@ -17,7 +17,7 @@\n \n if version_info.minor < 7:\n getLogger(__name__).warning( # pylint: disable=logging-not-lazy\n- \"You are using Python 3.%s. This version does not support timestamps \"\n+ \"You are using Python 3.%s. This version does not support timestamps \" # pylint: disable=C0209\n \"with nanosecond precision and the OpenTelemetry SDK will use \"\n \"millisecond precision instead. Please refer to PEP 564 for more \"\n \"information. Please upgrade to Python 3.7 or newer to use nanosecond \"\ndiff --git a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap.py b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap.py\n--- a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap.py\n+++ b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap.py\n@@ -71,11 +71,11 @@\n 'opentelemetry-instrumentation-flask 1.0.1 has requirement opentelemetry-sdk<2.0,>=1.0, but you have opentelemetry-sdk 0.5.'\n To not be too restrictive, we'll only check for relevant packages.\n \"\"\"\n- check_pipe = subprocess.Popen(\n+ with subprocess.Popen(\n [sys.executable, \"-m\", \"pip\", \"check\"], stdout=subprocess.PIPE\n- )\n- pip_check = check_pipe.communicate()[0].decode()\n- pip_check_lower = pip_check.lower()\n+ ) as check_pipe:\n+ pip_check = check_pipe.communicate()[0].decode()\n+ pip_check_lower = pip_check.lower()\n for package_tup in libraries.values():\n for package in package_tup:\n if package.lower() in pip_check_lower:\n", "issue": "update pylint to 2.11.0\nThe version of pylint in this repo is falling behind. I tried running it w/ pylint 2.11.0 and came across a bunch of warnings in the following categories:\r\n\r\n- [x] #2130\r\n- [x] #2125\r\n- [x] #2126\r\n- [x] #2132\r\n- [x] #2134\r\n\r\nI will submit separate PRs for each of those, before submitting a PR to bump pylint to 2.11.0.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom logging import getLogger\nfrom sys import version_info\n\nif version_info.minor < 7:\n getLogger(__name__).warning( # pylint: disable=logging-not-lazy\n \"You are using Python 3.%s. This version does not support timestamps \"\n \"with nanosecond precision and the OpenTelemetry SDK will use \"\n \"millisecond precision instead. Please refer to PEP 564 for more \"\n \"information. Please upgrade to Python 3.7 or newer to use nanosecond \"\n \"precision.\" % version_info.minor\n )\n from time import time\n\n def _time_ns() -> int:\n return int(time() * 1e9)\n\n\nelse:\n from time import time_ns\n\n _time_ns = time_ns\n", "path": "opentelemetry-api/src/opentelemetry/util/_time.py"}, {"content": "#!/usr/bin/env python3\n\n# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport logging\nimport subprocess\nimport sys\n\nimport pkg_resources\n\nfrom opentelemetry.instrumentation.bootstrap_gen import (\n default_instrumentations,\n libraries,\n)\n\nlogger = logging.getLogger(__file__)\n\n\ndef _syscall(func):\n def wrapper(package=None):\n try:\n if package:\n return func(package)\n return func()\n except subprocess.SubprocessError as exp:\n cmd = getattr(exp, \"cmd\", None)\n if cmd:\n msg = f'Error calling system command \"{\" \".join(cmd)}\"'\n if package:\n msg = f'{msg} for package \"{package}\"'\n raise RuntimeError(msg)\n\n return wrapper\n\n\n@_syscall\ndef _sys_pip_install(package):\n # explicit upgrade strategy to override potential pip config\n subprocess.check_call(\n [\n sys.executable,\n \"-m\",\n \"pip\",\n \"install\",\n \"-U\",\n \"--upgrade-strategy\",\n \"only-if-needed\",\n package,\n ]\n )\n\n\ndef _pip_check():\n \"\"\"Ensures none of the instrumentations have dependency conflicts.\n Clean check reported as:\n 'No broken requirements found.'\n Dependency conflicts are reported as:\n 'opentelemetry-instrumentation-flask 1.0.1 has requirement opentelemetry-sdk<2.0,>=1.0, but you have opentelemetry-sdk 0.5.'\n To not be too restrictive, we'll only check for relevant packages.\n \"\"\"\n check_pipe = subprocess.Popen(\n [sys.executable, \"-m\", \"pip\", \"check\"], stdout=subprocess.PIPE\n )\n pip_check = check_pipe.communicate()[0].decode()\n pip_check_lower = pip_check.lower()\n for package_tup in libraries.values():\n for package in package_tup:\n if package.lower() in pip_check_lower:\n raise RuntimeError(f\"Dependency conflict found: {pip_check}\")\n\n\ndef _is_installed(req):\n if req in sys.modules:\n return True\n\n try:\n pkg_resources.get_distribution(req)\n except pkg_resources.DistributionNotFound:\n return False\n except pkg_resources.VersionConflict as exc:\n logger.warning(\n \"instrumentation for package %s is available but version %s is installed. Skipping.\",\n exc.req,\n exc.dist.as_requirement(), # pylint: disable=no-member\n )\n return False\n return True\n\n\ndef _find_installed_libraries():\n libs = default_instrumentations[:]\n libs.extend(\n [\n v[\"instrumentation\"]\n for _, v in libraries.items()\n if _is_installed(v[\"library\"])\n ]\n )\n return libs\n\n\ndef _run_requirements():\n logger.setLevel(logging.ERROR)\n print(\"\\n\".join(_find_installed_libraries()), end=\"\")\n\n\ndef _run_install():\n for lib in _find_installed_libraries():\n _sys_pip_install(lib)\n _pip_check()\n\n\ndef run() -> None:\n action_install = \"install\"\n action_requirements = \"requirements\"\n\n parser = argparse.ArgumentParser(\n description=\"\"\"\n opentelemetry-bootstrap detects installed libraries and automatically\n installs the relevant instrumentation packages for them.\n \"\"\"\n )\n parser.add_argument(\n \"-a\",\n \"--action\",\n choices=[action_install, action_requirements],\n default=action_requirements,\n help=\"\"\"\n install - uses pip to install the new requirements using to the\n currently active site-package.\n requirements - prints out the new requirements to stdout. Action can\n be piped and appended to a requirements.txt file.\n \"\"\",\n )\n args = parser.parse_args()\n\n cmd = {\n action_install: _run_install,\n action_requirements: _run_requirements,\n }[args.action]\n cmd()\n", "path": "opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom logging import getLogger\nfrom sys import version_info\n\nif version_info.minor < 7:\n getLogger(__name__).warning( # pylint: disable=logging-not-lazy\n \"You are using Python 3.%s. This version does not support timestamps \" # pylint: disable=C0209\n \"with nanosecond precision and the OpenTelemetry SDK will use \"\n \"millisecond precision instead. Please refer to PEP 564 for more \"\n \"information. Please upgrade to Python 3.7 or newer to use nanosecond \"\n \"precision.\" % version_info.minor\n )\n from time import time\n\n def _time_ns() -> int:\n return int(time() * 1e9)\n\n\nelse:\n from time import time_ns\n\n _time_ns = time_ns\n", "path": "opentelemetry-api/src/opentelemetry/util/_time.py"}, {"content": "#!/usr/bin/env python3\n\n# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport logging\nimport subprocess\nimport sys\n\nimport pkg_resources\n\nfrom opentelemetry.instrumentation.bootstrap_gen import (\n default_instrumentations,\n libraries,\n)\n\nlogger = logging.getLogger(__file__)\n\n\ndef _syscall(func):\n def wrapper(package=None):\n try:\n if package:\n return func(package)\n return func()\n except subprocess.SubprocessError as exp:\n cmd = getattr(exp, \"cmd\", None)\n if cmd:\n msg = f'Error calling system command \"{\" \".join(cmd)}\"'\n if package:\n msg = f'{msg} for package \"{package}\"'\n raise RuntimeError(msg)\n\n return wrapper\n\n\n@_syscall\ndef _sys_pip_install(package):\n # explicit upgrade strategy to override potential pip config\n subprocess.check_call(\n [\n sys.executable,\n \"-m\",\n \"pip\",\n \"install\",\n \"-U\",\n \"--upgrade-strategy\",\n \"only-if-needed\",\n package,\n ]\n )\n\n\ndef _pip_check():\n \"\"\"Ensures none of the instrumentations have dependency conflicts.\n Clean check reported as:\n 'No broken requirements found.'\n Dependency conflicts are reported as:\n 'opentelemetry-instrumentation-flask 1.0.1 has requirement opentelemetry-sdk<2.0,>=1.0, but you have opentelemetry-sdk 0.5.'\n To not be too restrictive, we'll only check for relevant packages.\n \"\"\"\n with subprocess.Popen(\n [sys.executable, \"-m\", \"pip\", \"check\"], stdout=subprocess.PIPE\n ) as check_pipe:\n pip_check = check_pipe.communicate()[0].decode()\n pip_check_lower = pip_check.lower()\n for package_tup in libraries.values():\n for package in package_tup:\n if package.lower() in pip_check_lower:\n raise RuntimeError(f\"Dependency conflict found: {pip_check}\")\n\n\ndef _is_installed(req):\n if req in sys.modules:\n return True\n\n try:\n pkg_resources.get_distribution(req)\n except pkg_resources.DistributionNotFound:\n return False\n except pkg_resources.VersionConflict as exc:\n logger.warning(\n \"instrumentation for package %s is available but version %s is installed. Skipping.\",\n exc.req,\n exc.dist.as_requirement(), # pylint: disable=no-member\n )\n return False\n return True\n\n\ndef _find_installed_libraries():\n libs = default_instrumentations[:]\n libs.extend(\n [\n v[\"instrumentation\"]\n for _, v in libraries.items()\n if _is_installed(v[\"library\"])\n ]\n )\n return libs\n\n\ndef _run_requirements():\n logger.setLevel(logging.ERROR)\n print(\"\\n\".join(_find_installed_libraries()), end=\"\")\n\n\ndef _run_install():\n for lib in _find_installed_libraries():\n _sys_pip_install(lib)\n _pip_check()\n\n\ndef run() -> None:\n action_install = \"install\"\n action_requirements = \"requirements\"\n\n parser = argparse.ArgumentParser(\n description=\"\"\"\n opentelemetry-bootstrap detects installed libraries and automatically\n installs the relevant instrumentation packages for them.\n \"\"\"\n )\n parser.add_argument(\n \"-a\",\n \"--action\",\n choices=[action_install, action_requirements],\n default=action_requirements,\n help=\"\"\"\n install - uses pip to install the new requirements using to the\n currently active site-package.\n requirements - prints out the new requirements to stdout. Action can\n be piped and appended to a requirements.txt file.\n \"\"\",\n )\n args = parser.parse_args()\n\n cmd = {\n action_install: _run_install,\n action_requirements: _run_requirements,\n }[args.action]\n cmd()\n", "path": "opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap.py"}]}
| 2,068 | 468 |
gh_patches_debug_25062
|
rasdani/github-patches
|
git_diff
|
microsoft__playwright-python-572
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] misleading error message "browser was not found" if browser fails to start for other reasons
# Environment
playwright==1.9.2
ubuntu 20.04 in docker
chromium v854489 installed via `python3 -m playwright install`
# Problem description:
Unintentionally, the test attempted to launch headful chromium, launch failed with misleading error message:
```
"chromium" browser was not found.
Please complete Playwright installation via running
"python -m playwright install"
```
Launching browser as `chromium.launch(headless=True)` was succesful, so the problem clearly was not with browser installation, but browser launch inself (most probably, unavailable X11)
# Expected behavior:
browser stderr captured and displayed
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `playwright/_impl/_browser_type.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from pathlib import Path
16 from typing import Dict, List, Union
17
18 from playwright._impl._api_structures import (
19 Geolocation,
20 HttpCredentials,
21 ProxySettings,
22 ViewportSize,
23 )
24 from playwright._impl._browser import Browser, normalize_context_params
25 from playwright._impl._browser_context import BrowserContext
26 from playwright._impl._connection import ChannelOwner, from_channel
27 from playwright._impl._helper import (
28 ColorScheme,
29 Env,
30 locals_to_params,
31 not_installed_error,
32 )
33
34
35 class BrowserType(ChannelOwner):
36 def __init__(
37 self, parent: ChannelOwner, type: str, guid: str, initializer: Dict
38 ) -> None:
39 super().__init__(parent, type, guid, initializer)
40
41 @property
42 def name(self) -> str:
43 return self._initializer["name"]
44
45 @property
46 def executable_path(self) -> str:
47 return self._initializer["executablePath"]
48
49 async def launch(
50 self,
51 executablePath: Union[str, Path] = None,
52 args: List[str] = None,
53 ignoreDefaultArgs: Union[bool, List[str]] = None,
54 handleSIGINT: bool = None,
55 handleSIGTERM: bool = None,
56 handleSIGHUP: bool = None,
57 timeout: float = None,
58 env: Env = None,
59 headless: bool = None,
60 devtools: bool = None,
61 proxy: ProxySettings = None,
62 downloadsPath: Union[str, Path] = None,
63 slowMo: float = None,
64 chromiumSandbox: bool = None,
65 firefoxUserPrefs: Dict[str, Union[str, float, bool]] = None,
66 ) -> Browser:
67 params = locals_to_params(locals())
68 normalize_launch_params(params)
69 try:
70 return from_channel(await self._channel.send("launch", params))
71 except Exception as e:
72 if f"{self.name}-" in str(e):
73 raise not_installed_error(f'"{self.name}" browser was not found.')
74 raise e
75
76 async def launch_persistent_context(
77 self,
78 userDataDir: Union[str, Path],
79 executablePath: Union[str, Path] = None,
80 args: List[str] = None,
81 ignoreDefaultArgs: Union[bool, List[str]] = None,
82 handleSIGINT: bool = None,
83 handleSIGTERM: bool = None,
84 handleSIGHUP: bool = None,
85 timeout: float = None,
86 env: Env = None,
87 headless: bool = None,
88 devtools: bool = None,
89 proxy: ProxySettings = None,
90 downloadsPath: Union[str, Path] = None,
91 slowMo: float = None,
92 viewport: ViewportSize = None,
93 noViewport: bool = None,
94 ignoreHTTPSErrors: bool = None,
95 javaScriptEnabled: bool = None,
96 bypassCSP: bool = None,
97 userAgent: str = None,
98 locale: str = None,
99 timezoneId: str = None,
100 geolocation: Geolocation = None,
101 permissions: List[str] = None,
102 extraHTTPHeaders: Dict[str, str] = None,
103 offline: bool = None,
104 httpCredentials: HttpCredentials = None,
105 deviceScaleFactor: float = None,
106 isMobile: bool = None,
107 hasTouch: bool = None,
108 colorScheme: ColorScheme = None,
109 acceptDownloads: bool = None,
110 chromiumSandbox: bool = None,
111 recordHarPath: Union[Path, str] = None,
112 recordHarOmitContent: bool = None,
113 recordVideoDir: Union[Path, str] = None,
114 recordVideoSize: ViewportSize = None,
115 ) -> BrowserContext:
116 userDataDir = str(Path(userDataDir))
117 params = locals_to_params(locals())
118 normalize_context_params(self._connection._is_sync, params)
119 normalize_launch_params(params)
120 try:
121 context = from_channel(
122 await self._channel.send("launchPersistentContext", params)
123 )
124 context._options = params
125 return context
126 except Exception as e:
127 if f"{self.name}-" in str(e):
128 raise not_installed_error(f'"{self.name}" browser was not found.')
129 raise e
130
131
132 def normalize_launch_params(params: Dict) -> None:
133 if "env" in params:
134 params["env"] = {name: str(value) for [name, value] in params["env"].items()}
135 if "ignoreDefaultArgs" in params:
136 if params["ignoreDefaultArgs"] is True:
137 params["ignoreAllDefaultArgs"] = True
138 del params["ignoreDefaultArgs"]
139 if "executablePath" in params:
140 params["executablePath"] = str(Path(params["executablePath"]))
141 if "downloadsPath" in params:
142 params["downloadsPath"] = str(Path(params["downloadsPath"]))
143
```
Path: `playwright/_impl/_helper.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import fnmatch
16 import math
17 import re
18 import sys
19 import time
20 import traceback
21 from types import TracebackType
22 from typing import (
23 TYPE_CHECKING,
24 Any,
25 Callable,
26 Dict,
27 List,
28 Optional,
29 Pattern,
30 Union,
31 cast,
32 )
33
34 from playwright._impl._api_types import Error, TimeoutError
35
36 if sys.version_info >= (3, 8): # pragma: no cover
37 from typing import Literal, TypedDict
38 else: # pragma: no cover
39 from typing_extensions import Literal, TypedDict
40
41
42 if TYPE_CHECKING: # pragma: no cover
43 from playwright._impl._network import Request, Response, Route
44
45 URLMatch = Union[str, Pattern, Callable[[str], bool]]
46 URLMatchRequest = Union[str, Pattern, Callable[["Request"], bool]]
47 URLMatchResponse = Union[str, Pattern, Callable[["Response"], bool]]
48 RouteHandler = Union[Callable[["Route"], Any], Callable[["Route", "Request"], Any]]
49
50 ColorScheme = Literal["dark", "light", "no-preference"]
51 DocumentLoadState = Literal["domcontentloaded", "load", "networkidle"]
52 KeyboardModifier = Literal["Alt", "Control", "Meta", "Shift"]
53 MouseButton = Literal["left", "middle", "right"]
54
55
56 class ErrorPayload(TypedDict, total=False):
57 message: str
58 name: str
59 stack: str
60 value: Any
61
62
63 class Header(TypedDict):
64 name: str
65 value: str
66
67
68 class ContinueParameters(TypedDict, total=False):
69 url: Optional[str]
70 method: Optional[str]
71 headers: Optional[List[Header]]
72 postData: Optional[str]
73
74
75 class ParsedMessageParams(TypedDict):
76 type: str
77 guid: str
78 initializer: Dict
79
80
81 class ParsedMessagePayload(TypedDict, total=False):
82 id: int
83 guid: str
84 method: str
85 params: ParsedMessageParams
86 result: Any
87 error: ErrorPayload
88
89
90 class Document(TypedDict):
91 request: Optional[Any]
92
93
94 class FrameNavigatedEvent(TypedDict):
95 url: str
96 name: str
97 newDocument: Optional[Document]
98 error: Optional[str]
99
100
101 Env = Dict[str, Union[str, float, bool]]
102
103
104 class URLMatcher:
105 def __init__(self, match: URLMatch) -> None:
106 self._callback: Optional[Callable[[str], bool]] = None
107 self._regex_obj: Optional[Pattern] = None
108 if isinstance(match, str):
109 regex = fnmatch.translate(match)
110 self._regex_obj = re.compile(regex)
111 elif isinstance(match, Pattern):
112 self._regex_obj = match
113 else:
114 self._callback = match
115 self.match = match
116
117 def matches(self, url: str) -> bool:
118 if self._callback:
119 return self._callback(url)
120 if self._regex_obj:
121 return cast(bool, self._regex_obj.search(url))
122 return False
123
124
125 class TimeoutSettings:
126 def __init__(self, parent: Optional["TimeoutSettings"]) -> None:
127 self._parent = parent
128 self._timeout = 30000.0
129 self._navigation_timeout = 30000.0
130
131 def set_timeout(self, timeout: float) -> None:
132 self._timeout = timeout
133
134 def timeout(self) -> float:
135 if self._timeout is not None:
136 return self._timeout
137 if self._parent:
138 return self._parent.timeout()
139 return 30000
140
141 def set_navigation_timeout(self, navigation_timeout: float) -> None:
142 self._navigation_timeout = navigation_timeout
143
144 def navigation_timeout(self) -> float:
145 if self._navigation_timeout is not None:
146 return self._navigation_timeout
147 if self._parent:
148 return self._parent.navigation_timeout()
149 return 30000
150
151
152 def serialize_error(ex: Exception, tb: Optional[TracebackType]) -> ErrorPayload:
153 return dict(message=str(ex), name="Error", stack="".join(traceback.format_tb(tb)))
154
155
156 def parse_error(error: ErrorPayload) -> Error:
157 base_error_class = Error
158 if error.get("name") == "TimeoutError":
159 base_error_class = TimeoutError
160 return base_error_class(
161 cast(str, patch_error_message(error.get("message"))), error["stack"]
162 )
163
164
165 def patch_error_message(message: Optional[str]) -> Optional[str]:
166 if not message:
167 return None
168
169 match = re.match(r"(\w+)(: expected .*)", message)
170 if match:
171 message = to_snake_case(match.group(1)) + match.group(2)
172 message = message.replace(
173 "Pass { acceptDownloads: true }", "Pass { accept_downloads: True }"
174 )
175 return message
176
177
178 def locals_to_params(args: Dict) -> Dict:
179 copy = {}
180 for key in args:
181 if key == "self":
182 continue
183 if args[key] is not None:
184 copy[key] = args[key]
185 return copy
186
187
188 def monotonic_time() -> int:
189 return math.floor(time.monotonic() * 1000)
190
191
192 class RouteHandlerEntry:
193 def __init__(self, matcher: URLMatcher, handler: RouteHandler):
194 self.matcher = matcher
195 self.handler = handler
196
197
198 def is_safe_close_error(error: Exception) -> bool:
199 message = str(error)
200 return message.endswith("Browser has been closed") or message.endswith(
201 "Target page, context or browser has been closed"
202 )
203
204
205 def not_installed_error(message: str) -> Exception:
206 return Exception(
207 f"""
208 ================================================================================
209 {message}
210 Please complete Playwright installation via running
211
212 "python -m playwright install"
213
214 ================================================================================
215 """
216 )
217
218
219 to_snake_case_regex = re.compile("((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))")
220
221
222 def to_snake_case(name: str) -> str:
223 return to_snake_case_regex.sub(r"_\1", name).lower()
224
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/playwright/_impl/_browser_type.py b/playwright/_impl/_browser_type.py
--- a/playwright/_impl/_browser_type.py
+++ b/playwright/_impl/_browser_type.py
@@ -69,7 +69,7 @@
try:
return from_channel(await self._channel.send("launch", params))
except Exception as e:
- if f"{self.name}-" in str(e):
+ if "because executable doesn't exist" in str(e):
raise not_installed_error(f'"{self.name}" browser was not found.')
raise e
@@ -124,7 +124,7 @@
context._options = params
return context
except Exception as e:
- if f"{self.name}-" in str(e):
+ if "because executable doesn't exist" in str(e):
raise not_installed_error(f'"{self.name}" browser was not found.')
raise e
diff --git a/playwright/_impl/_helper.py b/playwright/_impl/_helper.py
--- a/playwright/_impl/_helper.py
+++ b/playwright/_impl/_helper.py
@@ -203,7 +203,7 @@
def not_installed_error(message: str) -> Exception:
- return Exception(
+ return Error(
f"""
================================================================================
{message}
|
{"golden_diff": "diff --git a/playwright/_impl/_browser_type.py b/playwright/_impl/_browser_type.py\n--- a/playwright/_impl/_browser_type.py\n+++ b/playwright/_impl/_browser_type.py\n@@ -69,7 +69,7 @@\n try:\n return from_channel(await self._channel.send(\"launch\", params))\n except Exception as e:\n- if f\"{self.name}-\" in str(e):\n+ if \"because executable doesn't exist\" in str(e):\n raise not_installed_error(f'\"{self.name}\" browser was not found.')\n raise e\n \n@@ -124,7 +124,7 @@\n context._options = params\n return context\n except Exception as e:\n- if f\"{self.name}-\" in str(e):\n+ if \"because executable doesn't exist\" in str(e):\n raise not_installed_error(f'\"{self.name}\" browser was not found.')\n raise e\n \ndiff --git a/playwright/_impl/_helper.py b/playwright/_impl/_helper.py\n--- a/playwright/_impl/_helper.py\n+++ b/playwright/_impl/_helper.py\n@@ -203,7 +203,7 @@\n \n \n def not_installed_error(message: str) -> Exception:\n- return Exception(\n+ return Error(\n f\"\"\"\n ================================================================================\n {message}\n", "issue": "[BUG] misleading error message \"browser was not found\" if browser fails to start for other reasons\n# Environment\r\nplaywright==1.9.2\r\nubuntu 20.04 in docker\r\nchromium v854489 installed via `python3 -m playwright install`\r\n\r\n# Problem description:\r\nUnintentionally, the test attempted to launch headful chromium, launch failed with misleading error message:\r\n```\r\n\"chromium\" browser was not found.\r\nPlease complete Playwright installation via running\r\n \"python -m playwright install\"\r\n```\r\nLaunching browser as `chromium.launch(headless=True)` was succesful, so the problem clearly was not with browser installation, but browser launch inself (most probably, unavailable X11)\r\n\r\n# Expected behavior:\r\nbrowser stderr captured and displayed\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pathlib import Path\nfrom typing import Dict, List, Union\n\nfrom playwright._impl._api_structures import (\n Geolocation,\n HttpCredentials,\n ProxySettings,\n ViewportSize,\n)\nfrom playwright._impl._browser import Browser, normalize_context_params\nfrom playwright._impl._browser_context import BrowserContext\nfrom playwright._impl._connection import ChannelOwner, from_channel\nfrom playwright._impl._helper import (\n ColorScheme,\n Env,\n locals_to_params,\n not_installed_error,\n)\n\n\nclass BrowserType(ChannelOwner):\n def __init__(\n self, parent: ChannelOwner, type: str, guid: str, initializer: Dict\n ) -> None:\n super().__init__(parent, type, guid, initializer)\n\n @property\n def name(self) -> str:\n return self._initializer[\"name\"]\n\n @property\n def executable_path(self) -> str:\n return self._initializer[\"executablePath\"]\n\n async def launch(\n self,\n executablePath: Union[str, Path] = None,\n args: List[str] = None,\n ignoreDefaultArgs: Union[bool, List[str]] = None,\n handleSIGINT: bool = None,\n handleSIGTERM: bool = None,\n handleSIGHUP: bool = None,\n timeout: float = None,\n env: Env = None,\n headless: bool = None,\n devtools: bool = None,\n proxy: ProxySettings = None,\n downloadsPath: Union[str, Path] = None,\n slowMo: float = None,\n chromiumSandbox: bool = None,\n firefoxUserPrefs: Dict[str, Union[str, float, bool]] = None,\n ) -> Browser:\n params = locals_to_params(locals())\n normalize_launch_params(params)\n try:\n return from_channel(await self._channel.send(\"launch\", params))\n except Exception as e:\n if f\"{self.name}-\" in str(e):\n raise not_installed_error(f'\"{self.name}\" browser was not found.')\n raise e\n\n async def launch_persistent_context(\n self,\n userDataDir: Union[str, Path],\n executablePath: Union[str, Path] = None,\n args: List[str] = None,\n ignoreDefaultArgs: Union[bool, List[str]] = None,\n handleSIGINT: bool = None,\n handleSIGTERM: bool = None,\n handleSIGHUP: bool = None,\n timeout: float = None,\n env: Env = None,\n headless: bool = None,\n devtools: bool = None,\n proxy: ProxySettings = None,\n downloadsPath: Union[str, Path] = None,\n slowMo: float = None,\n viewport: ViewportSize = None,\n noViewport: bool = None,\n ignoreHTTPSErrors: bool = None,\n javaScriptEnabled: bool = None,\n bypassCSP: bool = None,\n userAgent: str = None,\n locale: str = None,\n timezoneId: str = None,\n geolocation: Geolocation = None,\n permissions: List[str] = None,\n extraHTTPHeaders: Dict[str, str] = None,\n offline: bool = None,\n httpCredentials: HttpCredentials = None,\n deviceScaleFactor: float = None,\n isMobile: bool = None,\n hasTouch: bool = None,\n colorScheme: ColorScheme = None,\n acceptDownloads: bool = None,\n chromiumSandbox: bool = None,\n recordHarPath: Union[Path, str] = None,\n recordHarOmitContent: bool = None,\n recordVideoDir: Union[Path, str] = None,\n recordVideoSize: ViewportSize = None,\n ) -> BrowserContext:\n userDataDir = str(Path(userDataDir))\n params = locals_to_params(locals())\n normalize_context_params(self._connection._is_sync, params)\n normalize_launch_params(params)\n try:\n context = from_channel(\n await self._channel.send(\"launchPersistentContext\", params)\n )\n context._options = params\n return context\n except Exception as e:\n if f\"{self.name}-\" in str(e):\n raise not_installed_error(f'\"{self.name}\" browser was not found.')\n raise e\n\n\ndef normalize_launch_params(params: Dict) -> None:\n if \"env\" in params:\n params[\"env\"] = {name: str(value) for [name, value] in params[\"env\"].items()}\n if \"ignoreDefaultArgs\" in params:\n if params[\"ignoreDefaultArgs\"] is True:\n params[\"ignoreAllDefaultArgs\"] = True\n del params[\"ignoreDefaultArgs\"]\n if \"executablePath\" in params:\n params[\"executablePath\"] = str(Path(params[\"executablePath\"]))\n if \"downloadsPath\" in params:\n params[\"downloadsPath\"] = str(Path(params[\"downloadsPath\"]))\n", "path": "playwright/_impl/_browser_type.py"}, {"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport fnmatch\nimport math\nimport re\nimport sys\nimport time\nimport traceback\nfrom types import TracebackType\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n List,\n Optional,\n Pattern,\n Union,\n cast,\n)\n\nfrom playwright._impl._api_types import Error, TimeoutError\n\nif sys.version_info >= (3, 8): # pragma: no cover\n from typing import Literal, TypedDict\nelse: # pragma: no cover\n from typing_extensions import Literal, TypedDict\n\n\nif TYPE_CHECKING: # pragma: no cover\n from playwright._impl._network import Request, Response, Route\n\nURLMatch = Union[str, Pattern, Callable[[str], bool]]\nURLMatchRequest = Union[str, Pattern, Callable[[\"Request\"], bool]]\nURLMatchResponse = Union[str, Pattern, Callable[[\"Response\"], bool]]\nRouteHandler = Union[Callable[[\"Route\"], Any], Callable[[\"Route\", \"Request\"], Any]]\n\nColorScheme = Literal[\"dark\", \"light\", \"no-preference\"]\nDocumentLoadState = Literal[\"domcontentloaded\", \"load\", \"networkidle\"]\nKeyboardModifier = Literal[\"Alt\", \"Control\", \"Meta\", \"Shift\"]\nMouseButton = Literal[\"left\", \"middle\", \"right\"]\n\n\nclass ErrorPayload(TypedDict, total=False):\n message: str\n name: str\n stack: str\n value: Any\n\n\nclass Header(TypedDict):\n name: str\n value: str\n\n\nclass ContinueParameters(TypedDict, total=False):\n url: Optional[str]\n method: Optional[str]\n headers: Optional[List[Header]]\n postData: Optional[str]\n\n\nclass ParsedMessageParams(TypedDict):\n type: str\n guid: str\n initializer: Dict\n\n\nclass ParsedMessagePayload(TypedDict, total=False):\n id: int\n guid: str\n method: str\n params: ParsedMessageParams\n result: Any\n error: ErrorPayload\n\n\nclass Document(TypedDict):\n request: Optional[Any]\n\n\nclass FrameNavigatedEvent(TypedDict):\n url: str\n name: str\n newDocument: Optional[Document]\n error: Optional[str]\n\n\nEnv = Dict[str, Union[str, float, bool]]\n\n\nclass URLMatcher:\n def __init__(self, match: URLMatch) -> None:\n self._callback: Optional[Callable[[str], bool]] = None\n self._regex_obj: Optional[Pattern] = None\n if isinstance(match, str):\n regex = fnmatch.translate(match)\n self._regex_obj = re.compile(regex)\n elif isinstance(match, Pattern):\n self._regex_obj = match\n else:\n self._callback = match\n self.match = match\n\n def matches(self, url: str) -> bool:\n if self._callback:\n return self._callback(url)\n if self._regex_obj:\n return cast(bool, self._regex_obj.search(url))\n return False\n\n\nclass TimeoutSettings:\n def __init__(self, parent: Optional[\"TimeoutSettings\"]) -> None:\n self._parent = parent\n self._timeout = 30000.0\n self._navigation_timeout = 30000.0\n\n def set_timeout(self, timeout: float) -> None:\n self._timeout = timeout\n\n def timeout(self) -> float:\n if self._timeout is not None:\n return self._timeout\n if self._parent:\n return self._parent.timeout()\n return 30000\n\n def set_navigation_timeout(self, navigation_timeout: float) -> None:\n self._navigation_timeout = navigation_timeout\n\n def navigation_timeout(self) -> float:\n if self._navigation_timeout is not None:\n return self._navigation_timeout\n if self._parent:\n return self._parent.navigation_timeout()\n return 30000\n\n\ndef serialize_error(ex: Exception, tb: Optional[TracebackType]) -> ErrorPayload:\n return dict(message=str(ex), name=\"Error\", stack=\"\".join(traceback.format_tb(tb)))\n\n\ndef parse_error(error: ErrorPayload) -> Error:\n base_error_class = Error\n if error.get(\"name\") == \"TimeoutError\":\n base_error_class = TimeoutError\n return base_error_class(\n cast(str, patch_error_message(error.get(\"message\"))), error[\"stack\"]\n )\n\n\ndef patch_error_message(message: Optional[str]) -> Optional[str]:\n if not message:\n return None\n\n match = re.match(r\"(\\w+)(: expected .*)\", message)\n if match:\n message = to_snake_case(match.group(1)) + match.group(2)\n message = message.replace(\n \"Pass { acceptDownloads: true }\", \"Pass { accept_downloads: True }\"\n )\n return message\n\n\ndef locals_to_params(args: Dict) -> Dict:\n copy = {}\n for key in args:\n if key == \"self\":\n continue\n if args[key] is not None:\n copy[key] = args[key]\n return copy\n\n\ndef monotonic_time() -> int:\n return math.floor(time.monotonic() * 1000)\n\n\nclass RouteHandlerEntry:\n def __init__(self, matcher: URLMatcher, handler: RouteHandler):\n self.matcher = matcher\n self.handler = handler\n\n\ndef is_safe_close_error(error: Exception) -> bool:\n message = str(error)\n return message.endswith(\"Browser has been closed\") or message.endswith(\n \"Target page, context or browser has been closed\"\n )\n\n\ndef not_installed_error(message: str) -> Exception:\n return Exception(\n f\"\"\"\n================================================================================\n{message}\nPlease complete Playwright installation via running\n\n \"python -m playwright install\"\n\n================================================================================\n\"\"\"\n )\n\n\nto_snake_case_regex = re.compile(\"((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))\")\n\n\ndef to_snake_case(name: str) -> str:\n return to_snake_case_regex.sub(r\"_\\1\", name).lower()\n", "path": "playwright/_impl/_helper.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pathlib import Path\nfrom typing import Dict, List, Union\n\nfrom playwright._impl._api_structures import (\n Geolocation,\n HttpCredentials,\n ProxySettings,\n ViewportSize,\n)\nfrom playwright._impl._browser import Browser, normalize_context_params\nfrom playwright._impl._browser_context import BrowserContext\nfrom playwright._impl._connection import ChannelOwner, from_channel\nfrom playwright._impl._helper import (\n ColorScheme,\n Env,\n locals_to_params,\n not_installed_error,\n)\n\n\nclass BrowserType(ChannelOwner):\n def __init__(\n self, parent: ChannelOwner, type: str, guid: str, initializer: Dict\n ) -> None:\n super().__init__(parent, type, guid, initializer)\n\n @property\n def name(self) -> str:\n return self._initializer[\"name\"]\n\n @property\n def executable_path(self) -> str:\n return self._initializer[\"executablePath\"]\n\n async def launch(\n self,\n executablePath: Union[str, Path] = None,\n args: List[str] = None,\n ignoreDefaultArgs: Union[bool, List[str]] = None,\n handleSIGINT: bool = None,\n handleSIGTERM: bool = None,\n handleSIGHUP: bool = None,\n timeout: float = None,\n env: Env = None,\n headless: bool = None,\n devtools: bool = None,\n proxy: ProxySettings = None,\n downloadsPath: Union[str, Path] = None,\n slowMo: float = None,\n chromiumSandbox: bool = None,\n firefoxUserPrefs: Dict[str, Union[str, float, bool]] = None,\n ) -> Browser:\n params = locals_to_params(locals())\n normalize_launch_params(params)\n try:\n return from_channel(await self._channel.send(\"launch\", params))\n except Exception as e:\n if \"because executable doesn't exist\" in str(e):\n raise not_installed_error(f'\"{self.name}\" browser was not found.')\n raise e\n\n async def launch_persistent_context(\n self,\n userDataDir: Union[str, Path],\n executablePath: Union[str, Path] = None,\n args: List[str] = None,\n ignoreDefaultArgs: Union[bool, List[str]] = None,\n handleSIGINT: bool = None,\n handleSIGTERM: bool = None,\n handleSIGHUP: bool = None,\n timeout: float = None,\n env: Env = None,\n headless: bool = None,\n devtools: bool = None,\n proxy: ProxySettings = None,\n downloadsPath: Union[str, Path] = None,\n slowMo: float = None,\n viewport: ViewportSize = None,\n noViewport: bool = None,\n ignoreHTTPSErrors: bool = None,\n javaScriptEnabled: bool = None,\n bypassCSP: bool = None,\n userAgent: str = None,\n locale: str = None,\n timezoneId: str = None,\n geolocation: Geolocation = None,\n permissions: List[str] = None,\n extraHTTPHeaders: Dict[str, str] = None,\n offline: bool = None,\n httpCredentials: HttpCredentials = None,\n deviceScaleFactor: float = None,\n isMobile: bool = None,\n hasTouch: bool = None,\n colorScheme: ColorScheme = None,\n acceptDownloads: bool = None,\n chromiumSandbox: bool = None,\n recordHarPath: Union[Path, str] = None,\n recordHarOmitContent: bool = None,\n recordVideoDir: Union[Path, str] = None,\n recordVideoSize: ViewportSize = None,\n ) -> BrowserContext:\n userDataDir = str(Path(userDataDir))\n params = locals_to_params(locals())\n normalize_context_params(self._connection._is_sync, params)\n normalize_launch_params(params)\n try:\n context = from_channel(\n await self._channel.send(\"launchPersistentContext\", params)\n )\n context._options = params\n return context\n except Exception as e:\n if \"because executable doesn't exist\" in str(e):\n raise not_installed_error(f'\"{self.name}\" browser was not found.')\n raise e\n\n\ndef normalize_launch_params(params: Dict) -> None:\n if \"env\" in params:\n params[\"env\"] = {name: str(value) for [name, value] in params[\"env\"].items()}\n if \"ignoreDefaultArgs\" in params:\n if params[\"ignoreDefaultArgs\"] is True:\n params[\"ignoreAllDefaultArgs\"] = True\n del params[\"ignoreDefaultArgs\"]\n if \"executablePath\" in params:\n params[\"executablePath\"] = str(Path(params[\"executablePath\"]))\n if \"downloadsPath\" in params:\n params[\"downloadsPath\"] = str(Path(params[\"downloadsPath\"]))\n", "path": "playwright/_impl/_browser_type.py"}, {"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport fnmatch\nimport math\nimport re\nimport sys\nimport time\nimport traceback\nfrom types import TracebackType\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Dict,\n List,\n Optional,\n Pattern,\n Union,\n cast,\n)\n\nfrom playwright._impl._api_types import Error, TimeoutError\n\nif sys.version_info >= (3, 8): # pragma: no cover\n from typing import Literal, TypedDict\nelse: # pragma: no cover\n from typing_extensions import Literal, TypedDict\n\n\nif TYPE_CHECKING: # pragma: no cover\n from playwright._impl._network import Request, Response, Route\n\nURLMatch = Union[str, Pattern, Callable[[str], bool]]\nURLMatchRequest = Union[str, Pattern, Callable[[\"Request\"], bool]]\nURLMatchResponse = Union[str, Pattern, Callable[[\"Response\"], bool]]\nRouteHandler = Union[Callable[[\"Route\"], Any], Callable[[\"Route\", \"Request\"], Any]]\n\nColorScheme = Literal[\"dark\", \"light\", \"no-preference\"]\nDocumentLoadState = Literal[\"domcontentloaded\", \"load\", \"networkidle\"]\nKeyboardModifier = Literal[\"Alt\", \"Control\", \"Meta\", \"Shift\"]\nMouseButton = Literal[\"left\", \"middle\", \"right\"]\n\n\nclass ErrorPayload(TypedDict, total=False):\n message: str\n name: str\n stack: str\n value: Any\n\n\nclass Header(TypedDict):\n name: str\n value: str\n\n\nclass ContinueParameters(TypedDict, total=False):\n url: Optional[str]\n method: Optional[str]\n headers: Optional[List[Header]]\n postData: Optional[str]\n\n\nclass ParsedMessageParams(TypedDict):\n type: str\n guid: str\n initializer: Dict\n\n\nclass ParsedMessagePayload(TypedDict, total=False):\n id: int\n guid: str\n method: str\n params: ParsedMessageParams\n result: Any\n error: ErrorPayload\n\n\nclass Document(TypedDict):\n request: Optional[Any]\n\n\nclass FrameNavigatedEvent(TypedDict):\n url: str\n name: str\n newDocument: Optional[Document]\n error: Optional[str]\n\n\nEnv = Dict[str, Union[str, float, bool]]\n\n\nclass URLMatcher:\n def __init__(self, match: URLMatch) -> None:\n self._callback: Optional[Callable[[str], bool]] = None\n self._regex_obj: Optional[Pattern] = None\n if isinstance(match, str):\n regex = fnmatch.translate(match)\n self._regex_obj = re.compile(regex)\n elif isinstance(match, Pattern):\n self._regex_obj = match\n else:\n self._callback = match\n self.match = match\n\n def matches(self, url: str) -> bool:\n if self._callback:\n return self._callback(url)\n if self._regex_obj:\n return cast(bool, self._regex_obj.search(url))\n return False\n\n\nclass TimeoutSettings:\n def __init__(self, parent: Optional[\"TimeoutSettings\"]) -> None:\n self._parent = parent\n self._timeout = 30000.0\n self._navigation_timeout = 30000.0\n\n def set_timeout(self, timeout: float) -> None:\n self._timeout = timeout\n\n def timeout(self) -> float:\n if self._timeout is not None:\n return self._timeout\n if self._parent:\n return self._parent.timeout()\n return 30000\n\n def set_navigation_timeout(self, navigation_timeout: float) -> None:\n self._navigation_timeout = navigation_timeout\n\n def navigation_timeout(self) -> float:\n if self._navigation_timeout is not None:\n return self._navigation_timeout\n if self._parent:\n return self._parent.navigation_timeout()\n return 30000\n\n\ndef serialize_error(ex: Exception, tb: Optional[TracebackType]) -> ErrorPayload:\n return dict(message=str(ex), name=\"Error\", stack=\"\".join(traceback.format_tb(tb)))\n\n\ndef parse_error(error: ErrorPayload) -> Error:\n base_error_class = Error\n if error.get(\"name\") == \"TimeoutError\":\n base_error_class = TimeoutError\n return base_error_class(\n cast(str, patch_error_message(error.get(\"message\"))), error[\"stack\"]\n )\n\n\ndef patch_error_message(message: Optional[str]) -> Optional[str]:\n if not message:\n return None\n\n match = re.match(r\"(\\w+)(: expected .*)\", message)\n if match:\n message = to_snake_case(match.group(1)) + match.group(2)\n message = message.replace(\n \"Pass { acceptDownloads: true }\", \"Pass { accept_downloads: True }\"\n )\n return message\n\n\ndef locals_to_params(args: Dict) -> Dict:\n copy = {}\n for key in args:\n if key == \"self\":\n continue\n if args[key] is not None:\n copy[key] = args[key]\n return copy\n\n\ndef monotonic_time() -> int:\n return math.floor(time.monotonic() * 1000)\n\n\nclass RouteHandlerEntry:\n def __init__(self, matcher: URLMatcher, handler: RouteHandler):\n self.matcher = matcher\n self.handler = handler\n\n\ndef is_safe_close_error(error: Exception) -> bool:\n message = str(error)\n return message.endswith(\"Browser has been closed\") or message.endswith(\n \"Target page, context or browser has been closed\"\n )\n\n\ndef not_installed_error(message: str) -> Exception:\n return Error(\n f\"\"\"\n================================================================================\n{message}\nPlease complete Playwright installation via running\n\n \"python -m playwright install\"\n\n================================================================================\n\"\"\"\n )\n\n\nto_snake_case_regex = re.compile(\"((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))\")\n\n\ndef to_snake_case(name: str) -> str:\n return to_snake_case_regex.sub(r\"_\\1\", name).lower()\n", "path": "playwright/_impl/_helper.py"}]}
| 3,972 | 288 |
gh_patches_debug_7798
|
rasdani/github-patches
|
git_diff
|
ESMCI__cime-3725
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing chem_mech files in E3SM CaseDocs after renaming CAM to EAM
After [renaming CAM to EAM in E3SM](https://github.com/E3SM-Project/E3SM/pull/3845), the following two files are not copied to CaseDocs
```
chem_mech.doc
chem_mech.in
```
Need to change the 'cam' substring in 'camconf' near the end of cime/scripts/lib/CIME/case/preview_namelists.py. The piece of codes are copied below
```
# Copy over chemistry mechanism docs if they exist
if (os.path.isdir(os.path.join(casebuild, "camconf"))):
for file_to_copy in glob.glob(os.path.join(casebuild, "camconf", "*chem_mech*")):
safe_copy(file_to_copy, docdir)
```
To make it work for both cam and eam, need help to replace the substring 'cam' with the atm COMP_NAME. Thanks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/lib/CIME/case/preview_namelists.py`
Content:
```
1 """
2 API for preview namelist
3 create_dirs and create_namelists are members of Class case from file case.py
4 """
5
6 from CIME.XML.standard_module_setup import *
7 from CIME.utils import run_sub_or_cmd, safe_copy
8 import time, glob
9 logger = logging.getLogger(__name__)
10
11 def create_dirs(self):
12 """
13 Make necessary directories for case
14 """
15 # Get data from XML
16 exeroot = self.get_value("EXEROOT")
17 libroot = self.get_value("LIBROOT")
18 incroot = self.get_value("INCROOT")
19 rundir = self.get_value("RUNDIR")
20 caseroot = self.get_value("CASEROOT")
21 docdir = os.path.join(caseroot, "CaseDocs")
22 dirs_to_make = []
23 models = self.get_values("COMP_CLASSES")
24 for model in models:
25 dirname = model.lower()
26 dirs_to_make.append(os.path.join(exeroot, dirname, "obj"))
27
28 dirs_to_make.extend([exeroot, libroot, incroot, rundir, docdir])
29
30 for dir_to_make in dirs_to_make:
31 if (not os.path.isdir(dir_to_make) and not os.path.islink(dir_to_make)):
32 try:
33 logger.debug("Making dir '{}'".format(dir_to_make))
34 os.makedirs(dir_to_make)
35 except OSError as e:
36 # In a multithreaded situation, we may have lost a race to create this dir.
37 # We do not want to crash if that's the case.
38 if not os.path.isdir(dir_to_make):
39 expect(False, "Could not make directory '{}', error: {}".format(dir_to_make, e))
40
41 # As a convenience write the location of the case directory in the bld and run directories
42 for dir_ in (exeroot, rundir):
43 with open(os.path.join(dir_,"CASEROOT"),"w+") as fd:
44 fd.write(caseroot+"\n")
45
46 def create_namelists(self, component=None):
47 """
48 Create component namelists
49 """
50 self.flush()
51
52 create_dirs(self)
53
54 casebuild = self.get_value("CASEBUILD")
55 caseroot = self.get_value("CASEROOT")
56 rundir = self.get_value("RUNDIR")
57
58 docdir = os.path.join(caseroot, "CaseDocs")
59
60 # Load modules
61 self.load_env()
62
63 self.stage_refcase()
64
65 # Create namelists - must have cpl last in the list below
66 # Note - cpl must be last in the loop below so that in generating its namelist,
67 # it can use xml vars potentially set by other component's buildnml scripts
68 models = self.get_values("COMP_CLASSES")
69 models += [models.pop(0)]
70 for model in models:
71 model_str = model.lower()
72 logger.info(" {} {} ".format(time.strftime("%Y-%m-%d %H:%M:%S"),model_str))
73 config_file = self.get_value("CONFIG_{}_FILE".format(model_str.upper()))
74 config_dir = os.path.dirname(config_file)
75 if model_str == "cpl":
76 compname = "drv"
77 else:
78 compname = self.get_value("COMP_{}".format(model_str.upper()))
79 if component is None or component == model_str or compname=="ufsatm":
80 # first look in the case SourceMods directory
81 cmd = os.path.join(caseroot, "SourceMods", "src."+compname, "buildnml")
82 if os.path.isfile(cmd):
83 logger.warning("\nWARNING: Using local buildnml file {}\n".format(cmd))
84 else:
85 # otherwise look in the component config_dir
86 cmd = os.path.join(config_dir, "buildnml")
87 expect(os.path.isfile(cmd), "Could not find buildnml file for component {}".format(compname))
88 logger.info("Create namelist for component {}".format(compname))
89 run_sub_or_cmd(cmd, (caseroot), "buildnml",
90 (self, caseroot, compname), case=self)
91
92 logger.debug("Finished creating component namelists, component {} models = {}".format(component, models))
93
94 # Save namelists to docdir
95 if (not os.path.isdir(docdir)):
96 os.makedirs(docdir)
97 try:
98 with open(os.path.join(docdir, "README"), "w") as fd:
99 fd.write(" CESM Resolved Namelist Files\n For documentation only DO NOT MODIFY\n")
100 except (OSError, IOError) as e:
101 expect(False, "Failed to write {}/README: {}".format(docdir, e))
102
103 for cpglob in ["*_in_[0-9]*", "*modelio*", "*_in", "nuopc.runconfig",
104 "*streams*txt*", "*streams.xml", "*stxt", "*maps.rc", "*cism.config*", "nuopc.runseq"]:
105 for file_to_copy in glob.glob(os.path.join(rundir, cpglob)):
106 logger.debug("Copy file from '{}' to '{}'".format(file_to_copy, docdir))
107 safe_copy(file_to_copy, docdir)
108
109 # Copy over chemistry mechanism docs if they exist
110 if (os.path.isdir(os.path.join(casebuild, "camconf"))):
111 for file_to_copy in glob.glob(os.path.join(casebuild, "camconf", "*chem_mech*")):
112 safe_copy(file_to_copy, docdir)
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/lib/CIME/case/preview_namelists.py b/scripts/lib/CIME/case/preview_namelists.py
--- a/scripts/lib/CIME/case/preview_namelists.py
+++ b/scripts/lib/CIME/case/preview_namelists.py
@@ -107,6 +107,7 @@
safe_copy(file_to_copy, docdir)
# Copy over chemistry mechanism docs if they exist
- if (os.path.isdir(os.path.join(casebuild, "camconf"))):
- for file_to_copy in glob.glob(os.path.join(casebuild, "camconf", "*chem_mech*")):
+ atmconf = self.get_value("COMP_ATM") + "conf"
+ if (os.path.isdir(os.path.join(casebuild, atmconf))):
+ for file_to_copy in glob.glob(os.path.join(casebuild, atmconf, "*chem_mech*")):
safe_copy(file_to_copy, docdir)
|
{"golden_diff": "diff --git a/scripts/lib/CIME/case/preview_namelists.py b/scripts/lib/CIME/case/preview_namelists.py\n--- a/scripts/lib/CIME/case/preview_namelists.py\n+++ b/scripts/lib/CIME/case/preview_namelists.py\n@@ -107,6 +107,7 @@\n safe_copy(file_to_copy, docdir)\n \n # Copy over chemistry mechanism docs if they exist\n- if (os.path.isdir(os.path.join(casebuild, \"camconf\"))):\n- for file_to_copy in glob.glob(os.path.join(casebuild, \"camconf\", \"*chem_mech*\")):\n+ atmconf = self.get_value(\"COMP_ATM\") + \"conf\"\n+ if (os.path.isdir(os.path.join(casebuild, atmconf))):\n+ for file_to_copy in glob.glob(os.path.join(casebuild, atmconf, \"*chem_mech*\")):\n safe_copy(file_to_copy, docdir)\n", "issue": "Missing chem_mech files in E3SM CaseDocs after renaming CAM to EAM\nAfter [renaming CAM to EAM in E3SM](https://github.com/E3SM-Project/E3SM/pull/3845), the following two files are not copied to CaseDocs\r\n```\r\nchem_mech.doc\r\nchem_mech.in\r\n```\r\nNeed to change the 'cam' substring in 'camconf' near the end of cime/scripts/lib/CIME/case/preview_namelists.py. The piece of codes are copied below\r\n```\r\n# Copy over chemistry mechanism docs if they exist\r\n if (os.path.isdir(os.path.join(casebuild, \"camconf\"))):\r\n for file_to_copy in glob.glob(os.path.join(casebuild, \"camconf\", \"*chem_mech*\")):\r\n safe_copy(file_to_copy, docdir)\r\n```\r\nTo make it work for both cam and eam, need help to replace the substring 'cam' with the atm COMP_NAME. Thanks.\n", "before_files": [{"content": "\"\"\"\nAPI for preview namelist\ncreate_dirs and create_namelists are members of Class case from file case.py\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import run_sub_or_cmd, safe_copy\nimport time, glob\nlogger = logging.getLogger(__name__)\n\ndef create_dirs(self):\n \"\"\"\n Make necessary directories for case\n \"\"\"\n # Get data from XML\n exeroot = self.get_value(\"EXEROOT\")\n libroot = self.get_value(\"LIBROOT\")\n incroot = self.get_value(\"INCROOT\")\n rundir = self.get_value(\"RUNDIR\")\n caseroot = self.get_value(\"CASEROOT\")\n docdir = os.path.join(caseroot, \"CaseDocs\")\n dirs_to_make = []\n models = self.get_values(\"COMP_CLASSES\")\n for model in models:\n dirname = model.lower()\n dirs_to_make.append(os.path.join(exeroot, dirname, \"obj\"))\n\n dirs_to_make.extend([exeroot, libroot, incroot, rundir, docdir])\n\n for dir_to_make in dirs_to_make:\n if (not os.path.isdir(dir_to_make) and not os.path.islink(dir_to_make)):\n try:\n logger.debug(\"Making dir '{}'\".format(dir_to_make))\n os.makedirs(dir_to_make)\n except OSError as e:\n # In a multithreaded situation, we may have lost a race to create this dir.\n # We do not want to crash if that's the case.\n if not os.path.isdir(dir_to_make):\n expect(False, \"Could not make directory '{}', error: {}\".format(dir_to_make, e))\n\n # As a convenience write the location of the case directory in the bld and run directories\n for dir_ in (exeroot, rundir):\n with open(os.path.join(dir_,\"CASEROOT\"),\"w+\") as fd:\n fd.write(caseroot+\"\\n\")\n\ndef create_namelists(self, component=None):\n \"\"\"\n Create component namelists\n \"\"\"\n self.flush()\n\n create_dirs(self)\n\n casebuild = self.get_value(\"CASEBUILD\")\n caseroot = self.get_value(\"CASEROOT\")\n rundir = self.get_value(\"RUNDIR\")\n\n docdir = os.path.join(caseroot, \"CaseDocs\")\n\n # Load modules\n self.load_env()\n\n self.stage_refcase()\n\n # Create namelists - must have cpl last in the list below\n # Note - cpl must be last in the loop below so that in generating its namelist,\n # it can use xml vars potentially set by other component's buildnml scripts\n models = self.get_values(\"COMP_CLASSES\")\n models += [models.pop(0)]\n for model in models:\n model_str = model.lower()\n logger.info(\" {} {} \".format(time.strftime(\"%Y-%m-%d %H:%M:%S\"),model_str))\n config_file = self.get_value(\"CONFIG_{}_FILE\".format(model_str.upper()))\n config_dir = os.path.dirname(config_file)\n if model_str == \"cpl\":\n compname = \"drv\"\n else:\n compname = self.get_value(\"COMP_{}\".format(model_str.upper()))\n if component is None or component == model_str or compname==\"ufsatm\":\n # first look in the case SourceMods directory\n cmd = os.path.join(caseroot, \"SourceMods\", \"src.\"+compname, \"buildnml\")\n if os.path.isfile(cmd):\n logger.warning(\"\\nWARNING: Using local buildnml file {}\\n\".format(cmd))\n else:\n # otherwise look in the component config_dir\n cmd = os.path.join(config_dir, \"buildnml\")\n expect(os.path.isfile(cmd), \"Could not find buildnml file for component {}\".format(compname))\n logger.info(\"Create namelist for component {}\".format(compname))\n run_sub_or_cmd(cmd, (caseroot), \"buildnml\",\n (self, caseroot, compname), case=self)\n\n logger.debug(\"Finished creating component namelists, component {} models = {}\".format(component, models))\n\n # Save namelists to docdir\n if (not os.path.isdir(docdir)):\n os.makedirs(docdir)\n try:\n with open(os.path.join(docdir, \"README\"), \"w\") as fd:\n fd.write(\" CESM Resolved Namelist Files\\n For documentation only DO NOT MODIFY\\n\")\n except (OSError, IOError) as e:\n expect(False, \"Failed to write {}/README: {}\".format(docdir, e))\n\n for cpglob in [\"*_in_[0-9]*\", \"*modelio*\", \"*_in\", \"nuopc.runconfig\",\n \"*streams*txt*\", \"*streams.xml\", \"*stxt\", \"*maps.rc\", \"*cism.config*\", \"nuopc.runseq\"]:\n for file_to_copy in glob.glob(os.path.join(rundir, cpglob)):\n logger.debug(\"Copy file from '{}' to '{}'\".format(file_to_copy, docdir))\n safe_copy(file_to_copy, docdir)\n\n # Copy over chemistry mechanism docs if they exist\n if (os.path.isdir(os.path.join(casebuild, \"camconf\"))):\n for file_to_copy in glob.glob(os.path.join(casebuild, \"camconf\", \"*chem_mech*\")):\n safe_copy(file_to_copy, docdir)\n", "path": "scripts/lib/CIME/case/preview_namelists.py"}], "after_files": [{"content": "\"\"\"\nAPI for preview namelist\ncreate_dirs and create_namelists are members of Class case from file case.py\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import run_sub_or_cmd, safe_copy\nimport time, glob\nlogger = logging.getLogger(__name__)\n\ndef create_dirs(self):\n \"\"\"\n Make necessary directories for case\n \"\"\"\n # Get data from XML\n exeroot = self.get_value(\"EXEROOT\")\n libroot = self.get_value(\"LIBROOT\")\n incroot = self.get_value(\"INCROOT\")\n rundir = self.get_value(\"RUNDIR\")\n caseroot = self.get_value(\"CASEROOT\")\n docdir = os.path.join(caseroot, \"CaseDocs\")\n dirs_to_make = []\n models = self.get_values(\"COMP_CLASSES\")\n for model in models:\n dirname = model.lower()\n dirs_to_make.append(os.path.join(exeroot, dirname, \"obj\"))\n\n dirs_to_make.extend([exeroot, libroot, incroot, rundir, docdir])\n\n for dir_to_make in dirs_to_make:\n if (not os.path.isdir(dir_to_make) and not os.path.islink(dir_to_make)):\n try:\n logger.debug(\"Making dir '{}'\".format(dir_to_make))\n os.makedirs(dir_to_make)\n except OSError as e:\n # In a multithreaded situation, we may have lost a race to create this dir.\n # We do not want to crash if that's the case.\n if not os.path.isdir(dir_to_make):\n expect(False, \"Could not make directory '{}', error: {}\".format(dir_to_make, e))\n\n # As a convenience write the location of the case directory in the bld and run directories\n for dir_ in (exeroot, rundir):\n with open(os.path.join(dir_,\"CASEROOT\"),\"w+\") as fd:\n fd.write(caseroot+\"\\n\")\n\ndef create_namelists(self, component=None):\n \"\"\"\n Create component namelists\n \"\"\"\n self.flush()\n\n create_dirs(self)\n\n casebuild = self.get_value(\"CASEBUILD\")\n caseroot = self.get_value(\"CASEROOT\")\n rundir = self.get_value(\"RUNDIR\")\n\n docdir = os.path.join(caseroot, \"CaseDocs\")\n\n # Load modules\n self.load_env()\n\n self.stage_refcase()\n\n # Create namelists - must have cpl last in the list below\n # Note - cpl must be last in the loop below so that in generating its namelist,\n # it can use xml vars potentially set by other component's buildnml scripts\n models = self.get_values(\"COMP_CLASSES\")\n models += [models.pop(0)]\n for model in models:\n model_str = model.lower()\n logger.info(\" {} {} \".format(time.strftime(\"%Y-%m-%d %H:%M:%S\"),model_str))\n config_file = self.get_value(\"CONFIG_{}_FILE\".format(model_str.upper()))\n config_dir = os.path.dirname(config_file)\n if model_str == \"cpl\":\n compname = \"drv\"\n else:\n compname = self.get_value(\"COMP_{}\".format(model_str.upper()))\n if component is None or component == model_str or compname==\"ufsatm\":\n # first look in the case SourceMods directory\n cmd = os.path.join(caseroot, \"SourceMods\", \"src.\"+compname, \"buildnml\")\n if os.path.isfile(cmd):\n logger.warning(\"\\nWARNING: Using local buildnml file {}\\n\".format(cmd))\n else:\n # otherwise look in the component config_dir\n cmd = os.path.join(config_dir, \"buildnml\")\n expect(os.path.isfile(cmd), \"Could not find buildnml file for component {}\".format(compname))\n logger.info(\"Create namelist for component {}\".format(compname))\n run_sub_or_cmd(cmd, (caseroot), \"buildnml\",\n (self, caseroot, compname), case=self)\n\n logger.debug(\"Finished creating component namelists, component {} models = {}\".format(component, models))\n\n # Save namelists to docdir\n if (not os.path.isdir(docdir)):\n os.makedirs(docdir)\n try:\n with open(os.path.join(docdir, \"README\"), \"w\") as fd:\n fd.write(\" CESM Resolved Namelist Files\\n For documentation only DO NOT MODIFY\\n\")\n except (OSError, IOError) as e:\n expect(False, \"Failed to write {}/README: {}\".format(docdir, e))\n\n for cpglob in [\"*_in_[0-9]*\", \"*modelio*\", \"*_in\", \"nuopc.runconfig\",\n \"*streams*txt*\", \"*streams.xml\", \"*stxt\", \"*maps.rc\", \"*cism.config*\", \"nuopc.runseq\"]:\n for file_to_copy in glob.glob(os.path.join(rundir, cpglob)):\n logger.debug(\"Copy file from '{}' to '{}'\".format(file_to_copy, docdir))\n safe_copy(file_to_copy, docdir)\n\n # Copy over chemistry mechanism docs if they exist\n atmconf = self.get_value(\"COMP_ATM\") + \"conf\"\n if (os.path.isdir(os.path.join(casebuild, atmconf))):\n for file_to_copy in glob.glob(os.path.join(casebuild, atmconf, \"*chem_mech*\")):\n safe_copy(file_to_copy, docdir)\n", "path": "scripts/lib/CIME/case/preview_namelists.py"}]}
| 1,873 | 208 |
gh_patches_debug_4275
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-37
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Country is not stored in db on signup
When a user signs up the country is not stored in the db
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/profiles/forms.py`
Content:
```
1 from django import forms
2 from django.utils.translation import ugettext_lazy as _
3 from django_countries.countries import COUNTRIES
4
5 from userena.forms import SignupForm
6
7 class SignupFormExtra(SignupForm):
8 institution = forms.CharField(label=_(u'Institution'),
9 max_length = 100,
10 required = True,
11 help_text=_(u'Institution you are affiliated to.'))
12 department = forms.CharField(label=_(u'Department'),
13 max_length = 100,
14 required = True,
15 help_text=_(u'Department you represent.'))
16 country = forms.ChoiceField(label=_(u'Country'),
17 choices=COUNTRIES,
18 required = True)
19 website = forms.CharField(label=_(u'Website'),
20 max_length = 150,
21 required = False)
22 first_name = forms.CharField(label=_(u'First Name'),
23 max_length = 30,
24 required = True)
25 last_name = forms.CharField(label=_(u'Last Name'),
26 max_length = 30,
27 required = True)
28
29 def __init__(self, *args, **kw):
30 """ Bit of hackery to get the first and last name at the top of the form.
31 """
32 super(SignupFormExtra,self).__init__(*args,**kw)
33 # Put the first and last name at the top.
34 new_order = self.fields.keyOrder[:-2]
35 new_order.insert(0, 'first_name')
36 new_order.insert(1, 'last_name')
37 self.fields.keyOrder = new_order
38
39 def save(self):
40 user = super(SignupFormExtra,self).save()
41 user.first_name = self.cleaned_data['first_name']
42 user.last_name = self.cleaned_data['last_name']
43 user.save()
44 user_profile = user.get_profile()
45 user_profile.institution = self.cleaned_data['institution']
46 user_profile.department = self.cleaned_data['department']
47 user_profile.save()
48
49 return user
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django/profiles/forms.py b/django/profiles/forms.py
--- a/django/profiles/forms.py
+++ b/django/profiles/forms.py
@@ -44,6 +44,7 @@
user_profile = user.get_profile()
user_profile.institution = self.cleaned_data['institution']
user_profile.department = self.cleaned_data['department']
+ user_profile.country = self.cleaned_data['country']
user_profile.save()
return user
|
{"golden_diff": "diff --git a/django/profiles/forms.py b/django/profiles/forms.py\n--- a/django/profiles/forms.py\n+++ b/django/profiles/forms.py\n@@ -44,6 +44,7 @@\n user_profile = user.get_profile()\n user_profile.institution = self.cleaned_data['institution']\n user_profile.department = self.cleaned_data['department']\n+ user_profile.country = self.cleaned_data['country']\n user_profile.save()\n \n return user\n", "issue": "Country is not stored in db on signup\nWhen a user signs up the country is not stored in the db\n\n", "before_files": [{"content": "from django import forms\nfrom django.utils.translation import ugettext_lazy as _\nfrom django_countries.countries import COUNTRIES\n\nfrom userena.forms import SignupForm\n\nclass SignupFormExtra(SignupForm):\n institution = forms.CharField(label=_(u'Institution'),\n max_length = 100,\n required = True,\n help_text=_(u'Institution you are affiliated to.'))\n department = forms.CharField(label=_(u'Department'),\n max_length = 100,\n required = True,\n help_text=_(u'Department you represent.'))\n country = forms.ChoiceField(label=_(u'Country'),\n choices=COUNTRIES,\n required = True)\n website = forms.CharField(label=_(u'Website'),\n max_length = 150,\n required = False)\n first_name = forms.CharField(label=_(u'First Name'),\n max_length = 30,\n required = True)\n last_name = forms.CharField(label=_(u'Last Name'),\n max_length = 30,\n required = True)\n\n def __init__(self, *args, **kw):\n \"\"\" Bit of hackery to get the first and last name at the top of the form.\n \"\"\"\n super(SignupFormExtra,self).__init__(*args,**kw)\n # Put the first and last name at the top.\n new_order = self.fields.keyOrder[:-2]\n new_order.insert(0, 'first_name')\n new_order.insert(1, 'last_name')\n self.fields.keyOrder = new_order\n\n def save(self):\n user = super(SignupFormExtra,self).save()\n user.first_name = self.cleaned_data['first_name']\n user.last_name = self.cleaned_data['last_name']\n user.save()\n user_profile = user.get_profile()\n user_profile.institution = self.cleaned_data['institution']\n user_profile.department = self.cleaned_data['department']\n user_profile.save()\n\n return user\n", "path": "django/profiles/forms.py"}], "after_files": [{"content": "from django import forms\nfrom django.utils.translation import ugettext_lazy as _\nfrom django_countries.countries import COUNTRIES\n\nfrom userena.forms import SignupForm\n\nclass SignupFormExtra(SignupForm):\n institution = forms.CharField(label=_(u'Institution'),\n max_length = 100,\n required = True,\n help_text=_(u'Institution you are affiliated to.'))\n department = forms.CharField(label=_(u'Department'),\n max_length = 100,\n required = True,\n help_text=_(u'Department you represent.'))\n country = forms.ChoiceField(label=_(u'Country'),\n choices=COUNTRIES,\n required = True)\n website = forms.CharField(label=_(u'Website'),\n max_length = 150,\n required = False)\n first_name = forms.CharField(label=_(u'First Name'),\n max_length = 30,\n required = True)\n last_name = forms.CharField(label=_(u'Last Name'),\n max_length = 30,\n required = True)\n\n def __init__(self, *args, **kw):\n \"\"\" Bit of hackery to get the first and last name at the top of the form.\n \"\"\"\n super(SignupFormExtra,self).__init__(*args,**kw)\n # Put the first and last name at the top.\n new_order = self.fields.keyOrder[:-2]\n new_order.insert(0, 'first_name')\n new_order.insert(1, 'last_name')\n self.fields.keyOrder = new_order\n\n def save(self):\n user = super(SignupFormExtra,self).save()\n user.first_name = self.cleaned_data['first_name']\n user.last_name = self.cleaned_data['last_name']\n user.save()\n user_profile = user.get_profile()\n user_profile.institution = self.cleaned_data['institution']\n user_profile.department = self.cleaned_data['department']\n user_profile.country = self.cleaned_data['country']\n user_profile.save()\n\n return user\n", "path": "django/profiles/forms.py"}]}
| 797 | 101 |
gh_patches_debug_26386
|
rasdani/github-patches
|
git_diff
|
scverse__scanpy-2879
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
scanpy 1.10.0rc1 breaks anndata pre-release tests
### Please make sure these conditions are met
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of scanpy.
- [X] (optional) I have confirmed this bug exists on the master branch of scanpy.
### What happened?
`@doctest_needs` decorator causes test failures on scanpy import in anndata test suite
https://dev.azure.com/scverse/anndata/_build/results?buildId=5802&view=logs&jobId=0497d03e-5796-547f-cc56-989f8152a63c&j=0497d03e-5796-547f-cc56-989f8152a63c&t=ea3acdad-0250-5b8b-a1da-6cd02463cf17
### Minimal code sample
```python
NA
```
### Error output
```pytb
else:
enum_member = enum_class._new_member_(enum_class, *args)
if not hasattr(enum_member, '_value_'):
if enum_class._member_type_ is object:
enum_member._value_ = value
else:
try:
enum_member._value_ = enum_class._member_type_(*args)
except Exception as exc:
new_exc = TypeError(
'_value_ not set in __new__, unable to create it'
)
new_exc.__cause__ = exc
> raise new_exc
E TypeError: _value_ not set in __new__, unable to create it
```
### Versions
<details>
```
See anndata test failure
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scanpy/__init__.py`
Content:
```
1 """Single-Cell Analysis in Python."""
2 from __future__ import annotations
3
4 try: # See https://github.com/maresb/hatch-vcs-footgun-example
5 from setuptools_scm import get_version
6
7 __version__ = get_version(root="..", relative_to=__file__)
8 del get_version
9 except (ImportError, LookupError):
10 try:
11 from ._version import __version__
12 except ModuleNotFoundError:
13 raise RuntimeError(
14 "scanpy is not correctly installed. Please install it, e.g. with pip."
15 )
16
17 from ._utils import check_versions
18
19 check_versions()
20 del check_versions
21
22 # the actual API
23 # (start with settings as several tools are using it)
24 from anndata import (
25 AnnData,
26 concat,
27 read_csv,
28 read_excel,
29 read_h5ad,
30 read_hdf,
31 read_loom,
32 read_mtx,
33 read_text,
34 read_umi_tools,
35 )
36
37 from . import datasets, experimental, external, get, logging, metrics, queries
38 from . import plotting as pl
39 from . import preprocessing as pp
40 from . import tools as tl
41 from ._settings import Verbosity, settings
42 from .neighbors import Neighbors
43 from .readwrite import read, read_10x_h5, read_10x_mtx, read_visium, write
44
45 set_figure_params = settings.set_figure_params
46
47 # has to be done at the end, after everything has been imported
48 import sys
49
50 sys.modules.update({f"{__name__}.{m}": globals()[m] for m in ["tl", "pp", "pl"]})
51 from ._utils import annotate_doc_types
52
53 annotate_doc_types(sys.modules[__name__], "scanpy")
54 del sys, annotate_doc_types
55
56 __all__ = [
57 "__version__",
58 "AnnData",
59 "concat",
60 "read_csv",
61 "read_excel",
62 "read_h5ad",
63 "read_hdf",
64 "read_loom",
65 "read_mtx",
66 "read_text",
67 "read_umi_tools",
68 "read",
69 "read_10x_h5",
70 "read_10x_mtx",
71 "read_visium",
72 "write",
73 "datasets",
74 "experimental",
75 "external",
76 "get",
77 "logging",
78 "metrics",
79 "queries",
80 "pl",
81 "pp",
82 "tl",
83 "Verbosity",
84 "settings",
85 "Neighbors",
86 "set_figure_params",
87 ]
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scanpy/__init__.py b/scanpy/__init__.py
--- a/scanpy/__init__.py
+++ b/scanpy/__init__.py
@@ -1,6 +1,8 @@
"""Single-Cell Analysis in Python."""
from __future__ import annotations
+import sys
+
try: # See https://github.com/maresb/hatch-vcs-footgun-example
from setuptools_scm import get_version
@@ -21,6 +23,11 @@
# the actual API
# (start with settings as several tools are using it)
+
+from ._settings import Verbosity, settings
+
+set_figure_params = settings.set_figure_params
+
from anndata import (
AnnData,
concat,
@@ -38,15 +45,10 @@
from . import plotting as pl
from . import preprocessing as pp
from . import tools as tl
-from ._settings import Verbosity, settings
from .neighbors import Neighbors
from .readwrite import read, read_10x_h5, read_10x_mtx, read_visium, write
-set_figure_params = settings.set_figure_params
-
# has to be done at the end, after everything has been imported
-import sys
-
sys.modules.update({f"{__name__}.{m}": globals()[m] for m in ["tl", "pp", "pl"]})
from ._utils import annotate_doc_types
|
{"golden_diff": "diff --git a/scanpy/__init__.py b/scanpy/__init__.py\n--- a/scanpy/__init__.py\n+++ b/scanpy/__init__.py\n@@ -1,6 +1,8 @@\n \"\"\"Single-Cell Analysis in Python.\"\"\"\n from __future__ import annotations\n \n+import sys\n+\n try: # See https://github.com/maresb/hatch-vcs-footgun-example\n from setuptools_scm import get_version\n \n@@ -21,6 +23,11 @@\n \n # the actual API\n # (start with settings as several tools are using it)\n+\n+from ._settings import Verbosity, settings\n+\n+set_figure_params = settings.set_figure_params\n+\n from anndata import (\n AnnData,\n concat,\n@@ -38,15 +45,10 @@\n from . import plotting as pl\n from . import preprocessing as pp\n from . import tools as tl\n-from ._settings import Verbosity, settings\n from .neighbors import Neighbors\n from .readwrite import read, read_10x_h5, read_10x_mtx, read_visium, write\n \n-set_figure_params = settings.set_figure_params\n-\n # has to be done at the end, after everything has been imported\n-import sys\n-\n sys.modules.update({f\"{__name__}.{m}\": globals()[m] for m in [\"tl\", \"pp\", \"pl\"]})\n from ._utils import annotate_doc_types\n", "issue": "scanpy 1.10.0rc1 breaks anndata pre-release tests\n### Please make sure these conditions are met\n\n- [X] I have checked that this issue has not already been reported.\n- [X] I have confirmed this bug exists on the latest version of scanpy.\n- [X] (optional) I have confirmed this bug exists on the master branch of scanpy.\n\n### What happened?\n\n`@doctest_needs` decorator causes test failures on scanpy import in anndata test suite\r\n\r\nhttps://dev.azure.com/scverse/anndata/_build/results?buildId=5802&view=logs&jobId=0497d03e-5796-547f-cc56-989f8152a63c&j=0497d03e-5796-547f-cc56-989f8152a63c&t=ea3acdad-0250-5b8b-a1da-6cd02463cf17\r\n\r\n\n\n### Minimal code sample\n\n```python\nNA\n```\n\n\n### Error output\n\n```pytb\nelse:\r\n enum_member = enum_class._new_member_(enum_class, *args)\r\n if not hasattr(enum_member, '_value_'):\r\n if enum_class._member_type_ is object:\r\n enum_member._value_ = value\r\n else:\r\n try:\r\n enum_member._value_ = enum_class._member_type_(*args)\r\n except Exception as exc:\r\n new_exc = TypeError(\r\n '_value_ not set in __new__, unable to create it'\r\n )\r\n new_exc.__cause__ = exc\r\n> raise new_exc\r\nE TypeError: _value_ not set in __new__, unable to create it\n```\n\n\n### Versions\n\n<details>\r\n\r\n```\r\nSee anndata test failure\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "\"\"\"Single-Cell Analysis in Python.\"\"\"\nfrom __future__ import annotations\n\ntry: # See https://github.com/maresb/hatch-vcs-footgun-example\n from setuptools_scm import get_version\n\n __version__ = get_version(root=\"..\", relative_to=__file__)\n del get_version\nexcept (ImportError, LookupError):\n try:\n from ._version import __version__\n except ModuleNotFoundError:\n raise RuntimeError(\n \"scanpy is not correctly installed. Please install it, e.g. with pip.\"\n )\n\nfrom ._utils import check_versions\n\ncheck_versions()\ndel check_versions\n\n# the actual API\n# (start with settings as several tools are using it)\nfrom anndata import (\n AnnData,\n concat,\n read_csv,\n read_excel,\n read_h5ad,\n read_hdf,\n read_loom,\n read_mtx,\n read_text,\n read_umi_tools,\n)\n\nfrom . import datasets, experimental, external, get, logging, metrics, queries\nfrom . import plotting as pl\nfrom . import preprocessing as pp\nfrom . import tools as tl\nfrom ._settings import Verbosity, settings\nfrom .neighbors import Neighbors\nfrom .readwrite import read, read_10x_h5, read_10x_mtx, read_visium, write\n\nset_figure_params = settings.set_figure_params\n\n# has to be done at the end, after everything has been imported\nimport sys\n\nsys.modules.update({f\"{__name__}.{m}\": globals()[m] for m in [\"tl\", \"pp\", \"pl\"]})\nfrom ._utils import annotate_doc_types\n\nannotate_doc_types(sys.modules[__name__], \"scanpy\")\ndel sys, annotate_doc_types\n\n__all__ = [\n \"__version__\",\n \"AnnData\",\n \"concat\",\n \"read_csv\",\n \"read_excel\",\n \"read_h5ad\",\n \"read_hdf\",\n \"read_loom\",\n \"read_mtx\",\n \"read_text\",\n \"read_umi_tools\",\n \"read\",\n \"read_10x_h5\",\n \"read_10x_mtx\",\n \"read_visium\",\n \"write\",\n \"datasets\",\n \"experimental\",\n \"external\",\n \"get\",\n \"logging\",\n \"metrics\",\n \"queries\",\n \"pl\",\n \"pp\",\n \"tl\",\n \"Verbosity\",\n \"settings\",\n \"Neighbors\",\n \"set_figure_params\",\n]\n", "path": "scanpy/__init__.py"}], "after_files": [{"content": "\"\"\"Single-Cell Analysis in Python.\"\"\"\nfrom __future__ import annotations\n\nimport sys\n\ntry: # See https://github.com/maresb/hatch-vcs-footgun-example\n from setuptools_scm import get_version\n\n __version__ = get_version(root=\"..\", relative_to=__file__)\n del get_version\nexcept (ImportError, LookupError):\n try:\n from ._version import __version__\n except ModuleNotFoundError:\n raise RuntimeError(\n \"scanpy is not correctly installed. Please install it, e.g. with pip.\"\n )\n\nfrom ._utils import check_versions\n\ncheck_versions()\ndel check_versions\n\n# the actual API\n# (start with settings as several tools are using it)\n\nfrom ._settings import Verbosity, settings\n\nset_figure_params = settings.set_figure_params\n\nfrom anndata import (\n AnnData,\n concat,\n read_csv,\n read_excel,\n read_h5ad,\n read_hdf,\n read_loom,\n read_mtx,\n read_text,\n read_umi_tools,\n)\n\nfrom . import datasets, experimental, external, get, logging, metrics, queries\nfrom . import plotting as pl\nfrom . import preprocessing as pp\nfrom . import tools as tl\nfrom .neighbors import Neighbors\nfrom .readwrite import read, read_10x_h5, read_10x_mtx, read_visium, write\n\n# has to be done at the end, after everything has been imported\nsys.modules.update({f\"{__name__}.{m}\": globals()[m] for m in [\"tl\", \"pp\", \"pl\"]})\nfrom ._utils import annotate_doc_types\n\nannotate_doc_types(sys.modules[__name__], \"scanpy\")\ndel sys, annotate_doc_types\n\n__all__ = [\n \"__version__\",\n \"AnnData\",\n \"concat\",\n \"read_csv\",\n \"read_excel\",\n \"read_h5ad\",\n \"read_hdf\",\n \"read_loom\",\n \"read_mtx\",\n \"read_text\",\n \"read_umi_tools\",\n \"read\",\n \"read_10x_h5\",\n \"read_10x_mtx\",\n \"read_visium\",\n \"write\",\n \"datasets\",\n \"experimental\",\n \"external\",\n \"get\",\n \"logging\",\n \"metrics\",\n \"queries\",\n \"pl\",\n \"pp\",\n \"tl\",\n \"Verbosity\",\n \"settings\",\n \"Neighbors\",\n \"set_figure_params\",\n]\n", "path": "scanpy/__init__.py"}]}
| 1,375 | 311 |
gh_patches_debug_14509
|
rasdani/github-patches
|
git_diff
|
nilearn__nilearn-2819
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BUG: broken reference in example
<!--Provide a brief description of the bug.-->
Broken reference to vol_to_surf in:
examples/01_plotting/plot_3d_map_to_surface_projection.py
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/01_plotting/plot_3d_map_to_surface_projection.py`
Content:
```
1 """
2 Making a surface plot of a 3D statistical map
3 =============================================
4
5 project a 3D statistical map onto a cortical mesh using
6 :func:`nilearn.surface.vol_to_surf`. Display a surface plot of the projected
7 map using :func:`nilearn.plotting.plot_surf_stat_map` and adding contours of
8 regions of interest using :func:`nilearn.plotting.plot_surf_contours`.
9
10 """
11
12 ##############################################################################
13 # Get a statistical map
14 # ---------------------
15
16 from nilearn import datasets
17
18 motor_images = datasets.fetch_neurovault_motor_task()
19 stat_img = motor_images.images[0]
20
21
22 ##############################################################################
23 # Get a cortical mesh
24 # -------------------
25
26 fsaverage = datasets.fetch_surf_fsaverage()
27
28 ##############################################################################
29 # Sample the 3D data around each node of the mesh
30 # -----------------------------------------------
31
32 from nilearn import surface
33
34 texture = surface.vol_to_surf(stat_img, fsaverage.pial_right)
35
36 ##############################################################################
37 # Plot the result
38 # ---------------
39
40 from nilearn import plotting
41
42 plotting.plot_surf_stat_map(fsaverage.infl_right, texture, hemi='right',
43 title='Surface right hemisphere', colorbar=True,
44 threshold=1., bg_map=fsaverage.sulc_right)
45
46 ##############################################################################
47 # Plot 3D image for comparison
48 # ----------------------------
49
50 plotting.plot_glass_brain(stat_img, display_mode='r', plot_abs=False,
51 title='Glass brain', threshold=2.)
52
53 plotting.plot_stat_map(stat_img, display_mode='x', threshold=1.,
54 cut_coords=range(0, 51, 10), title='Slices')
55
56 ##############################################################################
57 # Use an atlas and choose regions to outline
58 # ------------------------------------------
59
60 import numpy as np
61
62 destrieux_atlas = datasets.fetch_atlas_surf_destrieux()
63 parcellation = destrieux_atlas['map_right']
64
65 # these are the regions we want to outline
66 regions_dict = {b'G_postcentral': 'Postcentral gyrus',
67 b'G_precentral': 'Precentral gyrus'}
68
69 # get indices in atlas for these labels
70 regions_indices = [np.where(np.array(destrieux_atlas['labels']) == region)[0][0]
71 for region in regions_dict]
72
73 labels = list(regions_dict.values())
74
75 ##############################################################################
76 # Display outlines of the regions of interest on top of a statistical map
77 # -----------------------------------------------------------------------
78
79 figure = plotting.plot_surf_stat_map(fsaverage.infl_right, texture, hemi='right',
80 title='Surface right hemisphere',
81 colorbar=True, threshold=1.,
82 bg_map=fsaverage.sulc_right)
83
84 plotting.plot_surf_contours(fsaverage.infl_right, parcellation, labels=labels,
85 levels=regions_indices, figure=figure, legend=True,
86 colors=['g', 'k'])
87 plotting.show()
88
89 ##############################################################################
90 # Plot with higher-resolution mesh
91 # --------------------------------
92 #
93 # `fetch_surf_fsaverage` takes a "mesh" argument which specifies
94 # wether to fetch the low-resolution fsaverage5 mesh, or the high-resolution
95 # fsaverage mesh. using mesh="fsaverage" will result in more memory usage and
96 # computation time, but finer visualizations.
97
98 big_fsaverage = datasets.fetch_surf_fsaverage('fsaverage')
99 big_texture = surface.vol_to_surf(stat_img, big_fsaverage.pial_right)
100
101 plotting.plot_surf_stat_map(big_fsaverage.infl_right,
102 big_texture, hemi='right', colorbar=True,
103 title='Surface right hemisphere: fine mesh',
104 threshold=1., bg_map=big_fsaverage.sulc_right)
105
106
107 ##############################################################################
108 # Plot multiple views of the 3D volume on a surface
109 # -------------------------------------------------
110 #
111 # *plot_img_on_surf* takes a statistical map and projects it onto a surface.
112 # It supports multiple choices of orientations, and can plot either one or both
113 # hemispheres. If no *surf_mesh* is given, *plot_img_on_surf* projects the
114 # images onto `FreeSurfer <https://surfer.nmr.mgh.harvard.edu/>`_\'s
115 # fsaverage5.
116
117 plotting.plot_img_on_surf(stat_img,
118 views=['lateral', 'medial'],
119 hemispheres=['left', 'right'],
120 colorbar=True)
121 plotting.show()
122
123 ##############################################################################
124 # 3D visualization in a web browser
125 # ---------------------------------
126 # An alternative to :func:`nilearn.plotting.plot_surf_stat_map` is to use
127 # :func:`nilearn.plotting.view_surf` or
128 # :func:`nilearn.plotting.view_img_on_surf` that give more interactive
129 # visualizations in a web browser. See :ref:`interactive-surface-plotting` for
130 # more details.
131
132 view = plotting.view_surf(fsaverage.infl_right, texture, threshold='90%',
133 bg_map=fsaverage.sulc_right)
134
135 # In a Jupyter notebook, if ``view`` is the output of a cell, it will
136 # be displayed below the cell
137 view
138
139 ##############################################################################
140
141 # uncomment this to open the plot in a web browser:
142 # view.open_in_browser()
143
144 ##############################################################################
145 # We don't need to do the projection ourselves, we can use view_img_on_surf:
146
147 view = plotting.view_img_on_surf(stat_img, threshold='90%')
148 # view.open_in_browser()
149
150 view
151
152 ##############################################################################
153 # Impact of plot parameters on visualization
154 # ------------------------------------------
155 # You can specify arguments to be passed on to the function
156 # :func:`nilearn.plotting.vol_to_surf` using `vol_to_surf_kwargs`. This allows
157 # fine-grained control of how the input 3D image is resampled and interpolated -
158 # for example if you are viewing a volumetric atlas, you would want to avoid
159 # averaging the labels between neighboring regions. Using nearest-neighbor
160 # interpolation with zero radius will achieve this.
161
162 destrieux = datasets.fetch_atlas_destrieux_2009()
163
164 view = plotting.view_img_on_surf(
165 destrieux.maps,
166 surf_mesh="fsaverage",
167 vol_to_surf_kwargs={"n_samples": 1, "radius": 0.0, "interpolation": "nearest"},
168 symmetric_cmap=False,
169 )
170
171 # view.open_in_browser()
172 view
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/01_plotting/plot_3d_map_to_surface_projection.py b/examples/01_plotting/plot_3d_map_to_surface_projection.py
--- a/examples/01_plotting/plot_3d_map_to_surface_projection.py
+++ b/examples/01_plotting/plot_3d_map_to_surface_projection.py
@@ -153,7 +153,7 @@
# Impact of plot parameters on visualization
# ------------------------------------------
# You can specify arguments to be passed on to the function
-# :func:`nilearn.plotting.vol_to_surf` using `vol_to_surf_kwargs`. This allows
+# :func:`nilearn.surface.vol_to_surf` using `vol_to_surf_kwargs`. This allows
# fine-grained control of how the input 3D image is resampled and interpolated -
# for example if you are viewing a volumetric atlas, you would want to avoid
# averaging the labels between neighboring regions. Using nearest-neighbor
|
{"golden_diff": "diff --git a/examples/01_plotting/plot_3d_map_to_surface_projection.py b/examples/01_plotting/plot_3d_map_to_surface_projection.py\n--- a/examples/01_plotting/plot_3d_map_to_surface_projection.py\n+++ b/examples/01_plotting/plot_3d_map_to_surface_projection.py\n@@ -153,7 +153,7 @@\n # Impact of plot parameters on visualization\n # ------------------------------------------\n # You can specify arguments to be passed on to the function\n-# :func:`nilearn.plotting.vol_to_surf` using `vol_to_surf_kwargs`. This allows\n+# :func:`nilearn.surface.vol_to_surf` using `vol_to_surf_kwargs`. This allows\n # fine-grained control of how the input 3D image is resampled and interpolated -\n # for example if you are viewing a volumetric atlas, you would want to avoid\n # averaging the labels between neighboring regions. Using nearest-neighbor\n", "issue": "BUG: broken reference in example\n<!--Provide a brief description of the bug.-->\r\nBroken reference to vol_to_surf in:\r\n\r\nexamples/01_plotting/plot_3d_map_to_surface_projection.py\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nMaking a surface plot of a 3D statistical map\n=============================================\n\nproject a 3D statistical map onto a cortical mesh using\n:func:`nilearn.surface.vol_to_surf`. Display a surface plot of the projected\nmap using :func:`nilearn.plotting.plot_surf_stat_map` and adding contours of\nregions of interest using :func:`nilearn.plotting.plot_surf_contours`.\n\n\"\"\"\n\n##############################################################################\n# Get a statistical map\n# ---------------------\n\nfrom nilearn import datasets\n\nmotor_images = datasets.fetch_neurovault_motor_task()\nstat_img = motor_images.images[0]\n\n\n##############################################################################\n# Get a cortical mesh\n# -------------------\n\nfsaverage = datasets.fetch_surf_fsaverage()\n\n##############################################################################\n# Sample the 3D data around each node of the mesh\n# -----------------------------------------------\n\nfrom nilearn import surface\n\ntexture = surface.vol_to_surf(stat_img, fsaverage.pial_right)\n\n##############################################################################\n# Plot the result\n# ---------------\n\nfrom nilearn import plotting\n\nplotting.plot_surf_stat_map(fsaverage.infl_right, texture, hemi='right',\n title='Surface right hemisphere', colorbar=True,\n threshold=1., bg_map=fsaverage.sulc_right)\n\n##############################################################################\n# Plot 3D image for comparison\n# ----------------------------\n\nplotting.plot_glass_brain(stat_img, display_mode='r', plot_abs=False,\n title='Glass brain', threshold=2.)\n\nplotting.plot_stat_map(stat_img, display_mode='x', threshold=1.,\n cut_coords=range(0, 51, 10), title='Slices')\n\n##############################################################################\n# Use an atlas and choose regions to outline\n# ------------------------------------------\n\nimport numpy as np\n\ndestrieux_atlas = datasets.fetch_atlas_surf_destrieux()\nparcellation = destrieux_atlas['map_right']\n\n# these are the regions we want to outline\nregions_dict = {b'G_postcentral': 'Postcentral gyrus',\n b'G_precentral': 'Precentral gyrus'}\n\n# get indices in atlas for these labels\nregions_indices = [np.where(np.array(destrieux_atlas['labels']) == region)[0][0]\n for region in regions_dict]\n\nlabels = list(regions_dict.values())\n\n##############################################################################\n# Display outlines of the regions of interest on top of a statistical map\n# -----------------------------------------------------------------------\n\nfigure = plotting.plot_surf_stat_map(fsaverage.infl_right, texture, hemi='right',\n title='Surface right hemisphere',\n colorbar=True, threshold=1.,\n bg_map=fsaverage.sulc_right)\n\nplotting.plot_surf_contours(fsaverage.infl_right, parcellation, labels=labels,\n levels=regions_indices, figure=figure, legend=True,\n colors=['g', 'k'])\nplotting.show()\n\n##############################################################################\n# Plot with higher-resolution mesh\n# --------------------------------\n#\n# `fetch_surf_fsaverage` takes a \"mesh\" argument which specifies\n# wether to fetch the low-resolution fsaverage5 mesh, or the high-resolution\n# fsaverage mesh. using mesh=\"fsaverage\" will result in more memory usage and\n# computation time, but finer visualizations.\n\nbig_fsaverage = datasets.fetch_surf_fsaverage('fsaverage')\nbig_texture = surface.vol_to_surf(stat_img, big_fsaverage.pial_right)\n\nplotting.plot_surf_stat_map(big_fsaverage.infl_right,\n big_texture, hemi='right', colorbar=True,\n title='Surface right hemisphere: fine mesh',\n threshold=1., bg_map=big_fsaverage.sulc_right)\n\n\n##############################################################################\n# Plot multiple views of the 3D volume on a surface\n# -------------------------------------------------\n#\n# *plot_img_on_surf* takes a statistical map and projects it onto a surface.\n# It supports multiple choices of orientations, and can plot either one or both\n# hemispheres. If no *surf_mesh* is given, *plot_img_on_surf* projects the\n# images onto `FreeSurfer <https://surfer.nmr.mgh.harvard.edu/>`_\\'s\n# fsaverage5.\n\nplotting.plot_img_on_surf(stat_img,\n views=['lateral', 'medial'],\n hemispheres=['left', 'right'],\n colorbar=True)\nplotting.show()\n\n##############################################################################\n# 3D visualization in a web browser\n# ---------------------------------\n# An alternative to :func:`nilearn.plotting.plot_surf_stat_map` is to use\n# :func:`nilearn.plotting.view_surf` or\n# :func:`nilearn.plotting.view_img_on_surf` that give more interactive\n# visualizations in a web browser. See :ref:`interactive-surface-plotting` for\n# more details.\n\nview = plotting.view_surf(fsaverage.infl_right, texture, threshold='90%',\n bg_map=fsaverage.sulc_right)\n\n# In a Jupyter notebook, if ``view`` is the output of a cell, it will\n# be displayed below the cell\nview\n\n##############################################################################\n\n# uncomment this to open the plot in a web browser:\n# view.open_in_browser()\n\n##############################################################################\n# We don't need to do the projection ourselves, we can use view_img_on_surf:\n\nview = plotting.view_img_on_surf(stat_img, threshold='90%')\n# view.open_in_browser()\n\nview\n\n##############################################################################\n# Impact of plot parameters on visualization\n# ------------------------------------------\n# You can specify arguments to be passed on to the function\n# :func:`nilearn.plotting.vol_to_surf` using `vol_to_surf_kwargs`. This allows\n# fine-grained control of how the input 3D image is resampled and interpolated -\n# for example if you are viewing a volumetric atlas, you would want to avoid\n# averaging the labels between neighboring regions. Using nearest-neighbor\n# interpolation with zero radius will achieve this.\n\ndestrieux = datasets.fetch_atlas_destrieux_2009()\n\nview = plotting.view_img_on_surf(\n destrieux.maps,\n surf_mesh=\"fsaverage\",\n vol_to_surf_kwargs={\"n_samples\": 1, \"radius\": 0.0, \"interpolation\": \"nearest\"},\n symmetric_cmap=False,\n)\n\n# view.open_in_browser()\nview\n", "path": "examples/01_plotting/plot_3d_map_to_surface_projection.py"}], "after_files": [{"content": "\"\"\"\nMaking a surface plot of a 3D statistical map\n=============================================\n\nproject a 3D statistical map onto a cortical mesh using\n:func:`nilearn.surface.vol_to_surf`. Display a surface plot of the projected\nmap using :func:`nilearn.plotting.plot_surf_stat_map` and adding contours of\nregions of interest using :func:`nilearn.plotting.plot_surf_contours`.\n\n\"\"\"\n\n##############################################################################\n# Get a statistical map\n# ---------------------\n\nfrom nilearn import datasets\n\nmotor_images = datasets.fetch_neurovault_motor_task()\nstat_img = motor_images.images[0]\n\n\n##############################################################################\n# Get a cortical mesh\n# -------------------\n\nfsaverage = datasets.fetch_surf_fsaverage()\n\n##############################################################################\n# Sample the 3D data around each node of the mesh\n# -----------------------------------------------\n\nfrom nilearn import surface\n\ntexture = surface.vol_to_surf(stat_img, fsaverage.pial_right)\n\n##############################################################################\n# Plot the result\n# ---------------\n\nfrom nilearn import plotting\n\nplotting.plot_surf_stat_map(fsaverage.infl_right, texture, hemi='right',\n title='Surface right hemisphere', colorbar=True,\n threshold=1., bg_map=fsaverage.sulc_right)\n\n##############################################################################\n# Plot 3D image for comparison\n# ----------------------------\n\nplotting.plot_glass_brain(stat_img, display_mode='r', plot_abs=False,\n title='Glass brain', threshold=2.)\n\nplotting.plot_stat_map(stat_img, display_mode='x', threshold=1.,\n cut_coords=range(0, 51, 10), title='Slices')\n\n##############################################################################\n# Use an atlas and choose regions to outline\n# ------------------------------------------\n\nimport numpy as np\n\ndestrieux_atlas = datasets.fetch_atlas_surf_destrieux()\nparcellation = destrieux_atlas['map_right']\n\n# these are the regions we want to outline\nregions_dict = {b'G_postcentral': 'Postcentral gyrus',\n b'G_precentral': 'Precentral gyrus'}\n\n# get indices in atlas for these labels\nregions_indices = [np.where(np.array(destrieux_atlas['labels']) == region)[0][0]\n for region in regions_dict]\n\nlabels = list(regions_dict.values())\n\n##############################################################################\n# Display outlines of the regions of interest on top of a statistical map\n# -----------------------------------------------------------------------\n\nfigure = plotting.plot_surf_stat_map(fsaverage.infl_right, texture, hemi='right',\n title='Surface right hemisphere',\n colorbar=True, threshold=1.,\n bg_map=fsaverage.sulc_right)\n\nplotting.plot_surf_contours(fsaverage.infl_right, parcellation, labels=labels,\n levels=regions_indices, figure=figure, legend=True,\n colors=['g', 'k'])\nplotting.show()\n\n##############################################################################\n# Plot with higher-resolution mesh\n# --------------------------------\n#\n# `fetch_surf_fsaverage` takes a \"mesh\" argument which specifies\n# wether to fetch the low-resolution fsaverage5 mesh, or the high-resolution\n# fsaverage mesh. using mesh=\"fsaverage\" will result in more memory usage and\n# computation time, but finer visualizations.\n\nbig_fsaverage = datasets.fetch_surf_fsaverage('fsaverage')\nbig_texture = surface.vol_to_surf(stat_img, big_fsaverage.pial_right)\n\nplotting.plot_surf_stat_map(big_fsaverage.infl_right,\n big_texture, hemi='right', colorbar=True,\n title='Surface right hemisphere: fine mesh',\n threshold=1., bg_map=big_fsaverage.sulc_right)\n\n\n##############################################################################\n# Plot multiple views of the 3D volume on a surface\n# -------------------------------------------------\n#\n# *plot_img_on_surf* takes a statistical map and projects it onto a surface.\n# It supports multiple choices of orientations, and can plot either one or both\n# hemispheres. If no *surf_mesh* is given, *plot_img_on_surf* projects the\n# images onto `FreeSurfer <https://surfer.nmr.mgh.harvard.edu/>`_\\'s\n# fsaverage5.\n\nplotting.plot_img_on_surf(stat_img,\n views=['lateral', 'medial'],\n hemispheres=['left', 'right'],\n colorbar=True)\nplotting.show()\n\n##############################################################################\n# 3D visualization in a web browser\n# ---------------------------------\n# An alternative to :func:`nilearn.plotting.plot_surf_stat_map` is to use\n# :func:`nilearn.plotting.view_surf` or\n# :func:`nilearn.plotting.view_img_on_surf` that give more interactive\n# visualizations in a web browser. See :ref:`interactive-surface-plotting` for\n# more details.\n\nview = plotting.view_surf(fsaverage.infl_right, texture, threshold='90%',\n bg_map=fsaverage.sulc_right)\n\n# In a Jupyter notebook, if ``view`` is the output of a cell, it will\n# be displayed below the cell\nview\n\n##############################################################################\n\n# uncomment this to open the plot in a web browser:\n# view.open_in_browser()\n\n##############################################################################\n# We don't need to do the projection ourselves, we can use view_img_on_surf:\n\nview = plotting.view_img_on_surf(stat_img, threshold='90%')\n# view.open_in_browser()\n\nview\n\n##############################################################################\n# Impact of plot parameters on visualization\n# ------------------------------------------\n# You can specify arguments to be passed on to the function\n# :func:`nilearn.surface.vol_to_surf` using `vol_to_surf_kwargs`. This allows\n# fine-grained control of how the input 3D image is resampled and interpolated -\n# for example if you are viewing a volumetric atlas, you would want to avoid\n# averaging the labels between neighboring regions. Using nearest-neighbor\n# interpolation with zero radius will achieve this.\n\ndestrieux = datasets.fetch_atlas_destrieux_2009()\n\nview = plotting.view_img_on_surf(\n destrieux.maps,\n surf_mesh=\"fsaverage\",\n vol_to_surf_kwargs={\"n_samples\": 1, \"radius\": 0.0, \"interpolation\": \"nearest\"},\n symmetric_cmap=False,\n)\n\n# view.open_in_browser()\nview\n", "path": "examples/01_plotting/plot_3d_map_to_surface_projection.py"}]}
| 2,038 | 210 |
gh_patches_debug_7128
|
rasdani/github-patches
|
git_diff
|
CTFd__CTFd-2419
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dynamic challenges do not show a Next Challenge
<!--
If this is a bug report please fill out the template below.
If this is a feature request please describe the behavior that you'd like to see.
-->
**Environment**:
- CTFd Version/Commit: 3.6.0/8ead306f8b57c059192cd8b137f37ee41a078a41
- Operating System: All
- Web Browser and Version: All
**What happened?**
TLDR: *dynamic* challenges do not serve `next_id` to the frontend.
**How to reproduce your issue**
1. I created two challenges A and B with dynamic scoring.
2. I opened the admin configuration for challenge A.
3. I clicked "Next"
4. I selected challenge B from the dropdown.
5. I clicked the "Save" button.
6. The input field is empty.
**What did you expect to happen?**
The input field shows "Challenge B".
**Any associated stack traces or error logs**
The issue arises from the lack of `next_id` field in API responses for dynamic challenges [here](https://github.com/CTFd/CTFd/blob/8ead306f8b57c059192cd8b137f37ee41a078a41/CTFd/plugins/dynamic_challenges/__init__.py#L60-L89).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/plugins/dynamic_challenges/__init__.py`
Content:
```
1 from flask import Blueprint
2
3 from CTFd.models import Challenges, db
4 from CTFd.plugins import register_plugin_assets_directory
5 from CTFd.plugins.challenges import CHALLENGE_CLASSES, BaseChallenge
6 from CTFd.plugins.dynamic_challenges.decay import DECAY_FUNCTIONS, logarithmic
7 from CTFd.plugins.migrations import upgrade
8
9
10 class DynamicChallenge(Challenges):
11 __mapper_args__ = {"polymorphic_identity": "dynamic"}
12 id = db.Column(
13 db.Integer, db.ForeignKey("challenges.id", ondelete="CASCADE"), primary_key=True
14 )
15 initial = db.Column(db.Integer, default=0)
16 minimum = db.Column(db.Integer, default=0)
17 decay = db.Column(db.Integer, default=0)
18 function = db.Column(db.String(32), default="logarithmic")
19
20 def __init__(self, *args, **kwargs):
21 super(DynamicChallenge, self).__init__(**kwargs)
22 self.value = kwargs["initial"]
23
24
25 class DynamicValueChallenge(BaseChallenge):
26 id = "dynamic" # Unique identifier used to register challenges
27 name = "dynamic" # Name of a challenge type
28 templates = (
29 { # Handlebars templates used for each aspect of challenge editing & viewing
30 "create": "/plugins/dynamic_challenges/assets/create.html",
31 "update": "/plugins/dynamic_challenges/assets/update.html",
32 "view": "/plugins/dynamic_challenges/assets/view.html",
33 }
34 )
35 scripts = { # Scripts that are loaded when a template is loaded
36 "create": "/plugins/dynamic_challenges/assets/create.js",
37 "update": "/plugins/dynamic_challenges/assets/update.js",
38 "view": "/plugins/dynamic_challenges/assets/view.js",
39 }
40 # Route at which files are accessible. This must be registered using register_plugin_assets_directory()
41 route = "/plugins/dynamic_challenges/assets/"
42 # Blueprint used to access the static_folder directory.
43 blueprint = Blueprint(
44 "dynamic_challenges",
45 __name__,
46 template_folder="templates",
47 static_folder="assets",
48 )
49 challenge_model = DynamicChallenge
50
51 @classmethod
52 def calculate_value(cls, challenge):
53 f = DECAY_FUNCTIONS.get(challenge.function, logarithmic)
54 value = f(challenge)
55
56 challenge.value = value
57 db.session.commit()
58 return challenge
59
60 @classmethod
61 def read(cls, challenge):
62 """
63 This method is in used to access the data of a challenge in a format processable by the front end.
64
65 :param challenge:
66 :return: Challenge object, data dictionary to be returned to the user
67 """
68 challenge = DynamicChallenge.query.filter_by(id=challenge.id).first()
69 data = {
70 "id": challenge.id,
71 "name": challenge.name,
72 "value": challenge.value,
73 "initial": challenge.initial,
74 "decay": challenge.decay,
75 "minimum": challenge.minimum,
76 "description": challenge.description,
77 "connection_info": challenge.connection_info,
78 "category": challenge.category,
79 "state": challenge.state,
80 "max_attempts": challenge.max_attempts,
81 "type": challenge.type,
82 "type_data": {
83 "id": cls.id,
84 "name": cls.name,
85 "templates": cls.templates,
86 "scripts": cls.scripts,
87 },
88 }
89 return data
90
91 @classmethod
92 def update(cls, challenge, request):
93 """
94 This method is used to update the information associated with a challenge. This should be kept strictly to the
95 Challenges table and any child tables.
96
97 :param challenge:
98 :param request:
99 :return:
100 """
101 data = request.form or request.get_json()
102
103 for attr, value in data.items():
104 # We need to set these to floats so that the next operations don't operate on strings
105 if attr in ("initial", "minimum", "decay"):
106 value = float(value)
107 setattr(challenge, attr, value)
108
109 return DynamicValueChallenge.calculate_value(challenge)
110
111 @classmethod
112 def solve(cls, user, team, challenge, request):
113 super().solve(user, team, challenge, request)
114
115 DynamicValueChallenge.calculate_value(challenge)
116
117
118 def load(app):
119 upgrade(plugin_name="dynamic_challenges")
120 CHALLENGE_CLASSES["dynamic"] = DynamicValueChallenge
121 register_plugin_assets_directory(
122 app, base_path="/plugins/dynamic_challenges/assets/"
123 )
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/CTFd/plugins/dynamic_challenges/__init__.py b/CTFd/plugins/dynamic_challenges/__init__.py
--- a/CTFd/plugins/dynamic_challenges/__init__.py
+++ b/CTFd/plugins/dynamic_challenges/__init__.py
@@ -75,6 +75,7 @@
"minimum": challenge.minimum,
"description": challenge.description,
"connection_info": challenge.connection_info,
+ "next_id": challenge.next_id,
"category": challenge.category,
"state": challenge.state,
"max_attempts": challenge.max_attempts,
|
{"golden_diff": "diff --git a/CTFd/plugins/dynamic_challenges/__init__.py b/CTFd/plugins/dynamic_challenges/__init__.py\n--- a/CTFd/plugins/dynamic_challenges/__init__.py\n+++ b/CTFd/plugins/dynamic_challenges/__init__.py\n@@ -75,6 +75,7 @@\n \"minimum\": challenge.minimum,\n \"description\": challenge.description,\n \"connection_info\": challenge.connection_info,\n+ \"next_id\": challenge.next_id,\n \"category\": challenge.category,\n \"state\": challenge.state,\n \"max_attempts\": challenge.max_attempts,\n", "issue": "Dynamic challenges do not show a Next Challenge\n<!--\r\nIf this is a bug report please fill out the template below.\r\n\r\nIf this is a feature request please describe the behavior that you'd like to see.\r\n-->\r\n\r\n**Environment**:\r\n\r\n- CTFd Version/Commit: 3.6.0/8ead306f8b57c059192cd8b137f37ee41a078a41\r\n- Operating System: All\r\n- Web Browser and Version: All\r\n\r\n**What happened?**\r\n\r\nTLDR: *dynamic* challenges do not serve `next_id` to the frontend.\r\n\r\n**How to reproduce your issue**\r\n\r\n1. I created two challenges A and B with dynamic scoring.\r\n2. I opened the admin configuration for challenge A.\r\n3. I clicked \"Next\"\r\n4. I selected challenge B from the dropdown.\r\n5. I clicked the \"Save\" button.\r\n6. The input field is empty.\r\n\r\n**What did you expect to happen?**\r\n\r\nThe input field shows \"Challenge B\".\r\n\r\n**Any associated stack traces or error logs**\r\n\r\nThe issue arises from the lack of `next_id` field in API responses for dynamic challenges [here](https://github.com/CTFd/CTFd/blob/8ead306f8b57c059192cd8b137f37ee41a078a41/CTFd/plugins/dynamic_challenges/__init__.py#L60-L89).\r\n\n", "before_files": [{"content": "from flask import Blueprint\n\nfrom CTFd.models import Challenges, db\nfrom CTFd.plugins import register_plugin_assets_directory\nfrom CTFd.plugins.challenges import CHALLENGE_CLASSES, BaseChallenge\nfrom CTFd.plugins.dynamic_challenges.decay import DECAY_FUNCTIONS, logarithmic\nfrom CTFd.plugins.migrations import upgrade\n\n\nclass DynamicChallenge(Challenges):\n __mapper_args__ = {\"polymorphic_identity\": \"dynamic\"}\n id = db.Column(\n db.Integer, db.ForeignKey(\"challenges.id\", ondelete=\"CASCADE\"), primary_key=True\n )\n initial = db.Column(db.Integer, default=0)\n minimum = db.Column(db.Integer, default=0)\n decay = db.Column(db.Integer, default=0)\n function = db.Column(db.String(32), default=\"logarithmic\")\n\n def __init__(self, *args, **kwargs):\n super(DynamicChallenge, self).__init__(**kwargs)\n self.value = kwargs[\"initial\"]\n\n\nclass DynamicValueChallenge(BaseChallenge):\n id = \"dynamic\" # Unique identifier used to register challenges\n name = \"dynamic\" # Name of a challenge type\n templates = (\n { # Handlebars templates used for each aspect of challenge editing & viewing\n \"create\": \"/plugins/dynamic_challenges/assets/create.html\",\n \"update\": \"/plugins/dynamic_challenges/assets/update.html\",\n \"view\": \"/plugins/dynamic_challenges/assets/view.html\",\n }\n )\n scripts = { # Scripts that are loaded when a template is loaded\n \"create\": \"/plugins/dynamic_challenges/assets/create.js\",\n \"update\": \"/plugins/dynamic_challenges/assets/update.js\",\n \"view\": \"/plugins/dynamic_challenges/assets/view.js\",\n }\n # Route at which files are accessible. This must be registered using register_plugin_assets_directory()\n route = \"/plugins/dynamic_challenges/assets/\"\n # Blueprint used to access the static_folder directory.\n blueprint = Blueprint(\n \"dynamic_challenges\",\n __name__,\n template_folder=\"templates\",\n static_folder=\"assets\",\n )\n challenge_model = DynamicChallenge\n\n @classmethod\n def calculate_value(cls, challenge):\n f = DECAY_FUNCTIONS.get(challenge.function, logarithmic)\n value = f(challenge)\n\n challenge.value = value\n db.session.commit()\n return challenge\n\n @classmethod\n def read(cls, challenge):\n \"\"\"\n This method is in used to access the data of a challenge in a format processable by the front end.\n\n :param challenge:\n :return: Challenge object, data dictionary to be returned to the user\n \"\"\"\n challenge = DynamicChallenge.query.filter_by(id=challenge.id).first()\n data = {\n \"id\": challenge.id,\n \"name\": challenge.name,\n \"value\": challenge.value,\n \"initial\": challenge.initial,\n \"decay\": challenge.decay,\n \"minimum\": challenge.minimum,\n \"description\": challenge.description,\n \"connection_info\": challenge.connection_info,\n \"category\": challenge.category,\n \"state\": challenge.state,\n \"max_attempts\": challenge.max_attempts,\n \"type\": challenge.type,\n \"type_data\": {\n \"id\": cls.id,\n \"name\": cls.name,\n \"templates\": cls.templates,\n \"scripts\": cls.scripts,\n },\n }\n return data\n\n @classmethod\n def update(cls, challenge, request):\n \"\"\"\n This method is used to update the information associated with a challenge. This should be kept strictly to the\n Challenges table and any child tables.\n\n :param challenge:\n :param request:\n :return:\n \"\"\"\n data = request.form or request.get_json()\n\n for attr, value in data.items():\n # We need to set these to floats so that the next operations don't operate on strings\n if attr in (\"initial\", \"minimum\", \"decay\"):\n value = float(value)\n setattr(challenge, attr, value)\n\n return DynamicValueChallenge.calculate_value(challenge)\n\n @classmethod\n def solve(cls, user, team, challenge, request):\n super().solve(user, team, challenge, request)\n\n DynamicValueChallenge.calculate_value(challenge)\n\n\ndef load(app):\n upgrade(plugin_name=\"dynamic_challenges\")\n CHALLENGE_CLASSES[\"dynamic\"] = DynamicValueChallenge\n register_plugin_assets_directory(\n app, base_path=\"/plugins/dynamic_challenges/assets/\"\n )\n", "path": "CTFd/plugins/dynamic_challenges/__init__.py"}], "after_files": [{"content": "from flask import Blueprint\n\nfrom CTFd.models import Challenges, db\nfrom CTFd.plugins import register_plugin_assets_directory\nfrom CTFd.plugins.challenges import CHALLENGE_CLASSES, BaseChallenge\nfrom CTFd.plugins.dynamic_challenges.decay import DECAY_FUNCTIONS, logarithmic\nfrom CTFd.plugins.migrations import upgrade\n\n\nclass DynamicChallenge(Challenges):\n __mapper_args__ = {\"polymorphic_identity\": \"dynamic\"}\n id = db.Column(\n db.Integer, db.ForeignKey(\"challenges.id\", ondelete=\"CASCADE\"), primary_key=True\n )\n initial = db.Column(db.Integer, default=0)\n minimum = db.Column(db.Integer, default=0)\n decay = db.Column(db.Integer, default=0)\n function = db.Column(db.String(32), default=\"logarithmic\")\n\n def __init__(self, *args, **kwargs):\n super(DynamicChallenge, self).__init__(**kwargs)\n self.value = kwargs[\"initial\"]\n\n\nclass DynamicValueChallenge(BaseChallenge):\n id = \"dynamic\" # Unique identifier used to register challenges\n name = \"dynamic\" # Name of a challenge type\n templates = (\n { # Handlebars templates used for each aspect of challenge editing & viewing\n \"create\": \"/plugins/dynamic_challenges/assets/create.html\",\n \"update\": \"/plugins/dynamic_challenges/assets/update.html\",\n \"view\": \"/plugins/dynamic_challenges/assets/view.html\",\n }\n )\n scripts = { # Scripts that are loaded when a template is loaded\n \"create\": \"/plugins/dynamic_challenges/assets/create.js\",\n \"update\": \"/plugins/dynamic_challenges/assets/update.js\",\n \"view\": \"/plugins/dynamic_challenges/assets/view.js\",\n }\n # Route at which files are accessible. This must be registered using register_plugin_assets_directory()\n route = \"/plugins/dynamic_challenges/assets/\"\n # Blueprint used to access the static_folder directory.\n blueprint = Blueprint(\n \"dynamic_challenges\",\n __name__,\n template_folder=\"templates\",\n static_folder=\"assets\",\n )\n challenge_model = DynamicChallenge\n\n @classmethod\n def calculate_value(cls, challenge):\n f = DECAY_FUNCTIONS.get(challenge.function, logarithmic)\n value = f(challenge)\n\n challenge.value = value\n db.session.commit()\n return challenge\n\n @classmethod\n def read(cls, challenge):\n \"\"\"\n This method is in used to access the data of a challenge in a format processable by the front end.\n\n :param challenge:\n :return: Challenge object, data dictionary to be returned to the user\n \"\"\"\n challenge = DynamicChallenge.query.filter_by(id=challenge.id).first()\n data = {\n \"id\": challenge.id,\n \"name\": challenge.name,\n \"value\": challenge.value,\n \"initial\": challenge.initial,\n \"decay\": challenge.decay,\n \"minimum\": challenge.minimum,\n \"description\": challenge.description,\n \"connection_info\": challenge.connection_info,\n \"next_id\": challenge.next_id,\n \"category\": challenge.category,\n \"state\": challenge.state,\n \"max_attempts\": challenge.max_attempts,\n \"type\": challenge.type,\n \"type_data\": {\n \"id\": cls.id,\n \"name\": cls.name,\n \"templates\": cls.templates,\n \"scripts\": cls.scripts,\n },\n }\n return data\n\n @classmethod\n def update(cls, challenge, request):\n \"\"\"\n This method is used to update the information associated with a challenge. This should be kept strictly to the\n Challenges table and any child tables.\n\n :param challenge:\n :param request:\n :return:\n \"\"\"\n data = request.form or request.get_json()\n\n for attr, value in data.items():\n # We need to set these to floats so that the next operations don't operate on strings\n if attr in (\"initial\", \"minimum\", \"decay\"):\n value = float(value)\n setattr(challenge, attr, value)\n\n return DynamicValueChallenge.calculate_value(challenge)\n\n @classmethod\n def solve(cls, user, team, challenge, request):\n super().solve(user, team, challenge, request)\n\n DynamicValueChallenge.calculate_value(challenge)\n\n\ndef load(app):\n upgrade(plugin_name=\"dynamic_challenges\")\n CHALLENGE_CLASSES[\"dynamic\"] = DynamicValueChallenge\n register_plugin_assets_directory(\n app, base_path=\"/plugins/dynamic_challenges/assets/\"\n )\n", "path": "CTFd/plugins/dynamic_challenges/__init__.py"}]}
| 1,799 | 128 |
gh_patches_debug_64222
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-1313
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
HTTP_PROXY variable with username and empty password not supported
Scrapy doesn't support proxy authentication when the password is empty when using the HTTP_PROXY environment variable to supply the proxy argument.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/downloadermiddlewares/httpproxy.py`
Content:
```
1 import base64
2 from six.moves.urllib.request import getproxies, proxy_bypass
3 from six.moves.urllib.parse import unquote
4 try:
5 from urllib2 import _parse_proxy
6 except ImportError:
7 from urllib.request import _parse_proxy
8 from six.moves.urllib.parse import urlunparse
9
10 from scrapy.utils.httpobj import urlparse_cached
11 from scrapy.exceptions import NotConfigured
12
13
14 class HttpProxyMiddleware(object):
15
16 def __init__(self):
17 self.proxies = {}
18 for type, url in getproxies().items():
19 self.proxies[type] = self._get_proxy(url, type)
20
21 if not self.proxies:
22 raise NotConfigured
23
24 def _get_proxy(self, url, orig_type):
25 proxy_type, user, password, hostport = _parse_proxy(url)
26 proxy_url = urlunparse((proxy_type or orig_type, hostport, '', '', '', ''))
27
28 if user and password:
29 user_pass = '%s:%s' % (unquote(user), unquote(password))
30 creds = base64.b64encode(user_pass).strip()
31 else:
32 creds = None
33
34 return creds, proxy_url
35
36 def process_request(self, request, spider):
37 # ignore if proxy is already seted
38 if 'proxy' in request.meta:
39 return
40
41 parsed = urlparse_cached(request)
42 scheme = parsed.scheme
43
44 # 'no_proxy' is only supported by http schemes
45 if scheme in ('http', 'https') and proxy_bypass(parsed.hostname):
46 return
47
48 if scheme in self.proxies:
49 self._set_proxy(request, scheme)
50
51 def _set_proxy(self, request, scheme):
52 creds, proxy = self.proxies[scheme]
53 request.meta['proxy'] = proxy
54 if creds:
55 request.headers['Proxy-Authorization'] = 'Basic ' + creds
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/downloadermiddlewares/httpproxy.py b/scrapy/downloadermiddlewares/httpproxy.py
--- a/scrapy/downloadermiddlewares/httpproxy.py
+++ b/scrapy/downloadermiddlewares/httpproxy.py
@@ -25,7 +25,7 @@
proxy_type, user, password, hostport = _parse_proxy(url)
proxy_url = urlunparse((proxy_type or orig_type, hostport, '', '', '', ''))
- if user and password:
+ if user:
user_pass = '%s:%s' % (unquote(user), unquote(password))
creds = base64.b64encode(user_pass).strip()
else:
|
{"golden_diff": "diff --git a/scrapy/downloadermiddlewares/httpproxy.py b/scrapy/downloadermiddlewares/httpproxy.py\n--- a/scrapy/downloadermiddlewares/httpproxy.py\n+++ b/scrapy/downloadermiddlewares/httpproxy.py\n@@ -25,7 +25,7 @@\n proxy_type, user, password, hostport = _parse_proxy(url)\n proxy_url = urlunparse((proxy_type or orig_type, hostport, '', '', '', ''))\n \n- if user and password:\n+ if user:\n user_pass = '%s:%s' % (unquote(user), unquote(password))\n creds = base64.b64encode(user_pass).strip()\n else:\n", "issue": "HTTP_PROXY variable with username and empty password not supported\nScrapy doesn't support proxy authentication when the password is empty when using the HTTP_PROXY environment variable to supply the proxy argument.\n\n", "before_files": [{"content": "import base64\nfrom six.moves.urllib.request import getproxies, proxy_bypass\nfrom six.moves.urllib.parse import unquote\ntry:\n from urllib2 import _parse_proxy\nexcept ImportError:\n from urllib.request import _parse_proxy\nfrom six.moves.urllib.parse import urlunparse\n\nfrom scrapy.utils.httpobj import urlparse_cached\nfrom scrapy.exceptions import NotConfigured\n\n\nclass HttpProxyMiddleware(object):\n\n def __init__(self):\n self.proxies = {}\n for type, url in getproxies().items():\n self.proxies[type] = self._get_proxy(url, type)\n\n if not self.proxies:\n raise NotConfigured\n\n def _get_proxy(self, url, orig_type):\n proxy_type, user, password, hostport = _parse_proxy(url)\n proxy_url = urlunparse((proxy_type or orig_type, hostport, '', '', '', ''))\n\n if user and password:\n user_pass = '%s:%s' % (unquote(user), unquote(password))\n creds = base64.b64encode(user_pass).strip()\n else:\n creds = None\n\n return creds, proxy_url\n\n def process_request(self, request, spider):\n # ignore if proxy is already seted\n if 'proxy' in request.meta:\n return\n\n parsed = urlparse_cached(request)\n scheme = parsed.scheme\n\n # 'no_proxy' is only supported by http schemes\n if scheme in ('http', 'https') and proxy_bypass(parsed.hostname):\n return\n\n if scheme in self.proxies:\n self._set_proxy(request, scheme)\n\n def _set_proxy(self, request, scheme):\n creds, proxy = self.proxies[scheme]\n request.meta['proxy'] = proxy\n if creds:\n request.headers['Proxy-Authorization'] = 'Basic ' + creds\n", "path": "scrapy/downloadermiddlewares/httpproxy.py"}], "after_files": [{"content": "import base64\nfrom six.moves.urllib.request import getproxies, proxy_bypass\nfrom six.moves.urllib.parse import unquote\ntry:\n from urllib2 import _parse_proxy\nexcept ImportError:\n from urllib.request import _parse_proxy\nfrom six.moves.urllib.parse import urlunparse\n\nfrom scrapy.utils.httpobj import urlparse_cached\nfrom scrapy.exceptions import NotConfigured\n\n\nclass HttpProxyMiddleware(object):\n\n def __init__(self):\n self.proxies = {}\n for type, url in getproxies().items():\n self.proxies[type] = self._get_proxy(url, type)\n\n if not self.proxies:\n raise NotConfigured\n\n def _get_proxy(self, url, orig_type):\n proxy_type, user, password, hostport = _parse_proxy(url)\n proxy_url = urlunparse((proxy_type or orig_type, hostport, '', '', '', ''))\n\n if user:\n user_pass = '%s:%s' % (unquote(user), unquote(password))\n creds = base64.b64encode(user_pass).strip()\n else:\n creds = None\n\n return creds, proxy_url\n\n def process_request(self, request, spider):\n # ignore if proxy is already seted\n if 'proxy' in request.meta:\n return\n\n parsed = urlparse_cached(request)\n scheme = parsed.scheme\n\n # 'no_proxy' is only supported by http schemes\n if scheme in ('http', 'https') and proxy_bypass(parsed.hostname):\n return\n\n if scheme in self.proxies:\n self._set_proxy(request, scheme)\n\n def _set_proxy(self, request, scheme):\n creds, proxy = self.proxies[scheme]\n request.meta['proxy'] = proxy\n if creds:\n request.headers['Proxy-Authorization'] = 'Basic ' + creds\n", "path": "scrapy/downloadermiddlewares/httpproxy.py"}]}
| 808 | 156 |
gh_patches_debug_21249
|
rasdani/github-patches
|
git_diff
|
statsmodels__statsmodels-3439
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
API/DOCS: newer correlation tools are missing in api and docs
`stats.api` and http://www.statsmodels.org/dev/stats.html#moment-helpers
only shows the original functions, not those added by Kerby
(I'm trying to figure out where we should put new correlation and covariance function, hypothesis tests, robust, regularized covariance and correlation.)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `statsmodels/stats/api.py`
Content:
```
1 # pylint: disable=W0611
2 from . import diagnostic
3 from .diagnostic import (
4 acorr_ljungbox, acorr_breusch_godfrey,
5 CompareCox, compare_cox, CompareJ, compare_j,
6 HetGoldfeldQuandt, het_goldfeldquandt,
7 het_breuschpagan, het_white, het_arch,
8 linear_harvey_collier, linear_rainbow, linear_lm,
9 breaks_cusumolsresid, breaks_hansen, recursive_olsresiduals,
10 unitroot_adf,
11 normal_ad, lilliefors,
12 # deprecated because of misspelling:
13 lillifors, het_breushpagan, acorr_breush_godfrey
14 )
15
16 from . import multicomp
17 from .multitest import (multipletests, fdrcorrection, fdrcorrection_twostage)
18 from .multicomp import tukeyhsd
19 from . import gof
20 from .gof import (powerdiscrepancy, gof_chisquare_discrete,
21 chisquare_effectsize)
22 from . import stattools
23 from .stattools import durbin_watson, omni_normtest, jarque_bera
24
25 from . import sandwich_covariance
26 from .sandwich_covariance import (
27 cov_cluster, cov_cluster_2groups, cov_nw_panel,
28 cov_hac, cov_white_simple,
29 cov_hc0, cov_hc1, cov_hc2, cov_hc3,
30 se_cov
31 )
32
33 from .weightstats import (DescrStatsW, CompareMeans, ttest_ind, ttost_ind,
34 ttost_paired, ztest, ztost, zconfint)
35
36 from .proportion import (binom_test_reject_interval, binom_test,
37 binom_tost, binom_tost_reject_interval,
38 power_binom_tost, power_ztost_prop,
39 proportion_confint, proportion_effectsize,
40 proportions_chisquare, proportions_chisquare_allpairs,
41 proportions_chisquare_pairscontrol, proportions_ztest,
42 proportions_ztost)
43
44 from .power import (TTestPower, TTestIndPower, GofChisquarePower,
45 NormalIndPower, FTestAnovaPower, FTestPower,
46 tt_solve_power, tt_ind_solve_power, zt_ind_solve_power)
47
48 from .descriptivestats import Describe
49
50 from .anova import anova_lm
51
52 from . import moment_helpers
53 from .correlation_tools import corr_nearest, corr_clipped, cov_nearest
54
55 from statsmodels.sandbox.stats.runs import (Runs, runstest_1samp, runstest_2samp)
56
57 from statsmodels.stats.contingency_tables import (mcnemar, cochrans_q,
58 SquareTable,
59 Table2x2,
60 Table,
61 StratifiedTable)
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/statsmodels/stats/api.py b/statsmodels/stats/api.py
--- a/statsmodels/stats/api.py
+++ b/statsmodels/stats/api.py
@@ -39,7 +39,7 @@
proportion_confint, proportion_effectsize,
proportions_chisquare, proportions_chisquare_allpairs,
proportions_chisquare_pairscontrol, proportions_ztest,
- proportions_ztost)
+ proportions_ztost, multinomial_proportions_confint)
from .power import (TTestPower, TTestIndPower, GofChisquarePower,
NormalIndPower, FTestAnovaPower, FTestPower,
@@ -50,7 +50,9 @@
from .anova import anova_lm
from . import moment_helpers
-from .correlation_tools import corr_nearest, corr_clipped, cov_nearest
+from .correlation_tools import (corr_clipped, corr_nearest,
+ corr_nearest_factor, corr_thresholded, cov_nearest,
+ cov_nearest_factor_homog, FactoredPSDMatrix)
from statsmodels.sandbox.stats.runs import (Runs, runstest_1samp, runstest_2samp)
|
{"golden_diff": "diff --git a/statsmodels/stats/api.py b/statsmodels/stats/api.py\n--- a/statsmodels/stats/api.py\n+++ b/statsmodels/stats/api.py\n@@ -39,7 +39,7 @@\n proportion_confint, proportion_effectsize,\n proportions_chisquare, proportions_chisquare_allpairs,\n proportions_chisquare_pairscontrol, proportions_ztest,\n- proportions_ztost)\n+ proportions_ztost, multinomial_proportions_confint)\n \n from .power import (TTestPower, TTestIndPower, GofChisquarePower,\n NormalIndPower, FTestAnovaPower, FTestPower,\n@@ -50,7 +50,9 @@\n from .anova import anova_lm\n \n from . import moment_helpers\n-from .correlation_tools import corr_nearest, corr_clipped, cov_nearest\n+from .correlation_tools import (corr_clipped, corr_nearest,\n+ corr_nearest_factor, corr_thresholded, cov_nearest,\n+ cov_nearest_factor_homog, FactoredPSDMatrix)\n \n from statsmodels.sandbox.stats.runs import (Runs, runstest_1samp, runstest_2samp)\n", "issue": "API/DOCS: newer correlation tools are missing in api and docs\n`stats.api` and http://www.statsmodels.org/dev/stats.html#moment-helpers\nonly shows the original functions, not those added by Kerby\n\n(I'm trying to figure out where we should put new correlation and covariance function, hypothesis tests, robust, regularized covariance and correlation.)\n\n", "before_files": [{"content": "# pylint: disable=W0611\nfrom . import diagnostic\nfrom .diagnostic import (\n acorr_ljungbox, acorr_breusch_godfrey,\n CompareCox, compare_cox, CompareJ, compare_j,\n HetGoldfeldQuandt, het_goldfeldquandt,\n het_breuschpagan, het_white, het_arch,\n linear_harvey_collier, linear_rainbow, linear_lm,\n breaks_cusumolsresid, breaks_hansen, recursive_olsresiduals,\n unitroot_adf,\n normal_ad, lilliefors,\n # deprecated because of misspelling:\n lillifors, het_breushpagan, acorr_breush_godfrey\n )\n\nfrom . import multicomp\nfrom .multitest import (multipletests, fdrcorrection, fdrcorrection_twostage)\nfrom .multicomp import tukeyhsd\nfrom . import gof\nfrom .gof import (powerdiscrepancy, gof_chisquare_discrete,\n chisquare_effectsize)\nfrom . import stattools\nfrom .stattools import durbin_watson, omni_normtest, jarque_bera\n\nfrom . import sandwich_covariance\nfrom .sandwich_covariance import (\n cov_cluster, cov_cluster_2groups, cov_nw_panel,\n cov_hac, cov_white_simple,\n cov_hc0, cov_hc1, cov_hc2, cov_hc3,\n se_cov\n )\n\nfrom .weightstats import (DescrStatsW, CompareMeans, ttest_ind, ttost_ind,\n ttost_paired, ztest, ztost, zconfint)\n\nfrom .proportion import (binom_test_reject_interval, binom_test,\n binom_tost, binom_tost_reject_interval,\n power_binom_tost, power_ztost_prop,\n proportion_confint, proportion_effectsize,\n proportions_chisquare, proportions_chisquare_allpairs,\n proportions_chisquare_pairscontrol, proportions_ztest,\n proportions_ztost)\n\nfrom .power import (TTestPower, TTestIndPower, GofChisquarePower,\n NormalIndPower, FTestAnovaPower, FTestPower,\n tt_solve_power, tt_ind_solve_power, zt_ind_solve_power)\n\nfrom .descriptivestats import Describe\n\nfrom .anova import anova_lm\n\nfrom . import moment_helpers\nfrom .correlation_tools import corr_nearest, corr_clipped, cov_nearest\n\nfrom statsmodels.sandbox.stats.runs import (Runs, runstest_1samp, runstest_2samp)\n\nfrom statsmodels.stats.contingency_tables import (mcnemar, cochrans_q,\n SquareTable,\n Table2x2,\n Table,\n StratifiedTable)\n", "path": "statsmodels/stats/api.py"}], "after_files": [{"content": "# pylint: disable=W0611\nfrom . import diagnostic\nfrom .diagnostic import (\n acorr_ljungbox, acorr_breusch_godfrey,\n CompareCox, compare_cox, CompareJ, compare_j,\n HetGoldfeldQuandt, het_goldfeldquandt,\n het_breuschpagan, het_white, het_arch,\n linear_harvey_collier, linear_rainbow, linear_lm,\n breaks_cusumolsresid, breaks_hansen, recursive_olsresiduals,\n unitroot_adf,\n normal_ad, lilliefors,\n # deprecated because of misspelling:\n lillifors, het_breushpagan, acorr_breush_godfrey\n )\n\nfrom . import multicomp\nfrom .multitest import (multipletests, fdrcorrection, fdrcorrection_twostage)\nfrom .multicomp import tukeyhsd\nfrom . import gof\nfrom .gof import (powerdiscrepancy, gof_chisquare_discrete,\n chisquare_effectsize)\nfrom . import stattools\nfrom .stattools import durbin_watson, omni_normtest, jarque_bera\n\nfrom . import sandwich_covariance\nfrom .sandwich_covariance import (\n cov_cluster, cov_cluster_2groups, cov_nw_panel,\n cov_hac, cov_white_simple,\n cov_hc0, cov_hc1, cov_hc2, cov_hc3,\n se_cov\n )\n\nfrom .weightstats import (DescrStatsW, CompareMeans, ttest_ind, ttost_ind,\n ttost_paired, ztest, ztost, zconfint)\n\nfrom .proportion import (binom_test_reject_interval, binom_test,\n binom_tost, binom_tost_reject_interval,\n power_binom_tost, power_ztost_prop,\n proportion_confint, proportion_effectsize,\n proportions_chisquare, proportions_chisquare_allpairs,\n proportions_chisquare_pairscontrol, proportions_ztest,\n proportions_ztost, multinomial_proportions_confint)\n\nfrom .power import (TTestPower, TTestIndPower, GofChisquarePower,\n NormalIndPower, FTestAnovaPower, FTestPower,\n tt_solve_power, tt_ind_solve_power, zt_ind_solve_power)\n\nfrom .descriptivestats import Describe\n\nfrom .anova import anova_lm\n\nfrom . import moment_helpers\nfrom .correlation_tools import (corr_clipped, corr_nearest,\n corr_nearest_factor, corr_thresholded, cov_nearest,\n cov_nearest_factor_homog, FactoredPSDMatrix)\n\nfrom statsmodels.sandbox.stats.runs import (Runs, runstest_1samp, runstest_2samp)\n\nfrom statsmodels.stats.contingency_tables import (mcnemar, cochrans_q,\n SquareTable,\n Table2x2,\n Table,\n StratifiedTable)\n", "path": "statsmodels/stats/api.py"}]}
| 1,072 | 254 |
gh_patches_debug_12236
|
rasdani/github-patches
|
git_diff
|
microsoft__playwright-python-547
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Port the postData fix
https://github.com/microsoft/playwright/pull/5736
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `playwright/_impl/_network.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import base64
16 import json
17 import mimetypes
18 from pathlib import Path
19 from types import SimpleNamespace
20 from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Union, cast
21 from urllib import parse
22
23 from playwright._impl._api_structures import ResourceTiming
24 from playwright._impl._api_types import Error
25 from playwright._impl._connection import (
26 ChannelOwner,
27 from_channel,
28 from_nullable_channel,
29 )
30 from playwright._impl._event_context_manager import EventContextManagerImpl
31 from playwright._impl._helper import ContinueParameters, Header, locals_to_params
32 from playwright._impl._wait_helper import WaitHelper
33
34 if TYPE_CHECKING: # pragma: no cover
35 from playwright._impl._frame import Frame
36
37
38 class Request(ChannelOwner):
39 def __init__(
40 self, parent: ChannelOwner, type: str, guid: str, initializer: Dict
41 ) -> None:
42 super().__init__(parent, type, guid, initializer)
43 self._redirected_from: Optional["Request"] = from_nullable_channel(
44 initializer.get("redirectedFrom")
45 )
46 self._redirected_to: Optional["Request"] = None
47 if self._redirected_from:
48 self._redirected_from._redirected_to = self
49 self._failure_text: Optional[str] = None
50 self._timing: ResourceTiming = {
51 "startTime": 0,
52 "domainLookupStart": -1,
53 "domainLookupEnd": -1,
54 "connectStart": -1,
55 "secureConnectionStart": -1,
56 "connectEnd": -1,
57 "requestStart": -1,
58 "responseStart": -1,
59 "responseEnd": -1,
60 }
61 self._headers: Dict[str, str] = parse_headers(self._initializer["headers"])
62
63 @property
64 def url(self) -> str:
65 return self._initializer["url"]
66
67 @property
68 def resource_type(self) -> str:
69 return self._initializer["resourceType"]
70
71 @property
72 def method(self) -> str:
73 return self._initializer["method"]
74
75 @property
76 def post_data(self) -> Optional[str]:
77 data = self.post_data_buffer
78 if not data:
79 return None
80 return data.decode()
81
82 @property
83 def post_data_json(self) -> Optional[Any]:
84 post_data = self.post_data
85 if not post_data:
86 return None
87 content_type = self.headers["content-type"]
88 if not content_type:
89 return None
90 if content_type == "application/x-www-form-urlencoded":
91 return dict(parse.parse_qsl(post_data))
92 return json.loads(post_data)
93
94 @property
95 def post_data_buffer(self) -> Optional[bytes]:
96 b64_content = self._initializer.get("postData")
97 if not b64_content:
98 return None
99 return base64.b64decode(b64_content)
100
101 @property
102 def headers(self) -> Dict[str, str]:
103 return self._headers
104
105 async def response(self) -> Optional["Response"]:
106 return from_nullable_channel(await self._channel.send("response"))
107
108 @property
109 def frame(self) -> "Frame":
110 return from_channel(self._initializer["frame"])
111
112 def is_navigation_request(self) -> bool:
113 return self._initializer["isNavigationRequest"]
114
115 @property
116 def redirected_from(self) -> Optional["Request"]:
117 return self._redirected_from
118
119 @property
120 def redirected_to(self) -> Optional["Request"]:
121 return self._redirected_to
122
123 @property
124 def failure(self) -> Optional[str]:
125 return self._failure_text
126
127 @property
128 def timing(self) -> ResourceTiming:
129 return self._timing
130
131
132 class Route(ChannelOwner):
133 def __init__(
134 self, parent: ChannelOwner, type: str, guid: str, initializer: Dict
135 ) -> None:
136 super().__init__(parent, type, guid, initializer)
137
138 @property
139 def request(self) -> Request:
140 return from_channel(self._initializer["request"])
141
142 async def abort(self, errorCode: str = None) -> None:
143 await self._channel.send("abort", locals_to_params(locals()))
144
145 async def fulfill(
146 self,
147 status: int = None,
148 headers: Dict[str, str] = None,
149 body: Union[str, bytes] = None,
150 path: Union[str, Path] = None,
151 contentType: str = None,
152 ) -> None:
153 params = locals_to_params(locals())
154 length = 0
155 if isinstance(body, str):
156 params["body"] = body
157 params["isBase64"] = False
158 length = len(body.encode())
159 elif isinstance(body, bytes):
160 params["body"] = base64.b64encode(body).decode()
161 params["isBase64"] = True
162 length = len(body)
163 elif path:
164 del params["path"]
165 file_content = Path(path).read_bytes()
166 params["body"] = base64.b64encode(file_content).decode()
167 params["isBase64"] = True
168 length = len(file_content)
169
170 headers = {k.lower(): str(v) for k, v in params.get("headers", {}).items()}
171 if params.get("contentType"):
172 headers["content-type"] = params["contentType"]
173 elif path:
174 headers["content-type"] = (
175 mimetypes.guess_type(str(Path(path)))[0] or "application/octet-stream"
176 )
177 if length and "content-length" not in headers:
178 headers["content-length"] = str(length)
179 params["headers"] = serialize_headers(headers)
180 await self._channel.send("fulfill", params)
181
182 async def continue_(
183 self,
184 url: str = None,
185 method: str = None,
186 headers: Dict[str, str] = None,
187 postData: Union[str, bytes] = None,
188 ) -> None:
189 overrides: ContinueParameters = {}
190 if url:
191 overrides["url"] = url
192 if method:
193 overrides["method"] = method
194 if headers:
195 overrides["headers"] = serialize_headers(headers)
196 if isinstance(postData, str):
197 overrides["postData"] = base64.b64encode(postData.encode()).decode()
198 elif isinstance(postData, bytes):
199 overrides["postData"] = base64.b64encode(postData).decode()
200 await self._channel.send("continue", cast(Any, overrides))
201
202
203 class Response(ChannelOwner):
204 def __init__(
205 self, parent: ChannelOwner, type: str, guid: str, initializer: Dict
206 ) -> None:
207 super().__init__(parent, type, guid, initializer)
208 self._request: Request = from_channel(self._initializer["request"])
209 timing = self._initializer["timing"]
210 self._request._timing["startTime"] = timing["startTime"]
211 self._request._timing["domainLookupStart"] = timing["domainLookupStart"]
212 self._request._timing["domainLookupEnd"] = timing["domainLookupEnd"]
213 self._request._timing["connectStart"] = timing["connectStart"]
214 self._request._timing["secureConnectionStart"] = timing["secureConnectionStart"]
215 self._request._timing["connectEnd"] = timing["connectEnd"]
216 self._request._timing["requestStart"] = timing["requestStart"]
217 self._request._timing["responseStart"] = timing["responseStart"]
218 self._request._headers = parse_headers(self._initializer["requestHeaders"])
219
220 @property
221 def url(self) -> str:
222 return self._initializer["url"]
223
224 @property
225 def ok(self) -> bool:
226 return self._initializer["status"] == 0 or (
227 self._initializer["status"] >= 200 and self._initializer["status"] <= 299
228 )
229
230 @property
231 def status(self) -> int:
232 return self._initializer["status"]
233
234 @property
235 def status_text(self) -> str:
236 return self._initializer["statusText"]
237
238 @property
239 def headers(self) -> Dict[str, str]:
240 return parse_headers(self._initializer["headers"])
241
242 async def finished(self) -> Optional[str]:
243 return await self._channel.send("finished")
244
245 async def body(self) -> bytes:
246 binary = await self._channel.send("body")
247 return base64.b64decode(binary)
248
249 async def text(self) -> str:
250 content = await self.body()
251 return content.decode()
252
253 async def json(self) -> Union[Any]:
254 return json.loads(await self.text())
255
256 @property
257 def request(self) -> Request:
258 return self._request
259
260 @property
261 def frame(self) -> "Frame":
262 return self._request.frame
263
264
265 class WebSocket(ChannelOwner):
266
267 Events = SimpleNamespace(
268 Close="close",
269 FrameReceived="framereceived",
270 FrameSent="framesent",
271 Error="socketerror",
272 )
273
274 def __init__(
275 self, parent: ChannelOwner, type: str, guid: str, initializer: Dict
276 ) -> None:
277 super().__init__(parent, type, guid, initializer)
278 self._is_closed = False
279 self._channel.on(
280 "frameSent",
281 lambda params: self._on_frame_sent(params["opcode"], params["data"]),
282 )
283 self._channel.on(
284 "frameReceived",
285 lambda params: self._on_frame_received(params["opcode"], params["data"]),
286 )
287 self._channel.on(
288 "error", lambda params: self.emit(WebSocket.Events.Error, params["error"])
289 )
290 self._channel.on("close", lambda params: self._on_close())
291
292 @property
293 def url(self) -> str:
294 return self._initializer["url"]
295
296 def expect_event(
297 self,
298 event: str,
299 predicate: Callable = None,
300 timeout: float = None,
301 ) -> EventContextManagerImpl:
302 if timeout is None:
303 timeout = cast(Any, self._parent)._timeout_settings.timeout()
304 wait_helper = WaitHelper(self, f"web_socket.expect_event({event})")
305 wait_helper.reject_on_timeout(
306 timeout, f'Timeout while waiting for event "{event}"'
307 )
308 if event != WebSocket.Events.Close:
309 wait_helper.reject_on_event(
310 self, WebSocket.Events.Close, Error("Socket closed")
311 )
312 if event != WebSocket.Events.Error:
313 wait_helper.reject_on_event(
314 self, WebSocket.Events.Error, Error("Socket error")
315 )
316 wait_helper.reject_on_event(self._parent, "close", Error("Page closed"))
317 wait_helper.wait_for_event(self, event, predicate)
318 return EventContextManagerImpl(wait_helper.result())
319
320 async def wait_for_event(
321 self, event: str, predicate: Callable = None, timeout: float = None
322 ) -> Any:
323 async with self.expect_event(event, predicate, timeout) as event_info:
324 pass
325 return await event_info
326
327 def _on_frame_sent(self, opcode: int, data: str) -> None:
328 if opcode == 2:
329 self.emit(WebSocket.Events.FrameSent, base64.b64decode(data))
330 else:
331 self.emit(WebSocket.Events.FrameSent, data)
332
333 def _on_frame_received(self, opcode: int, data: str) -> None:
334 if opcode == 2:
335 self.emit(WebSocket.Events.FrameReceived, base64.b64decode(data))
336 else:
337 self.emit(WebSocket.Events.FrameReceived, data)
338
339 def is_closed(self) -> bool:
340 return self._is_closed
341
342 def _on_close(self) -> None:
343 self._is_closed = True
344 self.emit(WebSocket.Events.Close)
345
346
347 def serialize_headers(headers: Dict[str, str]) -> List[Header]:
348 return [{"name": name, "value": value} for name, value in headers.items()]
349
350
351 def parse_headers(headers: List[Header]) -> Dict[str, str]:
352 return {header["name"].lower(): header["value"] for header in headers}
353
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/playwright/_impl/_network.py b/playwright/_impl/_network.py
--- a/playwright/_impl/_network.py
+++ b/playwright/_impl/_network.py
@@ -85,11 +85,12 @@
if not post_data:
return None
content_type = self.headers["content-type"]
- if not content_type:
- return None
if content_type == "application/x-www-form-urlencoded":
return dict(parse.parse_qsl(post_data))
- return json.loads(post_data)
+ try:
+ return json.loads(post_data)
+ except Exception:
+ raise Error(f"POST data is not a valid JSON object: {post_data}")
@property
def post_data_buffer(self) -> Optional[bytes]:
|
{"golden_diff": "diff --git a/playwright/_impl/_network.py b/playwright/_impl/_network.py\n--- a/playwright/_impl/_network.py\n+++ b/playwright/_impl/_network.py\n@@ -85,11 +85,12 @@\n if not post_data:\n return None\n content_type = self.headers[\"content-type\"]\n- if not content_type:\n- return None\n if content_type == \"application/x-www-form-urlencoded\":\n return dict(parse.parse_qsl(post_data))\n- return json.loads(post_data)\n+ try:\n+ return json.loads(post_data)\n+ except Exception:\n+ raise Error(f\"POST data is not a valid JSON object: {post_data}\")\n \n @property\n def post_data_buffer(self) -> Optional[bytes]:\n", "issue": "Port the postData fix\nhttps://github.com/microsoft/playwright/pull/5736\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport base64\nimport json\nimport mimetypes\nfrom pathlib import Path\nfrom types import SimpleNamespace\nfrom typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Union, cast\nfrom urllib import parse\n\nfrom playwright._impl._api_structures import ResourceTiming\nfrom playwright._impl._api_types import Error\nfrom playwright._impl._connection import (\n ChannelOwner,\n from_channel,\n from_nullable_channel,\n)\nfrom playwright._impl._event_context_manager import EventContextManagerImpl\nfrom playwright._impl._helper import ContinueParameters, Header, locals_to_params\nfrom playwright._impl._wait_helper import WaitHelper\n\nif TYPE_CHECKING: # pragma: no cover\n from playwright._impl._frame import Frame\n\n\nclass Request(ChannelOwner):\n def __init__(\n self, parent: ChannelOwner, type: str, guid: str, initializer: Dict\n ) -> None:\n super().__init__(parent, type, guid, initializer)\n self._redirected_from: Optional[\"Request\"] = from_nullable_channel(\n initializer.get(\"redirectedFrom\")\n )\n self._redirected_to: Optional[\"Request\"] = None\n if self._redirected_from:\n self._redirected_from._redirected_to = self\n self._failure_text: Optional[str] = None\n self._timing: ResourceTiming = {\n \"startTime\": 0,\n \"domainLookupStart\": -1,\n \"domainLookupEnd\": -1,\n \"connectStart\": -1,\n \"secureConnectionStart\": -1,\n \"connectEnd\": -1,\n \"requestStart\": -1,\n \"responseStart\": -1,\n \"responseEnd\": -1,\n }\n self._headers: Dict[str, str] = parse_headers(self._initializer[\"headers\"])\n\n @property\n def url(self) -> str:\n return self._initializer[\"url\"]\n\n @property\n def resource_type(self) -> str:\n return self._initializer[\"resourceType\"]\n\n @property\n def method(self) -> str:\n return self._initializer[\"method\"]\n\n @property\n def post_data(self) -> Optional[str]:\n data = self.post_data_buffer\n if not data:\n return None\n return data.decode()\n\n @property\n def post_data_json(self) -> Optional[Any]:\n post_data = self.post_data\n if not post_data:\n return None\n content_type = self.headers[\"content-type\"]\n if not content_type:\n return None\n if content_type == \"application/x-www-form-urlencoded\":\n return dict(parse.parse_qsl(post_data))\n return json.loads(post_data)\n\n @property\n def post_data_buffer(self) -> Optional[bytes]:\n b64_content = self._initializer.get(\"postData\")\n if not b64_content:\n return None\n return base64.b64decode(b64_content)\n\n @property\n def headers(self) -> Dict[str, str]:\n return self._headers\n\n async def response(self) -> Optional[\"Response\"]:\n return from_nullable_channel(await self._channel.send(\"response\"))\n\n @property\n def frame(self) -> \"Frame\":\n return from_channel(self._initializer[\"frame\"])\n\n def is_navigation_request(self) -> bool:\n return self._initializer[\"isNavigationRequest\"]\n\n @property\n def redirected_from(self) -> Optional[\"Request\"]:\n return self._redirected_from\n\n @property\n def redirected_to(self) -> Optional[\"Request\"]:\n return self._redirected_to\n\n @property\n def failure(self) -> Optional[str]:\n return self._failure_text\n\n @property\n def timing(self) -> ResourceTiming:\n return self._timing\n\n\nclass Route(ChannelOwner):\n def __init__(\n self, parent: ChannelOwner, type: str, guid: str, initializer: Dict\n ) -> None:\n super().__init__(parent, type, guid, initializer)\n\n @property\n def request(self) -> Request:\n return from_channel(self._initializer[\"request\"])\n\n async def abort(self, errorCode: str = None) -> None:\n await self._channel.send(\"abort\", locals_to_params(locals()))\n\n async def fulfill(\n self,\n status: int = None,\n headers: Dict[str, str] = None,\n body: Union[str, bytes] = None,\n path: Union[str, Path] = None,\n contentType: str = None,\n ) -> None:\n params = locals_to_params(locals())\n length = 0\n if isinstance(body, str):\n params[\"body\"] = body\n params[\"isBase64\"] = False\n length = len(body.encode())\n elif isinstance(body, bytes):\n params[\"body\"] = base64.b64encode(body).decode()\n params[\"isBase64\"] = True\n length = len(body)\n elif path:\n del params[\"path\"]\n file_content = Path(path).read_bytes()\n params[\"body\"] = base64.b64encode(file_content).decode()\n params[\"isBase64\"] = True\n length = len(file_content)\n\n headers = {k.lower(): str(v) for k, v in params.get(\"headers\", {}).items()}\n if params.get(\"contentType\"):\n headers[\"content-type\"] = params[\"contentType\"]\n elif path:\n headers[\"content-type\"] = (\n mimetypes.guess_type(str(Path(path)))[0] or \"application/octet-stream\"\n )\n if length and \"content-length\" not in headers:\n headers[\"content-length\"] = str(length)\n params[\"headers\"] = serialize_headers(headers)\n await self._channel.send(\"fulfill\", params)\n\n async def continue_(\n self,\n url: str = None,\n method: str = None,\n headers: Dict[str, str] = None,\n postData: Union[str, bytes] = None,\n ) -> None:\n overrides: ContinueParameters = {}\n if url:\n overrides[\"url\"] = url\n if method:\n overrides[\"method\"] = method\n if headers:\n overrides[\"headers\"] = serialize_headers(headers)\n if isinstance(postData, str):\n overrides[\"postData\"] = base64.b64encode(postData.encode()).decode()\n elif isinstance(postData, bytes):\n overrides[\"postData\"] = base64.b64encode(postData).decode()\n await self._channel.send(\"continue\", cast(Any, overrides))\n\n\nclass Response(ChannelOwner):\n def __init__(\n self, parent: ChannelOwner, type: str, guid: str, initializer: Dict\n ) -> None:\n super().__init__(parent, type, guid, initializer)\n self._request: Request = from_channel(self._initializer[\"request\"])\n timing = self._initializer[\"timing\"]\n self._request._timing[\"startTime\"] = timing[\"startTime\"]\n self._request._timing[\"domainLookupStart\"] = timing[\"domainLookupStart\"]\n self._request._timing[\"domainLookupEnd\"] = timing[\"domainLookupEnd\"]\n self._request._timing[\"connectStart\"] = timing[\"connectStart\"]\n self._request._timing[\"secureConnectionStart\"] = timing[\"secureConnectionStart\"]\n self._request._timing[\"connectEnd\"] = timing[\"connectEnd\"]\n self._request._timing[\"requestStart\"] = timing[\"requestStart\"]\n self._request._timing[\"responseStart\"] = timing[\"responseStart\"]\n self._request._headers = parse_headers(self._initializer[\"requestHeaders\"])\n\n @property\n def url(self) -> str:\n return self._initializer[\"url\"]\n\n @property\n def ok(self) -> bool:\n return self._initializer[\"status\"] == 0 or (\n self._initializer[\"status\"] >= 200 and self._initializer[\"status\"] <= 299\n )\n\n @property\n def status(self) -> int:\n return self._initializer[\"status\"]\n\n @property\n def status_text(self) -> str:\n return self._initializer[\"statusText\"]\n\n @property\n def headers(self) -> Dict[str, str]:\n return parse_headers(self._initializer[\"headers\"])\n\n async def finished(self) -> Optional[str]:\n return await self._channel.send(\"finished\")\n\n async def body(self) -> bytes:\n binary = await self._channel.send(\"body\")\n return base64.b64decode(binary)\n\n async def text(self) -> str:\n content = await self.body()\n return content.decode()\n\n async def json(self) -> Union[Any]:\n return json.loads(await self.text())\n\n @property\n def request(self) -> Request:\n return self._request\n\n @property\n def frame(self) -> \"Frame\":\n return self._request.frame\n\n\nclass WebSocket(ChannelOwner):\n\n Events = SimpleNamespace(\n Close=\"close\",\n FrameReceived=\"framereceived\",\n FrameSent=\"framesent\",\n Error=\"socketerror\",\n )\n\n def __init__(\n self, parent: ChannelOwner, type: str, guid: str, initializer: Dict\n ) -> None:\n super().__init__(parent, type, guid, initializer)\n self._is_closed = False\n self._channel.on(\n \"frameSent\",\n lambda params: self._on_frame_sent(params[\"opcode\"], params[\"data\"]),\n )\n self._channel.on(\n \"frameReceived\",\n lambda params: self._on_frame_received(params[\"opcode\"], params[\"data\"]),\n )\n self._channel.on(\n \"error\", lambda params: self.emit(WebSocket.Events.Error, params[\"error\"])\n )\n self._channel.on(\"close\", lambda params: self._on_close())\n\n @property\n def url(self) -> str:\n return self._initializer[\"url\"]\n\n def expect_event(\n self,\n event: str,\n predicate: Callable = None,\n timeout: float = None,\n ) -> EventContextManagerImpl:\n if timeout is None:\n timeout = cast(Any, self._parent)._timeout_settings.timeout()\n wait_helper = WaitHelper(self, f\"web_socket.expect_event({event})\")\n wait_helper.reject_on_timeout(\n timeout, f'Timeout while waiting for event \"{event}\"'\n )\n if event != WebSocket.Events.Close:\n wait_helper.reject_on_event(\n self, WebSocket.Events.Close, Error(\"Socket closed\")\n )\n if event != WebSocket.Events.Error:\n wait_helper.reject_on_event(\n self, WebSocket.Events.Error, Error(\"Socket error\")\n )\n wait_helper.reject_on_event(self._parent, \"close\", Error(\"Page closed\"))\n wait_helper.wait_for_event(self, event, predicate)\n return EventContextManagerImpl(wait_helper.result())\n\n async def wait_for_event(\n self, event: str, predicate: Callable = None, timeout: float = None\n ) -> Any:\n async with self.expect_event(event, predicate, timeout) as event_info:\n pass\n return await event_info\n\n def _on_frame_sent(self, opcode: int, data: str) -> None:\n if opcode == 2:\n self.emit(WebSocket.Events.FrameSent, base64.b64decode(data))\n else:\n self.emit(WebSocket.Events.FrameSent, data)\n\n def _on_frame_received(self, opcode: int, data: str) -> None:\n if opcode == 2:\n self.emit(WebSocket.Events.FrameReceived, base64.b64decode(data))\n else:\n self.emit(WebSocket.Events.FrameReceived, data)\n\n def is_closed(self) -> bool:\n return self._is_closed\n\n def _on_close(self) -> None:\n self._is_closed = True\n self.emit(WebSocket.Events.Close)\n\n\ndef serialize_headers(headers: Dict[str, str]) -> List[Header]:\n return [{\"name\": name, \"value\": value} for name, value in headers.items()]\n\n\ndef parse_headers(headers: List[Header]) -> Dict[str, str]:\n return {header[\"name\"].lower(): header[\"value\"] for header in headers}\n", "path": "playwright/_impl/_network.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport base64\nimport json\nimport mimetypes\nfrom pathlib import Path\nfrom types import SimpleNamespace\nfrom typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Union, cast\nfrom urllib import parse\n\nfrom playwright._impl._api_structures import ResourceTiming\nfrom playwright._impl._api_types import Error\nfrom playwright._impl._connection import (\n ChannelOwner,\n from_channel,\n from_nullable_channel,\n)\nfrom playwright._impl._event_context_manager import EventContextManagerImpl\nfrom playwright._impl._helper import ContinueParameters, Header, locals_to_params\nfrom playwright._impl._wait_helper import WaitHelper\n\nif TYPE_CHECKING: # pragma: no cover\n from playwright._impl._frame import Frame\n\n\nclass Request(ChannelOwner):\n def __init__(\n self, parent: ChannelOwner, type: str, guid: str, initializer: Dict\n ) -> None:\n super().__init__(parent, type, guid, initializer)\n self._redirected_from: Optional[\"Request\"] = from_nullable_channel(\n initializer.get(\"redirectedFrom\")\n )\n self._redirected_to: Optional[\"Request\"] = None\n if self._redirected_from:\n self._redirected_from._redirected_to = self\n self._failure_text: Optional[str] = None\n self._timing: ResourceTiming = {\n \"startTime\": 0,\n \"domainLookupStart\": -1,\n \"domainLookupEnd\": -1,\n \"connectStart\": -1,\n \"secureConnectionStart\": -1,\n \"connectEnd\": -1,\n \"requestStart\": -1,\n \"responseStart\": -1,\n \"responseEnd\": -1,\n }\n self._headers: Dict[str, str] = parse_headers(self._initializer[\"headers\"])\n\n @property\n def url(self) -> str:\n return self._initializer[\"url\"]\n\n @property\n def resource_type(self) -> str:\n return self._initializer[\"resourceType\"]\n\n @property\n def method(self) -> str:\n return self._initializer[\"method\"]\n\n @property\n def post_data(self) -> Optional[str]:\n data = self.post_data_buffer\n if not data:\n return None\n return data.decode()\n\n @property\n def post_data_json(self) -> Optional[Any]:\n post_data = self.post_data\n if not post_data:\n return None\n content_type = self.headers[\"content-type\"]\n if content_type == \"application/x-www-form-urlencoded\":\n return dict(parse.parse_qsl(post_data))\n try:\n return json.loads(post_data)\n except Exception:\n raise Error(f\"POST data is not a valid JSON object: {post_data}\")\n\n @property\n def post_data_buffer(self) -> Optional[bytes]:\n b64_content = self._initializer.get(\"postData\")\n if not b64_content:\n return None\n return base64.b64decode(b64_content)\n\n @property\n def headers(self) -> Dict[str, str]:\n return self._headers\n\n async def response(self) -> Optional[\"Response\"]:\n return from_nullable_channel(await self._channel.send(\"response\"))\n\n @property\n def frame(self) -> \"Frame\":\n return from_channel(self._initializer[\"frame\"])\n\n def is_navigation_request(self) -> bool:\n return self._initializer[\"isNavigationRequest\"]\n\n @property\n def redirected_from(self) -> Optional[\"Request\"]:\n return self._redirected_from\n\n @property\n def redirected_to(self) -> Optional[\"Request\"]:\n return self._redirected_to\n\n @property\n def failure(self) -> Optional[str]:\n return self._failure_text\n\n @property\n def timing(self) -> ResourceTiming:\n return self._timing\n\n\nclass Route(ChannelOwner):\n def __init__(\n self, parent: ChannelOwner, type: str, guid: str, initializer: Dict\n ) -> None:\n super().__init__(parent, type, guid, initializer)\n\n @property\n def request(self) -> Request:\n return from_channel(self._initializer[\"request\"])\n\n async def abort(self, errorCode: str = None) -> None:\n await self._channel.send(\"abort\", locals_to_params(locals()))\n\n async def fulfill(\n self,\n status: int = None,\n headers: Dict[str, str] = None,\n body: Union[str, bytes] = None,\n path: Union[str, Path] = None,\n contentType: str = None,\n ) -> None:\n params = locals_to_params(locals())\n length = 0\n if isinstance(body, str):\n params[\"body\"] = body\n params[\"isBase64\"] = False\n length = len(body.encode())\n elif isinstance(body, bytes):\n params[\"body\"] = base64.b64encode(body).decode()\n params[\"isBase64\"] = True\n length = len(body)\n elif path:\n del params[\"path\"]\n file_content = Path(path).read_bytes()\n params[\"body\"] = base64.b64encode(file_content).decode()\n params[\"isBase64\"] = True\n length = len(file_content)\n\n headers = {k.lower(): str(v) for k, v in params.get(\"headers\", {}).items()}\n if params.get(\"contentType\"):\n headers[\"content-type\"] = params[\"contentType\"]\n elif path:\n headers[\"content-type\"] = (\n mimetypes.guess_type(str(Path(path)))[0] or \"application/octet-stream\"\n )\n if length and \"content-length\" not in headers:\n headers[\"content-length\"] = str(length)\n params[\"headers\"] = serialize_headers(headers)\n await self._channel.send(\"fulfill\", params)\n\n async def continue_(\n self,\n url: str = None,\n method: str = None,\n headers: Dict[str, str] = None,\n postData: Union[str, bytes] = None,\n ) -> None:\n overrides: ContinueParameters = {}\n if url:\n overrides[\"url\"] = url\n if method:\n overrides[\"method\"] = method\n if headers:\n overrides[\"headers\"] = serialize_headers(headers)\n if isinstance(postData, str):\n overrides[\"postData\"] = base64.b64encode(postData.encode()).decode()\n elif isinstance(postData, bytes):\n overrides[\"postData\"] = base64.b64encode(postData).decode()\n await self._channel.send(\"continue\", cast(Any, overrides))\n\n\nclass Response(ChannelOwner):\n def __init__(\n self, parent: ChannelOwner, type: str, guid: str, initializer: Dict\n ) -> None:\n super().__init__(parent, type, guid, initializer)\n self._request: Request = from_channel(self._initializer[\"request\"])\n timing = self._initializer[\"timing\"]\n self._request._timing[\"startTime\"] = timing[\"startTime\"]\n self._request._timing[\"domainLookupStart\"] = timing[\"domainLookupStart\"]\n self._request._timing[\"domainLookupEnd\"] = timing[\"domainLookupEnd\"]\n self._request._timing[\"connectStart\"] = timing[\"connectStart\"]\n self._request._timing[\"secureConnectionStart\"] = timing[\"secureConnectionStart\"]\n self._request._timing[\"connectEnd\"] = timing[\"connectEnd\"]\n self._request._timing[\"requestStart\"] = timing[\"requestStart\"]\n self._request._timing[\"responseStart\"] = timing[\"responseStart\"]\n self._request._headers = parse_headers(self._initializer[\"requestHeaders\"])\n\n @property\n def url(self) -> str:\n return self._initializer[\"url\"]\n\n @property\n def ok(self) -> bool:\n return self._initializer[\"status\"] == 0 or (\n self._initializer[\"status\"] >= 200 and self._initializer[\"status\"] <= 299\n )\n\n @property\n def status(self) -> int:\n return self._initializer[\"status\"]\n\n @property\n def status_text(self) -> str:\n return self._initializer[\"statusText\"]\n\n @property\n def headers(self) -> Dict[str, str]:\n return parse_headers(self._initializer[\"headers\"])\n\n async def finished(self) -> Optional[str]:\n return await self._channel.send(\"finished\")\n\n async def body(self) -> bytes:\n binary = await self._channel.send(\"body\")\n return base64.b64decode(binary)\n\n async def text(self) -> str:\n content = await self.body()\n return content.decode()\n\n async def json(self) -> Union[Any]:\n return json.loads(await self.text())\n\n @property\n def request(self) -> Request:\n return self._request\n\n @property\n def frame(self) -> \"Frame\":\n return self._request.frame\n\n\nclass WebSocket(ChannelOwner):\n\n Events = SimpleNamespace(\n Close=\"close\",\n FrameReceived=\"framereceived\",\n FrameSent=\"framesent\",\n Error=\"socketerror\",\n )\n\n def __init__(\n self, parent: ChannelOwner, type: str, guid: str, initializer: Dict\n ) -> None:\n super().__init__(parent, type, guid, initializer)\n self._is_closed = False\n self._channel.on(\n \"frameSent\",\n lambda params: self._on_frame_sent(params[\"opcode\"], params[\"data\"]),\n )\n self._channel.on(\n \"frameReceived\",\n lambda params: self._on_frame_received(params[\"opcode\"], params[\"data\"]),\n )\n self._channel.on(\n \"error\", lambda params: self.emit(WebSocket.Events.Error, params[\"error\"])\n )\n self._channel.on(\"close\", lambda params: self._on_close())\n\n @property\n def url(self) -> str:\n return self._initializer[\"url\"]\n\n def expect_event(\n self,\n event: str,\n predicate: Callable = None,\n timeout: float = None,\n ) -> EventContextManagerImpl:\n if timeout is None:\n timeout = cast(Any, self._parent)._timeout_settings.timeout()\n wait_helper = WaitHelper(self, f\"web_socket.expect_event({event})\")\n wait_helper.reject_on_timeout(\n timeout, f'Timeout while waiting for event \"{event}\"'\n )\n if event != WebSocket.Events.Close:\n wait_helper.reject_on_event(\n self, WebSocket.Events.Close, Error(\"Socket closed\")\n )\n if event != WebSocket.Events.Error:\n wait_helper.reject_on_event(\n self, WebSocket.Events.Error, Error(\"Socket error\")\n )\n wait_helper.reject_on_event(self._parent, \"close\", Error(\"Page closed\"))\n wait_helper.wait_for_event(self, event, predicate)\n return EventContextManagerImpl(wait_helper.result())\n\n async def wait_for_event(\n self, event: str, predicate: Callable = None, timeout: float = None\n ) -> Any:\n async with self.expect_event(event, predicate, timeout) as event_info:\n pass\n return await event_info\n\n def _on_frame_sent(self, opcode: int, data: str) -> None:\n if opcode == 2:\n self.emit(WebSocket.Events.FrameSent, base64.b64decode(data))\n else:\n self.emit(WebSocket.Events.FrameSent, data)\n\n def _on_frame_received(self, opcode: int, data: str) -> None:\n if opcode == 2:\n self.emit(WebSocket.Events.FrameReceived, base64.b64decode(data))\n else:\n self.emit(WebSocket.Events.FrameReceived, data)\n\n def is_closed(self) -> bool:\n return self._is_closed\n\n def _on_close(self) -> None:\n self._is_closed = True\n self.emit(WebSocket.Events.Close)\n\n\ndef serialize_headers(headers: Dict[str, str]) -> List[Header]:\n return [{\"name\": name, \"value\": value} for name, value in headers.items()]\n\n\ndef parse_headers(headers: List[Header]) -> Dict[str, str]:\n return {header[\"name\"].lower(): header[\"value\"] for header in headers}\n", "path": "playwright/_impl/_network.py"}]}
| 3,966 | 169 |
gh_patches_debug_12624
|
rasdani/github-patches
|
git_diff
|
secdev__scapy-2631
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use nextproto property instead of nextprotocol
This is just a checklist to guide you. You can remove it safely.
**Checklist:**
- [x ] If you are new to Scapy: I have checked <https://github.com/secdev/scapy/blob/master/CONTRIBUTING.md> (esp. section submitting-pull-requests)
- [ ] I squashed commits belonging together
- [ ] I added unit tests or explained why they are not relevant
- [ ] I executed the regression tests for Python2 and Python3 (using `tox` or, `cd test && ./run_tests_py2, cd test && ./run_tests_py3`)
- [ ] If the PR is still not finished, please create a [Draft Pull Request](https://github.blog/2019-02-14-introducing-draft-pull-requests/)
> brief description what this PR will do, e.g. fixes broken dissection of XXX
Fix wrong property in `bind_layers` function of NSH protocol. In the NSH class, it defines `nextproto` for next protocol property.
I changed from `nextprotocol` to `nextproto` in `bind_layers` functions.
> if required - short explanation why you fixed something in a way that may look more complicated as it actually is
> if required - outline impacts on other parts of the library
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scapy/contrib/nsh.py`
Content:
```
1 # This file is part of Scapy
2 # Scapy is free software: you can redistribute it and/or modify
3 # it under the terms of the GNU General Public License as published by
4 # the Free Software Foundation, either version 2 of the License, or
5 # any later version.
6 #
7 # Scapy is distributed in the hope that it will be useful,
8 # but WITHOUT ANY WARRANTY; without even the implied warranty of
9 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
10 # GNU General Public License for more details.
11 #
12 # You should have received a copy of the GNU General Public License
13 # along with Scapy. If not, see <http://www.gnu.org/licenses/>.
14
15 # scapy.contrib.description = Network Services Headers (NSH)
16 # scapy.contrib.status = loads
17
18 from scapy.all import bind_layers
19 from scapy.fields import BitField, ByteField, ByteEnumField, BitEnumField, \
20 ShortField, X3BytesField, XIntField, XStrFixedLenField, \
21 ConditionalField, PacketListField, BitFieldLenField
22 from scapy.layers.inet import Ether, IP
23 from scapy.layers.inet6 import IPv6
24 from scapy.layers.vxlan import VXLAN
25 from scapy.packet import Packet
26 from scapy.layers.l2 import GRE
27
28 from scapy.contrib.mpls import MPLS
29
30 #
31 # NSH Support
32 # https://www.rfc-editor.org/rfc/rfc8300.txt January 2018
33 #
34
35
36 class NSHTLV(Packet):
37 "NSH MD-type 2 - Variable Length Context Headers"
38 name = "NSHTLV"
39 fields_desc = [
40 ShortField('class', 0),
41 BitField('type', 0, 8),
42 BitField('reserved', 0, 1),
43 BitField('length', 0, 7),
44 PacketListField('metadata', None, XIntField, count_from='length')
45 ]
46
47
48 class NSH(Packet):
49 """Network Service Header.
50 NSH MD-type 1 if there is no ContextHeaders"""
51 name = "NSH"
52
53 fields_desc = [
54 BitField('ver', 0, 2),
55 BitField('oam', 0, 1),
56 BitField('unused1', 0, 1),
57 BitField('ttl', 63, 6),
58 BitFieldLenField('length', None, 6,
59 count_of='vlch',
60 adjust=lambda pkt, x: 6 if pkt.mdtype == 1
61 else x + 2),
62 BitField('unused2', 0, 4),
63 BitEnumField('mdtype', 1, 4, {0: 'Reserved MDType',
64 1: 'Fixed Length',
65 2: 'Variable Length',
66 0xF: 'Experimental MDType'}),
67 ByteEnumField('nextproto', 3, {1: 'IPv4',
68 2: 'IPv6',
69 3: 'Ethernet',
70 4: 'NSH',
71 5: 'MPLS',
72 0xFE: 'Experiment 1',
73 0xFF: 'Experiment 2'}),
74 X3BytesField('spi', 0),
75 ByteField('si', 0xFF),
76 ConditionalField(XStrFixedLenField("context_header", "", 16),
77 lambda pkt: pkt.mdtype == 1),
78 ConditionalField(PacketListField("vlch", None, NSHTLV,
79 count_from="length"),
80 lambda pkt: pkt.mdtype == 2)
81 ]
82
83 def mysummary(self):
84 return self.sprintf("SPI: %spi% - SI: %si%")
85
86
87 bind_layers(Ether, NSH, {'type': 0x894F}, type=0x894F)
88 bind_layers(VXLAN, NSH, {'flags': 0xC, 'nextprotocol': 4}, nextprotocol=4)
89 bind_layers(GRE, NSH, {'proto': 0x894F}, proto=0x894F)
90
91 bind_layers(NSH, IP, {'nextprotocol': 1}, nextprotocol=1)
92 bind_layers(NSH, IPv6, {'nextprotocol': 2}, nextprotocol=2)
93 bind_layers(NSH, Ether, {'nextprotocol': 3}, nextprotocol=3)
94 bind_layers(NSH, NSH, {'nextprotocol': 4}, nextprotocol=4)
95 bind_layers(NSH, MPLS, {'nextprotocol': 5}, nextprotocol=5)
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scapy/contrib/nsh.py b/scapy/contrib/nsh.py
--- a/scapy/contrib/nsh.py
+++ b/scapy/contrib/nsh.py
@@ -85,11 +85,11 @@
bind_layers(Ether, NSH, {'type': 0x894F}, type=0x894F)
-bind_layers(VXLAN, NSH, {'flags': 0xC, 'nextprotocol': 4}, nextprotocol=4)
+bind_layers(VXLAN, NSH, {'flags': 0xC, 'nextproto': 4}, nextproto=4)
bind_layers(GRE, NSH, {'proto': 0x894F}, proto=0x894F)
-bind_layers(NSH, IP, {'nextprotocol': 1}, nextprotocol=1)
-bind_layers(NSH, IPv6, {'nextprotocol': 2}, nextprotocol=2)
-bind_layers(NSH, Ether, {'nextprotocol': 3}, nextprotocol=3)
-bind_layers(NSH, NSH, {'nextprotocol': 4}, nextprotocol=4)
-bind_layers(NSH, MPLS, {'nextprotocol': 5}, nextprotocol=5)
+bind_layers(NSH, IP, nextproto=1)
+bind_layers(NSH, IPv6, nextproto=2)
+bind_layers(NSH, Ether, nextproto=3)
+bind_layers(NSH, NSH, nextproto=4)
+bind_layers(NSH, MPLS, nextproto=5)
|
{"golden_diff": "diff --git a/scapy/contrib/nsh.py b/scapy/contrib/nsh.py\n--- a/scapy/contrib/nsh.py\n+++ b/scapy/contrib/nsh.py\n@@ -85,11 +85,11 @@\n \n \n bind_layers(Ether, NSH, {'type': 0x894F}, type=0x894F)\n-bind_layers(VXLAN, NSH, {'flags': 0xC, 'nextprotocol': 4}, nextprotocol=4)\n+bind_layers(VXLAN, NSH, {'flags': 0xC, 'nextproto': 4}, nextproto=4)\n bind_layers(GRE, NSH, {'proto': 0x894F}, proto=0x894F)\n \n-bind_layers(NSH, IP, {'nextprotocol': 1}, nextprotocol=1)\n-bind_layers(NSH, IPv6, {'nextprotocol': 2}, nextprotocol=2)\n-bind_layers(NSH, Ether, {'nextprotocol': 3}, nextprotocol=3)\n-bind_layers(NSH, NSH, {'nextprotocol': 4}, nextprotocol=4)\n-bind_layers(NSH, MPLS, {'nextprotocol': 5}, nextprotocol=5)\n+bind_layers(NSH, IP, nextproto=1)\n+bind_layers(NSH, IPv6, nextproto=2)\n+bind_layers(NSH, Ether, nextproto=3)\n+bind_layers(NSH, NSH, nextproto=4)\n+bind_layers(NSH, MPLS, nextproto=5)\n", "issue": "Use nextproto property instead of nextprotocol\nThis is just a checklist to guide you. You can remove it safely.\r\n\r\n**Checklist:**\r\n\r\n- [x ] If you are new to Scapy: I have checked <https://github.com/secdev/scapy/blob/master/CONTRIBUTING.md> (esp. section submitting-pull-requests)\r\n- [ ] I squashed commits belonging together\r\n- [ ] I added unit tests or explained why they are not relevant\r\n- [ ] I executed the regression tests for Python2 and Python3 (using `tox` or, `cd test && ./run_tests_py2, cd test && ./run_tests_py3`)\r\n- [ ] If the PR is still not finished, please create a [Draft Pull Request](https://github.blog/2019-02-14-introducing-draft-pull-requests/)\r\n\r\n> brief description what this PR will do, e.g. fixes broken dissection of XXX\r\nFix wrong property in `bind_layers` function of NSH protocol. In the NSH class, it defines `nextproto` for next protocol property. \r\n\r\nI changed from `nextprotocol` to `nextproto` in `bind_layers` functions.\r\n\r\n> if required - short explanation why you fixed something in a way that may look more complicated as it actually is\r\n\r\n> if required - outline impacts on other parts of the library\r\n\n", "before_files": [{"content": "# This file is part of Scapy\n# Scapy is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 2 of the License, or\n# any later version.\n#\n# Scapy is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Scapy. If not, see <http://www.gnu.org/licenses/>.\n\n# scapy.contrib.description = Network Services Headers (NSH)\n# scapy.contrib.status = loads\n\nfrom scapy.all import bind_layers\nfrom scapy.fields import BitField, ByteField, ByteEnumField, BitEnumField, \\\n ShortField, X3BytesField, XIntField, XStrFixedLenField, \\\n ConditionalField, PacketListField, BitFieldLenField\nfrom scapy.layers.inet import Ether, IP\nfrom scapy.layers.inet6 import IPv6\nfrom scapy.layers.vxlan import VXLAN\nfrom scapy.packet import Packet\nfrom scapy.layers.l2 import GRE\n\nfrom scapy.contrib.mpls import MPLS\n\n#\n# NSH Support\n# https://www.rfc-editor.org/rfc/rfc8300.txt January 2018\n#\n\n\nclass NSHTLV(Packet):\n \"NSH MD-type 2 - Variable Length Context Headers\"\n name = \"NSHTLV\"\n fields_desc = [\n ShortField('class', 0),\n BitField('type', 0, 8),\n BitField('reserved', 0, 1),\n BitField('length', 0, 7),\n PacketListField('metadata', None, XIntField, count_from='length')\n ]\n\n\nclass NSH(Packet):\n \"\"\"Network Service Header.\n NSH MD-type 1 if there is no ContextHeaders\"\"\"\n name = \"NSH\"\n\n fields_desc = [\n BitField('ver', 0, 2),\n BitField('oam', 0, 1),\n BitField('unused1', 0, 1),\n BitField('ttl', 63, 6),\n BitFieldLenField('length', None, 6,\n count_of='vlch',\n adjust=lambda pkt, x: 6 if pkt.mdtype == 1\n else x + 2),\n BitField('unused2', 0, 4),\n BitEnumField('mdtype', 1, 4, {0: 'Reserved MDType',\n 1: 'Fixed Length',\n 2: 'Variable Length',\n 0xF: 'Experimental MDType'}),\n ByteEnumField('nextproto', 3, {1: 'IPv4',\n 2: 'IPv6',\n 3: 'Ethernet',\n 4: 'NSH',\n 5: 'MPLS',\n 0xFE: 'Experiment 1',\n 0xFF: 'Experiment 2'}),\n X3BytesField('spi', 0),\n ByteField('si', 0xFF),\n ConditionalField(XStrFixedLenField(\"context_header\", \"\", 16),\n lambda pkt: pkt.mdtype == 1),\n ConditionalField(PacketListField(\"vlch\", None, NSHTLV,\n count_from=\"length\"),\n lambda pkt: pkt.mdtype == 2)\n ]\n\n def mysummary(self):\n return self.sprintf(\"SPI: %spi% - SI: %si%\")\n\n\nbind_layers(Ether, NSH, {'type': 0x894F}, type=0x894F)\nbind_layers(VXLAN, NSH, {'flags': 0xC, 'nextprotocol': 4}, nextprotocol=4)\nbind_layers(GRE, NSH, {'proto': 0x894F}, proto=0x894F)\n\nbind_layers(NSH, IP, {'nextprotocol': 1}, nextprotocol=1)\nbind_layers(NSH, IPv6, {'nextprotocol': 2}, nextprotocol=2)\nbind_layers(NSH, Ether, {'nextprotocol': 3}, nextprotocol=3)\nbind_layers(NSH, NSH, {'nextprotocol': 4}, nextprotocol=4)\nbind_layers(NSH, MPLS, {'nextprotocol': 5}, nextprotocol=5)\n", "path": "scapy/contrib/nsh.py"}], "after_files": [{"content": "# This file is part of Scapy\n# Scapy is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 2 of the License, or\n# any later version.\n#\n# Scapy is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Scapy. If not, see <http://www.gnu.org/licenses/>.\n\n# scapy.contrib.description = Network Services Headers (NSH)\n# scapy.contrib.status = loads\n\nfrom scapy.all import bind_layers\nfrom scapy.fields import BitField, ByteField, ByteEnumField, BitEnumField, \\\n ShortField, X3BytesField, XIntField, XStrFixedLenField, \\\n ConditionalField, PacketListField, BitFieldLenField\nfrom scapy.layers.inet import Ether, IP\nfrom scapy.layers.inet6 import IPv6\nfrom scapy.layers.vxlan import VXLAN\nfrom scapy.packet import Packet\nfrom scapy.layers.l2 import GRE\n\nfrom scapy.contrib.mpls import MPLS\n\n#\n# NSH Support\n# https://www.rfc-editor.org/rfc/rfc8300.txt January 2018\n#\n\n\nclass NSHTLV(Packet):\n \"NSH MD-type 2 - Variable Length Context Headers\"\n name = \"NSHTLV\"\n fields_desc = [\n ShortField('class', 0),\n BitField('type', 0, 8),\n BitField('reserved', 0, 1),\n BitField('length', 0, 7),\n PacketListField('metadata', None, XIntField, count_from='length')\n ]\n\n\nclass NSH(Packet):\n \"\"\"Network Service Header.\n NSH MD-type 1 if there is no ContextHeaders\"\"\"\n name = \"NSH\"\n\n fields_desc = [\n BitField('ver', 0, 2),\n BitField('oam', 0, 1),\n BitField('unused1', 0, 1),\n BitField('ttl', 63, 6),\n BitFieldLenField('length', None, 6,\n count_of='vlch',\n adjust=lambda pkt, x: 6 if pkt.mdtype == 1\n else x + 2),\n BitField('unused2', 0, 4),\n BitEnumField('mdtype', 1, 4, {0: 'Reserved MDType',\n 1: 'Fixed Length',\n 2: 'Variable Length',\n 0xF: 'Experimental MDType'}),\n ByteEnumField('nextproto', 3, {1: 'IPv4',\n 2: 'IPv6',\n 3: 'Ethernet',\n 4: 'NSH',\n 5: 'MPLS',\n 0xFE: 'Experiment 1',\n 0xFF: 'Experiment 2'}),\n X3BytesField('spi', 0),\n ByteField('si', 0xFF),\n ConditionalField(XStrFixedLenField(\"context_header\", \"\", 16),\n lambda pkt: pkt.mdtype == 1),\n ConditionalField(PacketListField(\"vlch\", None, NSHTLV,\n count_from=\"length\"),\n lambda pkt: pkt.mdtype == 2)\n ]\n\n def mysummary(self):\n return self.sprintf(\"SPI: %spi% - SI: %si%\")\n\n\nbind_layers(Ether, NSH, {'type': 0x894F}, type=0x894F)\nbind_layers(VXLAN, NSH, {'flags': 0xC, 'nextproto': 4}, nextproto=4)\nbind_layers(GRE, NSH, {'proto': 0x894F}, proto=0x894F)\n\nbind_layers(NSH, IP, nextproto=1)\nbind_layers(NSH, IPv6, nextproto=2)\nbind_layers(NSH, Ether, nextproto=3)\nbind_layers(NSH, NSH, nextproto=4)\nbind_layers(NSH, MPLS, nextproto=5)\n", "path": "scapy/contrib/nsh.py"}]}
| 1,716 | 335 |
gh_patches_debug_11202
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-2181
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Move Payment Gateways to own subtab
On `admin/settings/` add a subtab "Payment Gateways" and move the Paypal and Stripe here.


--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/settings/__init__.py`
Content:
```
1 import stripe
2 from flask import current_app
3 from sqlalchemy import desc
4 from app.models.setting import Setting
5 from app.models.fees import TicketFees
6
7
8 def get_settings():
9 """
10 Use this to get latest system settings
11 """
12 if 'custom_settings' in current_app.config:
13 return current_app.config['custom_settings']
14 s = Setting.query.order_by(desc(Setting.id)).first()
15 if s is None:
16 set_settings(secret='super secret key')
17 else:
18 current_app.config['custom_settings'] = make_dict(s)
19 return current_app.config['custom_settings']
20
21
22 def set_settings(**kwargs):
23 """
24 Update system settings
25 """
26
27 if 'service_fee' in kwargs:
28 ticket_service_fees = kwargs.get('service_fee')
29 ticket_maximum_fees = kwargs.get('maximum_fee')
30 from app.helpers.data_getter import DataGetter
31 from app.helpers.data import save_to_db
32 currencies = DataGetter.get_payment_currencies()
33 for i, currency in enumerate(currencies):
34 currency = currency.split(' ')[0]
35 ticket_fee = TicketFees(currency=currency,
36 service_fee=ticket_service_fees[i],
37 maximum_fee=ticket_maximum_fees[i])
38 save_to_db(ticket_fee, "Ticket Fees settings saved")
39 else:
40 setting = Setting(**kwargs)
41 from app.helpers.data import save_to_db
42 save_to_db(setting, 'Setting saved')
43 current_app.secret_key = setting.secret
44 stripe.api_key = setting.stripe_secret_key
45 current_app.config['custom_settings'] = make_dict(setting)
46
47
48 def make_dict(s):
49 arguments = {}
50 for name, column in s.__mapper__.columns.items():
51 if not (column.primary_key or column.unique):
52 arguments[name] = getattr(s, name)
53 return arguments
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/settings/__init__.py b/app/settings/__init__.py
--- a/app/settings/__init__.py
+++ b/app/settings/__init__.py
@@ -30,7 +30,7 @@
from app.helpers.data_getter import DataGetter
from app.helpers.data import save_to_db
currencies = DataGetter.get_payment_currencies()
- for i, currency in enumerate(currencies):
+ for i, (currency, has_paypal, has_stripe) in enumerate(currencies):
currency = currency.split(' ')[0]
ticket_fee = TicketFees(currency=currency,
service_fee=ticket_service_fees[i],
|
{"golden_diff": "diff --git a/app/settings/__init__.py b/app/settings/__init__.py\n--- a/app/settings/__init__.py\n+++ b/app/settings/__init__.py\n@@ -30,7 +30,7 @@\n from app.helpers.data_getter import DataGetter\n from app.helpers.data import save_to_db\n currencies = DataGetter.get_payment_currencies()\n- for i, currency in enumerate(currencies):\n+ for i, (currency, has_paypal, has_stripe) in enumerate(currencies):\n currency = currency.split(' ')[0]\n ticket_fee = TicketFees(currency=currency,\n service_fee=ticket_service_fees[i],\n", "issue": "Move Payment Gateways to own subtab\nOn `admin/settings/` add a subtab \"Payment Gateways\" and move the Paypal and Stripe here.\n\n\n\n\n\n", "before_files": [{"content": "import stripe\nfrom flask import current_app\nfrom sqlalchemy import desc\nfrom app.models.setting import Setting\nfrom app.models.fees import TicketFees\n\n\ndef get_settings():\n \"\"\"\n Use this to get latest system settings\n \"\"\"\n if 'custom_settings' in current_app.config:\n return current_app.config['custom_settings']\n s = Setting.query.order_by(desc(Setting.id)).first()\n if s is None:\n set_settings(secret='super secret key')\n else:\n current_app.config['custom_settings'] = make_dict(s)\n return current_app.config['custom_settings']\n\n\ndef set_settings(**kwargs):\n \"\"\"\n Update system settings\n \"\"\"\n\n if 'service_fee' in kwargs:\n ticket_service_fees = kwargs.get('service_fee')\n ticket_maximum_fees = kwargs.get('maximum_fee')\n from app.helpers.data_getter import DataGetter\n from app.helpers.data import save_to_db\n currencies = DataGetter.get_payment_currencies()\n for i, currency in enumerate(currencies):\n currency = currency.split(' ')[0]\n ticket_fee = TicketFees(currency=currency,\n service_fee=ticket_service_fees[i],\n maximum_fee=ticket_maximum_fees[i])\n save_to_db(ticket_fee, \"Ticket Fees settings saved\")\n else:\n setting = Setting(**kwargs)\n from app.helpers.data import save_to_db\n save_to_db(setting, 'Setting saved')\n current_app.secret_key = setting.secret\n stripe.api_key = setting.stripe_secret_key\n current_app.config['custom_settings'] = make_dict(setting)\n\n\ndef make_dict(s):\n arguments = {}\n for name, column in s.__mapper__.columns.items():\n if not (column.primary_key or column.unique):\n arguments[name] = getattr(s, name)\n return arguments\n", "path": "app/settings/__init__.py"}], "after_files": [{"content": "import stripe\nfrom flask import current_app\nfrom sqlalchemy import desc\nfrom app.models.setting import Setting\nfrom app.models.fees import TicketFees\n\n\ndef get_settings():\n \"\"\"\n Use this to get latest system settings\n \"\"\"\n if 'custom_settings' in current_app.config:\n return current_app.config['custom_settings']\n s = Setting.query.order_by(desc(Setting.id)).first()\n if s is None:\n set_settings(secret='super secret key')\n else:\n current_app.config['custom_settings'] = make_dict(s)\n return current_app.config['custom_settings']\n\n\ndef set_settings(**kwargs):\n \"\"\"\n Update system settings\n \"\"\"\n\n if 'service_fee' in kwargs:\n ticket_service_fees = kwargs.get('service_fee')\n ticket_maximum_fees = kwargs.get('maximum_fee')\n from app.helpers.data_getter import DataGetter\n from app.helpers.data import save_to_db\n currencies = DataGetter.get_payment_currencies()\n for i, (currency, has_paypal, has_stripe) in enumerate(currencies):\n currency = currency.split(' ')[0]\n ticket_fee = TicketFees(currency=currency,\n service_fee=ticket_service_fees[i],\n maximum_fee=ticket_maximum_fees[i])\n save_to_db(ticket_fee, \"Ticket Fees settings saved\")\n else:\n setting = Setting(**kwargs)\n from app.helpers.data import save_to_db\n save_to_db(setting, 'Setting saved')\n current_app.secret_key = setting.secret\n stripe.api_key = setting.stripe_secret_key\n current_app.config['custom_settings'] = make_dict(setting)\n\n\ndef make_dict(s):\n arguments = {}\n for name, column in s.__mapper__.columns.items():\n if not (column.primary_key or column.unique):\n arguments[name] = getattr(s, name)\n return arguments\n", "path": "app/settings/__init__.py"}]}
| 937 | 141 |
gh_patches_debug_23885
|
rasdani/github-patches
|
git_diff
|
kedro-org__kedro-3587
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add official support for Python 3.12
## Description
<!-- Is your feature request related to a problem? A clear and concise description of what the problem is: "I'm always frustrated when ..." -->
Kedro itself probably works on Python 3.12 already, it would be nice to declare official support.
However, installing Kedro is one thing, but installing the typical dependencies might not be straightforward. For example, I just tested the spaceflights starter and most of the dependencies have already published precompiled wheels for Python 3.12 (at least for M1 Mac), but two of them are still problematic as of today:
- aiohttp https://github.com/aio-libs/aiohttp/issues/7739 worked by installing the beta version as advised there, so it will be solved soon (edit: fixed ✔️)
- pyzmq https://github.com/zeromq/pyzmq/issues/1907 (M1 specific), didn't work after installing the ZMQ header libraries with mamba (edit: fixed ✔️)
## Context
<!-- Why is this change important to you? How would you use it? How can it benefit other users? -->
#2815 was already completed, but officially Kedro does not support Python 3.12 yet.
You can use Kedro on Python 3.12 by manually disabling the warning.
## Possible Implementation
<!-- (Optional) Suggest an idea for implementing the addition or change. -->
Wait a bit until at least the spaceflights starter can be safely installed in most mainstream platforms.
## Possible Alternatives
<!-- (Optional) Describe any alternative solutions or features you've considered. -->
Declare Python 3.12 support already, at the cost of creating some grievance of users that then proceed to install some dependencies.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kedro/__init__.py`
Content:
```
1 """Kedro is a framework that makes it easy to build robust and scalable
2 data pipelines by providing uniform project templates, data abstraction,
3 configuration and pipeline assembly.
4 """
5
6 import sys
7 import warnings
8
9 __version__ = "0.19.3"
10
11
12 class KedroDeprecationWarning(DeprecationWarning):
13 """Custom class for warnings about deprecated Kedro features."""
14
15
16 class KedroPythonVersionWarning(UserWarning):
17 """Custom class for warnings about incompatibilities with Python versions."""
18
19
20 if not sys.warnoptions:
21 warnings.simplefilter("default", KedroDeprecationWarning)
22 warnings.simplefilter("error", KedroPythonVersionWarning)
23
24 if sys.version_info >= (3, 12):
25 warnings.warn(
26 """Kedro is not yet fully compatible with this Python version.
27 To proceed at your own risk and ignore this warning,
28 run Kedro with `python -W "default:Kedro is not yet fully compatible" -m kedro ...`
29 or set the PYTHONWARNINGS environment variable accordingly.""",
30 KedroPythonVersionWarning,
31 )
32
```
Path: `kedro/config/abstract_config.py`
Content:
```
1 """This module provides ``kedro.abstract_config`` with the baseline
2 class model for a `ConfigLoader` implementation.
3 """
4 from __future__ import annotations
5
6 from collections import UserDict
7 from typing import Any
8
9
10 class AbstractConfigLoader(UserDict):
11 """``AbstractConfigLoader`` is the abstract base class
12 for all `ConfigLoader` implementations.
13 All user-defined `ConfigLoader` implementations should inherit
14 from `AbstractConfigLoader` and implement all relevant abstract methods.
15 """
16
17 def __init__(
18 self,
19 conf_source: str,
20 env: str | None = None,
21 runtime_params: dict[str, Any] | None = None,
22 **kwargs: Any,
23 ):
24 super().__init__()
25 self.conf_source = conf_source
26 self.env = env
27 self.runtime_params = runtime_params or {}
28
29
30 class BadConfigException(Exception):
31 """Raised when a configuration file cannot be loaded, for instance
32 due to wrong syntax or poor formatting.
33 """
34
35 pass
36
37
38 class MissingConfigException(Exception):
39 """Raised when no configuration files can be found within a config path"""
40
41 pass
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kedro/__init__.py b/kedro/__init__.py
--- a/kedro/__init__.py
+++ b/kedro/__init__.py
@@ -21,7 +21,7 @@
warnings.simplefilter("default", KedroDeprecationWarning)
warnings.simplefilter("error", KedroPythonVersionWarning)
-if sys.version_info >= (3, 12):
+if sys.version_info >= (3, 13):
warnings.warn(
"""Kedro is not yet fully compatible with this Python version.
To proceed at your own risk and ignore this warning,
diff --git a/kedro/config/abstract_config.py b/kedro/config/abstract_config.py
--- a/kedro/config/abstract_config.py
+++ b/kedro/config/abstract_config.py
@@ -26,6 +26,17 @@
self.env = env
self.runtime_params = runtime_params or {}
+ # As of Python 3.12 __getitem__ is no longer called in the inherited UserDict.get()
+ # This causes AbstractConfigLoader.get() to break
+ # See: https://github.com/python/cpython/issues/105524
+ # Overwrite the inherited get function with the implementation from 3.11 and prior
+ def get(self, key: str, default: Any = None) -> Any:
+ "D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None."
+ try:
+ return self[key]
+ except KeyError:
+ return default
+
class BadConfigException(Exception):
"""Raised when a configuration file cannot be loaded, for instance
|
{"golden_diff": "diff --git a/kedro/__init__.py b/kedro/__init__.py\n--- a/kedro/__init__.py\n+++ b/kedro/__init__.py\n@@ -21,7 +21,7 @@\n warnings.simplefilter(\"default\", KedroDeprecationWarning)\n warnings.simplefilter(\"error\", KedroPythonVersionWarning)\n \n-if sys.version_info >= (3, 12):\n+if sys.version_info >= (3, 13):\n warnings.warn(\n \"\"\"Kedro is not yet fully compatible with this Python version.\n To proceed at your own risk and ignore this warning,\ndiff --git a/kedro/config/abstract_config.py b/kedro/config/abstract_config.py\n--- a/kedro/config/abstract_config.py\n+++ b/kedro/config/abstract_config.py\n@@ -26,6 +26,17 @@\n self.env = env\n self.runtime_params = runtime_params or {}\n \n+ # As of Python 3.12 __getitem__ is no longer called in the inherited UserDict.get()\n+ # This causes AbstractConfigLoader.get() to break\n+ # See: https://github.com/python/cpython/issues/105524\n+ # Overwrite the inherited get function with the implementation from 3.11 and prior\n+ def get(self, key: str, default: Any = None) -> Any:\n+ \"D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None.\"\n+ try:\n+ return self[key]\n+ except KeyError:\n+ return default\n+\n \n class BadConfigException(Exception):\n \"\"\"Raised when a configuration file cannot be loaded, for instance\n", "issue": "Add official support for Python 3.12\n## Description\r\n<!-- Is your feature request related to a problem? A clear and concise description of what the problem is: \"I'm always frustrated when ...\" -->\r\nKedro itself probably works on Python 3.12 already, it would be nice to declare official support.\r\n\r\nHowever, installing Kedro is one thing, but installing the typical dependencies might not be straightforward. For example, I just tested the spaceflights starter and most of the dependencies have already published precompiled wheels for Python 3.12 (at least for M1 Mac), but two of them are still problematic as of today:\r\n\r\n- aiohttp https://github.com/aio-libs/aiohttp/issues/7739 worked by installing the beta version as advised there, so it will be solved soon (edit: fixed \u2714\ufe0f)\r\n- pyzmq https://github.com/zeromq/pyzmq/issues/1907 (M1 specific), didn't work after installing the ZMQ header libraries with mamba (edit: fixed \u2714\ufe0f)\r\n\r\n## Context\r\n<!-- Why is this change important to you? How would you use it? How can it benefit other users? -->\r\n#2815 was already completed, but officially Kedro does not support Python 3.12 yet.\r\n\r\nYou can use Kedro on Python 3.12 by manually disabling the warning.\r\n\r\n## Possible Implementation\r\n<!-- (Optional) Suggest an idea for implementing the addition or change. -->\r\nWait a bit until at least the spaceflights starter can be safely installed in most mainstream platforms.\r\n\r\n## Possible Alternatives\r\n<!-- (Optional) Describe any alternative solutions or features you've considered. -->\r\nDeclare Python 3.12 support already, at the cost of creating some grievance of users that then proceed to install some dependencies.\r\n\n", "before_files": [{"content": "\"\"\"Kedro is a framework that makes it easy to build robust and scalable\ndata pipelines by providing uniform project templates, data abstraction,\nconfiguration and pipeline assembly.\n\"\"\"\n\nimport sys\nimport warnings\n\n__version__ = \"0.19.3\"\n\n\nclass KedroDeprecationWarning(DeprecationWarning):\n \"\"\"Custom class for warnings about deprecated Kedro features.\"\"\"\n\n\nclass KedroPythonVersionWarning(UserWarning):\n \"\"\"Custom class for warnings about incompatibilities with Python versions.\"\"\"\n\n\nif not sys.warnoptions:\n warnings.simplefilter(\"default\", KedroDeprecationWarning)\n warnings.simplefilter(\"error\", KedroPythonVersionWarning)\n\nif sys.version_info >= (3, 12):\n warnings.warn(\n \"\"\"Kedro is not yet fully compatible with this Python version.\nTo proceed at your own risk and ignore this warning,\nrun Kedro with `python -W \"default:Kedro is not yet fully compatible\" -m kedro ...`\nor set the PYTHONWARNINGS environment variable accordingly.\"\"\",\n KedroPythonVersionWarning,\n )\n", "path": "kedro/__init__.py"}, {"content": "\"\"\"This module provides ``kedro.abstract_config`` with the baseline\nclass model for a `ConfigLoader` implementation.\n\"\"\"\nfrom __future__ import annotations\n\nfrom collections import UserDict\nfrom typing import Any\n\n\nclass AbstractConfigLoader(UserDict):\n \"\"\"``AbstractConfigLoader`` is the abstract base class\n for all `ConfigLoader` implementations.\n All user-defined `ConfigLoader` implementations should inherit\n from `AbstractConfigLoader` and implement all relevant abstract methods.\n \"\"\"\n\n def __init__(\n self,\n conf_source: str,\n env: str | None = None,\n runtime_params: dict[str, Any] | None = None,\n **kwargs: Any,\n ):\n super().__init__()\n self.conf_source = conf_source\n self.env = env\n self.runtime_params = runtime_params or {}\n\n\nclass BadConfigException(Exception):\n \"\"\"Raised when a configuration file cannot be loaded, for instance\n due to wrong syntax or poor formatting.\n \"\"\"\n\n pass\n\n\nclass MissingConfigException(Exception):\n \"\"\"Raised when no configuration files can be found within a config path\"\"\"\n\n pass\n", "path": "kedro/config/abstract_config.py"}], "after_files": [{"content": "\"\"\"Kedro is a framework that makes it easy to build robust and scalable\ndata pipelines by providing uniform project templates, data abstraction,\nconfiguration and pipeline assembly.\n\"\"\"\n\nimport sys\nimport warnings\n\n__version__ = \"0.19.3\"\n\n\nclass KedroDeprecationWarning(DeprecationWarning):\n \"\"\"Custom class for warnings about deprecated Kedro features.\"\"\"\n\n\nclass KedroPythonVersionWarning(UserWarning):\n \"\"\"Custom class for warnings about incompatibilities with Python versions.\"\"\"\n\n\nif not sys.warnoptions:\n warnings.simplefilter(\"default\", KedroDeprecationWarning)\n warnings.simplefilter(\"error\", KedroPythonVersionWarning)\n\nif sys.version_info >= (3, 13):\n warnings.warn(\n \"\"\"Kedro is not yet fully compatible with this Python version.\nTo proceed at your own risk and ignore this warning,\nrun Kedro with `python -W \"default:Kedro is not yet fully compatible\" -m kedro ...`\nor set the PYTHONWARNINGS environment variable accordingly.\"\"\",\n KedroPythonVersionWarning,\n )\n", "path": "kedro/__init__.py"}, {"content": "\"\"\"This module provides ``kedro.abstract_config`` with the baseline\nclass model for a `ConfigLoader` implementation.\n\"\"\"\nfrom __future__ import annotations\n\nfrom collections import UserDict\nfrom typing import Any\n\n\nclass AbstractConfigLoader(UserDict):\n \"\"\"``AbstractConfigLoader`` is the abstract base class\n for all `ConfigLoader` implementations.\n All user-defined `ConfigLoader` implementations should inherit\n from `AbstractConfigLoader` and implement all relevant abstract methods.\n \"\"\"\n\n def __init__(\n self,\n conf_source: str,\n env: str | None = None,\n runtime_params: dict[str, Any] | None = None,\n **kwargs: Any,\n ):\n super().__init__()\n self.conf_source = conf_source\n self.env = env\n self.runtime_params = runtime_params or {}\n\n # As of Python 3.12 __getitem__ is no longer called in the inherited UserDict.get()\n # This causes AbstractConfigLoader.get() to break\n # See: https://github.com/python/cpython/issues/105524\n # Overwrite the inherited get function with the implementation from 3.11 and prior\n def get(self, key: str, default: Any = None) -> Any:\n \"D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None.\"\n try:\n return self[key]\n except KeyError:\n return default\n\n\nclass BadConfigException(Exception):\n \"\"\"Raised when a configuration file cannot be loaded, for instance\n due to wrong syntax or poor formatting.\n \"\"\"\n\n pass\n\n\nclass MissingConfigException(Exception):\n \"\"\"Raised when no configuration files can be found within a config path\"\"\"\n\n pass\n", "path": "kedro/config/abstract_config.py"}]}
| 1,270 | 378 |
gh_patches_debug_17700
|
rasdani/github-patches
|
git_diff
|
DDMAL__CantusDB-210
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Chant-edit page doesn't load for Admin user
The chant-edit page (e.g., http://127.0.0.1:3122/edit-volpiano/702611?pk=705019) takes forever to load for Admin user.
I was logged in with my Admin account (i.e., superuser). Ideally, this should give me power to access and change anything.
I also check with my project manager account and it loaded fine.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/users/managers.py`
Content:
```
1 # https://testdriven.io/blog/django-custom-user-model/#:~:text=The%20default%20User%20model%20in,either%20subclassing%20AbstractUser%20or%20AbstractBaseUser%20.
2
3 from django.contrib.auth.base_user import BaseUserManager
4 from django.utils.translation import gettext_lazy as _
5
6
7 class CustomUserManager(BaseUserManager):
8 """
9 Custom user model manager where email is the unique identifiers
10 for authentication instead of usernames.
11 """
12 def create_user(self, email, password, **extra_fields):
13 """
14 Create and save a User with the given email and password.
15 """
16 if not email:
17 raise ValueError(_('The Email must be set'))
18 email = self.normalize_email(email)
19 user = self.model(email=email, **extra_fields)
20 user.set_password(password)
21 user.save()
22 return user
23
24 def create_superuser(self, email, password, **extra_fields):
25 """
26 Create and save a SuperUser with the given email and password.
27 """
28 extra_fields.setdefault('is_staff', True)
29 extra_fields.setdefault('is_superuser', True)
30 extra_fields.setdefault('is_active', True)
31
32 if extra_fields.get('is_staff') is not True:
33 raise ValueError(_('Superuser must have is_staff=True.'))
34 if extra_fields.get('is_superuser') is not True:
35 raise ValueError(_('Superuser must have is_superuser=True.'))
36 return self.create_user(email, password, **extra_fields)
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django/cantusdb_project/users/managers.py b/django/cantusdb_project/users/managers.py
--- a/django/cantusdb_project/users/managers.py
+++ b/django/cantusdb_project/users/managers.py
@@ -2,7 +2,7 @@
from django.contrib.auth.base_user import BaseUserManager
from django.utils.translation import gettext_lazy as _
-
+from django.contrib.auth.models import Group
class CustomUserManager(BaseUserManager):
"""
@@ -33,4 +33,7 @@
raise ValueError(_('Superuser must have is_staff=True.'))
if extra_fields.get('is_superuser') is not True:
raise ValueError(_('Superuser must have is_superuser=True.'))
- return self.create_user(email, password, **extra_fields)
\ No newline at end of file
+ user = self.create_user(email, password, **extra_fields)
+ pm = Group.objects.get(name='project manager')
+ pm.user_set.add(user)
+ return user
\ No newline at end of file
|
{"golden_diff": "diff --git a/django/cantusdb_project/users/managers.py b/django/cantusdb_project/users/managers.py\n--- a/django/cantusdb_project/users/managers.py\n+++ b/django/cantusdb_project/users/managers.py\n@@ -2,7 +2,7 @@\n \n from django.contrib.auth.base_user import BaseUserManager\n from django.utils.translation import gettext_lazy as _\n-\n+from django.contrib.auth.models import Group\n \n class CustomUserManager(BaseUserManager):\n \"\"\"\n@@ -33,4 +33,7 @@\n raise ValueError(_('Superuser must have is_staff=True.'))\n if extra_fields.get('is_superuser') is not True:\n raise ValueError(_('Superuser must have is_superuser=True.'))\n- return self.create_user(email, password, **extra_fields)\n\\ No newline at end of file\n+ user = self.create_user(email, password, **extra_fields)\n+ pm = Group.objects.get(name='project manager') \n+ pm.user_set.add(user)\n+ return user\n\\ No newline at end of file\n", "issue": "Chant-edit page doesn't load for Admin user\nThe chant-edit page (e.g., http://127.0.0.1:3122/edit-volpiano/702611?pk=705019) takes forever to load for Admin user. \r\nI was logged in with my Admin account (i.e., superuser). Ideally, this should give me power to access and change anything. \r\n\r\nI also check with my project manager account and it loaded fine.\n", "before_files": [{"content": "# https://testdriven.io/blog/django-custom-user-model/#:~:text=The%20default%20User%20model%20in,either%20subclassing%20AbstractUser%20or%20AbstractBaseUser%20.\n\nfrom django.contrib.auth.base_user import BaseUserManager\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass CustomUserManager(BaseUserManager):\n \"\"\"\n Custom user model manager where email is the unique identifiers\n for authentication instead of usernames.\n \"\"\"\n def create_user(self, email, password, **extra_fields):\n \"\"\"\n Create and save a User with the given email and password.\n \"\"\"\n if not email:\n raise ValueError(_('The Email must be set'))\n email = self.normalize_email(email)\n user = self.model(email=email, **extra_fields)\n user.set_password(password)\n user.save()\n return user\n\n def create_superuser(self, email, password, **extra_fields):\n \"\"\"\n Create and save a SuperUser with the given email and password.\n \"\"\"\n extra_fields.setdefault('is_staff', True)\n extra_fields.setdefault('is_superuser', True)\n extra_fields.setdefault('is_active', True)\n\n if extra_fields.get('is_staff') is not True:\n raise ValueError(_('Superuser must have is_staff=True.'))\n if extra_fields.get('is_superuser') is not True:\n raise ValueError(_('Superuser must have is_superuser=True.'))\n return self.create_user(email, password, **extra_fields)", "path": "django/cantusdb_project/users/managers.py"}], "after_files": [{"content": "# https://testdriven.io/blog/django-custom-user-model/#:~:text=The%20default%20User%20model%20in,either%20subclassing%20AbstractUser%20or%20AbstractBaseUser%20.\n\nfrom django.contrib.auth.base_user import BaseUserManager\nfrom django.utils.translation import gettext_lazy as _\nfrom django.contrib.auth.models import Group\n\nclass CustomUserManager(BaseUserManager):\n \"\"\"\n Custom user model manager where email is the unique identifiers\n for authentication instead of usernames.\n \"\"\"\n def create_user(self, email, password, **extra_fields):\n \"\"\"\n Create and save a User with the given email and password.\n \"\"\"\n if not email:\n raise ValueError(_('The Email must be set'))\n email = self.normalize_email(email)\n user = self.model(email=email, **extra_fields)\n user.set_password(password)\n user.save()\n return user\n\n def create_superuser(self, email, password, **extra_fields):\n \"\"\"\n Create and save a SuperUser with the given email and password.\n \"\"\"\n extra_fields.setdefault('is_staff', True)\n extra_fields.setdefault('is_superuser', True)\n extra_fields.setdefault('is_active', True)\n\n if extra_fields.get('is_staff') is not True:\n raise ValueError(_('Superuser must have is_staff=True.'))\n if extra_fields.get('is_superuser') is not True:\n raise ValueError(_('Superuser must have is_superuser=True.'))\n user = self.create_user(email, password, **extra_fields)\n pm = Group.objects.get(name='project manager') \n pm.user_set.add(user)\n return user", "path": "django/cantusdb_project/users/managers.py"}]}
| 759 | 229 |
gh_patches_debug_41565
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-2259
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sticky cookies are improperly formatted.
##### Steps to reproduce the problem:
1. Go to http://www.html-kit.com/tools/cookietester/
2. Click 'Set Test Cookie'
3. Observe that one cookie is sent to the server.
4. Remove the cookie.
5. launch mitmproxy with `mitmproxy -t html-kit\.com` and tell your browser to use it as a proxy
6. Reload the page.
7. Click 'Set Test Cookie'
8. Observe that two 'cookies' are sent to the server.
##### Any other comments? What have you tried so far?
There appears to be a comma in the output of mitmproxy, even though it is surrounded by quotes. It's possible, then that this is a parsing fail on the tool's end caused by a difference in what's sent back for the format of the date. Still, should it really be changing that?
##### System information
Arch Linux, freshly updated.
Mitmproxy version: 2.0.1 (release version)
Python version: 3.6.0
Platform: Linux-4.10.6-1-ARCH-x86_64-with-glibc2.3.4
SSL version: OpenSSL 1.0.2k 26 Jan 2017
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/addons/stickycookie.py`
Content:
```
1 import collections
2 from http import cookiejar
3
4 from mitmproxy.net.http import cookies
5
6 from mitmproxy import exceptions
7 from mitmproxy import flowfilter
8 from mitmproxy import ctx
9
10
11 def ckey(attrs, f):
12 """
13 Returns a (domain, port, path) tuple.
14 """
15 domain = f.request.host
16 path = "/"
17 if "domain" in attrs:
18 domain = attrs["domain"]
19 if "path" in attrs:
20 path = attrs["path"]
21 return (domain, f.request.port, path)
22
23
24 def domain_match(a, b):
25 if cookiejar.domain_match(a, b):
26 return True
27 elif cookiejar.domain_match(a, b.strip(".")):
28 return True
29 return False
30
31
32 class StickyCookie:
33 def __init__(self):
34 self.jar = collections.defaultdict(dict)
35 self.flt = None
36
37 def configure(self, updated):
38 if "stickycookie" in updated:
39 if ctx.options.stickycookie:
40 flt = flowfilter.parse(ctx.options.stickycookie)
41 if not flt:
42 raise exceptions.OptionsError(
43 "stickycookie: invalid filter expression: %s" % ctx.options.stickycookie
44 )
45 self.flt = flt
46 else:
47 self.flt = None
48
49 def response(self, flow):
50 if self.flt:
51 for name, (value, attrs) in flow.response.cookies.items(multi=True):
52 # FIXME: We now know that Cookie.py screws up some cookies with
53 # valid RFC 822/1123 datetime specifications for expiry. Sigh.
54 dom_port_path = ckey(attrs, flow)
55
56 if domain_match(flow.request.host, dom_port_path[0]):
57 if cookies.is_expired(attrs):
58 # Remove the cookie from jar
59 self.jar[dom_port_path].pop(name, None)
60
61 # If all cookies of a dom_port_path have been removed
62 # then remove it from the jar itself
63 if not self.jar[dom_port_path]:
64 self.jar.pop(dom_port_path, None)
65 else:
66 b = attrs.copy()
67 b.insert(0, name, value)
68 self.jar[dom_port_path][name] = b
69
70 def request(self, flow):
71 if self.flt:
72 l = []
73 if flowfilter.match(self.flt, flow):
74 for domain, port, path in self.jar.keys():
75 match = [
76 domain_match(flow.request.host, domain),
77 flow.request.port == port,
78 flow.request.path.startswith(path)
79 ]
80 if all(match):
81 c = self.jar[(domain, port, path)]
82 l.extend([cookies.format_cookie_header(c[name].items(multi=True)) for name in c.keys()])
83 if l:
84 # FIXME: we need to formalise this...
85 flow.request.stickycookie = True
86 flow.request.headers["cookie"] = "; ".join(l)
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mitmproxy/addons/stickycookie.py b/mitmproxy/addons/stickycookie.py
--- a/mitmproxy/addons/stickycookie.py
+++ b/mitmproxy/addons/stickycookie.py
@@ -1,14 +1,14 @@
import collections
from http import cookiejar
+from typing import List, Tuple, Dict, Optional # noqa
+from mitmproxy import http, flowfilter, ctx, exceptions
from mitmproxy.net.http import cookies
-from mitmproxy import exceptions
-from mitmproxy import flowfilter
-from mitmproxy import ctx
+TOrigin = Tuple[str, int, str]
-def ckey(attrs, f):
+def ckey(attrs: Dict[str, str], f: http.HTTPFlow) -> TOrigin:
"""
Returns a (domain, port, path) tuple.
"""
@@ -21,18 +21,18 @@
return (domain, f.request.port, path)
-def domain_match(a, b):
- if cookiejar.domain_match(a, b):
+def domain_match(a: str, b: str) -> bool:
+ if cookiejar.domain_match(a, b): # type: ignore
return True
- elif cookiejar.domain_match(a, b.strip(".")):
+ elif cookiejar.domain_match(a, b.strip(".")): # type: ignore
return True
return False
class StickyCookie:
def __init__(self):
- self.jar = collections.defaultdict(dict)
- self.flt = None
+ self.jar = collections.defaultdict(dict) # type: Dict[TOrigin, Dict[str, str]]
+ self.flt = None # type: Optional[flowfilter.TFilter]
def configure(self, updated):
if "stickycookie" in updated:
@@ -46,7 +46,7 @@
else:
self.flt = None
- def response(self, flow):
+ def response(self, flow: http.HTTPFlow):
if self.flt:
for name, (value, attrs) in flow.response.cookies.items(multi=True):
# FIXME: We now know that Cookie.py screws up some cookies with
@@ -63,24 +63,21 @@
if not self.jar[dom_port_path]:
self.jar.pop(dom_port_path, None)
else:
- b = attrs.copy()
- b.insert(0, name, value)
- self.jar[dom_port_path][name] = b
+ self.jar[dom_port_path][name] = value
- def request(self, flow):
+ def request(self, flow: http.HTTPFlow):
if self.flt:
- l = []
+ cookie_list = [] # type: List[Tuple[str,str]]
if flowfilter.match(self.flt, flow):
- for domain, port, path in self.jar.keys():
+ for (domain, port, path), c in self.jar.items():
match = [
domain_match(flow.request.host, domain),
flow.request.port == port,
flow.request.path.startswith(path)
]
if all(match):
- c = self.jar[(domain, port, path)]
- l.extend([cookies.format_cookie_header(c[name].items(multi=True)) for name in c.keys()])
- if l:
+ cookie_list.extend(c.items())
+ if cookie_list:
# FIXME: we need to formalise this...
- flow.request.stickycookie = True
- flow.request.headers["cookie"] = "; ".join(l)
+ flow.metadata["stickycookie"] = True
+ flow.request.headers["cookie"] = cookies.format_cookie_header(cookie_list)
|
{"golden_diff": "diff --git a/mitmproxy/addons/stickycookie.py b/mitmproxy/addons/stickycookie.py\n--- a/mitmproxy/addons/stickycookie.py\n+++ b/mitmproxy/addons/stickycookie.py\n@@ -1,14 +1,14 @@\n import collections\n from http import cookiejar\n+from typing import List, Tuple, Dict, Optional # noqa\n \n+from mitmproxy import http, flowfilter, ctx, exceptions\n from mitmproxy.net.http import cookies\n \n-from mitmproxy import exceptions\n-from mitmproxy import flowfilter\n-from mitmproxy import ctx\n+TOrigin = Tuple[str, int, str]\n \n \n-def ckey(attrs, f):\n+def ckey(attrs: Dict[str, str], f: http.HTTPFlow) -> TOrigin:\n \"\"\"\n Returns a (domain, port, path) tuple.\n \"\"\"\n@@ -21,18 +21,18 @@\n return (domain, f.request.port, path)\n \n \n-def domain_match(a, b):\n- if cookiejar.domain_match(a, b):\n+def domain_match(a: str, b: str) -> bool:\n+ if cookiejar.domain_match(a, b): # type: ignore\n return True\n- elif cookiejar.domain_match(a, b.strip(\".\")):\n+ elif cookiejar.domain_match(a, b.strip(\".\")): # type: ignore\n return True\n return False\n \n \n class StickyCookie:\n def __init__(self):\n- self.jar = collections.defaultdict(dict)\n- self.flt = None\n+ self.jar = collections.defaultdict(dict) # type: Dict[TOrigin, Dict[str, str]]\n+ self.flt = None # type: Optional[flowfilter.TFilter]\n \n def configure(self, updated):\n if \"stickycookie\" in updated:\n@@ -46,7 +46,7 @@\n else:\n self.flt = None\n \n- def response(self, flow):\n+ def response(self, flow: http.HTTPFlow):\n if self.flt:\n for name, (value, attrs) in flow.response.cookies.items(multi=True):\n # FIXME: We now know that Cookie.py screws up some cookies with\n@@ -63,24 +63,21 @@\n if not self.jar[dom_port_path]:\n self.jar.pop(dom_port_path, None)\n else:\n- b = attrs.copy()\n- b.insert(0, name, value)\n- self.jar[dom_port_path][name] = b\n+ self.jar[dom_port_path][name] = value\n \n- def request(self, flow):\n+ def request(self, flow: http.HTTPFlow):\n if self.flt:\n- l = []\n+ cookie_list = [] # type: List[Tuple[str,str]]\n if flowfilter.match(self.flt, flow):\n- for domain, port, path in self.jar.keys():\n+ for (domain, port, path), c in self.jar.items():\n match = [\n domain_match(flow.request.host, domain),\n flow.request.port == port,\n flow.request.path.startswith(path)\n ]\n if all(match):\n- c = self.jar[(domain, port, path)]\n- l.extend([cookies.format_cookie_header(c[name].items(multi=True)) for name in c.keys()])\n- if l:\n+ cookie_list.extend(c.items())\n+ if cookie_list:\n # FIXME: we need to formalise this...\n- flow.request.stickycookie = True\n- flow.request.headers[\"cookie\"] = \"; \".join(l)\n+ flow.metadata[\"stickycookie\"] = True\n+ flow.request.headers[\"cookie\"] = cookies.format_cookie_header(cookie_list)\n", "issue": "Sticky cookies are improperly formatted.\n##### Steps to reproduce the problem:\r\n\r\n1. Go to http://www.html-kit.com/tools/cookietester/\r\n2. Click 'Set Test Cookie'\r\n3. Observe that one cookie is sent to the server.\r\n4. Remove the cookie.\r\n5. launch mitmproxy with `mitmproxy -t html-kit\\.com` and tell your browser to use it as a proxy\r\n6. Reload the page.\r\n7. Click 'Set Test Cookie'\r\n8. Observe that two 'cookies' are sent to the server.\r\n\r\n##### Any other comments? What have you tried so far?\r\nThere appears to be a comma in the output of mitmproxy, even though it is surrounded by quotes. It's possible, then that this is a parsing fail on the tool's end caused by a difference in what's sent back for the format of the date. Still, should it really be changing that?\r\n\r\n##### System information\r\nArch Linux, freshly updated.\r\n\r\nMitmproxy version: 2.0.1 (release version) \r\nPython version: 3.6.0\r\nPlatform: Linux-4.10.6-1-ARCH-x86_64-with-glibc2.3.4\r\nSSL version: OpenSSL 1.0.2k 26 Jan 2017\r\n\n", "before_files": [{"content": "import collections\nfrom http import cookiejar\n\nfrom mitmproxy.net.http import cookies\n\nfrom mitmproxy import exceptions\nfrom mitmproxy import flowfilter\nfrom mitmproxy import ctx\n\n\ndef ckey(attrs, f):\n \"\"\"\n Returns a (domain, port, path) tuple.\n \"\"\"\n domain = f.request.host\n path = \"/\"\n if \"domain\" in attrs:\n domain = attrs[\"domain\"]\n if \"path\" in attrs:\n path = attrs[\"path\"]\n return (domain, f.request.port, path)\n\n\ndef domain_match(a, b):\n if cookiejar.domain_match(a, b):\n return True\n elif cookiejar.domain_match(a, b.strip(\".\")):\n return True\n return False\n\n\nclass StickyCookie:\n def __init__(self):\n self.jar = collections.defaultdict(dict)\n self.flt = None\n\n def configure(self, updated):\n if \"stickycookie\" in updated:\n if ctx.options.stickycookie:\n flt = flowfilter.parse(ctx.options.stickycookie)\n if not flt:\n raise exceptions.OptionsError(\n \"stickycookie: invalid filter expression: %s\" % ctx.options.stickycookie\n )\n self.flt = flt\n else:\n self.flt = None\n\n def response(self, flow):\n if self.flt:\n for name, (value, attrs) in flow.response.cookies.items(multi=True):\n # FIXME: We now know that Cookie.py screws up some cookies with\n # valid RFC 822/1123 datetime specifications for expiry. Sigh.\n dom_port_path = ckey(attrs, flow)\n\n if domain_match(flow.request.host, dom_port_path[0]):\n if cookies.is_expired(attrs):\n # Remove the cookie from jar\n self.jar[dom_port_path].pop(name, None)\n\n # If all cookies of a dom_port_path have been removed\n # then remove it from the jar itself\n if not self.jar[dom_port_path]:\n self.jar.pop(dom_port_path, None)\n else:\n b = attrs.copy()\n b.insert(0, name, value)\n self.jar[dom_port_path][name] = b\n\n def request(self, flow):\n if self.flt:\n l = []\n if flowfilter.match(self.flt, flow):\n for domain, port, path in self.jar.keys():\n match = [\n domain_match(flow.request.host, domain),\n flow.request.port == port,\n flow.request.path.startswith(path)\n ]\n if all(match):\n c = self.jar[(domain, port, path)]\n l.extend([cookies.format_cookie_header(c[name].items(multi=True)) for name in c.keys()])\n if l:\n # FIXME: we need to formalise this...\n flow.request.stickycookie = True\n flow.request.headers[\"cookie\"] = \"; \".join(l)\n", "path": "mitmproxy/addons/stickycookie.py"}], "after_files": [{"content": "import collections\nfrom http import cookiejar\nfrom typing import List, Tuple, Dict, Optional # noqa\n\nfrom mitmproxy import http, flowfilter, ctx, exceptions\nfrom mitmproxy.net.http import cookies\n\nTOrigin = Tuple[str, int, str]\n\n\ndef ckey(attrs: Dict[str, str], f: http.HTTPFlow) -> TOrigin:\n \"\"\"\n Returns a (domain, port, path) tuple.\n \"\"\"\n domain = f.request.host\n path = \"/\"\n if \"domain\" in attrs:\n domain = attrs[\"domain\"]\n if \"path\" in attrs:\n path = attrs[\"path\"]\n return (domain, f.request.port, path)\n\n\ndef domain_match(a: str, b: str) -> bool:\n if cookiejar.domain_match(a, b): # type: ignore\n return True\n elif cookiejar.domain_match(a, b.strip(\".\")): # type: ignore\n return True\n return False\n\n\nclass StickyCookie:\n def __init__(self):\n self.jar = collections.defaultdict(dict) # type: Dict[TOrigin, Dict[str, str]]\n self.flt = None # type: Optional[flowfilter.TFilter]\n\n def configure(self, updated):\n if \"stickycookie\" in updated:\n if ctx.options.stickycookie:\n flt = flowfilter.parse(ctx.options.stickycookie)\n if not flt:\n raise exceptions.OptionsError(\n \"stickycookie: invalid filter expression: %s\" % ctx.options.stickycookie\n )\n self.flt = flt\n else:\n self.flt = None\n\n def response(self, flow: http.HTTPFlow):\n if self.flt:\n for name, (value, attrs) in flow.response.cookies.items(multi=True):\n # FIXME: We now know that Cookie.py screws up some cookies with\n # valid RFC 822/1123 datetime specifications for expiry. Sigh.\n dom_port_path = ckey(attrs, flow)\n\n if domain_match(flow.request.host, dom_port_path[0]):\n if cookies.is_expired(attrs):\n # Remove the cookie from jar\n self.jar[dom_port_path].pop(name, None)\n\n # If all cookies of a dom_port_path have been removed\n # then remove it from the jar itself\n if not self.jar[dom_port_path]:\n self.jar.pop(dom_port_path, None)\n else:\n self.jar[dom_port_path][name] = value\n\n def request(self, flow: http.HTTPFlow):\n if self.flt:\n cookie_list = [] # type: List[Tuple[str,str]]\n if flowfilter.match(self.flt, flow):\n for (domain, port, path), c in self.jar.items():\n match = [\n domain_match(flow.request.host, domain),\n flow.request.port == port,\n flow.request.path.startswith(path)\n ]\n if all(match):\n cookie_list.extend(c.items())\n if cookie_list:\n # FIXME: we need to formalise this...\n flow.metadata[\"stickycookie\"] = True\n flow.request.headers[\"cookie\"] = cookies.format_cookie_header(cookie_list)\n", "path": "mitmproxy/addons/stickycookie.py"}]}
| 1,333 | 797 |
gh_patches_debug_19676
|
rasdani/github-patches
|
git_diff
|
holoviz__holoviews-1845
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Table broken with bokeh 0.12.7
When displaying a Table with bokeh 0.12.7 I currently see the following error:
```
Javascript error adding output!
Error: SlickGrid's 'enableColumnReorder = true' option requires jquery-ui.sortable module to be loaded
See your browser Javascript console for more details.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `holoviews/plotting/bokeh/tabular.py`
Content:
```
1 from bokeh.models.widgets import DataTable, TableColumn
2
3 import param
4
5 import numpy as np
6 from ...core import Dataset
7 from ...element import ItemTable
8 from ..plot import GenericElementPlot
9 from .plot import BokehPlot
10
11 class TablePlot(BokehPlot, GenericElementPlot):
12
13 height = param.Number(default=None)
14
15 width = param.Number(default=400)
16
17 style_opts = ['row_headers', 'selectable', 'editable',
18 'sortable', 'fit_columns', 'width', 'height']
19
20 finalize_hooks = param.HookList(default=[], doc="""
21 Optional list of hooks called when finalizing a column.
22 The hook is passed the plot object and the displayed
23 object, and other plotting handles can be accessed via plot.handles.""")
24
25 _update_handles = ['source', 'glyph']
26
27 def __init__(self, element, plot=None, **params):
28 super(TablePlot, self).__init__(element, **params)
29 self.handles = {} if plot is None else self.handles['plot']
30 element_ids = self.hmap.traverse(lambda x: id(x), [Dataset, ItemTable])
31 self.static = len(set(element_ids)) == 1 and len(self.keys) == len(self.hmap)
32 self.callbacks = [] # Callback support on tables not implemented
33
34
35 def _execute_hooks(self, element):
36 """
37 Executes finalize hooks
38 """
39 for hook in self.finalize_hooks:
40 try:
41 hook(self, element)
42 except Exception as e:
43 self.warning("Plotting hook %r could not be applied:\n\n %s" % (hook, e))
44
45
46 def get_data(self, element, ranges=None, empty=False):
47 dims = element.dimensions()
48 data = {d: np.array([]) if empty else element.dimension_values(d)
49 for d in dims}
50 mapping = {d.name: d.name for d in dims}
51 data = {d.name: values if values.dtype.kind in "if" else list(map(d.pprint_value, values))
52 for d, values in data.items()}
53 return data, mapping
54
55
56 def initialize_plot(self, ranges=None, plot=None, plots=None, source=None):
57 """
58 Initializes a new plot object with the last available frame.
59 """
60 # Get element key and ranges for frame
61 element = self.hmap.last
62 key = self.keys[-1]
63 self.current_frame = element
64 self.current_key = key
65
66 data, _ = self.get_data(element, ranges)
67 if source is None:
68 source = self._init_datasource(data)
69 self.handles['source'] = source
70
71 dims = element.dimensions()
72 columns = [TableColumn(field=d.name, title=d.pprint_label) for d in dims]
73 properties = self.lookup_options(element, 'style')[self.cyclic_index]
74 table = DataTable(source=source, columns=columns, height=self.height,
75 width=self.width, **properties)
76 self.handles['plot'] = table
77 self.handles['glyph_renderer'] = table
78 self._execute_hooks(element)
79 self.drawn = True
80
81 return table
82
83
84 @property
85 def current_handles(self):
86 """
87 Returns a list of the plot objects to update.
88 """
89 handles = []
90 if self.static and not self.dynamic:
91 return handles
92
93
94 element = self.current_frame
95 previous_id = self.handles.get('previous_id', None)
96 current_id = None if self.current_frame is None else element._plot_id
97 for handle in self._update_handles:
98 if (handle == 'source' and self.dynamic and current_id == previous_id):
99 continue
100 if handle in self.handles:
101 handles.append(self.handles[handle])
102
103 # Cache frame object id to skip updating if unchanged
104 if self.dynamic:
105 self.handles['previous_id'] = current_id
106
107 return handles
108
109
110 def update_frame(self, key, ranges=None, plot=None):
111 """
112 Updates an existing plot with data corresponding
113 to the key.
114 """
115 element = self._get_frame(key)
116 source = self.handles['source']
117 data, _ = self.get_data(element, ranges)
118 self._update_datasource(source, data)
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/holoviews/plotting/bokeh/tabular.py b/holoviews/plotting/bokeh/tabular.py
--- a/holoviews/plotting/bokeh/tabular.py
+++ b/holoviews/plotting/bokeh/tabular.py
@@ -7,6 +7,8 @@
from ...element import ItemTable
from ..plot import GenericElementPlot
from .plot import BokehPlot
+from .util import bokeh_version
+
class TablePlot(BokehPlot, GenericElementPlot):
@@ -71,6 +73,8 @@
dims = element.dimensions()
columns = [TableColumn(field=d.name, title=d.pprint_label) for d in dims]
properties = self.lookup_options(element, 'style')[self.cyclic_index]
+ if bokeh_version > '0.12.7':
+ properties['reorderable'] = False
table = DataTable(source=source, columns=columns, height=self.height,
width=self.width, **properties)
self.handles['plot'] = table
|
{"golden_diff": "diff --git a/holoviews/plotting/bokeh/tabular.py b/holoviews/plotting/bokeh/tabular.py\n--- a/holoviews/plotting/bokeh/tabular.py\n+++ b/holoviews/plotting/bokeh/tabular.py\n@@ -7,6 +7,8 @@\n from ...element import ItemTable\n from ..plot import GenericElementPlot\n from .plot import BokehPlot\n+from .util import bokeh_version\n+\n \n class TablePlot(BokehPlot, GenericElementPlot):\n \n@@ -71,6 +73,8 @@\n dims = element.dimensions()\n columns = [TableColumn(field=d.name, title=d.pprint_label) for d in dims]\n properties = self.lookup_options(element, 'style')[self.cyclic_index]\n+ if bokeh_version > '0.12.7':\n+ properties['reorderable'] = False\n table = DataTable(source=source, columns=columns, height=self.height,\n width=self.width, **properties)\n self.handles['plot'] = table\n", "issue": "Table broken with bokeh 0.12.7\nWhen displaying a Table with bokeh 0.12.7 I currently see the following error:\r\n\r\n```\r\nJavascript error adding output!\r\nError: SlickGrid's 'enableColumnReorder = true' option requires jquery-ui.sortable module to be loaded\r\nSee your browser Javascript console for more details.\r\n```\n", "before_files": [{"content": "from bokeh.models.widgets import DataTable, TableColumn\n\nimport param\n\nimport numpy as np\nfrom ...core import Dataset\nfrom ...element import ItemTable\nfrom ..plot import GenericElementPlot\nfrom .plot import BokehPlot\n\nclass TablePlot(BokehPlot, GenericElementPlot):\n\n height = param.Number(default=None)\n\n width = param.Number(default=400)\n\n style_opts = ['row_headers', 'selectable', 'editable',\n 'sortable', 'fit_columns', 'width', 'height']\n\n finalize_hooks = param.HookList(default=[], doc=\"\"\"\n Optional list of hooks called when finalizing a column.\n The hook is passed the plot object and the displayed\n object, and other plotting handles can be accessed via plot.handles.\"\"\")\n\n _update_handles = ['source', 'glyph']\n\n def __init__(self, element, plot=None, **params):\n super(TablePlot, self).__init__(element, **params)\n self.handles = {} if plot is None else self.handles['plot']\n element_ids = self.hmap.traverse(lambda x: id(x), [Dataset, ItemTable])\n self.static = len(set(element_ids)) == 1 and len(self.keys) == len(self.hmap)\n self.callbacks = [] # Callback support on tables not implemented\n\n\n def _execute_hooks(self, element):\n \"\"\"\n Executes finalize hooks\n \"\"\"\n for hook in self.finalize_hooks:\n try:\n hook(self, element)\n except Exception as e:\n self.warning(\"Plotting hook %r could not be applied:\\n\\n %s\" % (hook, e))\n\n\n def get_data(self, element, ranges=None, empty=False):\n dims = element.dimensions()\n data = {d: np.array([]) if empty else element.dimension_values(d)\n for d in dims}\n mapping = {d.name: d.name for d in dims}\n data = {d.name: values if values.dtype.kind in \"if\" else list(map(d.pprint_value, values))\n for d, values in data.items()}\n return data, mapping\n\n\n def initialize_plot(self, ranges=None, plot=None, plots=None, source=None):\n \"\"\"\n Initializes a new plot object with the last available frame.\n \"\"\"\n # Get element key and ranges for frame\n element = self.hmap.last\n key = self.keys[-1]\n self.current_frame = element\n self.current_key = key\n\n data, _ = self.get_data(element, ranges)\n if source is None:\n source = self._init_datasource(data)\n self.handles['source'] = source\n\n dims = element.dimensions()\n columns = [TableColumn(field=d.name, title=d.pprint_label) for d in dims]\n properties = self.lookup_options(element, 'style')[self.cyclic_index]\n table = DataTable(source=source, columns=columns, height=self.height,\n width=self.width, **properties)\n self.handles['plot'] = table\n self.handles['glyph_renderer'] = table\n self._execute_hooks(element)\n self.drawn = True\n\n return table\n\n\n @property\n def current_handles(self):\n \"\"\"\n Returns a list of the plot objects to update.\n \"\"\"\n handles = []\n if self.static and not self.dynamic:\n return handles\n\n\n element = self.current_frame\n previous_id = self.handles.get('previous_id', None)\n current_id = None if self.current_frame is None else element._plot_id\n for handle in self._update_handles:\n if (handle == 'source' and self.dynamic and current_id == previous_id):\n continue\n if handle in self.handles:\n handles.append(self.handles[handle])\n\n # Cache frame object id to skip updating if unchanged\n if self.dynamic:\n self.handles['previous_id'] = current_id\n\n return handles\n\n\n def update_frame(self, key, ranges=None, plot=None):\n \"\"\"\n Updates an existing plot with data corresponding\n to the key.\n \"\"\"\n element = self._get_frame(key)\n source = self.handles['source']\n data, _ = self.get_data(element, ranges)\n self._update_datasource(source, data)\n", "path": "holoviews/plotting/bokeh/tabular.py"}], "after_files": [{"content": "from bokeh.models.widgets import DataTable, TableColumn\n\nimport param\n\nimport numpy as np\nfrom ...core import Dataset\nfrom ...element import ItemTable\nfrom ..plot import GenericElementPlot\nfrom .plot import BokehPlot\nfrom .util import bokeh_version\n\n\nclass TablePlot(BokehPlot, GenericElementPlot):\n\n height = param.Number(default=None)\n\n width = param.Number(default=400)\n\n style_opts = ['row_headers', 'selectable', 'editable',\n 'sortable', 'fit_columns', 'width', 'height']\n\n finalize_hooks = param.HookList(default=[], doc=\"\"\"\n Optional list of hooks called when finalizing a column.\n The hook is passed the plot object and the displayed\n object, and other plotting handles can be accessed via plot.handles.\"\"\")\n\n _update_handles = ['source', 'glyph']\n\n def __init__(self, element, plot=None, **params):\n super(TablePlot, self).__init__(element, **params)\n self.handles = {} if plot is None else self.handles['plot']\n element_ids = self.hmap.traverse(lambda x: id(x), [Dataset, ItemTable])\n self.static = len(set(element_ids)) == 1 and len(self.keys) == len(self.hmap)\n self.callbacks = [] # Callback support on tables not implemented\n\n\n def _execute_hooks(self, element):\n \"\"\"\n Executes finalize hooks\n \"\"\"\n for hook in self.finalize_hooks:\n try:\n hook(self, element)\n except Exception as e:\n self.warning(\"Plotting hook %r could not be applied:\\n\\n %s\" % (hook, e))\n\n\n def get_data(self, element, ranges=None, empty=False):\n dims = element.dimensions()\n data = {d: np.array([]) if empty else element.dimension_values(d)\n for d in dims}\n mapping = {d.name: d.name for d in dims}\n data = {d.name: values if values.dtype.kind in \"if\" else list(map(d.pprint_value, values))\n for d, values in data.items()}\n return data, mapping\n\n\n def initialize_plot(self, ranges=None, plot=None, plots=None, source=None):\n \"\"\"\n Initializes a new plot object with the last available frame.\n \"\"\"\n # Get element key and ranges for frame\n element = self.hmap.last\n key = self.keys[-1]\n self.current_frame = element\n self.current_key = key\n\n data, _ = self.get_data(element, ranges)\n if source is None:\n source = self._init_datasource(data)\n self.handles['source'] = source\n\n dims = element.dimensions()\n columns = [TableColumn(field=d.name, title=d.pprint_label) for d in dims]\n properties = self.lookup_options(element, 'style')[self.cyclic_index]\n if bokeh_version > '0.12.7':\n properties['reorderable'] = False\n table = DataTable(source=source, columns=columns, height=self.height,\n width=self.width, **properties)\n self.handles['plot'] = table\n self.handles['glyph_renderer'] = table\n self._execute_hooks(element)\n self.drawn = True\n\n return table\n\n\n @property\n def current_handles(self):\n \"\"\"\n Returns a list of the plot objects to update.\n \"\"\"\n handles = []\n if self.static and not self.dynamic:\n return handles\n\n\n element = self.current_frame\n previous_id = self.handles.get('previous_id', None)\n current_id = None if self.current_frame is None else element._plot_id\n for handle in self._update_handles:\n if (handle == 'source' and self.dynamic and current_id == previous_id):\n continue\n if handle in self.handles:\n handles.append(self.handles[handle])\n\n # Cache frame object id to skip updating if unchanged\n if self.dynamic:\n self.handles['previous_id'] = current_id\n\n return handles\n\n\n def update_frame(self, key, ranges=None, plot=None):\n \"\"\"\n Updates an existing plot with data corresponding\n to the key.\n \"\"\"\n element = self._get_frame(key)\n source = self.handles['source']\n data, _ = self.get_data(element, ranges)\n self._update_datasource(source, data)\n", "path": "holoviews/plotting/bokeh/tabular.py"}]}
| 1,509 | 240 |
gh_patches_debug_20905
|
rasdani/github-patches
|
git_diff
|
nvaccess__nvda-11972
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dev docs: globalVars.appDir is not defined when attempting to build docs with Sphinx
Hi,
Related to #11970 and actually blocks it:
### Steps to reproduce:
When trying to build dev docs using "scons devDocs":
1. Run scons devDocs.
2. Once Sphinx is instlaled and ready, Sphinx will try to build dev docs for the source code.
### Actual behavior:
A traceback shows up, ending with:
AttributeError: module 'globalVars' has no attribute 'appDir'
### Expected behavior:
No errors with the dev docs building completing.
### System configuration
#### NVDA installed/portable/running from source:
Source
#### NVDA version:
Alpha-21561,7e5ffde2391c
#### Windows version:
Windows 10 Version 20H2 (build 19042.685)
#### Name and version of other software in use when reproducing the issue:
Python 3.7.9
#### Other information about your system:
N/A
### Other questions
#### Does the issue still occur after restarting your computer?
Yes
#### Have you tried any other versions of NVDA? If so, please report their behaviors.
Not applicable
#### If addons are disabled, is your problem still occurring?
Not applicable
#### Did you try to run the COM registry fixing tool in NVDA menu / tools?
Not applicable
### Cause:
This is caused by config file error, specifically when a mock config.conf instance is created. Prior to this, importing config module fails because globalVars.appDir is not defined by the time scons devDocs is run.
### Solution:
one solution is to define globalVars.appDir to point to the source directory.
Thanks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `devDocs/conf.py`
Content:
```
1 # A part of NonVisual Desktop Access (NVDA)
2 # Copyright (C) 2019 NV Access Limited, Leonard de Ruijter
3 # This file is covered by the GNU General Public License.
4 # See the file COPYING for more details.
5
6 # Configuration file for the Sphinx documentation builder.
7
8 # -- Path setup --------------------------------------------------------------
9
10 import os
11 import sys
12 sys.path.insert(0, os.path.abspath('../source'))
13 import sourceEnv # noqa: F401, E402
14
15 # Initialize languageHandler so that sphinx is able to deal with translatable strings.
16 import languageHandler # noqa: E402
17 languageHandler.setLanguage("en")
18
19 # Initialize globalvars.appArgs to something sensible.
20 import globalVars # noqa: E402
21
22
23 class AppArgs:
24 # Set an empty comnfig path
25 # This is never used as we don't initialize config, but some modules expect this to be set.
26 configPath = ""
27 secure = False
28 disableAddons = True
29 launcher = False
30
31
32 globalVars.appArgs = AppArgs()
33
34 # Import NVDA's versionInfo module.
35 import versionInfo # noqa: E402
36 # Set a suitable updateVersionType for the updateCheck module to be imported
37 versionInfo.updateVersionType = "stable"
38
39 # -- Project information -----------------------------------------------------
40
41 project = versionInfo.name
42 copyright = versionInfo.copyright
43 author = versionInfo.publisher
44
45 # The major project version
46 version = versionInfo.formatVersionForGUI(
47 versionInfo.version_year,
48 versionInfo.version_major,
49 versionInfo.version_minor
50 )
51
52 # The full version, including alpha/beta/rc tags
53 release = versionInfo.version
54
55 # -- General configuration ---------------------------------------------------
56
57 default_role = 'py:obj'
58
59 # Add any Sphinx extension module names here, as strings. They can be
60 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
61 # ones.
62 extensions = [
63 'sphinx.ext.autodoc',
64 ]
65
66 # Add any paths that contain templates here, relative to this directory.
67 templates_path = ['_templates']
68
69 # List of patterns, relative to source directory, that match files and
70 # directories to ignore when looking for source files.
71 # This pattern also affects html_static_path and html_extra_path.
72 exclude_patterns = [
73 "_build"
74 ]
75
76
77 # -- Options for HTML output -------------------------------------------------
78
79 # The theme to use for HTML and HTML Help pages.
80
81 html_theme = "sphinx_rtd_theme"
82
83 # Add any paths that contain custom static files (such as style sheets) here,
84 # relative to this directory. They are copied after the builtin static files,
85 # so a file named "default.css" will overwrite the builtin "default.css".
86 html_static_path = ['_static']
87
88 # -- Extension configuration -------------------------------------------------
89
90 # sphinx.ext.autodoc configuration
91
92 # Both the class’ and the __init__ method’s docstring are concatenated and inserted.
93 autoclass_content = "both"
94 autodoc_member_order = 'bysource'
95 autodoc_mock_imports = [
96 "louis", # Not our project
97 ]
98
99 # Perform some manual mocking of specific objects.
100 # autodoc can only mock modules, not objects.
101 from sphinx.ext.autodoc.mock import _make_subclass # noqa: E402
102
103 import config # noqa: E402
104 # Mock an instance of the configuration manager.
105 config.conf = _make_subclass("conf", "config")()
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/devDocs/conf.py b/devDocs/conf.py
--- a/devDocs/conf.py
+++ b/devDocs/conf.py
@@ -1,5 +1,5 @@
# A part of NonVisual Desktop Access (NVDA)
-# Copyright (C) 2019 NV Access Limited, Leonard de Ruijter
+# Copyright (C) 2019-2020 NV Access Limited, Leonard de Ruijter, Joseph Lee
# This file is covered by the GNU General Public License.
# See the file COPYING for more details.
@@ -16,7 +16,7 @@
import languageHandler # noqa: E402
languageHandler.setLanguage("en")
-# Initialize globalvars.appArgs to something sensible.
+# Initialize globalVars.appArgs to something sensible.
import globalVars # noqa: E402
@@ -30,6 +30,11 @@
globalVars.appArgs = AppArgs()
+# #11971: NVDA is not running, therefore app dir is undefined.
+# Therefore tell NVDA that apt source directory is app dir.
+appDir = os.path.join("..", "source")
+globalVars.appDir = os.path.abspath(appDir)
+
# Import NVDA's versionInfo module.
import versionInfo # noqa: E402
|
{"golden_diff": "diff --git a/devDocs/conf.py b/devDocs/conf.py\n--- a/devDocs/conf.py\n+++ b/devDocs/conf.py\n@@ -1,5 +1,5 @@\n # A part of NonVisual Desktop Access (NVDA)\n-# Copyright (C) 2019 NV Access Limited, Leonard de Ruijter\n+# Copyright (C) 2019-2020 NV Access Limited, Leonard de Ruijter, Joseph Lee\n # This file is covered by the GNU General Public License.\n # See the file COPYING for more details.\n \n@@ -16,7 +16,7 @@\n import languageHandler # noqa: E402\n languageHandler.setLanguage(\"en\")\n \n-# Initialize globalvars.appArgs to something sensible.\n+# Initialize globalVars.appArgs to something sensible.\n import globalVars # noqa: E402\n \n \n@@ -30,6 +30,11 @@\n \n \n globalVars.appArgs = AppArgs()\n+# #11971: NVDA is not running, therefore app dir is undefined.\n+# Therefore tell NVDA that apt source directory is app dir.\n+appDir = os.path.join(\"..\", \"source\")\n+globalVars.appDir = os.path.abspath(appDir)\n+\n \n # Import NVDA's versionInfo module.\n import versionInfo # noqa: E402\n", "issue": "Dev docs: globalVars.appDir is not defined when attempting to build docs with Sphinx\nHi,\r\nRelated to #11970 and actually blocks it:\r\n\r\n### Steps to reproduce:\r\nWhen trying to build dev docs using \"scons devDocs\":\r\n\r\n1. Run scons devDocs.\r\n2. Once Sphinx is instlaled and ready, Sphinx will try to build dev docs for the source code.\r\n\r\n### Actual behavior:\r\nA traceback shows up, ending with:\r\nAttributeError: module 'globalVars' has no attribute 'appDir'\r\n\r\n### Expected behavior:\r\nNo errors with the dev docs building completing.\r\n\r\n### System configuration\r\n#### NVDA installed/portable/running from source:\r\nSource\r\n\r\n#### NVDA version:\r\nAlpha-21561,7e5ffde2391c\r\n\r\n#### Windows version:\r\nWindows 10 Version 20H2 (build 19042.685)\r\n\r\n#### Name and version of other software in use when reproducing the issue:\r\nPython 3.7.9\r\n\r\n#### Other information about your system:\r\nN/A\r\n\r\n### Other questions\r\n#### Does the issue still occur after restarting your computer?\r\nYes\r\n\r\n#### Have you tried any other versions of NVDA? If so, please report their behaviors.\r\nNot applicable\r\n\r\n#### If addons are disabled, is your problem still occurring?\r\nNot applicable\r\n\r\n#### Did you try to run the COM registry fixing tool in NVDA menu / tools?\r\nNot applicable\r\n\r\n### Cause:\r\nThis is caused by config file error, specifically when a mock config.conf instance is created. Prior to this, importing config module fails because globalVars.appDir is not defined by the time scons devDocs is run.\r\n\r\n### Solution:\r\none solution is to define globalVars.appDir to point to the source directory.\r\n\r\nThanks.\n", "before_files": [{"content": "# A part of NonVisual Desktop Access (NVDA)\n# Copyright (C) 2019 NV Access Limited, Leonard de Ruijter\n# This file is covered by the GNU General Public License.\n# See the file COPYING for more details.\n\n# Configuration file for the Sphinx documentation builder.\n\n# -- Path setup --------------------------------------------------------------\n\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('../source'))\nimport sourceEnv # noqa: F401, E402\n\n# Initialize languageHandler so that sphinx is able to deal with translatable strings.\nimport languageHandler # noqa: E402\nlanguageHandler.setLanguage(\"en\")\n\n# Initialize globalvars.appArgs to something sensible.\nimport globalVars # noqa: E402\n\n\nclass AppArgs:\n\t# Set an empty comnfig path\n\t# This is never used as we don't initialize config, but some modules expect this to be set.\n\tconfigPath = \"\"\n\tsecure = False\n\tdisableAddons = True\n\tlauncher = False\n\n\nglobalVars.appArgs = AppArgs()\n\n# Import NVDA's versionInfo module.\nimport versionInfo # noqa: E402\n# Set a suitable updateVersionType for the updateCheck module to be imported\nversionInfo.updateVersionType = \"stable\"\n\n# -- Project information -----------------------------------------------------\n\nproject = versionInfo.name\ncopyright = versionInfo.copyright\nauthor = versionInfo.publisher\n\n# The major project version\nversion = versionInfo.formatVersionForGUI(\n\tversionInfo.version_year,\n\tversionInfo.version_major,\n\tversionInfo.version_minor\n)\n\n# The full version, including alpha/beta/rc tags\nrelease = versionInfo.version\n\n# -- General configuration ---------------------------------------------------\n\ndefault_role = 'py:obj'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n\t'sphinx.ext.autodoc',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\n\t\"_build\"\n]\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.\n\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# -- Extension configuration -------------------------------------------------\n\n# sphinx.ext.autodoc configuration\n\n# Both the class\u2019 and the __init__ method\u2019s docstring are concatenated and inserted.\nautoclass_content = \"both\"\nautodoc_member_order = 'bysource'\nautodoc_mock_imports = [\n\t\"louis\", # Not our project\n]\n\n# Perform some manual mocking of specific objects.\n# autodoc can only mock modules, not objects.\nfrom sphinx.ext.autodoc.mock import _make_subclass # noqa: E402\n\nimport config # noqa: E402\n# Mock an instance of the configuration manager.\nconfig.conf = _make_subclass(\"conf\", \"config\")()\n", "path": "devDocs/conf.py"}], "after_files": [{"content": "# A part of NonVisual Desktop Access (NVDA)\n# Copyright (C) 2019-2020 NV Access Limited, Leonard de Ruijter, Joseph Lee\n# This file is covered by the GNU General Public License.\n# See the file COPYING for more details.\n\n# Configuration file for the Sphinx documentation builder.\n\n# -- Path setup --------------------------------------------------------------\n\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('../source'))\nimport sourceEnv # noqa: F401, E402\n\n# Initialize languageHandler so that sphinx is able to deal with translatable strings.\nimport languageHandler # noqa: E402\nlanguageHandler.setLanguage(\"en\")\n\n# Initialize globalVars.appArgs to something sensible.\nimport globalVars # noqa: E402\n\n\nclass AppArgs:\n\t# Set an empty comnfig path\n\t# This is never used as we don't initialize config, but some modules expect this to be set.\n\tconfigPath = \"\"\n\tsecure = False\n\tdisableAddons = True\n\tlauncher = False\n\n\nglobalVars.appArgs = AppArgs()\n# #11971: NVDA is not running, therefore app dir is undefined.\n# Therefore tell NVDA that apt source directory is app dir.\nappDir = os.path.join(\"..\", \"source\")\nglobalVars.appDir = os.path.abspath(appDir)\n\n\n# Import NVDA's versionInfo module.\nimport versionInfo # noqa: E402\n# Set a suitable updateVersionType for the updateCheck module to be imported\nversionInfo.updateVersionType = \"stable\"\n\n# -- Project information -----------------------------------------------------\n\nproject = versionInfo.name\ncopyright = versionInfo.copyright\nauthor = versionInfo.publisher\n\n# The major project version\nversion = versionInfo.formatVersionForGUI(\n\tversionInfo.version_year,\n\tversionInfo.version_major,\n\tversionInfo.version_minor\n)\n\n# The full version, including alpha/beta/rc tags\nrelease = versionInfo.version\n\n# -- General configuration ---------------------------------------------------\n\ndefault_role = 'py:obj'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n\t'sphinx.ext.autodoc',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\n\t\"_build\"\n]\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.\n\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# -- Extension configuration -------------------------------------------------\n\n# sphinx.ext.autodoc configuration\n\n# Both the class\u2019 and the __init__ method\u2019s docstring are concatenated and inserted.\nautoclass_content = \"both\"\nautodoc_member_order = 'bysource'\nautodoc_mock_imports = [\n\t\"louis\", # Not our project\n]\n\n# Perform some manual mocking of specific objects.\n# autodoc can only mock modules, not objects.\nfrom sphinx.ext.autodoc.mock import _make_subclass # noqa: E402\n\nimport config # noqa: E402\n# Mock an instance of the configuration manager.\nconfig.conf = _make_subclass(\"conf\", \"config\")()\n", "path": "devDocs/conf.py"}]}
| 1,589 | 291 |
gh_patches_debug_27466
|
rasdani/github-patches
|
git_diff
|
vyperlang__vyper-543
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Question] Attack Vector described in Vipercoin's `approve` annotation
In [L89 of `vipercoin.v.py`](https://github.com/ethereum/viper/blob/master/examples/tokens/vipercoin.v.py#L89), the `approve` method has an annotation that begins like this
>To prevent attack vectors like the one described here and discussed here,
I don't see any description of the attack vectors described, perhaps there should be an external link here? Point me in the right direction and I can make the PR for it. :)
Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/tokens/vipercoin.v.py`
Content:
```
1 # Viper Port of MyToken
2 # THIS CONTRACT HAS NOT BEEN AUDITED!
3 # ERC20 details at:
4 # https://theethereum.wiki/w/index.php/ERC20_Token_Standard
5 # https://github.com/ethereum/EIPs/blob/master/EIPS/eip-20-token-standard.md
6 # Events of the token.
7 Transfer: __log__({_from: indexed(address), _to: indexed(address), _value: num256})
8 Approval: __log__({_owner: indexed(address), _spender: indexed(address), _value: num256})
9
10
11 # Variables of the token.
12 name: bytes32
13 symbol: bytes32
14 totalSupply: num
15 decimals: num
16 balances: num[address]
17 allowed: num[address][address]
18
19 @public
20 def __init__(_name: bytes32, _symbol: bytes32, _decimals: num, _initialSupply: num):
21
22 self.name = _name
23 self.symbol = _symbol
24 self.decimals = _decimals
25 self.totalSupply = _initialSupply * 10 ** _decimals
26 self.balances[msg.sender] = self.totalSupply
27
28 @public
29 @constant
30 def symbol() -> bytes32:
31
32 return self.symbol
33
34 @public
35 @constant
36 def name() -> bytes32:
37
38 return self.name
39
40
41 # What is the balance of a particular account?
42 @public
43 @constant
44 def balanceOf(_owner: address) -> num256:
45
46 return as_num256(self.balances[_owner])
47
48
49 # Return total supply of token.
50 @public
51 @constant
52 def totalSupply() -> num256:
53
54 return as_num256(self.totalSupply)
55
56
57 # Send `_value` tokens to `_to` from your account
58 @public
59 def transfer(_to: address, _amount: num(num256)) -> bool:
60
61 assert self.balances[msg.sender] >= _amount
62 assert self.balances[_to] + _amount >= self.balances[_to]
63
64 self.balances[msg.sender] -= _amount # Subtract from the sender
65 self.balances[_to] += _amount # Add the same to the recipient
66 log.Transfer(msg.sender, _to, as_num256(_amount)) # log transfer event.
67
68 return True
69
70
71 # Transfer allowed tokens from a specific account to another.
72 @public
73 def transferFrom(_from: address, _to: address, _value: num(num256)) -> bool:
74
75 assert _value <= self.allowed[_from][msg.sender]
76 assert _value <= self.balances[_from]
77
78 self.balances[_from] -= _value # decrease balance of from address.
79 self.allowed[_from][msg.sender] -= _value # decrease allowance.
80 self.balances[_to] += _value # incease balance of to address.
81 log.Transfer(_from, _to, as_num256(_value)) # log transfer event.
82
83 return True
84
85
86 # Allow _spender to withdraw from your account, multiple times, up to the _value amount.
87 # If this function is called again it overwrites the current allowance with _value.
88 #
89 # NOTE: To prevent attack vectors like the one described here and discussed here,
90 # clients SHOULD make sure to create user interfaces in such a way that they
91 # set the allowance first to 0 before setting it to another value for the
92 # same spender. THOUGH The contract itself shouldn't enforce it, to allow
93 # backwards compatilibilty with contracts deployed before.
94 #
95 @public
96 def approve(_spender: address, _amount: num(num256)) -> bool:
97
98 self.allowed[msg.sender][_spender] = _amount
99 log.Approval(msg.sender, _spender, as_num256(_amount))
100
101 return True
102
103
104 # Get the allowance an address has to spend anothers' token.
105 @public
106 def allowance(_owner: address, _spender: address) -> num256:
107
108 return as_num256(self.allowed[_owner][_spender])
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/tokens/vipercoin.v.py b/examples/tokens/vipercoin.v.py
--- a/examples/tokens/vipercoin.v.py
+++ b/examples/tokens/vipercoin.v.py
@@ -86,12 +86,15 @@
# Allow _spender to withdraw from your account, multiple times, up to the _value amount.
# If this function is called again it overwrites the current allowance with _value.
#
-# NOTE: To prevent attack vectors like the one described here and discussed here,
-# clients SHOULD make sure to create user interfaces in such a way that they
+# NOTE: We would like to prevent attack vectors like the one described here:
+# https://docs.google.com/document/d/1YLPtQxZu1UAvO9cZ1O2RPXBbT0mooh4DYKjA_jp-RLM/edit#heading=h.m9fhqynw2xvt
+# and discussed here:
+# https://github.com/ethereum/EIPs/issues/20#issuecomment-263524729
+#
+# Clients SHOULD make sure to create user interfaces in such a way that they
# set the allowance first to 0 before setting it to another value for the
# same spender. THOUGH The contract itself shouldn't enforce it, to allow
# backwards compatilibilty with contracts deployed before.
-#
@public
def approve(_spender: address, _amount: num(num256)) -> bool:
@@ -101,7 +104,7 @@
return True
-# Get the allowance an address has to spend anothers' token.
+# Get the allowance an address has to spend another's token.
@public
def allowance(_owner: address, _spender: address) -> num256:
|
{"golden_diff": "diff --git a/examples/tokens/vipercoin.v.py b/examples/tokens/vipercoin.v.py\n--- a/examples/tokens/vipercoin.v.py\n+++ b/examples/tokens/vipercoin.v.py\n@@ -86,12 +86,15 @@\n # Allow _spender to withdraw from your account, multiple times, up to the _value amount.\n # If this function is called again it overwrites the current allowance with _value.\n #\n-# NOTE: To prevent attack vectors like the one described here and discussed here,\n-# clients SHOULD make sure to create user interfaces in such a way that they\n+# NOTE: We would like to prevent attack vectors like the one described here:\n+# https://docs.google.com/document/d/1YLPtQxZu1UAvO9cZ1O2RPXBbT0mooh4DYKjA_jp-RLM/edit#heading=h.m9fhqynw2xvt\n+# and discussed here:\n+# https://github.com/ethereum/EIPs/issues/20#issuecomment-263524729\n+#\n+# Clients SHOULD make sure to create user interfaces in such a way that they\n # set the allowance first to 0 before setting it to another value for the\n # same spender. THOUGH The contract itself shouldn't enforce it, to allow\n # backwards compatilibilty with contracts deployed before.\n-#\n @public\n def approve(_spender: address, _amount: num(num256)) -> bool:\n \n@@ -101,7 +104,7 @@\n return True\n \n \n-# Get the allowance an address has to spend anothers' token.\n+# Get the allowance an address has to spend another's token.\n @public\n def allowance(_owner: address, _spender: address) -> num256:\n", "issue": "[Question] Attack Vector described in Vipercoin's `approve` annotation\nIn [L89 of `vipercoin.v.py`](https://github.com/ethereum/viper/blob/master/examples/tokens/vipercoin.v.py#L89), the `approve` method has an annotation that begins like this\r\n\r\n>To prevent attack vectors like the one described here and discussed here,\r\n\r\nI don't see any description of the attack vectors described, perhaps there should be an external link here? Point me in the right direction and I can make the PR for it. :)\r\n\r\nThanks!\n", "before_files": [{"content": "# Viper Port of MyToken\n# THIS CONTRACT HAS NOT BEEN AUDITED!\n# ERC20 details at:\n# https://theethereum.wiki/w/index.php/ERC20_Token_Standard\n# https://github.com/ethereum/EIPs/blob/master/EIPS/eip-20-token-standard.md\n# Events of the token.\nTransfer: __log__({_from: indexed(address), _to: indexed(address), _value: num256})\nApproval: __log__({_owner: indexed(address), _spender: indexed(address), _value: num256})\n\n\n# Variables of the token.\nname: bytes32\nsymbol: bytes32\ntotalSupply: num\ndecimals: num\nbalances: num[address]\nallowed: num[address][address]\n\n@public\ndef __init__(_name: bytes32, _symbol: bytes32, _decimals: num, _initialSupply: num):\n \n self.name = _name\n self.symbol = _symbol\n self.decimals = _decimals\n self.totalSupply = _initialSupply * 10 ** _decimals\n self.balances[msg.sender] = self.totalSupply\n\n@public\n@constant\ndef symbol() -> bytes32:\n\n return self.symbol\n\n@public\n@constant\ndef name() -> bytes32:\n\n return self.name\n\n\n# What is the balance of a particular account?\n@public\n@constant\ndef balanceOf(_owner: address) -> num256:\n\n return as_num256(self.balances[_owner])\n\n\n# Return total supply of token.\n@public\n@constant\ndef totalSupply() -> num256:\n\n return as_num256(self.totalSupply)\n\n\n# Send `_value` tokens to `_to` from your account\n@public\ndef transfer(_to: address, _amount: num(num256)) -> bool:\n\n assert self.balances[msg.sender] >= _amount\n assert self.balances[_to] + _amount >= self.balances[_to]\n\n self.balances[msg.sender] -= _amount # Subtract from the sender\n self.balances[_to] += _amount # Add the same to the recipient\n log.Transfer(msg.sender, _to, as_num256(_amount)) # log transfer event.\n\n return True\n\n\n# Transfer allowed tokens from a specific account to another.\n@public\ndef transferFrom(_from: address, _to: address, _value: num(num256)) -> bool:\n\n assert _value <= self.allowed[_from][msg.sender]\n assert _value <= self.balances[_from]\n\n self.balances[_from] -= _value # decrease balance of from address.\n self.allowed[_from][msg.sender] -= _value # decrease allowance.\n self.balances[_to] += _value # incease balance of to address.\n log.Transfer(_from, _to, as_num256(_value)) # log transfer event.\n \n return True\n\n\n# Allow _spender to withdraw from your account, multiple times, up to the _value amount.\n# If this function is called again it overwrites the current allowance with _value.\n#\n# NOTE: To prevent attack vectors like the one described here and discussed here,\n# clients SHOULD make sure to create user interfaces in such a way that they\n# set the allowance first to 0 before setting it to another value for the\n# same spender. THOUGH The contract itself shouldn't enforce it, to allow\n# backwards compatilibilty with contracts deployed before.\n#\n@public\ndef approve(_spender: address, _amount: num(num256)) -> bool:\n\n self.allowed[msg.sender][_spender] = _amount\n log.Approval(msg.sender, _spender, as_num256(_amount))\n\n return True\n\n\n# Get the allowance an address has to spend anothers' token.\n@public\ndef allowance(_owner: address, _spender: address) -> num256:\n\n return as_num256(self.allowed[_owner][_spender])\n", "path": "examples/tokens/vipercoin.v.py"}], "after_files": [{"content": "# Viper Port of MyToken\n# THIS CONTRACT HAS NOT BEEN AUDITED!\n# ERC20 details at:\n# https://theethereum.wiki/w/index.php/ERC20_Token_Standard\n# https://github.com/ethereum/EIPs/blob/master/EIPS/eip-20-token-standard.md\n# Events of the token.\nTransfer: __log__({_from: indexed(address), _to: indexed(address), _value: num256})\nApproval: __log__({_owner: indexed(address), _spender: indexed(address), _value: num256})\n\n\n# Variables of the token.\nname: bytes32\nsymbol: bytes32\ntotalSupply: num\ndecimals: num\nbalances: num[address]\nallowed: num[address][address]\n\n@public\ndef __init__(_name: bytes32, _symbol: bytes32, _decimals: num, _initialSupply: num):\n \n self.name = _name\n self.symbol = _symbol\n self.decimals = _decimals\n self.totalSupply = _initialSupply * 10 ** _decimals\n self.balances[msg.sender] = self.totalSupply\n\n@public\n@constant\ndef symbol() -> bytes32:\n\n return self.symbol\n\n@public\n@constant\ndef name() -> bytes32:\n\n return self.name\n\n\n# What is the balance of a particular account?\n@public\n@constant\ndef balanceOf(_owner: address) -> num256:\n\n return as_num256(self.balances[_owner])\n\n\n# Return total supply of token.\n@public\n@constant\ndef totalSupply() -> num256:\n\n return as_num256(self.totalSupply)\n\n\n# Send `_value` tokens to `_to` from your account\n@public\ndef transfer(_to: address, _amount: num(num256)) -> bool:\n\n assert self.balances[msg.sender] >= _amount\n assert self.balances[_to] + _amount >= self.balances[_to]\n\n self.balances[msg.sender] -= _amount # Subtract from the sender\n self.balances[_to] += _amount # Add the same to the recipient\n log.Transfer(msg.sender, _to, as_num256(_amount)) # log transfer event.\n\n return True\n\n\n# Transfer allowed tokens from a specific account to another.\n@public\ndef transferFrom(_from: address, _to: address, _value: num(num256)) -> bool:\n\n assert _value <= self.allowed[_from][msg.sender]\n assert _value <= self.balances[_from]\n\n self.balances[_from] -= _value # decrease balance of from address.\n self.allowed[_from][msg.sender] -= _value # decrease allowance.\n self.balances[_to] += _value # incease balance of to address.\n log.Transfer(_from, _to, as_num256(_value)) # log transfer event.\n \n return True\n\n\n# Allow _spender to withdraw from your account, multiple times, up to the _value amount.\n# If this function is called again it overwrites the current allowance with _value.\n#\n# NOTE: We would like to prevent attack vectors like the one described here:\n# https://docs.google.com/document/d/1YLPtQxZu1UAvO9cZ1O2RPXBbT0mooh4DYKjA_jp-RLM/edit#heading=h.m9fhqynw2xvt\n# and discussed here:\n# https://github.com/ethereum/EIPs/issues/20#issuecomment-263524729\n#\n# Clients SHOULD make sure to create user interfaces in such a way that they\n# set the allowance first to 0 before setting it to another value for the\n# same spender. THOUGH The contract itself shouldn't enforce it, to allow\n# backwards compatilibilty with contracts deployed before.\n@public\ndef approve(_spender: address, _amount: num(num256)) -> bool:\n\n self.allowed[msg.sender][_spender] = _amount\n log.Approval(msg.sender, _spender, as_num256(_amount))\n\n return True\n\n\n# Get the allowance an address has to spend another's token.\n@public\ndef allowance(_owner: address, _spender: address) -> num256:\n\n return as_num256(self.allowed[_owner][_spender])\n", "path": "examples/tokens/vipercoin.v.py"}]}
| 1,503 | 400 |
gh_patches_debug_22788
|
rasdani/github-patches
|
git_diff
|
CTPUG__wafer-193
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove'unicode' calls from wafer
Current wafer using python 3 fails on several admin tasks because `UserProfile.__str__` tries to call `unicode`, which is obviously not defined.
We should handle the difference between python 2 and python 3 correctly in this situation.
There are a couple of other calls to unicode() that look dangerous in the error paths in /registration/views.py that should probably be fixed as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wafer/users/models.py`
Content:
```
1 from django.contrib.auth.models import User
2 from django.db import models
3 from django.db.models.signals import post_save
4 from django.utils.encoding import python_2_unicode_compatible
5
6 from libravatar import libravatar_url
7 try:
8 from urllib2 import urlparse
9 except ImportError:
10 from urllib import parse as urlparse
11 from django.utils.http import urlquote
12
13 from wafer.talks.models import ACCEPTED, PENDING
14
15
16 @python_2_unicode_compatible
17 class UserProfile(models.Model):
18 user = models.OneToOneField(User)
19 contact_number = models.CharField(max_length=16, null=True, blank=True)
20 bio = models.TextField(null=True, blank=True)
21
22 homepage = models.CharField(max_length=256, null=True, blank=True)
23 # We should probably do social auth instead
24 # And care about other code hosting sites...
25 twitter_handle = models.CharField(max_length=15, null=True, blank=True)
26 github_username = models.CharField(max_length=32, null=True, blank=True)
27
28 def __str__(self):
29 return unicode(self.user)
30
31 def accepted_talks(self):
32 return self.user.talks.filter(status=ACCEPTED)
33
34 def pending_talks(self):
35 return self.user.talks.filter(status=PENDING)
36
37 def avatar_url(self, size=96, https=True, default='mm'):
38 if not self.user.email:
39 return None
40 return libravatar_url(self.user.email, size=size, https=https,
41 default=default)
42
43 def homepage_url(self):
44 """Try ensure we prepend http: to the url if there's nothing there
45
46 This is to ensure we're not generating relative links in the
47 user templates."""
48 if not self.homepage:
49 return self.homepage
50 parsed = urlparse.urlparse(self.homepage)
51 if parsed.scheme:
52 return self.homepage
53 # Vague sanity check
54 abs_url = ''.join(['http://', self.homepage])
55 if urlparse.urlparse(abs_url).scheme == 'http':
56 return abs_url
57 return self.homepage
58
59 def display_name(self):
60 return self.user.get_full_name() or self.user.username
61
62
63 def create_user_profile(sender, instance, created, raw=False, **kwargs):
64 if raw:
65 return
66 if created:
67 UserProfile.objects.create(user=instance)
68
69 post_save.connect(create_user_profile, sender=User)
70
```
Path: `wafer/registration/views.py`
Content:
```
1 import urllib
2
3 from django.contrib.auth import login
4 from django.contrib import messages
5 from django.core.urlresolvers import reverse
6 from django.conf import settings
7 from django.http import Http404, HttpResponseRedirect
8
9 from wafer.registration.sso import SSOError, debian_sso, github_sso
10
11
12 def redirect_profile(request):
13 '''
14 The default destination from logging in, redirect to the actual profile URL
15 '''
16 if request.user.is_authenticated():
17 return HttpResponseRedirect(reverse('wafer_user_profile',
18 args=(request.user.username,)))
19 else:
20 return HttpResponseRedirect(reverse('wafer_page', args=('index',)))
21
22
23 def github_login(request):
24 if 'github' not in settings.WAFER_SSO:
25 raise Http404()
26
27 if 'code' not in request.GET:
28 return HttpResponseRedirect(
29 'https://github.com/login/oauth/authorize?' + urllib.urlencode({
30 'client_id': settings.WAFER_GITHUB_CLIENT_ID,
31 'redirect_uri': request.build_absolute_uri(
32 reverse(github_login)),
33 'scope': 'user:email',
34 'state': request.META['CSRF_COOKIE'],
35 }))
36
37 try:
38 if request.GET['state'] != request.META['CSRF_COOKIE']:
39 raise SSOError('Incorrect state')
40
41 user = github_sso(request.GET['code'])
42 except SSOError as e:
43 messages.error(request, unicode(e))
44 return HttpResponseRedirect(reverse('auth_login'))
45
46 login(request, user)
47 return redirect_profile(request)
48
49
50 def debian_login(request):
51 if 'debian' not in settings.WAFER_SSO:
52 raise Http404()
53
54 try:
55 user = debian_sso(request.META)
56 except SSOError as e:
57 messages.error(request, unicode(e))
58 return HttpResponseRedirect(reverse('auth_login'))
59
60 login(request, user)
61 return redirect_profile(request)
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wafer/registration/views.py b/wafer/registration/views.py
--- a/wafer/registration/views.py
+++ b/wafer/registration/views.py
@@ -40,7 +40,7 @@
user = github_sso(request.GET['code'])
except SSOError as e:
- messages.error(request, unicode(e))
+ messages.error(request, u'%s' % e)
return HttpResponseRedirect(reverse('auth_login'))
login(request, user)
@@ -54,7 +54,7 @@
try:
user = debian_sso(request.META)
except SSOError as e:
- messages.error(request, unicode(e))
+ messages.error(request, u'%s' % e)
return HttpResponseRedirect(reverse('auth_login'))
login(request, user)
diff --git a/wafer/users/models.py b/wafer/users/models.py
--- a/wafer/users/models.py
+++ b/wafer/users/models.py
@@ -26,7 +26,7 @@
github_username = models.CharField(max_length=32, null=True, blank=True)
def __str__(self):
- return unicode(self.user)
+ return u'%s' % self.user
def accepted_talks(self):
return self.user.talks.filter(status=ACCEPTED)
|
{"golden_diff": "diff --git a/wafer/registration/views.py b/wafer/registration/views.py\n--- a/wafer/registration/views.py\n+++ b/wafer/registration/views.py\n@@ -40,7 +40,7 @@\n \n user = github_sso(request.GET['code'])\n except SSOError as e:\n- messages.error(request, unicode(e))\n+ messages.error(request, u'%s' % e)\n return HttpResponseRedirect(reverse('auth_login'))\n \n login(request, user)\n@@ -54,7 +54,7 @@\n try:\n user = debian_sso(request.META)\n except SSOError as e:\n- messages.error(request, unicode(e))\n+ messages.error(request, u'%s' % e)\n return HttpResponseRedirect(reverse('auth_login'))\n \n login(request, user)\ndiff --git a/wafer/users/models.py b/wafer/users/models.py\n--- a/wafer/users/models.py\n+++ b/wafer/users/models.py\n@@ -26,7 +26,7 @@\n github_username = models.CharField(max_length=32, null=True, blank=True)\n \n def __str__(self):\n- return unicode(self.user)\n+ return u'%s' % self.user\n \n def accepted_talks(self):\n return self.user.talks.filter(status=ACCEPTED)\n", "issue": "Remove'unicode' calls from wafer\nCurrent wafer using python 3 fails on several admin tasks because `UserProfile.__str__` tries to call `unicode`, which is obviously not defined.\n\nWe should handle the difference between python 2 and python 3 correctly in this situation.\n\nThere are a couple of other calls to unicode() that look dangerous in the error paths in /registration/views.py that should probably be fixed as well.\n\n", "before_files": [{"content": "from django.contrib.auth.models import User\nfrom django.db import models\nfrom django.db.models.signals import post_save\nfrom django.utils.encoding import python_2_unicode_compatible\n\nfrom libravatar import libravatar_url\ntry:\n from urllib2 import urlparse\nexcept ImportError:\n from urllib import parse as urlparse\nfrom django.utils.http import urlquote\n\nfrom wafer.talks.models import ACCEPTED, PENDING\n\n\n@python_2_unicode_compatible\nclass UserProfile(models.Model):\n user = models.OneToOneField(User)\n contact_number = models.CharField(max_length=16, null=True, blank=True)\n bio = models.TextField(null=True, blank=True)\n\n homepage = models.CharField(max_length=256, null=True, blank=True)\n # We should probably do social auth instead\n # And care about other code hosting sites...\n twitter_handle = models.CharField(max_length=15, null=True, blank=True)\n github_username = models.CharField(max_length=32, null=True, blank=True)\n\n def __str__(self):\n return unicode(self.user)\n\n def accepted_talks(self):\n return self.user.talks.filter(status=ACCEPTED)\n\n def pending_talks(self):\n return self.user.talks.filter(status=PENDING)\n\n def avatar_url(self, size=96, https=True, default='mm'):\n if not self.user.email:\n return None\n return libravatar_url(self.user.email, size=size, https=https,\n default=default)\n\n def homepage_url(self):\n \"\"\"Try ensure we prepend http: to the url if there's nothing there\n\n This is to ensure we're not generating relative links in the\n user templates.\"\"\"\n if not self.homepage:\n return self.homepage\n parsed = urlparse.urlparse(self.homepage)\n if parsed.scheme:\n return self.homepage\n # Vague sanity check\n abs_url = ''.join(['http://', self.homepage])\n if urlparse.urlparse(abs_url).scheme == 'http':\n return abs_url\n return self.homepage\n\n def display_name(self):\n return self.user.get_full_name() or self.user.username\n\n\ndef create_user_profile(sender, instance, created, raw=False, **kwargs):\n if raw:\n return\n if created:\n UserProfile.objects.create(user=instance)\n\npost_save.connect(create_user_profile, sender=User)\n", "path": "wafer/users/models.py"}, {"content": "import urllib\n\nfrom django.contrib.auth import login\nfrom django.contrib import messages\nfrom django.core.urlresolvers import reverse\nfrom django.conf import settings\nfrom django.http import Http404, HttpResponseRedirect\n\nfrom wafer.registration.sso import SSOError, debian_sso, github_sso\n\n\ndef redirect_profile(request):\n '''\n The default destination from logging in, redirect to the actual profile URL\n '''\n if request.user.is_authenticated():\n return HttpResponseRedirect(reverse('wafer_user_profile',\n args=(request.user.username,)))\n else:\n return HttpResponseRedirect(reverse('wafer_page', args=('index',)))\n\n\ndef github_login(request):\n if 'github' not in settings.WAFER_SSO:\n raise Http404()\n\n if 'code' not in request.GET:\n return HttpResponseRedirect(\n 'https://github.com/login/oauth/authorize?' + urllib.urlencode({\n 'client_id': settings.WAFER_GITHUB_CLIENT_ID,\n 'redirect_uri': request.build_absolute_uri(\n reverse(github_login)),\n 'scope': 'user:email',\n 'state': request.META['CSRF_COOKIE'],\n }))\n\n try:\n if request.GET['state'] != request.META['CSRF_COOKIE']:\n raise SSOError('Incorrect state')\n\n user = github_sso(request.GET['code'])\n except SSOError as e:\n messages.error(request, unicode(e))\n return HttpResponseRedirect(reverse('auth_login'))\n\n login(request, user)\n return redirect_profile(request)\n\n\ndef debian_login(request):\n if 'debian' not in settings.WAFER_SSO:\n raise Http404()\n\n try:\n user = debian_sso(request.META)\n except SSOError as e:\n messages.error(request, unicode(e))\n return HttpResponseRedirect(reverse('auth_login'))\n\n login(request, user)\n return redirect_profile(request)\n", "path": "wafer/registration/views.py"}], "after_files": [{"content": "from django.contrib.auth.models import User\nfrom django.db import models\nfrom django.db.models.signals import post_save\nfrom django.utils.encoding import python_2_unicode_compatible\n\nfrom libravatar import libravatar_url\ntry:\n from urllib2 import urlparse\nexcept ImportError:\n from urllib import parse as urlparse\nfrom django.utils.http import urlquote\n\nfrom wafer.talks.models import ACCEPTED, PENDING\n\n\n@python_2_unicode_compatible\nclass UserProfile(models.Model):\n user = models.OneToOneField(User)\n contact_number = models.CharField(max_length=16, null=True, blank=True)\n bio = models.TextField(null=True, blank=True)\n\n homepage = models.CharField(max_length=256, null=True, blank=True)\n # We should probably do social auth instead\n # And care about other code hosting sites...\n twitter_handle = models.CharField(max_length=15, null=True, blank=True)\n github_username = models.CharField(max_length=32, null=True, blank=True)\n\n def __str__(self):\n return u'%s' % self.user\n\n def accepted_talks(self):\n return self.user.talks.filter(status=ACCEPTED)\n\n def pending_talks(self):\n return self.user.talks.filter(status=PENDING)\n\n def avatar_url(self, size=96, https=True, default='mm'):\n if not self.user.email:\n return None\n return libravatar_url(self.user.email, size=size, https=https,\n default=default)\n\n def homepage_url(self):\n \"\"\"Try ensure we prepend http: to the url if there's nothing there\n\n This is to ensure we're not generating relative links in the\n user templates.\"\"\"\n if not self.homepage:\n return self.homepage\n parsed = urlparse.urlparse(self.homepage)\n if parsed.scheme:\n return self.homepage\n # Vague sanity check\n abs_url = ''.join(['http://', self.homepage])\n if urlparse.urlparse(abs_url).scheme == 'http':\n return abs_url\n return self.homepage\n\n def display_name(self):\n return self.user.get_full_name() or self.user.username\n\n\ndef create_user_profile(sender, instance, created, raw=False, **kwargs):\n if raw:\n return\n if created:\n UserProfile.objects.create(user=instance)\n\npost_save.connect(create_user_profile, sender=User)\n", "path": "wafer/users/models.py"}, {"content": "import urllib\n\nfrom django.contrib.auth import login\nfrom django.contrib import messages\nfrom django.core.urlresolvers import reverse\nfrom django.conf import settings\nfrom django.http import Http404, HttpResponseRedirect\n\nfrom wafer.registration.sso import SSOError, debian_sso, github_sso\n\n\ndef redirect_profile(request):\n '''\n The default destination from logging in, redirect to the actual profile URL\n '''\n if request.user.is_authenticated():\n return HttpResponseRedirect(reverse('wafer_user_profile',\n args=(request.user.username,)))\n else:\n return HttpResponseRedirect(reverse('wafer_page', args=('index',)))\n\n\ndef github_login(request):\n if 'github' not in settings.WAFER_SSO:\n raise Http404()\n\n if 'code' not in request.GET:\n return HttpResponseRedirect(\n 'https://github.com/login/oauth/authorize?' + urllib.urlencode({\n 'client_id': settings.WAFER_GITHUB_CLIENT_ID,\n 'redirect_uri': request.build_absolute_uri(\n reverse(github_login)),\n 'scope': 'user:email',\n 'state': request.META['CSRF_COOKIE'],\n }))\n\n try:\n if request.GET['state'] != request.META['CSRF_COOKIE']:\n raise SSOError('Incorrect state')\n\n user = github_sso(request.GET['code'])\n except SSOError as e:\n messages.error(request, u'%s' % e)\n return HttpResponseRedirect(reverse('auth_login'))\n\n login(request, user)\n return redirect_profile(request)\n\n\ndef debian_login(request):\n if 'debian' not in settings.WAFER_SSO:\n raise Http404()\n\n try:\n user = debian_sso(request.META)\n except SSOError as e:\n messages.error(request, u'%s' % e)\n return HttpResponseRedirect(reverse('auth_login'))\n\n login(request, user)\n return redirect_profile(request)\n", "path": "wafer/registration/views.py"}]}
| 1,520 | 292 |
gh_patches_debug_14315
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-contrib-1664
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix fastapi version
CI in `main` is failing right now because of `opentelemetery-instrumentation-fastapi` failures, fix `fastapi` version.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `instrumentation/opentelemetry-instrumentation-fastapi/src/opentelemetry/instrumentation/fastapi/package.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 _instruments = ("fastapi ~= 0.58",)
17
18 _supports_metrics = True
19
```
Path: `opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # DO NOT EDIT. THIS FILE WAS AUTOGENERATED FROM INSTRUMENTATION PACKAGES.
16 # RUN `python scripts/generate_instrumentation_bootstrap.py` TO REGENERATE.
17
18 libraries = {
19 "aio_pika": {
20 "library": "aio_pika >= 7.2.0, < 9.0.0",
21 "instrumentation": "opentelemetry-instrumentation-aio-pika==0.37b0.dev",
22 },
23 "aiohttp": {
24 "library": "aiohttp ~= 3.0",
25 "instrumentation": "opentelemetry-instrumentation-aiohttp-client==0.37b0.dev",
26 },
27 "aiopg": {
28 "library": "aiopg >= 0.13.0, < 2.0.0",
29 "instrumentation": "opentelemetry-instrumentation-aiopg==0.37b0.dev",
30 },
31 "asgiref": {
32 "library": "asgiref ~= 3.0",
33 "instrumentation": "opentelemetry-instrumentation-asgi==0.37b0.dev",
34 },
35 "asyncpg": {
36 "library": "asyncpg >= 0.12.0",
37 "instrumentation": "opentelemetry-instrumentation-asyncpg==0.37b0.dev",
38 },
39 "boto": {
40 "library": "boto~=2.0",
41 "instrumentation": "opentelemetry-instrumentation-boto==0.37b0.dev",
42 },
43 "boto3": {
44 "library": "boto3 ~= 1.0",
45 "instrumentation": "opentelemetry-instrumentation-boto3sqs==0.37b0.dev",
46 },
47 "botocore": {
48 "library": "botocore ~= 1.0",
49 "instrumentation": "opentelemetry-instrumentation-botocore==0.37b0.dev",
50 },
51 "celery": {
52 "library": "celery >= 4.0, < 6.0",
53 "instrumentation": "opentelemetry-instrumentation-celery==0.37b0.dev",
54 },
55 "confluent-kafka": {
56 "library": "confluent-kafka >= 1.8.2, < 2.0.0",
57 "instrumentation": "opentelemetry-instrumentation-confluent-kafka==0.37b0.dev",
58 },
59 "django": {
60 "library": "django >= 1.10",
61 "instrumentation": "opentelemetry-instrumentation-django==0.37b0.dev",
62 },
63 "elasticsearch": {
64 "library": "elasticsearch >= 2.0",
65 "instrumentation": "opentelemetry-instrumentation-elasticsearch==0.37b0.dev",
66 },
67 "falcon": {
68 "library": "falcon >= 1.4.1, < 4.0.0",
69 "instrumentation": "opentelemetry-instrumentation-falcon==0.37b0.dev",
70 },
71 "fastapi": {
72 "library": "fastapi ~= 0.58",
73 "instrumentation": "opentelemetry-instrumentation-fastapi==0.37b0.dev",
74 },
75 "flask": {
76 "library": "flask >= 1.0, < 3.0",
77 "instrumentation": "opentelemetry-instrumentation-flask==0.37b0.dev",
78 },
79 "grpcio": {
80 "library": "grpcio ~= 1.27",
81 "instrumentation": "opentelemetry-instrumentation-grpc==0.37b0.dev",
82 },
83 "httpx": {
84 "library": "httpx >= 0.18.0, <= 0.23.0",
85 "instrumentation": "opentelemetry-instrumentation-httpx==0.37b0.dev",
86 },
87 "jinja2": {
88 "library": "jinja2 >= 2.7, < 4.0",
89 "instrumentation": "opentelemetry-instrumentation-jinja2==0.37b0.dev",
90 },
91 "kafka-python": {
92 "library": "kafka-python >= 2.0",
93 "instrumentation": "opentelemetry-instrumentation-kafka-python==0.37b0.dev",
94 },
95 "mysql-connector-python": {
96 "library": "mysql-connector-python ~= 8.0",
97 "instrumentation": "opentelemetry-instrumentation-mysql==0.37b0.dev",
98 },
99 "pika": {
100 "library": "pika >= 0.12.0",
101 "instrumentation": "opentelemetry-instrumentation-pika==0.37b0.dev",
102 },
103 "psycopg2": {
104 "library": "psycopg2 >= 2.7.3.1",
105 "instrumentation": "opentelemetry-instrumentation-psycopg2==0.37b0.dev",
106 },
107 "pymemcache": {
108 "library": "pymemcache >= 1.3.5, < 4",
109 "instrumentation": "opentelemetry-instrumentation-pymemcache==0.37b0.dev",
110 },
111 "pymongo": {
112 "library": "pymongo >= 3.1, < 5.0",
113 "instrumentation": "opentelemetry-instrumentation-pymongo==0.37b0.dev",
114 },
115 "PyMySQL": {
116 "library": "PyMySQL < 2",
117 "instrumentation": "opentelemetry-instrumentation-pymysql==0.37b0.dev",
118 },
119 "pyramid": {
120 "library": "pyramid >= 1.7",
121 "instrumentation": "opentelemetry-instrumentation-pyramid==0.37b0.dev",
122 },
123 "redis": {
124 "library": "redis >= 2.6",
125 "instrumentation": "opentelemetry-instrumentation-redis==0.37b0.dev",
126 },
127 "remoulade": {
128 "library": "remoulade >= 0.50",
129 "instrumentation": "opentelemetry-instrumentation-remoulade==0.37b0.dev",
130 },
131 "requests": {
132 "library": "requests ~= 2.0",
133 "instrumentation": "opentelemetry-instrumentation-requests==0.37b0.dev",
134 },
135 "scikit-learn": {
136 "library": "scikit-learn ~= 0.24.0",
137 "instrumentation": "opentelemetry-instrumentation-sklearn==0.37b0.dev",
138 },
139 "sqlalchemy": {
140 "library": "sqlalchemy",
141 "instrumentation": "opentelemetry-instrumentation-sqlalchemy==0.37b0.dev",
142 },
143 "starlette": {
144 "library": "starlette ~= 0.13.0",
145 "instrumentation": "opentelemetry-instrumentation-starlette==0.37b0.dev",
146 },
147 "psutil": {
148 "library": "psutil >= 5",
149 "instrumentation": "opentelemetry-instrumentation-system-metrics==0.37b0.dev",
150 },
151 "tornado": {
152 "library": "tornado >= 5.1.1",
153 "instrumentation": "opentelemetry-instrumentation-tornado==0.37b0.dev",
154 },
155 "tortoise-orm": {
156 "library": "tortoise-orm >= 0.17.0",
157 "instrumentation": "opentelemetry-instrumentation-tortoiseorm==0.37b0.dev",
158 },
159 "pydantic": {
160 "library": "pydantic >= 1.10.2",
161 "instrumentation": "opentelemetry-instrumentation-tortoiseorm==0.37b0.dev",
162 },
163 "urllib3": {
164 "library": "urllib3 >= 1.0.0, < 2.0.0",
165 "instrumentation": "opentelemetry-instrumentation-urllib3==0.37b0.dev",
166 },
167 }
168 default_instrumentations = [
169 "opentelemetry-instrumentation-aws-lambda==0.37b0.dev",
170 "opentelemetry-instrumentation-dbapi==0.37b0.dev",
171 "opentelemetry-instrumentation-logging==0.37b0.dev",
172 "opentelemetry-instrumentation-sqlite3==0.37b0.dev",
173 "opentelemetry-instrumentation-urllib==0.37b0.dev",
174 "opentelemetry-instrumentation-wsgi==0.37b0.dev",
175 ]
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/instrumentation/opentelemetry-instrumentation-fastapi/src/opentelemetry/instrumentation/fastapi/package.py b/instrumentation/opentelemetry-instrumentation-fastapi/src/opentelemetry/instrumentation/fastapi/package.py
--- a/instrumentation/opentelemetry-instrumentation-fastapi/src/opentelemetry/instrumentation/fastapi/package.py
+++ b/instrumentation/opentelemetry-instrumentation-fastapi/src/opentelemetry/instrumentation/fastapi/package.py
@@ -13,6 +13,6 @@
# limitations under the License.
-_instruments = ("fastapi ~= 0.58",)
+_instruments = ("fastapi <= 0.90.1",)
_supports_metrics = True
diff --git a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py
--- a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py
+++ b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py
@@ -69,7 +69,7 @@
"instrumentation": "opentelemetry-instrumentation-falcon==0.37b0.dev",
},
"fastapi": {
- "library": "fastapi ~= 0.58",
+ "library": "fastapi <= 0.90.1",
"instrumentation": "opentelemetry-instrumentation-fastapi==0.37b0.dev",
},
"flask": {
|
{"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-fastapi/src/opentelemetry/instrumentation/fastapi/package.py b/instrumentation/opentelemetry-instrumentation-fastapi/src/opentelemetry/instrumentation/fastapi/package.py\n--- a/instrumentation/opentelemetry-instrumentation-fastapi/src/opentelemetry/instrumentation/fastapi/package.py\n+++ b/instrumentation/opentelemetry-instrumentation-fastapi/src/opentelemetry/instrumentation/fastapi/package.py\n@@ -13,6 +13,6 @@\n # limitations under the License.\n \n \n-_instruments = (\"fastapi ~= 0.58\",)\n+_instruments = (\"fastapi <= 0.90.1\",)\n \n _supports_metrics = True\ndiff --git a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py\n--- a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py\n+++ b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py\n@@ -69,7 +69,7 @@\n \"instrumentation\": \"opentelemetry-instrumentation-falcon==0.37b0.dev\",\n },\n \"fastapi\": {\n- \"library\": \"fastapi ~= 0.58\",\n+ \"library\": \"fastapi <= 0.90.1\",\n \"instrumentation\": \"opentelemetry-instrumentation-fastapi==0.37b0.dev\",\n },\n \"flask\": {\n", "issue": "Fix fastapi version\nCI in `main` is failing right now because of `opentelemetery-instrumentation-fastapi` failures, fix `fastapi` version.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n_instruments = (\"fastapi ~= 0.58\",)\n\n_supports_metrics = True\n", "path": "instrumentation/opentelemetry-instrumentation-fastapi/src/opentelemetry/instrumentation/fastapi/package.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# DO NOT EDIT. THIS FILE WAS AUTOGENERATED FROM INSTRUMENTATION PACKAGES.\n# RUN `python scripts/generate_instrumentation_bootstrap.py` TO REGENERATE.\n\nlibraries = {\n \"aio_pika\": {\n \"library\": \"aio_pika >= 7.2.0, < 9.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-aio-pika==0.37b0.dev\",\n },\n \"aiohttp\": {\n \"library\": \"aiohttp ~= 3.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-aiohttp-client==0.37b0.dev\",\n },\n \"aiopg\": {\n \"library\": \"aiopg >= 0.13.0, < 2.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-aiopg==0.37b0.dev\",\n },\n \"asgiref\": {\n \"library\": \"asgiref ~= 3.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-asgi==0.37b0.dev\",\n },\n \"asyncpg\": {\n \"library\": \"asyncpg >= 0.12.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-asyncpg==0.37b0.dev\",\n },\n \"boto\": {\n \"library\": \"boto~=2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-boto==0.37b0.dev\",\n },\n \"boto3\": {\n \"library\": \"boto3 ~= 1.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-boto3sqs==0.37b0.dev\",\n },\n \"botocore\": {\n \"library\": \"botocore ~= 1.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-botocore==0.37b0.dev\",\n },\n \"celery\": {\n \"library\": \"celery >= 4.0, < 6.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-celery==0.37b0.dev\",\n },\n \"confluent-kafka\": {\n \"library\": \"confluent-kafka >= 1.8.2, < 2.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-confluent-kafka==0.37b0.dev\",\n },\n \"django\": {\n \"library\": \"django >= 1.10\",\n \"instrumentation\": \"opentelemetry-instrumentation-django==0.37b0.dev\",\n },\n \"elasticsearch\": {\n \"library\": \"elasticsearch >= 2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-elasticsearch==0.37b0.dev\",\n },\n \"falcon\": {\n \"library\": \"falcon >= 1.4.1, < 4.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-falcon==0.37b0.dev\",\n },\n \"fastapi\": {\n \"library\": \"fastapi ~= 0.58\",\n \"instrumentation\": \"opentelemetry-instrumentation-fastapi==0.37b0.dev\",\n },\n \"flask\": {\n \"library\": \"flask >= 1.0, < 3.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-flask==0.37b0.dev\",\n },\n \"grpcio\": {\n \"library\": \"grpcio ~= 1.27\",\n \"instrumentation\": \"opentelemetry-instrumentation-grpc==0.37b0.dev\",\n },\n \"httpx\": {\n \"library\": \"httpx >= 0.18.0, <= 0.23.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-httpx==0.37b0.dev\",\n },\n \"jinja2\": {\n \"library\": \"jinja2 >= 2.7, < 4.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-jinja2==0.37b0.dev\",\n },\n \"kafka-python\": {\n \"library\": \"kafka-python >= 2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-kafka-python==0.37b0.dev\",\n },\n \"mysql-connector-python\": {\n \"library\": \"mysql-connector-python ~= 8.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-mysql==0.37b0.dev\",\n },\n \"pika\": {\n \"library\": \"pika >= 0.12.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-pika==0.37b0.dev\",\n },\n \"psycopg2\": {\n \"library\": \"psycopg2 >= 2.7.3.1\",\n \"instrumentation\": \"opentelemetry-instrumentation-psycopg2==0.37b0.dev\",\n },\n \"pymemcache\": {\n \"library\": \"pymemcache >= 1.3.5, < 4\",\n \"instrumentation\": \"opentelemetry-instrumentation-pymemcache==0.37b0.dev\",\n },\n \"pymongo\": {\n \"library\": \"pymongo >= 3.1, < 5.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-pymongo==0.37b0.dev\",\n },\n \"PyMySQL\": {\n \"library\": \"PyMySQL < 2\",\n \"instrumentation\": \"opentelemetry-instrumentation-pymysql==0.37b0.dev\",\n },\n \"pyramid\": {\n \"library\": \"pyramid >= 1.7\",\n \"instrumentation\": \"opentelemetry-instrumentation-pyramid==0.37b0.dev\",\n },\n \"redis\": {\n \"library\": \"redis >= 2.6\",\n \"instrumentation\": \"opentelemetry-instrumentation-redis==0.37b0.dev\",\n },\n \"remoulade\": {\n \"library\": \"remoulade >= 0.50\",\n \"instrumentation\": \"opentelemetry-instrumentation-remoulade==0.37b0.dev\",\n },\n \"requests\": {\n \"library\": \"requests ~= 2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-requests==0.37b0.dev\",\n },\n \"scikit-learn\": {\n \"library\": \"scikit-learn ~= 0.24.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-sklearn==0.37b0.dev\",\n },\n \"sqlalchemy\": {\n \"library\": \"sqlalchemy\",\n \"instrumentation\": \"opentelemetry-instrumentation-sqlalchemy==0.37b0.dev\",\n },\n \"starlette\": {\n \"library\": \"starlette ~= 0.13.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-starlette==0.37b0.dev\",\n },\n \"psutil\": {\n \"library\": \"psutil >= 5\",\n \"instrumentation\": \"opentelemetry-instrumentation-system-metrics==0.37b0.dev\",\n },\n \"tornado\": {\n \"library\": \"tornado >= 5.1.1\",\n \"instrumentation\": \"opentelemetry-instrumentation-tornado==0.37b0.dev\",\n },\n \"tortoise-orm\": {\n \"library\": \"tortoise-orm >= 0.17.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-tortoiseorm==0.37b0.dev\",\n },\n \"pydantic\": {\n \"library\": \"pydantic >= 1.10.2\",\n \"instrumentation\": \"opentelemetry-instrumentation-tortoiseorm==0.37b0.dev\",\n },\n \"urllib3\": {\n \"library\": \"urllib3 >= 1.0.0, < 2.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-urllib3==0.37b0.dev\",\n },\n}\ndefault_instrumentations = [\n \"opentelemetry-instrumentation-aws-lambda==0.37b0.dev\",\n \"opentelemetry-instrumentation-dbapi==0.37b0.dev\",\n \"opentelemetry-instrumentation-logging==0.37b0.dev\",\n \"opentelemetry-instrumentation-sqlite3==0.37b0.dev\",\n \"opentelemetry-instrumentation-urllib==0.37b0.dev\",\n \"opentelemetry-instrumentation-wsgi==0.37b0.dev\",\n]\n", "path": "opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n_instruments = (\"fastapi <= 0.90.1\",)\n\n_supports_metrics = True\n", "path": "instrumentation/opentelemetry-instrumentation-fastapi/src/opentelemetry/instrumentation/fastapi/package.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# DO NOT EDIT. THIS FILE WAS AUTOGENERATED FROM INSTRUMENTATION PACKAGES.\n# RUN `python scripts/generate_instrumentation_bootstrap.py` TO REGENERATE.\n\nlibraries = {\n \"aio_pika\": {\n \"library\": \"aio_pika >= 7.2.0, < 9.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-aio-pika==0.37b0.dev\",\n },\n \"aiohttp\": {\n \"library\": \"aiohttp ~= 3.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-aiohttp-client==0.37b0.dev\",\n },\n \"aiopg\": {\n \"library\": \"aiopg >= 0.13.0, < 2.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-aiopg==0.37b0.dev\",\n },\n \"asgiref\": {\n \"library\": \"asgiref ~= 3.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-asgi==0.37b0.dev\",\n },\n \"asyncpg\": {\n \"library\": \"asyncpg >= 0.12.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-asyncpg==0.37b0.dev\",\n },\n \"boto\": {\n \"library\": \"boto~=2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-boto==0.37b0.dev\",\n },\n \"boto3\": {\n \"library\": \"boto3 ~= 1.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-boto3sqs==0.37b0.dev\",\n },\n \"botocore\": {\n \"library\": \"botocore ~= 1.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-botocore==0.37b0.dev\",\n },\n \"celery\": {\n \"library\": \"celery >= 4.0, < 6.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-celery==0.37b0.dev\",\n },\n \"confluent-kafka\": {\n \"library\": \"confluent-kafka >= 1.8.2, < 2.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-confluent-kafka==0.37b0.dev\",\n },\n \"django\": {\n \"library\": \"django >= 1.10\",\n \"instrumentation\": \"opentelemetry-instrumentation-django==0.37b0.dev\",\n },\n \"elasticsearch\": {\n \"library\": \"elasticsearch >= 2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-elasticsearch==0.37b0.dev\",\n },\n \"falcon\": {\n \"library\": \"falcon >= 1.4.1, < 4.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-falcon==0.37b0.dev\",\n },\n \"fastapi\": {\n \"library\": \"fastapi <= 0.90.1\",\n \"instrumentation\": \"opentelemetry-instrumentation-fastapi==0.37b0.dev\",\n },\n \"flask\": {\n \"library\": \"flask >= 1.0, < 3.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-flask==0.37b0.dev\",\n },\n \"grpcio\": {\n \"library\": \"grpcio ~= 1.27\",\n \"instrumentation\": \"opentelemetry-instrumentation-grpc==0.37b0.dev\",\n },\n \"httpx\": {\n \"library\": \"httpx >= 0.18.0, <= 0.23.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-httpx==0.37b0.dev\",\n },\n \"jinja2\": {\n \"library\": \"jinja2 >= 2.7, < 4.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-jinja2==0.37b0.dev\",\n },\n \"kafka-python\": {\n \"library\": \"kafka-python >= 2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-kafka-python==0.37b0.dev\",\n },\n \"mysql-connector-python\": {\n \"library\": \"mysql-connector-python ~= 8.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-mysql==0.37b0.dev\",\n },\n \"pika\": {\n \"library\": \"pika >= 0.12.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-pika==0.37b0.dev\",\n },\n \"psycopg2\": {\n \"library\": \"psycopg2 >= 2.7.3.1\",\n \"instrumentation\": \"opentelemetry-instrumentation-psycopg2==0.37b0.dev\",\n },\n \"pymemcache\": {\n \"library\": \"pymemcache >= 1.3.5, < 4\",\n \"instrumentation\": \"opentelemetry-instrumentation-pymemcache==0.37b0.dev\",\n },\n \"pymongo\": {\n \"library\": \"pymongo >= 3.1, < 5.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-pymongo==0.37b0.dev\",\n },\n \"PyMySQL\": {\n \"library\": \"PyMySQL < 2\",\n \"instrumentation\": \"opentelemetry-instrumentation-pymysql==0.37b0.dev\",\n },\n \"pyramid\": {\n \"library\": \"pyramid >= 1.7\",\n \"instrumentation\": \"opentelemetry-instrumentation-pyramid==0.37b0.dev\",\n },\n \"redis\": {\n \"library\": \"redis >= 2.6\",\n \"instrumentation\": \"opentelemetry-instrumentation-redis==0.37b0.dev\",\n },\n \"remoulade\": {\n \"library\": \"remoulade >= 0.50\",\n \"instrumentation\": \"opentelemetry-instrumentation-remoulade==0.37b0.dev\",\n },\n \"requests\": {\n \"library\": \"requests ~= 2.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-requests==0.37b0.dev\",\n },\n \"scikit-learn\": {\n \"library\": \"scikit-learn ~= 0.24.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-sklearn==0.37b0.dev\",\n },\n \"sqlalchemy\": {\n \"library\": \"sqlalchemy\",\n \"instrumentation\": \"opentelemetry-instrumentation-sqlalchemy==0.37b0.dev\",\n },\n \"starlette\": {\n \"library\": \"starlette ~= 0.13.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-starlette==0.37b0.dev\",\n },\n \"psutil\": {\n \"library\": \"psutil >= 5\",\n \"instrumentation\": \"opentelemetry-instrumentation-system-metrics==0.37b0.dev\",\n },\n \"tornado\": {\n \"library\": \"tornado >= 5.1.1\",\n \"instrumentation\": \"opentelemetry-instrumentation-tornado==0.37b0.dev\",\n },\n \"tortoise-orm\": {\n \"library\": \"tortoise-orm >= 0.17.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-tortoiseorm==0.37b0.dev\",\n },\n \"pydantic\": {\n \"library\": \"pydantic >= 1.10.2\",\n \"instrumentation\": \"opentelemetry-instrumentation-tortoiseorm==0.37b0.dev\",\n },\n \"urllib3\": {\n \"library\": \"urllib3 >= 1.0.0, < 2.0.0\",\n \"instrumentation\": \"opentelemetry-instrumentation-urllib3==0.37b0.dev\",\n },\n}\ndefault_instrumentations = [\n \"opentelemetry-instrumentation-aws-lambda==0.37b0.dev\",\n \"opentelemetry-instrumentation-dbapi==0.37b0.dev\",\n \"opentelemetry-instrumentation-logging==0.37b0.dev\",\n \"opentelemetry-instrumentation-sqlite3==0.37b0.dev\",\n \"opentelemetry-instrumentation-urllib==0.37b0.dev\",\n \"opentelemetry-instrumentation-wsgi==0.37b0.dev\",\n]\n", "path": "opentelemetry-instrumentation/src/opentelemetry/instrumentation/bootstrap_gen.py"}]}
| 3,057 | 340 |
gh_patches_debug_33925
|
rasdani/github-patches
|
git_diff
|
python-discord__bot-481
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The Reddit cog occassionally fails to relay new /r/python messages
Occasionally, the [Reddit cog](/python-discord/bot/blob/master/bot/cogs/reddit.py) stops relaying new messages to the #reddit channel. From what I've seen, there are two ways in which the cog breaks and both should be easy to fix.
#### 1. The background task fails on unexpected http responses
The background task to fetch new posts, the `self.new_posts_task` created in the `on_ready` event listener, assumes that the response we get from the Reddit API is valid `json` in [this line](/python-discord/bot/blob/master/bot/cogs/reddit.py#L55). If, for some temporary reason, we don't get a `json` response, but a `text/html` response (404, probably), the task fails with the following exception:
```py
In [160]: bot.get_cog("Reddit").new_posts_task
Out[160]:
File "/bot/bot/cogs/eval.py", line 167, in _eval
res = await func()
File "<string>", line 8, in func
File "/bot/bot/cogs/eval.py", line 167, in _eval
res = await func()
File "<string>", line 8, in func
File "/bot/bot/cogs/reddit.py", line 142, in poll_new_posts
posts = await self.fetch_posts(f"{subreddit}/new")
File "/bot/bot/cogs/reddit.py", line 55, in fetch_posts
content = await response.json()
File "/bot/.venv/lib/python3.6/site-packages/aiohttp/client_reqrep.py", line 938, in json
headers=self.headers)
aiohttp.client_exceptions.ContentTypeError: 0, message='Attempt to decode JSON with unexpected mimetype: text/html; charset=utf-8'
```
This line is in the `fetch_posts` utility function used by both the `new_posts_task` as well as the `top_weekly_posts_task`. This means that can break because of this error.
The likely solution is to either handle the exception or check the response status code before trying to parse it as json. We probably want to either have some kind of retry logic in the handling of the non-200 response, since otherwise the weekly top posts task skips a week.
#### ~~2. The channel can't be initialized during `on_ready`~~
~~Similar to the issue we had with the watch channels, it can occur that the bot's internal channel cache isn't fully loaded when the `on_ready` event fires. This means that the channel will not be retrieved and the tasks will never be started. This only happens intermittently, but it does occasionally happen.~~
~~In such cases, the `self.bot.get_channel` returns `None`, because the channel has not been loaded into the internal bot cache yet at that point.~~
```py
async def on_ready(self):
self.reddit_channel = self.bot.get_channel(Channels.reddit)
```
~~While this could be fixed by adding a slight delay or a bit of retry logic, another option is to wait for the migration to a later version of `discord.py` and use `await self.bot.fetch_channel` here instead. That will fetch the channel from the API, bypassing any issues we may have with the internal bot channel cache.~~
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/cogs/reddit.py`
Content:
```
1 import asyncio
2 import logging
3 import random
4 import textwrap
5 from datetime import datetime, timedelta
6 from typing import List
7
8 from discord import Colour, Embed, Message, TextChannel
9 from discord.ext.commands import Bot, Cog, Context, group
10
11 from bot.constants import Channels, ERROR_REPLIES, Reddit as RedditConfig, STAFF_ROLES
12 from bot.converters import Subreddit
13 from bot.decorators import with_role
14 from bot.pagination import LinePaginator
15
16 log = logging.getLogger(__name__)
17
18
19 class Reddit(Cog):
20 """Track subreddit posts and show detailed statistics about them."""
21
22 HEADERS = {"User-Agent": "Discord Bot: PythonDiscord (https://pythondiscord.com/)"}
23 URL = "https://www.reddit.com"
24
25 def __init__(self, bot: Bot):
26 self.bot = bot
27
28 self.reddit_channel = None
29
30 self.prev_lengths = {}
31 self.last_ids = {}
32
33 self.new_posts_task = None
34 self.top_weekly_posts_task = None
35
36 async def fetch_posts(self, route: str, *, amount: int = 25, params: dict = None) -> List[dict]:
37 """A helper method to fetch a certain amount of Reddit posts at a given route."""
38 # Reddit's JSON responses only provide 25 posts at most.
39 if not 25 >= amount > 0:
40 raise ValueError("Invalid amount of subreddit posts requested.")
41
42 if params is None:
43 params = {}
44
45 response = await self.bot.http_session.get(
46 url=f"{self.URL}/{route}.json",
47 headers=self.HEADERS,
48 params=params
49 )
50
51 content = await response.json()
52 posts = content["data"]["children"]
53
54 return posts[:amount]
55
56 async def send_top_posts(
57 self, channel: TextChannel, subreddit: Subreddit, content: str = None, time: str = "all"
58 ) -> Message:
59 """Create an embed for the top posts, then send it in a given TextChannel."""
60 # Create the new spicy embed.
61 embed = Embed()
62 embed.description = ""
63
64 # Get the posts
65 posts = await self.fetch_posts(
66 route=f"{subreddit}/top",
67 amount=5,
68 params={
69 "t": time
70 }
71 )
72
73 if not posts:
74 embed.title = random.choice(ERROR_REPLIES)
75 embed.colour = Colour.red()
76 embed.description = (
77 "Sorry! We couldn't find any posts from that subreddit. "
78 "If this problem persists, please let us know."
79 )
80
81 return await channel.send(
82 embed=embed
83 )
84
85 for post in posts:
86 data = post["data"]
87
88 text = data["selftext"]
89 if text:
90 text = textwrap.shorten(text, width=128, placeholder="...")
91 text += "\n" # Add newline to separate embed info
92
93 ups = data["ups"]
94 comments = data["num_comments"]
95 author = data["author"]
96
97 title = textwrap.shorten(data["title"], width=64, placeholder="...")
98 link = self.URL + data["permalink"]
99
100 embed.description += (
101 f"[**{title}**]({link})\n"
102 f"{text}"
103 f"| {ups} upvotes | {comments} comments | u/{author} | {subreddit} |\n\n"
104 )
105
106 embed.colour = Colour.blurple()
107
108 return await channel.send(
109 content=content,
110 embed=embed
111 )
112
113 async def poll_new_posts(self) -> None:
114 """Periodically search for new subreddit posts."""
115 while True:
116 await asyncio.sleep(RedditConfig.request_delay)
117
118 for subreddit in RedditConfig.subreddits:
119 # Make a HEAD request to the subreddit
120 head_response = await self.bot.http_session.head(
121 url=f"{self.URL}/{subreddit}/new.rss",
122 headers=self.HEADERS
123 )
124
125 content_length = head_response.headers["content-length"]
126
127 # If the content is the same size as before, assume there's no new posts.
128 if content_length == self.prev_lengths.get(subreddit, None):
129 continue
130
131 self.prev_lengths[subreddit] = content_length
132
133 # Now we can actually fetch the new data
134 posts = await self.fetch_posts(f"{subreddit}/new")
135 new_posts = []
136
137 # Only show new posts if we've checked before.
138 if subreddit in self.last_ids:
139 for post in posts:
140 data = post["data"]
141
142 # Convert the ID to an integer for easy comparison.
143 int_id = int(data["id"], 36)
144
145 # If we've already seen this post, finish checking
146 if int_id <= self.last_ids[subreddit]:
147 break
148
149 embed_data = {
150 "title": textwrap.shorten(data["title"], width=64, placeholder="..."),
151 "text": textwrap.shorten(data["selftext"], width=128, placeholder="..."),
152 "url": self.URL + data["permalink"],
153 "author": data["author"]
154 }
155
156 new_posts.append(embed_data)
157
158 self.last_ids[subreddit] = int(posts[0]["data"]["id"], 36)
159
160 # Send all of the new posts as spicy embeds
161 for data in new_posts:
162 embed = Embed()
163
164 embed.title = data["title"]
165 embed.url = data["url"]
166 embed.description = data["text"]
167 embed.set_footer(text=f"Posted by u/{data['author']} in {subreddit}")
168 embed.colour = Colour.blurple()
169
170 await self.reddit_channel.send(embed=embed)
171
172 log.trace(f"Sent {len(new_posts)} new {subreddit} posts to channel {self.reddit_channel.id}.")
173
174 async def poll_top_weekly_posts(self) -> None:
175 """Post a summary of the top posts every week."""
176 while True:
177 now = datetime.utcnow()
178
179 # Calculate the amount of seconds until midnight next monday.
180 monday = now + timedelta(days=7 - now.weekday())
181 monday = monday.replace(hour=0, minute=0, second=0)
182 until_monday = (monday - now).total_seconds()
183
184 await asyncio.sleep(until_monday)
185
186 for subreddit in RedditConfig.subreddits:
187 # Send and pin the new weekly posts.
188 message = await self.send_top_posts(
189 channel=self.reddit_channel,
190 subreddit=subreddit,
191 content=f"This week's top {subreddit} posts have arrived!",
192 time="week"
193 )
194
195 if subreddit.lower() == "r/python":
196 # Remove the oldest pins so that only 5 remain at most.
197 pins = await self.reddit_channel.pins()
198
199 while len(pins) >= 5:
200 await pins[-1].unpin()
201 del pins[-1]
202
203 await message.pin()
204
205 @group(name="reddit", invoke_without_command=True)
206 async def reddit_group(self, ctx: Context) -> None:
207 """View the top posts from various subreddits."""
208 await ctx.invoke(self.bot.get_command("help"), "reddit")
209
210 @reddit_group.command(name="top")
211 async def top_command(self, ctx: Context, subreddit: Subreddit = "r/Python") -> None:
212 """Send the top posts of all time from a given subreddit."""
213 await self.send_top_posts(
214 channel=ctx.channel,
215 subreddit=subreddit,
216 content=f"Here are the top {subreddit} posts of all time!",
217 time="all"
218 )
219
220 @reddit_group.command(name="daily")
221 async def daily_command(self, ctx: Context, subreddit: Subreddit = "r/Python") -> None:
222 """Send the top posts of today from a given subreddit."""
223 await self.send_top_posts(
224 channel=ctx.channel,
225 subreddit=subreddit,
226 content=f"Here are today's top {subreddit} posts!",
227 time="day"
228 )
229
230 @reddit_group.command(name="weekly")
231 async def weekly_command(self, ctx: Context, subreddit: Subreddit = "r/Python") -> None:
232 """Send the top posts of this week from a given subreddit."""
233 await self.send_top_posts(
234 channel=ctx.channel,
235 subreddit=subreddit,
236 content=f"Here are this week's top {subreddit} posts!",
237 time="week"
238 )
239
240 @with_role(*STAFF_ROLES)
241 @reddit_group.command(name="subreddits", aliases=("subs",))
242 async def subreddits_command(self, ctx: Context) -> None:
243 """Send a paginated embed of all the subreddits we're relaying."""
244 embed = Embed()
245 embed.title = "Relayed subreddits."
246 embed.colour = Colour.blurple()
247
248 await LinePaginator.paginate(
249 RedditConfig.subreddits,
250 ctx, embed,
251 footer_text="Use the reddit commands along with these to view their posts.",
252 empty=False,
253 max_lines=15
254 )
255
256 @Cog.listener()
257 async def on_ready(self) -> None:
258 """Initiate reddit post event loop."""
259 self.reddit_channel = await self.bot.fetch_channel(Channels.reddit)
260
261 if self.reddit_channel is not None:
262 if self.new_posts_task is None:
263 self.new_posts_task = self.bot.loop.create_task(self.poll_new_posts())
264 if self.top_weekly_posts_task is None:
265 self.top_weekly_posts_task = self.bot.loop.create_task(self.poll_top_weekly_posts())
266 else:
267 log.warning("Couldn't locate a channel for subreddit relaying.")
268
269
270 def setup(bot: Bot) -> None:
271 """Reddit cog load."""
272 bot.add_cog(Reddit(bot))
273 log.info("Cog loaded: Reddit")
274
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bot/cogs/reddit.py b/bot/cogs/reddit.py
--- a/bot/cogs/reddit.py
+++ b/bot/cogs/reddit.py
@@ -21,6 +21,7 @@
HEADERS = {"User-Agent": "Discord Bot: PythonDiscord (https://pythondiscord.com/)"}
URL = "https://www.reddit.com"
+ MAX_FETCH_RETRIES = 3
def __init__(self, bot: Bot):
self.bot = bot
@@ -42,16 +43,23 @@
if params is None:
params = {}
- response = await self.bot.http_session.get(
- url=f"{self.URL}/{route}.json",
- headers=self.HEADERS,
- params=params
- )
+ url = f"{self.URL}/{route}.json"
+ for _ in range(self.MAX_FETCH_RETRIES):
+ response = await self.bot.http_session.get(
+ url=url,
+ headers=self.HEADERS,
+ params=params
+ )
+ if response.status == 200 and response.content_type == 'application/json':
+ # Got appropriate response - process and return.
+ content = await response.json()
+ posts = content["data"]["children"]
+ return posts[:amount]
- content = await response.json()
- posts = content["data"]["children"]
+ await asyncio.sleep(3)
- return posts[:amount]
+ log.debug(f"Invalid response from: {url} - status code {response.status}, mimetype {response.content_type}")
+ return list() # Failed to get appropriate response within allowed number of retries.
async def send_top_posts(
self, channel: TextChannel, subreddit: Subreddit, content: str = None, time: str = "all"
@@ -62,13 +70,14 @@
embed.description = ""
# Get the posts
- posts = await self.fetch_posts(
- route=f"{subreddit}/top",
- amount=5,
- params={
- "t": time
- }
- )
+ async with channel.typing():
+ posts = await self.fetch_posts(
+ route=f"{subreddit}/top",
+ amount=5,
+ params={
+ "t": time
+ }
+ )
if not posts:
embed.title = random.choice(ERROR_REPLIES)
|
{"golden_diff": "diff --git a/bot/cogs/reddit.py b/bot/cogs/reddit.py\n--- a/bot/cogs/reddit.py\n+++ b/bot/cogs/reddit.py\n@@ -21,6 +21,7 @@\n \n HEADERS = {\"User-Agent\": \"Discord Bot: PythonDiscord (https://pythondiscord.com/)\"}\n URL = \"https://www.reddit.com\"\n+ MAX_FETCH_RETRIES = 3\n \n def __init__(self, bot: Bot):\n self.bot = bot\n@@ -42,16 +43,23 @@\n if params is None:\n params = {}\n \n- response = await self.bot.http_session.get(\n- url=f\"{self.URL}/{route}.json\",\n- headers=self.HEADERS,\n- params=params\n- )\n+ url = f\"{self.URL}/{route}.json\"\n+ for _ in range(self.MAX_FETCH_RETRIES):\n+ response = await self.bot.http_session.get(\n+ url=url,\n+ headers=self.HEADERS,\n+ params=params\n+ )\n+ if response.status == 200 and response.content_type == 'application/json':\n+ # Got appropriate response - process and return.\n+ content = await response.json()\n+ posts = content[\"data\"][\"children\"]\n+ return posts[:amount]\n \n- content = await response.json()\n- posts = content[\"data\"][\"children\"]\n+ await asyncio.sleep(3)\n \n- return posts[:amount]\n+ log.debug(f\"Invalid response from: {url} - status code {response.status}, mimetype {response.content_type}\")\n+ return list() # Failed to get appropriate response within allowed number of retries.\n \n async def send_top_posts(\n self, channel: TextChannel, subreddit: Subreddit, content: str = None, time: str = \"all\"\n@@ -62,13 +70,14 @@\n embed.description = \"\"\n \n # Get the posts\n- posts = await self.fetch_posts(\n- route=f\"{subreddit}/top\",\n- amount=5,\n- params={\n- \"t\": time\n- }\n- )\n+ async with channel.typing():\n+ posts = await self.fetch_posts(\n+ route=f\"{subreddit}/top\",\n+ amount=5,\n+ params={\n+ \"t\": time\n+ }\n+ )\n \n if not posts:\n embed.title = random.choice(ERROR_REPLIES)\n", "issue": "The Reddit cog occassionally fails to relay new /r/python messages\nOccasionally, the [Reddit cog](/python-discord/bot/blob/master/bot/cogs/reddit.py) stops relaying new messages to the #reddit channel. From what I've seen, there are two ways in which the cog breaks and both should be easy to fix.\r\n\r\n#### 1. The background task fails on unexpected http responses\r\nThe background task to fetch new posts, the `self.new_posts_task` created in the `on_ready` event listener, assumes that the response we get from the Reddit API is valid `json` in [this line](/python-discord/bot/blob/master/bot/cogs/reddit.py#L55). If, for some temporary reason, we don't get a `json` response, but a `text/html` response (404, probably), the task fails with the following exception:\r\n\r\n```py\r\nIn [160]: bot.get_cog(\"Reddit\").new_posts_task\r\nOut[160]: \r\n File \"/bot/bot/cogs/eval.py\", line 167, in _eval\r\n res = await func()\r\n File \"<string>\", line 8, in func\r\n File \"/bot/bot/cogs/eval.py\", line 167, in _eval\r\n res = await func()\r\n File \"<string>\", line 8, in func\r\n File \"/bot/bot/cogs/reddit.py\", line 142, in poll_new_posts\r\n posts = await self.fetch_posts(f\"{subreddit}/new\")\r\n File \"/bot/bot/cogs/reddit.py\", line 55, in fetch_posts\r\n content = await response.json()\r\n File \"/bot/.venv/lib/python3.6/site-packages/aiohttp/client_reqrep.py\", line 938, in json\r\n headers=self.headers)\r\naiohttp.client_exceptions.ContentTypeError: 0, message='Attempt to decode JSON with unexpected mimetype: text/html; charset=utf-8'\r\n```\r\n\r\nThis line is in the `fetch_posts` utility function used by both the `new_posts_task` as well as the `top_weekly_posts_task`. This means that can break because of this error. \r\n\r\nThe likely solution is to either handle the exception or check the response status code before trying to parse it as json. We probably want to either have some kind of retry logic in the handling of the non-200 response, since otherwise the weekly top posts task skips a week.\r\n\r\n#### ~~2. The channel can't be initialized during `on_ready`~~\r\n\r\n~~Similar to the issue we had with the watch channels, it can occur that the bot's internal channel cache isn't fully loaded when the `on_ready` event fires. This means that the channel will not be retrieved and the tasks will never be started. This only happens intermittently, but it does occasionally happen.~~\r\n\r\n~~In such cases, the `self.bot.get_channel` returns `None`, because the channel has not been loaded into the internal bot cache yet at that point.~~\r\n\r\n```py\r\n async def on_ready(self):\r\n self.reddit_channel = self.bot.get_channel(Channels.reddit)\r\n```\r\n\r\n~~While this could be fixed by adding a slight delay or a bit of retry logic, another option is to wait for the migration to a later version of `discord.py` and use `await self.bot.fetch_channel` here instead. That will fetch the channel from the API, bypassing any issues we may have with the internal bot channel cache.~~\n", "before_files": [{"content": "import asyncio\nimport logging\nimport random\nimport textwrap\nfrom datetime import datetime, timedelta\nfrom typing import List\n\nfrom discord import Colour, Embed, Message, TextChannel\nfrom discord.ext.commands import Bot, Cog, Context, group\n\nfrom bot.constants import Channels, ERROR_REPLIES, Reddit as RedditConfig, STAFF_ROLES\nfrom bot.converters import Subreddit\nfrom bot.decorators import with_role\nfrom bot.pagination import LinePaginator\n\nlog = logging.getLogger(__name__)\n\n\nclass Reddit(Cog):\n \"\"\"Track subreddit posts and show detailed statistics about them.\"\"\"\n\n HEADERS = {\"User-Agent\": \"Discord Bot: PythonDiscord (https://pythondiscord.com/)\"}\n URL = \"https://www.reddit.com\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n\n self.reddit_channel = None\n\n self.prev_lengths = {}\n self.last_ids = {}\n\n self.new_posts_task = None\n self.top_weekly_posts_task = None\n\n async def fetch_posts(self, route: str, *, amount: int = 25, params: dict = None) -> List[dict]:\n \"\"\"A helper method to fetch a certain amount of Reddit posts at a given route.\"\"\"\n # Reddit's JSON responses only provide 25 posts at most.\n if not 25 >= amount > 0:\n raise ValueError(\"Invalid amount of subreddit posts requested.\")\n\n if params is None:\n params = {}\n\n response = await self.bot.http_session.get(\n url=f\"{self.URL}/{route}.json\",\n headers=self.HEADERS,\n params=params\n )\n\n content = await response.json()\n posts = content[\"data\"][\"children\"]\n\n return posts[:amount]\n\n async def send_top_posts(\n self, channel: TextChannel, subreddit: Subreddit, content: str = None, time: str = \"all\"\n ) -> Message:\n \"\"\"Create an embed for the top posts, then send it in a given TextChannel.\"\"\"\n # Create the new spicy embed.\n embed = Embed()\n embed.description = \"\"\n\n # Get the posts\n posts = await self.fetch_posts(\n route=f\"{subreddit}/top\",\n amount=5,\n params={\n \"t\": time\n }\n )\n\n if not posts:\n embed.title = random.choice(ERROR_REPLIES)\n embed.colour = Colour.red()\n embed.description = (\n \"Sorry! We couldn't find any posts from that subreddit. \"\n \"If this problem persists, please let us know.\"\n )\n\n return await channel.send(\n embed=embed\n )\n\n for post in posts:\n data = post[\"data\"]\n\n text = data[\"selftext\"]\n if text:\n text = textwrap.shorten(text, width=128, placeholder=\"...\")\n text += \"\\n\" # Add newline to separate embed info\n\n ups = data[\"ups\"]\n comments = data[\"num_comments\"]\n author = data[\"author\"]\n\n title = textwrap.shorten(data[\"title\"], width=64, placeholder=\"...\")\n link = self.URL + data[\"permalink\"]\n\n embed.description += (\n f\"[**{title}**]({link})\\n\"\n f\"{text}\"\n f\"| {ups} upvotes | {comments} comments | u/{author} | {subreddit} |\\n\\n\"\n )\n\n embed.colour = Colour.blurple()\n\n return await channel.send(\n content=content,\n embed=embed\n )\n\n async def poll_new_posts(self) -> None:\n \"\"\"Periodically search for new subreddit posts.\"\"\"\n while True:\n await asyncio.sleep(RedditConfig.request_delay)\n\n for subreddit in RedditConfig.subreddits:\n # Make a HEAD request to the subreddit\n head_response = await self.bot.http_session.head(\n url=f\"{self.URL}/{subreddit}/new.rss\",\n headers=self.HEADERS\n )\n\n content_length = head_response.headers[\"content-length\"]\n\n # If the content is the same size as before, assume there's no new posts.\n if content_length == self.prev_lengths.get(subreddit, None):\n continue\n\n self.prev_lengths[subreddit] = content_length\n\n # Now we can actually fetch the new data\n posts = await self.fetch_posts(f\"{subreddit}/new\")\n new_posts = []\n\n # Only show new posts if we've checked before.\n if subreddit in self.last_ids:\n for post in posts:\n data = post[\"data\"]\n\n # Convert the ID to an integer for easy comparison.\n int_id = int(data[\"id\"], 36)\n\n # If we've already seen this post, finish checking\n if int_id <= self.last_ids[subreddit]:\n break\n\n embed_data = {\n \"title\": textwrap.shorten(data[\"title\"], width=64, placeholder=\"...\"),\n \"text\": textwrap.shorten(data[\"selftext\"], width=128, placeholder=\"...\"),\n \"url\": self.URL + data[\"permalink\"],\n \"author\": data[\"author\"]\n }\n\n new_posts.append(embed_data)\n\n self.last_ids[subreddit] = int(posts[0][\"data\"][\"id\"], 36)\n\n # Send all of the new posts as spicy embeds\n for data in new_posts:\n embed = Embed()\n\n embed.title = data[\"title\"]\n embed.url = data[\"url\"]\n embed.description = data[\"text\"]\n embed.set_footer(text=f\"Posted by u/{data['author']} in {subreddit}\")\n embed.colour = Colour.blurple()\n\n await self.reddit_channel.send(embed=embed)\n\n log.trace(f\"Sent {len(new_posts)} new {subreddit} posts to channel {self.reddit_channel.id}.\")\n\n async def poll_top_weekly_posts(self) -> None:\n \"\"\"Post a summary of the top posts every week.\"\"\"\n while True:\n now = datetime.utcnow()\n\n # Calculate the amount of seconds until midnight next monday.\n monday = now + timedelta(days=7 - now.weekday())\n monday = monday.replace(hour=0, minute=0, second=0)\n until_monday = (monday - now).total_seconds()\n\n await asyncio.sleep(until_monday)\n\n for subreddit in RedditConfig.subreddits:\n # Send and pin the new weekly posts.\n message = await self.send_top_posts(\n channel=self.reddit_channel,\n subreddit=subreddit,\n content=f\"This week's top {subreddit} posts have arrived!\",\n time=\"week\"\n )\n\n if subreddit.lower() == \"r/python\":\n # Remove the oldest pins so that only 5 remain at most.\n pins = await self.reddit_channel.pins()\n\n while len(pins) >= 5:\n await pins[-1].unpin()\n del pins[-1]\n\n await message.pin()\n\n @group(name=\"reddit\", invoke_without_command=True)\n async def reddit_group(self, ctx: Context) -> None:\n \"\"\"View the top posts from various subreddits.\"\"\"\n await ctx.invoke(self.bot.get_command(\"help\"), \"reddit\")\n\n @reddit_group.command(name=\"top\")\n async def top_command(self, ctx: Context, subreddit: Subreddit = \"r/Python\") -> None:\n \"\"\"Send the top posts of all time from a given subreddit.\"\"\"\n await self.send_top_posts(\n channel=ctx.channel,\n subreddit=subreddit,\n content=f\"Here are the top {subreddit} posts of all time!\",\n time=\"all\"\n )\n\n @reddit_group.command(name=\"daily\")\n async def daily_command(self, ctx: Context, subreddit: Subreddit = \"r/Python\") -> None:\n \"\"\"Send the top posts of today from a given subreddit.\"\"\"\n await self.send_top_posts(\n channel=ctx.channel,\n subreddit=subreddit,\n content=f\"Here are today's top {subreddit} posts!\",\n time=\"day\"\n )\n\n @reddit_group.command(name=\"weekly\")\n async def weekly_command(self, ctx: Context, subreddit: Subreddit = \"r/Python\") -> None:\n \"\"\"Send the top posts of this week from a given subreddit.\"\"\"\n await self.send_top_posts(\n channel=ctx.channel,\n subreddit=subreddit,\n content=f\"Here are this week's top {subreddit} posts!\",\n time=\"week\"\n )\n\n @with_role(*STAFF_ROLES)\n @reddit_group.command(name=\"subreddits\", aliases=(\"subs\",))\n async def subreddits_command(self, ctx: Context) -> None:\n \"\"\"Send a paginated embed of all the subreddits we're relaying.\"\"\"\n embed = Embed()\n embed.title = \"Relayed subreddits.\"\n embed.colour = Colour.blurple()\n\n await LinePaginator.paginate(\n RedditConfig.subreddits,\n ctx, embed,\n footer_text=\"Use the reddit commands along with these to view their posts.\",\n empty=False,\n max_lines=15\n )\n\n @Cog.listener()\n async def on_ready(self) -> None:\n \"\"\"Initiate reddit post event loop.\"\"\"\n self.reddit_channel = await self.bot.fetch_channel(Channels.reddit)\n\n if self.reddit_channel is not None:\n if self.new_posts_task is None:\n self.new_posts_task = self.bot.loop.create_task(self.poll_new_posts())\n if self.top_weekly_posts_task is None:\n self.top_weekly_posts_task = self.bot.loop.create_task(self.poll_top_weekly_posts())\n else:\n log.warning(\"Couldn't locate a channel for subreddit relaying.\")\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Reddit cog load.\"\"\"\n bot.add_cog(Reddit(bot))\n log.info(\"Cog loaded: Reddit\")\n", "path": "bot/cogs/reddit.py"}], "after_files": [{"content": "import asyncio\nimport logging\nimport random\nimport textwrap\nfrom datetime import datetime, timedelta\nfrom typing import List\n\nfrom discord import Colour, Embed, Message, TextChannel\nfrom discord.ext.commands import Bot, Cog, Context, group\n\nfrom bot.constants import Channels, ERROR_REPLIES, Reddit as RedditConfig, STAFF_ROLES\nfrom bot.converters import Subreddit\nfrom bot.decorators import with_role\nfrom bot.pagination import LinePaginator\n\nlog = logging.getLogger(__name__)\n\n\nclass Reddit(Cog):\n \"\"\"Track subreddit posts and show detailed statistics about them.\"\"\"\n\n HEADERS = {\"User-Agent\": \"Discord Bot: PythonDiscord (https://pythondiscord.com/)\"}\n URL = \"https://www.reddit.com\"\n MAX_FETCH_RETRIES = 3\n\n def __init__(self, bot: Bot):\n self.bot = bot\n\n self.reddit_channel = None\n\n self.prev_lengths = {}\n self.last_ids = {}\n\n self.new_posts_task = None\n self.top_weekly_posts_task = None\n\n async def fetch_posts(self, route: str, *, amount: int = 25, params: dict = None) -> List[dict]:\n \"\"\"A helper method to fetch a certain amount of Reddit posts at a given route.\"\"\"\n # Reddit's JSON responses only provide 25 posts at most.\n if not 25 >= amount > 0:\n raise ValueError(\"Invalid amount of subreddit posts requested.\")\n\n if params is None:\n params = {}\n\n url = f\"{self.URL}/{route}.json\"\n for _ in range(self.MAX_FETCH_RETRIES):\n response = await self.bot.http_session.get(\n url=url,\n headers=self.HEADERS,\n params=params\n )\n if response.status == 200 and response.content_type == 'application/json':\n # Got appropriate response - process and return.\n content = await response.json()\n posts = content[\"data\"][\"children\"]\n return posts[:amount]\n\n await asyncio.sleep(3)\n\n log.debug(f\"Invalid response from: {url} - status code {response.status}, mimetype {response.content_type}\")\n return list() # Failed to get appropriate response within allowed number of retries.\n\n async def send_top_posts(\n self, channel: TextChannel, subreddit: Subreddit, content: str = None, time: str = \"all\"\n ) -> Message:\n \"\"\"Create an embed for the top posts, then send it in a given TextChannel.\"\"\"\n # Create the new spicy embed.\n embed = Embed()\n embed.description = \"\"\n\n # Get the posts\n async with channel.typing():\n posts = await self.fetch_posts(\n route=f\"{subreddit}/top\",\n amount=5,\n params={\n \"t\": time\n }\n )\n\n if not posts:\n embed.title = random.choice(ERROR_REPLIES)\n embed.colour = Colour.red()\n embed.description = (\n \"Sorry! We couldn't find any posts from that subreddit. \"\n \"If this problem persists, please let us know.\"\n )\n\n return await channel.send(\n embed=embed\n )\n\n for post in posts:\n data = post[\"data\"]\n\n text = data[\"selftext\"]\n if text:\n text = textwrap.shorten(text, width=128, placeholder=\"...\")\n text += \"\\n\" # Add newline to separate embed info\n\n ups = data[\"ups\"]\n comments = data[\"num_comments\"]\n author = data[\"author\"]\n\n title = textwrap.shorten(data[\"title\"], width=64, placeholder=\"...\")\n link = self.URL + data[\"permalink\"]\n\n embed.description += (\n f\"[**{title}**]({link})\\n\"\n f\"{text}\"\n f\"| {ups} upvotes | {comments} comments | u/{author} | {subreddit} |\\n\\n\"\n )\n\n embed.colour = Colour.blurple()\n\n return await channel.send(\n content=content,\n embed=embed\n )\n\n async def poll_new_posts(self) -> None:\n \"\"\"Periodically search for new subreddit posts.\"\"\"\n while True:\n await asyncio.sleep(RedditConfig.request_delay)\n\n for subreddit in RedditConfig.subreddits:\n # Make a HEAD request to the subreddit\n head_response = await self.bot.http_session.head(\n url=f\"{self.URL}/{subreddit}/new.rss\",\n headers=self.HEADERS\n )\n\n content_length = head_response.headers[\"content-length\"]\n\n # If the content is the same size as before, assume there's no new posts.\n if content_length == self.prev_lengths.get(subreddit, None):\n continue\n\n self.prev_lengths[subreddit] = content_length\n\n # Now we can actually fetch the new data\n posts = await self.fetch_posts(f\"{subreddit}/new\")\n new_posts = []\n\n # Only show new posts if we've checked before.\n if subreddit in self.last_ids:\n for post in posts:\n data = post[\"data\"]\n\n # Convert the ID to an integer for easy comparison.\n int_id = int(data[\"id\"], 36)\n\n # If we've already seen this post, finish checking\n if int_id <= self.last_ids[subreddit]:\n break\n\n embed_data = {\n \"title\": textwrap.shorten(data[\"title\"], width=64, placeholder=\"...\"),\n \"text\": textwrap.shorten(data[\"selftext\"], width=128, placeholder=\"...\"),\n \"url\": self.URL + data[\"permalink\"],\n \"author\": data[\"author\"]\n }\n\n new_posts.append(embed_data)\n\n self.last_ids[subreddit] = int(posts[0][\"data\"][\"id\"], 36)\n\n # Send all of the new posts as spicy embeds\n for data in new_posts:\n embed = Embed()\n\n embed.title = data[\"title\"]\n embed.url = data[\"url\"]\n embed.description = data[\"text\"]\n embed.set_footer(text=f\"Posted by u/{data['author']} in {subreddit}\")\n embed.colour = Colour.blurple()\n\n await self.reddit_channel.send(embed=embed)\n\n log.trace(f\"Sent {len(new_posts)} new {subreddit} posts to channel {self.reddit_channel.id}.\")\n\n async def poll_top_weekly_posts(self) -> None:\n \"\"\"Post a summary of the top posts every week.\"\"\"\n while True:\n now = datetime.utcnow()\n\n # Calculate the amount of seconds until midnight next monday.\n monday = now + timedelta(days=7 - now.weekday())\n monday = monday.replace(hour=0, minute=0, second=0)\n until_monday = (monday - now).total_seconds()\n\n await asyncio.sleep(until_monday)\n\n for subreddit in RedditConfig.subreddits:\n # Send and pin the new weekly posts.\n message = await self.send_top_posts(\n channel=self.reddit_channel,\n subreddit=subreddit,\n content=f\"This week's top {subreddit} posts have arrived!\",\n time=\"week\"\n )\n\n if subreddit.lower() == \"r/python\":\n # Remove the oldest pins so that only 5 remain at most.\n pins = await self.reddit_channel.pins()\n\n while len(pins) >= 5:\n await pins[-1].unpin()\n del pins[-1]\n\n await message.pin()\n\n @group(name=\"reddit\", invoke_without_command=True)\n async def reddit_group(self, ctx: Context) -> None:\n \"\"\"View the top posts from various subreddits.\"\"\"\n await ctx.invoke(self.bot.get_command(\"help\"), \"reddit\")\n\n @reddit_group.command(name=\"top\")\n async def top_command(self, ctx: Context, subreddit: Subreddit = \"r/Python\") -> None:\n \"\"\"Send the top posts of all time from a given subreddit.\"\"\"\n await self.send_top_posts(\n channel=ctx.channel,\n subreddit=subreddit,\n content=f\"Here are the top {subreddit} posts of all time!\",\n time=\"all\"\n )\n\n @reddit_group.command(name=\"daily\")\n async def daily_command(self, ctx: Context, subreddit: Subreddit = \"r/Python\") -> None:\n \"\"\"Send the top posts of today from a given subreddit.\"\"\"\n await self.send_top_posts(\n channel=ctx.channel,\n subreddit=subreddit,\n content=f\"Here are today's top {subreddit} posts!\",\n time=\"day\"\n )\n\n @reddit_group.command(name=\"weekly\")\n async def weekly_command(self, ctx: Context, subreddit: Subreddit = \"r/Python\") -> None:\n \"\"\"Send the top posts of this week from a given subreddit.\"\"\"\n await self.send_top_posts(\n channel=ctx.channel,\n subreddit=subreddit,\n content=f\"Here are this week's top {subreddit} posts!\",\n time=\"week\"\n )\n\n @with_role(*STAFF_ROLES)\n @reddit_group.command(name=\"subreddits\", aliases=(\"subs\",))\n async def subreddits_command(self, ctx: Context) -> None:\n \"\"\"Send a paginated embed of all the subreddits we're relaying.\"\"\"\n embed = Embed()\n embed.title = \"Relayed subreddits.\"\n embed.colour = Colour.blurple()\n\n await LinePaginator.paginate(\n RedditConfig.subreddits,\n ctx, embed,\n footer_text=\"Use the reddit commands along with these to view their posts.\",\n empty=False,\n max_lines=15\n )\n\n @Cog.listener()\n async def on_ready(self) -> None:\n \"\"\"Initiate reddit post event loop.\"\"\"\n self.reddit_channel = await self.bot.fetch_channel(Channels.reddit)\n\n if self.reddit_channel is not None:\n if self.new_posts_task is None:\n self.new_posts_task = self.bot.loop.create_task(self.poll_new_posts())\n if self.top_weekly_posts_task is None:\n self.top_weekly_posts_task = self.bot.loop.create_task(self.poll_top_weekly_posts())\n else:\n log.warning(\"Couldn't locate a channel for subreddit relaying.\")\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Reddit cog load.\"\"\"\n bot.add_cog(Reddit(bot))\n log.info(\"Cog loaded: Reddit\")\n", "path": "bot/cogs/reddit.py"}]}
| 3,822 | 542 |
gh_patches_debug_11094
|
rasdani/github-patches
|
git_diff
|
facebookresearch__dynabench-766
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Creating a task with the "Task Code" as a number doesn't work as expected.
After creating a task with the task code as a number, and accepting the task, when users want to navigate to the task, it should ideally take us to a page which says "The task owner still needs to activate this task.", but in this case, we are shown the respective page for a millisecond, and taken back to the home page, which I think is unexpected behaviour.
A demonstration is given in the following screen recording of the same issue.
**Steps to reproduce**:
- Create a task proposal with the "Task Code" field as a number
- Accept the task as the admin user.
- Now try to click on the respective task from your "Tasks" page. It should just take you back to the homepage.
This seems to happen only for a purely numeric "Task Code" and not for an alphanumeric "Task Code"
https://user-images.githubusercontent.com/48560219/135757335-d98f116f-b7d6-44dc-a1fd-0c8b6fac7c61.mov
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `api/controllers/task_proposals.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates.
2 # This source code is licensed under the MIT license found in the
3 # LICENSE file in the root directory of this source tree.
4
5 import re
6
7 import bottle
8
9 import common.auth as _auth
10 import common.helpers as util
11 from common.logging import logger
12 from models.base import DBSession as dbs
13 from models.task import TaskModel
14 from models.task_proposal import TaskProposal, TaskProposalModel
15 from models.user import UserModel
16
17
18 @bottle.get("/task_proposals/user/<page:int>/<limit:int>")
19 @_auth.requires_auth
20 def get_user_task_proposals(credentials, page, limit):
21 tpm = TaskProposalModel()
22 proposals = tpm.getByUid(credentials["id"])
23 identifiers = []
24 for proposal in proposals:
25 identifiers.append(proposal.to_dict())
26 return util.json_encode(
27 {
28 "data": identifiers[page * limit : page * limit + limit],
29 "count": len(identifiers),
30 }
31 )
32
33
34 @bottle.get("/task_proposals/all/<page:int>/<limit:int>")
35 @_auth.requires_auth
36 def get_all_task_proposals(credentials, page, limit):
37 um = UserModel()
38 user = um.get(credentials["id"])
39 if not user.admin:
40 bottle.abort(403, "Access denied")
41
42 proposals = dbs.query(TaskProposal)
43 identifiers = []
44 for proposal in proposals:
45 identifiers.append(proposal.to_dict())
46 return util.json_encode(
47 {
48 "data": identifiers[page * limit : page * limit + limit],
49 "count": len(identifiers),
50 }
51 )
52
53
54 @bottle.post("/task_proposals/create")
55 @_auth.requires_auth
56 def create_task_proposal(credentials):
57 data = bottle.request.json
58
59 if not util.check_fields(data, ["task_code", "name", "desc", "longdesc"]):
60 bottle.abort(400, "Missing data")
61
62 tm = TaskModel()
63 if tm.getByTaskCode(data["task_code"]):
64 bottle.abort(400, "Invalid task code; this task code is already taken")
65
66 if tm.getByName(data["name"]):
67 bottle.abort(400, "Invalid name; this name is already taken")
68
69 if not bool(re.search("^[a-zA-Z0-9_-]*$", data["task_code"])):
70 bottle.abort(
71 400,
72 "Invalid task code (no special characters allowed besides underscores "
73 + "and dashes)",
74 )
75
76 try:
77 tp = TaskProposal(
78 uid=credentials["id"],
79 task_code=data["task_code"],
80 name=data["name"],
81 desc=data["desc"],
82 longdesc=data["longdesc"],
83 )
84
85 tm.dbs.add(tp)
86 tm.dbs.flush()
87 tm.dbs.commit()
88 logger.info("Added task proposal (%s)" % (tp.id))
89
90 except Exception as error_message:
91 logger.error("Could not create task proposal (%s)" % error_message)
92 return False
93
94 return util.json_encode({"success": "ok", "id": tp.id})
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/api/controllers/task_proposals.py b/api/controllers/task_proposals.py
--- a/api/controllers/task_proposals.py
+++ b/api/controllers/task_proposals.py
@@ -66,11 +66,13 @@
if tm.getByName(data["name"]):
bottle.abort(400, "Invalid name; this name is already taken")
- if not bool(re.search("^[a-zA-Z0-9_-]*$", data["task_code"])):
+ if not bool(
+ re.search("(?=^[a-zA-Z0-9_-]*$)(?=.*[a-zA-Z].*).*$", data["task_code"])
+ ):
bottle.abort(
400,
"Invalid task code (no special characters allowed besides underscores "
- + "and dashes)",
+ + "and dashes. At least one letter required)",
)
try:
|
{"golden_diff": "diff --git a/api/controllers/task_proposals.py b/api/controllers/task_proposals.py\n--- a/api/controllers/task_proposals.py\n+++ b/api/controllers/task_proposals.py\n@@ -66,11 +66,13 @@\n if tm.getByName(data[\"name\"]):\n bottle.abort(400, \"Invalid name; this name is already taken\")\n \n- if not bool(re.search(\"^[a-zA-Z0-9_-]*$\", data[\"task_code\"])):\n+ if not bool(\n+ re.search(\"(?=^[a-zA-Z0-9_-]*$)(?=.*[a-zA-Z].*).*$\", data[\"task_code\"])\n+ ):\n bottle.abort(\n 400,\n \"Invalid task code (no special characters allowed besides underscores \"\n- + \"and dashes)\",\n+ + \"and dashes. At least one letter required)\",\n )\n \n try:\n", "issue": "Creating a task with the \"Task Code\" as a number doesn't work as expected.\nAfter creating a task with the task code as a number, and accepting the task, when users want to navigate to the task, it should ideally take us to a page which says \"The task owner still needs to activate this task.\", but in this case, we are shown the respective page for a millisecond, and taken back to the home page, which I think is unexpected behaviour.\r\n\r\nA demonstration is given in the following screen recording of the same issue.\r\n\r\n**Steps to reproduce**:\r\n- Create a task proposal with the \"Task Code\" field as a number\r\n- Accept the task as the admin user.\r\n- Now try to click on the respective task from your \"Tasks\" page. It should just take you back to the homepage.\r\n\r\nThis seems to happen only for a purely numeric \"Task Code\" and not for an alphanumeric \"Task Code\"\r\n\r\nhttps://user-images.githubusercontent.com/48560219/135757335-d98f116f-b7d6-44dc-a1fd-0c8b6fac7c61.mov\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nimport re\n\nimport bottle\n\nimport common.auth as _auth\nimport common.helpers as util\nfrom common.logging import logger\nfrom models.base import DBSession as dbs\nfrom models.task import TaskModel\nfrom models.task_proposal import TaskProposal, TaskProposalModel\nfrom models.user import UserModel\n\n\[email protected](\"/task_proposals/user/<page:int>/<limit:int>\")\n@_auth.requires_auth\ndef get_user_task_proposals(credentials, page, limit):\n tpm = TaskProposalModel()\n proposals = tpm.getByUid(credentials[\"id\"])\n identifiers = []\n for proposal in proposals:\n identifiers.append(proposal.to_dict())\n return util.json_encode(\n {\n \"data\": identifiers[page * limit : page * limit + limit],\n \"count\": len(identifiers),\n }\n )\n\n\[email protected](\"/task_proposals/all/<page:int>/<limit:int>\")\n@_auth.requires_auth\ndef get_all_task_proposals(credentials, page, limit):\n um = UserModel()\n user = um.get(credentials[\"id\"])\n if not user.admin:\n bottle.abort(403, \"Access denied\")\n\n proposals = dbs.query(TaskProposal)\n identifiers = []\n for proposal in proposals:\n identifiers.append(proposal.to_dict())\n return util.json_encode(\n {\n \"data\": identifiers[page * limit : page * limit + limit],\n \"count\": len(identifiers),\n }\n )\n\n\[email protected](\"/task_proposals/create\")\n@_auth.requires_auth\ndef create_task_proposal(credentials):\n data = bottle.request.json\n\n if not util.check_fields(data, [\"task_code\", \"name\", \"desc\", \"longdesc\"]):\n bottle.abort(400, \"Missing data\")\n\n tm = TaskModel()\n if tm.getByTaskCode(data[\"task_code\"]):\n bottle.abort(400, \"Invalid task code; this task code is already taken\")\n\n if tm.getByName(data[\"name\"]):\n bottle.abort(400, \"Invalid name; this name is already taken\")\n\n if not bool(re.search(\"^[a-zA-Z0-9_-]*$\", data[\"task_code\"])):\n bottle.abort(\n 400,\n \"Invalid task code (no special characters allowed besides underscores \"\n + \"and dashes)\",\n )\n\n try:\n tp = TaskProposal(\n uid=credentials[\"id\"],\n task_code=data[\"task_code\"],\n name=data[\"name\"],\n desc=data[\"desc\"],\n longdesc=data[\"longdesc\"],\n )\n\n tm.dbs.add(tp)\n tm.dbs.flush()\n tm.dbs.commit()\n logger.info(\"Added task proposal (%s)\" % (tp.id))\n\n except Exception as error_message:\n logger.error(\"Could not create task proposal (%s)\" % error_message)\n return False\n\n return util.json_encode({\"success\": \"ok\", \"id\": tp.id})\n", "path": "api/controllers/task_proposals.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nimport re\n\nimport bottle\n\nimport common.auth as _auth\nimport common.helpers as util\nfrom common.logging import logger\nfrom models.base import DBSession as dbs\nfrom models.task import TaskModel\nfrom models.task_proposal import TaskProposal, TaskProposalModel\nfrom models.user import UserModel\n\n\[email protected](\"/task_proposals/user/<page:int>/<limit:int>\")\n@_auth.requires_auth\ndef get_user_task_proposals(credentials, page, limit):\n tpm = TaskProposalModel()\n proposals = tpm.getByUid(credentials[\"id\"])\n identifiers = []\n for proposal in proposals:\n identifiers.append(proposal.to_dict())\n return util.json_encode(\n {\n \"data\": identifiers[page * limit : page * limit + limit],\n \"count\": len(identifiers),\n }\n )\n\n\[email protected](\"/task_proposals/all/<page:int>/<limit:int>\")\n@_auth.requires_auth\ndef get_all_task_proposals(credentials, page, limit):\n um = UserModel()\n user = um.get(credentials[\"id\"])\n if not user.admin:\n bottle.abort(403, \"Access denied\")\n\n proposals = dbs.query(TaskProposal)\n identifiers = []\n for proposal in proposals:\n identifiers.append(proposal.to_dict())\n return util.json_encode(\n {\n \"data\": identifiers[page * limit : page * limit + limit],\n \"count\": len(identifiers),\n }\n )\n\n\[email protected](\"/task_proposals/create\")\n@_auth.requires_auth\ndef create_task_proposal(credentials):\n data = bottle.request.json\n\n if not util.check_fields(data, [\"task_code\", \"name\", \"desc\", \"longdesc\"]):\n bottle.abort(400, \"Missing data\")\n\n tm = TaskModel()\n if tm.getByTaskCode(data[\"task_code\"]):\n bottle.abort(400, \"Invalid task code; this task code is already taken\")\n\n if tm.getByName(data[\"name\"]):\n bottle.abort(400, \"Invalid name; this name is already taken\")\n\n if not bool(\n re.search(\"(?=^[a-zA-Z0-9_-]*$)(?=.*[a-zA-Z].*).*$\", data[\"task_code\"])\n ):\n bottle.abort(\n 400,\n \"Invalid task code (no special characters allowed besides underscores \"\n + \"and dashes. At least one letter required)\",\n )\n\n try:\n tp = TaskProposal(\n uid=credentials[\"id\"],\n task_code=data[\"task_code\"],\n name=data[\"name\"],\n desc=data[\"desc\"],\n longdesc=data[\"longdesc\"],\n )\n\n tm.dbs.add(tp)\n tm.dbs.flush()\n tm.dbs.commit()\n logger.info(\"Added task proposal (%s)\" % (tp.id))\n\n except Exception as error_message:\n logger.error(\"Could not create task proposal (%s)\" % error_message)\n return False\n\n return util.json_encode({\"success\": \"ok\", \"id\": tp.id})\n", "path": "api/controllers/task_proposals.py"}]}
| 1,347 | 193 |
gh_patches_debug_4002
|
rasdani/github-patches
|
git_diff
|
pypa__cibuildwheel-199
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cibuildwheel CI tests failing on Azure for windows
`cibuildwheel` CI tests which are using the sample configuration in README are failing on Windows following Azure update to support python 3.8
Given the number of CI providers now tested, I guess we can try to test `cibuildwheel` on python 2.7, 3.5, 3.6, 3.7 and 3.8 without too much overhead on test time by dispatching the python versions running `cibuildwheel` across CI providers.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 try:
5 from setuptools import setup
6 except ImportError:
7 from distutils.core import setup
8
9 setup(
10 name='cibuildwheel',
11 version='0.12.0',
12 install_requires=['bashlex!=0.13'],
13 description="Build Python wheels on CI with minimal configuration.",
14 long_description='For readme please see http://github.com/joerick/cibuildwheel',
15 author="Joe Rickerby",
16 author_email='[email protected]',
17 url='https://github.com/joerick/cibuildwheel',
18 packages=['cibuildwheel',],
19 license="BSD",
20 zip_safe=False,
21 package_data={
22 'cibuildwheel': ['resources/*'],
23 },
24 keywords='ci wheel packaging pypi travis appveyor macos linux windows',
25 classifiers=[
26 'Intended Audience :: Developers',
27 'Natural Language :: English',
28 'Programming Language :: Python :: 2',
29 'Programming Language :: Python :: 3',
30 'Development Status :: 4 - Beta',
31 'License :: OSI Approved :: BSD License',
32 'Programming Language :: Python :: Implementation :: CPython',
33 'Topic :: Software Development :: Build Tools',
34 ],
35 entry_points={
36 'console_scripts': [
37 'cibuildwheel = cibuildwheel.__main__:main',
38 ],
39 },
40 )
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -21,6 +21,8 @@
package_data={
'cibuildwheel': ['resources/*'],
},
+ # Supported python versions
+ python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',
keywords='ci wheel packaging pypi travis appveyor macos linux windows',
classifiers=[
'Intended Audience :: Developers',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -21,6 +21,8 @@\n package_data={\n 'cibuildwheel': ['resources/*'],\n },\n+ # Supported python versions\n+ python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n keywords='ci wheel packaging pypi travis appveyor macos linux windows',\n classifiers=[\n 'Intended Audience :: Developers',\n", "issue": "cibuildwheel CI tests failing on Azure for windows\n`cibuildwheel` CI tests which are using the sample configuration in README are failing on Windows following Azure update to support python 3.8\r\n\r\nGiven the number of CI providers now tested, I guess we can try to test `cibuildwheel` on python 2.7, 3.5, 3.6, 3.7 and 3.8 without too much overhead on test time by dispatching the python versions running `cibuildwheel` across CI providers.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nsetup(\n name='cibuildwheel',\n version='0.12.0',\n install_requires=['bashlex!=0.13'],\n description=\"Build Python wheels on CI with minimal configuration.\",\n long_description='For readme please see http://github.com/joerick/cibuildwheel',\n author=\"Joe Rickerby\",\n author_email='[email protected]',\n url='https://github.com/joerick/cibuildwheel',\n packages=['cibuildwheel',],\n license=\"BSD\",\n zip_safe=False,\n package_data={\n 'cibuildwheel': ['resources/*'],\n },\n keywords='ci wheel packaging pypi travis appveyor macos linux windows',\n classifiers=[\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n 'Development Status :: 4 - Beta',\n 'License :: OSI Approved :: BSD License',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Topic :: Software Development :: Build Tools',\n ],\n entry_points={\n 'console_scripts': [\n 'cibuildwheel = cibuildwheel.__main__:main',\n ],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nsetup(\n name='cibuildwheel',\n version='0.12.0',\n install_requires=['bashlex!=0.13'],\n description=\"Build Python wheels on CI with minimal configuration.\",\n long_description='For readme please see http://github.com/joerick/cibuildwheel',\n author=\"Joe Rickerby\",\n author_email='[email protected]',\n url='https://github.com/joerick/cibuildwheel',\n packages=['cibuildwheel',],\n license=\"BSD\",\n zip_safe=False,\n package_data={\n 'cibuildwheel': ['resources/*'],\n },\n # Supported python versions\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n keywords='ci wheel packaging pypi travis appveyor macos linux windows',\n classifiers=[\n 'Intended Audience :: Developers',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n 'Development Status :: 4 - Beta',\n 'License :: OSI Approved :: BSD License',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Topic :: Software Development :: Build Tools',\n ],\n entry_points={\n 'console_scripts': [\n 'cibuildwheel = cibuildwheel.__main__:main',\n ],\n },\n)\n", "path": "setup.py"}]}
| 748 | 120 |
gh_patches_debug_15457
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-1268
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
if self.container().type() == 'tab': AttributeError: 'NoneType' object has no attribute 'type'
Not sure why I was seeing this or why no one else had. But I seem to have solved the problem by just adding a check that container is not None on line 155 of Dock.py.
if self.container() is not None and self.container().type() == 'tab':
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyqtgraph/dockarea/Dock.py`
Content:
```
1 from ..Qt import QtCore, QtGui
2
3 from .DockDrop import *
4 from ..widgets.VerticalLabel import VerticalLabel
5 from ..python2_3 import asUnicode
6
7 class Dock(QtGui.QWidget, DockDrop):
8
9 sigStretchChanged = QtCore.Signal()
10 sigClosed = QtCore.Signal(object)
11
12 def __init__(self, name, area=None, size=(10, 10), widget=None, hideTitle=False, autoOrientation=True, closable=False):
13 QtGui.QWidget.__init__(self)
14 DockDrop.__init__(self)
15 self._container = None
16 self._name = name
17 self.area = area
18 self.label = DockLabel(name, self, closable)
19 if closable:
20 self.label.sigCloseClicked.connect(self.close)
21 self.labelHidden = False
22 self.moveLabel = True ## If false, the dock is no longer allowed to move the label.
23 self.autoOrient = autoOrientation
24 self.orientation = 'horizontal'
25 #self.label.setAlignment(QtCore.Qt.AlignHCenter)
26 self.topLayout = QtGui.QGridLayout()
27 self.topLayout.setContentsMargins(0, 0, 0, 0)
28 self.topLayout.setSpacing(0)
29 self.setLayout(self.topLayout)
30 self.topLayout.addWidget(self.label, 0, 1)
31 self.widgetArea = QtGui.QWidget()
32 self.topLayout.addWidget(self.widgetArea, 1, 1)
33 self.layout = QtGui.QGridLayout()
34 self.layout.setContentsMargins(0, 0, 0, 0)
35 self.layout.setSpacing(0)
36 self.widgetArea.setLayout(self.layout)
37 self.widgetArea.setSizePolicy(QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Expanding)
38 self.widgets = []
39 self._container = None
40 self.currentRow = 0
41 #self.titlePos = 'top'
42 self.raiseOverlay()
43 self.hStyle = """
44 Dock > QWidget {
45 border: 1px solid #000;
46 border-radius: 5px;
47 border-top-left-radius: 0px;
48 border-top-right-radius: 0px;
49 border-top-width: 0px;
50 }"""
51 self.vStyle = """
52 Dock > QWidget {
53 border: 1px solid #000;
54 border-radius: 5px;
55 border-top-left-radius: 0px;
56 border-bottom-left-radius: 0px;
57 border-left-width: 0px;
58 }"""
59 self.nStyle = """
60 Dock > QWidget {
61 border: 1px solid #000;
62 border-radius: 5px;
63 }"""
64 self.dragStyle = """
65 Dock > QWidget {
66 border: 4px solid #00F;
67 border-radius: 5px;
68 }"""
69 self.setAutoFillBackground(False)
70 self.widgetArea.setStyleSheet(self.hStyle)
71
72 self.setStretch(*size)
73
74 if widget is not None:
75 self.addWidget(widget)
76
77 if hideTitle:
78 self.hideTitleBar()
79
80 def implements(self, name=None):
81 if name is None:
82 return ['dock']
83 else:
84 return name == 'dock'
85
86 def setStretch(self, x=None, y=None):
87 """
88 Set the 'target' size for this Dock.
89 The actual size will be determined by comparing this Dock's
90 stretch value to the rest of the docks it shares space with.
91 """
92 if x is None:
93 x = 0
94 if y is None:
95 y = 0
96 self._stretch = (x, y)
97 self.sigStretchChanged.emit()
98
99 def stretch(self):
100 return self._stretch
101
102 def hideTitleBar(self):
103 """
104 Hide the title bar for this Dock.
105 This will prevent the Dock being moved by the user.
106 """
107 self.label.hide()
108 self.labelHidden = True
109 if 'center' in self.allowedAreas:
110 self.allowedAreas.remove('center')
111 self.updateStyle()
112
113 def showTitleBar(self):
114 """
115 Show the title bar for this Dock.
116 """
117 self.label.show()
118 self.labelHidden = False
119 self.allowedAreas.add('center')
120 self.updateStyle()
121
122 def title(self):
123 """
124 Gets the text displayed in the title bar for this dock.
125 """
126 return asUnicode(self.label.text())
127
128 def setTitle(self, text):
129 """
130 Sets the text displayed in title bar for this Dock.
131 """
132 self.label.setText(text)
133
134 def setOrientation(self, o='auto', force=False):
135 """
136 Sets the orientation of the title bar for this Dock.
137 Must be one of 'auto', 'horizontal', or 'vertical'.
138 By default ('auto'), the orientation is determined
139 based on the aspect ratio of the Dock.
140 """
141 if o == 'auto' and self.autoOrient:
142 if self.container().type() == 'tab':
143 o = 'horizontal'
144 elif self.width() > self.height()*1.5:
145 o = 'vertical'
146 else:
147 o = 'horizontal'
148 if force or self.orientation != o:
149 self.orientation = o
150 self.label.setOrientation(o)
151 self.updateStyle()
152
153 def updateStyle(self):
154 ## updates orientation and appearance of title bar
155 if self.labelHidden:
156 self.widgetArea.setStyleSheet(self.nStyle)
157 elif self.orientation == 'vertical':
158 self.label.setOrientation('vertical')
159 if self.moveLabel:
160 self.topLayout.addWidget(self.label, 1, 0)
161 self.widgetArea.setStyleSheet(self.vStyle)
162 else:
163 self.label.setOrientation('horizontal')
164 if self.moveLabel:
165 self.topLayout.addWidget(self.label, 0, 1)
166 self.widgetArea.setStyleSheet(self.hStyle)
167
168 def resizeEvent(self, ev):
169 self.setOrientation()
170 self.resizeOverlay(self.size())
171
172 def name(self):
173 return self._name
174
175 def addWidget(self, widget, row=None, col=0, rowspan=1, colspan=1):
176 """
177 Add a new widget to the interior of this Dock.
178 Each Dock uses a QGridLayout to arrange widgets within.
179 """
180 if row is None:
181 row = self.currentRow
182 self.currentRow = max(row+1, self.currentRow)
183 self.widgets.append(widget)
184 self.layout.addWidget(widget, row, col, rowspan, colspan)
185 self.raiseOverlay()
186
187 def startDrag(self):
188 self.drag = QtGui.QDrag(self)
189 mime = QtCore.QMimeData()
190 self.drag.setMimeData(mime)
191 self.widgetArea.setStyleSheet(self.dragStyle)
192 self.update()
193 action = self.drag.exec_()
194 self.updateStyle()
195
196 def float(self):
197 self.area.floatDock(self)
198
199 def container(self):
200 return self._container
201
202 def containerChanged(self, c):
203 if self._container is not None:
204 # ask old container to close itself if it is no longer needed
205 self._container.apoptose()
206 self._container = c
207 if c is None:
208 self.area = None
209 else:
210 self.area = c.area
211 if c.type() != 'tab':
212 self.moveLabel = True
213 self.label.setDim(False)
214 else:
215 self.moveLabel = False
216
217 self.setOrientation(force=True)
218
219 def raiseDock(self):
220 """If this Dock is stacked underneath others, raise it to the top."""
221 self.container().raiseDock(self)
222
223 def close(self):
224 """Remove this dock from the DockArea it lives inside."""
225 self.setParent(None)
226 QtGui.QLabel.close(self.label)
227 self.label.setParent(None)
228 self._container.apoptose()
229 self._container = None
230 self.sigClosed.emit(self)
231
232 def __repr__(self):
233 return "<Dock %s %s>" % (self.name(), self.stretch())
234
235 ## PySide bug: We need to explicitly redefine these methods
236 ## or else drag/drop events will not be delivered.
237 def dragEnterEvent(self, *args):
238 DockDrop.dragEnterEvent(self, *args)
239
240 def dragMoveEvent(self, *args):
241 DockDrop.dragMoveEvent(self, *args)
242
243 def dragLeaveEvent(self, *args):
244 DockDrop.dragLeaveEvent(self, *args)
245
246 def dropEvent(self, *args):
247 DockDrop.dropEvent(self, *args)
248
249
250 class DockLabel(VerticalLabel):
251
252 sigClicked = QtCore.Signal(object, object)
253 sigCloseClicked = QtCore.Signal()
254
255 def __init__(self, text, dock, showCloseButton):
256 self.dim = False
257 self.fixedWidth = False
258 VerticalLabel.__init__(self, text, orientation='horizontal', forceWidth=False)
259 self.setAlignment(QtCore.Qt.AlignTop|QtCore.Qt.AlignHCenter)
260 self.dock = dock
261 self.updateStyle()
262 self.setAutoFillBackground(False)
263 self.startedDrag = False
264
265 self.closeButton = None
266 if showCloseButton:
267 self.closeButton = QtGui.QToolButton(self)
268 self.closeButton.clicked.connect(self.sigCloseClicked)
269 self.closeButton.setIcon(QtGui.QApplication.style().standardIcon(QtGui.QStyle.SP_TitleBarCloseButton))
270
271 def updateStyle(self):
272 r = '3px'
273 if self.dim:
274 fg = '#aaa'
275 bg = '#44a'
276 border = '#339'
277 else:
278 fg = '#fff'
279 bg = '#66c'
280 border = '#55B'
281
282 if self.orientation == 'vertical':
283 self.vStyle = """DockLabel {
284 background-color : %s;
285 color : %s;
286 border-top-right-radius: 0px;
287 border-top-left-radius: %s;
288 border-bottom-right-radius: 0px;
289 border-bottom-left-radius: %s;
290 border-width: 0px;
291 border-right: 2px solid %s;
292 padding-top: 3px;
293 padding-bottom: 3px;
294 }""" % (bg, fg, r, r, border)
295 self.setStyleSheet(self.vStyle)
296 else:
297 self.hStyle = """DockLabel {
298 background-color : %s;
299 color : %s;
300 border-top-right-radius: %s;
301 border-top-left-radius: %s;
302 border-bottom-right-radius: 0px;
303 border-bottom-left-radius: 0px;
304 border-width: 0px;
305 border-bottom: 2px solid %s;
306 padding-left: 3px;
307 padding-right: 3px;
308 }""" % (bg, fg, r, r, border)
309 self.setStyleSheet(self.hStyle)
310
311 def setDim(self, d):
312 if self.dim != d:
313 self.dim = d
314 self.updateStyle()
315
316 def setOrientation(self, o):
317 VerticalLabel.setOrientation(self, o)
318 self.updateStyle()
319
320 def mousePressEvent(self, ev):
321 if ev.button() == QtCore.Qt.LeftButton:
322 self.pressPos = ev.pos()
323 self.startedDrag = False
324 ev.accept()
325
326 def mouseMoveEvent(self, ev):
327 if not self.startedDrag and (ev.pos() - self.pressPos).manhattanLength() > QtGui.QApplication.startDragDistance():
328 self.dock.startDrag()
329 ev.accept()
330
331 def mouseReleaseEvent(self, ev):
332 ev.accept()
333 if not self.startedDrag:
334 self.sigClicked.emit(self, ev)
335
336 def mouseDoubleClickEvent(self, ev):
337 if ev.button() == QtCore.Qt.LeftButton:
338 self.dock.float()
339
340 def resizeEvent (self, ev):
341 if self.closeButton:
342 if self.orientation == 'vertical':
343 size = ev.size().width()
344 pos = QtCore.QPoint(0, 0)
345 else:
346 size = ev.size().height()
347 pos = QtCore.QPoint(ev.size().width() - size, 0)
348 self.closeButton.setFixedSize(QtCore.QSize(size, size))
349 self.closeButton.move(pos)
350 super(DockLabel,self).resizeEvent(ev)
351
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyqtgraph/dockarea/Dock.py b/pyqtgraph/dockarea/Dock.py
--- a/pyqtgraph/dockarea/Dock.py
+++ b/pyqtgraph/dockarea/Dock.py
@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
from ..Qt import QtCore, QtGui
from .DockDrop import *
@@ -138,6 +139,12 @@
By default ('auto'), the orientation is determined
based on the aspect ratio of the Dock.
"""
+ # setOrientation may be called before the container is set in some cases
+ # (via resizeEvent), so there's no need to do anything here until called
+ # again by containerChanged
+ if self.container() is None:
+ return
+
if o == 'auto' and self.autoOrient:
if self.container().type() == 'tab':
o = 'horizontal'
|
{"golden_diff": "diff --git a/pyqtgraph/dockarea/Dock.py b/pyqtgraph/dockarea/Dock.py\n--- a/pyqtgraph/dockarea/Dock.py\n+++ b/pyqtgraph/dockarea/Dock.py\n@@ -1,3 +1,4 @@\n+# -*- coding: utf-8 -*-\n from ..Qt import QtCore, QtGui\n \n from .DockDrop import *\n@@ -138,6 +139,12 @@\n By default ('auto'), the orientation is determined\n based on the aspect ratio of the Dock.\n \"\"\"\n+ # setOrientation may be called before the container is set in some cases\n+ # (via resizeEvent), so there's no need to do anything here until called\n+ # again by containerChanged\n+ if self.container() is None:\n+ return\n+\n if o == 'auto' and self.autoOrient:\n if self.container().type() == 'tab':\n o = 'horizontal'\n", "issue": " if self.container().type() == 'tab': AttributeError: 'NoneType' object has no attribute 'type'\nNot sure why I was seeing this or why no one else had. But I seem to have solved the problem by just adding a check that container is not None on line 155 of Dock.py. \r\n\r\n if self.container() is not None and self.container().type() == 'tab':\r\n\n", "before_files": [{"content": "from ..Qt import QtCore, QtGui\n\nfrom .DockDrop import *\nfrom ..widgets.VerticalLabel import VerticalLabel\nfrom ..python2_3 import asUnicode\n\nclass Dock(QtGui.QWidget, DockDrop):\n\n sigStretchChanged = QtCore.Signal()\n sigClosed = QtCore.Signal(object)\n\n def __init__(self, name, area=None, size=(10, 10), widget=None, hideTitle=False, autoOrientation=True, closable=False):\n QtGui.QWidget.__init__(self)\n DockDrop.__init__(self)\n self._container = None\n self._name = name\n self.area = area\n self.label = DockLabel(name, self, closable)\n if closable:\n self.label.sigCloseClicked.connect(self.close)\n self.labelHidden = False\n self.moveLabel = True ## If false, the dock is no longer allowed to move the label.\n self.autoOrient = autoOrientation\n self.orientation = 'horizontal'\n #self.label.setAlignment(QtCore.Qt.AlignHCenter)\n self.topLayout = QtGui.QGridLayout()\n self.topLayout.setContentsMargins(0, 0, 0, 0)\n self.topLayout.setSpacing(0)\n self.setLayout(self.topLayout)\n self.topLayout.addWidget(self.label, 0, 1)\n self.widgetArea = QtGui.QWidget()\n self.topLayout.addWidget(self.widgetArea, 1, 1)\n self.layout = QtGui.QGridLayout()\n self.layout.setContentsMargins(0, 0, 0, 0)\n self.layout.setSpacing(0)\n self.widgetArea.setLayout(self.layout)\n self.widgetArea.setSizePolicy(QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Expanding)\n self.widgets = []\n self._container = None\n self.currentRow = 0\n #self.titlePos = 'top'\n self.raiseOverlay()\n self.hStyle = \"\"\"\n Dock > QWidget {\n border: 1px solid #000;\n border-radius: 5px;\n border-top-left-radius: 0px;\n border-top-right-radius: 0px;\n border-top-width: 0px;\n }\"\"\"\n self.vStyle = \"\"\"\n Dock > QWidget {\n border: 1px solid #000;\n border-radius: 5px;\n border-top-left-radius: 0px;\n border-bottom-left-radius: 0px;\n border-left-width: 0px;\n }\"\"\"\n self.nStyle = \"\"\"\n Dock > QWidget {\n border: 1px solid #000;\n border-radius: 5px;\n }\"\"\"\n self.dragStyle = \"\"\"\n Dock > QWidget {\n border: 4px solid #00F;\n border-radius: 5px;\n }\"\"\"\n self.setAutoFillBackground(False)\n self.widgetArea.setStyleSheet(self.hStyle)\n\n self.setStretch(*size)\n\n if widget is not None:\n self.addWidget(widget)\n\n if hideTitle:\n self.hideTitleBar()\n\n def implements(self, name=None):\n if name is None:\n return ['dock']\n else:\n return name == 'dock'\n\n def setStretch(self, x=None, y=None):\n \"\"\"\n Set the 'target' size for this Dock.\n The actual size will be determined by comparing this Dock's\n stretch value to the rest of the docks it shares space with.\n \"\"\"\n if x is None:\n x = 0\n if y is None:\n y = 0\n self._stretch = (x, y)\n self.sigStretchChanged.emit()\n \n def stretch(self):\n return self._stretch\n\n def hideTitleBar(self):\n \"\"\"\n Hide the title bar for this Dock.\n This will prevent the Dock being moved by the user.\n \"\"\"\n self.label.hide()\n self.labelHidden = True\n if 'center' in self.allowedAreas:\n self.allowedAreas.remove('center')\n self.updateStyle()\n\n def showTitleBar(self):\n \"\"\"\n Show the title bar for this Dock.\n \"\"\"\n self.label.show()\n self.labelHidden = False\n self.allowedAreas.add('center')\n self.updateStyle()\n\n def title(self):\n \"\"\"\n Gets the text displayed in the title bar for this dock.\n \"\"\"\n return asUnicode(self.label.text())\n\n def setTitle(self, text):\n \"\"\"\n Sets the text displayed in title bar for this Dock.\n \"\"\"\n self.label.setText(text)\n\n def setOrientation(self, o='auto', force=False):\n \"\"\"\n Sets the orientation of the title bar for this Dock.\n Must be one of 'auto', 'horizontal', or 'vertical'.\n By default ('auto'), the orientation is determined\n based on the aspect ratio of the Dock.\n \"\"\"\n if o == 'auto' and self.autoOrient:\n if self.container().type() == 'tab':\n o = 'horizontal'\n elif self.width() > self.height()*1.5:\n o = 'vertical'\n else:\n o = 'horizontal'\n if force or self.orientation != o:\n self.orientation = o\n self.label.setOrientation(o)\n self.updateStyle()\n\n def updateStyle(self):\n ## updates orientation and appearance of title bar\n if self.labelHidden:\n self.widgetArea.setStyleSheet(self.nStyle)\n elif self.orientation == 'vertical':\n self.label.setOrientation('vertical')\n if self.moveLabel:\n self.topLayout.addWidget(self.label, 1, 0)\n self.widgetArea.setStyleSheet(self.vStyle)\n else:\n self.label.setOrientation('horizontal')\n if self.moveLabel:\n self.topLayout.addWidget(self.label, 0, 1)\n self.widgetArea.setStyleSheet(self.hStyle)\n\n def resizeEvent(self, ev):\n self.setOrientation()\n self.resizeOverlay(self.size())\n\n def name(self):\n return self._name\n\n def addWidget(self, widget, row=None, col=0, rowspan=1, colspan=1):\n \"\"\"\n Add a new widget to the interior of this Dock.\n Each Dock uses a QGridLayout to arrange widgets within.\n \"\"\"\n if row is None:\n row = self.currentRow\n self.currentRow = max(row+1, self.currentRow)\n self.widgets.append(widget)\n self.layout.addWidget(widget, row, col, rowspan, colspan)\n self.raiseOverlay()\n \n def startDrag(self):\n self.drag = QtGui.QDrag(self)\n mime = QtCore.QMimeData()\n self.drag.setMimeData(mime)\n self.widgetArea.setStyleSheet(self.dragStyle)\n self.update()\n action = self.drag.exec_()\n self.updateStyle()\n\n def float(self):\n self.area.floatDock(self)\n \n def container(self):\n return self._container\n\n def containerChanged(self, c):\n if self._container is not None:\n # ask old container to close itself if it is no longer needed\n self._container.apoptose()\n self._container = c\n if c is None:\n self.area = None\n else:\n self.area = c.area\n if c.type() != 'tab':\n self.moveLabel = True\n self.label.setDim(False)\n else:\n self.moveLabel = False\n \n self.setOrientation(force=True)\n\n def raiseDock(self):\n \"\"\"If this Dock is stacked underneath others, raise it to the top.\"\"\"\n self.container().raiseDock(self)\n\n def close(self):\n \"\"\"Remove this dock from the DockArea it lives inside.\"\"\"\n self.setParent(None)\n QtGui.QLabel.close(self.label)\n self.label.setParent(None)\n self._container.apoptose()\n self._container = None\n self.sigClosed.emit(self)\n\n def __repr__(self):\n return \"<Dock %s %s>\" % (self.name(), self.stretch())\n\n ## PySide bug: We need to explicitly redefine these methods\n ## or else drag/drop events will not be delivered.\n def dragEnterEvent(self, *args):\n DockDrop.dragEnterEvent(self, *args)\n\n def dragMoveEvent(self, *args):\n DockDrop.dragMoveEvent(self, *args)\n\n def dragLeaveEvent(self, *args):\n DockDrop.dragLeaveEvent(self, *args)\n\n def dropEvent(self, *args):\n DockDrop.dropEvent(self, *args)\n\n\nclass DockLabel(VerticalLabel):\n\n sigClicked = QtCore.Signal(object, object)\n sigCloseClicked = QtCore.Signal()\n\n def __init__(self, text, dock, showCloseButton):\n self.dim = False\n self.fixedWidth = False\n VerticalLabel.__init__(self, text, orientation='horizontal', forceWidth=False)\n self.setAlignment(QtCore.Qt.AlignTop|QtCore.Qt.AlignHCenter)\n self.dock = dock\n self.updateStyle()\n self.setAutoFillBackground(False)\n self.startedDrag = False\n\n self.closeButton = None\n if showCloseButton:\n self.closeButton = QtGui.QToolButton(self)\n self.closeButton.clicked.connect(self.sigCloseClicked)\n self.closeButton.setIcon(QtGui.QApplication.style().standardIcon(QtGui.QStyle.SP_TitleBarCloseButton))\n\n def updateStyle(self):\n r = '3px'\n if self.dim:\n fg = '#aaa'\n bg = '#44a'\n border = '#339'\n else:\n fg = '#fff'\n bg = '#66c'\n border = '#55B'\n\n if self.orientation == 'vertical':\n self.vStyle = \"\"\"DockLabel {\n background-color : %s;\n color : %s;\n border-top-right-radius: 0px;\n border-top-left-radius: %s;\n border-bottom-right-radius: 0px;\n border-bottom-left-radius: %s;\n border-width: 0px;\n border-right: 2px solid %s;\n padding-top: 3px;\n padding-bottom: 3px;\n }\"\"\" % (bg, fg, r, r, border)\n self.setStyleSheet(self.vStyle)\n else:\n self.hStyle = \"\"\"DockLabel {\n background-color : %s;\n color : %s;\n border-top-right-radius: %s;\n border-top-left-radius: %s;\n border-bottom-right-radius: 0px;\n border-bottom-left-radius: 0px;\n border-width: 0px;\n border-bottom: 2px solid %s;\n padding-left: 3px;\n padding-right: 3px;\n }\"\"\" % (bg, fg, r, r, border)\n self.setStyleSheet(self.hStyle)\n\n def setDim(self, d):\n if self.dim != d:\n self.dim = d\n self.updateStyle()\n\n def setOrientation(self, o):\n VerticalLabel.setOrientation(self, o)\n self.updateStyle()\n\n def mousePressEvent(self, ev):\n if ev.button() == QtCore.Qt.LeftButton:\n self.pressPos = ev.pos()\n self.startedDrag = False\n ev.accept()\n\n def mouseMoveEvent(self, ev):\n if not self.startedDrag and (ev.pos() - self.pressPos).manhattanLength() > QtGui.QApplication.startDragDistance():\n self.dock.startDrag()\n ev.accept()\n\n def mouseReleaseEvent(self, ev):\n ev.accept()\n if not self.startedDrag:\n self.sigClicked.emit(self, ev)\n \n def mouseDoubleClickEvent(self, ev):\n if ev.button() == QtCore.Qt.LeftButton:\n self.dock.float()\n\n def resizeEvent (self, ev):\n if self.closeButton:\n if self.orientation == 'vertical':\n size = ev.size().width()\n pos = QtCore.QPoint(0, 0)\n else:\n size = ev.size().height()\n pos = QtCore.QPoint(ev.size().width() - size, 0)\n self.closeButton.setFixedSize(QtCore.QSize(size, size))\n self.closeButton.move(pos)\n super(DockLabel,self).resizeEvent(ev)\n", "path": "pyqtgraph/dockarea/Dock.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom ..Qt import QtCore, QtGui\n\nfrom .DockDrop import *\nfrom ..widgets.VerticalLabel import VerticalLabel\nfrom ..python2_3 import asUnicode\n\nclass Dock(QtGui.QWidget, DockDrop):\n\n sigStretchChanged = QtCore.Signal()\n sigClosed = QtCore.Signal(object)\n\n def __init__(self, name, area=None, size=(10, 10), widget=None, hideTitle=False, autoOrientation=True, closable=False):\n QtGui.QWidget.__init__(self)\n DockDrop.__init__(self)\n self._container = None\n self._name = name\n self.area = area\n self.label = DockLabel(name, self, closable)\n if closable:\n self.label.sigCloseClicked.connect(self.close)\n self.labelHidden = False\n self.moveLabel = True ## If false, the dock is no longer allowed to move the label.\n self.autoOrient = autoOrientation\n self.orientation = 'horizontal'\n #self.label.setAlignment(QtCore.Qt.AlignHCenter)\n self.topLayout = QtGui.QGridLayout()\n self.topLayout.setContentsMargins(0, 0, 0, 0)\n self.topLayout.setSpacing(0)\n self.setLayout(self.topLayout)\n self.topLayout.addWidget(self.label, 0, 1)\n self.widgetArea = QtGui.QWidget()\n self.topLayout.addWidget(self.widgetArea, 1, 1)\n self.layout = QtGui.QGridLayout()\n self.layout.setContentsMargins(0, 0, 0, 0)\n self.layout.setSpacing(0)\n self.widgetArea.setLayout(self.layout)\n self.widgetArea.setSizePolicy(QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Expanding)\n self.widgets = []\n self._container = None\n self.currentRow = 0\n #self.titlePos = 'top'\n self.raiseOverlay()\n self.hStyle = \"\"\"\n Dock > QWidget {\n border: 1px solid #000;\n border-radius: 5px;\n border-top-left-radius: 0px;\n border-top-right-radius: 0px;\n border-top-width: 0px;\n }\"\"\"\n self.vStyle = \"\"\"\n Dock > QWidget {\n border: 1px solid #000;\n border-radius: 5px;\n border-top-left-radius: 0px;\n border-bottom-left-radius: 0px;\n border-left-width: 0px;\n }\"\"\"\n self.nStyle = \"\"\"\n Dock > QWidget {\n border: 1px solid #000;\n border-radius: 5px;\n }\"\"\"\n self.dragStyle = \"\"\"\n Dock > QWidget {\n border: 4px solid #00F;\n border-radius: 5px;\n }\"\"\"\n self.setAutoFillBackground(False)\n self.widgetArea.setStyleSheet(self.hStyle)\n\n self.setStretch(*size)\n\n if widget is not None:\n self.addWidget(widget)\n\n if hideTitle:\n self.hideTitleBar()\n\n def implements(self, name=None):\n if name is None:\n return ['dock']\n else:\n return name == 'dock'\n\n def setStretch(self, x=None, y=None):\n \"\"\"\n Set the 'target' size for this Dock.\n The actual size will be determined by comparing this Dock's\n stretch value to the rest of the docks it shares space with.\n \"\"\"\n if x is None:\n x = 0\n if y is None:\n y = 0\n self._stretch = (x, y)\n self.sigStretchChanged.emit()\n \n def stretch(self):\n return self._stretch\n\n def hideTitleBar(self):\n \"\"\"\n Hide the title bar for this Dock.\n This will prevent the Dock being moved by the user.\n \"\"\"\n self.label.hide()\n self.labelHidden = True\n if 'center' in self.allowedAreas:\n self.allowedAreas.remove('center')\n self.updateStyle()\n\n def showTitleBar(self):\n \"\"\"\n Show the title bar for this Dock.\n \"\"\"\n self.label.show()\n self.labelHidden = False\n self.allowedAreas.add('center')\n self.updateStyle()\n\n def title(self):\n \"\"\"\n Gets the text displayed in the title bar for this dock.\n \"\"\"\n return asUnicode(self.label.text())\n\n def setTitle(self, text):\n \"\"\"\n Sets the text displayed in title bar for this Dock.\n \"\"\"\n self.label.setText(text)\n\n def setOrientation(self, o='auto', force=False):\n \"\"\"\n Sets the orientation of the title bar for this Dock.\n Must be one of 'auto', 'horizontal', or 'vertical'.\n By default ('auto'), the orientation is determined\n based on the aspect ratio of the Dock.\n \"\"\"\n # setOrientation may be called before the container is set in some cases\n # (via resizeEvent), so there's no need to do anything here until called\n # again by containerChanged\n if self.container() is None:\n return\n\n if o == 'auto' and self.autoOrient:\n if self.container().type() == 'tab':\n o = 'horizontal'\n elif self.width() > self.height()*1.5:\n o = 'vertical'\n else:\n o = 'horizontal'\n if force or self.orientation != o:\n self.orientation = o\n self.label.setOrientation(o)\n self.updateStyle()\n\n def updateStyle(self):\n ## updates orientation and appearance of title bar\n if self.labelHidden:\n self.widgetArea.setStyleSheet(self.nStyle)\n elif self.orientation == 'vertical':\n self.label.setOrientation('vertical')\n if self.moveLabel:\n self.topLayout.addWidget(self.label, 1, 0)\n self.widgetArea.setStyleSheet(self.vStyle)\n else:\n self.label.setOrientation('horizontal')\n if self.moveLabel:\n self.topLayout.addWidget(self.label, 0, 1)\n self.widgetArea.setStyleSheet(self.hStyle)\n\n def resizeEvent(self, ev):\n self.setOrientation()\n self.resizeOverlay(self.size())\n\n def name(self):\n return self._name\n\n def addWidget(self, widget, row=None, col=0, rowspan=1, colspan=1):\n \"\"\"\n Add a new widget to the interior of this Dock.\n Each Dock uses a QGridLayout to arrange widgets within.\n \"\"\"\n if row is None:\n row = self.currentRow\n self.currentRow = max(row+1, self.currentRow)\n self.widgets.append(widget)\n self.layout.addWidget(widget, row, col, rowspan, colspan)\n self.raiseOverlay()\n \n def startDrag(self):\n self.drag = QtGui.QDrag(self)\n mime = QtCore.QMimeData()\n self.drag.setMimeData(mime)\n self.widgetArea.setStyleSheet(self.dragStyle)\n self.update()\n action = self.drag.exec_()\n self.updateStyle()\n\n def float(self):\n self.area.floatDock(self)\n \n def container(self):\n return self._container\n\n def containerChanged(self, c):\n if self._container is not None:\n # ask old container to close itself if it is no longer needed\n self._container.apoptose()\n self._container = c\n if c is None:\n self.area = None\n else:\n self.area = c.area\n if c.type() != 'tab':\n self.moveLabel = True\n self.label.setDim(False)\n else:\n self.moveLabel = False\n \n self.setOrientation(force=True)\n\n def raiseDock(self):\n \"\"\"If this Dock is stacked underneath others, raise it to the top.\"\"\"\n self.container().raiseDock(self)\n\n def close(self):\n \"\"\"Remove this dock from the DockArea it lives inside.\"\"\"\n self.setParent(None)\n QtGui.QLabel.close(self.label)\n self.label.setParent(None)\n self._container.apoptose()\n self._container = None\n self.sigClosed.emit(self)\n\n def __repr__(self):\n return \"<Dock %s %s>\" % (self.name(), self.stretch())\n\n ## PySide bug: We need to explicitly redefine these methods\n ## or else drag/drop events will not be delivered.\n def dragEnterEvent(self, *args):\n DockDrop.dragEnterEvent(self, *args)\n\n def dragMoveEvent(self, *args):\n DockDrop.dragMoveEvent(self, *args)\n\n def dragLeaveEvent(self, *args):\n DockDrop.dragLeaveEvent(self, *args)\n\n def dropEvent(self, *args):\n DockDrop.dropEvent(self, *args)\n\n\nclass DockLabel(VerticalLabel):\n\n sigClicked = QtCore.Signal(object, object)\n sigCloseClicked = QtCore.Signal()\n\n def __init__(self, text, dock, showCloseButton):\n self.dim = False\n self.fixedWidth = False\n VerticalLabel.__init__(self, text, orientation='horizontal', forceWidth=False)\n self.setAlignment(QtCore.Qt.AlignTop|QtCore.Qt.AlignHCenter)\n self.dock = dock\n self.updateStyle()\n self.setAutoFillBackground(False)\n self.startedDrag = False\n\n self.closeButton = None\n if showCloseButton:\n self.closeButton = QtGui.QToolButton(self)\n self.closeButton.clicked.connect(self.sigCloseClicked)\n self.closeButton.setIcon(QtGui.QApplication.style().standardIcon(QtGui.QStyle.SP_TitleBarCloseButton))\n\n def updateStyle(self):\n r = '3px'\n if self.dim:\n fg = '#aaa'\n bg = '#44a'\n border = '#339'\n else:\n fg = '#fff'\n bg = '#66c'\n border = '#55B'\n\n if self.orientation == 'vertical':\n self.vStyle = \"\"\"DockLabel {\n background-color : %s;\n color : %s;\n border-top-right-radius: 0px;\n border-top-left-radius: %s;\n border-bottom-right-radius: 0px;\n border-bottom-left-radius: %s;\n border-width: 0px;\n border-right: 2px solid %s;\n padding-top: 3px;\n padding-bottom: 3px;\n }\"\"\" % (bg, fg, r, r, border)\n self.setStyleSheet(self.vStyle)\n else:\n self.hStyle = \"\"\"DockLabel {\n background-color : %s;\n color : %s;\n border-top-right-radius: %s;\n border-top-left-radius: %s;\n border-bottom-right-radius: 0px;\n border-bottom-left-radius: 0px;\n border-width: 0px;\n border-bottom: 2px solid %s;\n padding-left: 3px;\n padding-right: 3px;\n }\"\"\" % (bg, fg, r, r, border)\n self.setStyleSheet(self.hStyle)\n\n def setDim(self, d):\n if self.dim != d:\n self.dim = d\n self.updateStyle()\n\n def setOrientation(self, o):\n VerticalLabel.setOrientation(self, o)\n self.updateStyle()\n\n def mousePressEvent(self, ev):\n if ev.button() == QtCore.Qt.LeftButton:\n self.pressPos = ev.pos()\n self.startedDrag = False\n ev.accept()\n\n def mouseMoveEvent(self, ev):\n if not self.startedDrag and (ev.pos() - self.pressPos).manhattanLength() > QtGui.QApplication.startDragDistance():\n self.dock.startDrag()\n ev.accept()\n\n def mouseReleaseEvent(self, ev):\n ev.accept()\n if not self.startedDrag:\n self.sigClicked.emit(self, ev)\n \n def mouseDoubleClickEvent(self, ev):\n if ev.button() == QtCore.Qt.LeftButton:\n self.dock.float()\n\n def resizeEvent (self, ev):\n if self.closeButton:\n if self.orientation == 'vertical':\n size = ev.size().width()\n pos = QtCore.QPoint(0, 0)\n else:\n size = ev.size().height()\n pos = QtCore.QPoint(ev.size().width() - size, 0)\n self.closeButton.setFixedSize(QtCore.QSize(size, size))\n self.closeButton.move(pos)\n super(DockLabel,self).resizeEvent(ev)\n", "path": "pyqtgraph/dockarea/Dock.py"}]}
| 3,858 | 207 |
gh_patches_debug_65044
|
rasdani/github-patches
|
git_diff
|
kserve__kserve-1583
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CommandException: No URLs matched: gs://kfserving-examples/models/mnist
/kind bug
I would like to run the kafka mnist example but when I run:
```bash
gsutil cp gs://kfserving-examples/models/mnist .
```
As per the readme, I get
```
CommandException: No URLs matched: gs://kfserving-examples/models/mnist
```
**What did you expect to happen:**
I expected to be able to download the model checkpoint.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/samples/kafka/setup.py`
Content:
```
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from setuptools import setup, find_packages
16
17 tests_require = [
18 'pytest',
19 'pytest-tornasync',
20 'mypy'
21 ]
22
23 setup(
24 name='transformer',
25 version='0.1.0',
26 author_email='[email protected]',
27 license='../../LICENSE.txt',
28 url='https://github.com/kubeflow/kfserving/docs/sameples/transformer',
29 description='Transformer',
30 long_description=open('README.md').read(),
31 python_requires='>=3.6',
32 packages=find_packages("transformer"),
33 install_requires=[
34 "kfserving>=0.2.1",
35 "argparse>=1.4.0",
36 "requests>=2.22.0",
37 "joblib>=0.13.2",
38 "pandas>=0.24.2",
39 "numpy>=1.16.3",
40 "kubernetes >= 9.0.0",
41 "opencv-python-headless==4.0.0.21",
42 "boto3==1.7.2"
43 ],
44 tests_require=tests_require,
45 extras_require={'test': tests_require}
46 )
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/samples/kafka/setup.py b/docs/samples/kafka/setup.py
--- a/docs/samples/kafka/setup.py
+++ b/docs/samples/kafka/setup.py
@@ -25,7 +25,7 @@
version='0.1.0',
author_email='[email protected]',
license='../../LICENSE.txt',
- url='https://github.com/kubeflow/kfserving/docs/sameples/transformer',
+ url='https://github.com/kubeflow/kfserving/tree/master/docs/samples#deploy-inferenceservice-with-transformer',
description='Transformer',
long_description=open('README.md').read(),
python_requires='>=3.6',
|
{"golden_diff": "diff --git a/docs/samples/kafka/setup.py b/docs/samples/kafka/setup.py\n--- a/docs/samples/kafka/setup.py\n+++ b/docs/samples/kafka/setup.py\n@@ -25,7 +25,7 @@\n version='0.1.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n- url='https://github.com/kubeflow/kfserving/docs/sameples/transformer',\n+ url='https://github.com/kubeflow/kfserving/tree/master/docs/samples#deploy-inferenceservice-with-transformer',\n description='Transformer',\n long_description=open('README.md').read(),\n python_requires='>=3.6',\n", "issue": "CommandException: No URLs matched: gs://kfserving-examples/models/mnist\n/kind bug \r\n\r\nI would like to run the kafka mnist example but when I run:\r\n```bash\r\ngsutil cp gs://kfserving-examples/models/mnist .\r\n```\r\nAs per the readme, I get\r\n```\r\nCommandException: No URLs matched: gs://kfserving-examples/models/mnist\r\n```\r\n\r\n**What did you expect to happen:**\r\nI expected to be able to download the model checkpoint. \r\n\n", "before_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='transformer',\n version='0.1.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kubeflow/kfserving/docs/sameples/transformer',\n description='Transformer',\n long_description=open('README.md').read(),\n python_requires='>=3.6',\n packages=find_packages(\"transformer\"),\n install_requires=[\n \"kfserving>=0.2.1\",\n \"argparse>=1.4.0\",\n \"requests>=2.22.0\",\n \"joblib>=0.13.2\",\n \"pandas>=0.24.2\",\n \"numpy>=1.16.3\",\n \"kubernetes >= 9.0.0\",\n \"opencv-python-headless==4.0.0.21\",\n \"boto3==1.7.2\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "docs/samples/kafka/setup.py"}], "after_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='transformer',\n version='0.1.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kubeflow/kfserving/tree/master/docs/samples#deploy-inferenceservice-with-transformer',\n description='Transformer',\n long_description=open('README.md').read(),\n python_requires='>=3.6',\n packages=find_packages(\"transformer\"),\n install_requires=[\n \"kfserving>=0.2.1\",\n \"argparse>=1.4.0\",\n \"requests>=2.22.0\",\n \"joblib>=0.13.2\",\n \"pandas>=0.24.2\",\n \"numpy>=1.16.3\",\n \"kubernetes >= 9.0.0\",\n \"opencv-python-headless==4.0.0.21\",\n \"boto3==1.7.2\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "docs/samples/kafka/setup.py"}]}
| 843 | 158 |
gh_patches_debug_10991
|
rasdani/github-patches
|
git_diff
|
biolab__orange3-text-526
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Word Enrichment: sort by p-value
<!--
This is an issue template. Please fill in the relevant details in the
sections below.
-->
##### Text version
<!-- From menu _Options→Add-ons→Orange3-Text_ or code `orangecontrib.text.version.full_version` -->
0.8.0
##### Orange version
<!-- From menu _Help→About→Version_ or code `Orange.version.full_version` -->
3.26.0.dev
##### Expected behavior
Word Enrichment sorts by p-value by default.
##### Actual behavior
It sorts by words (alphabetically).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `orangecontrib/text/widgets/owwordenrichment.py`
Content:
```
1 from types import SimpleNamespace
2 from typing import List, Optional, Any
3
4 import numpy as np
5 from AnyQt.QtWidgets import QTreeWidget, QTreeView, QTreeWidgetItem
6
7 from Orange.data import Table, Domain
8 from Orange.widgets import gui
9 from Orange.widgets.settings import Setting
10 from Orange.widgets.utils.concurrent import ConcurrentWidgetMixin, TaskState
11 from Orange.widgets.widget import OWWidget, Msg, Input
12 from Orange.statistics.util import FDR
13 from PyQt5.QtCore import QSize
14 from orangecontrib.text import Corpus
15 from orangecontrib.text.util import np_sp_sum
16 from orangecontrib.text.stats import hypergeom_p_values
17
18
19 class Result(SimpleNamespace):
20 words: Optional[List[str]] = None
21 p_values: Optional[List[float]] = None
22 fdr_values: Optional[List[float]] = None
23
24
25 class Runner:
26 @staticmethod
27 def run(
28 selected_data_transformed: Table,
29 data: Table,
30 result: Result,
31 state: TaskState
32 ) -> None:
33 state.set_status("Listing words")
34 result.words = [
35 i.name for i in selected_data_transformed.domain.attributes]
36 state.set_status("Computing p-values")
37 result.p_values = hypergeom_p_values(
38 data.X, selected_data_transformed.X,
39 callback=state.set_progress_value
40 )
41 state.set_status("Computing FDR values")
42 result.fdr_values = FDR(result.p_values)
43
44
45 class OWWordEnrichment(OWWidget, ConcurrentWidgetMixin):
46 # Basic widget info
47 name = "Word Enrichment"
48 description = "Word enrichment analysis for selected documents."
49 icon = "icons/SetEnrichment.svg"
50 priority = 600
51
52 # Input/output
53 class Inputs:
54 selected_data = Input("Selected Data", Table)
55 data = Input("Data", Table)
56
57 want_main_area = True
58
59 class Error(OWWidget.Error):
60 no_bow_features = Msg('No bag-of-words features!')
61 no_words_overlap = Msg('No words overlap!')
62 empty_selection = Msg('Selected data is empty!')
63 all_selected = Msg('All examples can not be selected!')
64
65 # Settings
66 filter_by_p: bool = Setting(False)
67 filter_p_value: float = Setting(0.01)
68 filter_by_fdr: bool = Setting(True)
69 filter_fdr_value: float = Setting(0.2)
70
71 def __init__(self):
72 OWWidget.__init__(self)
73 ConcurrentWidgetMixin.__init__(self)
74
75 # Init data
76 self.data = None
77 self.selected_data = None
78 # used for transforming the 'selected data' into the 'data' domain
79 self.selected_data_transformed = None
80
81 self.results = Result()
82
83 # info box
84 fbox = gui.widgetBox(self.controlArea, "Info")
85 self.info_fil = gui.label(fbox, self, 'Words displayed: 0')
86
87 # Filtering settings
88 fbox = gui.widgetBox(self.controlArea, "Filter")
89 hbox = gui.widgetBox(fbox, orientation=0)
90
91 self.chb_p = gui.checkBox(hbox, self, "filter_by_p", "p-value",
92 callback=self.filter_and_display,
93 tooltip="Filter by word p-value")
94 self.spin_p = gui.doubleSpin(hbox, self, 'filter_p_value',
95 1e-4, 1, step=1e-4, labelWidth=15,
96 callback=self.filter_and_display,
97 tooltip="Max p-value for word")
98 self.spin_p.setEnabled(self.filter_by_p)
99
100 hbox = gui.widgetBox(fbox, orientation=0)
101 self.chb_fdr = gui.checkBox(hbox, self, "filter_by_fdr", "FDR",
102 callback=self.filter_and_display,
103 tooltip="Filter by word FDR")
104 self.spin_fdr = gui.doubleSpin(hbox, self, 'filter_fdr_value',
105 1e-4, 1, step=1e-4, labelWidth=15,
106 callback=self.filter_and_display,
107 tooltip="Max p-value for word")
108 self.spin_fdr.setEnabled(self.filter_by_fdr)
109 gui.rubber(self.controlArea)
110
111 # Word's list view
112 self.cols = ['Word', 'p-value', 'FDR']
113 self.sig_words = QTreeWidget()
114 self.sig_words.setColumnCount(len(self.cols))
115 self.sig_words.setHeaderLabels(self.cols)
116 self.sig_words.setSortingEnabled(True)
117 self.sig_words.setSelectionMode(QTreeView.NoSelection)
118 self.sig_words.sortByColumn(2, 0) # 0 is ascending order
119 for i in range(len(self.cols)):
120 self.sig_words.resizeColumnToContents(i)
121 self.mainArea.layout().addWidget(self.sig_words)
122
123 def sizeHint(self):
124 return QSize(450, 240)
125
126 @Inputs.data
127 def set_data(self, data=None):
128 self.data = data
129 # selected data transformed depends on data domain
130 self.selected_data_transformed = None
131
132 @Inputs.selected_data
133 def set_data_selected(self, data=None):
134 self.selected_data = data
135
136 def handleNewSignals(self):
137 self.check_data()
138
139 def get_bow_domain(self):
140 domain = self.data.domain
141 return Domain(
142 attributes=[a for a in domain.attributes
143 if a.attributes.get('bow-feature', False)],
144 class_vars=domain.class_vars,
145 metas=domain.metas,
146 source=domain)
147
148 def check_data(self):
149 self.Error.clear()
150 if isinstance(self.data, Table) and \
151 isinstance(self.selected_data, Table):
152 if len(self.selected_data) == 0:
153 self.Error.empty_selection()
154 self.clear()
155 return
156
157 # keep only BoW features
158 bow_domain = self.get_bow_domain()
159 if len(bow_domain.attributes) == 0:
160 self.Error.no_bow_features()
161 self.clear()
162 return
163 self.data = Corpus.from_table(bow_domain, self.data)
164 self.selected_data_transformed = Corpus.from_table(
165 bow_domain, self.selected_data)
166
167 if np_sp_sum(self.selected_data_transformed.X) == 0:
168 self.Error.no_words_overlap()
169 self.clear()
170 elif len(self.data) == len(self.selected_data):
171 self.Error.all_selected()
172 self.clear()
173 else:
174 self.set_input_info()
175 self.apply()
176 else:
177 self.clear()
178
179 def clear(self):
180 self.sig_words.clear()
181 self.info.set_input_summary(self.info.NoInput)
182 self.set_displayed_info(0)
183
184 def filter_enabled(self, b):
185 self.chb_p.setEnabled(b)
186 self.chb_fdr.setEnabled(b)
187 self.spin_p.setEnabled(b)
188 self.spin_fdr.setEnabled(b)
189
190 def filter_and_display(self):
191 self.spin_p.setEnabled(self.filter_by_p)
192 self.spin_fdr.setEnabled(self.filter_by_fdr)
193 self.sig_words.clear()
194
195 if self.selected_data_transformed is None: # do nothing when no Data
196 return
197
198 if self.results.words:
199 count = self.build_tree()
200 else:
201 count = 0
202
203 for i in range(len(self.cols)):
204 self.sig_words.resizeColumnToContents(i)
205 self.set_displayed_info(count)
206
207 def build_tree(self) -> int:
208 count = 0
209 for word, pval, fval in zip(
210 self.results.words,
211 self.results.p_values,
212 self.results.fdr_values
213 ):
214 if ((not self.filter_by_p or pval <= self.filter_p_value) and
215 (not self.filter_by_fdr or fval <= self.filter_fdr_value)):
216 it = EATreeWidgetItem(word, pval, fval, self.sig_words)
217 self.sig_words.addTopLevelItem(it)
218 count += 1
219 return count
220
221 def set_input_info(self) -> None:
222 cluster_words = len(self.selected_data_transformed.domain.attributes)
223 selected_words = np.count_nonzero(np_sp_sum(
224 self.selected_data_transformed.X, axis=0))
225
226 self.info.set_input_summary(
227 f"{cluster_words}|{selected_words}",
228 f"Total words: {cluster_words}\n"
229 f"Words in subset: {selected_words}")
230
231 def set_displayed_info(self, count: int) -> None:
232 self.info_fil.setText(f"Words displayed: {count}")
233
234 def apply(self):
235 self.sig_words.clear()
236 self.filter_enabled(False)
237 self.start(
238 Runner.run,
239 self.selected_data_transformed,
240 self.data,
241 self.results
242 )
243
244 def on_done(self, result: Result) -> None:
245 self.filter_and_display()
246 self.filter_enabled(True)
247
248 def on_exception(self, ex: Exception) -> None:
249 self.filter_enabled(True)
250
251 def tree_to_table(self):
252 view = [self.cols]
253 items = self.sig_words.topLevelItemCount()
254 for i in range(items):
255 line = []
256 for j in range(3):
257 line.append(self.sig_words.topLevelItem(i).text(j))
258 view.append(line)
259 return view
260
261 def send_report(self):
262 if self.results.words:
263 self.report_table("Enriched words", self.tree_to_table())
264
265
266 fp = lambda score: "%0.5f" % score if score > 10e-3 else "%0.1e" % score
267 fpt = lambda score: "%0.9f" % score if score > 10e-3 else "%0.5e" % score
268
269
270 class EATreeWidgetItem(QTreeWidgetItem):
271 def __init__(self, word, p_value, f_value, parent):
272 super().__init__(parent)
273 self.data = [word, p_value, f_value]
274 self.setText(0, word)
275 self.setText(1, fp(p_value))
276 self.setToolTip(1, fpt(p_value))
277 self.setText(2, fp(f_value))
278 self.setToolTip(2, fpt(f_value))
279
280 def __lt__(self, other):
281 col = self.treeWidget().sortColumn()
282 return self.data[col] < other.data[col]
283
284
285 if __name__ == '__main__':
286 from orangewidget.utils.widgetpreview import WidgetPreview
287 from orangecontrib.text.vectorization import BowVectorizer
288
289 corpus = Corpus.from_file('book-excerpts')
290 vect = BowVectorizer()
291 corpus_vect = vect.transform(corpus)
292 WidgetPreview(OWWordEnrichment).run(
293 set_data_selected=corpus_vect[:10], set_data=corpus_vect)
294
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/orangecontrib/text/widgets/owwordenrichment.py b/orangecontrib/text/widgets/owwordenrichment.py
--- a/orangecontrib/text/widgets/owwordenrichment.py
+++ b/orangecontrib/text/widgets/owwordenrichment.py
@@ -115,7 +115,7 @@
self.sig_words.setHeaderLabels(self.cols)
self.sig_words.setSortingEnabled(True)
self.sig_words.setSelectionMode(QTreeView.NoSelection)
- self.sig_words.sortByColumn(2, 0) # 0 is ascending order
+ self.sig_words.sortByColumn(1, 0) # 0 is ascending order
for i in range(len(self.cols)):
self.sig_words.resizeColumnToContents(i)
self.mainArea.layout().addWidget(self.sig_words)
|
{"golden_diff": "diff --git a/orangecontrib/text/widgets/owwordenrichment.py b/orangecontrib/text/widgets/owwordenrichment.py\n--- a/orangecontrib/text/widgets/owwordenrichment.py\n+++ b/orangecontrib/text/widgets/owwordenrichment.py\n@@ -115,7 +115,7 @@\n self.sig_words.setHeaderLabels(self.cols)\n self.sig_words.setSortingEnabled(True)\n self.sig_words.setSelectionMode(QTreeView.NoSelection)\n- self.sig_words.sortByColumn(2, 0) # 0 is ascending order\n+ self.sig_words.sortByColumn(1, 0) # 0 is ascending order\n for i in range(len(self.cols)):\n self.sig_words.resizeColumnToContents(i)\n self.mainArea.layout().addWidget(self.sig_words)\n", "issue": "Word Enrichment: sort by p-value\n<!--\r\nThis is an issue template. Please fill in the relevant details in the\r\nsections below.\r\n-->\r\n\r\n##### Text version\r\n<!-- From menu _Options\u2192Add-ons\u2192Orange3-Text_ or code `orangecontrib.text.version.full_version` -->\r\n0.8.0\r\n\r\n##### Orange version\r\n<!-- From menu _Help\u2192About\u2192Version_ or code `Orange.version.full_version` -->\r\n3.26.0.dev\r\n\r\n##### Expected behavior\r\nWord Enrichment sorts by p-value by default.\r\n\r\n\r\n##### Actual behavior\r\nIt sorts by words (alphabetically).\r\n\r\n\r\n\n", "before_files": [{"content": "from types import SimpleNamespace\nfrom typing import List, Optional, Any\n\nimport numpy as np\nfrom AnyQt.QtWidgets import QTreeWidget, QTreeView, QTreeWidgetItem\n\nfrom Orange.data import Table, Domain\nfrom Orange.widgets import gui\nfrom Orange.widgets.settings import Setting\nfrom Orange.widgets.utils.concurrent import ConcurrentWidgetMixin, TaskState\nfrom Orange.widgets.widget import OWWidget, Msg, Input\nfrom Orange.statistics.util import FDR\nfrom PyQt5.QtCore import QSize\nfrom orangecontrib.text import Corpus\nfrom orangecontrib.text.util import np_sp_sum\nfrom orangecontrib.text.stats import hypergeom_p_values\n\n\nclass Result(SimpleNamespace):\n words: Optional[List[str]] = None\n p_values: Optional[List[float]] = None\n fdr_values: Optional[List[float]] = None\n\n\nclass Runner:\n @staticmethod\n def run(\n selected_data_transformed: Table,\n data: Table,\n result: Result,\n state: TaskState\n ) -> None:\n state.set_status(\"Listing words\")\n result.words = [\n i.name for i in selected_data_transformed.domain.attributes]\n state.set_status(\"Computing p-values\")\n result.p_values = hypergeom_p_values(\n data.X, selected_data_transformed.X,\n callback=state.set_progress_value\n )\n state.set_status(\"Computing FDR values\")\n result.fdr_values = FDR(result.p_values)\n\n\nclass OWWordEnrichment(OWWidget, ConcurrentWidgetMixin):\n # Basic widget info\n name = \"Word Enrichment\"\n description = \"Word enrichment analysis for selected documents.\"\n icon = \"icons/SetEnrichment.svg\"\n priority = 600\n\n # Input/output\n class Inputs:\n selected_data = Input(\"Selected Data\", Table)\n data = Input(\"Data\", Table)\n\n want_main_area = True\n\n class Error(OWWidget.Error):\n no_bow_features = Msg('No bag-of-words features!')\n no_words_overlap = Msg('No words overlap!')\n empty_selection = Msg('Selected data is empty!')\n all_selected = Msg('All examples can not be selected!')\n\n # Settings\n filter_by_p: bool = Setting(False)\n filter_p_value: float = Setting(0.01)\n filter_by_fdr: bool = Setting(True)\n filter_fdr_value: float = Setting(0.2)\n\n def __init__(self):\n OWWidget.__init__(self)\n ConcurrentWidgetMixin.__init__(self)\n\n # Init data\n self.data = None\n self.selected_data = None\n # used for transforming the 'selected data' into the 'data' domain\n self.selected_data_transformed = None\n\n self.results = Result()\n\n # info box\n fbox = gui.widgetBox(self.controlArea, \"Info\")\n self.info_fil = gui.label(fbox, self, 'Words displayed: 0')\n\n # Filtering settings\n fbox = gui.widgetBox(self.controlArea, \"Filter\")\n hbox = gui.widgetBox(fbox, orientation=0)\n\n self.chb_p = gui.checkBox(hbox, self, \"filter_by_p\", \"p-value\",\n callback=self.filter_and_display,\n tooltip=\"Filter by word p-value\")\n self.spin_p = gui.doubleSpin(hbox, self, 'filter_p_value',\n 1e-4, 1, step=1e-4, labelWidth=15,\n callback=self.filter_and_display,\n tooltip=\"Max p-value for word\")\n self.spin_p.setEnabled(self.filter_by_p)\n\n hbox = gui.widgetBox(fbox, orientation=0)\n self.chb_fdr = gui.checkBox(hbox, self, \"filter_by_fdr\", \"FDR\",\n callback=self.filter_and_display,\n tooltip=\"Filter by word FDR\")\n self.spin_fdr = gui.doubleSpin(hbox, self, 'filter_fdr_value',\n 1e-4, 1, step=1e-4, labelWidth=15,\n callback=self.filter_and_display,\n tooltip=\"Max p-value for word\")\n self.spin_fdr.setEnabled(self.filter_by_fdr)\n gui.rubber(self.controlArea)\n\n # Word's list view\n self.cols = ['Word', 'p-value', 'FDR']\n self.sig_words = QTreeWidget()\n self.sig_words.setColumnCount(len(self.cols))\n self.sig_words.setHeaderLabels(self.cols)\n self.sig_words.setSortingEnabled(True)\n self.sig_words.setSelectionMode(QTreeView.NoSelection)\n self.sig_words.sortByColumn(2, 0) # 0 is ascending order\n for i in range(len(self.cols)):\n self.sig_words.resizeColumnToContents(i)\n self.mainArea.layout().addWidget(self.sig_words)\n\n def sizeHint(self):\n return QSize(450, 240)\n\n @Inputs.data\n def set_data(self, data=None):\n self.data = data\n # selected data transformed depends on data domain\n self.selected_data_transformed = None\n\n @Inputs.selected_data\n def set_data_selected(self, data=None):\n self.selected_data = data\n\n def handleNewSignals(self):\n self.check_data()\n\n def get_bow_domain(self):\n domain = self.data.domain\n return Domain(\n attributes=[a for a in domain.attributes\n if a.attributes.get('bow-feature', False)],\n class_vars=domain.class_vars,\n metas=domain.metas,\n source=domain)\n\n def check_data(self):\n self.Error.clear()\n if isinstance(self.data, Table) and \\\n isinstance(self.selected_data, Table):\n if len(self.selected_data) == 0:\n self.Error.empty_selection()\n self.clear()\n return\n\n # keep only BoW features\n bow_domain = self.get_bow_domain()\n if len(bow_domain.attributes) == 0:\n self.Error.no_bow_features()\n self.clear()\n return\n self.data = Corpus.from_table(bow_domain, self.data)\n self.selected_data_transformed = Corpus.from_table(\n bow_domain, self.selected_data)\n\n if np_sp_sum(self.selected_data_transformed.X) == 0:\n self.Error.no_words_overlap()\n self.clear()\n elif len(self.data) == len(self.selected_data):\n self.Error.all_selected()\n self.clear()\n else:\n self.set_input_info()\n self.apply()\n else:\n self.clear()\n\n def clear(self):\n self.sig_words.clear()\n self.info.set_input_summary(self.info.NoInput)\n self.set_displayed_info(0)\n\n def filter_enabled(self, b):\n self.chb_p.setEnabled(b)\n self.chb_fdr.setEnabled(b)\n self.spin_p.setEnabled(b)\n self.spin_fdr.setEnabled(b)\n\n def filter_and_display(self):\n self.spin_p.setEnabled(self.filter_by_p)\n self.spin_fdr.setEnabled(self.filter_by_fdr)\n self.sig_words.clear()\n\n if self.selected_data_transformed is None: # do nothing when no Data\n return\n\n if self.results.words:\n count = self.build_tree()\n else:\n count = 0\n\n for i in range(len(self.cols)):\n self.sig_words.resizeColumnToContents(i)\n self.set_displayed_info(count)\n\n def build_tree(self) -> int:\n count = 0\n for word, pval, fval in zip(\n self.results.words,\n self.results.p_values,\n self.results.fdr_values\n ):\n if ((not self.filter_by_p or pval <= self.filter_p_value) and\n (not self.filter_by_fdr or fval <= self.filter_fdr_value)):\n it = EATreeWidgetItem(word, pval, fval, self.sig_words)\n self.sig_words.addTopLevelItem(it)\n count += 1\n return count\n\n def set_input_info(self) -> None:\n cluster_words = len(self.selected_data_transformed.domain.attributes)\n selected_words = np.count_nonzero(np_sp_sum(\n self.selected_data_transformed.X, axis=0))\n\n self.info.set_input_summary(\n f\"{cluster_words}|{selected_words}\",\n f\"Total words: {cluster_words}\\n\"\n f\"Words in subset: {selected_words}\")\n\n def set_displayed_info(self, count: int) -> None:\n self.info_fil.setText(f\"Words displayed: {count}\")\n\n def apply(self):\n self.sig_words.clear()\n self.filter_enabled(False)\n self.start(\n Runner.run,\n self.selected_data_transformed,\n self.data,\n self.results\n )\n\n def on_done(self, result: Result) -> None:\n self.filter_and_display()\n self.filter_enabled(True)\n\n def on_exception(self, ex: Exception) -> None:\n self.filter_enabled(True)\n\n def tree_to_table(self):\n view = [self.cols]\n items = self.sig_words.topLevelItemCount()\n for i in range(items):\n line = []\n for j in range(3):\n line.append(self.sig_words.topLevelItem(i).text(j))\n view.append(line)\n return view\n\n def send_report(self):\n if self.results.words:\n self.report_table(\"Enriched words\", self.tree_to_table())\n\n\nfp = lambda score: \"%0.5f\" % score if score > 10e-3 else \"%0.1e\" % score\nfpt = lambda score: \"%0.9f\" % score if score > 10e-3 else \"%0.5e\" % score\n\n\nclass EATreeWidgetItem(QTreeWidgetItem):\n def __init__(self, word, p_value, f_value, parent):\n super().__init__(parent)\n self.data = [word, p_value, f_value]\n self.setText(0, word)\n self.setText(1, fp(p_value))\n self.setToolTip(1, fpt(p_value))\n self.setText(2, fp(f_value))\n self.setToolTip(2, fpt(f_value))\n\n def __lt__(self, other):\n col = self.treeWidget().sortColumn()\n return self.data[col] < other.data[col]\n\n\nif __name__ == '__main__':\n from orangewidget.utils.widgetpreview import WidgetPreview\n from orangecontrib.text.vectorization import BowVectorizer\n\n corpus = Corpus.from_file('book-excerpts')\n vect = BowVectorizer()\n corpus_vect = vect.transform(corpus)\n WidgetPreview(OWWordEnrichment).run(\n set_data_selected=corpus_vect[:10], set_data=corpus_vect)\n", "path": "orangecontrib/text/widgets/owwordenrichment.py"}], "after_files": [{"content": "from types import SimpleNamespace\nfrom typing import List, Optional, Any\n\nimport numpy as np\nfrom AnyQt.QtWidgets import QTreeWidget, QTreeView, QTreeWidgetItem\n\nfrom Orange.data import Table, Domain\nfrom Orange.widgets import gui\nfrom Orange.widgets.settings import Setting\nfrom Orange.widgets.utils.concurrent import ConcurrentWidgetMixin, TaskState\nfrom Orange.widgets.widget import OWWidget, Msg, Input\nfrom Orange.statistics.util import FDR\nfrom PyQt5.QtCore import QSize\nfrom orangecontrib.text import Corpus\nfrom orangecontrib.text.util import np_sp_sum\nfrom orangecontrib.text.stats import hypergeom_p_values\n\n\nclass Result(SimpleNamespace):\n words: Optional[List[str]] = None\n p_values: Optional[List[float]] = None\n fdr_values: Optional[List[float]] = None\n\n\nclass Runner:\n @staticmethod\n def run(\n selected_data_transformed: Table,\n data: Table,\n result: Result,\n state: TaskState\n ) -> None:\n state.set_status(\"Listing words\")\n result.words = [\n i.name for i in selected_data_transformed.domain.attributes]\n state.set_status(\"Computing p-values\")\n result.p_values = hypergeom_p_values(\n data.X, selected_data_transformed.X,\n callback=state.set_progress_value\n )\n state.set_status(\"Computing FDR values\")\n result.fdr_values = FDR(result.p_values)\n\n\nclass OWWordEnrichment(OWWidget, ConcurrentWidgetMixin):\n # Basic widget info\n name = \"Word Enrichment\"\n description = \"Word enrichment analysis for selected documents.\"\n icon = \"icons/SetEnrichment.svg\"\n priority = 600\n\n # Input/output\n class Inputs:\n selected_data = Input(\"Selected Data\", Table)\n data = Input(\"Data\", Table)\n\n want_main_area = True\n\n class Error(OWWidget.Error):\n no_bow_features = Msg('No bag-of-words features!')\n no_words_overlap = Msg('No words overlap!')\n empty_selection = Msg('Selected data is empty!')\n all_selected = Msg('All examples can not be selected!')\n\n # Settings\n filter_by_p: bool = Setting(False)\n filter_p_value: float = Setting(0.01)\n filter_by_fdr: bool = Setting(True)\n filter_fdr_value: float = Setting(0.2)\n\n def __init__(self):\n OWWidget.__init__(self)\n ConcurrentWidgetMixin.__init__(self)\n\n # Init data\n self.data = None\n self.selected_data = None\n # used for transforming the 'selected data' into the 'data' domain\n self.selected_data_transformed = None\n\n self.results = Result()\n\n # info box\n fbox = gui.widgetBox(self.controlArea, \"Info\")\n self.info_fil = gui.label(fbox, self, 'Words displayed: 0')\n\n # Filtering settings\n fbox = gui.widgetBox(self.controlArea, \"Filter\")\n hbox = gui.widgetBox(fbox, orientation=0)\n\n self.chb_p = gui.checkBox(hbox, self, \"filter_by_p\", \"p-value\",\n callback=self.filter_and_display,\n tooltip=\"Filter by word p-value\")\n self.spin_p = gui.doubleSpin(hbox, self, 'filter_p_value',\n 1e-4, 1, step=1e-4, labelWidth=15,\n callback=self.filter_and_display,\n tooltip=\"Max p-value for word\")\n self.spin_p.setEnabled(self.filter_by_p)\n\n hbox = gui.widgetBox(fbox, orientation=0)\n self.chb_fdr = gui.checkBox(hbox, self, \"filter_by_fdr\", \"FDR\",\n callback=self.filter_and_display,\n tooltip=\"Filter by word FDR\")\n self.spin_fdr = gui.doubleSpin(hbox, self, 'filter_fdr_value',\n 1e-4, 1, step=1e-4, labelWidth=15,\n callback=self.filter_and_display,\n tooltip=\"Max p-value for word\")\n self.spin_fdr.setEnabled(self.filter_by_fdr)\n gui.rubber(self.controlArea)\n\n # Word's list view\n self.cols = ['Word', 'p-value', 'FDR']\n self.sig_words = QTreeWidget()\n self.sig_words.setColumnCount(len(self.cols))\n self.sig_words.setHeaderLabels(self.cols)\n self.sig_words.setSortingEnabled(True)\n self.sig_words.setSelectionMode(QTreeView.NoSelection)\n self.sig_words.sortByColumn(1, 0) # 0 is ascending order\n for i in range(len(self.cols)):\n self.sig_words.resizeColumnToContents(i)\n self.mainArea.layout().addWidget(self.sig_words)\n\n def sizeHint(self):\n return QSize(450, 240)\n\n @Inputs.data\n def set_data(self, data=None):\n self.data = data\n # selected data transformed depends on data domain\n self.selected_data_transformed = None\n\n @Inputs.selected_data\n def set_data_selected(self, data=None):\n self.selected_data = data\n\n def handleNewSignals(self):\n self.check_data()\n\n def get_bow_domain(self):\n domain = self.data.domain\n return Domain(\n attributes=[a for a in domain.attributes\n if a.attributes.get('bow-feature', False)],\n class_vars=domain.class_vars,\n metas=domain.metas,\n source=domain)\n\n def check_data(self):\n self.Error.clear()\n if isinstance(self.data, Table) and \\\n isinstance(self.selected_data, Table):\n if len(self.selected_data) == 0:\n self.Error.empty_selection()\n self.clear()\n return\n\n # keep only BoW features\n bow_domain = self.get_bow_domain()\n if len(bow_domain.attributes) == 0:\n self.Error.no_bow_features()\n self.clear()\n return\n self.data = Corpus.from_table(bow_domain, self.data)\n self.selected_data_transformed = Corpus.from_table(\n bow_domain, self.selected_data)\n\n if np_sp_sum(self.selected_data_transformed.X) == 0:\n self.Error.no_words_overlap()\n self.clear()\n elif len(self.data) == len(self.selected_data):\n self.Error.all_selected()\n self.clear()\n else:\n self.set_input_info()\n self.apply()\n else:\n self.clear()\n\n def clear(self):\n self.sig_words.clear()\n self.info.set_input_summary(self.info.NoInput)\n self.set_displayed_info(0)\n\n def filter_enabled(self, b):\n self.chb_p.setEnabled(b)\n self.chb_fdr.setEnabled(b)\n self.spin_p.setEnabled(b)\n self.spin_fdr.setEnabled(b)\n\n def filter_and_display(self):\n self.spin_p.setEnabled(self.filter_by_p)\n self.spin_fdr.setEnabled(self.filter_by_fdr)\n self.sig_words.clear()\n\n if self.selected_data_transformed is None: # do nothing when no Data\n return\n\n if self.results.words:\n count = self.build_tree()\n else:\n count = 0\n\n for i in range(len(self.cols)):\n self.sig_words.resizeColumnToContents(i)\n self.set_displayed_info(count)\n\n def build_tree(self) -> int:\n count = 0\n for word, pval, fval in zip(\n self.results.words,\n self.results.p_values,\n self.results.fdr_values\n ):\n if ((not self.filter_by_p or pval <= self.filter_p_value) and\n (not self.filter_by_fdr or fval <= self.filter_fdr_value)):\n it = EATreeWidgetItem(word, pval, fval, self.sig_words)\n self.sig_words.addTopLevelItem(it)\n count += 1\n return count\n\n def set_input_info(self) -> None:\n cluster_words = len(self.selected_data_transformed.domain.attributes)\n selected_words = np.count_nonzero(np_sp_sum(\n self.selected_data_transformed.X, axis=0))\n\n self.info.set_input_summary(\n f\"{cluster_words}|{selected_words}\",\n f\"Total words: {cluster_words}\\n\"\n f\"Words in subset: {selected_words}\")\n\n def set_displayed_info(self, count: int) -> None:\n self.info_fil.setText(f\"Words displayed: {count}\")\n\n def apply(self):\n self.sig_words.clear()\n self.filter_enabled(False)\n self.start(\n Runner.run,\n self.selected_data_transformed,\n self.data,\n self.results\n )\n\n def on_done(self, result: Result) -> None:\n self.filter_and_display()\n self.filter_enabled(True)\n\n def on_exception(self, ex: Exception) -> None:\n self.filter_enabled(True)\n\n def tree_to_table(self):\n view = [self.cols]\n items = self.sig_words.topLevelItemCount()\n for i in range(items):\n line = []\n for j in range(3):\n line.append(self.sig_words.topLevelItem(i).text(j))\n view.append(line)\n return view\n\n def send_report(self):\n if self.results.words:\n self.report_table(\"Enriched words\", self.tree_to_table())\n\n\nfp = lambda score: \"%0.5f\" % score if score > 10e-3 else \"%0.1e\" % score\nfpt = lambda score: \"%0.9f\" % score if score > 10e-3 else \"%0.5e\" % score\n\n\nclass EATreeWidgetItem(QTreeWidgetItem):\n def __init__(self, word, p_value, f_value, parent):\n super().__init__(parent)\n self.data = [word, p_value, f_value]\n self.setText(0, word)\n self.setText(1, fp(p_value))\n self.setToolTip(1, fpt(p_value))\n self.setText(2, fp(f_value))\n self.setToolTip(2, fpt(f_value))\n\n def __lt__(self, other):\n col = self.treeWidget().sortColumn()\n return self.data[col] < other.data[col]\n\n\nif __name__ == '__main__':\n from orangewidget.utils.widgetpreview import WidgetPreview\n from orangecontrib.text.vectorization import BowVectorizer\n\n corpus = Corpus.from_file('book-excerpts')\n vect = BowVectorizer()\n corpus_vect = vect.transform(corpus)\n WidgetPreview(OWWordEnrichment).run(\n set_data_selected=corpus_vect[:10], set_data=corpus_vect)\n", "path": "orangecontrib/text/widgets/owwordenrichment.py"}]}
| 3,447 | 177 |
gh_patches_debug_29047
|
rasdani/github-patches
|
git_diff
|
enthought__chaco-93
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ZoomTool "zoom history" keys are not working
From the examples/data_labels.py docstring:
> Pressing "z" brings up the Zoom Box, and you can click-drag a rectangular region to
> zoom. If you use a sequence of zoom boxes, pressing alt-left-arrow and
> alt-right-arrow moves you forwards and backwards through the "zoom history".
but the alt-right-arrow and alt-left-arrow keys don't seem to have any effect.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/demo/data_labels.py`
Content:
```
1 #!/usr/bin/env python
2 """
3 Draws a line plot with several points labelled. Demonstrates how to annotate
4 plots.
5
6 Left-drag pans the plot.
7
8 Mousewheel up and down zooms the plot in and out.
9
10 Pressing "z" brings up the Zoom Box, and you can click-drag a rectangular
11 region to zoom. If you use a sequence of zoom boxes, pressing alt-left-arrow
12 and alt-right-arrow moves you forwards and backwards through the
13 "zoom history".
14
15 Right-drag is enabled on some of the labels.
16 """
17
18 # Major library imports
19 from numpy import linspace
20 from scipy.special import jn
21
22 # Enthought library imports
23 from enable.api import Component, ComponentEditor
24 from traits.api import Float, HasTraits, Instance, Int
25 from traitsui.api import Item, View
26
27 # Chaco imports
28 from chaco.api import create_line_plot, add_default_axes, add_default_grids, \
29 OverlayPlotContainer, DataLabel
30 from chaco.example_support import COLOR_PALETTE
31 from chaco.tools.api import PanTool, ZoomTool, DataLabelTool
32
33
34 class PlotExample(HasTraits):
35 plot = Instance(Component)
36 numpoints = Int(100)
37 low = Float(-5.0)
38 high = Float(15.0)
39
40 traits_view = View(Item('plot', editor=ComponentEditor(),
41 show_label=False),
42 width=800, height=700, resizable=True,
43 title="Data label example")
44
45 def _plot_default(self):
46
47 container = OverlayPlotContainer(padding=50, fill_padding=True,
48 bgcolor="lightgray",
49 use_backbuffer=True)
50
51 # Create the initial X-series of data
52 numpoints = self.numpoints
53 low = self.low
54 high = self.high
55 x = linspace(low, high, numpoints + 1)
56 y = jn(0, x)
57 plot = create_line_plot((x, y), color=tuple(COLOR_PALETTE[0]),
58 width=2.0)
59 plot.index.sort_order = "ascending"
60 plot.bgcolor = "white"
61 plot.border_visible = True
62 add_default_grids(plot)
63 add_default_axes(plot)
64
65 # Add some tools
66 plot.tools.append(PanTool(plot))
67 zoom = ZoomTool(plot, tool_mode="box", always_on=False)
68 plot.overlays.append(zoom)
69
70 # Add a dynamic label. This can be dragged and moved around using the
71 # right mouse button. Note the use of padding to offset the label
72 # from its data point.
73 label = DataLabel(component=plot, data_point=(x[40], y[40]),
74 label_position="top left", padding=40,
75 bgcolor="lightgray",
76 border_visible=False)
77 plot.overlays.append(label)
78 tool = DataLabelTool(label, drag_button="right", auto_arrow_root=True)
79 label.tools.append(tool)
80
81 # Add some static labels.
82 label2 = DataLabel(component=plot, data_point=(x[20], y[20]),
83 label_position="bottom right",
84 border_visible=False,
85 bgcolor="transparent",
86 marker_color="blue",
87 marker_line_color="transparent",
88 marker="diamond",
89 font='modern 14',
90 arrow_visible=False)
91 plot.overlays.append(label2)
92
93 label3 = DataLabel(component=plot, data_point=(x[80], y[80]),
94 label_position="top", padding_bottom=20,
95 marker_color="transparent",
96 marker_size=8,
97 marker="circle",
98 arrow_visible=False)
99 plot.overlays.append(label3)
100
101 # This label uses label_style='bubble'.
102 label4 = DataLabel(component=plot, data_point=(x[60], y[60]),
103 border_padding=10,
104 marker_color="red",
105 marker_size=3,
106 label_position=(20, 50),
107 label_style='bubble',
108 label_text="Something interesting",
109 label_format="at x=%(x).2f, y=%(y).2f",
110 font='modern 18',
111 bgcolor=(1, 1, 0.75, 1),
112 )
113 plot.overlays.append(label4)
114 tool4 = DataLabelTool(label4, drag_button="right",
115 auto_arrow_root=True)
116 label4.tools.append(tool4)
117
118 # Another 'bubble' label. This one sets arrow_min_length=20, so
119 # the arrow is not drawn when the label is close to the data point.
120 label5 = DataLabel(component=plot, data_point=(x[65], y[65]),
121 border_padding=10,
122 marker_color="green",
123 marker_size=4,
124 show_label_coords=False,
125 label_style='bubble',
126 label_position=(25, 5),
127 label_text="Label with\narrow_min_length=20",
128 border_visible=False,
129 arrow_min_length=20,
130 font='modern 14',
131 bgcolor=(0.75, 0.75, 0.75, 1),
132 )
133 plot.overlays.append(label5)
134 tool5 = DataLabelTool(label5, drag_button="right",
135 auto_arrow_root=True)
136 label5.tools.append(tool5)
137
138 container.add(plot)
139
140 return container
141
142 demo = PlotExample()
143
144 if __name__ == "__main__":
145 demo.configure_traits()
146
```
Path: `examples/demo/edit_line.py`
Content:
```
1 #!/usr/bin/env python
2 """
3 Allows editing of a line plot.
4
5 Left-dragging a point will move its position.
6
7 Right-drag pans the plot.
8
9 Mousewheel up and down zooms the plot in and out.
10
11 Pressing "z" brings up the Zoom Box, and you can click-drag a rectangular region to
12 zoom. If you use a sequence of zoom boxes, pressing alt-left-arrow and
13 alt-right-arrow moves you forwards and backwards through the "zoom history".
14 """
15
16 # Major library imports
17 from numpy import linspace
18 from scipy.special import jn
19
20 from chaco.example_support import COLOR_PALETTE
21
22 # Enthought library imports
23 from enable.tools.api import DragTool
24 from enable.api import Component, ComponentEditor
25 from traits.api import HasTraits, Instance, Int, Tuple
26 from traitsui.api import UItem, View
27
28 # Chaco imports
29 from chaco.api import add_default_axes, add_default_grids, \
30 OverlayPlotContainer, PlotLabel, ScatterPlot, create_line_plot
31 from chaco.tools.api import PanTool, ZoomTool
32
33
34
35 class PointDraggingTool(DragTool):
36
37 component = Instance(Component)
38
39 # The pixel distance from a point that the cursor is still considered
40 # to be 'on' the point
41 threshold = Int(5)
42
43 # The index of the point being dragged
44 _drag_index = Int(-1)
45
46 # The original dataspace values of the index and value datasources
47 # corresponding to _drag_index
48 _orig_value = Tuple
49
50 def is_draggable(self, x, y):
51 # Check to see if (x,y) are over one of the points in self.component
52 if self._lookup_point(x, y) is not None:
53 return True
54 else:
55 return False
56
57 def normal_mouse_move(self, event):
58 plot = self.component
59
60 ndx = plot.map_index((event.x, event.y), self.threshold)
61 if ndx is None:
62 if plot.index.metadata.has_key('selections'):
63 del plot.index.metadata['selections']
64 else:
65 plot.index.metadata['selections'] = [ndx]
66
67 plot.invalidate_draw()
68 plot.request_redraw()
69
70
71 def drag_start(self, event):
72 plot = self.component
73 ndx = plot.map_index((event.x, event.y), self.threshold)
74 if ndx is None:
75 return
76 self._drag_index = ndx
77 self._orig_value = (plot.index.get_data()[ndx], plot.value.get_data()[ndx])
78
79 def dragging(self, event):
80 plot = self.component
81
82 data_x, data_y = plot.map_data((event.x, event.y))
83
84 plot.index._data[self._drag_index] = data_x
85 plot.value._data[self._drag_index] = data_y
86 plot.index.data_changed = True
87 plot.value.data_changed = True
88 plot.request_redraw()
89
90 def drag_cancel(self, event):
91 plot = self.component
92 plot.index._data[self._drag_index] = self._orig_value[0]
93 plot.value._data[self._drag_index] = self._orig_value[1]
94 plot.index.data_changed = True
95 plot.value.data_changed = True
96 plot.request_redraw()
97
98 def drag_end(self, event):
99 plot = self.component
100 if plot.index.metadata.has_key('selections'):
101 del plot.index.metadata['selections']
102 plot.invalidate_draw()
103 plot.request_redraw()
104
105 def _lookup_point(self, x, y):
106 """ Finds the point closest to a screen point if it is within self.threshold
107
108 Parameters
109 ==========
110 x : float
111 screen x-coordinate
112 y : float
113 screen y-coordinate
114
115 Returns
116 =======
117 (screen_x, screen_y, distance) of datapoint nearest to the input *(x,y)*.
118 If no data points are within *self.threshold* of *(x,y)*, returns None.
119 """
120
121 if hasattr(self.component, 'get_closest_point'):
122 # This is on BaseXYPlots
123 return self.component.get_closest_point((x,y), threshold=self.threshold)
124
125 return None
126
127
128 #===============================================================================
129 # # Create the Chaco plot.
130 #===============================================================================
131 def _create_plot_component():
132
133 container = OverlayPlotContainer(padding = 50, fill_padding = True,
134 bgcolor = "lightgray", use_backbuffer=True)
135
136 # Create the initial X-series of data
137 numpoints = 30
138 low = -5
139 high = 15.0
140 x = linspace(low, high, numpoints)
141 y = jn(0, x)
142
143 lineplot = create_line_plot((x,y), color=tuple(COLOR_PALETTE[0]), width=2.0)
144 lineplot.selected_color = "none"
145 scatter = ScatterPlot(index = lineplot.index,
146 value = lineplot.value,
147 index_mapper = lineplot.index_mapper,
148 value_mapper = lineplot.value_mapper,
149 color = tuple(COLOR_PALETTE[0]),
150 marker_size = 5)
151 scatter.index.sort_order = "ascending"
152
153 scatter.bgcolor = "white"
154 scatter.border_visible = True
155
156 add_default_grids(scatter)
157 add_default_axes(scatter)
158
159 scatter.tools.append(PanTool(scatter, drag_button="right"))
160
161 # The ZoomTool tool is stateful and allows drawing a zoom
162 # box to select a zoom region.
163 zoom = ZoomTool(scatter, tool_mode="box", always_on=False, drag_button=None)
164 scatter.overlays.append(zoom)
165
166 scatter.tools.append(PointDraggingTool(scatter))
167
168 container.add(lineplot)
169 container.add(scatter)
170
171 # Add the title at the top
172 container.overlays.append(PlotLabel("Line Editor",
173 component=container,
174 font = "swiss 16",
175 overlay_position="top"))
176
177 return container
178
179
180 #===============================================================================
181 # Attributes to use for the plot view.
182 size=(800,700)
183 title="Simple line plot"
184
185 #===============================================================================
186 # # Demo class that is used by the demo.py application.
187 #===============================================================================
188 class Demo(HasTraits):
189 plot = Instance(Component)
190
191 traits_view = View(UItem('plot', editor=ComponentEditor()),
192 width=size[0], height=size[1], resizable=True,
193 title=title
194 )
195
196 def _plot_default(self):
197 return _create_plot_component()
198
199 demo = Demo()
200
201 if __name__ == "__main__":
202 demo.configure_traits()
203
204 #--EOF---
205
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/demo/data_labels.py b/examples/demo/data_labels.py
--- a/examples/demo/data_labels.py
+++ b/examples/demo/data_labels.py
@@ -8,9 +8,9 @@
Mousewheel up and down zooms the plot in and out.
Pressing "z" brings up the Zoom Box, and you can click-drag a rectangular
-region to zoom. If you use a sequence of zoom boxes, pressing alt-left-arrow
-and alt-right-arrow moves you forwards and backwards through the
-"zoom history".
+region to zoom. If you use a sequence of zoom boxes, pressing control-y
+and control-z (Meta-y and Meta-z on Mac) moves you forwards and backwards
+through the "zoom history".
Right-drag is enabled on some of the labels.
"""
diff --git a/examples/demo/edit_line.py b/examples/demo/edit_line.py
--- a/examples/demo/edit_line.py
+++ b/examples/demo/edit_line.py
@@ -8,9 +8,10 @@
Mousewheel up and down zooms the plot in and out.
-Pressing "z" brings up the Zoom Box, and you can click-drag a rectangular region to
-zoom. If you use a sequence of zoom boxes, pressing alt-left-arrow and
-alt-right-arrow moves you forwards and backwards through the "zoom history".
+Pressing "z" brings up the Zoom Box, and you can click-drag a rectangular
+region to zoom. If you use a sequence of zoom boxes, pressing control-y and
+control-z (use Meta-y and Meta-z on Mac) moves you forwards and backwards
+through the "zoom history".
"""
# Major library imports
@@ -160,7 +161,7 @@
# The ZoomTool tool is stateful and allows drawing a zoom
# box to select a zoom region.
- zoom = ZoomTool(scatter, tool_mode="box", always_on=False, drag_button=None)
+ zoom = ZoomTool(scatter, tool_mode="box", always_on=False)
scatter.overlays.append(zoom)
scatter.tools.append(PointDraggingTool(scatter))
|
{"golden_diff": "diff --git a/examples/demo/data_labels.py b/examples/demo/data_labels.py\n--- a/examples/demo/data_labels.py\n+++ b/examples/demo/data_labels.py\n@@ -8,9 +8,9 @@\n Mousewheel up and down zooms the plot in and out.\n \n Pressing \"z\" brings up the Zoom Box, and you can click-drag a rectangular\n-region to zoom. If you use a sequence of zoom boxes, pressing alt-left-arrow\n-and alt-right-arrow moves you forwards and backwards through the\n-\"zoom history\".\n+region to zoom. If you use a sequence of zoom boxes, pressing control-y\n+and control-z (Meta-y and Meta-z on Mac) moves you forwards and backwards\n+through the \"zoom history\".\n \n Right-drag is enabled on some of the labels.\n \"\"\"\ndiff --git a/examples/demo/edit_line.py b/examples/demo/edit_line.py\n--- a/examples/demo/edit_line.py\n+++ b/examples/demo/edit_line.py\n@@ -8,9 +8,10 @@\n \n Mousewheel up and down zooms the plot in and out.\n \n-Pressing \"z\" brings up the Zoom Box, and you can click-drag a rectangular region to\n-zoom. If you use a sequence of zoom boxes, pressing alt-left-arrow and\n-alt-right-arrow moves you forwards and backwards through the \"zoom history\".\n+Pressing \"z\" brings up the Zoom Box, and you can click-drag a rectangular\n+region to zoom. If you use a sequence of zoom boxes, pressing control-y and\n+control-z (use Meta-y and Meta-z on Mac) moves you forwards and backwards\n+through the \"zoom history\".\n \"\"\"\n \n # Major library imports\n@@ -160,7 +161,7 @@\n \n # The ZoomTool tool is stateful and allows drawing a zoom\n # box to select a zoom region.\n- zoom = ZoomTool(scatter, tool_mode=\"box\", always_on=False, drag_button=None)\n+ zoom = ZoomTool(scatter, tool_mode=\"box\", always_on=False)\n scatter.overlays.append(zoom)\n \n scatter.tools.append(PointDraggingTool(scatter))\n", "issue": "ZoomTool \"zoom history\" keys are not working\nFrom the examples/data_labels.py docstring:\n\n> Pressing \"z\" brings up the Zoom Box, and you can click-drag a rectangular region to\n> zoom. If you use a sequence of zoom boxes, pressing alt-left-arrow and\n> alt-right-arrow moves you forwards and backwards through the \"zoom history\".\n\nbut the alt-right-arrow and alt-left-arrow keys don't seem to have any effect.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nDraws a line plot with several points labelled. Demonstrates how to annotate\nplots.\n\nLeft-drag pans the plot.\n\nMousewheel up and down zooms the plot in and out.\n\nPressing \"z\" brings up the Zoom Box, and you can click-drag a rectangular\nregion to zoom. If you use a sequence of zoom boxes, pressing alt-left-arrow\nand alt-right-arrow moves you forwards and backwards through the\n\"zoom history\".\n\nRight-drag is enabled on some of the labels.\n\"\"\"\n\n# Major library imports\nfrom numpy import linspace\nfrom scipy.special import jn\n\n# Enthought library imports\nfrom enable.api import Component, ComponentEditor\nfrom traits.api import Float, HasTraits, Instance, Int\nfrom traitsui.api import Item, View\n\n# Chaco imports\nfrom chaco.api import create_line_plot, add_default_axes, add_default_grids, \\\n OverlayPlotContainer, DataLabel\nfrom chaco.example_support import COLOR_PALETTE\nfrom chaco.tools.api import PanTool, ZoomTool, DataLabelTool\n\n\nclass PlotExample(HasTraits):\n plot = Instance(Component)\n numpoints = Int(100)\n low = Float(-5.0)\n high = Float(15.0)\n\n traits_view = View(Item('plot', editor=ComponentEditor(),\n show_label=False),\n width=800, height=700, resizable=True,\n title=\"Data label example\")\n\n def _plot_default(self):\n\n container = OverlayPlotContainer(padding=50, fill_padding=True,\n bgcolor=\"lightgray\",\n use_backbuffer=True)\n\n # Create the initial X-series of data\n numpoints = self.numpoints\n low = self.low\n high = self.high\n x = linspace(low, high, numpoints + 1)\n y = jn(0, x)\n plot = create_line_plot((x, y), color=tuple(COLOR_PALETTE[0]),\n width=2.0)\n plot.index.sort_order = \"ascending\"\n plot.bgcolor = \"white\"\n plot.border_visible = True\n add_default_grids(plot)\n add_default_axes(plot)\n\n # Add some tools\n plot.tools.append(PanTool(plot))\n zoom = ZoomTool(plot, tool_mode=\"box\", always_on=False)\n plot.overlays.append(zoom)\n\n # Add a dynamic label. This can be dragged and moved around using the\n # right mouse button. Note the use of padding to offset the label\n # from its data point.\n label = DataLabel(component=plot, data_point=(x[40], y[40]),\n label_position=\"top left\", padding=40,\n bgcolor=\"lightgray\",\n border_visible=False)\n plot.overlays.append(label)\n tool = DataLabelTool(label, drag_button=\"right\", auto_arrow_root=True)\n label.tools.append(tool)\n\n # Add some static labels.\n label2 = DataLabel(component=plot, data_point=(x[20], y[20]),\n label_position=\"bottom right\",\n border_visible=False,\n bgcolor=\"transparent\",\n marker_color=\"blue\",\n marker_line_color=\"transparent\",\n marker=\"diamond\",\n font='modern 14',\n arrow_visible=False)\n plot.overlays.append(label2)\n\n label3 = DataLabel(component=plot, data_point=(x[80], y[80]),\n label_position=\"top\", padding_bottom=20,\n marker_color=\"transparent\",\n marker_size=8,\n marker=\"circle\",\n arrow_visible=False)\n plot.overlays.append(label3)\n\n # This label uses label_style='bubble'.\n label4 = DataLabel(component=plot, data_point=(x[60], y[60]),\n border_padding=10,\n marker_color=\"red\",\n marker_size=3,\n label_position=(20, 50),\n label_style='bubble',\n label_text=\"Something interesting\",\n label_format=\"at x=%(x).2f, y=%(y).2f\",\n font='modern 18',\n bgcolor=(1, 1, 0.75, 1),\n )\n plot.overlays.append(label4)\n tool4 = DataLabelTool(label4, drag_button=\"right\",\n auto_arrow_root=True)\n label4.tools.append(tool4)\n\n # Another 'bubble' label. This one sets arrow_min_length=20, so\n # the arrow is not drawn when the label is close to the data point.\n label5 = DataLabel(component=plot, data_point=(x[65], y[65]),\n border_padding=10,\n marker_color=\"green\",\n marker_size=4,\n show_label_coords=False,\n label_style='bubble',\n label_position=(25, 5),\n label_text=\"Label with\\narrow_min_length=20\",\n border_visible=False,\n arrow_min_length=20,\n font='modern 14',\n bgcolor=(0.75, 0.75, 0.75, 1),\n )\n plot.overlays.append(label5)\n tool5 = DataLabelTool(label5, drag_button=\"right\",\n auto_arrow_root=True)\n label5.tools.append(tool5)\n\n container.add(plot)\n\n return container\n\ndemo = PlotExample()\n\nif __name__ == \"__main__\":\n demo.configure_traits()\n", "path": "examples/demo/data_labels.py"}, {"content": "#!/usr/bin/env python\n\"\"\"\nAllows editing of a line plot.\n\nLeft-dragging a point will move its position.\n\nRight-drag pans the plot.\n\nMousewheel up and down zooms the plot in and out.\n\nPressing \"z\" brings up the Zoom Box, and you can click-drag a rectangular region to\nzoom. If you use a sequence of zoom boxes, pressing alt-left-arrow and\nalt-right-arrow moves you forwards and backwards through the \"zoom history\".\n\"\"\"\n\n# Major library imports\nfrom numpy import linspace\nfrom scipy.special import jn\n\nfrom chaco.example_support import COLOR_PALETTE\n\n# Enthought library imports\nfrom enable.tools.api import DragTool\nfrom enable.api import Component, ComponentEditor\nfrom traits.api import HasTraits, Instance, Int, Tuple\nfrom traitsui.api import UItem, View\n\n# Chaco imports\nfrom chaco.api import add_default_axes, add_default_grids, \\\n OverlayPlotContainer, PlotLabel, ScatterPlot, create_line_plot\nfrom chaco.tools.api import PanTool, ZoomTool\n\n\n\nclass PointDraggingTool(DragTool):\n\n component = Instance(Component)\n\n # The pixel distance from a point that the cursor is still considered\n # to be 'on' the point\n threshold = Int(5)\n\n # The index of the point being dragged\n _drag_index = Int(-1)\n\n # The original dataspace values of the index and value datasources\n # corresponding to _drag_index\n _orig_value = Tuple\n\n def is_draggable(self, x, y):\n # Check to see if (x,y) are over one of the points in self.component\n if self._lookup_point(x, y) is not None:\n return True\n else:\n return False\n\n def normal_mouse_move(self, event):\n plot = self.component\n\n ndx = plot.map_index((event.x, event.y), self.threshold)\n if ndx is None:\n if plot.index.metadata.has_key('selections'):\n del plot.index.metadata['selections']\n else:\n plot.index.metadata['selections'] = [ndx]\n\n plot.invalidate_draw()\n plot.request_redraw()\n\n\n def drag_start(self, event):\n plot = self.component\n ndx = plot.map_index((event.x, event.y), self.threshold)\n if ndx is None:\n return\n self._drag_index = ndx\n self._orig_value = (plot.index.get_data()[ndx], plot.value.get_data()[ndx])\n\n def dragging(self, event):\n plot = self.component\n\n data_x, data_y = plot.map_data((event.x, event.y))\n\n plot.index._data[self._drag_index] = data_x\n plot.value._data[self._drag_index] = data_y\n plot.index.data_changed = True\n plot.value.data_changed = True\n plot.request_redraw()\n\n def drag_cancel(self, event):\n plot = self.component\n plot.index._data[self._drag_index] = self._orig_value[0]\n plot.value._data[self._drag_index] = self._orig_value[1]\n plot.index.data_changed = True\n plot.value.data_changed = True\n plot.request_redraw()\n\n def drag_end(self, event):\n plot = self.component\n if plot.index.metadata.has_key('selections'):\n del plot.index.metadata['selections']\n plot.invalidate_draw()\n plot.request_redraw()\n\n def _lookup_point(self, x, y):\n \"\"\" Finds the point closest to a screen point if it is within self.threshold\n\n Parameters\n ==========\n x : float\n screen x-coordinate\n y : float\n screen y-coordinate\n\n Returns\n =======\n (screen_x, screen_y, distance) of datapoint nearest to the input *(x,y)*.\n If no data points are within *self.threshold* of *(x,y)*, returns None.\n \"\"\"\n\n if hasattr(self.component, 'get_closest_point'):\n # This is on BaseXYPlots\n return self.component.get_closest_point((x,y), threshold=self.threshold)\n\n return None\n\n\n#===============================================================================\n# # Create the Chaco plot.\n#===============================================================================\ndef _create_plot_component():\n\n container = OverlayPlotContainer(padding = 50, fill_padding = True,\n bgcolor = \"lightgray\", use_backbuffer=True)\n\n # Create the initial X-series of data\n numpoints = 30\n low = -5\n high = 15.0\n x = linspace(low, high, numpoints)\n y = jn(0, x)\n\n lineplot = create_line_plot((x,y), color=tuple(COLOR_PALETTE[0]), width=2.0)\n lineplot.selected_color = \"none\"\n scatter = ScatterPlot(index = lineplot.index,\n value = lineplot.value,\n index_mapper = lineplot.index_mapper,\n value_mapper = lineplot.value_mapper,\n color = tuple(COLOR_PALETTE[0]),\n marker_size = 5)\n scatter.index.sort_order = \"ascending\"\n\n scatter.bgcolor = \"white\"\n scatter.border_visible = True\n\n add_default_grids(scatter)\n add_default_axes(scatter)\n\n scatter.tools.append(PanTool(scatter, drag_button=\"right\"))\n\n # The ZoomTool tool is stateful and allows drawing a zoom\n # box to select a zoom region.\n zoom = ZoomTool(scatter, tool_mode=\"box\", always_on=False, drag_button=None)\n scatter.overlays.append(zoom)\n\n scatter.tools.append(PointDraggingTool(scatter))\n\n container.add(lineplot)\n container.add(scatter)\n\n # Add the title at the top\n container.overlays.append(PlotLabel(\"Line Editor\",\n component=container,\n font = \"swiss 16\",\n overlay_position=\"top\"))\n\n return container\n\n\n#===============================================================================\n# Attributes to use for the plot view.\nsize=(800,700)\ntitle=\"Simple line plot\"\n\n#===============================================================================\n# # Demo class that is used by the demo.py application.\n#===============================================================================\nclass Demo(HasTraits):\n plot = Instance(Component)\n\n traits_view = View(UItem('plot', editor=ComponentEditor()),\n width=size[0], height=size[1], resizable=True,\n title=title\n )\n\n def _plot_default(self):\n return _create_plot_component()\n\ndemo = Demo()\n\nif __name__ == \"__main__\":\n demo.configure_traits()\n\n#--EOF---\n", "path": "examples/demo/edit_line.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nDraws a line plot with several points labelled. Demonstrates how to annotate\nplots.\n\nLeft-drag pans the plot.\n\nMousewheel up and down zooms the plot in and out.\n\nPressing \"z\" brings up the Zoom Box, and you can click-drag a rectangular\nregion to zoom. If you use a sequence of zoom boxes, pressing control-y\nand control-z (Meta-y and Meta-z on Mac) moves you forwards and backwards\nthrough the \"zoom history\".\n\nRight-drag is enabled on some of the labels.\n\"\"\"\n\n# Major library imports\nfrom numpy import linspace\nfrom scipy.special import jn\n\n# Enthought library imports\nfrom enable.api import Component, ComponentEditor\nfrom traits.api import Float, HasTraits, Instance, Int\nfrom traitsui.api import Item, View\n\n# Chaco imports\nfrom chaco.api import create_line_plot, add_default_axes, add_default_grids, \\\n OverlayPlotContainer, DataLabel\nfrom chaco.example_support import COLOR_PALETTE\nfrom chaco.tools.api import PanTool, ZoomTool, DataLabelTool\n\n\nclass PlotExample(HasTraits):\n plot = Instance(Component)\n numpoints = Int(100)\n low = Float(-5.0)\n high = Float(15.0)\n\n traits_view = View(Item('plot', editor=ComponentEditor(),\n show_label=False),\n width=800, height=700, resizable=True,\n title=\"Data label example\")\n\n def _plot_default(self):\n\n container = OverlayPlotContainer(padding=50, fill_padding=True,\n bgcolor=\"lightgray\",\n use_backbuffer=True)\n\n # Create the initial X-series of data\n numpoints = self.numpoints\n low = self.low\n high = self.high\n x = linspace(low, high, numpoints + 1)\n y = jn(0, x)\n plot = create_line_plot((x, y), color=tuple(COLOR_PALETTE[0]),\n width=2.0)\n plot.index.sort_order = \"ascending\"\n plot.bgcolor = \"white\"\n plot.border_visible = True\n add_default_grids(plot)\n add_default_axes(plot)\n\n # Add some tools\n plot.tools.append(PanTool(plot))\n zoom = ZoomTool(plot, tool_mode=\"box\", always_on=False)\n plot.overlays.append(zoom)\n\n # Add a dynamic label. This can be dragged and moved around using the\n # right mouse button. Note the use of padding to offset the label\n # from its data point.\n label = DataLabel(component=plot, data_point=(x[40], y[40]),\n label_position=\"top left\", padding=40,\n bgcolor=\"lightgray\",\n border_visible=False)\n plot.overlays.append(label)\n tool = DataLabelTool(label, drag_button=\"right\", auto_arrow_root=True)\n label.tools.append(tool)\n\n # Add some static labels.\n label2 = DataLabel(component=plot, data_point=(x[20], y[20]),\n label_position=\"bottom right\",\n border_visible=False,\n bgcolor=\"transparent\",\n marker_color=\"blue\",\n marker_line_color=\"transparent\",\n marker=\"diamond\",\n font='modern 14',\n arrow_visible=False)\n plot.overlays.append(label2)\n\n label3 = DataLabel(component=plot, data_point=(x[80], y[80]),\n label_position=\"top\", padding_bottom=20,\n marker_color=\"transparent\",\n marker_size=8,\n marker=\"circle\",\n arrow_visible=False)\n plot.overlays.append(label3)\n\n # This label uses label_style='bubble'.\n label4 = DataLabel(component=plot, data_point=(x[60], y[60]),\n border_padding=10,\n marker_color=\"red\",\n marker_size=3,\n label_position=(20, 50),\n label_style='bubble',\n label_text=\"Something interesting\",\n label_format=\"at x=%(x).2f, y=%(y).2f\",\n font='modern 18',\n bgcolor=(1, 1, 0.75, 1),\n )\n plot.overlays.append(label4)\n tool4 = DataLabelTool(label4, drag_button=\"right\",\n auto_arrow_root=True)\n label4.tools.append(tool4)\n\n # Another 'bubble' label. This one sets arrow_min_length=20, so\n # the arrow is not drawn when the label is close to the data point.\n label5 = DataLabel(component=plot, data_point=(x[65], y[65]),\n border_padding=10,\n marker_color=\"green\",\n marker_size=4,\n show_label_coords=False,\n label_style='bubble',\n label_position=(25, 5),\n label_text=\"Label with\\narrow_min_length=20\",\n border_visible=False,\n arrow_min_length=20,\n font='modern 14',\n bgcolor=(0.75, 0.75, 0.75, 1),\n )\n plot.overlays.append(label5)\n tool5 = DataLabelTool(label5, drag_button=\"right\",\n auto_arrow_root=True)\n label5.tools.append(tool5)\n\n container.add(plot)\n\n return container\n\ndemo = PlotExample()\n\nif __name__ == \"__main__\":\n demo.configure_traits()\n", "path": "examples/demo/data_labels.py"}, {"content": "#!/usr/bin/env python\n\"\"\"\nAllows editing of a line plot.\n\nLeft-dragging a point will move its position.\n\nRight-drag pans the plot.\n\nMousewheel up and down zooms the plot in and out.\n\nPressing \"z\" brings up the Zoom Box, and you can click-drag a rectangular\nregion to zoom. If you use a sequence of zoom boxes, pressing control-y and\ncontrol-z (use Meta-y and Meta-z on Mac) moves you forwards and backwards\nthrough the \"zoom history\".\n\"\"\"\n\n# Major library imports\nfrom numpy import linspace\nfrom scipy.special import jn\n\nfrom chaco.example_support import COLOR_PALETTE\n\n# Enthought library imports\nfrom enable.tools.api import DragTool\nfrom enable.api import Component, ComponentEditor\nfrom traits.api import HasTraits, Instance, Int, Tuple\nfrom traitsui.api import UItem, View\n\n# Chaco imports\nfrom chaco.api import add_default_axes, add_default_grids, \\\n OverlayPlotContainer, PlotLabel, ScatterPlot, create_line_plot\nfrom chaco.tools.api import PanTool, ZoomTool\n\n\n\nclass PointDraggingTool(DragTool):\n\n component = Instance(Component)\n\n # The pixel distance from a point that the cursor is still considered\n # to be 'on' the point\n threshold = Int(5)\n\n # The index of the point being dragged\n _drag_index = Int(-1)\n\n # The original dataspace values of the index and value datasources\n # corresponding to _drag_index\n _orig_value = Tuple\n\n def is_draggable(self, x, y):\n # Check to see if (x,y) are over one of the points in self.component\n if self._lookup_point(x, y) is not None:\n return True\n else:\n return False\n\n def normal_mouse_move(self, event):\n plot = self.component\n\n ndx = plot.map_index((event.x, event.y), self.threshold)\n if ndx is None:\n if plot.index.metadata.has_key('selections'):\n del plot.index.metadata['selections']\n else:\n plot.index.metadata['selections'] = [ndx]\n\n plot.invalidate_draw()\n plot.request_redraw()\n\n\n def drag_start(self, event):\n plot = self.component\n ndx = plot.map_index((event.x, event.y), self.threshold)\n if ndx is None:\n return\n self._drag_index = ndx\n self._orig_value = (plot.index.get_data()[ndx], plot.value.get_data()[ndx])\n\n def dragging(self, event):\n plot = self.component\n\n data_x, data_y = plot.map_data((event.x, event.y))\n\n plot.index._data[self._drag_index] = data_x\n plot.value._data[self._drag_index] = data_y\n plot.index.data_changed = True\n plot.value.data_changed = True\n plot.request_redraw()\n\n def drag_cancel(self, event):\n plot = self.component\n plot.index._data[self._drag_index] = self._orig_value[0]\n plot.value._data[self._drag_index] = self._orig_value[1]\n plot.index.data_changed = True\n plot.value.data_changed = True\n plot.request_redraw()\n\n def drag_end(self, event):\n plot = self.component\n if plot.index.metadata.has_key('selections'):\n del plot.index.metadata['selections']\n plot.invalidate_draw()\n plot.request_redraw()\n\n def _lookup_point(self, x, y):\n \"\"\" Finds the point closest to a screen point if it is within self.threshold\n\n Parameters\n ==========\n x : float\n screen x-coordinate\n y : float\n screen y-coordinate\n\n Returns\n =======\n (screen_x, screen_y, distance) of datapoint nearest to the input *(x,y)*.\n If no data points are within *self.threshold* of *(x,y)*, returns None.\n \"\"\"\n\n if hasattr(self.component, 'get_closest_point'):\n # This is on BaseXYPlots\n return self.component.get_closest_point((x,y), threshold=self.threshold)\n\n return None\n\n\n#===============================================================================\n# # Create the Chaco plot.\n#===============================================================================\ndef _create_plot_component():\n\n container = OverlayPlotContainer(padding = 50, fill_padding = True,\n bgcolor = \"lightgray\", use_backbuffer=True)\n\n # Create the initial X-series of data\n numpoints = 30\n low = -5\n high = 15.0\n x = linspace(low, high, numpoints)\n y = jn(0, x)\n\n lineplot = create_line_plot((x,y), color=tuple(COLOR_PALETTE[0]), width=2.0)\n lineplot.selected_color = \"none\"\n scatter = ScatterPlot(index = lineplot.index,\n value = lineplot.value,\n index_mapper = lineplot.index_mapper,\n value_mapper = lineplot.value_mapper,\n color = tuple(COLOR_PALETTE[0]),\n marker_size = 5)\n scatter.index.sort_order = \"ascending\"\n\n scatter.bgcolor = \"white\"\n scatter.border_visible = True\n\n add_default_grids(scatter)\n add_default_axes(scatter)\n\n scatter.tools.append(PanTool(scatter, drag_button=\"right\"))\n\n # The ZoomTool tool is stateful and allows drawing a zoom\n # box to select a zoom region.\n zoom = ZoomTool(scatter, tool_mode=\"box\", always_on=False)\n scatter.overlays.append(zoom)\n\n scatter.tools.append(PointDraggingTool(scatter))\n\n container.add(lineplot)\n container.add(scatter)\n\n # Add the title at the top\n container.overlays.append(PlotLabel(\"Line Editor\",\n component=container,\n font = \"swiss 16\",\n overlay_position=\"top\"))\n\n return container\n\n\n#===============================================================================\n# Attributes to use for the plot view.\nsize=(800,700)\ntitle=\"Simple line plot\"\n\n#===============================================================================\n# # Demo class that is used by the demo.py application.\n#===============================================================================\nclass Demo(HasTraits):\n plot = Instance(Component)\n\n traits_view = View(UItem('plot', editor=ComponentEditor()),\n width=size[0], height=size[1], resizable=True,\n title=title\n )\n\n def _plot_default(self):\n return _create_plot_component()\n\ndemo = Demo()\n\nif __name__ == \"__main__\":\n demo.configure_traits()\n\n#--EOF---\n", "path": "examples/demo/edit_line.py"}]}
| 3,811 | 456 |
gh_patches_debug_39370
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-8335
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use clean_address function to join multiple free text lines together
The `clean_address` method added in #7568 allows a standardised approach to taking messy ordered multiple line address strings (of any type of composition) and joining them together into a single string.
We can now use `clean_address` to replace the many variants throughout spiders of attempting to join these multi-line address strings. An added benefit is being able to quickly find where multi-line address strings are parsed (via searching for `clean_address` instances), making it easier to change address handling in the future.
Related to #5598
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/zizzi_gb.py`
Content:
```
1 import scrapy
2
3 from locations.dict_parser import DictParser
4
5
6 class ZizziGBSpider(scrapy.Spider):
7 name = "zizzi_gb"
8 item_attributes = {"brand": "Zizzi", "brand_wikidata": "Q8072944"}
9 start_urls = ["https://www.zizzi.co.uk/wp-json/locations/get_venues"]
10
11 def parse(self, response):
12 for store in response.json()["data"]:
13 item = DictParser.parse(store)
14 item["addr_full"] = ", ".join(store["address"].split("\r\n"))
15 item["image"] = store["featured_image"]
16 item["website"] = store["link"]
17
18 if store["region"] == "Ireland":
19 item.pop("state")
20 item["country"] = "IE"
21 else:
22 item["country"] = "GB"
23
24 yield item
25
```
Path: `locations/spiders/zambrero_au.py`
Content:
```
1 import re
2
3 from scrapy import Spider
4 from scrapy.http import Request
5
6 from locations.categories import Categories
7 from locations.hours import OpeningHours
8 from locations.items import Feature
9
10
11 class ZambreroAUSpider(Spider):
12 name = "zambrero_au"
13 item_attributes = {"brand": "Zambrero", "brand_wikidata": "Q18636431", "extras": Categories.FAST_FOOD.value}
14 allowed_domains = ["www.zambrero.com.au"]
15
16 def start_requests(self):
17 yield Request(url=f"https://{self.allowed_domains[0]}/locations", callback=self.parse_location_list)
18
19 def parse_location_list(self, response):
20 location_urls = response.xpath('//div[@data-location-id]//a[@title="Order & Store Info"]/@href').getall()
21 for location_url in location_urls:
22 yield Request(url=location_url, callback=self.parse_location)
23
24 def parse_location(self, response):
25 properties = {
26 "ref": response.xpath("//@data-location-id").get(),
27 "name": re.sub(r"\s+", " ", response.xpath("//div[@data-location-id]/h4/text()").get()).strip(),
28 "lat": response.xpath("//@data-lat").get(),
29 "lon": response.xpath("///@data-lng").get(),
30 "addr_full": re.sub(
31 r"\s+",
32 " ",
33 " ".join(response.xpath('//div[@data-location-id]//span[contains(@class, "address")]/text()').getall()),
34 ).strip(),
35 "phone": response.xpath('//a[contains(@class, "phone")]/@href').get().replace("tel:", ""),
36 "email": response.xpath('//a[contains(@href, "mailto:")]/@href').get().replace("mailto:", ""),
37 "website": response.url,
38 "opening_hours": OpeningHours(),
39 }
40 if "Temporarily Closed" in properties["name"]:
41 return
42 if properties["phone"] == "0":
43 properties.pop("phone")
44
45 hours_text = re.sub(
46 r"\s+", " ", " ".join(response.xpath('//div[contains(@class, "hours-item")]/span/text()').getall())
47 )
48 properties["opening_hours"].add_ranges_from_string(hours_text)
49
50 # Some store names and URLs contain "Opening Soon" but numerous of
51 # these are already open and the URL hasn't been changed. A more
52 # reliable way of knowing a store is not yet open is that it has
53 # no opening hours specified.
54 if not properties["opening_hours"].as_opening_hours():
55 return
56
57 yield Feature(**properties)
58
```
Path: `locations/spiders/woolworths_au.py`
Content:
```
1 import scrapy
2
3 from locations.dict_parser import DictParser
4
5
6 class WoolworthsAUSpider(scrapy.Spider):
7 name = "woolworths_au"
8 item_attributes = {"brand": "Woolworths", "brand_wikidata": "Q3249145"}
9 allowed_domains = ["woolworths.com.au"]
10 start_urls = [
11 "https://www.woolworths.com.au/apis/ui/StoreLocator/Stores?Max=10000&Division=SUPERMARKETS,PETROL,CALTEXWOW,AMPOLMETRO,AMPOL&Facility=&postcode=*"
12 ]
13 custom_settings = {"ROBOTSTXT_OBEY": False}
14 requires_proxy = "AU"
15
16 def parse(self, response):
17 data = response.json()
18
19 for i in data["Stores"]:
20 if not i["IsOpen"]:
21 continue
22
23 i["street_address"] = ", ".join(filter(None, [i["AddressLine1"], i["AddressLine2"]]))
24 i["ref"] = i.pop("StoreNo")
25 i["city"] = i.pop("Suburb")
26
27 item = DictParser.parse(i)
28
29 item["website"] = (
30 "https://www.woolworths.com.au/shop/storelocator/"
31 + "-".join([item["state"], item["city"], item["ref"], i["Division"]]).lower()
32 )
33
34 # TODO: types needs some work, NSI seems out of date too
35 item["extras"] = {"type": i["Division"]}
36
37 yield item
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/woolworths_au.py b/locations/spiders/woolworths_au.py
--- a/locations/spiders/woolworths_au.py
+++ b/locations/spiders/woolworths_au.py
@@ -1,6 +1,7 @@
import scrapy
from locations.dict_parser import DictParser
+from locations.pipelines.address_clean_up import clean_address
class WoolworthsAUSpider(scrapy.Spider):
@@ -20,7 +21,7 @@
if not i["IsOpen"]:
continue
- i["street_address"] = ", ".join(filter(None, [i["AddressLine1"], i["AddressLine2"]]))
+ i["street_address"] = clean_address([i["AddressLine1"], i["AddressLine2"]])
i["ref"] = i.pop("StoreNo")
i["city"] = i.pop("Suburb")
diff --git a/locations/spiders/zambrero_au.py b/locations/spiders/zambrero_au.py
--- a/locations/spiders/zambrero_au.py
+++ b/locations/spiders/zambrero_au.py
@@ -6,6 +6,7 @@
from locations.categories import Categories
from locations.hours import OpeningHours
from locations.items import Feature
+from locations.pipelines.address_clean_up import clean_address
class ZambreroAUSpider(Spider):
@@ -27,11 +28,9 @@
"name": re.sub(r"\s+", " ", response.xpath("//div[@data-location-id]/h4/text()").get()).strip(),
"lat": response.xpath("//@data-lat").get(),
"lon": response.xpath("///@data-lng").get(),
- "addr_full": re.sub(
- r"\s+",
- " ",
- " ".join(response.xpath('//div[@data-location-id]//span[contains(@class, "address")]/text()').getall()),
- ).strip(),
+ "addr_full": clean_address(
+ " ".join(response.xpath('//div[@data-location-id]//span[contains(@class, "address")]/text()').getall())
+ ),
"phone": response.xpath('//a[contains(@class, "phone")]/@href').get().replace("tel:", ""),
"email": response.xpath('//a[contains(@href, "mailto:")]/@href').get().replace("mailto:", ""),
"website": response.url,
diff --git a/locations/spiders/zizzi_gb.py b/locations/spiders/zizzi_gb.py
--- a/locations/spiders/zizzi_gb.py
+++ b/locations/spiders/zizzi_gb.py
@@ -1,6 +1,7 @@
import scrapy
from locations.dict_parser import DictParser
+from locations.pipelines.address_clean_up import clean_address
class ZizziGBSpider(scrapy.Spider):
@@ -11,7 +12,7 @@
def parse(self, response):
for store in response.json()["data"]:
item = DictParser.parse(store)
- item["addr_full"] = ", ".join(store["address"].split("\r\n"))
+ item["addr_full"] = clean_address(store["address"].split("\r\n"))
item["image"] = store["featured_image"]
item["website"] = store["link"]
|
{"golden_diff": "diff --git a/locations/spiders/woolworths_au.py b/locations/spiders/woolworths_au.py\n--- a/locations/spiders/woolworths_au.py\n+++ b/locations/spiders/woolworths_au.py\n@@ -1,6 +1,7 @@\n import scrapy\n \n from locations.dict_parser import DictParser\n+from locations.pipelines.address_clean_up import clean_address\n \n \n class WoolworthsAUSpider(scrapy.Spider):\n@@ -20,7 +21,7 @@\n if not i[\"IsOpen\"]:\n continue\n \n- i[\"street_address\"] = \", \".join(filter(None, [i[\"AddressLine1\"], i[\"AddressLine2\"]]))\n+ i[\"street_address\"] = clean_address([i[\"AddressLine1\"], i[\"AddressLine2\"]])\n i[\"ref\"] = i.pop(\"StoreNo\")\n i[\"city\"] = i.pop(\"Suburb\")\n \ndiff --git a/locations/spiders/zambrero_au.py b/locations/spiders/zambrero_au.py\n--- a/locations/spiders/zambrero_au.py\n+++ b/locations/spiders/zambrero_au.py\n@@ -6,6 +6,7 @@\n from locations.categories import Categories\n from locations.hours import OpeningHours\n from locations.items import Feature\n+from locations.pipelines.address_clean_up import clean_address\n \n \n class ZambreroAUSpider(Spider):\n@@ -27,11 +28,9 @@\n \"name\": re.sub(r\"\\s+\", \" \", response.xpath(\"//div[@data-location-id]/h4/text()\").get()).strip(),\n \"lat\": response.xpath(\"//@data-lat\").get(),\n \"lon\": response.xpath(\"///@data-lng\").get(),\n- \"addr_full\": re.sub(\n- r\"\\s+\",\n- \" \",\n- \" \".join(response.xpath('//div[@data-location-id]//span[contains(@class, \"address\")]/text()').getall()),\n- ).strip(),\n+ \"addr_full\": clean_address(\n+ \" \".join(response.xpath('//div[@data-location-id]//span[contains(@class, \"address\")]/text()').getall())\n+ ),\n \"phone\": response.xpath('//a[contains(@class, \"phone\")]/@href').get().replace(\"tel:\", \"\"),\n \"email\": response.xpath('//a[contains(@href, \"mailto:\")]/@href').get().replace(\"mailto:\", \"\"),\n \"website\": response.url,\ndiff --git a/locations/spiders/zizzi_gb.py b/locations/spiders/zizzi_gb.py\n--- a/locations/spiders/zizzi_gb.py\n+++ b/locations/spiders/zizzi_gb.py\n@@ -1,6 +1,7 @@\n import scrapy\n \n from locations.dict_parser import DictParser\n+from locations.pipelines.address_clean_up import clean_address\n \n \n class ZizziGBSpider(scrapy.Spider):\n@@ -11,7 +12,7 @@\n def parse(self, response):\n for store in response.json()[\"data\"]:\n item = DictParser.parse(store)\n- item[\"addr_full\"] = \", \".join(store[\"address\"].split(\"\\r\\n\"))\n+ item[\"addr_full\"] = clean_address(store[\"address\"].split(\"\\r\\n\"))\n item[\"image\"] = store[\"featured_image\"]\n item[\"website\"] = store[\"link\"]\n", "issue": "Use clean_address function to join multiple free text lines together\nThe `clean_address` method added in #7568 allows a standardised approach to taking messy ordered multiple line address strings (of any type of composition) and joining them together into a single string.\r\n\r\nWe can now use `clean_address` to replace the many variants throughout spiders of attempting to join these multi-line address strings. An added benefit is being able to quickly find where multi-line address strings are parsed (via searching for `clean_address` instances), making it easier to change address handling in the future.\r\n\r\nRelated to #5598\n", "before_files": [{"content": "import scrapy\n\nfrom locations.dict_parser import DictParser\n\n\nclass ZizziGBSpider(scrapy.Spider):\n name = \"zizzi_gb\"\n item_attributes = {\"brand\": \"Zizzi\", \"brand_wikidata\": \"Q8072944\"}\n start_urls = [\"https://www.zizzi.co.uk/wp-json/locations/get_venues\"]\n\n def parse(self, response):\n for store in response.json()[\"data\"]:\n item = DictParser.parse(store)\n item[\"addr_full\"] = \", \".join(store[\"address\"].split(\"\\r\\n\"))\n item[\"image\"] = store[\"featured_image\"]\n item[\"website\"] = store[\"link\"]\n\n if store[\"region\"] == \"Ireland\":\n item.pop(\"state\")\n item[\"country\"] = \"IE\"\n else:\n item[\"country\"] = \"GB\"\n\n yield item\n", "path": "locations/spiders/zizzi_gb.py"}, {"content": "import re\n\nfrom scrapy import Spider\nfrom scrapy.http import Request\n\nfrom locations.categories import Categories\nfrom locations.hours import OpeningHours\nfrom locations.items import Feature\n\n\nclass ZambreroAUSpider(Spider):\n name = \"zambrero_au\"\n item_attributes = {\"brand\": \"Zambrero\", \"brand_wikidata\": \"Q18636431\", \"extras\": Categories.FAST_FOOD.value}\n allowed_domains = [\"www.zambrero.com.au\"]\n\n def start_requests(self):\n yield Request(url=f\"https://{self.allowed_domains[0]}/locations\", callback=self.parse_location_list)\n\n def parse_location_list(self, response):\n location_urls = response.xpath('//div[@data-location-id]//a[@title=\"Order & Store Info\"]/@href').getall()\n for location_url in location_urls:\n yield Request(url=location_url, callback=self.parse_location)\n\n def parse_location(self, response):\n properties = {\n \"ref\": response.xpath(\"//@data-location-id\").get(),\n \"name\": re.sub(r\"\\s+\", \" \", response.xpath(\"//div[@data-location-id]/h4/text()\").get()).strip(),\n \"lat\": response.xpath(\"//@data-lat\").get(),\n \"lon\": response.xpath(\"///@data-lng\").get(),\n \"addr_full\": re.sub(\n r\"\\s+\",\n \" \",\n \" \".join(response.xpath('//div[@data-location-id]//span[contains(@class, \"address\")]/text()').getall()),\n ).strip(),\n \"phone\": response.xpath('//a[contains(@class, \"phone\")]/@href').get().replace(\"tel:\", \"\"),\n \"email\": response.xpath('//a[contains(@href, \"mailto:\")]/@href').get().replace(\"mailto:\", \"\"),\n \"website\": response.url,\n \"opening_hours\": OpeningHours(),\n }\n if \"Temporarily Closed\" in properties[\"name\"]:\n return\n if properties[\"phone\"] == \"0\":\n properties.pop(\"phone\")\n\n hours_text = re.sub(\n r\"\\s+\", \" \", \" \".join(response.xpath('//div[contains(@class, \"hours-item\")]/span/text()').getall())\n )\n properties[\"opening_hours\"].add_ranges_from_string(hours_text)\n\n # Some store names and URLs contain \"Opening Soon\" but numerous of\n # these are already open and the URL hasn't been changed. A more\n # reliable way of knowing a store is not yet open is that it has\n # no opening hours specified.\n if not properties[\"opening_hours\"].as_opening_hours():\n return\n\n yield Feature(**properties)\n", "path": "locations/spiders/zambrero_au.py"}, {"content": "import scrapy\n\nfrom locations.dict_parser import DictParser\n\n\nclass WoolworthsAUSpider(scrapy.Spider):\n name = \"woolworths_au\"\n item_attributes = {\"brand\": \"Woolworths\", \"brand_wikidata\": \"Q3249145\"}\n allowed_domains = [\"woolworths.com.au\"]\n start_urls = [\n \"https://www.woolworths.com.au/apis/ui/StoreLocator/Stores?Max=10000&Division=SUPERMARKETS,PETROL,CALTEXWOW,AMPOLMETRO,AMPOL&Facility=&postcode=*\"\n ]\n custom_settings = {\"ROBOTSTXT_OBEY\": False}\n requires_proxy = \"AU\"\n\n def parse(self, response):\n data = response.json()\n\n for i in data[\"Stores\"]:\n if not i[\"IsOpen\"]:\n continue\n\n i[\"street_address\"] = \", \".join(filter(None, [i[\"AddressLine1\"], i[\"AddressLine2\"]]))\n i[\"ref\"] = i.pop(\"StoreNo\")\n i[\"city\"] = i.pop(\"Suburb\")\n\n item = DictParser.parse(i)\n\n item[\"website\"] = (\n \"https://www.woolworths.com.au/shop/storelocator/\"\n + \"-\".join([item[\"state\"], item[\"city\"], item[\"ref\"], i[\"Division\"]]).lower()\n )\n\n # TODO: types needs some work, NSI seems out of date too\n item[\"extras\"] = {\"type\": i[\"Division\"]}\n\n yield item\n", "path": "locations/spiders/woolworths_au.py"}], "after_files": [{"content": "import scrapy\n\nfrom locations.dict_parser import DictParser\nfrom locations.pipelines.address_clean_up import clean_address\n\n\nclass ZizziGBSpider(scrapy.Spider):\n name = \"zizzi_gb\"\n item_attributes = {\"brand\": \"Zizzi\", \"brand_wikidata\": \"Q8072944\"}\n start_urls = [\"https://www.zizzi.co.uk/wp-json/locations/get_venues\"]\n\n def parse(self, response):\n for store in response.json()[\"data\"]:\n item = DictParser.parse(store)\n item[\"addr_full\"] = clean_address(store[\"address\"].split(\"\\r\\n\"))\n item[\"image\"] = store[\"featured_image\"]\n item[\"website\"] = store[\"link\"]\n\n if store[\"region\"] == \"Ireland\":\n item.pop(\"state\")\n item[\"country\"] = \"IE\"\n else:\n item[\"country\"] = \"GB\"\n\n yield item\n", "path": "locations/spiders/zizzi_gb.py"}, {"content": "import re\n\nfrom scrapy import Spider\nfrom scrapy.http import Request\n\nfrom locations.categories import Categories\nfrom locations.hours import OpeningHours\nfrom locations.items import Feature\nfrom locations.pipelines.address_clean_up import clean_address\n\n\nclass ZambreroAUSpider(Spider):\n name = \"zambrero_au\"\n item_attributes = {\"brand\": \"Zambrero\", \"brand_wikidata\": \"Q18636431\", \"extras\": Categories.FAST_FOOD.value}\n allowed_domains = [\"www.zambrero.com.au\"]\n\n def start_requests(self):\n yield Request(url=f\"https://{self.allowed_domains[0]}/locations\", callback=self.parse_location_list)\n\n def parse_location_list(self, response):\n location_urls = response.xpath('//div[@data-location-id]//a[@title=\"Order & Store Info\"]/@href').getall()\n for location_url in location_urls:\n yield Request(url=location_url, callback=self.parse_location)\n\n def parse_location(self, response):\n properties = {\n \"ref\": response.xpath(\"//@data-location-id\").get(),\n \"name\": re.sub(r\"\\s+\", \" \", response.xpath(\"//div[@data-location-id]/h4/text()\").get()).strip(),\n \"lat\": response.xpath(\"//@data-lat\").get(),\n \"lon\": response.xpath(\"///@data-lng\").get(),\n \"addr_full\": clean_address(\n \" \".join(response.xpath('//div[@data-location-id]//span[contains(@class, \"address\")]/text()').getall())\n ),\n \"phone\": response.xpath('//a[contains(@class, \"phone\")]/@href').get().replace(\"tel:\", \"\"),\n \"email\": response.xpath('//a[contains(@href, \"mailto:\")]/@href').get().replace(\"mailto:\", \"\"),\n \"website\": response.url,\n \"opening_hours\": OpeningHours(),\n }\n if \"Temporarily Closed\" in properties[\"name\"]:\n return\n if properties[\"phone\"] == \"0\":\n properties.pop(\"phone\")\n\n hours_text = re.sub(\n r\"\\s+\", \" \", \" \".join(response.xpath('//div[contains(@class, \"hours-item\")]/span/text()').getall())\n )\n properties[\"opening_hours\"].add_ranges_from_string(hours_text)\n\n # Some store names and URLs contain \"Opening Soon\" but numerous of\n # these are already open and the URL hasn't been changed. A more\n # reliable way of knowing a store is not yet open is that it has\n # no opening hours specified.\n if not properties[\"opening_hours\"].as_opening_hours():\n return\n\n yield Feature(**properties)\n", "path": "locations/spiders/zambrero_au.py"}, {"content": "import scrapy\n\nfrom locations.dict_parser import DictParser\nfrom locations.pipelines.address_clean_up import clean_address\n\n\nclass WoolworthsAUSpider(scrapy.Spider):\n name = \"woolworths_au\"\n item_attributes = {\"brand\": \"Woolworths\", \"brand_wikidata\": \"Q3249145\"}\n allowed_domains = [\"woolworths.com.au\"]\n start_urls = [\n \"https://www.woolworths.com.au/apis/ui/StoreLocator/Stores?Max=10000&Division=SUPERMARKETS,PETROL,CALTEXWOW,AMPOLMETRO,AMPOL&Facility=&postcode=*\"\n ]\n custom_settings = {\"ROBOTSTXT_OBEY\": False}\n requires_proxy = \"AU\"\n\n def parse(self, response):\n data = response.json()\n\n for i in data[\"Stores\"]:\n if not i[\"IsOpen\"]:\n continue\n\n i[\"street_address\"] = clean_address([i[\"AddressLine1\"], i[\"AddressLine2\"]])\n i[\"ref\"] = i.pop(\"StoreNo\")\n i[\"city\"] = i.pop(\"Suburb\")\n\n item = DictParser.parse(i)\n\n item[\"website\"] = (\n \"https://www.woolworths.com.au/shop/storelocator/\"\n + \"-\".join([item[\"state\"], item[\"city\"], item[\"ref\"], i[\"Division\"]]).lower()\n )\n\n # TODO: types needs some work, NSI seems out of date too\n item[\"extras\"] = {\"type\": i[\"Division\"]}\n\n yield item\n", "path": "locations/spiders/woolworths_au.py"}]}
| 1,748 | 728 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.