problem_id
stringlengths 18
22
| source
stringclasses 1
value | task_type
stringclasses 1
value | in_source_id
stringlengths 13
58
| prompt
stringlengths 1.1k
25.4k
| golden_diff
stringlengths 145
5.13k
| verification_info
stringlengths 582
39.1k
| num_tokens
int64 271
4.1k
| num_tokens_diff
int64 47
1.02k
|
---|---|---|---|---|---|---|---|---|
gh_patches_debug_22005
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__torchmetrics-1447
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Small typo in PESQ docs
## 📚 Documentation
[Perceptual Evaluation Of Speech Quality (PESQ) documentation](https://torchmetrics.readthedocs.io/en/stable/audio/perceptual_evaluation_speech_quality.html) states following for both Module Interface and Functional Interface:
Raises ModuleNotFoundError – If **peqs** package is not installed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/torchmetrics/functional/audio/pesq.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import numpy as np
15 import torch
16 from torch import Tensor
17
18 from torchmetrics.utilities.checks import _check_same_shape
19 from torchmetrics.utilities.imports import _MULTIPROCESSING_AVAILABLE, _PESQ_AVAILABLE
20
21 if _PESQ_AVAILABLE:
22 import pesq as pesq_backend
23 else:
24 pesq_backend = None
25
26
27 __doctest_requires__ = {("perceptual_evaluation_speech_quality",): ["pesq"]}
28
29
30 def perceptual_evaluation_speech_quality(
31 preds: Tensor,
32 target: Tensor,
33 fs: int,
34 mode: str,
35 keep_same_device: bool = False,
36 n_processes: int = 1,
37 ) -> Tensor:
38 r"""Calculates `Perceptual Evaluation of Speech Quality`_ (PESQ). It's a recognized industry standard for audio
39 quality that takes into considerations characteristics such as: audio sharpness, call volume, background noise,
40 clipping, audio interference ect. PESQ returns a score between -0.5 and 4.5 with the higher scores indicating a
41 better quality.
42
43 This metric is a wrapper for the `pesq package`_. Note that input will be moved to `cpu` to perform the metric
44 calculation.
45
46 .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install
47 torchmetrics[audio]`` or ``pip install pesq``. Note that ``pesq`` will compile with your currently
48 installed version of numpy, meaning that if you upgrade numpy at some point in the future you will
49 most likely have to reinstall ``pesq``.
50
51 Args:
52 preds: float tensor with shape ``(...,time)``
53 target: float tensor with shape ``(...,time)``
54 fs: sampling frequency, should be 16000 or 8000 (Hz)
55 mode: ``'wb'`` (wide-band) or ``'nb'`` (narrow-band)
56 keep_same_device: whether to move the pesq value to the device of preds
57 n_processes: integer specifiying the number of processes to run in parallel for the metric calculation.
58 Only applies to batches of data and if ``multiprocessing`` package is installed.
59
60 Returns:
61 Float tensor with shape ``(...,)`` of PESQ values per sample
62
63 Raises:
64 ModuleNotFoundError:
65 If ``peqs`` package is not installed
66 ValueError:
67 If ``fs`` is not either ``8000`` or ``16000``
68 ValueError:
69 If ``mode`` is not either ``"wb"`` or ``"nb"``
70 RuntimeError:
71 If ``preds`` and ``target`` does not have the same shape
72
73 Example:
74 >>> from torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality
75 >>> import torch
76 >>> g = torch.manual_seed(1)
77 >>> preds = torch.randn(8000)
78 >>> target = torch.randn(8000)
79 >>> perceptual_evaluation_speech_quality(preds, target, 8000, 'nb')
80 tensor(2.2076)
81 >>> perceptual_evaluation_speech_quality(preds, target, 16000, 'wb')
82 tensor(1.7359)
83 """
84 if not _PESQ_AVAILABLE:
85 raise ModuleNotFoundError(
86 "PESQ metric requires that pesq is installed."
87 " Either install as `pip install torchmetrics[audio]` or `pip install pesq`."
88 )
89 if fs not in (8000, 16000):
90 raise ValueError(f"Expected argument `fs` to either be 8000 or 16000 but got {fs}")
91 if mode not in ("wb", "nb"):
92 raise ValueError(f"Expected argument `mode` to either be 'wb' or 'nb' but got {mode}")
93 _check_same_shape(preds, target)
94
95 if preds.ndim == 1:
96 pesq_val_np = pesq_backend.pesq(fs, target.detach().cpu().numpy(), preds.detach().cpu().numpy(), mode)
97 pesq_val = torch.tensor(pesq_val_np)
98 else:
99 preds_np = preds.reshape(-1, preds.shape[-1]).detach().cpu().numpy()
100 target_np = target.reshape(-1, preds.shape[-1]).detach().cpu().numpy()
101
102 if _MULTIPROCESSING_AVAILABLE and n_processes != 1:
103 pesq_val_np = pesq_backend.pesq_batch(fs, target_np, preds_np, mode, n_processor=n_processes)
104 pesq_val_np = np.array(pesq_val_np)
105 else:
106 pesq_val_np = np.empty(shape=(preds_np.shape[0]))
107 for b in range(preds_np.shape[0]):
108 pesq_val_np[b] = pesq_backend.pesq(fs, target_np[b, :], preds_np[b, :], mode)
109 pesq_val = torch.from_numpy(pesq_val_np)
110 pesq_val = pesq_val.reshape(preds.shape[:-1])
111
112 if keep_same_device:
113 pesq_val = pesq_val.to(preds.device)
114
115 return pesq_val
116
```
Path: `src/torchmetrics/audio/pesq.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Any
15
16 from torch import Tensor, tensor
17
18 from torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality
19 from torchmetrics.metric import Metric
20 from torchmetrics.utilities.imports import _PESQ_AVAILABLE
21
22 __doctest_requires__ = {"PerceptualEvaluationSpeechQuality": ["pesq"]}
23
24
25 class PerceptualEvaluationSpeechQuality(Metric):
26 """Calculates `Perceptual Evaluation of Speech Quality`_ (PESQ). It's a recognized industry standard for audio
27 quality that takes into considerations characteristics such as: audio sharpness, call volume, background noise,
28 clipping, audio interference ect. PESQ returns a score between -0.5 and 4.5 with the higher scores indicating a
29 better quality.
30
31 This metric is a wrapper for the `pesq package`_. Note that input will be moved to ``cpu`` to perform the metric
32 calculation.
33
34 As input to ``forward`` and ``update`` the metric accepts the following input
35
36 - ``preds`` (:class:`~torch.Tensor`): float tensor with shape ``(...,time)``
37 - ``target`` (:class:`~torch.Tensor`): float tensor with shape ``(...,time)``
38
39 As output of `forward` and `compute` the metric returns the following output
40
41 - ``pesq`` (:class:`~torch.Tensor`): float tensor with shape ``(...,)`` of PESQ value per sample
42
43 .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install
44 torchmetrics[audio]`` or ``pip install pesq``. ``pesq`` will compile with your currently
45 installed version of numpy, meaning that if you upgrade numpy at some point in the future you will
46 most likely have to reinstall ``pesq``.
47
48 Args:
49 fs: sampling frequency, should be 16000 or 8000 (Hz)
50 mode: ``'wb'`` (wide-band) or ``'nb'`` (narrow-band)
51 keep_same_device: whether to move the pesq value to the device of preds
52 n_processes: integer specifiying the number of processes to run in parallel for the metric calculation.
53 Only applies to batches of data and if ``multiprocessing`` package is installed.
54 kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
55
56 Raises:
57 ModuleNotFoundError:
58 If ``peqs`` package is not installed
59 ValueError:
60 If ``fs`` is not either ``8000`` or ``16000``
61 ValueError:
62 If ``mode`` is not either ``"wb"`` or ``"nb"``
63
64 Example:
65 >>> from torchmetrics.audio.pesq import PerceptualEvaluationSpeechQuality
66 >>> import torch
67 >>> g = torch.manual_seed(1)
68 >>> preds = torch.randn(8000)
69 >>> target = torch.randn(8000)
70 >>> nb_pesq = PerceptualEvaluationSpeechQuality(8000, 'nb')
71 >>> nb_pesq(preds, target)
72 tensor(2.2076)
73 >>> wb_pesq = PerceptualEvaluationSpeechQuality(16000, 'wb')
74 >>> wb_pesq(preds, target)
75 tensor(1.7359)
76 """
77
78 sum_pesq: Tensor
79 total: Tensor
80 full_state_update: bool = False
81 is_differentiable: bool = False
82 higher_is_better: bool = True
83
84 def __init__(
85 self,
86 fs: int,
87 mode: str,
88 n_processes: int = 1,
89 **kwargs: Any,
90 ) -> None:
91 super().__init__(**kwargs)
92 if not _PESQ_AVAILABLE:
93 raise ModuleNotFoundError(
94 "PerceptualEvaluationSpeechQuality metric requires that `pesq` is installed."
95 " Either install as `pip install torchmetrics[audio]` or `pip install pesq`."
96 )
97 if fs not in (8000, 16000):
98 raise ValueError(f"Expected argument `fs` to either be 8000 or 16000 but got {fs}")
99 self.fs = fs
100 if mode not in ("wb", "nb"):
101 raise ValueError(f"Expected argument `mode` to either be 'wb' or 'nb' but got {mode}")
102 self.mode = mode
103 if not isinstance(n_processes, int) and n_processes <= 0:
104 raise ValueError(f"Expected argument `n_processes` to be an int larger than 0 but got {n_processes}")
105 self.n_processes = n_processes
106
107 self.add_state("sum_pesq", default=tensor(0.0), dist_reduce_fx="sum")
108 self.add_state("total", default=tensor(0), dist_reduce_fx="sum")
109
110 def update(self, preds: Tensor, target: Tensor) -> None:
111 """Update state with predictions and targets."""
112 pesq_batch = perceptual_evaluation_speech_quality(
113 preds, target, self.fs, self.mode, False, self.n_processes
114 ).to(self.sum_pesq.device)
115
116 self.sum_pesq += pesq_batch.sum()
117 self.total += pesq_batch.numel()
118
119 def compute(self) -> Tensor:
120 """Computes metric."""
121 return self.sum_pesq / self.total
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/torchmetrics/audio/pesq.py b/src/torchmetrics/audio/pesq.py
--- a/src/torchmetrics/audio/pesq.py
+++ b/src/torchmetrics/audio/pesq.py
@@ -55,7 +55,7 @@
Raises:
ModuleNotFoundError:
- If ``peqs`` package is not installed
+ If ``pesq`` package is not installed
ValueError:
If ``fs`` is not either ``8000`` or ``16000``
ValueError:
diff --git a/src/torchmetrics/functional/audio/pesq.py b/src/torchmetrics/functional/audio/pesq.py
--- a/src/torchmetrics/functional/audio/pesq.py
+++ b/src/torchmetrics/functional/audio/pesq.py
@@ -62,13 +62,13 @@
Raises:
ModuleNotFoundError:
- If ``peqs`` package is not installed
+ If ``pesq`` package is not installed
ValueError:
If ``fs`` is not either ``8000`` or ``16000``
ValueError:
If ``mode`` is not either ``"wb"`` or ``"nb"``
RuntimeError:
- If ``preds`` and ``target`` does not have the same shape
+ If ``preds`` and ``target`` do not have the same shape
Example:
>>> from torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality
|
{"golden_diff": "diff --git a/src/torchmetrics/audio/pesq.py b/src/torchmetrics/audio/pesq.py\n--- a/src/torchmetrics/audio/pesq.py\n+++ b/src/torchmetrics/audio/pesq.py\n@@ -55,7 +55,7 @@\n \n Raises:\n ModuleNotFoundError:\n- If ``peqs`` package is not installed\n+ If ``pesq`` package is not installed\n ValueError:\n If ``fs`` is not either ``8000`` or ``16000``\n ValueError:\ndiff --git a/src/torchmetrics/functional/audio/pesq.py b/src/torchmetrics/functional/audio/pesq.py\n--- a/src/torchmetrics/functional/audio/pesq.py\n+++ b/src/torchmetrics/functional/audio/pesq.py\n@@ -62,13 +62,13 @@\n \n Raises:\n ModuleNotFoundError:\n- If ``peqs`` package is not installed\n+ If ``pesq`` package is not installed\n ValueError:\n If ``fs`` is not either ``8000`` or ``16000``\n ValueError:\n If ``mode`` is not either ``\"wb\"`` or ``\"nb\"``\n RuntimeError:\n- If ``preds`` and ``target`` does not have the same shape\n+ If ``preds`` and ``target`` do not have the same shape\n \n Example:\n >>> from torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality\n", "issue": "Small typo in PESQ docs\n## \ud83d\udcda Documentation\r\n\r\n[Perceptual Evaluation Of Speech Quality (PESQ) documentation](https://torchmetrics.readthedocs.io/en/stable/audio/perceptual_evaluation_speech_quality.html) states following for both Module Interface and Functional Interface:\r\nRaises ModuleNotFoundError \u2013 If **peqs** package is not installed.\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport numpy as np\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.utilities.checks import _check_same_shape\nfrom torchmetrics.utilities.imports import _MULTIPROCESSING_AVAILABLE, _PESQ_AVAILABLE\n\nif _PESQ_AVAILABLE:\n import pesq as pesq_backend\nelse:\n pesq_backend = None\n\n\n__doctest_requires__ = {(\"perceptual_evaluation_speech_quality\",): [\"pesq\"]}\n\n\ndef perceptual_evaluation_speech_quality(\n preds: Tensor,\n target: Tensor,\n fs: int,\n mode: str,\n keep_same_device: bool = False,\n n_processes: int = 1,\n) -> Tensor:\n r\"\"\"Calculates `Perceptual Evaluation of Speech Quality`_ (PESQ). It's a recognized industry standard for audio\n quality that takes into considerations characteristics such as: audio sharpness, call volume, background noise,\n clipping, audio interference ect. PESQ returns a score between -0.5 and 4.5 with the higher scores indicating a\n better quality.\n\n This metric is a wrapper for the `pesq package`_. Note that input will be moved to `cpu` to perform the metric\n calculation.\n\n .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install\n torchmetrics[audio]`` or ``pip install pesq``. 
Note that ``pesq`` will compile with your currently\n installed version of numpy, meaning that if you upgrade numpy at some point in the future you will\n most likely have to reinstall ``pesq``.\n\n Args:\n preds: float tensor with shape ``(...,time)``\n target: float tensor with shape ``(...,time)``\n fs: sampling frequency, should be 16000 or 8000 (Hz)\n mode: ``'wb'`` (wide-band) or ``'nb'`` (narrow-band)\n keep_same_device: whether to move the pesq value to the device of preds\n n_processes: integer specifiying the number of processes to run in parallel for the metric calculation.\n Only applies to batches of data and if ``multiprocessing`` package is installed.\n\n Returns:\n Float tensor with shape ``(...,)`` of PESQ values per sample\n\n Raises:\n ModuleNotFoundError:\n If ``peqs`` package is not installed\n ValueError:\n If ``fs`` is not either ``8000`` or ``16000``\n ValueError:\n If ``mode`` is not either ``\"wb\"`` or ``\"nb\"``\n RuntimeError:\n If ``preds`` and ``target`` does not have the same shape\n\n Example:\n >>> from torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality\n >>> import torch\n >>> g = torch.manual_seed(1)\n >>> preds = torch.randn(8000)\n >>> target = torch.randn(8000)\n >>> perceptual_evaluation_speech_quality(preds, target, 8000, 'nb')\n tensor(2.2076)\n >>> perceptual_evaluation_speech_quality(preds, target, 16000, 'wb')\n tensor(1.7359)\n \"\"\"\n if not _PESQ_AVAILABLE:\n raise ModuleNotFoundError(\n \"PESQ metric requires that pesq is installed.\"\n \" Either install as `pip install torchmetrics[audio]` or `pip install pesq`.\"\n )\n if fs not in (8000, 16000):\n raise ValueError(f\"Expected argument `fs` to either be 8000 or 16000 but got {fs}\")\n if mode not in (\"wb\", \"nb\"):\n raise ValueError(f\"Expected argument `mode` to either be 'wb' or 'nb' but got {mode}\")\n _check_same_shape(preds, target)\n\n if preds.ndim == 1:\n pesq_val_np = pesq_backend.pesq(fs, target.detach().cpu().numpy(), preds.detach().cpu().numpy(), mode)\n pesq_val = torch.tensor(pesq_val_np)\n else:\n preds_np = preds.reshape(-1, preds.shape[-1]).detach().cpu().numpy()\n target_np = target.reshape(-1, preds.shape[-1]).detach().cpu().numpy()\n\n if _MULTIPROCESSING_AVAILABLE and n_processes != 1:\n pesq_val_np = pesq_backend.pesq_batch(fs, target_np, preds_np, mode, n_processor=n_processes)\n pesq_val_np = np.array(pesq_val_np)\n else:\n pesq_val_np = np.empty(shape=(preds_np.shape[0]))\n for b in range(preds_np.shape[0]):\n pesq_val_np[b] = pesq_backend.pesq(fs, target_np[b, :], preds_np[b, :], mode)\n pesq_val = torch.from_numpy(pesq_val_np)\n pesq_val = pesq_val.reshape(preds.shape[:-1])\n\n if keep_same_device:\n pesq_val = pesq_val.to(preds.device)\n\n return pesq_val\n", "path": "src/torchmetrics/functional/audio/pesq.py"}, {"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any\n\nfrom torch import Tensor, tensor\n\nfrom torchmetrics.functional.audio.pesq import 
perceptual_evaluation_speech_quality\nfrom torchmetrics.metric import Metric\nfrom torchmetrics.utilities.imports import _PESQ_AVAILABLE\n\n__doctest_requires__ = {\"PerceptualEvaluationSpeechQuality\": [\"pesq\"]}\n\n\nclass PerceptualEvaluationSpeechQuality(Metric):\n \"\"\"Calculates `Perceptual Evaluation of Speech Quality`_ (PESQ). It's a recognized industry standard for audio\n quality that takes into considerations characteristics such as: audio sharpness, call volume, background noise,\n clipping, audio interference ect. PESQ returns a score between -0.5 and 4.5 with the higher scores indicating a\n better quality.\n\n This metric is a wrapper for the `pesq package`_. Note that input will be moved to ``cpu`` to perform the metric\n calculation.\n\n As input to ``forward`` and ``update`` the metric accepts the following input\n\n - ``preds`` (:class:`~torch.Tensor`): float tensor with shape ``(...,time)``\n - ``target`` (:class:`~torch.Tensor`): float tensor with shape ``(...,time)``\n\n As output of `forward` and `compute` the metric returns the following output\n\n - ``pesq`` (:class:`~torch.Tensor`): float tensor with shape ``(...,)`` of PESQ value per sample\n\n .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install\n torchmetrics[audio]`` or ``pip install pesq``. ``pesq`` will compile with your currently\n installed version of numpy, meaning that if you upgrade numpy at some point in the future you will\n most likely have to reinstall ``pesq``.\n\n Args:\n fs: sampling frequency, should be 16000 or 8000 (Hz)\n mode: ``'wb'`` (wide-band) or ``'nb'`` (narrow-band)\n keep_same_device: whether to move the pesq value to the device of preds\n n_processes: integer specifiying the number of processes to run in parallel for the metric calculation.\n Only applies to batches of data and if ``multiprocessing`` package is installed.\n kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.\n\n Raises:\n ModuleNotFoundError:\n If ``peqs`` package is not installed\n ValueError:\n If ``fs`` is not either ``8000`` or ``16000``\n ValueError:\n If ``mode`` is not either ``\"wb\"`` or ``\"nb\"``\n\n Example:\n >>> from torchmetrics.audio.pesq import PerceptualEvaluationSpeechQuality\n >>> import torch\n >>> g = torch.manual_seed(1)\n >>> preds = torch.randn(8000)\n >>> target = torch.randn(8000)\n >>> nb_pesq = PerceptualEvaluationSpeechQuality(8000, 'nb')\n >>> nb_pesq(preds, target)\n tensor(2.2076)\n >>> wb_pesq = PerceptualEvaluationSpeechQuality(16000, 'wb')\n >>> wb_pesq(preds, target)\n tensor(1.7359)\n \"\"\"\n\n sum_pesq: Tensor\n total: Tensor\n full_state_update: bool = False\n is_differentiable: bool = False\n higher_is_better: bool = True\n\n def __init__(\n self,\n fs: int,\n mode: str,\n n_processes: int = 1,\n **kwargs: Any,\n ) -> None:\n super().__init__(**kwargs)\n if not _PESQ_AVAILABLE:\n raise ModuleNotFoundError(\n \"PerceptualEvaluationSpeechQuality metric requires that `pesq` is installed.\"\n \" Either install as `pip install torchmetrics[audio]` or `pip install pesq`.\"\n )\n if fs not in (8000, 16000):\n raise ValueError(f\"Expected argument `fs` to either be 8000 or 16000 but got {fs}\")\n self.fs = fs\n if mode not in (\"wb\", \"nb\"):\n raise ValueError(f\"Expected argument `mode` to either be 'wb' or 'nb' but got {mode}\")\n self.mode = mode\n if not isinstance(n_processes, int) and n_processes <= 0:\n raise ValueError(f\"Expected argument `n_processes` to be an int larger than 0 but got 
{n_processes}\")\n self.n_processes = n_processes\n\n self.add_state(\"sum_pesq\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n self.add_state(\"total\", default=tensor(0), dist_reduce_fx=\"sum\")\n\n def update(self, preds: Tensor, target: Tensor) -> None:\n \"\"\"Update state with predictions and targets.\"\"\"\n pesq_batch = perceptual_evaluation_speech_quality(\n preds, target, self.fs, self.mode, False, self.n_processes\n ).to(self.sum_pesq.device)\n\n self.sum_pesq += pesq_batch.sum()\n self.total += pesq_batch.numel()\n\n def compute(self) -> Tensor:\n \"\"\"Computes metric.\"\"\"\n return self.sum_pesq / self.total\n", "path": "src/torchmetrics/audio/pesq.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport numpy as np\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.utilities.checks import _check_same_shape\nfrom torchmetrics.utilities.imports import _MULTIPROCESSING_AVAILABLE, _PESQ_AVAILABLE\n\nif _PESQ_AVAILABLE:\n import pesq as pesq_backend\nelse:\n pesq_backend = None\n\n\n__doctest_requires__ = {(\"perceptual_evaluation_speech_quality\",): [\"pesq\"]}\n\n\ndef perceptual_evaluation_speech_quality(\n preds: Tensor,\n target: Tensor,\n fs: int,\n mode: str,\n keep_same_device: bool = False,\n n_processes: int = 1,\n) -> Tensor:\n r\"\"\"Calculates `Perceptual Evaluation of Speech Quality`_ (PESQ). It's a recognized industry standard for audio\n quality that takes into considerations characteristics such as: audio sharpness, call volume, background noise,\n clipping, audio interference ect. PESQ returns a score between -0.5 and 4.5 with the higher scores indicating a\n better quality.\n\n This metric is a wrapper for the `pesq package`_. Note that input will be moved to `cpu` to perform the metric\n calculation.\n\n .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install\n torchmetrics[audio]`` or ``pip install pesq``. 
Note that ``pesq`` will compile with your currently\n installed version of numpy, meaning that if you upgrade numpy at some point in the future you will\n most likely have to reinstall ``pesq``.\n\n Args:\n preds: float tensor with shape ``(...,time)``\n target: float tensor with shape ``(...,time)``\n fs: sampling frequency, should be 16000 or 8000 (Hz)\n mode: ``'wb'`` (wide-band) or ``'nb'`` (narrow-band)\n keep_same_device: whether to move the pesq value to the device of preds\n n_processes: integer specifiying the number of processes to run in parallel for the metric calculation.\n Only applies to batches of data and if ``multiprocessing`` package is installed.\n\n Returns:\n Float tensor with shape ``(...,)`` of PESQ values per sample\n\n Raises:\n ModuleNotFoundError:\n If ``pesq`` package is not installed\n ValueError:\n If ``fs`` is not either ``8000`` or ``16000``\n ValueError:\n If ``mode`` is not either ``\"wb\"`` or ``\"nb\"``\n RuntimeError:\n If ``preds`` and ``target`` do not have the same shape\n\n Example:\n >>> from torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality\n >>> import torch\n >>> g = torch.manual_seed(1)\n >>> preds = torch.randn(8000)\n >>> target = torch.randn(8000)\n >>> perceptual_evaluation_speech_quality(preds, target, 8000, 'nb')\n tensor(2.2076)\n >>> perceptual_evaluation_speech_quality(preds, target, 16000, 'wb')\n tensor(1.7359)\n \"\"\"\n if not _PESQ_AVAILABLE:\n raise ModuleNotFoundError(\n \"PESQ metric requires that pesq is installed.\"\n \" Either install as `pip install torchmetrics[audio]` or `pip install pesq`.\"\n )\n if fs not in (8000, 16000):\n raise ValueError(f\"Expected argument `fs` to either be 8000 or 16000 but got {fs}\")\n if mode not in (\"wb\", \"nb\"):\n raise ValueError(f\"Expected argument `mode` to either be 'wb' or 'nb' but got {mode}\")\n _check_same_shape(preds, target)\n\n if preds.ndim == 1:\n pesq_val_np = pesq_backend.pesq(fs, target.detach().cpu().numpy(), preds.detach().cpu().numpy(), mode)\n pesq_val = torch.tensor(pesq_val_np)\n else:\n preds_np = preds.reshape(-1, preds.shape[-1]).detach().cpu().numpy()\n target_np = target.reshape(-1, preds.shape[-1]).detach().cpu().numpy()\n\n if _MULTIPROCESSING_AVAILABLE and n_processes != 1:\n pesq_val_np = pesq_backend.pesq_batch(fs, target_np, preds_np, mode, n_processor=n_processes)\n pesq_val_np = np.array(pesq_val_np)\n else:\n pesq_val_np = np.empty(shape=(preds_np.shape[0]))\n for b in range(preds_np.shape[0]):\n pesq_val_np[b] = pesq_backend.pesq(fs, target_np[b, :], preds_np[b, :], mode)\n pesq_val = torch.from_numpy(pesq_val_np)\n pesq_val = pesq_val.reshape(preds.shape[:-1])\n\n if keep_same_device:\n pesq_val = pesq_val.to(preds.device)\n\n return pesq_val\n", "path": "src/torchmetrics/functional/audio/pesq.py"}, {"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any\n\nfrom torch import Tensor, tensor\n\nfrom torchmetrics.functional.audio.pesq import 
perceptual_evaluation_speech_quality\nfrom torchmetrics.metric import Metric\nfrom torchmetrics.utilities.imports import _PESQ_AVAILABLE\n\n__doctest_requires__ = {\"PerceptualEvaluationSpeechQuality\": [\"pesq\"]}\n\n\nclass PerceptualEvaluationSpeechQuality(Metric):\n \"\"\"Calculates `Perceptual Evaluation of Speech Quality`_ (PESQ). It's a recognized industry standard for audio\n quality that takes into considerations characteristics such as: audio sharpness, call volume, background noise,\n clipping, audio interference ect. PESQ returns a score between -0.5 and 4.5 with the higher scores indicating a\n better quality.\n\n This metric is a wrapper for the `pesq package`_. Note that input will be moved to ``cpu`` to perform the metric\n calculation.\n\n As input to ``forward`` and ``update`` the metric accepts the following input\n\n - ``preds`` (:class:`~torch.Tensor`): float tensor with shape ``(...,time)``\n - ``target`` (:class:`~torch.Tensor`): float tensor with shape ``(...,time)``\n\n As output of `forward` and `compute` the metric returns the following output\n\n - ``pesq`` (:class:`~torch.Tensor`): float tensor with shape ``(...,)`` of PESQ value per sample\n\n .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install\n torchmetrics[audio]`` or ``pip install pesq``. ``pesq`` will compile with your currently\n installed version of numpy, meaning that if you upgrade numpy at some point in the future you will\n most likely have to reinstall ``pesq``.\n\n Args:\n fs: sampling frequency, should be 16000 or 8000 (Hz)\n mode: ``'wb'`` (wide-band) or ``'nb'`` (narrow-band)\n keep_same_device: whether to move the pesq value to the device of preds\n n_processes: integer specifiying the number of processes to run in parallel for the metric calculation.\n Only applies to batches of data and if ``multiprocessing`` package is installed.\n kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.\n\n Raises:\n ModuleNotFoundError:\n If ``pesq`` package is not installed\n ValueError:\n If ``fs`` is not either ``8000`` or ``16000``\n ValueError:\n If ``mode`` is not either ``\"wb\"`` or ``\"nb\"``\n\n Example:\n >>> from torchmetrics.audio.pesq import PerceptualEvaluationSpeechQuality\n >>> import torch\n >>> g = torch.manual_seed(1)\n >>> preds = torch.randn(8000)\n >>> target = torch.randn(8000)\n >>> nb_pesq = PerceptualEvaluationSpeechQuality(8000, 'nb')\n >>> nb_pesq(preds, target)\n tensor(2.2076)\n >>> wb_pesq = PerceptualEvaluationSpeechQuality(16000, 'wb')\n >>> wb_pesq(preds, target)\n tensor(1.7359)\n \"\"\"\n\n sum_pesq: Tensor\n total: Tensor\n full_state_update: bool = False\n is_differentiable: bool = False\n higher_is_better: bool = True\n\n def __init__(\n self,\n fs: int,\n mode: str,\n n_processes: int = 1,\n **kwargs: Any,\n ) -> None:\n super().__init__(**kwargs)\n if not _PESQ_AVAILABLE:\n raise ModuleNotFoundError(\n \"PerceptualEvaluationSpeechQuality metric requires that `pesq` is installed.\"\n \" Either install as `pip install torchmetrics[audio]` or `pip install pesq`.\"\n )\n if fs not in (8000, 16000):\n raise ValueError(f\"Expected argument `fs` to either be 8000 or 16000 but got {fs}\")\n self.fs = fs\n if mode not in (\"wb\", \"nb\"):\n raise ValueError(f\"Expected argument `mode` to either be 'wb' or 'nb' but got {mode}\")\n self.mode = mode\n if not isinstance(n_processes, int) and n_processes <= 0:\n raise ValueError(f\"Expected argument `n_processes` to be an int larger than 0 but got 
{n_processes}\")\n self.n_processes = n_processes\n\n self.add_state(\"sum_pesq\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n self.add_state(\"total\", default=tensor(0), dist_reduce_fx=\"sum\")\n\n def update(self, preds: Tensor, target: Tensor) -> None:\n \"\"\"Update state with predictions and targets.\"\"\"\n pesq_batch = perceptual_evaluation_speech_quality(\n preds, target, self.fs, self.mode, False, self.n_processes\n ).to(self.sum_pesq.device)\n\n self.sum_pesq += pesq_batch.sum()\n self.total += pesq_batch.numel()\n\n def compute(self) -> Tensor:\n \"\"\"Computes metric.\"\"\"\n return self.sum_pesq / self.total\n", "path": "src/torchmetrics/audio/pesq.py"}]}
| 3,419 | 331 |
gh_patches_debug_9861
|
rasdani/github-patches
|
git_diff
|
mindsdb__mindsdb-1019
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error if try create datasource from not existing integration
If send query `PUT api/datasources/ds_name` with data:
```
{
"integration_id": 'unexists_integration',
"name": 'ds_name',
"query": f"select * from test_data.any_data limit 100;"
}
```
then in response will be error:
```
{
"message": 'TypeError: expected str, bytes or os.PathLike object, not dict'
}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mindsdb/api/http/namespaces/datasource.py`
Content:
```
1 import datetime
2 import os
3 import threading
4 import tempfile
5 import re
6 import multipart
7
8 import mindsdb
9 from dateutil.parser import parse
10 from flask import request, send_file
11 from flask_restx import Resource, abort # 'abort' using to return errors as json: {'message': 'error text'}
12 from flask import current_app as ca
13
14 from mindsdb.api.http.namespaces.configs.datasources import ns_conf
15 from mindsdb.api.http.namespaces.entitites.datasources.datasource import (
16 datasource_metadata,
17 put_datasource_params
18 )
19 from mindsdb.api.http.namespaces.entitites.datasources.datasource_data import (
20 get_datasource_rows_params,
21 datasource_rows_metadata
22 )
23 from mindsdb.api.http.namespaces.entitites.datasources.datasource_files import (
24 put_datasource_file_params
25 )
26 from mindsdb.api.http.namespaces.entitites.datasources.datasource_missed_files import (
27 datasource_missed_files_metadata,
28 get_datasource_missed_files_params
29 )
30
31
32 def parse_filter(key, value):
33 result = re.search(r'filter(_*.*)\[(.*)\]', key)
34 operator = result.groups()[0].strip('_') or 'like'
35 field = result.groups()[1]
36 operators_map = {
37 'like': 'like',
38 'in': 'in',
39 'nin': 'not in',
40 'gt': '>',
41 'lt': '<',
42 'gte': '>=',
43 'lte': '<=',
44 'eq': '=',
45 'neq': '!='
46 }
47 if operator not in operators_map:
48 return None
49 operator = operators_map[operator]
50 return [field, operator, value]
51
52
53 @ns_conf.route('/')
54 class DatasourcesList(Resource):
55 @ns_conf.doc('get_datasources_list')
56 @ns_conf.marshal_list_with(datasource_metadata)
57 def get(self):
58 '''List all datasources'''
59 return ca.default_store.get_datasources()
60
61
62 @ns_conf.route('/<name>')
63 @ns_conf.param('name', 'Datasource name')
64 class Datasource(Resource):
65 @ns_conf.doc('get_datasource')
66 @ns_conf.marshal_with(datasource_metadata)
67 def get(self, name):
68 '''return datasource metadata'''
69 ds = ca.default_store.get_datasource(name)
70 if ds is not None:
71 return ds
72 return '', 404
73
74 @ns_conf.doc('delete_datasource')
75 def delete(self, name):
76 '''delete datasource'''
77 try:
78 ca.default_store.delete_datasource(name)
79 except Exception as e:
80 print(e)
81 abort(400, str(e))
82 return '', 200
83
84 @ns_conf.doc('put_datasource', params=put_datasource_params)
85 @ns_conf.marshal_with(datasource_metadata)
86 def put(self, name):
87 '''add new datasource'''
88 data = {}
89
90 def on_field(field):
91 name = field.field_name.decode()
92 value = field.value.decode()
93 data[name] = value
94
95 file_object = None
96
97 def on_file(file):
98 nonlocal file_object
99 data['file'] = file.file_name.decode()
100 file_object = file.file_object
101
102 temp_dir_path = tempfile.mkdtemp(prefix='datasource_file_')
103
104 if request.headers['Content-Type'].startswith('multipart/form-data'):
105 parser = multipart.create_form_parser(
106 headers=request.headers,
107 on_field=on_field,
108 on_file=on_file,
109 config={
110 'UPLOAD_DIR': temp_dir_path.encode(), # bytes required
111 'UPLOAD_KEEP_FILENAME': True,
112 'UPLOAD_KEEP_EXTENSIONS': True,
113 'MAX_MEMORY_FILE_SIZE': 0
114 }
115 )
116
117 while True:
118 chunk = request.stream.read(8192)
119 if not chunk:
120 break
121 parser.write(chunk)
122 parser.finalize()
123 parser.close()
124
125 if file_object is not None and not file_object.closed:
126 file_object.close()
127 else:
128 data = request.json
129
130 if 'query' in data:
131 source_type = request.json['integration_id']
132 ca.default_store.save_datasource(name, source_type, request.json)
133 os.rmdir(temp_dir_path)
134 return ca.default_store.get_datasource(name)
135
136 ds_name = data['name'] if 'name' in data else name
137 source = data['source'] if 'source' in data else name
138 source_type = data['source_type']
139
140 if source_type == 'file':
141 file_path = os.path.join(temp_dir_path, data['file'])
142 else:
143 file_path = None
144
145 ca.default_store.save_datasource(ds_name, source_type, source, file_path)
146 os.rmdir(temp_dir_path)
147
148 return ca.default_store.get_datasource(ds_name)
149
150
151 ds_analysis = {}
152
153
154 def analyzing_thread(name, default_store):
155 global ds_analysis
156 ds_analysis[name] = None
157 ds = default_store.get_datasource(name)
158 analysis = default_store.get_analysis(ds['name'])
159 ds_analysis[name] = {
160 'created_at': datetime.datetime.utcnow(),
161 'data': analysis
162 }
163
164
165 @ns_conf.route('/<name>/analyze')
166 @ns_conf.param('name', 'Datasource name')
167 class Analyze(Resource):
168 @ns_conf.doc('analyse_dataset')
169 def get(self, name):
170 global ds_analysis
171 if name in ds_analysis:
172 if ds_analysis[name] is None:
173 return {'status': 'analyzing'}, 200
174 else:
175 analysis = ds_analysis[name]['data']
176 return analysis, 200
177
178 ds = ca.default_store.get_datasource(name)
179 if ds is None:
180 print('No valid datasource given')
181 abort(400, 'No valid datasource given')
182
183 x = threading.Thread(target=analyzing_thread, args=(name, ca.default_store))
184 x.start()
185 return {'status': 'analyzing'}, 200
186
187 @ns_conf.route('/<name>/analyze_refresh')
188 @ns_conf.param('name', 'Datasource name')
189 class Analyze(Resource):
190 @ns_conf.doc('analyze_refresh_dataset')
191 def get(self, name):
192 global ds_analysis
193 if name in ds_analysis:
194 if ds_analysis[name] is None:
195 return {'status': 'analyzing'}, 200
196 else:
197 del ds_analysis[name]
198
199 ds = ca.default_store.get_datasource(name)
200 if ds is None:
201 print('No valid datasource given')
202 abort(400, 'No valid datasource given')
203
204 x = threading.Thread(target=analyzing_thread, args=(name, ca.default_store))
205 x.start()
206 return {'status': 'analyzing'}, 200
207
208
209 @ns_conf.route('/<name>/analyze_subset')
210 @ns_conf.param('name', 'Datasource name')
211 class AnalyzeSubset(Resource):
212 @ns_conf.doc('analyse_datasubset')
213 def get(self, name):
214 ds = ca.default_store.get_datasource(name)
215 if ds is None:
216 print('No valid datasource given')
217 abort(400, 'No valid datasource given')
218
219 where = []
220 for key, value in request.args.items():
221 if key.startswith('filter'):
222 param = parse_filter(key, value)
223 if param is None:
224 abort(400, f'Not valid filter "{key}"')
225 where.append(param)
226
227 data_dict = ca.default_store.get_data(ds['name'], where)
228
229 if data_dict['rowcount'] == 0:
230 return abort(400, 'Empty dataset after filters applying')
231
232 return get_analysis(pd.DataFrame(data_dict['data'])), 200
233
234
235 @ns_conf.route('/<name>/data/')
236 @ns_conf.param('name', 'Datasource name')
237 class DatasourceData(Resource):
238 @ns_conf.doc('get_datasource_data', params=get_datasource_rows_params)
239 @ns_conf.marshal_with(datasource_rows_metadata)
240 def get(self, name):
241 '''return data rows'''
242 ds = ca.default_store.get_datasource(name)
243 if ds is None:
244 abort(400, 'No valid datasource given')
245
246 params = {
247 'page[size]': None,
248 'page[offset]': None
249 }
250 where = []
251 for key, value in request.args.items():
252 if key == 'page[size]':
253 params['page[size]'] = int(value)
254 if key == 'page[offset]':
255 params['page[offset]'] = int(value)
256 elif key.startswith('filter'):
257 param = parse_filter(key, value)
258 if param is None:
259 abort(400, f'Not valid filter "{key}"')
260 where.append(param)
261
262 data_dict = ca.default_store.get_data(name, where, params['page[size]'], params['page[offset]'])
263
264 return data_dict, 200
265
266
267 @ns_conf.route('/<name>/download')
268 @ns_conf.param('name', 'Datasource name')
269 class DatasourceMissedFilesDownload(Resource):
270 @ns_conf.doc('get_datasource_download')
271 def get(self, name):
272 '''download uploaded file'''
273 ds = ca.default_store.get_datasource(name)
274 if not ds:
275 abort(404, "{} not found".format(name))
276 if not os.path.exists(ds['source']):
277 abort(404, "{} not found".format(name))
278
279 return send_file(os.path.abspath(ds['source']), as_attachment=True)
280
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mindsdb/api/http/namespaces/datasource.py b/mindsdb/api/http/namespaces/datasource.py
--- a/mindsdb/api/http/namespaces/datasource.py
+++ b/mindsdb/api/http/namespaces/datasource.py
@@ -129,6 +129,10 @@
if 'query' in data:
source_type = request.json['integration_id']
+ if source_type not in ca.default_store.config['integrations']:
+ # integration doens't exist
+ abort(400, f"{source_type} integration doesn't exist")
+
ca.default_store.save_datasource(name, source_type, request.json)
os.rmdir(temp_dir_path)
return ca.default_store.get_datasource(name)
|
{"golden_diff": "diff --git a/mindsdb/api/http/namespaces/datasource.py b/mindsdb/api/http/namespaces/datasource.py\n--- a/mindsdb/api/http/namespaces/datasource.py\n+++ b/mindsdb/api/http/namespaces/datasource.py\n@@ -129,6 +129,10 @@\n \n if 'query' in data:\n source_type = request.json['integration_id']\n+ if source_type not in ca.default_store.config['integrations']:\n+ # integration doens't exist\n+ abort(400, f\"{source_type} integration doesn't exist\")\n+\n ca.default_store.save_datasource(name, source_type, request.json)\n os.rmdir(temp_dir_path)\n return ca.default_store.get_datasource(name)\n", "issue": "Error if try create datasource from not existing integration\nIf send query `PUT api/datasources/ds_name` with data:\r\n```\r\n{\r\n \"integration_id\": 'unexists_integration',\r\n \"name\": 'ds_name',\r\n \"query\": f\"select * from test_data.any_data limit 100;\"\r\n}\r\n```\r\nthen in response will be error:\r\n```\r\n{\r\n \"message\": 'TypeError: expected str, bytes or os.PathLike object, not dict'\r\n}\r\n```\n", "before_files": [{"content": "import datetime\nimport os\nimport threading\nimport tempfile\nimport re\nimport multipart\n\nimport mindsdb\nfrom dateutil.parser import parse\nfrom flask import request, send_file\nfrom flask_restx import Resource, abort # 'abort' using to return errors as json: {'message': 'error text'}\nfrom flask import current_app as ca\n\nfrom mindsdb.api.http.namespaces.configs.datasources import ns_conf\nfrom mindsdb.api.http.namespaces.entitites.datasources.datasource import (\n datasource_metadata,\n put_datasource_params\n)\nfrom mindsdb.api.http.namespaces.entitites.datasources.datasource_data import (\n get_datasource_rows_params,\n datasource_rows_metadata\n)\nfrom mindsdb.api.http.namespaces.entitites.datasources.datasource_files import (\n put_datasource_file_params\n)\nfrom mindsdb.api.http.namespaces.entitites.datasources.datasource_missed_files import (\n datasource_missed_files_metadata,\n get_datasource_missed_files_params\n)\n\n\ndef parse_filter(key, value):\n result = re.search(r'filter(_*.*)\\[(.*)\\]', key)\n operator = result.groups()[0].strip('_') or 'like'\n field = result.groups()[1]\n operators_map = {\n 'like': 'like',\n 'in': 'in',\n 'nin': 'not in',\n 'gt': '>',\n 'lt': '<',\n 'gte': '>=',\n 'lte': '<=',\n 'eq': '=',\n 'neq': '!='\n }\n if operator not in operators_map:\n return None\n operator = operators_map[operator]\n return [field, operator, value]\n\n\n@ns_conf.route('/')\nclass DatasourcesList(Resource):\n @ns_conf.doc('get_datasources_list')\n @ns_conf.marshal_list_with(datasource_metadata)\n def get(self):\n '''List all datasources'''\n return ca.default_store.get_datasources()\n\n\n@ns_conf.route('/<name>')\n@ns_conf.param('name', 'Datasource name')\nclass Datasource(Resource):\n @ns_conf.doc('get_datasource')\n @ns_conf.marshal_with(datasource_metadata)\n def get(self, name):\n '''return datasource metadata'''\n ds = ca.default_store.get_datasource(name)\n if ds is not None:\n return ds\n return '', 404\n\n @ns_conf.doc('delete_datasource')\n def delete(self, name):\n '''delete datasource'''\n try:\n ca.default_store.delete_datasource(name)\n except Exception as e:\n print(e)\n abort(400, str(e))\n return '', 200\n\n @ns_conf.doc('put_datasource', params=put_datasource_params)\n @ns_conf.marshal_with(datasource_metadata)\n def put(self, name):\n '''add new datasource'''\n data = {}\n\n def on_field(field):\n name = field.field_name.decode()\n value = field.value.decode()\n data[name] = value\n\n 
file_object = None\n\n def on_file(file):\n nonlocal file_object\n data['file'] = file.file_name.decode()\n file_object = file.file_object\n\n temp_dir_path = tempfile.mkdtemp(prefix='datasource_file_')\n\n if request.headers['Content-Type'].startswith('multipart/form-data'):\n parser = multipart.create_form_parser(\n headers=request.headers,\n on_field=on_field,\n on_file=on_file,\n config={\n 'UPLOAD_DIR': temp_dir_path.encode(), # bytes required\n 'UPLOAD_KEEP_FILENAME': True,\n 'UPLOAD_KEEP_EXTENSIONS': True,\n 'MAX_MEMORY_FILE_SIZE': 0\n }\n )\n\n while True:\n chunk = request.stream.read(8192)\n if not chunk:\n break\n parser.write(chunk)\n parser.finalize()\n parser.close()\n\n if file_object is not None and not file_object.closed:\n file_object.close()\n else:\n data = request.json\n\n if 'query' in data:\n source_type = request.json['integration_id']\n ca.default_store.save_datasource(name, source_type, request.json)\n os.rmdir(temp_dir_path)\n return ca.default_store.get_datasource(name)\n\n ds_name = data['name'] if 'name' in data else name\n source = data['source'] if 'source' in data else name\n source_type = data['source_type']\n\n if source_type == 'file':\n file_path = os.path.join(temp_dir_path, data['file'])\n else:\n file_path = None\n\n ca.default_store.save_datasource(ds_name, source_type, source, file_path)\n os.rmdir(temp_dir_path)\n\n return ca.default_store.get_datasource(ds_name)\n\n\nds_analysis = {}\n\n\ndef analyzing_thread(name, default_store):\n global ds_analysis\n ds_analysis[name] = None\n ds = default_store.get_datasource(name)\n analysis = default_store.get_analysis(ds['name'])\n ds_analysis[name] = {\n 'created_at': datetime.datetime.utcnow(),\n 'data': analysis\n }\n\n\n@ns_conf.route('/<name>/analyze')\n@ns_conf.param('name', 'Datasource name')\nclass Analyze(Resource):\n @ns_conf.doc('analyse_dataset')\n def get(self, name):\n global ds_analysis\n if name in ds_analysis:\n if ds_analysis[name] is None:\n return {'status': 'analyzing'}, 200\n else:\n analysis = ds_analysis[name]['data']\n return analysis, 200\n\n ds = ca.default_store.get_datasource(name)\n if ds is None:\n print('No valid datasource given')\n abort(400, 'No valid datasource given')\n\n x = threading.Thread(target=analyzing_thread, args=(name, ca.default_store))\n x.start()\n return {'status': 'analyzing'}, 200\n\n@ns_conf.route('/<name>/analyze_refresh')\n@ns_conf.param('name', 'Datasource name')\nclass Analyze(Resource):\n @ns_conf.doc('analyze_refresh_dataset')\n def get(self, name):\n global ds_analysis\n if name in ds_analysis:\n if ds_analysis[name] is None:\n return {'status': 'analyzing'}, 200\n else:\n del ds_analysis[name]\n\n ds = ca.default_store.get_datasource(name)\n if ds is None:\n print('No valid datasource given')\n abort(400, 'No valid datasource given')\n\n x = threading.Thread(target=analyzing_thread, args=(name, ca.default_store))\n x.start()\n return {'status': 'analyzing'}, 200\n\n\n@ns_conf.route('/<name>/analyze_subset')\n@ns_conf.param('name', 'Datasource name')\nclass AnalyzeSubset(Resource):\n @ns_conf.doc('analyse_datasubset')\n def get(self, name):\n ds = ca.default_store.get_datasource(name)\n if ds is None:\n print('No valid datasource given')\n abort(400, 'No valid datasource given')\n\n where = []\n for key, value in request.args.items():\n if key.startswith('filter'):\n param = parse_filter(key, value)\n if param is None:\n abort(400, f'Not valid filter \"{key}\"')\n where.append(param)\n\n data_dict = ca.default_store.get_data(ds['name'], 
where)\n\n if data_dict['rowcount'] == 0:\n return abort(400, 'Empty dataset after filters applying')\n\n return get_analysis(pd.DataFrame(data_dict['data'])), 200\n\n\n@ns_conf.route('/<name>/data/')\n@ns_conf.param('name', 'Datasource name')\nclass DatasourceData(Resource):\n @ns_conf.doc('get_datasource_data', params=get_datasource_rows_params)\n @ns_conf.marshal_with(datasource_rows_metadata)\n def get(self, name):\n '''return data rows'''\n ds = ca.default_store.get_datasource(name)\n if ds is None:\n abort(400, 'No valid datasource given')\n\n params = {\n 'page[size]': None,\n 'page[offset]': None\n }\n where = []\n for key, value in request.args.items():\n if key == 'page[size]':\n params['page[size]'] = int(value)\n if key == 'page[offset]':\n params['page[offset]'] = int(value)\n elif key.startswith('filter'):\n param = parse_filter(key, value)\n if param is None:\n abort(400, f'Not valid filter \"{key}\"')\n where.append(param)\n\n data_dict = ca.default_store.get_data(name, where, params['page[size]'], params['page[offset]'])\n\n return data_dict, 200\n\n\n@ns_conf.route('/<name>/download')\n@ns_conf.param('name', 'Datasource name')\nclass DatasourceMissedFilesDownload(Resource):\n @ns_conf.doc('get_datasource_download')\n def get(self, name):\n '''download uploaded file'''\n ds = ca.default_store.get_datasource(name)\n if not ds:\n abort(404, \"{} not found\".format(name))\n if not os.path.exists(ds['source']):\n abort(404, \"{} not found\".format(name))\n\n return send_file(os.path.abspath(ds['source']), as_attachment=True)\n", "path": "mindsdb/api/http/namespaces/datasource.py"}], "after_files": [{"content": "import datetime\nimport os\nimport threading\nimport tempfile\nimport re\nimport multipart\n\nimport mindsdb\nfrom dateutil.parser import parse\nfrom flask import request, send_file\nfrom flask_restx import Resource, abort # 'abort' using to return errors as json: {'message': 'error text'}\nfrom flask import current_app as ca\n\nfrom mindsdb.api.http.namespaces.configs.datasources import ns_conf\nfrom mindsdb.api.http.namespaces.entitites.datasources.datasource import (\n datasource_metadata,\n put_datasource_params\n)\nfrom mindsdb.api.http.namespaces.entitites.datasources.datasource_data import (\n get_datasource_rows_params,\n datasource_rows_metadata\n)\nfrom mindsdb.api.http.namespaces.entitites.datasources.datasource_files import (\n put_datasource_file_params\n)\nfrom mindsdb.api.http.namespaces.entitites.datasources.datasource_missed_files import (\n datasource_missed_files_metadata,\n get_datasource_missed_files_params\n)\n\n\ndef parse_filter(key, value):\n result = re.search(r'filter(_*.*)\\[(.*)\\]', key)\n operator = result.groups()[0].strip('_') or 'like'\n field = result.groups()[1]\n operators_map = {\n 'like': 'like',\n 'in': 'in',\n 'nin': 'not in',\n 'gt': '>',\n 'lt': '<',\n 'gte': '>=',\n 'lte': '<=',\n 'eq': '=',\n 'neq': '!='\n }\n if operator not in operators_map:\n return None\n operator = operators_map[operator]\n return [field, operator, value]\n\n\n@ns_conf.route('/')\nclass DatasourcesList(Resource):\n @ns_conf.doc('get_datasources_list')\n @ns_conf.marshal_list_with(datasource_metadata)\n def get(self):\n '''List all datasources'''\n return ca.default_store.get_datasources()\n\n\n@ns_conf.route('/<name>')\n@ns_conf.param('name', 'Datasource name')\nclass Datasource(Resource):\n @ns_conf.doc('get_datasource')\n @ns_conf.marshal_with(datasource_metadata)\n def get(self, name):\n '''return datasource metadata'''\n ds = 
ca.default_store.get_datasource(name)\n if ds is not None:\n return ds\n return '', 404\n\n @ns_conf.doc('delete_datasource')\n def delete(self, name):\n '''delete datasource'''\n try:\n ca.default_store.delete_datasource(name)\n except Exception as e:\n print(e)\n abort(400, str(e))\n return '', 200\n\n @ns_conf.doc('put_datasource', params=put_datasource_params)\n @ns_conf.marshal_with(datasource_metadata)\n def put(self, name):\n '''add new datasource'''\n data = {}\n\n def on_field(field):\n name = field.field_name.decode()\n value = field.value.decode()\n data[name] = value\n\n file_object = None\n\n def on_file(file):\n nonlocal file_object\n data['file'] = file.file_name.decode()\n file_object = file.file_object\n\n temp_dir_path = tempfile.mkdtemp(prefix='datasource_file_')\n\n if request.headers['Content-Type'].startswith('multipart/form-data'):\n parser = multipart.create_form_parser(\n headers=request.headers,\n on_field=on_field,\n on_file=on_file,\n config={\n 'UPLOAD_DIR': temp_dir_path.encode(), # bytes required\n 'UPLOAD_KEEP_FILENAME': True,\n 'UPLOAD_KEEP_EXTENSIONS': True,\n 'MAX_MEMORY_FILE_SIZE': 0\n }\n )\n\n while True:\n chunk = request.stream.read(8192)\n if not chunk:\n break\n parser.write(chunk)\n parser.finalize()\n parser.close()\n\n if file_object is not None and not file_object.closed:\n file_object.close()\n else:\n data = request.json\n\n if 'query' in data:\n source_type = request.json['integration_id']\n if source_type not in ca.default_store.config['integrations']:\n # integration doens't exist\n abort(400, f\"{source_type} integration doesn't exist\")\n\n ca.default_store.save_datasource(name, source_type, request.json)\n os.rmdir(temp_dir_path)\n return ca.default_store.get_datasource(name)\n\n ds_name = data['name'] if 'name' in data else name\n source = data['source'] if 'source' in data else name\n source_type = data['source_type']\n\n if source_type == 'file':\n file_path = os.path.join(temp_dir_path, data['file'])\n else:\n file_path = None\n\n ca.default_store.save_datasource(ds_name, source_type, source, file_path)\n os.rmdir(temp_dir_path)\n\n return ca.default_store.get_datasource(ds_name)\n\n\nds_analysis = {}\n\n\ndef analyzing_thread(name, default_store):\n global ds_analysis\n ds_analysis[name] = None\n ds = default_store.get_datasource(name)\n analysis = default_store.get_analysis(ds['name'])\n ds_analysis[name] = {\n 'created_at': datetime.datetime.utcnow(),\n 'data': analysis\n }\n\n\n@ns_conf.route('/<name>/analyze')\n@ns_conf.param('name', 'Datasource name')\nclass Analyze(Resource):\n @ns_conf.doc('analyse_dataset')\n def get(self, name):\n global ds_analysis\n if name in ds_analysis:\n if ds_analysis[name] is None:\n return {'status': 'analyzing'}, 200\n else:\n analysis = ds_analysis[name]['data']\n return analysis, 200\n\n ds = ca.default_store.get_datasource(name)\n if ds is None:\n print('No valid datasource given')\n abort(400, 'No valid datasource given')\n\n x = threading.Thread(target=analyzing_thread, args=(name, ca.default_store))\n x.start()\n return {'status': 'analyzing'}, 200\n\n@ns_conf.route('/<name>/analyze_refresh')\n@ns_conf.param('name', 'Datasource name')\nclass Analyze(Resource):\n @ns_conf.doc('analyze_refresh_dataset')\n def get(self, name):\n global ds_analysis\n if name in ds_analysis:\n if ds_analysis[name] is None:\n return {'status': 'analyzing'}, 200\n else:\n del ds_analysis[name]\n\n ds = ca.default_store.get_datasource(name)\n if ds is None:\n print('No valid datasource given')\n abort(400, 'No 
valid datasource given')\n\n x = threading.Thread(target=analyzing_thread, args=(name, ca.default_store))\n x.start()\n return {'status': 'analyzing'}, 200\n\n\n@ns_conf.route('/<name>/analyze_subset')\n@ns_conf.param('name', 'Datasource name')\nclass AnalyzeSubset(Resource):\n @ns_conf.doc('analyse_datasubset')\n def get(self, name):\n ds = ca.default_store.get_datasource(name)\n if ds is None:\n print('No valid datasource given')\n abort(400, 'No valid datasource given')\n\n where = []\n for key, value in request.args.items():\n if key.startswith('filter'):\n param = parse_filter(key, value)\n if param is None:\n abort(400, f'Not valid filter \"{key}\"')\n where.append(param)\n\n data_dict = ca.default_store.get_data(ds['name'], where)\n\n if data_dict['rowcount'] == 0:\n return abort(400, 'Empty dataset after filters applying')\n\n return get_analysis(pd.DataFrame(data_dict['data'])), 200\n\n\n@ns_conf.route('/<name>/data/')\n@ns_conf.param('name', 'Datasource name')\nclass DatasourceData(Resource):\n @ns_conf.doc('get_datasource_data', params=get_datasource_rows_params)\n @ns_conf.marshal_with(datasource_rows_metadata)\n def get(self, name):\n '''return data rows'''\n ds = ca.default_store.get_datasource(name)\n if ds is None:\n abort(400, 'No valid datasource given')\n\n params = {\n 'page[size]': None,\n 'page[offset]': None\n }\n where = []\n for key, value in request.args.items():\n if key == 'page[size]':\n params['page[size]'] = int(value)\n if key == 'page[offset]':\n params['page[offset]'] = int(value)\n elif key.startswith('filter'):\n param = parse_filter(key, value)\n if param is None:\n abort(400, f'Not valid filter \"{key}\"')\n where.append(param)\n\n data_dict = ca.default_store.get_data(name, where, params['page[size]'], params['page[offset]'])\n\n return data_dict, 200\n\n\n@ns_conf.route('/<name>/download')\n@ns_conf.param('name', 'Datasource name')\nclass DatasourceMissedFilesDownload(Resource):\n @ns_conf.doc('get_datasource_download')\n def get(self, name):\n '''download uploaded file'''\n ds = ca.default_store.get_datasource(name)\n if not ds:\n abort(404, \"{} not found\".format(name))\n if not os.path.exists(ds['source']):\n abort(404, \"{} not found\".format(name))\n\n return send_file(os.path.abspath(ds['source']), as_attachment=True)\n", "path": "mindsdb/api/http/namespaces/datasource.py"}]}
| 3,137 | 170 |
gh_patches_debug_39730
|
rasdani/github-patches
|
git_diff
|
carpentries__amy-351
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deal with breadcrumbs
As Greg mentioned in #296, he hadn't liked the way breadcrumbs repeat current page title (or header) in their last element, for example: page "Event 2015-05-25-something" will have breadcrumbs "Amy / All events / Event 2015-05-25-something".
I took a look at other big websites and how they do breadcrumbs and @gvwilson was right. They don't repeat current site at the end of breadcrumbs.
This means we'd only have breadcrumbs at most 3 links long: Amy / All \* / \* [ / action ], for example:
Was:
- Amy / All events / Event 2015-05-25-something / Edit
Will be:
- Amy / All events / Event 2015-05-25-something
But this does not bug me. In case of `All *` pages, we can just as well drop breadcrumbs (because they'd look like "Amy / ").
So I don't really know what to do:
1. Display breadcrumbs on the same pages as now, but hide the last item.
2. Display breadcrumbs on the pages that would have more than 1 breadcrumbs.
3. Drop breadcrumbs completely.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `workshops/templatetags/breadcrumbs.py`
Content:
```
1 import logging
2
3 from django import template
4 from django.core.urlresolvers import reverse
5 from django.utils.encoding import force_text
6 from django.utils.html import escape
7
8 register = template.Library()
9 _LOG = logging.getLogger(__name__)
10
11
12 @register.simple_tag
13 def breadcrumb(title, url):
14 '''
15 Create a simple anchor with provided text and already-resolved URL.
16 Example usage:
17 {% breadcrumb "Title of breadcrumb" resolved_url %}
18 '''
19 return create_crumb(title, url)
20
21
22 @register.simple_tag
23 def breadcrumb_url(title, url_name):
24 '''
25 Add non-active breadcrumb with specified title. Second argument should be
26 a string name of URL that needs to be resolved.
27 Example usage:
28 {% breadcrumb_url "Title of breadcrumb" url_name %}
29 '''
30 url = reverse(url_name)
31 return create_crumb(title, url)
32
33
34 @register.simple_tag
35 def breadcrumb_active(title):
36 '''
37 Add active breadcrumb, but not in an anchor.
38 Example usage:
39 {% breadcrumb_active "Title of breadcrumb" %}
40 '''
41 return create_crumb(str(title), url=None, active=True)
42
43
44 @register.simple_tag
45 def breadcrumb_index_all_objects(model):
46 '''
47 Add breadcrumb linking to the listing of all objects of specific type.
48 This tag accepts both models or model instances as an argument.
49 Example usage:
50 {% breadcrumb_index_all_objects model %}
51 {% breadcrumb_index_all_objects person %}
52 '''
53 plural = force_text(model._meta.verbose_name_plural)
54 title = 'All {}'.format(plural)
55 url_name = 'all_{}'.format(plural)
56 url = reverse(url_name)
57 return create_crumb(title, url)
58
59
60 @register.simple_tag
61 def breadcrumb_edit_object(obj):
62 '''
63 Add an active breadcrumb with the title "Edit MODEL_NAME".
64 This tag accepts model instance as an argument.
65 Example usage:
66 {% breadcrumb_edit_object person %}
67 '''
68 singular = force_text(obj._meta.verbose_name)
69 title = 'Edit {}'.format(singular)
70 return create_crumb(title, url=None, active=True)
71
72
73 @register.simple_tag
74 def breadcrumb_new_object(model):
75 '''
76 Add an active breadcrumb with the title "Add new MODEL_NAME".
77 This tag accepts model class as an argument.
78 Example usage:
79 {% breadcrumb_new_object person %}
80 '''
81 singular = force_text(model._meta.verbose_name)
82 title = 'Add new {}'.format(singular)
83 return create_crumb(title, url=None, active=True)
84
85
86 @register.simple_tag
87 def breadcrumb_object(obj):
88 '''
89 Add non-active breadcrumb with the title "Add new MODEL_NAME".
90 This tag accepts model instance as an argument.
91 Example usage:
92 {% breadcrumb_object person %}
93 '''
94 title = str(obj)
95 url = obj.get_absolute_url()
96 return create_crumb(title, url, active=False)
97
98
99 @register.simple_tag
100 def breadcrumb_main_page():
101 '''
102 Special case of ``breadcrumb_url``. In all templates there's always a link
103 to the main page so I wanted to save everyone thinking & writing by
104 introducing this helper tag.
105 Example usage:
106 {% breadcrumb_main_page %}
107 '''
108 title = 'Amy'
109 url = reverse('index')
110 return create_crumb(title, url)
111
112
113 def create_crumb(title, url=None, active=False):
114 '''
115 Helper function that creates breadcrumb.
116 '''
117 active_str = ''
118 if active:
119 active_str = ' class="active"'
120
121 title = escape(title)
122 inner_str = title
123 if url:
124 inner_str = '<a href="{0}">{1}</a>'.format(url, title)
125
126 crumb = '<li{0}>{1}</li>'.format(active_str, inner_str)
127
128 return crumb
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/workshops/templatetags/breadcrumbs.py b/workshops/templatetags/breadcrumbs.py
deleted file mode 100644
--- a/workshops/templatetags/breadcrumbs.py
+++ /dev/null
@@ -1,128 +0,0 @@
-import logging
-
-from django import template
-from django.core.urlresolvers import reverse
-from django.utils.encoding import force_text
-from django.utils.html import escape
-
-register = template.Library()
-_LOG = logging.getLogger(__name__)
-
-
-@register.simple_tag
-def breadcrumb(title, url):
- '''
- Create a simple anchor with provided text and already-resolved URL.
- Example usage:
- {% breadcrumb "Title of breadcrumb" resolved_url %}
- '''
- return create_crumb(title, url)
-
-
-@register.simple_tag
-def breadcrumb_url(title, url_name):
- '''
- Add non-active breadcrumb with specified title. Second argument should be
- a string name of URL that needs to be resolved.
- Example usage:
- {% breadcrumb_url "Title of breadcrumb" url_name %}
- '''
- url = reverse(url_name)
- return create_crumb(title, url)
-
-
-@register.simple_tag
-def breadcrumb_active(title):
- '''
- Add active breadcrumb, but not in an anchor.
- Example usage:
- {% breadcrumb_active "Title of breadcrumb" %}
- '''
- return create_crumb(str(title), url=None, active=True)
-
-
-@register.simple_tag
-def breadcrumb_index_all_objects(model):
- '''
- Add breadcrumb linking to the listing of all objects of specific type.
- This tag accepts both models or model instances as an argument.
- Example usage:
- {% breadcrumb_index_all_objects model %}
- {% breadcrumb_index_all_objects person %}
- '''
- plural = force_text(model._meta.verbose_name_plural)
- title = 'All {}'.format(plural)
- url_name = 'all_{}'.format(plural)
- url = reverse(url_name)
- return create_crumb(title, url)
-
-
-@register.simple_tag
-def breadcrumb_edit_object(obj):
- '''
- Add an active breadcrumb with the title "Edit MODEL_NAME".
- This tag accepts model instance as an argument.
- Example usage:
- {% breadcrumb_edit_object person %}
- '''
- singular = force_text(obj._meta.verbose_name)
- title = 'Edit {}'.format(singular)
- return create_crumb(title, url=None, active=True)
-
-
-@register.simple_tag
-def breadcrumb_new_object(model):
- '''
- Add an active breadcrumb with the title "Add new MODEL_NAME".
- This tag accepts model class as an argument.
- Example usage:
- {% breadcrumb_new_object person %}
- '''
- singular = force_text(model._meta.verbose_name)
- title = 'Add new {}'.format(singular)
- return create_crumb(title, url=None, active=True)
-
-
-@register.simple_tag
-def breadcrumb_object(obj):
- '''
- Add non-active breadcrumb with the title "Add new MODEL_NAME".
- This tag accepts model instance as an argument.
- Example usage:
- {% breadcrumb_object person %}
- '''
- title = str(obj)
- url = obj.get_absolute_url()
- return create_crumb(title, url, active=False)
-
-
-@register.simple_tag
-def breadcrumb_main_page():
- '''
- Special case of ``breadcrumb_url``. In all templates there's always a link
- to the main page so I wanted to save everyone thinking & writing by
- introducing this helper tag.
- Example usage:
- {% breadcrumb_main_page %}
- '''
- title = 'Amy'
- url = reverse('index')
- return create_crumb(title, url)
-
-
-def create_crumb(title, url=None, active=False):
- '''
- Helper function that creates breadcrumb.
- '''
- active_str = ''
- if active:
- active_str = ' class="active"'
-
- title = escape(title)
- inner_str = title
- if url:
- inner_str = '<a href="{0}">{1}</a>'.format(url, title)
-
- crumb = '<li{0}>{1}</li>'.format(active_str, inner_str)
-
- return crumb
|
{"golden_diff": "diff --git a/workshops/templatetags/breadcrumbs.py b/workshops/templatetags/breadcrumbs.py\ndeleted file mode 100644\n--- a/workshops/templatetags/breadcrumbs.py\n+++ /dev/null\n@@ -1,128 +0,0 @@\n-import logging\n-\n-from django import template\n-from django.core.urlresolvers import reverse\n-from django.utils.encoding import force_text\n-from django.utils.html import escape\n-\n-register = template.Library()\n-_LOG = logging.getLogger(__name__)\n-\n-\[email protected]_tag\n-def breadcrumb(title, url):\n- '''\n- Create a simple anchor with provided text and already-resolved URL.\n- Example usage:\n- {% breadcrumb \"Title of breadcrumb\" resolved_url %}\n- '''\n- return create_crumb(title, url)\n-\n-\[email protected]_tag\n-def breadcrumb_url(title, url_name):\n- '''\n- Add non-active breadcrumb with specified title. Second argument should be\n- a string name of URL that needs to be resolved.\n- Example usage:\n- {% breadcrumb_url \"Title of breadcrumb\" url_name %}\n- '''\n- url = reverse(url_name)\n- return create_crumb(title, url)\n-\n-\[email protected]_tag\n-def breadcrumb_active(title):\n- '''\n- Add active breadcrumb, but not in an anchor.\n- Example usage:\n- {% breadcrumb_active \"Title of breadcrumb\" %}\n- '''\n- return create_crumb(str(title), url=None, active=True)\n-\n-\[email protected]_tag\n-def breadcrumb_index_all_objects(model):\n- '''\n- Add breadcrumb linking to the listing of all objects of specific type.\n- This tag accepts both models or model instances as an argument.\n- Example usage:\n- {% breadcrumb_index_all_objects model %}\n- {% breadcrumb_index_all_objects person %}\n- '''\n- plural = force_text(model._meta.verbose_name_plural)\n- title = 'All {}'.format(plural)\n- url_name = 'all_{}'.format(plural)\n- url = reverse(url_name)\n- return create_crumb(title, url)\n-\n-\[email protected]_tag\n-def breadcrumb_edit_object(obj):\n- '''\n- Add an active breadcrumb with the title \"Edit MODEL_NAME\".\n- This tag accepts model instance as an argument.\n- Example usage:\n- {% breadcrumb_edit_object person %}\n- '''\n- singular = force_text(obj._meta.verbose_name)\n- title = 'Edit {}'.format(singular)\n- return create_crumb(title, url=None, active=True)\n-\n-\[email protected]_tag\n-def breadcrumb_new_object(model):\n- '''\n- Add an active breadcrumb with the title \"Add new MODEL_NAME\".\n- This tag accepts model class as an argument.\n- Example usage:\n- {% breadcrumb_new_object person %}\n- '''\n- singular = force_text(model._meta.verbose_name)\n- title = 'Add new {}'.format(singular)\n- return create_crumb(title, url=None, active=True)\n-\n-\[email protected]_tag\n-def breadcrumb_object(obj):\n- '''\n- Add non-active breadcrumb with the title \"Add new MODEL_NAME\".\n- This tag accepts model instance as an argument.\n- Example usage:\n- {% breadcrumb_object person %}\n- '''\n- title = str(obj)\n- url = obj.get_absolute_url()\n- return create_crumb(title, url, active=False)\n-\n-\[email protected]_tag\n-def breadcrumb_main_page():\n- '''\n- Special case of ``breadcrumb_url``. 
In all templates there's always a link\n- to the main page so I wanted to save everyone thinking & writing by\n- introducing this helper tag.\n- Example usage:\n- {% breadcrumb_main_page %}\n- '''\n- title = 'Amy'\n- url = reverse('index')\n- return create_crumb(title, url)\n-\n-\n-def create_crumb(title, url=None, active=False):\n- '''\n- Helper function that creates breadcrumb.\n- '''\n- active_str = ''\n- if active:\n- active_str = ' class=\"active\"'\n-\n- title = escape(title)\n- inner_str = title\n- if url:\n- inner_str = '<a href=\"{0}\">{1}</a>'.format(url, title)\n-\n- crumb = '<li{0}>{1}</li>'.format(active_str, inner_str)\n-\n- return crumb\n", "issue": "Deal with breadcrumbs\nAs Greg mentioned in #296, he hadn't liked the way breadcrumbs repeat current page title (or header) in their last element, for example: page \"Event 2015-05-25-something\" will have breadcrumbs \"Amy / All events / Event 2015-05-25-something\".\n\nI took a look at other big websites and how they do breadcrumbs and @gvwilson was right. They don't repeat current site at the end of breadcrumbs.\n\nThis means we'd only have breadcrumbs at most 3 links long: Amy / All \\* / \\* [ / action ], for example:\n\nWas:\n- Amy / All events / Event 2015-05-25-something / Edit\n\nWill be:\n- Amy / All events / Event 2015-05-25-something\n\nBut this does not bug me. In case of `All *` pages, we can just as well drop breadcrumbs (because they'd look like \"Amy / \").\n\nSo I don't really know what to do:\n1. Display breadcrumbs on the same pages as now, but hide the last item.\n2. Display breadcrumbs on the pages that would have more than 1 breadcrumbs.\n3. Drop breadcrumbs completely.\n\n", "before_files": [{"content": "import logging\n\nfrom django import template\nfrom django.core.urlresolvers import reverse\nfrom django.utils.encoding import force_text\nfrom django.utils.html import escape\n\nregister = template.Library()\n_LOG = logging.getLogger(__name__)\n\n\[email protected]_tag\ndef breadcrumb(title, url):\n '''\n Create a simple anchor with provided text and already-resolved URL.\n Example usage:\n {% breadcrumb \"Title of breadcrumb\" resolved_url %}\n '''\n return create_crumb(title, url)\n\n\[email protected]_tag\ndef breadcrumb_url(title, url_name):\n '''\n Add non-active breadcrumb with specified title. 
Second argument should be\n a string name of URL that needs to be resolved.\n Example usage:\n {% breadcrumb_url \"Title of breadcrumb\" url_name %}\n '''\n url = reverse(url_name)\n return create_crumb(title, url)\n\n\[email protected]_tag\ndef breadcrumb_active(title):\n '''\n Add active breadcrumb, but not in an anchor.\n Example usage:\n {% breadcrumb_active \"Title of breadcrumb\" %}\n '''\n return create_crumb(str(title), url=None, active=True)\n\n\[email protected]_tag\ndef breadcrumb_index_all_objects(model):\n '''\n Add breadcrumb linking to the listing of all objects of specific type.\n This tag accepts both models or model instances as an argument.\n Example usage:\n {% breadcrumb_index_all_objects model %}\n {% breadcrumb_index_all_objects person %}\n '''\n plural = force_text(model._meta.verbose_name_plural)\n title = 'All {}'.format(plural)\n url_name = 'all_{}'.format(plural)\n url = reverse(url_name)\n return create_crumb(title, url)\n\n\[email protected]_tag\ndef breadcrumb_edit_object(obj):\n '''\n Add an active breadcrumb with the title \"Edit MODEL_NAME\".\n This tag accepts model instance as an argument.\n Example usage:\n {% breadcrumb_edit_object person %}\n '''\n singular = force_text(obj._meta.verbose_name)\n title = 'Edit {}'.format(singular)\n return create_crumb(title, url=None, active=True)\n\n\[email protected]_tag\ndef breadcrumb_new_object(model):\n '''\n Add an active breadcrumb with the title \"Add new MODEL_NAME\".\n This tag accepts model class as an argument.\n Example usage:\n {% breadcrumb_new_object person %}\n '''\n singular = force_text(model._meta.verbose_name)\n title = 'Add new {}'.format(singular)\n return create_crumb(title, url=None, active=True)\n\n\[email protected]_tag\ndef breadcrumb_object(obj):\n '''\n Add non-active breadcrumb with the title \"Add new MODEL_NAME\".\n This tag accepts model instance as an argument.\n Example usage:\n {% breadcrumb_object person %}\n '''\n title = str(obj)\n url = obj.get_absolute_url()\n return create_crumb(title, url, active=False)\n\n\[email protected]_tag\ndef breadcrumb_main_page():\n '''\n Special case of ``breadcrumb_url``. In all templates there's always a link\n to the main page so I wanted to save everyone thinking & writing by\n introducing this helper tag.\n Example usage:\n {% breadcrumb_main_page %}\n '''\n title = 'Amy'\n url = reverse('index')\n return create_crumb(title, url)\n\n\ndef create_crumb(title, url=None, active=False):\n '''\n Helper function that creates breadcrumb.\n '''\n active_str = ''\n if active:\n active_str = ' class=\"active\"'\n\n title = escape(title)\n inner_str = title\n if url:\n inner_str = '<a href=\"{0}\">{1}</a>'.format(url, title)\n\n crumb = '<li{0}>{1}</li>'.format(active_str, inner_str)\n\n return crumb\n", "path": "workshops/templatetags/breadcrumbs.py"}], "after_files": [{"content": null, "path": "workshops/templatetags/breadcrumbs.py"}]}
| 1,608 | 950 |
gh_patches_debug_20266
|
rasdani/github-patches
|
git_diff
|
getnikola__nikola-2725
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sections conflict with pages that would replace them
> ERROR: Two different tasks can't have a common target.'output/sec1/index.html' is a target for render_pages:output/sec1/index.html and render_taxonomies:output/sec1/index.html.
To reproduce:
* set POSTS and PAGES to output to the root of the site
* create `posts/sec1/foo.rst` and `pages/sec1.rst`
`should_generate_classification_page` is supposed to prevent this, but it fails — `post_list` only contains posts from the section, so it doesn’t check the page, and thus fails. Perhaps we could fix it by giving all posts/pages to that function?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/task/sections.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2017 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Render the blog indexes."""
28
29 from __future__ import unicode_literals
30
31 from nikola.plugin_categories import Taxonomy
32 from nikola import utils
33
34
35 class ClassifySections(Taxonomy):
36 """Classify the posts by sections."""
37
38 name = "classify_sections"
39
40 classification_name = "section_index"
41 overview_page_variable_name = "sections"
42 more_than_one_classifications_per_post = False
43 has_hierarchy = False
44 generate_atom_feeds_for_post_lists = False
45 template_for_classification_overview = None
46 apply_to_posts = True
47 apply_to_pages = False
48 omit_empty_classifications = True
49 also_create_classifications_from_other_languages = False
50 path_handler_docstrings = {
51 'section_index_index': False,
52 'section_index': """Link to the index for a section.
53
54 Example:
55
56 link://section_index/cars => /cars/index.html""",
57 'section_index_atom': """Link to the Atom index for a section.
58
59 Example:
60
61 link://section_index_atom/cars => /cars/index.atom""",
62 'section_index_rss': """Link to the RSS feed for a section.
63
64 Example:
65
66 link://section_index_rss/cars => /cars/rss.xml""",
67 }
68
69 def set_site(self, site):
70 """Set Nikola site."""
71 self.show_list_as_index = site.config["POSTS_SECTIONS_ARE_INDEXES"]
72 self.template_for_single_list = "sectionindex.tmpl" if self.show_list_as_index else "list.tmpl"
73 self.enable_for_lang = {}
74 return super(ClassifySections, self).set_site(site)
75
76 def is_enabled(self, lang=None):
77 """Return True if this taxonomy is enabled, or False otherwise."""
78 if not self.site.config['POSTS_SECTIONS']:
79 return False
80 if lang is not None:
81 return self.enable_for_lang.get(lang, False)
82 return True
83
84 def classify(self, post, lang):
85 """Classify the given post for the given language."""
86 return [post.section_slug(lang)]
87
88 def _get_section_name(self, section, lang):
89 # Check whether we have a name for this section
90 if section in self.site.config['POSTS_SECTION_NAME'](lang):
91 return self.site.config['POSTS_SECTION_NAME'](lang)[section]
92 else:
93 return section.replace('-', ' ').title()
94
95 def get_classification_friendly_name(self, section, lang, only_last_component=False):
96 """Extract a friendly name from the classification."""
97 return self._get_section_name(section, lang)
98
99 def get_path(self, section, lang, dest_type='page'):
100 """A path handler for the given classification."""
101 result = [_f for _f in [section] if _f]
102 if dest_type == 'rss':
103 return result + ['rss.xml'], 'never'
104 return result, 'always'
105
106 def provide_context_and_uptodate(self, section, lang, node=None):
107 """Provide data for the context and the uptodate list for the list of the given classifiation."""
108 kw = {
109 "messages": self.site.MESSAGES,
110 }
111 section_name = self._get_section_name(section, lang)
112 # Compose section title
113 section_title = section_name
114 posts_section_title = self.site.config['POSTS_SECTION_TITLE'](lang)
115 if isinstance(posts_section_title, dict):
116 if section in posts_section_title:
117 section_title = posts_section_title[section]
118 elif isinstance(posts_section_title, (utils.bytes_str, utils.unicode_str)):
119 section_title = posts_section_title
120 section_title = section_title.format(name=section_name)
121 # Compose context
122 context = {
123 "title": section_title,
124 "description": self.site.config['POSTS_SECTION_DESCRIPTIONS'](lang)[section] if section in self.site.config['POSTS_SECTION_DESCRIPTIONS'](lang) else "",
125 "pagekind": ["section_page", "index" if self.show_list_as_index else "list"]
126 }
127 kw.update(context)
128 return context, kw
129
130 def postprocess_posts_per_classification(self, posts_per_section_per_language, flat_hierarchy_per_lang=None, hierarchy_lookup_per_lang=None):
131 """Rearrange, modify or otherwise use the list of posts per classification and per language."""
132 for lang, posts_per_section in posts_per_section_per_language.items():
133 # Don't build sections when there is only one, a.k.a. default setups
134 sections = set()
135 for section, posts in posts_per_section.items():
136 for post in posts:
137 if not self.site.config["SHOW_UNTRANSLATED_POSTS"] and not post.is_translation_available(lang):
138 continue
139 sections.add(section)
140 self.enable_for_lang[lang] = (len(sections) > 1)
141
142 def should_generate_classification_page(self, dirname, post_list, lang):
143 """Only generates list of posts for classification if this function returns True."""
144 short_destination = dirname + '/' + self.site.config['INDEX_FILE']
145 for post in post_list:
146 # If there is an index.html pending to be created from a page, do not generate the section page.
147 # The section page would be useless anyways. (via Issue #2613)
148 if post.destination_path(lang, sep='/') == short_destination:
149 return False
150 return True
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nikola/plugins/task/sections.py b/nikola/plugins/task/sections.py
--- a/nikola/plugins/task/sections.py
+++ b/nikola/plugins/task/sections.py
@@ -142,9 +142,11 @@
def should_generate_classification_page(self, dirname, post_list, lang):
"""Only generates list of posts for classification if this function returns True."""
short_destination = dirname + '/' + self.site.config['INDEX_FILE']
- for post in post_list:
- # If there is an index.html pending to be created from a page, do not generate the section page.
- # The section page would be useless anyways. (via Issue #2613)
+ # If there is an index.html pending to be created from a page, do not generate the section page.
+ # The section page would be useless anyways. (via Issue #2613)
+ for post in self.site.timeline:
+ if not self.site.config["SHOW_UNTRANSLATED_POSTS"] and not post.is_translation_available(lang):
+ continue
if post.destination_path(lang, sep='/') == short_destination:
return False
return True
|
{"golden_diff": "diff --git a/nikola/plugins/task/sections.py b/nikola/plugins/task/sections.py\n--- a/nikola/plugins/task/sections.py\n+++ b/nikola/plugins/task/sections.py\n@@ -142,9 +142,11 @@\n def should_generate_classification_page(self, dirname, post_list, lang):\n \"\"\"Only generates list of posts for classification if this function returns True.\"\"\"\n short_destination = dirname + '/' + self.site.config['INDEX_FILE']\n- for post in post_list:\n- # If there is an index.html pending to be created from a page, do not generate the section page.\n- # The section page would be useless anyways. (via Issue #2613)\n+ # If there is an index.html pending to be created from a page, do not generate the section page.\n+ # The section page would be useless anyways. (via Issue #2613)\n+ for post in self.site.timeline:\n+ if not self.site.config[\"SHOW_UNTRANSLATED_POSTS\"] and not post.is_translation_available(lang):\n+ continue\n if post.destination_path(lang, sep='/') == short_destination:\n return False\n return True\n", "issue": "Sections conflict with pages that would replace them\n> ERROR: Two different tasks can't have a common target.'output/sec1/index.html' is a target for render_pages:output/sec1/index.html and render_taxonomies:output/sec1/index.html.\r\n\r\nTo reproduce:\r\n\r\n* set POSTS and PAGES to output to the root of the site\r\n* create `posts/sec1/foo.rst` and `pages/sec1.rst`\r\n\r\n`should_generate_classification_page` is supposed to prevent this, but it fails \u2014 `post_list` only contains posts from the section, so it doesn\u2019t check the page, and thus fails. Perhaps we could fix it by giving all posts/pages to that function?\nSections conflict with pages that would replace them\n> ERROR: Two different tasks can't have a common target.'output/sec1/index.html' is a target for render_pages:output/sec1/index.html and render_taxonomies:output/sec1/index.html.\r\n\r\nTo reproduce:\r\n\r\n* set POSTS and PAGES to output to the root of the site\r\n* create `posts/sec1/foo.rst` and `pages/sec1.rst`\r\n\r\n`should_generate_classification_page` is supposed to prevent this, but it fails \u2014 `post_list` only contains posts from the section, so it doesn\u2019t check the page, and thus fails. Perhaps we could fix it by giving all posts/pages to that function?\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2017 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Render the blog indexes.\"\"\"\n\nfrom __future__ import unicode_literals\n\nfrom nikola.plugin_categories import Taxonomy\nfrom nikola import utils\n\n\nclass ClassifySections(Taxonomy):\n \"\"\"Classify the posts by sections.\"\"\"\n\n name = \"classify_sections\"\n\n classification_name = \"section_index\"\n overview_page_variable_name = \"sections\"\n more_than_one_classifications_per_post = False\n has_hierarchy = False\n generate_atom_feeds_for_post_lists = False\n template_for_classification_overview = None\n apply_to_posts = True\n apply_to_pages = False\n omit_empty_classifications = True\n also_create_classifications_from_other_languages = False\n path_handler_docstrings = {\n 'section_index_index': False,\n 'section_index': \"\"\"Link to the index for a section.\n\nExample:\n\nlink://section_index/cars => /cars/index.html\"\"\",\n 'section_index_atom': \"\"\"Link to the Atom index for a section.\n\nExample:\n\nlink://section_index_atom/cars => /cars/index.atom\"\"\",\n 'section_index_rss': \"\"\"Link to the RSS feed for a section.\n\nExample:\n\nlink://section_index_rss/cars => /cars/rss.xml\"\"\",\n }\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n self.show_list_as_index = site.config[\"POSTS_SECTIONS_ARE_INDEXES\"]\n self.template_for_single_list = \"sectionindex.tmpl\" if self.show_list_as_index else \"list.tmpl\"\n self.enable_for_lang = {}\n return super(ClassifySections, self).set_site(site)\n\n def is_enabled(self, lang=None):\n \"\"\"Return True if this taxonomy is enabled, or False otherwise.\"\"\"\n if not self.site.config['POSTS_SECTIONS']:\n return False\n if lang is not None:\n return self.enable_for_lang.get(lang, False)\n return True\n\n def classify(self, post, lang):\n \"\"\"Classify the given post for the given language.\"\"\"\n return [post.section_slug(lang)]\n\n def _get_section_name(self, section, lang):\n # Check whether we have a name for this section\n if section in self.site.config['POSTS_SECTION_NAME'](lang):\n return self.site.config['POSTS_SECTION_NAME'](lang)[section]\n else:\n return section.replace('-', ' ').title()\n\n def get_classification_friendly_name(self, section, lang, only_last_component=False):\n \"\"\"Extract a friendly name from the classification.\"\"\"\n return self._get_section_name(section, lang)\n\n def get_path(self, section, lang, dest_type='page'):\n \"\"\"A path handler for the given classification.\"\"\"\n result = [_f for _f in [section] if _f]\n if dest_type == 'rss':\n return result + ['rss.xml'], 'never'\n return result, 'always'\n\n def provide_context_and_uptodate(self, section, lang, node=None):\n \"\"\"Provide data for the context and the uptodate list for the list of the given classifiation.\"\"\"\n kw = {\n \"messages\": self.site.MESSAGES,\n }\n section_name = self._get_section_name(section, lang)\n # Compose section title\n section_title = section_name\n posts_section_title = self.site.config['POSTS_SECTION_TITLE'](lang)\n if isinstance(posts_section_title, dict):\n if section in posts_section_title:\n section_title = posts_section_title[section]\n elif isinstance(posts_section_title, (utils.bytes_str, utils.unicode_str)):\n section_title = posts_section_title\n section_title = section_title.format(name=section_name)\n # 
Compose context\n context = {\n \"title\": section_title,\n \"description\": self.site.config['POSTS_SECTION_DESCRIPTIONS'](lang)[section] if section in self.site.config['POSTS_SECTION_DESCRIPTIONS'](lang) else \"\",\n \"pagekind\": [\"section_page\", \"index\" if self.show_list_as_index else \"list\"]\n }\n kw.update(context)\n return context, kw\n\n def postprocess_posts_per_classification(self, posts_per_section_per_language, flat_hierarchy_per_lang=None, hierarchy_lookup_per_lang=None):\n \"\"\"Rearrange, modify or otherwise use the list of posts per classification and per language.\"\"\"\n for lang, posts_per_section in posts_per_section_per_language.items():\n # Don't build sections when there is only one, a.k.a. default setups\n sections = set()\n for section, posts in posts_per_section.items():\n for post in posts:\n if not self.site.config[\"SHOW_UNTRANSLATED_POSTS\"] and not post.is_translation_available(lang):\n continue\n sections.add(section)\n self.enable_for_lang[lang] = (len(sections) > 1)\n\n def should_generate_classification_page(self, dirname, post_list, lang):\n \"\"\"Only generates list of posts for classification if this function returns True.\"\"\"\n short_destination = dirname + '/' + self.site.config['INDEX_FILE']\n for post in post_list:\n # If there is an index.html pending to be created from a page, do not generate the section page.\n # The section page would be useless anyways. (via Issue #2613)\n if post.destination_path(lang, sep='/') == short_destination:\n return False\n return True\n", "path": "nikola/plugins/task/sections.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2017 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Render the blog indexes.\"\"\"\n\nfrom __future__ import unicode_literals\n\nfrom nikola.plugin_categories import Taxonomy\nfrom nikola import utils\n\n\nclass ClassifySections(Taxonomy):\n \"\"\"Classify the posts by sections.\"\"\"\n\n name = \"classify_sections\"\n\n classification_name = \"section_index\"\n overview_page_variable_name = \"sections\"\n more_than_one_classifications_per_post = False\n has_hierarchy = False\n generate_atom_feeds_for_post_lists = False\n template_for_classification_overview = None\n apply_to_posts = True\n apply_to_pages = False\n omit_empty_classifications = True\n also_create_classifications_from_other_languages = False\n path_handler_docstrings = {\n 'section_index_index': False,\n 'section_index': \"\"\"Link to the index for a section.\n\nExample:\n\nlink://section_index/cars => /cars/index.html\"\"\",\n 'section_index_atom': \"\"\"Link to the Atom index for a section.\n\nExample:\n\nlink://section_index_atom/cars => /cars/index.atom\"\"\",\n 'section_index_rss': \"\"\"Link to the RSS feed for a section.\n\nExample:\n\nlink://section_index_rss/cars => /cars/rss.xml\"\"\",\n }\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n self.show_list_as_index = site.config[\"POSTS_SECTIONS_ARE_INDEXES\"]\n self.template_for_single_list = \"sectionindex.tmpl\" if self.show_list_as_index else \"list.tmpl\"\n self.enable_for_lang = {}\n return super(ClassifySections, self).set_site(site)\n\n def is_enabled(self, lang=None):\n \"\"\"Return True if this taxonomy is enabled, or False otherwise.\"\"\"\n if not self.site.config['POSTS_SECTIONS']:\n return False\n if lang is not None:\n return self.enable_for_lang.get(lang, False)\n return True\n\n def classify(self, post, lang):\n \"\"\"Classify the given post for the given language.\"\"\"\n return [post.section_slug(lang)]\n\n def _get_section_name(self, section, lang):\n # Check whether we have a name for this section\n if section in self.site.config['POSTS_SECTION_NAME'](lang):\n return self.site.config['POSTS_SECTION_NAME'](lang)[section]\n else:\n return section.replace('-', ' ').title()\n\n def get_classification_friendly_name(self, section, lang, only_last_component=False):\n \"\"\"Extract a friendly name from the classification.\"\"\"\n return self._get_section_name(section, lang)\n\n def get_path(self, section, lang, dest_type='page'):\n \"\"\"A path handler for the given classification.\"\"\"\n result = [_f for _f in [section] if _f]\n if dest_type == 'rss':\n return result + ['rss.xml'], 'never'\n return result, 'always'\n\n def provide_context_and_uptodate(self, section, lang, node=None):\n \"\"\"Provide data for the context and the uptodate list for the list of the given classifiation.\"\"\"\n kw = {\n \"messages\": self.site.MESSAGES,\n }\n section_name = self._get_section_name(section, lang)\n # Compose section title\n section_title = section_name\n posts_section_title = self.site.config['POSTS_SECTION_TITLE'](lang)\n if isinstance(posts_section_title, dict):\n if section in posts_section_title:\n section_title = posts_section_title[section]\n elif isinstance(posts_section_title, (utils.bytes_str, utils.unicode_str)):\n section_title = posts_section_title\n section_title = section_title.format(name=section_name)\n # 
Compose context\n context = {\n \"title\": section_title,\n \"description\": self.site.config['POSTS_SECTION_DESCRIPTIONS'](lang)[section] if section in self.site.config['POSTS_SECTION_DESCRIPTIONS'](lang) else \"\",\n \"pagekind\": [\"section_page\", \"index\" if self.show_list_as_index else \"list\"]\n }\n kw.update(context)\n return context, kw\n\n def postprocess_posts_per_classification(self, posts_per_section_per_language, flat_hierarchy_per_lang=None, hierarchy_lookup_per_lang=None):\n \"\"\"Rearrange, modify or otherwise use the list of posts per classification and per language.\"\"\"\n for lang, posts_per_section in posts_per_section_per_language.items():\n # Don't build sections when there is only one, a.k.a. default setups\n sections = set()\n for section, posts in posts_per_section.items():\n for post in posts:\n if not self.site.config[\"SHOW_UNTRANSLATED_POSTS\"] and not post.is_translation_available(lang):\n continue\n sections.add(section)\n self.enable_for_lang[lang] = (len(sections) > 1)\n\n def should_generate_classification_page(self, dirname, post_list, lang):\n \"\"\"Only generates list of posts for classification if this function returns True.\"\"\"\n short_destination = dirname + '/' + self.site.config['INDEX_FILE']\n # If there is an index.html pending to be created from a page, do not generate the section page.\n # The section page would be useless anyways. (via Issue #2613)\n for post in self.site.timeline:\n if not self.site.config[\"SHOW_UNTRANSLATED_POSTS\"] and not post.is_translation_available(lang):\n continue\n if post.destination_path(lang, sep='/') == short_destination:\n return False\n return True\n", "path": "nikola/plugins/task/sections.py"}]}
| 2,273 | 259 |
gh_patches_debug_12017
|
rasdani/github-patches
|
git_diff
|
spyder-ide__spyder-8378
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Title field of error dialog is too small in macOS
This can be clearly seen in the following image:

----
@juanis2112, this field is defined here:
https://github.com/spyder-ide/spyder/blob/master/spyder/widgets/reporterror.py#L158
I don't know exactly how to do this, but QLineEdit has several methods to define its size (e.g. `setMinimumWidth` or `setMaximumWidth`). You can find the full list here:
http://doc.qt.io/qt-5/qlineedit-members.html
I'm sure one of them will allow us to set the QLineEdit width to expand to the size of the report error dialog.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `spyder/widgets/reporterror.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # -----------------------------------------------------------------------------
3 # Copyright © Spyder Project Contributors
4 #
5 # Licensed under the terms of the MIT License
6 # (see spyder/__init__.py for details)
7 # -----------------------------------------------------------------------------
8
9 """Report Error Dialog"""
10
11 # Standard library imports
12 import sys
13
14 # Third party imports
15 from qtpy.QtCore import Qt, Signal
16 from qtpy.QtWidgets import (QApplication, QCheckBox, QDialog, QFormLayout,
17 QHBoxLayout, QLabel, QLineEdit, QMessageBox,
18 QPlainTextEdit, QPushButton, QVBoxLayout)
19
20 # Local imports
21 from spyder import __project_url__, __trouble_url__
22 from spyder.config.base import _
23 from spyder.config.gui import get_font
24 from spyder.config.gui import CONF
25 from spyder.utils import icon_manager as ima
26 from spyder.utils.qthelpers import restore_keyevent
27 from spyder.widgets.github.backend import GithubBackend
28 from spyder.plugins.editor.widgets.codeeditor import CodeEditor
29 from spyder.widgets.mixins import BaseEditMixin, TracebackLinksMixin
30 from spyder.plugins.editor.widgets.base import ConsoleBaseWidget
31
32
33 # Minimum number of characters to introduce in the title and
34 # description fields before being able to send the report to
35 # Github.
36 TITLE_MIN_CHARS = 15
37 DESC_MIN_CHARS = 50
38
39
40 class DescriptionWidget(CodeEditor):
41 """Widget to enter error description."""
42
43 def __init__(self, parent=None):
44 CodeEditor.__init__(self, parent)
45
46 # Editor options
47 self.setup_editor(
48 language='md',
49 color_scheme=CONF.get('appearance',
50 'selected'),
51 linenumbers=False,
52 scrollflagarea=False,
53 wrap=True,
54 edge_line=False,
55 highlight_current_line=False,
56 highlight_current_cell=False,
57 occurrence_highlighting=False,
58 auto_unindent=False)
59
60 # Set font
61 self.set_font(get_font())
62
63 # Header
64 self.header = (
65 "### What steps will reproduce the problem?\n\n"
66 "<!--- You can use Markdown here --->\n\n")
67 self.set_text(self.header)
68 self.move_cursor(len(self.header))
69 self.header_end_pos = self.get_position('eof')
70
71 def remove_text(self):
72 """Remove text."""
73 self.truncate_selection(self.header_end_pos)
74 self.remove_selected_text()
75
76 def cut(self):
77 """Cut text"""
78 self.truncate_selection(self.header_end_pos)
79 if self.has_selected_text():
80 CodeEditor.cut(self)
81
82 def keyPressEvent(self, event):
83 """Reimplemented Qt Method to avoid removing the header."""
84 event, text, key, ctrl, shift = restore_keyevent(event)
85 cursor_position = self.get_position('cursor')
86
87 if cursor_position < self.header_end_pos:
88 self.restrict_cursor_position(self.header_end_pos, 'eof')
89 elif key == Qt.Key_Delete:
90 if self.has_selected_text():
91 self.remove_text()
92 else:
93 self.stdkey_clear()
94 elif key == Qt.Key_Backspace:
95 if self.has_selected_text():
96 self.remove_text()
97 elif self.header_end_pos == cursor_position:
98 return
99 else:
100 self.stdkey_backspace()
101 elif key == Qt.Key_X and ctrl:
102 self.cut()
103 else:
104 CodeEditor.keyPressEvent(self, event)
105
106 def contextMenuEvent(self, event):
107 """Reimplemented Qt Method to not show the context menu."""
108 pass
109
110
111 class ShowErrorWidget(TracebackLinksMixin, ConsoleBaseWidget, BaseEditMixin):
112 """Widget to show errors as they appear in the Internal console."""
113 QT_CLASS = QPlainTextEdit
114 go_to_error = Signal(str)
115
116 def __init__(self, parent=None):
117 ConsoleBaseWidget.__init__(self, parent)
118 BaseEditMixin.__init__(self)
119 TracebackLinksMixin.__init__(self)
120 self.setReadOnly(True)
121
122
123 class SpyderErrorDialog(QDialog):
124 """Custom error dialog for error reporting."""
125
126 def __init__(self, parent=None, is_report=False):
127 QDialog.__init__(self, parent)
128 self.is_report = is_report
129
130 self.setWindowTitle(_("Issue reporter"))
131 self.setModal(True)
132
133 # To save the traceback sent to the internal console
134 self.error_traceback = ""
135
136 # Dialog main label
137 if self.is_report:
138 title = _("Please fill the following information")
139 else:
140 title = _("Spyder has encountered an internal problem!")
141 main_label = QLabel(
142 _("<h3>{title}</h3>"
143 "Before reporting this problem, <i>please</i> consult our "
144 "comprehensive "
145 "<b><a href=\"{trouble_url}\">Troubleshooting Guide</a></b> "
146 "which should help solve most issues, and search for "
147 "<b><a href=\"{project_url}\">known bugs</a></b> "
148 "matching your error message or problem description for a "
149 "quicker solution."
150 ).format(title=title, trouble_url=__trouble_url__,
151 project_url=__project_url__))
152 main_label.setOpenExternalLinks(True)
153 main_label.setWordWrap(True)
154 main_label.setAlignment(Qt.AlignJustify)
155 main_label.setStyleSheet('font-size: 12px;')
156
157 # Issue title
158 self.title = QLineEdit()
159 self.title.textChanged.connect(self._contents_changed)
160 self.title_chars_label = QLabel(_("{} more characters "
161 "to go...").format(TITLE_MIN_CHARS))
162 form_layout = QFormLayout()
163 red_asterisk = '<font color="Red">*</font>'
164 title_label = QLabel(_("<b>Title</b>: {}").format(red_asterisk))
165 form_layout.setWidget(0, QFormLayout.LabelRole, title_label)
166 form_layout.setWidget(0, QFormLayout.FieldRole, self.title)
167
168 # Description
169 steps_header = QLabel(
170 _("<b>Steps to reproduce:</b> {}").format(red_asterisk))
171 steps_text = QLabel(_("Please enter a detailed step-by-step "
172 "description (in English) of what led up to "
173 "the problem below. Issue reports without a "
174 "clear way to reproduce them will be closed."))
175 steps_text.setWordWrap(True)
176 steps_text.setAlignment(Qt.AlignJustify)
177 steps_text.setStyleSheet('font-size: 12px;')
178
179 # Field to input the description of the problem
180 self.input_description = DescriptionWidget(self)
181
182 # Only allow to submit to Github if we have a long enough description
183 self.input_description.textChanged.connect(self._contents_changed)
184
185 # Widget to show errors
186 self.details = ShowErrorWidget(self)
187 self.details.set_pythonshell_font(get_font())
188 self.details.hide()
189
190 # Label to show missing chars
191 self.initial_chars = len(self.input_description.toPlainText())
192 self.desc_chars_label = QLabel(_("{} more characters "
193 "to go...").format(DESC_MIN_CHARS))
194
195 # Checkbox to dismiss future errors
196 self.dismiss_box = QCheckBox(_("Hide all future errors during this "
197 "session"))
198 if self.is_report:
199 self.dismiss_box.hide()
200
201 # Dialog buttons
202 gh_icon = ima.icon('github')
203 self.submit_btn = QPushButton(gh_icon, _('Submit to Github'))
204 self.submit_btn.setEnabled(False)
205 self.submit_btn.clicked.connect(self._submit_to_github)
206
207 self.details_btn = QPushButton(_('Show details'))
208 self.details_btn.clicked.connect(self._show_details)
209 if self.is_report:
210 self.details_btn.hide()
211
212 self.close_btn = QPushButton(_('Close'))
213 if self.is_report:
214 self.close_btn.clicked.connect(self.reject)
215
216 # Buttons layout
217 buttons_layout = QHBoxLayout()
218 buttons_layout.addWidget(self.submit_btn)
219 buttons_layout.addWidget(self.details_btn)
220 buttons_layout.addWidget(self.close_btn)
221
222 # Main layout
223 layout = QVBoxLayout()
224 layout.addWidget(main_label)
225 layout.addSpacing(20)
226 layout.addLayout(form_layout)
227 layout.addWidget(self.title_chars_label)
228 layout.addSpacing(12)
229 layout.addWidget(steps_header)
230 layout.addSpacing(-1)
231 layout.addWidget(steps_text)
232 layout.addSpacing(1)
233 layout.addWidget(self.input_description)
234 layout.addWidget(self.details)
235 layout.addWidget(self.desc_chars_label)
236 layout.addSpacing(15)
237 layout.addWidget(self.dismiss_box)
238 layout.addSpacing(15)
239 layout.addLayout(buttons_layout)
240 layout.setContentsMargins(25, 20, 25, 10)
241 self.setLayout(layout)
242
243 self.resize(570, 600)
244 self.title.setFocus()
245
246 # Set Tab key focus order
247 self.setTabOrder(self.title, self.input_description)
248
249 def _submit_to_github(self):
250 """Action to take when pressing the submit button."""
251 # Get reference to the main window
252 if self.parent() is not None:
253 if getattr(self.parent(), 'main', False):
254 # This covers the case when the dialog is attached
255 # to the internal console
256 main = self.parent().main
257 else:
258 # Else the dialog is attached to the main window
259 # directly
260 main = self.parent()
261 else:
262 main = None
263
264 # Getting description and traceback
265 title = self.title.text()
266 description = self.input_description.toPlainText()
267 traceback = self.error_traceback[:-1] # Remove last EOL
268
269 # Render issue
270 if main is not None:
271 issue_text = main.render_issue(description=description,
272 traceback=traceback)
273 else:
274 issue_text = description
275
276 try:
277 if main is None:
278 org = 'ccordoba12'
279 else:
280 org = 'spyder-ide'
281 github_backend = GithubBackend(org, 'spyder', parent_widget=main)
282 github_report = github_backend.send_report(title, issue_text)
283 if github_report:
284 self.close()
285 except Exception:
286 ret = QMessageBox.question(
287 self, _('Error'),
288 _("An error occurred while trying to send the issue to "
289 "Github automatically. Would you like to open it "
290 "manually?<br><br>"
291 "If so, please make sure to paste your clipboard "
292 "into the issue report box that will appear in a new "
293 "browser tab before clicking <i>Submit</i> on that "
294 "page."))
295 if ret in [QMessageBox.Yes, QMessageBox.Ok]:
296 QApplication.clipboard().setText(issue_text)
297 issue_body = (
298 " \n<!--- *** BEFORE SUBMITTING: PASTE CLIPBOARD HERE "
299 "TO COMPLETE YOUR REPORT *** ---!>\n")
300 if main is not None:
301 main.report_issue(body=issue_body, title=title,
302 open_webpage=True)
303 else:
304 pass
305
306 def append_traceback(self, text):
307 """Append text to the traceback, to be displayed in details."""
308 self.error_traceback += text
309
310 def _show_details(self):
311 """Show traceback on its own dialog"""
312 if self.details.isVisible():
313 self.details.hide()
314 self.details_btn.setText(_('Show details'))
315 else:
316 self.resize(570, 700)
317 self.details.document().setPlainText('')
318 self.details.append_text_to_shell(self.error_traceback,
319 error=True,
320 prompt=False)
321 self.details.show()
322 self.details_btn.setText(_('Hide details'))
323
324 def _contents_changed(self):
325 """Activate submit_btn."""
326 desc_chars = (len(self.input_description.toPlainText()) -
327 self.initial_chars)
328 if desc_chars < DESC_MIN_CHARS:
329 self.desc_chars_label.setText(
330 u"{} {}".format(DESC_MIN_CHARS - desc_chars,
331 _("more characters to go...")))
332 else:
333 self.desc_chars_label.setText(_("Description complete; thanks!"))
334
335 title_chars = len(self.title.text())
336 if title_chars < TITLE_MIN_CHARS:
337 self.title_chars_label.setText(
338 u"{} {}".format(TITLE_MIN_CHARS - title_chars,
339 _("more characters to go...")))
340 else:
341 self.title_chars_label.setText(_("Title complete; thanks!"))
342
343 submission_enabled = (desc_chars >= DESC_MIN_CHARS and
344 title_chars >= TITLE_MIN_CHARS)
345 self.submit_btn.setEnabled(submission_enabled)
346
347
348 def test():
349 from spyder.utils.qthelpers import qapplication
350 app = qapplication()
351 dlg = SpyderErrorDialog()
352 dlg.show()
353 sys.exit(dlg.exec_())
354
355
356 if __name__ == "__main__":
357 test()
358
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/spyder/widgets/reporterror.py b/spyder/widgets/reporterror.py
--- a/spyder/widgets/reporterror.py
+++ b/spyder/widgets/reporterror.py
@@ -160,6 +160,7 @@
self.title_chars_label = QLabel(_("{} more characters "
"to go...").format(TITLE_MIN_CHARS))
form_layout = QFormLayout()
+ form_layout.setFieldGrowthPolicy(QFormLayout.ExpandingFieldsGrow)
red_asterisk = '<font color="Red">*</font>'
title_label = QLabel(_("<b>Title</b>: {}").format(red_asterisk))
form_layout.setWidget(0, QFormLayout.LabelRole, title_label)
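For readers unfamiliar with the Qt detail behind this one-line fix: a QFormLayout follows the platform style by default, and on macOS that keeps fields at their size hint, which is why the title QLineEdit stayed narrow. The sketch below is a minimal, self-contained illustration of the growth policy; it uses qtpy like Spyder does, but the dialog and labels are made up for the example rather than Spyder's own widgets.

```python
# Minimal illustration: without ExpandingFieldsGrow, the macOS form style keeps
# the QLineEdit at its size hint; with it, the field stretches with the dialog.
import sys
from qtpy.QtWidgets import (QApplication, QDialog, QFormLayout, QLabel,
                            QLineEdit)

app = QApplication(sys.argv)
dlg = QDialog()
form_layout = QFormLayout(dlg)
# The line added by the patch above: let fields grow with the dialog width.
form_layout.setFieldGrowthPolicy(QFormLayout.ExpandingFieldsGrow)
form_layout.addRow(QLabel("<b>Title</b>:"), QLineEdit())
dlg.resize(570, 100)
dlg.show()
sys.exit(app.exec_())
```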
|
{"golden_diff": "diff --git a/spyder/widgets/reporterror.py b/spyder/widgets/reporterror.py\n--- a/spyder/widgets/reporterror.py\n+++ b/spyder/widgets/reporterror.py\n@@ -160,6 +160,7 @@\n self.title_chars_label = QLabel(_(\"{} more characters \"\n \"to go...\").format(TITLE_MIN_CHARS))\n form_layout = QFormLayout()\n+ form_layout.setFieldGrowthPolicy(QFormLayout.ExpandingFieldsGrow)\n red_asterisk = '<font color=\"Red\">*</font>'\n title_label = QLabel(_(\"<b>Title</b>: {}\").format(red_asterisk))\n form_layout.setWidget(0, QFormLayout.LabelRole, title_label)\n", "issue": "Title field of error dialog is too small in macOS\nThis can be clearly seen in the following image:\r\n\r\n\r\n\r\n----\r\n\r\n@juanis2112, this field is defined here:\r\n\r\nhttps://github.com/spyder-ide/spyder/blob/master/spyder/widgets/reporterror.py#L158\r\n\r\nI don't know exactly how to do this, but QLineEdit has several methods to define its size (e.g. `setMinimumWidth` or `setMaximumWidth`). You can find the full list here:\r\n\r\nhttp://doc.qt.io/qt-5/qlineedit-members.html\r\n\r\nI'm sure one of them will allow us to set the QLineEdit width to expand to the size of the report error dialog.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# -----------------------------------------------------------------------------\n# Copyright \u00a9 Spyder Project Contributors\n#\n# Licensed under the terms of the MIT License\n# (see spyder/__init__.py for details)\n# -----------------------------------------------------------------------------\n\n\"\"\"Report Error Dialog\"\"\"\n\n# Standard library imports\nimport sys\n\n# Third party imports\nfrom qtpy.QtCore import Qt, Signal\nfrom qtpy.QtWidgets import (QApplication, QCheckBox, QDialog, QFormLayout,\n QHBoxLayout, QLabel, QLineEdit, QMessageBox,\n QPlainTextEdit, QPushButton, QVBoxLayout)\n\n# Local imports\nfrom spyder import __project_url__, __trouble_url__\nfrom spyder.config.base import _\nfrom spyder.config.gui import get_font\nfrom spyder.config.gui import CONF\nfrom spyder.utils import icon_manager as ima\nfrom spyder.utils.qthelpers import restore_keyevent\nfrom spyder.widgets.github.backend import GithubBackend\nfrom spyder.plugins.editor.widgets.codeeditor import CodeEditor\nfrom spyder.widgets.mixins import BaseEditMixin, TracebackLinksMixin\nfrom spyder.plugins.editor.widgets.base import ConsoleBaseWidget\n\n\n# Minimum number of characters to introduce in the title and\n# description fields before being able to send the report to\n# Github.\nTITLE_MIN_CHARS = 15\nDESC_MIN_CHARS = 50\n\n\nclass DescriptionWidget(CodeEditor):\n \"\"\"Widget to enter error description.\"\"\"\n\n def __init__(self, parent=None):\n CodeEditor.__init__(self, parent)\n\n # Editor options\n self.setup_editor(\n language='md',\n color_scheme=CONF.get('appearance',\n 'selected'),\n linenumbers=False,\n scrollflagarea=False,\n wrap=True,\n edge_line=False,\n highlight_current_line=False,\n highlight_current_cell=False,\n occurrence_highlighting=False,\n auto_unindent=False)\n\n # Set font\n self.set_font(get_font())\n\n # Header\n self.header = (\n \"### What steps will reproduce the problem?\\n\\n\"\n \"<!--- You can use Markdown here --->\\n\\n\")\n self.set_text(self.header)\n self.move_cursor(len(self.header))\n self.header_end_pos = self.get_position('eof')\n\n def remove_text(self):\n \"\"\"Remove text.\"\"\"\n self.truncate_selection(self.header_end_pos)\n self.remove_selected_text()\n\n def cut(self):\n \"\"\"Cut text\"\"\"\n 
self.truncate_selection(self.header_end_pos)\n if self.has_selected_text():\n CodeEditor.cut(self)\n\n def keyPressEvent(self, event):\n \"\"\"Reimplemented Qt Method to avoid removing the header.\"\"\"\n event, text, key, ctrl, shift = restore_keyevent(event)\n cursor_position = self.get_position('cursor')\n\n if cursor_position < self.header_end_pos:\n self.restrict_cursor_position(self.header_end_pos, 'eof')\n elif key == Qt.Key_Delete:\n if self.has_selected_text():\n self.remove_text()\n else:\n self.stdkey_clear()\n elif key == Qt.Key_Backspace:\n if self.has_selected_text():\n self.remove_text()\n elif self.header_end_pos == cursor_position:\n return\n else:\n self.stdkey_backspace()\n elif key == Qt.Key_X and ctrl:\n self.cut()\n else:\n CodeEditor.keyPressEvent(self, event)\n\n def contextMenuEvent(self, event):\n \"\"\"Reimplemented Qt Method to not show the context menu.\"\"\"\n pass\n\n\nclass ShowErrorWidget(TracebackLinksMixin, ConsoleBaseWidget, BaseEditMixin):\n \"\"\"Widget to show errors as they appear in the Internal console.\"\"\"\n QT_CLASS = QPlainTextEdit\n go_to_error = Signal(str)\n\n def __init__(self, parent=None):\n ConsoleBaseWidget.__init__(self, parent)\n BaseEditMixin.__init__(self)\n TracebackLinksMixin.__init__(self)\n self.setReadOnly(True)\n\n\nclass SpyderErrorDialog(QDialog):\n \"\"\"Custom error dialog for error reporting.\"\"\"\n\n def __init__(self, parent=None, is_report=False):\n QDialog.__init__(self, parent)\n self.is_report = is_report\n\n self.setWindowTitle(_(\"Issue reporter\"))\n self.setModal(True)\n\n # To save the traceback sent to the internal console\n self.error_traceback = \"\"\n\n # Dialog main label\n if self.is_report:\n title = _(\"Please fill the following information\")\n else:\n title = _(\"Spyder has encountered an internal problem!\")\n main_label = QLabel(\n _(\"<h3>{title}</h3>\"\n \"Before reporting this problem, <i>please</i> consult our \"\n \"comprehensive \"\n \"<b><a href=\\\"{trouble_url}\\\">Troubleshooting Guide</a></b> \"\n \"which should help solve most issues, and search for \"\n \"<b><a href=\\\"{project_url}\\\">known bugs</a></b> \"\n \"matching your error message or problem description for a \"\n \"quicker solution.\"\n ).format(title=title, trouble_url=__trouble_url__,\n project_url=__project_url__))\n main_label.setOpenExternalLinks(True)\n main_label.setWordWrap(True)\n main_label.setAlignment(Qt.AlignJustify)\n main_label.setStyleSheet('font-size: 12px;')\n\n # Issue title\n self.title = QLineEdit()\n self.title.textChanged.connect(self._contents_changed)\n self.title_chars_label = QLabel(_(\"{} more characters \"\n \"to go...\").format(TITLE_MIN_CHARS))\n form_layout = QFormLayout()\n red_asterisk = '<font color=\"Red\">*</font>'\n title_label = QLabel(_(\"<b>Title</b>: {}\").format(red_asterisk))\n form_layout.setWidget(0, QFormLayout.LabelRole, title_label)\n form_layout.setWidget(0, QFormLayout.FieldRole, self.title)\n\n # Description\n steps_header = QLabel(\n _(\"<b>Steps to reproduce:</b> {}\").format(red_asterisk))\n steps_text = QLabel(_(\"Please enter a detailed step-by-step \"\n \"description (in English) of what led up to \"\n \"the problem below. 
Issue reports without a \"\n \"clear way to reproduce them will be closed.\"))\n steps_text.setWordWrap(True)\n steps_text.setAlignment(Qt.AlignJustify)\n steps_text.setStyleSheet('font-size: 12px;')\n\n # Field to input the description of the problem\n self.input_description = DescriptionWidget(self)\n\n # Only allow to submit to Github if we have a long enough description\n self.input_description.textChanged.connect(self._contents_changed)\n\n # Widget to show errors\n self.details = ShowErrorWidget(self)\n self.details.set_pythonshell_font(get_font())\n self.details.hide()\n\n # Label to show missing chars\n self.initial_chars = len(self.input_description.toPlainText())\n self.desc_chars_label = QLabel(_(\"{} more characters \"\n \"to go...\").format(DESC_MIN_CHARS))\n\n # Checkbox to dismiss future errors\n self.dismiss_box = QCheckBox(_(\"Hide all future errors during this \"\n \"session\"))\n if self.is_report:\n self.dismiss_box.hide()\n\n # Dialog buttons\n gh_icon = ima.icon('github')\n self.submit_btn = QPushButton(gh_icon, _('Submit to Github'))\n self.submit_btn.setEnabled(False)\n self.submit_btn.clicked.connect(self._submit_to_github)\n\n self.details_btn = QPushButton(_('Show details'))\n self.details_btn.clicked.connect(self._show_details)\n if self.is_report:\n self.details_btn.hide()\n\n self.close_btn = QPushButton(_('Close'))\n if self.is_report:\n self.close_btn.clicked.connect(self.reject)\n\n # Buttons layout\n buttons_layout = QHBoxLayout()\n buttons_layout.addWidget(self.submit_btn)\n buttons_layout.addWidget(self.details_btn)\n buttons_layout.addWidget(self.close_btn)\n\n # Main layout\n layout = QVBoxLayout()\n layout.addWidget(main_label)\n layout.addSpacing(20)\n layout.addLayout(form_layout)\n layout.addWidget(self.title_chars_label)\n layout.addSpacing(12)\n layout.addWidget(steps_header)\n layout.addSpacing(-1)\n layout.addWidget(steps_text)\n layout.addSpacing(1)\n layout.addWidget(self.input_description)\n layout.addWidget(self.details)\n layout.addWidget(self.desc_chars_label)\n layout.addSpacing(15)\n layout.addWidget(self.dismiss_box)\n layout.addSpacing(15)\n layout.addLayout(buttons_layout)\n layout.setContentsMargins(25, 20, 25, 10)\n self.setLayout(layout)\n\n self.resize(570, 600)\n self.title.setFocus()\n\n # Set Tab key focus order\n self.setTabOrder(self.title, self.input_description)\n\n def _submit_to_github(self):\n \"\"\"Action to take when pressing the submit button.\"\"\"\n # Get reference to the main window\n if self.parent() is not None:\n if getattr(self.parent(), 'main', False):\n # This covers the case when the dialog is attached\n # to the internal console\n main = self.parent().main\n else:\n # Else the dialog is attached to the main window\n # directly\n main = self.parent()\n else:\n main = None\n\n # Getting description and traceback\n title = self.title.text()\n description = self.input_description.toPlainText()\n traceback = self.error_traceback[:-1] # Remove last EOL\n\n # Render issue\n if main is not None:\n issue_text = main.render_issue(description=description,\n traceback=traceback)\n else:\n issue_text = description\n\n try:\n if main is None:\n org = 'ccordoba12'\n else:\n org = 'spyder-ide'\n github_backend = GithubBackend(org, 'spyder', parent_widget=main)\n github_report = github_backend.send_report(title, issue_text)\n if github_report:\n self.close()\n except Exception:\n ret = QMessageBox.question(\n self, _('Error'),\n _(\"An error occurred while trying to send the issue to \"\n \"Github automatically. 
Would you like to open it \"\n \"manually?<br><br>\"\n \"If so, please make sure to paste your clipboard \"\n \"into the issue report box that will appear in a new \"\n \"browser tab before clicking <i>Submit</i> on that \"\n \"page.\"))\n if ret in [QMessageBox.Yes, QMessageBox.Ok]:\n QApplication.clipboard().setText(issue_text)\n issue_body = (\n \" \\n<!--- *** BEFORE SUBMITTING: PASTE CLIPBOARD HERE \"\n \"TO COMPLETE YOUR REPORT *** ---!>\\n\")\n if main is not None:\n main.report_issue(body=issue_body, title=title,\n open_webpage=True)\n else:\n pass\n\n def append_traceback(self, text):\n \"\"\"Append text to the traceback, to be displayed in details.\"\"\"\n self.error_traceback += text\n\n def _show_details(self):\n \"\"\"Show traceback on its own dialog\"\"\"\n if self.details.isVisible():\n self.details.hide()\n self.details_btn.setText(_('Show details'))\n else:\n self.resize(570, 700)\n self.details.document().setPlainText('')\n self.details.append_text_to_shell(self.error_traceback,\n error=True,\n prompt=False)\n self.details.show()\n self.details_btn.setText(_('Hide details'))\n\n def _contents_changed(self):\n \"\"\"Activate submit_btn.\"\"\"\n desc_chars = (len(self.input_description.toPlainText()) -\n self.initial_chars)\n if desc_chars < DESC_MIN_CHARS:\n self.desc_chars_label.setText(\n u\"{} {}\".format(DESC_MIN_CHARS - desc_chars,\n _(\"more characters to go...\")))\n else:\n self.desc_chars_label.setText(_(\"Description complete; thanks!\"))\n\n title_chars = len(self.title.text())\n if title_chars < TITLE_MIN_CHARS:\n self.title_chars_label.setText(\n u\"{} {}\".format(TITLE_MIN_CHARS - title_chars,\n _(\"more characters to go...\")))\n else:\n self.title_chars_label.setText(_(\"Title complete; thanks!\"))\n\n submission_enabled = (desc_chars >= DESC_MIN_CHARS and\n title_chars >= TITLE_MIN_CHARS)\n self.submit_btn.setEnabled(submission_enabled)\n\n\ndef test():\n from spyder.utils.qthelpers import qapplication\n app = qapplication()\n dlg = SpyderErrorDialog()\n dlg.show()\n sys.exit(dlg.exec_())\n\n\nif __name__ == \"__main__\":\n test()\n", "path": "spyder/widgets/reporterror.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# -----------------------------------------------------------------------------\n# Copyright \u00a9 Spyder Project Contributors\n#\n# Licensed under the terms of the MIT License\n# (see spyder/__init__.py for details)\n# -----------------------------------------------------------------------------\n\n\"\"\"Report Error Dialog\"\"\"\n\n# Standard library imports\nimport sys\n\n# Third party imports\nfrom qtpy.QtCore import Qt, Signal\nfrom qtpy.QtWidgets import (QApplication, QCheckBox, QDialog, QFormLayout,\n QHBoxLayout, QLabel, QLineEdit, QMessageBox,\n QPlainTextEdit, QPushButton, QVBoxLayout)\n\n# Local imports\nfrom spyder import __project_url__, __trouble_url__\nfrom spyder.config.base import _\nfrom spyder.config.gui import get_font\nfrom spyder.config.gui import CONF\nfrom spyder.utils import icon_manager as ima\nfrom spyder.utils.qthelpers import restore_keyevent\nfrom spyder.widgets.github.backend import GithubBackend\nfrom spyder.plugins.editor.widgets.codeeditor import CodeEditor\nfrom spyder.widgets.mixins import BaseEditMixin, TracebackLinksMixin\nfrom spyder.plugins.editor.widgets.base import ConsoleBaseWidget\n\n\n# Minimum number of characters to introduce in the title and\n# description fields before being able to send the report to\n# Github.\nTITLE_MIN_CHARS = 15\nDESC_MIN_CHARS = 50\n\n\nclass 
DescriptionWidget(CodeEditor):\n \"\"\"Widget to enter error description.\"\"\"\n\n def __init__(self, parent=None):\n CodeEditor.__init__(self, parent)\n\n # Editor options\n self.setup_editor(\n language='md',\n color_scheme=CONF.get('appearance',\n 'selected'),\n linenumbers=False,\n scrollflagarea=False,\n wrap=True,\n edge_line=False,\n highlight_current_line=False,\n highlight_current_cell=False,\n occurrence_highlighting=False,\n auto_unindent=False)\n\n # Set font\n self.set_font(get_font())\n\n # Header\n self.header = (\n \"### What steps will reproduce the problem?\\n\\n\"\n \"<!--- You can use Markdown here --->\\n\\n\")\n self.set_text(self.header)\n self.move_cursor(len(self.header))\n self.header_end_pos = self.get_position('eof')\n\n def remove_text(self):\n \"\"\"Remove text.\"\"\"\n self.truncate_selection(self.header_end_pos)\n self.remove_selected_text()\n\n def cut(self):\n \"\"\"Cut text\"\"\"\n self.truncate_selection(self.header_end_pos)\n if self.has_selected_text():\n CodeEditor.cut(self)\n\n def keyPressEvent(self, event):\n \"\"\"Reimplemented Qt Method to avoid removing the header.\"\"\"\n event, text, key, ctrl, shift = restore_keyevent(event)\n cursor_position = self.get_position('cursor')\n\n if cursor_position < self.header_end_pos:\n self.restrict_cursor_position(self.header_end_pos, 'eof')\n elif key == Qt.Key_Delete:\n if self.has_selected_text():\n self.remove_text()\n else:\n self.stdkey_clear()\n elif key == Qt.Key_Backspace:\n if self.has_selected_text():\n self.remove_text()\n elif self.header_end_pos == cursor_position:\n return\n else:\n self.stdkey_backspace()\n elif key == Qt.Key_X and ctrl:\n self.cut()\n else:\n CodeEditor.keyPressEvent(self, event)\n\n def contextMenuEvent(self, event):\n \"\"\"Reimplemented Qt Method to not show the context menu.\"\"\"\n pass\n\n\nclass ShowErrorWidget(TracebackLinksMixin, ConsoleBaseWidget, BaseEditMixin):\n \"\"\"Widget to show errors as they appear in the Internal console.\"\"\"\n QT_CLASS = QPlainTextEdit\n go_to_error = Signal(str)\n\n def __init__(self, parent=None):\n ConsoleBaseWidget.__init__(self, parent)\n BaseEditMixin.__init__(self)\n TracebackLinksMixin.__init__(self)\n self.setReadOnly(True)\n\n\nclass SpyderErrorDialog(QDialog):\n \"\"\"Custom error dialog for error reporting.\"\"\"\n\n def __init__(self, parent=None, is_report=False):\n QDialog.__init__(self, parent)\n self.is_report = is_report\n\n self.setWindowTitle(_(\"Issue reporter\"))\n self.setModal(True)\n\n # To save the traceback sent to the internal console\n self.error_traceback = \"\"\n\n # Dialog main label\n if self.is_report:\n title = _(\"Please fill the following information\")\n else:\n title = _(\"Spyder has encountered an internal problem!\")\n main_label = QLabel(\n _(\"<h3>{title}</h3>\"\n \"Before reporting this problem, <i>please</i> consult our \"\n \"comprehensive \"\n \"<b><a href=\\\"{trouble_url}\\\">Troubleshooting Guide</a></b> \"\n \"which should help solve most issues, and search for \"\n \"<b><a href=\\\"{project_url}\\\">known bugs</a></b> \"\n \"matching your error message or problem description for a \"\n \"quicker solution.\"\n ).format(title=title, trouble_url=__trouble_url__,\n project_url=__project_url__))\n main_label.setOpenExternalLinks(True)\n main_label.setWordWrap(True)\n main_label.setAlignment(Qt.AlignJustify)\n main_label.setStyleSheet('font-size: 12px;')\n\n # Issue title\n self.title = QLineEdit()\n self.title.textChanged.connect(self._contents_changed)\n self.title_chars_label = 
QLabel(_(\"{} more characters \"\n \"to go...\").format(TITLE_MIN_CHARS))\n form_layout = QFormLayout()\n form_layout.setFieldGrowthPolicy(QFormLayout.ExpandingFieldsGrow)\n red_asterisk = '<font color=\"Red\">*</font>'\n title_label = QLabel(_(\"<b>Title</b>: {}\").format(red_asterisk))\n form_layout.setWidget(0, QFormLayout.LabelRole, title_label)\n form_layout.setWidget(0, QFormLayout.FieldRole, self.title)\n\n # Description\n steps_header = QLabel(\n _(\"<b>Steps to reproduce:</b> {}\").format(red_asterisk))\n steps_text = QLabel(_(\"Please enter a detailed step-by-step \"\n \"description (in English) of what led up to \"\n \"the problem below. Issue reports without a \"\n \"clear way to reproduce them will be closed.\"))\n steps_text.setWordWrap(True)\n steps_text.setAlignment(Qt.AlignJustify)\n steps_text.setStyleSheet('font-size: 12px;')\n\n # Field to input the description of the problem\n self.input_description = DescriptionWidget(self)\n\n # Only allow to submit to Github if we have a long enough description\n self.input_description.textChanged.connect(self._contents_changed)\n\n # Widget to show errors\n self.details = ShowErrorWidget(self)\n self.details.set_pythonshell_font(get_font())\n self.details.hide()\n\n # Label to show missing chars\n self.initial_chars = len(self.input_description.toPlainText())\n self.desc_chars_label = QLabel(_(\"{} more characters \"\n \"to go...\").format(DESC_MIN_CHARS))\n\n # Checkbox to dismiss future errors\n self.dismiss_box = QCheckBox(_(\"Hide all future errors during this \"\n \"session\"))\n if self.is_report:\n self.dismiss_box.hide()\n\n # Dialog buttons\n gh_icon = ima.icon('github')\n self.submit_btn = QPushButton(gh_icon, _('Submit to Github'))\n self.submit_btn.setEnabled(False)\n self.submit_btn.clicked.connect(self._submit_to_github)\n\n self.details_btn = QPushButton(_('Show details'))\n self.details_btn.clicked.connect(self._show_details)\n if self.is_report:\n self.details_btn.hide()\n\n self.close_btn = QPushButton(_('Close'))\n if self.is_report:\n self.close_btn.clicked.connect(self.reject)\n\n # Buttons layout\n buttons_layout = QHBoxLayout()\n buttons_layout.addWidget(self.submit_btn)\n buttons_layout.addWidget(self.details_btn)\n buttons_layout.addWidget(self.close_btn)\n\n # Main layout\n layout = QVBoxLayout()\n layout.addWidget(main_label)\n layout.addSpacing(20)\n layout.addLayout(form_layout)\n layout.addWidget(self.title_chars_label)\n layout.addSpacing(12)\n layout.addWidget(steps_header)\n layout.addSpacing(-1)\n layout.addWidget(steps_text)\n layout.addSpacing(1)\n layout.addWidget(self.input_description)\n layout.addWidget(self.details)\n layout.addWidget(self.desc_chars_label)\n layout.addSpacing(15)\n layout.addWidget(self.dismiss_box)\n layout.addSpacing(15)\n layout.addLayout(buttons_layout)\n layout.setContentsMargins(25, 20, 25, 10)\n self.setLayout(layout)\n\n self.resize(570, 600)\n self.title.setFocus()\n\n # Set Tab key focus order\n self.setTabOrder(self.title, self.input_description)\n\n def _submit_to_github(self):\n \"\"\"Action to take when pressing the submit button.\"\"\"\n # Get reference to the main window\n if self.parent() is not None:\n if getattr(self.parent(), 'main', False):\n # This covers the case when the dialog is attached\n # to the internal console\n main = self.parent().main\n else:\n # Else the dialog is attached to the main window\n # directly\n main = self.parent()\n else:\n main = None\n\n # Getting description and traceback\n title = self.title.text()\n description = 
self.input_description.toPlainText()\n traceback = self.error_traceback[:-1] # Remove last EOL\n\n # Render issue\n if main is not None:\n issue_text = main.render_issue(description=description,\n traceback=traceback)\n else:\n issue_text = description\n\n try:\n if main is None:\n org = 'ccordoba12'\n else:\n org = 'spyder-ide'\n github_backend = GithubBackend(org, 'spyder', parent_widget=main)\n github_report = github_backend.send_report(title, issue_text)\n if github_report:\n self.close()\n except Exception:\n ret = QMessageBox.question(\n self, _('Error'),\n _(\"An error occurred while trying to send the issue to \"\n \"Github automatically. Would you like to open it \"\n \"manually?<br><br>\"\n \"If so, please make sure to paste your clipboard \"\n \"into the issue report box that will appear in a new \"\n \"browser tab before clicking <i>Submit</i> on that \"\n \"page.\"))\n if ret in [QMessageBox.Yes, QMessageBox.Ok]:\n QApplication.clipboard().setText(issue_text)\n issue_body = (\n \" \\n<!--- *** BEFORE SUBMITTING: PASTE CLIPBOARD HERE \"\n \"TO COMPLETE YOUR REPORT *** ---!>\\n\")\n if main is not None:\n main.report_issue(body=issue_body, title=title,\n open_webpage=True)\n else:\n pass\n\n def append_traceback(self, text):\n \"\"\"Append text to the traceback, to be displayed in details.\"\"\"\n self.error_traceback += text\n\n def _show_details(self):\n \"\"\"Show traceback on its own dialog\"\"\"\n if self.details.isVisible():\n self.details.hide()\n self.details_btn.setText(_('Show details'))\n else:\n self.resize(570, 700)\n self.details.document().setPlainText('')\n self.details.append_text_to_shell(self.error_traceback,\n error=True,\n prompt=False)\n self.details.show()\n self.details_btn.setText(_('Hide details'))\n\n def _contents_changed(self):\n \"\"\"Activate submit_btn.\"\"\"\n desc_chars = (len(self.input_description.toPlainText()) -\n self.initial_chars)\n if desc_chars < DESC_MIN_CHARS:\n self.desc_chars_label.setText(\n u\"{} {}\".format(DESC_MIN_CHARS - desc_chars,\n _(\"more characters to go...\")))\n else:\n self.desc_chars_label.setText(_(\"Description complete; thanks!\"))\n\n title_chars = len(self.title.text())\n if title_chars < TITLE_MIN_CHARS:\n self.title_chars_label.setText(\n u\"{} {}\".format(TITLE_MIN_CHARS - title_chars,\n _(\"more characters to go...\")))\n else:\n self.title_chars_label.setText(_(\"Title complete; thanks!\"))\n\n submission_enabled = (desc_chars >= DESC_MIN_CHARS and\n title_chars >= TITLE_MIN_CHARS)\n self.submit_btn.setEnabled(submission_enabled)\n\n\ndef test():\n from spyder.utils.qthelpers import qapplication\n app = qapplication()\n dlg = SpyderErrorDialog()\n dlg.show()\n sys.exit(dlg.exec_())\n\n\nif __name__ == \"__main__\":\n test()\n", "path": "spyder/widgets/reporterror.py"}]}
| 4,095 | 154 |
gh_patches_debug_25587 | rasdani/github-patches | git_diff | vllm-project__vllm-148 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Documentation on running basic python server and FastAPI server
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vllm/outputs.py`
Content:
```
1 from typing import Dict, List, Optional
2
3 from vllm.sequence import SequenceGroup, SequenceStatus
4
5
6 class CompletionOutput:
7
8 def __init__(
9 self,
10 index: int,
11 text: str,
12 token_ids: List[int],
13 cumulative_logprob: float,
14 logprobs: List[Dict[int, float]],
15 finish_reason: Optional[str] = None,
16 ) -> None:
17 self.index = index
18 self.text = text
19 self.token_ids = token_ids
20 self.cumulative_logprob = cumulative_logprob
21 self.logprobs = logprobs
22 self.finish_reason = finish_reason
23
24 def finished(self) -> bool:
25 return self.finish_reason is not None
26
27 def __repr__(self) -> str:
28 return (f"CompletionOutput(index={self.index}, "
29 f"text={self.text!r}, "
30 f"token_ids={self.token_ids}, "
31 f"cumulative_logprob={self.cumulative_logprob}, "
32 f"logprobs={self.logprobs},"
33 f"finish_reason={self.finish_reason})")
34
35
36 class RequestOutput:
37
38 def __init__(
39 self,
40 request_id: str,
41 prompt: str,
42 prompt_token_ids: List[int],
43 outputs: List[CompletionOutput],
44 ) -> None:
45 self.request_id = request_id
46 self.prompt = prompt
47 self.prompt_token_ids = prompt_token_ids
48 self.outputs = outputs
49
50 @classmethod
51 def from_seq_group(cls, seq_group: SequenceGroup) -> "RequestOutput":
52 # Get the top-n sequences.
53 n = seq_group.sampling_params.n
54 seqs = seq_group.get_seqs()
55 assert n <= len(seqs)
56 sorted_seqs = sorted(
57 seqs, key=lambda seq: seq.get_cumulative_logprob(), reverse=True)
58 top_n_seqs = sorted_seqs[:n]
59
60 # Create the outputs.
61 outputs: List[CompletionOutput] = []
62 for seq in top_n_seqs:
63 logprobs = seq.output_logprobs
64 if seq_group.sampling_params.logprobs is None:
65 # NOTE: We need to take care of this case because the sequence
66 # always has the logprobs of the sampled tokens even if the
67 # logprobs are not requested.
68 logprobs = {}
69 finshed_reason = SequenceStatus.get_finished_reason(seq.status)
70 output = CompletionOutput(seqs.index(seq), seq.output_text,
71 seq.get_output_token_ids(),
72 seq.get_cumulative_logprob(), logprobs,
73 finshed_reason)
74 outputs.append(output)
75
76 # Every sequence in the sequence group should have the same prompt.
77 prompt = top_n_seqs[0].prompt
78 prompt_token_ids = top_n_seqs[0].data.prompt_token_ids
79 return cls(seq_group.request_id, prompt, prompt_token_ids, outputs)
80
81 def __repr__(self) -> str:
82 return (f"RequestOutput(request_id={self.request_id}, "
83 f"prompt={self.prompt!r}, "
84 f"prompt_token_ids={self.prompt_token_ids}, "
85 f"outputs={self.outputs})")
86
87 def finished(self) -> bool:
88 return all(output.finished() for output in self.outputs)
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/vllm/outputs.py b/vllm/outputs.py
--- a/vllm/outputs.py
+++ b/vllm/outputs.py
@@ -4,6 +4,18 @@
class CompletionOutput:
+ """The output data of one completion output of a request.
+
+ Args:
+ index: The index of the output in the request.
+ text: The generated output text.
+ token_ids: The token IDs of the generated output text.
+ cumulative_logprob: The cumulative log probability of the generated
+ output text.
+ logprobs: The log probabilities of the top probability words at each
+ position if the logprobs are requested.
+ finish_reason: The reason why the sequence is finished.
+ """
def __init__(
self,
@@ -11,7 +23,7 @@
text: str,
token_ids: List[int],
cumulative_logprob: float,
- logprobs: List[Dict[int, float]],
+ logprobs: Optional[List[Dict[int, float]]],
finish_reason: Optional[str] = None,
) -> None:
self.index = index
@@ -34,7 +46,14 @@
class RequestOutput:
+ """The output data of a request to the LLM.
+ Args:
+ request_id: The unique ID of the request.
+ prompt: The prompt string of the request.
+ prompt_token_ids: The token IDs of the prompt.
+ outputs: The output sequences of the request.
+ """
def __init__(
self,
request_id: str,
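As a hedged usage note tying the issue to the patch: the docstrings above document the objects returned by the offline Python entrypoint. The sketch below shows where they surface; the model name and prompts are placeholders, and `LLM`/`SamplingParams` are the public entrypoints the repository exposes (a FastAPI demo server also ships under `vllm/entrypoints/`, whose exact module path depends on the version).

```python
# Sketch of the offline API: llm.generate() yields RequestOutput objects, each
# holding one CompletionOutput per sampled sequence.
from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The capital of France is"]  # placeholder prompts
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="facebook/opt-125m")  # placeholder model name
for request_output in llm.generate(prompts, sampling_params):
    completion = request_output.outputs[0]  # a CompletionOutput
    print(request_output.prompt, "->", completion.text, completion.finish_reason)
```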
|
{"golden_diff": "diff --git a/vllm/outputs.py b/vllm/outputs.py\n--- a/vllm/outputs.py\n+++ b/vllm/outputs.py\n@@ -4,6 +4,18 @@\n \n \n class CompletionOutput:\n+ \"\"\"The output data of one completion output of a request.\n+\n+ Args:\n+ index: The index of the output in the request.\n+ text: The generated output text.\n+ token_ids: The token IDs of the generated output text.\n+ cumulative_logprob: The cumulative log probability of the generated\n+ output text.\n+ logprobs: The log probabilities of the top probability words at each\n+ position if the logprobs are requested.\n+ finish_reason: The reason why the sequence is finished.\n+ \"\"\"\n \n def __init__(\n self,\n@@ -11,7 +23,7 @@\n text: str,\n token_ids: List[int],\n cumulative_logprob: float,\n- logprobs: List[Dict[int, float]],\n+ logprobs: Optional[List[Dict[int, float]]],\n finish_reason: Optional[str] = None,\n ) -> None:\n self.index = index\n@@ -34,7 +46,14 @@\n \n \n class RequestOutput:\n+ \"\"\"The output data of a request to the LLM.\n \n+ Args:\n+ request_id: The unique ID of the request.\n+ prompt: The prompt string of the request.\n+ prompt_token_ids: The token IDs of the prompt.\n+ outputs: The output sequences of the request.\n+ \"\"\"\n def __init__(\n self,\n request_id: str,\n", "issue": "Documentation on running basic python server and FastAPI server\n\n", "before_files": [{"content": "from typing import Dict, List, Optional\n\nfrom vllm.sequence import SequenceGroup, SequenceStatus\n\n\nclass CompletionOutput:\n\n def __init__(\n self,\n index: int,\n text: str,\n token_ids: List[int],\n cumulative_logprob: float,\n logprobs: List[Dict[int, float]],\n finish_reason: Optional[str] = None,\n ) -> None:\n self.index = index\n self.text = text\n self.token_ids = token_ids\n self.cumulative_logprob = cumulative_logprob\n self.logprobs = logprobs\n self.finish_reason = finish_reason\n\n def finished(self) -> bool:\n return self.finish_reason is not None\n\n def __repr__(self) -> str:\n return (f\"CompletionOutput(index={self.index}, \"\n f\"text={self.text!r}, \"\n f\"token_ids={self.token_ids}, \"\n f\"cumulative_logprob={self.cumulative_logprob}, \"\n f\"logprobs={self.logprobs},\"\n f\"finish_reason={self.finish_reason})\")\n\n\nclass RequestOutput:\n\n def __init__(\n self,\n request_id: str,\n prompt: str,\n prompt_token_ids: List[int],\n outputs: List[CompletionOutput],\n ) -> None:\n self.request_id = request_id\n self.prompt = prompt\n self.prompt_token_ids = prompt_token_ids\n self.outputs = outputs\n\n @classmethod\n def from_seq_group(cls, seq_group: SequenceGroup) -> \"RequestOutput\":\n # Get the top-n sequences.\n n = seq_group.sampling_params.n\n seqs = seq_group.get_seqs()\n assert n <= len(seqs)\n sorted_seqs = sorted(\n seqs, key=lambda seq: seq.get_cumulative_logprob(), reverse=True)\n top_n_seqs = sorted_seqs[:n]\n\n # Create the outputs.\n outputs: List[CompletionOutput] = []\n for seq in top_n_seqs:\n logprobs = seq.output_logprobs\n if seq_group.sampling_params.logprobs is None:\n # NOTE: We need to take care of this case because the sequence\n # always has the logprobs of the sampled tokens even if the\n # logprobs are not requested.\n logprobs = {}\n finshed_reason = SequenceStatus.get_finished_reason(seq.status)\n output = CompletionOutput(seqs.index(seq), seq.output_text,\n seq.get_output_token_ids(),\n seq.get_cumulative_logprob(), logprobs,\n finshed_reason)\n outputs.append(output)\n\n # Every sequence in the sequence group should have the same prompt.\n prompt = top_n_seqs[0].prompt\n 
prompt_token_ids = top_n_seqs[0].data.prompt_token_ids\n return cls(seq_group.request_id, prompt, prompt_token_ids, outputs)\n\n def __repr__(self) -> str:\n return (f\"RequestOutput(request_id={self.request_id}, \"\n f\"prompt={self.prompt!r}, \"\n f\"prompt_token_ids={self.prompt_token_ids}, \"\n f\"outputs={self.outputs})\")\n\n def finished(self) -> bool:\n return all(output.finished() for output in self.outputs)\n", "path": "vllm/outputs.py"}], "after_files": [{"content": "from typing import Dict, List, Optional\n\nfrom vllm.sequence import SequenceGroup, SequenceStatus\n\n\nclass CompletionOutput:\n \"\"\"The output data of one completion output of a request.\n\n Args:\n index: The index of the output in the request.\n text: The generated output text.\n token_ids: The token IDs of the generated output text.\n cumulative_logprob: The cumulative log probability of the generated\n output text.\n logprobs: The log probabilities of the top probability words at each\n position if the logprobs are requested.\n finish_reason: The reason why the sequence is finished.\n \"\"\"\n\n def __init__(\n self,\n index: int,\n text: str,\n token_ids: List[int],\n cumulative_logprob: float,\n logprobs: Optional[List[Dict[int, float]]],\n finish_reason: Optional[str] = None,\n ) -> None:\n self.index = index\n self.text = text\n self.token_ids = token_ids\n self.cumulative_logprob = cumulative_logprob\n self.logprobs = logprobs\n self.finish_reason = finish_reason\n\n def finished(self) -> bool:\n return self.finish_reason is not None\n\n def __repr__(self) -> str:\n return (f\"CompletionOutput(index={self.index}, \"\n f\"text={self.text!r}, \"\n f\"token_ids={self.token_ids}, \"\n f\"cumulative_logprob={self.cumulative_logprob}, \"\n f\"logprobs={self.logprobs},\"\n f\"finish_reason={self.finish_reason})\")\n\n\nclass RequestOutput:\n \"\"\"The output data of a request to the LLM.\n\n Args:\n request_id: The unique ID of the request.\n prompt: The prompt string of the request.\n prompt_token_ids: The token IDs of the prompt.\n outputs: The output sequences of the request.\n \"\"\"\n def __init__(\n self,\n request_id: str,\n prompt: str,\n prompt_token_ids: List[int],\n outputs: List[CompletionOutput],\n ) -> None:\n self.request_id = request_id\n self.prompt = prompt\n self.prompt_token_ids = prompt_token_ids\n self.outputs = outputs\n\n @classmethod\n def from_seq_group(cls, seq_group: SequenceGroup) -> \"RequestOutput\":\n # Get the top-n sequences.\n n = seq_group.sampling_params.n\n seqs = seq_group.get_seqs()\n assert n <= len(seqs)\n sorted_seqs = sorted(\n seqs, key=lambda seq: seq.get_cumulative_logprob(), reverse=True)\n top_n_seqs = sorted_seqs[:n]\n\n # Create the outputs.\n outputs: List[CompletionOutput] = []\n for seq in top_n_seqs:\n logprobs = seq.output_logprobs\n if seq_group.sampling_params.logprobs is None:\n # NOTE: We need to take care of this case because the sequence\n # always has the logprobs of the sampled tokens even if the\n # logprobs are not requested.\n logprobs = {}\n finshed_reason = SequenceStatus.get_finished_reason(seq.status)\n output = CompletionOutput(seqs.index(seq), seq.output_text,\n seq.get_output_token_ids(),\n seq.get_cumulative_logprob(), logprobs,\n finshed_reason)\n outputs.append(output)\n\n # Every sequence in the sequence group should have the same prompt.\n prompt = top_n_seqs[0].prompt\n prompt_token_ids = top_n_seqs[0].data.prompt_token_ids\n return cls(seq_group.request_id, prompt, prompt_token_ids, outputs)\n\n def __repr__(self) -> str:\n return 
(f\"RequestOutput(request_id={self.request_id}, \"\n f\"prompt={self.prompt!r}, \"\n f\"prompt_token_ids={self.prompt_token_ids}, \"\n f\"outputs={self.outputs})\")\n\n def finished(self) -> bool:\n return all(output.finished() for output in self.outputs)\n", "path": "vllm/outputs.py"}]}
| 1,137 | 360 |
gh_patches_debug_36857 | rasdani/github-patches | git_diff | HypothesisWorks__hypothesis-2256 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecate use of other decorators without `@given`
This was first suggested in #1135 for `@settings()`, but the approach taken did not scale as each decorator would have to know about all of the others. Fortunately, #2162 gave us a much nicer option: we can just check for other decorators in our pytest plugin, when we already know that `@given` was never applied!
That means adding a deprecation warning for each of `@example`, `@seed`, and `@reproduce_failure` based on the special attributes they attach to the test function.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hypothesis-python/src/hypothesis/extra/pytestplugin.py`
Content:
```
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis/
5 #
6 # Most of this work is copyright (C) 2013-2019 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at https://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 from __future__ import absolute_import, division, print_function
19
20 from distutils.version import LooseVersion
21
22 import pytest
23
24 from hypothesis import Verbosity, core, settings
25 from hypothesis._settings import note_deprecation
26 from hypothesis.errors import InvalidArgument
27 from hypothesis.internal.compat import text_type
28 from hypothesis.internal.detection import is_hypothesis_test
29 from hypothesis.reporting import default as default_reporter, with_reporter
30 from hypothesis.statistics import collector
31
32 LOAD_PROFILE_OPTION = "--hypothesis-profile"
33 VERBOSITY_OPTION = "--hypothesis-verbosity"
34 PRINT_STATISTICS_OPTION = "--hypothesis-show-statistics"
35 SEED_OPTION = "--hypothesis-seed"
36
37
38 class StoringReporter(object):
39 def __init__(self, config):
40 self.config = config
41 self.results = []
42
43 def __call__(self, msg):
44 if self.config.getoption("capture", "fd") == "no":
45 default_reporter(msg)
46 if not isinstance(msg, text_type):
47 msg = repr(msg)
48 self.results.append(msg)
49
50
51 if LooseVersion(pytest.__version__) < "4.3": # pragma: no cover
52 import warnings
53 from hypothesis.errors import HypothesisWarning
54
55 PYTEST_TOO_OLD_MESSAGE = """
56 You are using Pytest version %s. Hypothesis tests work with any test
57 runner, but our Pytest plugin requires Pytest 4.3 or newer.
58 Note that the Pytest developers no longer support this version either!
59 Disabling the Hypothesis pytest plugin...
60 """
61 warnings.warn(PYTEST_TOO_OLD_MESSAGE % (pytest.__version__,), HypothesisWarning)
62
63 else:
64
65 def pytest_addoption(parser):
66 group = parser.getgroup("hypothesis", "Hypothesis")
67 group.addoption(
68 LOAD_PROFILE_OPTION,
69 action="store",
70 help="Load in a registered hypothesis.settings profile",
71 )
72 group.addoption(
73 VERBOSITY_OPTION,
74 action="store",
75 choices=[opt.name for opt in Verbosity],
76 help="Override profile with verbosity setting specified",
77 )
78 group.addoption(
79 PRINT_STATISTICS_OPTION,
80 action="store_true",
81 help="Configure when statistics are printed",
82 default=False,
83 )
84 group.addoption(
85 SEED_OPTION,
86 action="store",
87 help="Set a seed to use for all Hypothesis tests",
88 )
89
90 def pytest_report_header(config):
91 profile = config.getoption(LOAD_PROFILE_OPTION)
92 if not profile:
93 profile = settings._current_profile
94 settings_str = settings.get_profile(profile).show_changed()
95 if settings_str != "":
96 settings_str = " -> %s" % (settings_str)
97 if (
98 config.option.verbose >= 1
99 or settings.default.verbosity >= Verbosity.verbose
100 ):
101 return "hypothesis profile %r%s" % (profile, settings_str)
102
103 def pytest_configure(config):
104 core.running_under_pytest = True
105 profile = config.getoption(LOAD_PROFILE_OPTION)
106 if profile:
107 settings.load_profile(profile)
108 verbosity_name = config.getoption(VERBOSITY_OPTION)
109 if verbosity_name:
110 verbosity_value = Verbosity[verbosity_name]
111 profile_name = "%s-with-%s-verbosity" % (
112 settings._current_profile,
113 verbosity_name,
114 )
115 # register_profile creates a new profile, exactly like the current one,
116 # with the extra values given (in this case 'verbosity')
117 settings.register_profile(profile_name, verbosity=verbosity_value)
118 settings.load_profile(profile_name)
119 seed = config.getoption(SEED_OPTION)
120 if seed is not None:
121 try:
122 seed = int(seed)
123 except ValueError:
124 pass
125 core.global_force_seed = seed
126 config.addinivalue_line("markers", "hypothesis: Tests which use hypothesis.")
127
128 @pytest.hookimpl(hookwrapper=True)
129 def pytest_runtest_call(item):
130 if not hasattr(item, "obj"):
131 yield
132 elif not is_hypothesis_test(item.obj):
133 # If @given was not applied, check whether other hypothesis
134 # decorators were applied, and raise an error if they were.
135 if getattr(item.obj, "_hypothesis_internal_settings_applied", False):
136 raise InvalidArgument(
137 "Using `@settings` on a test without `@given` is completely pointless."
138 )
139 yield
140 else:
141 if item.get_closest_marker("parametrize") is not None:
142 # Give every parametrized test invocation a unique database key
143 key = item.nodeid.encode("utf-8")
144 item.obj.hypothesis.inner_test._hypothesis_internal_add_digest = key
145
146 store = StoringReporter(item.config)
147
148 def note_statistics(stats):
149 lines = [item.nodeid + ":", ""] + stats.get_description() + [""]
150 item.hypothesis_statistics = lines
151
152 with collector.with_value(note_statistics):
153 with with_reporter(store):
154 yield
155 if store.results:
156 item.hypothesis_report_information = list(store.results)
157
158 @pytest.hookimpl(hookwrapper=True)
159 def pytest_runtest_makereport(item, call):
160 report = (yield).get_result()
161 if hasattr(item, "hypothesis_report_information"):
162 report.sections.append(
163 ("Hypothesis", "\n".join(item.hypothesis_report_information))
164 )
165 if hasattr(item, "hypothesis_statistics") and report.when == "teardown":
166 val = ("hypothesis-stats", item.hypothesis_statistics)
167 report.user_properties.append(val)
168
169 def pytest_terminal_summary(terminalreporter):
170 if not terminalreporter.config.getoption(PRINT_STATISTICS_OPTION):
171 return
172 terminalreporter.section("Hypothesis Statistics")
173 # terminalreporter.stats is a dict, where the empty string appears to
174 # always be the key for a list of _pytest.reports.TestReport objects
175 # (where we stored the statistics data in pytest_runtest_makereport above)
176 for test_report in terminalreporter.stats.get("", []):
177 for name, lines in test_report.user_properties:
178 if name == "hypothesis-stats" and test_report.when == "teardown":
179 for li in lines:
180 terminalreporter.write_line(li)
181
182 def pytest_collection_modifyitems(items):
183 for item in items:
184 if not isinstance(item, pytest.Function):
185 continue
186 if is_hypothesis_test(item.obj):
187 item.add_marker("hypothesis")
188 if getattr(item.obj, "is_hypothesis_strategy_function", False):
189
190 def note_strategy_is_not_test(*args, **kwargs):
191 note_deprecation(
192 "%s is a function that returns a Hypothesis strategy, "
193 "but pytest has collected it as a test function. This "
194 "is useless as the function body will never be executed. "
195 "To define a test function, use @given instead of "
196 "@composite." % (item.nodeid,),
197 since="2018-11-02",
198 )
199
200 item.obj = note_strategy_is_not_test
201
202
203 def load():
204 """Required for `pluggy` to load a plugin from setuptools entrypoints."""
205
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hypothesis-python/src/hypothesis/extra/pytestplugin.py b/hypothesis-python/src/hypothesis/extra/pytestplugin.py
--- a/hypothesis-python/src/hypothesis/extra/pytestplugin.py
+++ b/hypothesis-python/src/hypothesis/extra/pytestplugin.py
@@ -132,10 +132,24 @@
elif not is_hypothesis_test(item.obj):
# If @given was not applied, check whether other hypothesis
# decorators were applied, and raise an error if they were.
+ message = "Using `@%s` on a test without `@given` is completely pointless."
if getattr(item.obj, "_hypothesis_internal_settings_applied", False):
- raise InvalidArgument(
- "Using `@settings` on a test without `@given` is completely pointless."
+ raise InvalidArgument(message % ("settings",))
+ if getattr(item.obj, "is_hypothesis_strategy_function", False):
+ note_deprecation(
+ "%s is a function that returns a Hypothesis strategy, but pytest "
+ "has collected it as a test function. This is useless as the "
+ "function body will never be executed. To define a test "
+ "function, use @given instead of @composite." % (item.nodeid,),
+ since="2018-11-02",
)
+ for name, attribute in [
+ ("example", "hypothesis_explicit_examples"),
+ ("seed", "_hypothesis_internal_use_seed"),
+ ("reproduce_example", "_hypothesis_internal_use_reproduce_failure"),
+ ]:
+ if hasattr(item.obj, attribute):
+ note_deprecation(message % (name,), since="RELEASEDAY")
yield
else:
if item.get_closest_marker("parametrize") is not None:
@@ -181,23 +195,8 @@
def pytest_collection_modifyitems(items):
for item in items:
- if not isinstance(item, pytest.Function):
- continue
- if is_hypothesis_test(item.obj):
+ if isinstance(item, pytest.Function) and is_hypothesis_test(item.obj):
item.add_marker("hypothesis")
- if getattr(item.obj, "is_hypothesis_strategy_function", False):
-
- def note_strategy_is_not_test(*args, **kwargs):
- note_deprecation(
- "%s is a function that returns a Hypothesis strategy, "
- "but pytest has collected it as a test function. This "
- "is useless as the function body will never be executed. "
- "To define a test function, use @given instead of "
- "@composite." % (item.nodeid,),
- since="2018-11-02",
- )
-
- item.obj = note_strategy_is_not_test
def load():
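The attribute names the patch keys on are ordinary function attributes, so the check can be read (and tried) in isolation. The sketch below restates the idea outside pytest; `check_orphaned_decorators` and the use of `warnings.warn` are stand-ins for the plugin hook and `note_deprecation`, and the `is_hypothesis_test` flag check mirrors what `hypothesis.internal.detection` does.

```python
# Sketch: flag Hypothesis decorators left on a test that never got @given.
# Attribute names are taken from the patch above; everything else is a stand-in.
import warnings

DECORATOR_ATTRIBUTES = [
    ("example", "hypothesis_explicit_examples"),
    ("seed", "_hypothesis_internal_use_seed"),
    ("reproduce_failure", "_hypothesis_internal_use_reproduce_failure"),
]


def check_orphaned_decorators(test_fn):
    """Warn when decorators that only matter under @given are left dangling."""
    if getattr(test_fn, "is_hypothesis_test", False):
        return  # @given was applied, nothing to complain about
    for name, attribute in DECORATOR_ATTRIBUTES:
        if hasattr(test_fn, attribute):
            warnings.warn(
                "Using `@%s` on a test without `@given` is completely pointless."
                % (name,),
                DeprecationWarning,
            )
```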
|
{"golden_diff": "diff --git a/hypothesis-python/src/hypothesis/extra/pytestplugin.py b/hypothesis-python/src/hypothesis/extra/pytestplugin.py\n--- a/hypothesis-python/src/hypothesis/extra/pytestplugin.py\n+++ b/hypothesis-python/src/hypothesis/extra/pytestplugin.py\n@@ -132,10 +132,24 @@\n elif not is_hypothesis_test(item.obj):\n # If @given was not applied, check whether other hypothesis\n # decorators were applied, and raise an error if they were.\n+ message = \"Using `@%s` on a test without `@given` is completely pointless.\"\n if getattr(item.obj, \"_hypothesis_internal_settings_applied\", False):\n- raise InvalidArgument(\n- \"Using `@settings` on a test without `@given` is completely pointless.\"\n+ raise InvalidArgument(message % (\"settings\",))\n+ if getattr(item.obj, \"is_hypothesis_strategy_function\", False):\n+ note_deprecation(\n+ \"%s is a function that returns a Hypothesis strategy, but pytest \"\n+ \"has collected it as a test function. This is useless as the \"\n+ \"function body will never be executed. To define a test \"\n+ \"function, use @given instead of @composite.\" % (item.nodeid,),\n+ since=\"2018-11-02\",\n )\n+ for name, attribute in [\n+ (\"example\", \"hypothesis_explicit_examples\"),\n+ (\"seed\", \"_hypothesis_internal_use_seed\"),\n+ (\"reproduce_example\", \"_hypothesis_internal_use_reproduce_failure\"),\n+ ]:\n+ if hasattr(item.obj, attribute):\n+ note_deprecation(message % (name,), since=\"RELEASEDAY\")\n yield\n else:\n if item.get_closest_marker(\"parametrize\") is not None:\n@@ -181,23 +195,8 @@\n \n def pytest_collection_modifyitems(items):\n for item in items:\n- if not isinstance(item, pytest.Function):\n- continue\n- if is_hypothesis_test(item.obj):\n+ if isinstance(item, pytest.Function) and is_hypothesis_test(item.obj):\n item.add_marker(\"hypothesis\")\n- if getattr(item.obj, \"is_hypothesis_strategy_function\", False):\n-\n- def note_strategy_is_not_test(*args, **kwargs):\n- note_deprecation(\n- \"%s is a function that returns a Hypothesis strategy, \"\n- \"but pytest has collected it as a test function. This \"\n- \"is useless as the function body will never be executed. \"\n- \"To define a test function, use @given instead of \"\n- \"@composite.\" % (item.nodeid,),\n- since=\"2018-11-02\",\n- )\n-\n- item.obj = note_strategy_is_not_test\n \n \n def load():\n", "issue": "Deprecate use of other decorators without `@given`\nThis was first suggested in #1135 for `@settings()`, but the approach taken did not scale as each decorator would have to know about all of the others. Fortunately, #2162 gave us a much nicer option: we can just check for other decorators in our pytest plugin, when we already know that `@given` was never applied!\r\n\r\nThat means adding a deprecation warning for each of `@example`, `@seed`, and `@reproduce_failure` based on the special attributes they attach to the test function.\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis/\n#\n# Most of this work is copyright (C) 2013-2019 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at https://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom distutils.version import LooseVersion\n\nimport pytest\n\nfrom hypothesis import Verbosity, core, settings\nfrom hypothesis._settings import note_deprecation\nfrom hypothesis.errors import InvalidArgument\nfrom hypothesis.internal.compat import text_type\nfrom hypothesis.internal.detection import is_hypothesis_test\nfrom hypothesis.reporting import default as default_reporter, with_reporter\nfrom hypothesis.statistics import collector\n\nLOAD_PROFILE_OPTION = \"--hypothesis-profile\"\nVERBOSITY_OPTION = \"--hypothesis-verbosity\"\nPRINT_STATISTICS_OPTION = \"--hypothesis-show-statistics\"\nSEED_OPTION = \"--hypothesis-seed\"\n\n\nclass StoringReporter(object):\n def __init__(self, config):\n self.config = config\n self.results = []\n\n def __call__(self, msg):\n if self.config.getoption(\"capture\", \"fd\") == \"no\":\n default_reporter(msg)\n if not isinstance(msg, text_type):\n msg = repr(msg)\n self.results.append(msg)\n\n\nif LooseVersion(pytest.__version__) < \"4.3\": # pragma: no cover\n import warnings\n from hypothesis.errors import HypothesisWarning\n\n PYTEST_TOO_OLD_MESSAGE = \"\"\"\n You are using Pytest version %s. Hypothesis tests work with any test\n runner, but our Pytest plugin requires Pytest 4.3 or newer.\n Note that the Pytest developers no longer support this version either!\n Disabling the Hypothesis pytest plugin...\n \"\"\"\n warnings.warn(PYTEST_TOO_OLD_MESSAGE % (pytest.__version__,), HypothesisWarning)\n\nelse:\n\n def pytest_addoption(parser):\n group = parser.getgroup(\"hypothesis\", \"Hypothesis\")\n group.addoption(\n LOAD_PROFILE_OPTION,\n action=\"store\",\n help=\"Load in a registered hypothesis.settings profile\",\n )\n group.addoption(\n VERBOSITY_OPTION,\n action=\"store\",\n choices=[opt.name for opt in Verbosity],\n help=\"Override profile with verbosity setting specified\",\n )\n group.addoption(\n PRINT_STATISTICS_OPTION,\n action=\"store_true\",\n help=\"Configure when statistics are printed\",\n default=False,\n )\n group.addoption(\n SEED_OPTION,\n action=\"store\",\n help=\"Set a seed to use for all Hypothesis tests\",\n )\n\n def pytest_report_header(config):\n profile = config.getoption(LOAD_PROFILE_OPTION)\n if not profile:\n profile = settings._current_profile\n settings_str = settings.get_profile(profile).show_changed()\n if settings_str != \"\":\n settings_str = \" -> %s\" % (settings_str)\n if (\n config.option.verbose >= 1\n or settings.default.verbosity >= Verbosity.verbose\n ):\n return \"hypothesis profile %r%s\" % (profile, settings_str)\n\n def pytest_configure(config):\n core.running_under_pytest = True\n profile = config.getoption(LOAD_PROFILE_OPTION)\n if profile:\n settings.load_profile(profile)\n verbosity_name = config.getoption(VERBOSITY_OPTION)\n if verbosity_name:\n verbosity_value = Verbosity[verbosity_name]\n profile_name = \"%s-with-%s-verbosity\" % (\n settings._current_profile,\n verbosity_name,\n )\n # register_profile creates a new profile, exactly like the current one,\n # with the extra values given (in this case 'verbosity')\n settings.register_profile(profile_name, verbosity=verbosity_value)\n settings.load_profile(profile_name)\n seed = config.getoption(SEED_OPTION)\n if seed is not None:\n try:\n seed = int(seed)\n except ValueError:\n pass\n core.global_force_seed = seed\n config.addinivalue_line(\"markers\", 
\"hypothesis: Tests which use hypothesis.\")\n\n @pytest.hookimpl(hookwrapper=True)\n def pytest_runtest_call(item):\n if not hasattr(item, \"obj\"):\n yield\n elif not is_hypothesis_test(item.obj):\n # If @given was not applied, check whether other hypothesis\n # decorators were applied, and raise an error if they were.\n if getattr(item.obj, \"_hypothesis_internal_settings_applied\", False):\n raise InvalidArgument(\n \"Using `@settings` on a test without `@given` is completely pointless.\"\n )\n yield\n else:\n if item.get_closest_marker(\"parametrize\") is not None:\n # Give every parametrized test invocation a unique database key\n key = item.nodeid.encode(\"utf-8\")\n item.obj.hypothesis.inner_test._hypothesis_internal_add_digest = key\n\n store = StoringReporter(item.config)\n\n def note_statistics(stats):\n lines = [item.nodeid + \":\", \"\"] + stats.get_description() + [\"\"]\n item.hypothesis_statistics = lines\n\n with collector.with_value(note_statistics):\n with with_reporter(store):\n yield\n if store.results:\n item.hypothesis_report_information = list(store.results)\n\n @pytest.hookimpl(hookwrapper=True)\n def pytest_runtest_makereport(item, call):\n report = (yield).get_result()\n if hasattr(item, \"hypothesis_report_information\"):\n report.sections.append(\n (\"Hypothesis\", \"\\n\".join(item.hypothesis_report_information))\n )\n if hasattr(item, \"hypothesis_statistics\") and report.when == \"teardown\":\n val = (\"hypothesis-stats\", item.hypothesis_statistics)\n report.user_properties.append(val)\n\n def pytest_terminal_summary(terminalreporter):\n if not terminalreporter.config.getoption(PRINT_STATISTICS_OPTION):\n return\n terminalreporter.section(\"Hypothesis Statistics\")\n # terminalreporter.stats is a dict, where the empty string appears to\n # always be the key for a list of _pytest.reports.TestReport objects\n # (where we stored the statistics data in pytest_runtest_makereport above)\n for test_report in terminalreporter.stats.get(\"\", []):\n for name, lines in test_report.user_properties:\n if name == \"hypothesis-stats\" and test_report.when == \"teardown\":\n for li in lines:\n terminalreporter.write_line(li)\n\n def pytest_collection_modifyitems(items):\n for item in items:\n if not isinstance(item, pytest.Function):\n continue\n if is_hypothesis_test(item.obj):\n item.add_marker(\"hypothesis\")\n if getattr(item.obj, \"is_hypothesis_strategy_function\", False):\n\n def note_strategy_is_not_test(*args, **kwargs):\n note_deprecation(\n \"%s is a function that returns a Hypothesis strategy, \"\n \"but pytest has collected it as a test function. This \"\n \"is useless as the function body will never be executed. \"\n \"To define a test function, use @given instead of \"\n \"@composite.\" % (item.nodeid,),\n since=\"2018-11-02\",\n )\n\n item.obj = note_strategy_is_not_test\n\n\ndef load():\n \"\"\"Required for `pluggy` to load a plugin from setuptools entrypoints.\"\"\"\n", "path": "hypothesis-python/src/hypothesis/extra/pytestplugin.py"}], "after_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis/\n#\n# Most of this work is copyright (C) 2013-2019 David R. MacIver\n# ([email protected]), but it contains contributions by others. 
See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at https://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom distutils.version import LooseVersion\n\nimport pytest\n\nfrom hypothesis import Verbosity, core, settings\nfrom hypothesis._settings import note_deprecation\nfrom hypothesis.errors import InvalidArgument\nfrom hypothesis.internal.compat import text_type\nfrom hypothesis.internal.detection import is_hypothesis_test\nfrom hypothesis.reporting import default as default_reporter, with_reporter\nfrom hypothesis.statistics import collector\n\nLOAD_PROFILE_OPTION = \"--hypothesis-profile\"\nVERBOSITY_OPTION = \"--hypothesis-verbosity\"\nPRINT_STATISTICS_OPTION = \"--hypothesis-show-statistics\"\nSEED_OPTION = \"--hypothesis-seed\"\n\n\nclass StoringReporter(object):\n def __init__(self, config):\n self.config = config\n self.results = []\n\n def __call__(self, msg):\n if self.config.getoption(\"capture\", \"fd\") == \"no\":\n default_reporter(msg)\n if not isinstance(msg, text_type):\n msg = repr(msg)\n self.results.append(msg)\n\n\nif LooseVersion(pytest.__version__) < \"4.3\": # pragma: no cover\n import warnings\n from hypothesis.errors import HypothesisWarning\n\n PYTEST_TOO_OLD_MESSAGE = \"\"\"\n You are using Pytest version %s. Hypothesis tests work with any test\n runner, but our Pytest plugin requires Pytest 4.3 or newer.\n Note that the Pytest developers no longer support this version either!\n Disabling the Hypothesis pytest plugin...\n \"\"\"\n warnings.warn(PYTEST_TOO_OLD_MESSAGE % (pytest.__version__,), HypothesisWarning)\n\nelse:\n\n def pytest_addoption(parser):\n group = parser.getgroup(\"hypothesis\", \"Hypothesis\")\n group.addoption(\n LOAD_PROFILE_OPTION,\n action=\"store\",\n help=\"Load in a registered hypothesis.settings profile\",\n )\n group.addoption(\n VERBOSITY_OPTION,\n action=\"store\",\n choices=[opt.name for opt in Verbosity],\n help=\"Override profile with verbosity setting specified\",\n )\n group.addoption(\n PRINT_STATISTICS_OPTION,\n action=\"store_true\",\n help=\"Configure when statistics are printed\",\n default=False,\n )\n group.addoption(\n SEED_OPTION,\n action=\"store\",\n help=\"Set a seed to use for all Hypothesis tests\",\n )\n\n def pytest_report_header(config):\n profile = config.getoption(LOAD_PROFILE_OPTION)\n if not profile:\n profile = settings._current_profile\n settings_str = settings.get_profile(profile).show_changed()\n if settings_str != \"\":\n settings_str = \" -> %s\" % (settings_str)\n if (\n config.option.verbose >= 1\n or settings.default.verbosity >= Verbosity.verbose\n ):\n return \"hypothesis profile %r%s\" % (profile, settings_str)\n\n def pytest_configure(config):\n core.running_under_pytest = True\n profile = config.getoption(LOAD_PROFILE_OPTION)\n if profile:\n settings.load_profile(profile)\n verbosity_name = config.getoption(VERBOSITY_OPTION)\n if verbosity_name:\n verbosity_value = Verbosity[verbosity_name]\n profile_name = \"%s-with-%s-verbosity\" % (\n settings._current_profile,\n verbosity_name,\n )\n # register_profile creates a new profile, exactly like the current one,\n # with the extra values given (in this case 'verbosity')\n 
settings.register_profile(profile_name, verbosity=verbosity_value)\n settings.load_profile(profile_name)\n seed = config.getoption(SEED_OPTION)\n if seed is not None:\n try:\n seed = int(seed)\n except ValueError:\n pass\n core.global_force_seed = seed\n config.addinivalue_line(\"markers\", \"hypothesis: Tests which use hypothesis.\")\n\n @pytest.hookimpl(hookwrapper=True)\n def pytest_runtest_call(item):\n if not hasattr(item, \"obj\"):\n yield\n elif not is_hypothesis_test(item.obj):\n # If @given was not applied, check whether other hypothesis\n # decorators were applied, and raise an error if they were.\n message = \"Using `@%s` on a test without `@given` is completely pointless.\"\n if getattr(item.obj, \"_hypothesis_internal_settings_applied\", False):\n raise InvalidArgument(message % (\"settings\",))\n if getattr(item.obj, \"is_hypothesis_strategy_function\", False):\n note_deprecation(\n \"%s is a function that returns a Hypothesis strategy, but pytest \"\n \"has collected it as a test function. This is useless as the \"\n \"function body will never be executed. To define a test \"\n \"function, use @given instead of @composite.\" % (item.nodeid,),\n since=\"2018-11-02\",\n )\n for name, attribute in [\n (\"example\", \"hypothesis_explicit_examples\"),\n (\"seed\", \"_hypothesis_internal_use_seed\"),\n (\"reproduce_example\", \"_hypothesis_internal_use_reproduce_failure\"),\n ]:\n if hasattr(item.obj, attribute):\n note_deprecation(message % (name,), since=\"RELEASEDAY\")\n yield\n else:\n if item.get_closest_marker(\"parametrize\") is not None:\n # Give every parametrized test invocation a unique database key\n key = item.nodeid.encode(\"utf-8\")\n item.obj.hypothesis.inner_test._hypothesis_internal_add_digest = key\n\n store = StoringReporter(item.config)\n\n def note_statistics(stats):\n lines = [item.nodeid + \":\", \"\"] + stats.get_description() + [\"\"]\n item.hypothesis_statistics = lines\n\n with collector.with_value(note_statistics):\n with with_reporter(store):\n yield\n if store.results:\n item.hypothesis_report_information = list(store.results)\n\n @pytest.hookimpl(hookwrapper=True)\n def pytest_runtest_makereport(item, call):\n report = (yield).get_result()\n if hasattr(item, \"hypothesis_report_information\"):\n report.sections.append(\n (\"Hypothesis\", \"\\n\".join(item.hypothesis_report_information))\n )\n if hasattr(item, \"hypothesis_statistics\") and report.when == \"teardown\":\n val = (\"hypothesis-stats\", item.hypothesis_statistics)\n report.user_properties.append(val)\n\n def pytest_terminal_summary(terminalreporter):\n if not terminalreporter.config.getoption(PRINT_STATISTICS_OPTION):\n return\n terminalreporter.section(\"Hypothesis Statistics\")\n # terminalreporter.stats is a dict, where the empty string appears to\n # always be the key for a list of _pytest.reports.TestReport objects\n # (where we stored the statistics data in pytest_runtest_makereport above)\n for test_report in terminalreporter.stats.get(\"\", []):\n for name, lines in test_report.user_properties:\n if name == \"hypothesis-stats\" and test_report.when == \"teardown\":\n for li in lines:\n terminalreporter.write_line(li)\n\n def pytest_collection_modifyitems(items):\n for item in items:\n if isinstance(item, pytest.Function) and is_hypothesis_test(item.obj):\n item.add_marker(\"hypothesis\")\n\n\ndef load():\n \"\"\"Required for `pluggy` to load a plugin from setuptools entrypoints.\"\"\"\n", "path": "hypothesis-python/src/hypothesis/extra/pytestplugin.py"}]}
| 2,624 | 639 |
gh_patches_debug_22017
|
rasdani/github-patches
|
git_diff
|
facebookresearch__CompilerGym-563
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`statistics.py` wrong parameter name
## 🐛 Bug
The functions [here](https://github.com/facebookresearch/CompilerGym/blob/e248330d2475fbcdf473cc3df951f25b5eaf4945/compiler_gym/util/statistics.py#L8) say they take `iterable` as input. However, `np.asarray` actually takes `array_like`.
[Quote](https://numpy.org/doc/stable/reference/generated/numpy.asarray.html):
> Input data, in any form that can be converted to an array. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists and ndarrays.
e.g.
```python
geometric_mean(i for i in range(10))
```
This will fail because, although it is an `iterable`, it is not an `array_like`.
--- END ISSUE ---
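For illustration, one way to materialize such an iterable before passing it to these helpers is `np.fromiter`; the snippet below is only a sketch, not part of the proposed fix:

```python
import numpy as np

# a generator is an iterable but not array_like; materialize it into a
# real 1-d array before handing it to np.asarray-based helpers
values = np.fromiter((i for i in range(10)), dtype=float)
print(values.size)    # 10
print(values.mean())  # 4.5
```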
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `compiler_gym/util/statistics.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
5 import numpy as np
6
7
8 def geometric_mean(iterable):
9 """Zero-length-safe geometric mean."""
10 values = np.asarray(iterable)
11 if not values.size:
12 return 0
13 # Shortcut to return 0 when any element of the input is not positive.
14 if not np.all(values > 0):
15 return 0
16 a = np.log(values)
17 return np.exp(a.sum() / len(a))
18
19
20 def arithmetic_mean(iterable):
21 """Zero-length-safe arithmetic mean."""
22 values = np.asarray(iterable)
23 if not values.size:
24 return 0
25 return values.mean()
26
27
28 def stdev(iterable):
29 """Zero-length-safe standard deviation."""
30 values = np.asarray(iterable)
31 if not values.size:
32 return 0
33 return values.std()
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/compiler_gym/util/statistics.py b/compiler_gym/util/statistics.py
--- a/compiler_gym/util/statistics.py
+++ b/compiler_gym/util/statistics.py
@@ -5,9 +5,9 @@
import numpy as np
-def geometric_mean(iterable):
+def geometric_mean(array_like):
"""Zero-length-safe geometric mean."""
- values = np.asarray(iterable)
+ values = np.asarray(array_like)
if not values.size:
return 0
# Shortcut to return 0 when any element of the input is not positive.
@@ -17,17 +17,17 @@
return np.exp(a.sum() / len(a))
-def arithmetic_mean(iterable):
+def arithmetic_mean(array_like):
"""Zero-length-safe arithmetic mean."""
- values = np.asarray(iterable)
+ values = np.asarray(array_like)
if not values.size:
return 0
return values.mean()
-def stdev(iterable):
+def stdev(array_like):
"""Zero-length-safe standard deviation."""
- values = np.asarray(iterable)
+ values = np.asarray(array_like)
if not values.size:
return 0
return values.std()
|
{"golden_diff": "diff --git a/compiler_gym/util/statistics.py b/compiler_gym/util/statistics.py\n--- a/compiler_gym/util/statistics.py\n+++ b/compiler_gym/util/statistics.py\n@@ -5,9 +5,9 @@\n import numpy as np\n \n \n-def geometric_mean(iterable):\n+def geometric_mean(array_like):\n \"\"\"Zero-length-safe geometric mean.\"\"\"\n- values = np.asarray(iterable)\n+ values = np.asarray(array_like)\n if not values.size:\n return 0\n # Shortcut to return 0 when any element of the input is not positive.\n@@ -17,17 +17,17 @@\n return np.exp(a.sum() / len(a))\n \n \n-def arithmetic_mean(iterable):\n+def arithmetic_mean(array_like):\n \"\"\"Zero-length-safe arithmetic mean.\"\"\"\n- values = np.asarray(iterable)\n+ values = np.asarray(array_like)\n if not values.size:\n return 0\n return values.mean()\n \n \n-def stdev(iterable):\n+def stdev(array_like):\n \"\"\"Zero-length-safe standard deviation.\"\"\"\n- values = np.asarray(iterable)\n+ values = np.asarray(array_like)\n if not values.size:\n return 0\n return values.std()\n", "issue": "`statistics.py` wrong parameter name\n## \ud83d\udc1b Bug\r\n\r\nThe functions [here](https://github.com/facebookresearch/CompilerGym/blob/e248330d2475fbcdf473cc3df951f25b5eaf4945/compiler_gym/util/statistics.py#L8) says they take `iterable` as inputs. However, `np.asarray` actually take `array_like`.\r\n\r\n[Quote:\r\n](https://numpy.org/doc/stable/reference/generated/numpy.asarray.html)\r\n\r\n> Input data, in any form that can be converted to an array. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists and ndarrays.\r\n\r\ne.g.\r\n```python\r\ngeometric_mean(i for i in range(10))\r\n```\r\nThis will fail because though it's an `iterable`, it's not an `array_like`.\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\nimport numpy as np\n\n\ndef geometric_mean(iterable):\n \"\"\"Zero-length-safe geometric mean.\"\"\"\n values = np.asarray(iterable)\n if not values.size:\n return 0\n # Shortcut to return 0 when any element of the input is not positive.\n if not np.all(values > 0):\n return 0\n a = np.log(values)\n return np.exp(a.sum() / len(a))\n\n\ndef arithmetic_mean(iterable):\n \"\"\"Zero-length-safe arithmetic mean.\"\"\"\n values = np.asarray(iterable)\n if not values.size:\n return 0\n return values.mean()\n\n\ndef stdev(iterable):\n \"\"\"Zero-length-safe standard deviation.\"\"\"\n values = np.asarray(iterable)\n if not values.size:\n return 0\n return values.std()\n", "path": "compiler_gym/util/statistics.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. 
and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\nimport numpy as np\n\n\ndef geometric_mean(array_like):\n \"\"\"Zero-length-safe geometric mean.\"\"\"\n values = np.asarray(array_like)\n if not values.size:\n return 0\n # Shortcut to return 0 when any element of the input is not positive.\n if not np.all(values > 0):\n return 0\n a = np.log(values)\n return np.exp(a.sum() / len(a))\n\n\ndef arithmetic_mean(array_like):\n \"\"\"Zero-length-safe arithmetic mean.\"\"\"\n values = np.asarray(array_like)\n if not values.size:\n return 0\n return values.mean()\n\n\ndef stdev(array_like):\n \"\"\"Zero-length-safe standard deviation.\"\"\"\n values = np.asarray(array_like)\n if not values.size:\n return 0\n return values.std()\n", "path": "compiler_gym/util/statistics.py"}]}
| 723 | 260 |
gh_patches_debug_26408
|
rasdani/github-patches
|
git_diff
|
mars-project__mars-646
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
We should use numpy.fft only because it is faster than scipy.fftpack
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
Maybe at some point in history, scipy's fftpack was faster than numpy's fft module, so we optimized the fft execution such that, if scipy is installed, scipy's fftpack is used to calculate the fft. However, I recently found that numpy's fft is clearly faster than scipy's fftpack. Sample code is shown below.
```python
In [1]: N = 1600000
In [2]: import numpy as np
In [3]: a = np.random.rand(N, 10)
In [4]: %timeit np.fft.fft(a)
118 ms ± 1.33 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [6]: import scipy.fftpack as sfft
In [7]: %timeit sfft.fft(a)
290 ms ± 3.65 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [9]: np.testing.assert_allclose(np.fft.fft(a), sfft.fft(a))
```
Hence, I suggest using only numpy to calculate the fft.
--- END ISSUE ---
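A rough sketch of what the simplified dispatch could look like if the scipy path is dropped (this assumes the existing convention that op class names are prefixed with `Tensor`; it is an illustration, not the final patch):

```python
def _get_fft_func(op, xp):
    # op class names all start with "Tensor"; the remainder matches the
    # function name in the array module's fft namespace (fft, ifft, rfftn, ...)
    fun_name = type(op).__name__.lower()[6:]
    return getattr(xp.fft, fun_name)
```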
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mars/tensor/fft/core.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2018 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 from collections import Iterable
18
19 from ...compat import izip
20 from ...serialize import ValueType, KeyField, StringField, Int32Field, \
21 Int64Field, ListField
22 from ..utils import validate_axis, decide_chunk_sizes, recursive_tile
23 from ..operands import TensorHasInput, TensorOperandMixin
24 import numpy as np
25 from ..array_utils import get_array_module
26
27 try:
28 import scipy.fftpack as scifft
29 except ImportError: # pragma: no cover
30 scifft = None
31
32
33 class TensorFFTBaseMixin(TensorOperandMixin):
34 __slots__ = ()
35
36 @classmethod
37 def _get_shape(cls, op, shape):
38 raise NotImplementedError
39
40 @classmethod
41 def _tile_fft(cls, op, axes):
42 in_tensor = op.inputs[0]
43 out_tensor = op.outputs[0]
44
45 if any(in_tensor.chunk_shape[axis] != 1 for axis in axes):
46 # fft requires only 1 chunk for the specified axis, so we do rechunk first
47 chunks = {validate_axis(in_tensor.ndim, axis): in_tensor.shape[axis] for axis in axes}
48 new_chunks = decide_chunk_sizes(in_tensor.shape, chunks, in_tensor.dtype.itemsize)
49 in_tensor = in_tensor.rechunk(new_chunks).single_tiles()
50
51 out_chunks = []
52 for c in in_tensor.chunks:
53 chunk_op = op.copy().reset_key()
54 chunk_shape = cls._get_shape(op, c.shape)
55 out_chunk = chunk_op.new_chunk([c], shape=chunk_shape,
56 index=c.index, order=out_tensor.order)
57 out_chunks.append(out_chunk)
58
59 nsplits = [tuple(c.shape[i] for c in out_chunks
60 if all(idx == 0 for j, idx in enumerate(c.index) if j != i))
61 for i in range(len(out_chunks[0].shape))]
62 new_op = op.copy()
63 return new_op.new_tensors(op.inputs, out_tensor.shape, order=out_tensor.order,
64 chunks=out_chunks, nsplits=nsplits)
65
66 def __call__(self, a, order=None):
67 shape = self._get_shape(self, a.shape)
68 order = a.order if order is None else order
69 return self.new_tensor([a], shape, order=order)
70
71
72 class TensorFFTMixin(TensorFFTBaseMixin):
73 __slots__ = ()
74
75 @classmethod
76 def tile(cls, op):
77 return cls._tile_fft(op, [op.axis])
78
79
80 class TensorComplexFFTMixin(TensorFFTMixin):
81 @classmethod
82 def _get_shape(cls, op, shape):
83 new_shape = list(shape)
84 if op.n is not None:
85 new_shape[op.axis] = op.n
86 return tuple(new_shape)
87
88
89 def validate_fft(tensor, axis=-1, norm=None):
90 validate_axis(tensor.ndim, axis)
91 if norm is not None and norm not in ('ortho',):
92 raise ValueError('Invalid norm value {0}, should be None or "ortho"'.format(norm))
93
94
95 class TensorFFTNMixin(TensorFFTBaseMixin):
96 @classmethod
97 def tile(cls, op):
98 return cls._tile_fft(op, op.axes)
99
100 @staticmethod
101 def _merge_shape(op, shape):
102 new_shape = list(shape)
103 if op.shape is not None:
104 for ss, axis in izip(op.shape, op.axes):
105 new_shape[axis] = ss
106 return new_shape
107
108
109 class TensorComplexFFTNMixin(TensorFFTNMixin):
110 @classmethod
111 def _get_shape(cls, op, shape):
112 return tuple(cls._merge_shape(op, shape))
113
114
115 class TensorRealFFTNMixin(TensorFFTNMixin):
116 @classmethod
117 def _get_shape(cls, op, shape):
118 new_shape = cls._merge_shape(op, shape)
119 new_shape[op.axes[-1]] = new_shape[op.axes[-1]] // 2 + 1
120 return tuple(new_shape)
121
122
123 class TensorRealIFFTNMixin(TensorFFTNMixin):
124 @classmethod
125 def _get_shape(cls, op, shape):
126 new_shape = list(shape)
127 new_shape[op.axes[-1]] = 2 * (new_shape[op.axes[-1]] - 1)
128 return tuple(cls._merge_shape(op, new_shape))
129
130
131 def validate_fftn(tensor, s=None, axes=None, norm=None):
132 if axes is None:
133 if s is None:
134 axes = tuple(range(tensor.ndim))
135 else:
136 axes = tuple(range(len(s)))
137 else:
138 for axis in axes:
139 validate_axis(tensor.ndim, axis)
140 if len(set(axes)) < len(axes):
141 raise ValueError('Duplicate axes not allowed')
142
143 if norm is not None and norm not in ('ortho',):
144 raise ValueError('Invalid norm value {0}, should be None or "ortho"'.format(norm))
145
146 return axes
147
148
149 class TensorFFTShiftMixin(TensorOperandMixin):
150 __slots__ = ()
151
152 @classmethod
153 def _is_inverse(cls):
154 return False
155
156 @classmethod
157 def _process_axes(cls, x, axes):
158 if axes is None:
159 axes = tuple(range(x.ndim))
160 elif isinstance(axes, Iterable):
161 axes = tuple(axes)
162 else:
163 axes = (axes,)
164
165 return axes
166
167 @classmethod
168 def tile(cls, op):
169 from ..merge import concatenate
170
171 axes = op.axes
172 in_tensor = op.input
173 is_inverse = cls._is_inverse()
174
175 x = in_tensor
176 for axis in axes:
177 size = in_tensor.shape[axis]
178 slice_on = (size + 1) // 2 if not is_inverse else size // 2
179 slc1 = [slice(None)] * axis + [slice(slice_on)]
180 slc2 = [slice(None)] * axis + [slice(slice_on, None)]
181 x = concatenate([x[slc2], x[slc1]], axis=axis)
182
183 recursive_tile(x)
184 new_op = op.copy()
185 return new_op.new_tensors(op.inputs, op.outputs[0].shape,
186 chunks=x.chunks, nsplits=x.nsplits)
187
188
189 class TensorDiscreteFourierTransform(TensorHasInput):
190 __slots__ = ()
191
192
193 class TensorBaseFFT(TensorDiscreteFourierTransform):
194 _input = KeyField('input')
195 _norm = StringField('norm')
196
197 @property
198 def norm(self):
199 return getattr(self, '_norm', None)
200
201
202 class TensorBaseSingleDimensionFFT(TensorBaseFFT):
203 _n = Int64Field('n')
204 _axis = Int32Field('axis')
205
206 @property
207 def n(self):
208 return self._n
209
210 @property
211 def axis(self):
212 return self._axis
213
214 @classmethod
215 def execute(cls, ctx, op):
216 a = ctx[op.inputs[0].key]
217 xp = get_array_module(a)
218 fun = _get_fft_func(op, xp)
219 res = fun(a, n=op.n, axis=op.axis, norm=op.norm)
220 if res.dtype != op.dtype:
221 res = res.astype(op.dtype)
222 ctx[op.outputs[0].key] = res
223
224
225 class TensorBaseMultipleDimensionFFT(TensorBaseFFT):
226 _shape = ListField('shape', ValueType.int64)
227 _axes = ListField('axes', ValueType.int32)
228
229 @property
230 def shape(self):
231 return self._shape
232
233 @property
234 def axes(self):
235 return self._axes
236
237 @classmethod
238 def execute(cls, ctx, op):
239 a = ctx[op.inputs[0].key]
240 xp = get_array_module(a)
241 fun = _get_fft_func(op, xp)
242 res = fun(a, s=op.shape, axes=op.axes, norm=op.norm)
243 if res.dtype != op.dtype:
244 res = res.astype(op.dtype)
245 ctx[op.outputs[0].key] = res
246
247
248 def _get_fft_func(op, xp):
249 from .. import fft as fftop
250
251 fun_name = type(op).__name__.lower()[6:] # all op starts with tensor
252 if type(op) in (fftop.TensorFFT, fftop.TensorIFFT, fftop.TensorFFT2, fftop.TensorIFFT2,
253 fftop.TensorFFTN, fftop.TensorIFFTN):
254 if xp is np and scifft and op.norm is None:
255 def f(*args, **kwargs):
256 kwargs.pop('norm', None)
257 if 's' in kwargs:
258 kwargs['shape'] = kwargs.pop('s', None)
259 return getattr(scifft, fun_name)(*args, **kwargs)
260
261 return f
262 else:
263 return getattr(xp.fft, fun_name)
264 else:
265 return getattr(xp.fft, fun_name)
266
267
268 class TensorStandardFFT(TensorBaseSingleDimensionFFT):
269 pass
270
271
272 class TensorStandardFFTN(TensorBaseMultipleDimensionFFT):
273 pass
274
275
276 class TensorFFTShiftBase(TensorHasInput):
277 _input = KeyField('input')
278 _axes = ListField('axes', ValueType.int32)
279
280 @property
281 def axes(self):
282 return self._axes
283
284
285 class TensorRealFFT(TensorBaseSingleDimensionFFT):
286 pass
287
288
289 class TensorRealFFTN(TensorBaseMultipleDimensionFFT):
290 pass
291
292
293 class TensorHermitianFFT(TensorBaseSingleDimensionFFT):
294 pass
295
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mars/tensor/fft/core.py b/mars/tensor/fft/core.py
--- a/mars/tensor/fft/core.py
+++ b/mars/tensor/fft/core.py
@@ -21,14 +21,8 @@
Int64Field, ListField
from ..utils import validate_axis, decide_chunk_sizes, recursive_tile
from ..operands import TensorHasInput, TensorOperandMixin
-import numpy as np
from ..array_utils import get_array_module
-try:
- import scipy.fftpack as scifft
-except ImportError: # pragma: no cover
- scifft = None
-
class TensorFFTBaseMixin(TensorOperandMixin):
__slots__ = ()
@@ -246,23 +240,8 @@
def _get_fft_func(op, xp):
- from .. import fft as fftop
-
fun_name = type(op).__name__.lower()[6:] # all op starts with tensor
- if type(op) in (fftop.TensorFFT, fftop.TensorIFFT, fftop.TensorFFT2, fftop.TensorIFFT2,
- fftop.TensorFFTN, fftop.TensorIFFTN):
- if xp is np and scifft and op.norm is None:
- def f(*args, **kwargs):
- kwargs.pop('norm', None)
- if 's' in kwargs:
- kwargs['shape'] = kwargs.pop('s', None)
- return getattr(scifft, fun_name)(*args, **kwargs)
-
- return f
- else:
- return getattr(xp.fft, fun_name)
- else:
- return getattr(xp.fft, fun_name)
+ return getattr(xp.fft, fun_name)
class TensorStandardFFT(TensorBaseSingleDimensionFFT):
|
{"golden_diff": "diff --git a/mars/tensor/fft/core.py b/mars/tensor/fft/core.py\n--- a/mars/tensor/fft/core.py\n+++ b/mars/tensor/fft/core.py\n@@ -21,14 +21,8 @@\n Int64Field, ListField\n from ..utils import validate_axis, decide_chunk_sizes, recursive_tile\n from ..operands import TensorHasInput, TensorOperandMixin\n-import numpy as np\n from ..array_utils import get_array_module\n \n-try:\n- import scipy.fftpack as scifft\n-except ImportError: # pragma: no cover\n- scifft = None\n-\n \n class TensorFFTBaseMixin(TensorOperandMixin):\n __slots__ = ()\n@@ -246,23 +240,8 @@\n \n \n def _get_fft_func(op, xp):\n- from .. import fft as fftop\n-\n fun_name = type(op).__name__.lower()[6:] # all op starts with tensor\n- if type(op) in (fftop.TensorFFT, fftop.TensorIFFT, fftop.TensorFFT2, fftop.TensorIFFT2,\n- fftop.TensorFFTN, fftop.TensorIFFTN):\n- if xp is np and scifft and op.norm is None:\n- def f(*args, **kwargs):\n- kwargs.pop('norm', None)\n- if 's' in kwargs:\n- kwargs['shape'] = kwargs.pop('s', None)\n- return getattr(scifft, fun_name)(*args, **kwargs)\n-\n- return f\n- else:\n- return getattr(xp.fft, fun_name)\n- else:\n- return getattr(xp.fft, fun_name)\n+ return getattr(xp.fft, fun_name)\n \n \n class TensorStandardFFT(TensorBaseSingleDimensionFFT):\n", "issue": "We should use numpy.fft only because it is faster than scipy.fftpack\n<!--\r\nThank you for your contribution!\r\n\r\nPlease review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.\r\n-->\r\n\r\nMaybe at a point in the history, scipy's fftpack is faster than numpy's fft module. Thus we do some optimization in the execution of fft that if scipy is installed, scipy's fftpack would be used to calculate fft. However, recently, I found that numpy's fft is obviously faster than scipy's fftpack. Sample code is shown below.\r\n\r\n```python\r\nIn [1]: N = 1600000 \r\n\r\nIn [2]: import numpy as np \r\n\r\nIn [3]: a = np.random.rand(N, 10) \r\n\r\nIn [4]: %timeit np.fft.fft(a) \r\n118 ms \u00b1 1.33 ms per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each)\r\n\r\nIn [6]: import scipy.fftpack as sfft \r\n\r\nIn [7]: %timeit sfft.fft(a) \r\n290 ms \u00b1 3.65 ms per loop (mean \u00b1 std. dev. 
of 7 runs, 1 loop each)\r\n\r\nIn [9]: np.testing.assert_allclose(np.fft.fft(a), sfft.fft(a))\r\n```\r\n\r\nHence, I suggest to use numpy only to calculate fft.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom collections import Iterable\n\nfrom ...compat import izip\nfrom ...serialize import ValueType, KeyField, StringField, Int32Field, \\\n Int64Field, ListField\nfrom ..utils import validate_axis, decide_chunk_sizes, recursive_tile\nfrom ..operands import TensorHasInput, TensorOperandMixin\nimport numpy as np\nfrom ..array_utils import get_array_module\n\ntry:\n import scipy.fftpack as scifft\nexcept ImportError: # pragma: no cover\n scifft = None\n\n\nclass TensorFFTBaseMixin(TensorOperandMixin):\n __slots__ = ()\n\n @classmethod\n def _get_shape(cls, op, shape):\n raise NotImplementedError\n\n @classmethod\n def _tile_fft(cls, op, axes):\n in_tensor = op.inputs[0]\n out_tensor = op.outputs[0]\n\n if any(in_tensor.chunk_shape[axis] != 1 for axis in axes):\n # fft requires only 1 chunk for the specified axis, so we do rechunk first\n chunks = {validate_axis(in_tensor.ndim, axis): in_tensor.shape[axis] for axis in axes}\n new_chunks = decide_chunk_sizes(in_tensor.shape, chunks, in_tensor.dtype.itemsize)\n in_tensor = in_tensor.rechunk(new_chunks).single_tiles()\n\n out_chunks = []\n for c in in_tensor.chunks:\n chunk_op = op.copy().reset_key()\n chunk_shape = cls._get_shape(op, c.shape)\n out_chunk = chunk_op.new_chunk([c], shape=chunk_shape,\n index=c.index, order=out_tensor.order)\n out_chunks.append(out_chunk)\n\n nsplits = [tuple(c.shape[i] for c in out_chunks\n if all(idx == 0 for j, idx in enumerate(c.index) if j != i))\n for i in range(len(out_chunks[0].shape))]\n new_op = op.copy()\n return new_op.new_tensors(op.inputs, out_tensor.shape, order=out_tensor.order,\n chunks=out_chunks, nsplits=nsplits)\n\n def __call__(self, a, order=None):\n shape = self._get_shape(self, a.shape)\n order = a.order if order is None else order\n return self.new_tensor([a], shape, order=order)\n\n\nclass TensorFFTMixin(TensorFFTBaseMixin):\n __slots__ = ()\n\n @classmethod\n def tile(cls, op):\n return cls._tile_fft(op, [op.axis])\n\n\nclass TensorComplexFFTMixin(TensorFFTMixin):\n @classmethod\n def _get_shape(cls, op, shape):\n new_shape = list(shape)\n if op.n is not None:\n new_shape[op.axis] = op.n\n return tuple(new_shape)\n\n\ndef validate_fft(tensor, axis=-1, norm=None):\n validate_axis(tensor.ndim, axis)\n if norm is not None and norm not in ('ortho',):\n raise ValueError('Invalid norm value {0}, should be None or \"ortho\"'.format(norm))\n\n\nclass TensorFFTNMixin(TensorFFTBaseMixin):\n @classmethod\n def tile(cls, op):\n return cls._tile_fft(op, op.axes)\n\n @staticmethod\n def _merge_shape(op, shape):\n new_shape = list(shape)\n if op.shape is not None:\n for ss, axis in izip(op.shape, op.axes):\n new_shape[axis] = ss\n return new_shape\n\n\nclass 
TensorComplexFFTNMixin(TensorFFTNMixin):\n @classmethod\n def _get_shape(cls, op, shape):\n return tuple(cls._merge_shape(op, shape))\n\n\nclass TensorRealFFTNMixin(TensorFFTNMixin):\n @classmethod\n def _get_shape(cls, op, shape):\n new_shape = cls._merge_shape(op, shape)\n new_shape[op.axes[-1]] = new_shape[op.axes[-1]] // 2 + 1\n return tuple(new_shape)\n\n\nclass TensorRealIFFTNMixin(TensorFFTNMixin):\n @classmethod\n def _get_shape(cls, op, shape):\n new_shape = list(shape)\n new_shape[op.axes[-1]] = 2 * (new_shape[op.axes[-1]] - 1)\n return tuple(cls._merge_shape(op, new_shape))\n\n\ndef validate_fftn(tensor, s=None, axes=None, norm=None):\n if axes is None:\n if s is None:\n axes = tuple(range(tensor.ndim))\n else:\n axes = tuple(range(len(s)))\n else:\n for axis in axes:\n validate_axis(tensor.ndim, axis)\n if len(set(axes)) < len(axes):\n raise ValueError('Duplicate axes not allowed')\n\n if norm is not None and norm not in ('ortho',):\n raise ValueError('Invalid norm value {0}, should be None or \"ortho\"'.format(norm))\n\n return axes\n\n\nclass TensorFFTShiftMixin(TensorOperandMixin):\n __slots__ = ()\n\n @classmethod\n def _is_inverse(cls):\n return False\n\n @classmethod\n def _process_axes(cls, x, axes):\n if axes is None:\n axes = tuple(range(x.ndim))\n elif isinstance(axes, Iterable):\n axes = tuple(axes)\n else:\n axes = (axes,)\n\n return axes\n\n @classmethod\n def tile(cls, op):\n from ..merge import concatenate\n\n axes = op.axes\n in_tensor = op.input\n is_inverse = cls._is_inverse()\n\n x = in_tensor\n for axis in axes:\n size = in_tensor.shape[axis]\n slice_on = (size + 1) // 2 if not is_inverse else size // 2\n slc1 = [slice(None)] * axis + [slice(slice_on)]\n slc2 = [slice(None)] * axis + [slice(slice_on, None)]\n x = concatenate([x[slc2], x[slc1]], axis=axis)\n\n recursive_tile(x)\n new_op = op.copy()\n return new_op.new_tensors(op.inputs, op.outputs[0].shape,\n chunks=x.chunks, nsplits=x.nsplits)\n\n\nclass TensorDiscreteFourierTransform(TensorHasInput):\n __slots__ = ()\n\n\nclass TensorBaseFFT(TensorDiscreteFourierTransform):\n _input = KeyField('input')\n _norm = StringField('norm')\n\n @property\n def norm(self):\n return getattr(self, '_norm', None)\n\n\nclass TensorBaseSingleDimensionFFT(TensorBaseFFT):\n _n = Int64Field('n')\n _axis = Int32Field('axis')\n\n @property\n def n(self):\n return self._n\n\n @property\n def axis(self):\n return self._axis\n\n @classmethod\n def execute(cls, ctx, op):\n a = ctx[op.inputs[0].key]\n xp = get_array_module(a)\n fun = _get_fft_func(op, xp)\n res = fun(a, n=op.n, axis=op.axis, norm=op.norm)\n if res.dtype != op.dtype:\n res = res.astype(op.dtype)\n ctx[op.outputs[0].key] = res\n\n\nclass TensorBaseMultipleDimensionFFT(TensorBaseFFT):\n _shape = ListField('shape', ValueType.int64)\n _axes = ListField('axes', ValueType.int32)\n\n @property\n def shape(self):\n return self._shape\n\n @property\n def axes(self):\n return self._axes\n\n @classmethod\n def execute(cls, ctx, op):\n a = ctx[op.inputs[0].key]\n xp = get_array_module(a)\n fun = _get_fft_func(op, xp)\n res = fun(a, s=op.shape, axes=op.axes, norm=op.norm)\n if res.dtype != op.dtype:\n res = res.astype(op.dtype)\n ctx[op.outputs[0].key] = res\n\n\ndef _get_fft_func(op, xp):\n from .. 
import fft as fftop\n\n fun_name = type(op).__name__.lower()[6:] # all op starts with tensor\n if type(op) in (fftop.TensorFFT, fftop.TensorIFFT, fftop.TensorFFT2, fftop.TensorIFFT2,\n fftop.TensorFFTN, fftop.TensorIFFTN):\n if xp is np and scifft and op.norm is None:\n def f(*args, **kwargs):\n kwargs.pop('norm', None)\n if 's' in kwargs:\n kwargs['shape'] = kwargs.pop('s', None)\n return getattr(scifft, fun_name)(*args, **kwargs)\n\n return f\n else:\n return getattr(xp.fft, fun_name)\n else:\n return getattr(xp.fft, fun_name)\n\n\nclass TensorStandardFFT(TensorBaseSingleDimensionFFT):\n pass\n\n\nclass TensorStandardFFTN(TensorBaseMultipleDimensionFFT):\n pass\n\n\nclass TensorFFTShiftBase(TensorHasInput):\n _input = KeyField('input')\n _axes = ListField('axes', ValueType.int32)\n\n @property\n def axes(self):\n return self._axes\n\n\nclass TensorRealFFT(TensorBaseSingleDimensionFFT):\n pass\n\n\nclass TensorRealFFTN(TensorBaseMultipleDimensionFFT):\n pass\n\n\nclass TensorHermitianFFT(TensorBaseSingleDimensionFFT):\n pass\n", "path": "mars/tensor/fft/core.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom collections import Iterable\n\nfrom ...compat import izip\nfrom ...serialize import ValueType, KeyField, StringField, Int32Field, \\\n Int64Field, ListField\nfrom ..utils import validate_axis, decide_chunk_sizes, recursive_tile\nfrom ..operands import TensorHasInput, TensorOperandMixin\nfrom ..array_utils import get_array_module\n\n\nclass TensorFFTBaseMixin(TensorOperandMixin):\n __slots__ = ()\n\n @classmethod\n def _get_shape(cls, op, shape):\n raise NotImplementedError\n\n @classmethod\n def _tile_fft(cls, op, axes):\n in_tensor = op.inputs[0]\n out_tensor = op.outputs[0]\n\n if any(in_tensor.chunk_shape[axis] != 1 for axis in axes):\n # fft requires only 1 chunk for the specified axis, so we do rechunk first\n chunks = {validate_axis(in_tensor.ndim, axis): in_tensor.shape[axis] for axis in axes}\n new_chunks = decide_chunk_sizes(in_tensor.shape, chunks, in_tensor.dtype.itemsize)\n in_tensor = in_tensor.rechunk(new_chunks).single_tiles()\n\n out_chunks = []\n for c in in_tensor.chunks:\n chunk_op = op.copy().reset_key()\n chunk_shape = cls._get_shape(op, c.shape)\n out_chunk = chunk_op.new_chunk([c], shape=chunk_shape,\n index=c.index, order=out_tensor.order)\n out_chunks.append(out_chunk)\n\n nsplits = [tuple(c.shape[i] for c in out_chunks\n if all(idx == 0 for j, idx in enumerate(c.index) if j != i))\n for i in range(len(out_chunks[0].shape))]\n new_op = op.copy()\n return new_op.new_tensors(op.inputs, out_tensor.shape, order=out_tensor.order,\n chunks=out_chunks, nsplits=nsplits)\n\n def __call__(self, a, order=None):\n shape = self._get_shape(self, a.shape)\n order = a.order if order is None else order\n return self.new_tensor([a], shape, order=order)\n\n\nclass TensorFFTMixin(TensorFFTBaseMixin):\n __slots__ = ()\n\n @classmethod\n def 
tile(cls, op):\n return cls._tile_fft(op, [op.axis])\n\n\nclass TensorComplexFFTMixin(TensorFFTMixin):\n @classmethod\n def _get_shape(cls, op, shape):\n new_shape = list(shape)\n if op.n is not None:\n new_shape[op.axis] = op.n\n return tuple(new_shape)\n\n\ndef validate_fft(tensor, axis=-1, norm=None):\n validate_axis(tensor.ndim, axis)\n if norm is not None and norm not in ('ortho',):\n raise ValueError('Invalid norm value {0}, should be None or \"ortho\"'.format(norm))\n\n\nclass TensorFFTNMixin(TensorFFTBaseMixin):\n @classmethod\n def tile(cls, op):\n return cls._tile_fft(op, op.axes)\n\n @staticmethod\n def _merge_shape(op, shape):\n new_shape = list(shape)\n if op.shape is not None:\n for ss, axis in izip(op.shape, op.axes):\n new_shape[axis] = ss\n return new_shape\n\n\nclass TensorComplexFFTNMixin(TensorFFTNMixin):\n @classmethod\n def _get_shape(cls, op, shape):\n return tuple(cls._merge_shape(op, shape))\n\n\nclass TensorRealFFTNMixin(TensorFFTNMixin):\n @classmethod\n def _get_shape(cls, op, shape):\n new_shape = cls._merge_shape(op, shape)\n new_shape[op.axes[-1]] = new_shape[op.axes[-1]] // 2 + 1\n return tuple(new_shape)\n\n\nclass TensorRealIFFTNMixin(TensorFFTNMixin):\n @classmethod\n def _get_shape(cls, op, shape):\n new_shape = list(shape)\n new_shape[op.axes[-1]] = 2 * (new_shape[op.axes[-1]] - 1)\n return tuple(cls._merge_shape(op, new_shape))\n\n\ndef validate_fftn(tensor, s=None, axes=None, norm=None):\n if axes is None:\n if s is None:\n axes = tuple(range(tensor.ndim))\n else:\n axes = tuple(range(len(s)))\n else:\n for axis in axes:\n validate_axis(tensor.ndim, axis)\n if len(set(axes)) < len(axes):\n raise ValueError('Duplicate axes not allowed')\n\n if norm is not None and norm not in ('ortho',):\n raise ValueError('Invalid norm value {0}, should be None or \"ortho\"'.format(norm))\n\n return axes\n\n\nclass TensorFFTShiftMixin(TensorOperandMixin):\n __slots__ = ()\n\n @classmethod\n def _is_inverse(cls):\n return False\n\n @classmethod\n def _process_axes(cls, x, axes):\n if axes is None:\n axes = tuple(range(x.ndim))\n elif isinstance(axes, Iterable):\n axes = tuple(axes)\n else:\n axes = (axes,)\n\n return axes\n\n @classmethod\n def tile(cls, op):\n from ..merge import concatenate\n\n axes = op.axes\n in_tensor = op.input\n is_inverse = cls._is_inverse()\n\n x = in_tensor\n for axis in axes:\n size = in_tensor.shape[axis]\n slice_on = (size + 1) // 2 if not is_inverse else size // 2\n slc1 = [slice(None)] * axis + [slice(slice_on)]\n slc2 = [slice(None)] * axis + [slice(slice_on, None)]\n x = concatenate([x[slc2], x[slc1]], axis=axis)\n\n recursive_tile(x)\n new_op = op.copy()\n return new_op.new_tensors(op.inputs, op.outputs[0].shape,\n chunks=x.chunks, nsplits=x.nsplits)\n\n\nclass TensorDiscreteFourierTransform(TensorHasInput):\n __slots__ = ()\n\n\nclass TensorBaseFFT(TensorDiscreteFourierTransform):\n _input = KeyField('input')\n _norm = StringField('norm')\n\n @property\n def norm(self):\n return getattr(self, '_norm', None)\n\n\nclass TensorBaseSingleDimensionFFT(TensorBaseFFT):\n _n = Int64Field('n')\n _axis = Int32Field('axis')\n\n @property\n def n(self):\n return self._n\n\n @property\n def axis(self):\n return self._axis\n\n @classmethod\n def execute(cls, ctx, op):\n a = ctx[op.inputs[0].key]\n xp = get_array_module(a)\n fun = _get_fft_func(op, xp)\n res = fun(a, n=op.n, axis=op.axis, norm=op.norm)\n if res.dtype != op.dtype:\n res = res.astype(op.dtype)\n ctx[op.outputs[0].key] = res\n\n\nclass TensorBaseMultipleDimensionFFT(TensorBaseFFT):\n 
_shape = ListField('shape', ValueType.int64)\n _axes = ListField('axes', ValueType.int32)\n\n @property\n def shape(self):\n return self._shape\n\n @property\n def axes(self):\n return self._axes\n\n @classmethod\n def execute(cls, ctx, op):\n a = ctx[op.inputs[0].key]\n xp = get_array_module(a)\n fun = _get_fft_func(op, xp)\n res = fun(a, s=op.shape, axes=op.axes, norm=op.norm)\n if res.dtype != op.dtype:\n res = res.astype(op.dtype)\n ctx[op.outputs[0].key] = res\n\n\ndef _get_fft_func(op, xp):\n fun_name = type(op).__name__.lower()[6:] # all op starts with tensor\n return getattr(xp.fft, fun_name)\n\n\nclass TensorStandardFFT(TensorBaseSingleDimensionFFT):\n pass\n\n\nclass TensorStandardFFTN(TensorBaseMultipleDimensionFFT):\n pass\n\n\nclass TensorFFTShiftBase(TensorHasInput):\n _input = KeyField('input')\n _axes = ListField('axes', ValueType.int32)\n\n @property\n def axes(self):\n return self._axes\n\n\nclass TensorRealFFT(TensorBaseSingleDimensionFFT):\n pass\n\n\nclass TensorRealFFTN(TensorBaseMultipleDimensionFFT):\n pass\n\n\nclass TensorHermitianFFT(TensorBaseSingleDimensionFFT):\n pass\n", "path": "mars/tensor/fft/core.py"}]}
| 3,539 | 394 |
gh_patches_debug_41668
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-855
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Goodreads import floods activity streams
When a user does a Goodreads import, the activitystreams manager doesn't check whether a status's publication date is older than the oldest status already in a feed, so it adds all the imported statuses to the feeds, pushing out newer statuses.
--- END ISSUE ---
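One possible direction, sketched below with redis-py (the key name, `MAX_STREAM_LENGTH` value, and helper names are placeholders, not the project's actual API): rank feed entries in a sorted set by publication date, so statuses backdated by an import sort behind recent activity instead of displacing it.

```python
import redis

r = redis.Redis()
MAX_STREAM_LENGTH = 200  # placeholder value

def add_status(stream_id, status):
    # score each entry by its published date so old, imported statuses
    # land behind newer ones instead of on top of the feed
    r.zadd(stream_id, {status.id: status.published_date.timestamp()})
    # keep only the newest MAX_STREAM_LENGTH entries (rank 0 is the oldest)
    r.zremrangebyrank(stream_id, 0, -(MAX_STREAM_LENGTH + 1))

def get_stream(stream_id):
    # newest first
    return r.zrevrange(stream_id, 0, -1)
```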
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/activitystreams.py`
Content:
```
1 """ access the activity streams stored in redis """
2 from abc import ABC
3 from django.dispatch import receiver
4 from django.db.models import signals, Q
5 import redis
6
7 from bookwyrm import models, settings
8 from bookwyrm.views.helpers import privacy_filter
9
10 r = redis.Redis(
11 host=settings.REDIS_ACTIVITY_HOST, port=settings.REDIS_ACTIVITY_PORT, db=0
12 )
13
14
15 class ActivityStream(ABC):
16 """ a category of activity stream (like home, local, federated) """
17
18 def stream_id(self, user):
19 """ the redis key for this user's instance of this stream """
20 return "{}-{}".format(user.id, self.key)
21
22 def unread_id(self, user):
23 """ the redis key for this user's unread count for this stream """
24 return "{}-unread".format(self.stream_id(user))
25
26 def add_status(self, status):
27 """ add a status to users' feeds """
28 # we want to do this as a bulk operation, hence "pipeline"
29 pipeline = r.pipeline()
30 for user in self.stream_users(status):
31 # add the status to the feed
32 pipeline.lpush(self.stream_id(user), status.id)
33 pipeline.ltrim(self.stream_id(user), 0, settings.MAX_STREAM_LENGTH)
34
35 # add to the unread status count
36 pipeline.incr(self.unread_id(user))
37 # and go!
38 pipeline.execute()
39
40 def remove_status(self, status):
41 """ remove a status from all feeds """
42 pipeline = r.pipeline()
43 for user in self.stream_users(status):
44 pipeline.lrem(self.stream_id(user), -1, status.id)
45 pipeline.execute()
46
47 def add_user_statuses(self, viewer, user):
48 """ add a user's statuses to another user's feed """
49 pipeline = r.pipeline()
50 for status in user.status_set.all()[: settings.MAX_STREAM_LENGTH]:
51 pipeline.lpush(self.stream_id(viewer), status.id)
52 pipeline.execute()
53
54 def remove_user_statuses(self, viewer, user):
55 """ remove a user's status from another user's feed """
56 pipeline = r.pipeline()
57 for status in user.status_set.all()[: settings.MAX_STREAM_LENGTH]:
58 pipeline.lrem(self.stream_id(viewer), -1, status.id)
59 pipeline.execute()
60
61 def get_activity_stream(self, user):
62 """ load the ids for statuses to be displayed """
63 # clear unreads for this feed
64 r.set(self.unread_id(user), 0)
65
66 statuses = r.lrange(self.stream_id(user), 0, -1)
67 return (
68 models.Status.objects.select_subclasses()
69 .filter(id__in=statuses)
70 .order_by("-published_date")
71 )
72
73 def get_unread_count(self, user):
74 """ get the unread status count for this user's feed """
75 return int(r.get(self.unread_id(user)))
76
77 def populate_stream(self, user):
78 """ go from zero to a timeline """
79 pipeline = r.pipeline()
80 statuses = self.stream_statuses(user)
81
82 stream_id = self.stream_id(user)
83 for status in statuses.all()[: settings.MAX_STREAM_LENGTH]:
84 pipeline.lpush(stream_id, status.id)
85 pipeline.execute()
86
87 def stream_users(self, status): # pylint: disable=no-self-use
88 """ given a status, what users should see it """
89 # direct messages don't appeard in feeds, direct comments/reviews/etc do
90 if status.privacy == "direct" and status.status_type == "Note":
91 return []
92
93 # everybody who could plausibly see this status
94 audience = models.User.objects.filter(
95 is_active=True,
96 local=True, # we only create feeds for users of this instance
97 ).exclude(
98 Q(id__in=status.user.blocks.all()) | Q(blocks=status.user) # not blocked
99 )
100
101 # only visible to the poster and mentioned users
102 if status.privacy == "direct":
103 audience = audience.filter(
104 Q(id=status.user.id) # if the user is the post's author
105 | Q(id__in=status.mention_users.all()) # if the user is mentioned
106 )
107 # only visible to the poster's followers and tagged users
108 elif status.privacy == "followers":
109 audience = audience.filter(
110 Q(id=status.user.id) # if the user is the post's author
111 | Q(following=status.user) # if the user is following the author
112 )
113 return audience.distinct()
114
115 def stream_statuses(self, user): # pylint: disable=no-self-use
116 """ given a user, what statuses should they see on this stream """
117 return privacy_filter(
118 user,
119 models.Status.objects.select_subclasses(),
120 privacy_levels=["public", "unlisted", "followers"],
121 )
122
123
124 class HomeStream(ActivityStream):
125 """ users you follow """
126
127 key = "home"
128
129 def stream_users(self, status):
130 audience = super().stream_users(status)
131 if not audience:
132 return []
133 return audience.filter(
134 Q(id=status.user.id) # if the user is the post's author
135 | Q(following=status.user) # if the user is following the author
136 ).distinct()
137
138 def stream_statuses(self, user):
139 return privacy_filter(
140 user,
141 models.Status.objects.select_subclasses(),
142 privacy_levels=["public", "unlisted", "followers"],
143 following_only=True,
144 )
145
146
147 class LocalStream(ActivityStream):
148 """ users you follow """
149
150 key = "local"
151
152 def stream_users(self, status):
153 # this stream wants no part in non-public statuses
154 if status.privacy != "public" or not status.user.local:
155 return []
156 return super().stream_users(status)
157
158 def stream_statuses(self, user):
159 # all public statuses by a local user
160 return privacy_filter(
161 user,
162 models.Status.objects.select_subclasses().filter(user__local=True),
163 privacy_levels=["public"],
164 )
165
166
167 class FederatedStream(ActivityStream):
168 """ users you follow """
169
170 key = "federated"
171
172 def stream_users(self, status):
173 # this stream wants no part in non-public statuses
174 if status.privacy != "public":
175 return []
176 return super().stream_users(status)
177
178 def stream_statuses(self, user):
179 return privacy_filter(
180 user,
181 models.Status.objects.select_subclasses(),
182 privacy_levels=["public"],
183 )
184
185
186 streams = {
187 "home": HomeStream(),
188 "local": LocalStream(),
189 "federated": FederatedStream(),
190 }
191
192
193 @receiver(signals.post_save)
194 # pylint: disable=unused-argument
195 def add_status_on_create(sender, instance, created, *args, **kwargs):
196 """ add newly created statuses to activity feeds """
197 # we're only interested in new statuses
198 if not issubclass(sender, models.Status):
199 return
200
201 if instance.deleted:
202 for stream in streams.values():
203 stream.remove_status(instance)
204 return
205
206 if not created:
207 return
208
209 # iterates through Home, Local, Federated
210 for stream in streams.values():
211 stream.add_status(instance)
212
213
214 @receiver(signals.post_delete, sender=models.Boost)
215 # pylint: disable=unused-argument
216 def remove_boost_on_delete(sender, instance, *args, **kwargs):
217 """ boosts are deleted """
218 # we're only interested in new statuses
219 for stream in streams.values():
220 stream.remove_status(instance)
221
222
223 @receiver(signals.post_save, sender=models.UserFollows)
224 # pylint: disable=unused-argument
225 def add_statuses_on_follow(sender, instance, created, *args, **kwargs):
226 """ add a newly followed user's statuses to feeds """
227 if not created or not instance.user_subject.local:
228 return
229 HomeStream().add_user_statuses(instance.user_subject, instance.user_object)
230
231
232 @receiver(signals.post_delete, sender=models.UserFollows)
233 # pylint: disable=unused-argument
234 def remove_statuses_on_unfollow(sender, instance, *args, **kwargs):
235 """ remove statuses from a feed on unfollow """
236 if not instance.user_subject.local:
237 return
238 HomeStream().remove_user_statuses(instance.user_subject, instance.user_object)
239
240
241 @receiver(signals.post_save, sender=models.UserBlocks)
242 # pylint: disable=unused-argument
243 def remove_statuses_on_block(sender, instance, *args, **kwargs):
244 """ remove statuses from all feeds on block """
245 # blocks apply ot all feeds
246 if instance.user_subject.local:
247 for stream in streams.values():
248 stream.remove_user_statuses(instance.user_subject, instance.user_object)
249
250 # and in both directions
251 if instance.user_object.local:
252 for stream in streams.values():
253 stream.remove_user_statuses(instance.user_object, instance.user_subject)
254
255
256 @receiver(signals.post_delete, sender=models.UserBlocks)
257 # pylint: disable=unused-argument
258 def add_statuses_on_unblock(sender, instance, *args, **kwargs):
259 """ remove statuses from all feeds on block """
260 public_streams = [LocalStream(), FederatedStream()]
261 # add statuses back to streams with statuses from anyone
262 if instance.user_subject.local:
263 for stream in public_streams:
264 stream.add_user_statuses(instance.user_subject, instance.user_object)
265
266 # add statuses back to streams with statuses from anyone
267 if instance.user_object.local:
268 for stream in public_streams:
269 stream.add_user_statuses(instance.user_object, instance.user_subject)
270
271
272 @receiver(signals.post_save, sender=models.User)
273 # pylint: disable=unused-argument
274 def populate_feed_on_account_create(sender, instance, created, *args, **kwargs):
275 """ build a user's feeds when they join """
276 if not created or not instance.local:
277 return
278
279 for stream in streams.values():
280 stream.populate_stream(instance)
281
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bookwyrm/activitystreams.py b/bookwyrm/activitystreams.py
--- a/bookwyrm/activitystreams.py
+++ b/bookwyrm/activitystreams.py
@@ -23,15 +23,21 @@
""" the redis key for this user's unread count for this stream """
return "{}-unread".format(self.stream_id(user))
+ def get_value(self, status): # pylint: disable=no-self-use
+ """ the status id and the rank (ie, published date) """
+ return {status.id: status.published_date.timestamp()}
+
def add_status(self, status):
""" add a status to users' feeds """
+ value = self.get_value(status)
# we want to do this as a bulk operation, hence "pipeline"
pipeline = r.pipeline()
for user in self.stream_users(status):
# add the status to the feed
- pipeline.lpush(self.stream_id(user), status.id)
- pipeline.ltrim(self.stream_id(user), 0, settings.MAX_STREAM_LENGTH)
-
+ pipeline.zadd(self.stream_id(user), value)
+ pipeline.zremrangebyrank(
+ self.stream_id(user), settings.MAX_STREAM_LENGTH, -1
+ )
# add to the unread status count
pipeline.incr(self.unread_id(user))
# and go!
@@ -47,8 +53,13 @@
def add_user_statuses(self, viewer, user):
""" add a user's statuses to another user's feed """
pipeline = r.pipeline()
- for status in user.status_set.all()[: settings.MAX_STREAM_LENGTH]:
- pipeline.lpush(self.stream_id(viewer), status.id)
+ statuses = user.status_set.all()[: settings.MAX_STREAM_LENGTH]
+ for status in statuses:
+ pipeline.zadd(self.stream_id(viewer), self.get_value(status))
+ if statuses:
+ pipeline.zremrangebyrank(
+ self.stream_id(user), settings.MAX_STREAM_LENGTH, -1
+ )
pipeline.execute()
def remove_user_statuses(self, viewer, user):
@@ -63,7 +74,7 @@
# clear unreads for this feed
r.set(self.unread_id(user), 0)
- statuses = r.lrange(self.stream_id(user), 0, -1)
+ statuses = r.zrevrange(self.stream_id(user), 0, -1)
return (
models.Status.objects.select_subclasses()
.filter(id__in=statuses)
@@ -81,7 +92,11 @@
stream_id = self.stream_id(user)
for status in statuses.all()[: settings.MAX_STREAM_LENGTH]:
- pipeline.lpush(stream_id, status.id)
+ pipeline.zadd(stream_id, self.get_value(status))
+
+ # only trim the stream if statuses were added
+ if statuses.exists():
+ pipeline.zremrangebyrank(stream_id, settings.MAX_STREAM_LENGTH, -1)
pipeline.execute()
def stream_users(self, status): # pylint: disable=no-self-use
@@ -271,7 +286,7 @@
@receiver(signals.post_save, sender=models.User)
# pylint: disable=unused-argument
-def populate_feed_on_account_create(sender, instance, created, *args, **kwargs):
+def populate_streams_on_account_create(sender, instance, created, *args, **kwargs):
""" build a user's feeds when they join """
if not created or not instance.local:
return
|
{"golden_diff": "diff --git a/bookwyrm/activitystreams.py b/bookwyrm/activitystreams.py\n--- a/bookwyrm/activitystreams.py\n+++ b/bookwyrm/activitystreams.py\n@@ -23,15 +23,21 @@\n \"\"\" the redis key for this user's unread count for this stream \"\"\"\n return \"{}-unread\".format(self.stream_id(user))\n \n+ def get_value(self, status): # pylint: disable=no-self-use\n+ \"\"\" the status id and the rank (ie, published date) \"\"\"\n+ return {status.id: status.published_date.timestamp()}\n+\n def add_status(self, status):\n \"\"\" add a status to users' feeds \"\"\"\n+ value = self.get_value(status)\n # we want to do this as a bulk operation, hence \"pipeline\"\n pipeline = r.pipeline()\n for user in self.stream_users(status):\n # add the status to the feed\n- pipeline.lpush(self.stream_id(user), status.id)\n- pipeline.ltrim(self.stream_id(user), 0, settings.MAX_STREAM_LENGTH)\n-\n+ pipeline.zadd(self.stream_id(user), value)\n+ pipeline.zremrangebyrank(\n+ self.stream_id(user), settings.MAX_STREAM_LENGTH, -1\n+ )\n # add to the unread status count\n pipeline.incr(self.unread_id(user))\n # and go!\n@@ -47,8 +53,13 @@\n def add_user_statuses(self, viewer, user):\n \"\"\" add a user's statuses to another user's feed \"\"\"\n pipeline = r.pipeline()\n- for status in user.status_set.all()[: settings.MAX_STREAM_LENGTH]:\n- pipeline.lpush(self.stream_id(viewer), status.id)\n+ statuses = user.status_set.all()[: settings.MAX_STREAM_LENGTH]\n+ for status in statuses:\n+ pipeline.zadd(self.stream_id(viewer), self.get_value(status))\n+ if statuses:\n+ pipeline.zremrangebyrank(\n+ self.stream_id(user), settings.MAX_STREAM_LENGTH, -1\n+ )\n pipeline.execute()\n \n def remove_user_statuses(self, viewer, user):\n@@ -63,7 +74,7 @@\n # clear unreads for this feed\n r.set(self.unread_id(user), 0)\n \n- statuses = r.lrange(self.stream_id(user), 0, -1)\n+ statuses = r.zrevrange(self.stream_id(user), 0, -1)\n return (\n models.Status.objects.select_subclasses()\n .filter(id__in=statuses)\n@@ -81,7 +92,11 @@\n \n stream_id = self.stream_id(user)\n for status in statuses.all()[: settings.MAX_STREAM_LENGTH]:\n- pipeline.lpush(stream_id, status.id)\n+ pipeline.zadd(stream_id, self.get_value(status))\n+\n+ # only trim the stream if statuses were added\n+ if statuses.exists():\n+ pipeline.zremrangebyrank(stream_id, settings.MAX_STREAM_LENGTH, -1)\n pipeline.execute()\n \n def stream_users(self, status): # pylint: disable=no-self-use\n@@ -271,7 +286,7 @@\n \n @receiver(signals.post_save, sender=models.User)\n # pylint: disable=unused-argument\n-def populate_feed_on_account_create(sender, instance, created, *args, **kwargs):\n+def populate_streams_on_account_create(sender, instance, created, *args, **kwargs):\n \"\"\" build a user's feeds when they join \"\"\"\n if not created or not instance.local:\n return\n", "issue": "Goodreads import floods activity streams\nWhen a user does a goodreads import, the activitstreams manager isn't checking if the publication date is older than the oldest status, and adds all the statuses to the feeds, pushing out newer statuses.\n", "before_files": [{"content": "\"\"\" access the activity streams stored in redis \"\"\"\nfrom abc import ABC\nfrom django.dispatch import receiver\nfrom django.db.models import signals, Q\nimport redis\n\nfrom bookwyrm import models, settings\nfrom bookwyrm.views.helpers import privacy_filter\n\nr = redis.Redis(\n host=settings.REDIS_ACTIVITY_HOST, port=settings.REDIS_ACTIVITY_PORT, db=0\n)\n\n\nclass ActivityStream(ABC):\n \"\"\" a category of activity 
stream (like home, local, federated) \"\"\"\n\n def stream_id(self, user):\n \"\"\" the redis key for this user's instance of this stream \"\"\"\n return \"{}-{}\".format(user.id, self.key)\n\n def unread_id(self, user):\n \"\"\" the redis key for this user's unread count for this stream \"\"\"\n return \"{}-unread\".format(self.stream_id(user))\n\n def add_status(self, status):\n \"\"\" add a status to users' feeds \"\"\"\n # we want to do this as a bulk operation, hence \"pipeline\"\n pipeline = r.pipeline()\n for user in self.stream_users(status):\n # add the status to the feed\n pipeline.lpush(self.stream_id(user), status.id)\n pipeline.ltrim(self.stream_id(user), 0, settings.MAX_STREAM_LENGTH)\n\n # add to the unread status count\n pipeline.incr(self.unread_id(user))\n # and go!\n pipeline.execute()\n\n def remove_status(self, status):\n \"\"\" remove a status from all feeds \"\"\"\n pipeline = r.pipeline()\n for user in self.stream_users(status):\n pipeline.lrem(self.stream_id(user), -1, status.id)\n pipeline.execute()\n\n def add_user_statuses(self, viewer, user):\n \"\"\" add a user's statuses to another user's feed \"\"\"\n pipeline = r.pipeline()\n for status in user.status_set.all()[: settings.MAX_STREAM_LENGTH]:\n pipeline.lpush(self.stream_id(viewer), status.id)\n pipeline.execute()\n\n def remove_user_statuses(self, viewer, user):\n \"\"\" remove a user's status from another user's feed \"\"\"\n pipeline = r.pipeline()\n for status in user.status_set.all()[: settings.MAX_STREAM_LENGTH]:\n pipeline.lrem(self.stream_id(viewer), -1, status.id)\n pipeline.execute()\n\n def get_activity_stream(self, user):\n \"\"\" load the ids for statuses to be displayed \"\"\"\n # clear unreads for this feed\n r.set(self.unread_id(user), 0)\n\n statuses = r.lrange(self.stream_id(user), 0, -1)\n return (\n models.Status.objects.select_subclasses()\n .filter(id__in=statuses)\n .order_by(\"-published_date\")\n )\n\n def get_unread_count(self, user):\n \"\"\" get the unread status count for this user's feed \"\"\"\n return int(r.get(self.unread_id(user)))\n\n def populate_stream(self, user):\n \"\"\" go from zero to a timeline \"\"\"\n pipeline = r.pipeline()\n statuses = self.stream_statuses(user)\n\n stream_id = self.stream_id(user)\n for status in statuses.all()[: settings.MAX_STREAM_LENGTH]:\n pipeline.lpush(stream_id, status.id)\n pipeline.execute()\n\n def stream_users(self, status): # pylint: disable=no-self-use\n \"\"\" given a status, what users should see it \"\"\"\n # direct messages don't appeard in feeds, direct comments/reviews/etc do\n if status.privacy == \"direct\" and status.status_type == \"Note\":\n return []\n\n # everybody who could plausibly see this status\n audience = models.User.objects.filter(\n is_active=True,\n local=True, # we only create feeds for users of this instance\n ).exclude(\n Q(id__in=status.user.blocks.all()) | Q(blocks=status.user) # not blocked\n )\n\n # only visible to the poster and mentioned users\n if status.privacy == \"direct\":\n audience = audience.filter(\n Q(id=status.user.id) # if the user is the post's author\n | Q(id__in=status.mention_users.all()) # if the user is mentioned\n )\n # only visible to the poster's followers and tagged users\n elif status.privacy == \"followers\":\n audience = audience.filter(\n Q(id=status.user.id) # if the user is the post's author\n | Q(following=status.user) # if the user is following the author\n )\n return audience.distinct()\n\n def stream_statuses(self, user): # pylint: disable=no-self-use\n \"\"\" given a 
user, what statuses should they see on this stream \"\"\"\n return privacy_filter(\n user,\n models.Status.objects.select_subclasses(),\n privacy_levels=[\"public\", \"unlisted\", \"followers\"],\n )\n\n\nclass HomeStream(ActivityStream):\n \"\"\" users you follow \"\"\"\n\n key = \"home\"\n\n def stream_users(self, status):\n audience = super().stream_users(status)\n if not audience:\n return []\n return audience.filter(\n Q(id=status.user.id) # if the user is the post's author\n | Q(following=status.user) # if the user is following the author\n ).distinct()\n\n def stream_statuses(self, user):\n return privacy_filter(\n user,\n models.Status.objects.select_subclasses(),\n privacy_levels=[\"public\", \"unlisted\", \"followers\"],\n following_only=True,\n )\n\n\nclass LocalStream(ActivityStream):\n \"\"\" users you follow \"\"\"\n\n key = \"local\"\n\n def stream_users(self, status):\n # this stream wants no part in non-public statuses\n if status.privacy != \"public\" or not status.user.local:\n return []\n return super().stream_users(status)\n\n def stream_statuses(self, user):\n # all public statuses by a local user\n return privacy_filter(\n user,\n models.Status.objects.select_subclasses().filter(user__local=True),\n privacy_levels=[\"public\"],\n )\n\n\nclass FederatedStream(ActivityStream):\n \"\"\" users you follow \"\"\"\n\n key = \"federated\"\n\n def stream_users(self, status):\n # this stream wants no part in non-public statuses\n if status.privacy != \"public\":\n return []\n return super().stream_users(status)\n\n def stream_statuses(self, user):\n return privacy_filter(\n user,\n models.Status.objects.select_subclasses(),\n privacy_levels=[\"public\"],\n )\n\n\nstreams = {\n \"home\": HomeStream(),\n \"local\": LocalStream(),\n \"federated\": FederatedStream(),\n}\n\n\n@receiver(signals.post_save)\n# pylint: disable=unused-argument\ndef add_status_on_create(sender, instance, created, *args, **kwargs):\n \"\"\" add newly created statuses to activity feeds \"\"\"\n # we're only interested in new statuses\n if not issubclass(sender, models.Status):\n return\n\n if instance.deleted:\n for stream in streams.values():\n stream.remove_status(instance)\n return\n\n if not created:\n return\n\n # iterates through Home, Local, Federated\n for stream in streams.values():\n stream.add_status(instance)\n\n\n@receiver(signals.post_delete, sender=models.Boost)\n# pylint: disable=unused-argument\ndef remove_boost_on_delete(sender, instance, *args, **kwargs):\n \"\"\" boosts are deleted \"\"\"\n # we're only interested in new statuses\n for stream in streams.values():\n stream.remove_status(instance)\n\n\n@receiver(signals.post_save, sender=models.UserFollows)\n# pylint: disable=unused-argument\ndef add_statuses_on_follow(sender, instance, created, *args, **kwargs):\n \"\"\" add a newly followed user's statuses to feeds \"\"\"\n if not created or not instance.user_subject.local:\n return\n HomeStream().add_user_statuses(instance.user_subject, instance.user_object)\n\n\n@receiver(signals.post_delete, sender=models.UserFollows)\n# pylint: disable=unused-argument\ndef remove_statuses_on_unfollow(sender, instance, *args, **kwargs):\n \"\"\" remove statuses from a feed on unfollow \"\"\"\n if not instance.user_subject.local:\n return\n HomeStream().remove_user_statuses(instance.user_subject, instance.user_object)\n\n\n@receiver(signals.post_save, sender=models.UserBlocks)\n# pylint: disable=unused-argument\ndef remove_statuses_on_block(sender, instance, *args, **kwargs):\n \"\"\" remove statuses 
from all feeds on block \"\"\"\n # blocks apply ot all feeds\n if instance.user_subject.local:\n for stream in streams.values():\n stream.remove_user_statuses(instance.user_subject, instance.user_object)\n\n # and in both directions\n if instance.user_object.local:\n for stream in streams.values():\n stream.remove_user_statuses(instance.user_object, instance.user_subject)\n\n\n@receiver(signals.post_delete, sender=models.UserBlocks)\n# pylint: disable=unused-argument\ndef add_statuses_on_unblock(sender, instance, *args, **kwargs):\n \"\"\" remove statuses from all feeds on block \"\"\"\n public_streams = [LocalStream(), FederatedStream()]\n # add statuses back to streams with statuses from anyone\n if instance.user_subject.local:\n for stream in public_streams:\n stream.add_user_statuses(instance.user_subject, instance.user_object)\n\n # add statuses back to streams with statuses from anyone\n if instance.user_object.local:\n for stream in public_streams:\n stream.add_user_statuses(instance.user_object, instance.user_subject)\n\n\n@receiver(signals.post_save, sender=models.User)\n# pylint: disable=unused-argument\ndef populate_feed_on_account_create(sender, instance, created, *args, **kwargs):\n \"\"\" build a user's feeds when they join \"\"\"\n if not created or not instance.local:\n return\n\n for stream in streams.values():\n stream.populate_stream(instance)\n", "path": "bookwyrm/activitystreams.py"}], "after_files": [{"content": "\"\"\" access the activity streams stored in redis \"\"\"\nfrom abc import ABC\nfrom django.dispatch import receiver\nfrom django.db.models import signals, Q\nimport redis\n\nfrom bookwyrm import models, settings\nfrom bookwyrm.views.helpers import privacy_filter\n\nr = redis.Redis(\n host=settings.REDIS_ACTIVITY_HOST, port=settings.REDIS_ACTIVITY_PORT, db=0\n)\n\n\nclass ActivityStream(ABC):\n \"\"\" a category of activity stream (like home, local, federated) \"\"\"\n\n def stream_id(self, user):\n \"\"\" the redis key for this user's instance of this stream \"\"\"\n return \"{}-{}\".format(user.id, self.key)\n\n def unread_id(self, user):\n \"\"\" the redis key for this user's unread count for this stream \"\"\"\n return \"{}-unread\".format(self.stream_id(user))\n\n def get_value(self, status): # pylint: disable=no-self-use\n \"\"\" the status id and the rank (ie, published date) \"\"\"\n return {status.id: status.published_date.timestamp()}\n\n def add_status(self, status):\n \"\"\" add a status to users' feeds \"\"\"\n value = self.get_value(status)\n # we want to do this as a bulk operation, hence \"pipeline\"\n pipeline = r.pipeline()\n for user in self.stream_users(status):\n # add the status to the feed\n pipeline.zadd(self.stream_id(user), value)\n pipeline.zremrangebyrank(\n self.stream_id(user), settings.MAX_STREAM_LENGTH, -1\n )\n # add to the unread status count\n pipeline.incr(self.unread_id(user))\n # and go!\n pipeline.execute()\n\n def remove_status(self, status):\n \"\"\" remove a status from all feeds \"\"\"\n pipeline = r.pipeline()\n for user in self.stream_users(status):\n pipeline.lrem(self.stream_id(user), -1, status.id)\n pipeline.execute()\n\n def add_user_statuses(self, viewer, user):\n \"\"\" add a user's statuses to another user's feed \"\"\"\n pipeline = r.pipeline()\n statuses = user.status_set.all()[: settings.MAX_STREAM_LENGTH]\n for status in statuses:\n pipeline.zadd(self.stream_id(viewer), self.get_value(status))\n if statuses:\n pipeline.zremrangebyrank(\n self.stream_id(user), settings.MAX_STREAM_LENGTH, -1\n )\n 
pipeline.execute()\n\n def remove_user_statuses(self, viewer, user):\n \"\"\" remove a user's status from another user's feed \"\"\"\n pipeline = r.pipeline()\n for status in user.status_set.all()[: settings.MAX_STREAM_LENGTH]:\n pipeline.lrem(self.stream_id(viewer), -1, status.id)\n pipeline.execute()\n\n def get_activity_stream(self, user):\n \"\"\" load the ids for statuses to be displayed \"\"\"\n # clear unreads for this feed\n r.set(self.unread_id(user), 0)\n\n statuses = r.zrevrange(self.stream_id(user), 0, -1)\n return (\n models.Status.objects.select_subclasses()\n .filter(id__in=statuses)\n .order_by(\"-published_date\")\n )\n\n def get_unread_count(self, user):\n \"\"\" get the unread status count for this user's feed \"\"\"\n return int(r.get(self.unread_id(user)))\n\n def populate_stream(self, user):\n \"\"\" go from zero to a timeline \"\"\"\n pipeline = r.pipeline()\n statuses = self.stream_statuses(user)\n\n stream_id = self.stream_id(user)\n for status in statuses.all()[: settings.MAX_STREAM_LENGTH]:\n pipeline.zadd(stream_id, self.get_value(status))\n\n # only trim the stream if statuses were added\n if statuses.exists():\n pipeline.zremrangebyrank(stream_id, settings.MAX_STREAM_LENGTH, -1)\n pipeline.execute()\n\n def stream_users(self, status): # pylint: disable=no-self-use\n \"\"\" given a status, what users should see it \"\"\"\n # direct messages don't appeard in feeds, direct comments/reviews/etc do\n if status.privacy == \"direct\" and status.status_type == \"Note\":\n return []\n\n # everybody who could plausibly see this status\n audience = models.User.objects.filter(\n is_active=True,\n local=True, # we only create feeds for users of this instance\n ).exclude(\n Q(id__in=status.user.blocks.all()) | Q(blocks=status.user) # not blocked\n )\n\n # only visible to the poster and mentioned users\n if status.privacy == \"direct\":\n audience = audience.filter(\n Q(id=status.user.id) # if the user is the post's author\n | Q(id__in=status.mention_users.all()) # if the user is mentioned\n )\n # only visible to the poster's followers and tagged users\n elif status.privacy == \"followers\":\n audience = audience.filter(\n Q(id=status.user.id) # if the user is the post's author\n | Q(following=status.user) # if the user is following the author\n )\n return audience.distinct()\n\n def stream_statuses(self, user): # pylint: disable=no-self-use\n \"\"\" given a user, what statuses should they see on this stream \"\"\"\n return privacy_filter(\n user,\n models.Status.objects.select_subclasses(),\n privacy_levels=[\"public\", \"unlisted\", \"followers\"],\n )\n\n\nclass HomeStream(ActivityStream):\n \"\"\" users you follow \"\"\"\n\n key = \"home\"\n\n def stream_users(self, status):\n audience = super().stream_users(status)\n if not audience:\n return []\n return audience.filter(\n Q(id=status.user.id) # if the user is the post's author\n | Q(following=status.user) # if the user is following the author\n ).distinct()\n\n def stream_statuses(self, user):\n return privacy_filter(\n user,\n models.Status.objects.select_subclasses(),\n privacy_levels=[\"public\", \"unlisted\", \"followers\"],\n following_only=True,\n )\n\n\nclass LocalStream(ActivityStream):\n \"\"\" users you follow \"\"\"\n\n key = \"local\"\n\n def stream_users(self, status):\n # this stream wants no part in non-public statuses\n if status.privacy != \"public\" or not status.user.local:\n return []\n return super().stream_users(status)\n\n def stream_statuses(self, user):\n # all public statuses by a local user\n 
return privacy_filter(\n user,\n models.Status.objects.select_subclasses().filter(user__local=True),\n privacy_levels=[\"public\"],\n )\n\n\nclass FederatedStream(ActivityStream):\n \"\"\" users you follow \"\"\"\n\n key = \"federated\"\n\n def stream_users(self, status):\n # this stream wants no part in non-public statuses\n if status.privacy != \"public\":\n return []\n return super().stream_users(status)\n\n def stream_statuses(self, user):\n return privacy_filter(\n user,\n models.Status.objects.select_subclasses(),\n privacy_levels=[\"public\"],\n )\n\n\nstreams = {\n \"home\": HomeStream(),\n \"local\": LocalStream(),\n \"federated\": FederatedStream(),\n}\n\n\n@receiver(signals.post_save)\n# pylint: disable=unused-argument\ndef add_status_on_create(sender, instance, created, *args, **kwargs):\n \"\"\" add newly created statuses to activity feeds \"\"\"\n # we're only interested in new statuses\n if not issubclass(sender, models.Status):\n return\n\n if instance.deleted:\n for stream in streams.values():\n stream.remove_status(instance)\n return\n\n if not created:\n return\n\n # iterates through Home, Local, Federated\n for stream in streams.values():\n stream.add_status(instance)\n\n\n@receiver(signals.post_delete, sender=models.Boost)\n# pylint: disable=unused-argument\ndef remove_boost_on_delete(sender, instance, *args, **kwargs):\n \"\"\" boosts are deleted \"\"\"\n # we're only interested in new statuses\n for stream in streams.values():\n stream.remove_status(instance)\n\n\n@receiver(signals.post_save, sender=models.UserFollows)\n# pylint: disable=unused-argument\ndef add_statuses_on_follow(sender, instance, created, *args, **kwargs):\n \"\"\" add a newly followed user's statuses to feeds \"\"\"\n if not created or not instance.user_subject.local:\n return\n HomeStream().add_user_statuses(instance.user_subject, instance.user_object)\n\n\n@receiver(signals.post_delete, sender=models.UserFollows)\n# pylint: disable=unused-argument\ndef remove_statuses_on_unfollow(sender, instance, *args, **kwargs):\n \"\"\" remove statuses from a feed on unfollow \"\"\"\n if not instance.user_subject.local:\n return\n HomeStream().remove_user_statuses(instance.user_subject, instance.user_object)\n\n\n@receiver(signals.post_save, sender=models.UserBlocks)\n# pylint: disable=unused-argument\ndef remove_statuses_on_block(sender, instance, *args, **kwargs):\n \"\"\" remove statuses from all feeds on block \"\"\"\n # blocks apply ot all feeds\n if instance.user_subject.local:\n for stream in streams.values():\n stream.remove_user_statuses(instance.user_subject, instance.user_object)\n\n # and in both directions\n if instance.user_object.local:\n for stream in streams.values():\n stream.remove_user_statuses(instance.user_object, instance.user_subject)\n\n\n@receiver(signals.post_delete, sender=models.UserBlocks)\n# pylint: disable=unused-argument\ndef add_statuses_on_unblock(sender, instance, *args, **kwargs):\n \"\"\" remove statuses from all feeds on block \"\"\"\n public_streams = [LocalStream(), FederatedStream()]\n # add statuses back to streams with statuses from anyone\n if instance.user_subject.local:\n for stream in public_streams:\n stream.add_user_statuses(instance.user_subject, instance.user_object)\n\n # add statuses back to streams with statuses from anyone\n if instance.user_object.local:\n for stream in public_streams:\n stream.add_user_statuses(instance.user_object, instance.user_subject)\n\n\n@receiver(signals.post_save, sender=models.User)\n# pylint: disable=unused-argument\ndef 
populate_streams_on_account_create(sender, instance, created, *args, **kwargs):\n \"\"\" build a user's feeds when they join \"\"\"\n if not created or not instance.local:\n return\n\n for stream in streams.values():\n stream.populate_stream(instance)\n", "path": "bookwyrm/activitystreams.py"}]}
| 3,120 | 752 |
gh_patches_debug_9599
|
rasdani/github-patches
|
git_diff
|
mars-project__mars-3129
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] ImportError: cannot import name 'elliprc' from 'mars.tensor.special.ellip_func_integrals'
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
A clear and concise description of what the bug is.
```python
Traceback:
/usr/local/python3/lib/python3.7/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
mars/tensor/special/tests/test_special.py:57: in <module>
from ..ellip_func_integrals import (
E ImportError: cannot import name 'elliprc' from 'mars.tensor.special.ellip_func_integrals' (/home/jenkins/agent/aci/mars/tensor/special/ellip_func_integrals.py)
```
```python
Traceback:
/usr/local/python3/lib/python3.7/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
mars/tensor/special/tests/test_special.py:18: in <module>
from scipy.special import (
E ImportError: cannot import name 'elliprd' from 'scipy.special' (/home/admin/py3/lib/python3.7/site-packages/scipy/special/__init__.py)
```
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version: 3.7.7
2. The version of Mars you use: Latest master
3. Versions of crucial packages, such as numpy, scipy and pandas: `scipy==1.5.0`
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mars/tensor/special/ellip_func_integrals.py`
Content:
```
1 # Copyright 1999-2021 Alibaba Group Holding Ltd.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import scipy.special as spspecial
16
17 from ..arithmetic.utils import arithmetic_operand
18 from ..utils import infer_dtype, implement_scipy
19 from .core import (
20 _register_special_op,
21 TensorSpecialBinOp,
22 TensorSpecialUnaryOp,
23 TensorSpecialMultiOp,
24 )
25
26
27 @_register_special_op
28 @arithmetic_operand(sparse_mode="unary")
29 class TensorEllipk(TensorSpecialUnaryOp):
30 _func_name = "ellipk"
31
32
33 @_register_special_op
34 @arithmetic_operand(sparse_mode="unary")
35 class TensorEllipkm1(TensorSpecialUnaryOp):
36 _func_name = "ellipkm1"
37
38
39 @_register_special_op
40 @arithmetic_operand(sparse_mode="binary_and")
41 class TensorEllipkinc(TensorSpecialBinOp):
42 _func_name = "ellipkinc"
43
44
45 @_register_special_op
46 @arithmetic_operand(sparse_mode="unary")
47 class TensorEllipe(TensorSpecialUnaryOp):
48 _func_name = "ellipe"
49
50
51 @_register_special_op
52 @arithmetic_operand(sparse_mode="binary_and")
53 class TensorEllipeinc(TensorSpecialBinOp):
54 _func_name = "ellipeinc"
55
56
57 @_register_special_op
58 @arithmetic_operand(sparse_mode="binary_and")
59 class TensorElliprc(TensorSpecialBinOp):
60 _func_name = "elliprc"
61
62
63 @_register_special_op
64 class TensorElliprd(TensorSpecialMultiOp):
65 _ARG_COUNT = 3
66 _func_name = "elliprd"
67
68
69 @_register_special_op
70 class TensorElliprf(TensorSpecialMultiOp):
71 _ARG_COUNT = 3
72 _func_name = "elliprf"
73
74
75 @_register_special_op
76 class TensorElliprg(TensorSpecialMultiOp):
77 _ARG_COUNT = 3
78 _func_name = "elliprg"
79
80
81 @_register_special_op
82 class TensorElliprj(TensorSpecialMultiOp):
83 _ARG_COUNT = 4
84 _func_name = "elliprj"
85
86
87 @implement_scipy(spspecial.ellipk)
88 @infer_dtype(spspecial.ellipk)
89 def ellipk(x, **kwargs):
90 op = TensorEllipk(**kwargs)
91 return op(x)
92
93
94 @implement_scipy(spspecial.ellipkm1)
95 @infer_dtype(spspecial.ellipkm1)
96 def ellipkm1(x, **kwargs):
97 op = TensorEllipkm1(**kwargs)
98 return op(x)
99
100
101 @implement_scipy(spspecial.ellipkinc)
102 @infer_dtype(spspecial.ellipkinc)
103 def ellipkinc(phi, m, **kwargs):
104 op = TensorEllipkinc(**kwargs)
105 return op(phi, m)
106
107
108 @implement_scipy(spspecial.ellipe)
109 @infer_dtype(spspecial.ellipe)
110 def ellipe(x, **kwargs):
111 op = TensorEllipe(**kwargs)
112 return op(x)
113
114
115 @implement_scipy(spspecial.ellipeinc)
116 @infer_dtype(spspecial.ellipeinc)
117 def ellipeinc(phi, m, **kwargs):
118 op = TensorEllipeinc(**kwargs)
119 return op(phi, m)
120
121
122 try:
123
124 @implement_scipy(spspecial.elliprc)
125 @infer_dtype(spspecial.elliprc)
126 def elliprc(x, y, **kwargs):
127 op = TensorElliprc(**kwargs)
128 return op(x, y)
129
130 @implement_scipy(spspecial.elliprd)
131 @infer_dtype(spspecial.elliprd)
132 def elliprd(x, y, z, **kwargs):
133 op = TensorElliprd(**kwargs)
134 return op(x, y, z)
135
136 @implement_scipy(spspecial.elliprf)
137 @infer_dtype(spspecial.elliprf)
138 def elliprf(x, y, z, **kwargs):
139 op = TensorElliprf(**kwargs)
140 return op(x, y, z)
141
142 @implement_scipy(spspecial.elliprg)
143 @infer_dtype(spspecial.elliprg)
144 def elliprg(x, y, z, **kwargs):
145 op = TensorElliprg(**kwargs)
146 return op(x, y, z)
147
148 @implement_scipy(spspecial.elliprj)
149 @infer_dtype(spspecial.elliprj)
150 def elliprj(x, y, z, p, **kwargs):
151 op = TensorElliprj(**kwargs)
152 return op(x, y, z, p)
153
154 except AttributeError:
155 # These functions are not implemented before scipy v1.8 so
156 # spsecial.func may cause AttributeError
157 pass
158
```
Path: `mars/tensor/special/__init__.py`
Content:
```
1 # Copyright 1999-2021 Alibaba Group Holding Ltd.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 try:
16 import scipy
17
18 from .err_fresnel import (
19 erf,
20 TensorErf,
21 erfc,
22 TensorErfc,
23 erfcx,
24 TensorErfcx,
25 erfi,
26 TensorErfi,
27 erfinv,
28 TensorErfinv,
29 erfcinv,
30 TensorErfcinv,
31 )
32 from .gamma_funcs import (
33 gamma,
34 TensorGamma,
35 gammaln,
36 TensorGammaln,
37 loggamma,
38 TensorLogGamma,
39 gammasgn,
40 TensorGammaSgn,
41 gammainc,
42 TensorGammaInc,
43 gammaincinv,
44 TensorGammaIncInv,
45 gammaincc,
46 TensorGammaIncc,
47 gammainccinv,
48 TensorGammaInccInv,
49 beta,
50 TensorBeta,
51 betaln,
52 TensorBetaLn,
53 betainc,
54 TensorBetaInc,
55 betaincinv,
56 TensorBetaIncInv,
57 psi,
58 TensorPsi,
59 rgamma,
60 TensorRGamma,
61 polygamma,
62 TensorPolyGamma,
63 multigammaln,
64 TensorMultiGammaLn,
65 digamma,
66 TensorDiGamma,
67 poch,
68 TensorPoch,
69 )
70 from .info_theory import (
71 entr,
72 TensorEntr,
73 rel_entr,
74 TensorRelEntr,
75 kl_div,
76 TensorKlDiv,
77 )
78 from .convenience import (
79 xlogy,
80 TensorXLogY,
81 )
82 from .bessel import (
83 jv,
84 TensorJV,
85 jve,
86 TensorJVE,
87 yn,
88 TensorYN,
89 yv,
90 TensorYV,
91 yve,
92 TensorYVE,
93 kn,
94 TensorKN,
95 kv,
96 TensorKV,
97 kve,
98 TensorKVE,
99 iv,
100 TensorIV,
101 ive,
102 TensorIVE,
103 hankel1,
104 TensorHankel1,
105 hankel1e,
106 TensorHankel1e,
107 hankel2,
108 TensorHankel2,
109 hankel2e,
110 TensorHankel2e,
111 )
112 from .hypergeometric_funcs import (
113 hyp2f1,
114 TensorHYP2F1,
115 hyp1f1,
116 TensorHYP1F1,
117 hyperu,
118 TensorHYPERU,
119 hyp0f1,
120 TensorHYP0F1,
121 )
122 from .ellip_harm import (
123 ellip_harm,
124 TensorEllipHarm,
125 ellip_harm_2,
126 TensorEllipHarm2,
127 ellip_normal,
128 TensorEllipNormal,
129 )
130 from .ellip_func_integrals import (
131 ellipk,
132 TensorEllipk,
133 ellipkm1,
134 TensorEllipkm1,
135 ellipkinc,
136 TensorEllipkinc,
137 ellipe,
138 TensorEllipe,
139 ellipeinc,
140 TensorEllipeinc,
141 elliprc,
142 TensorElliprc,
143 elliprd,
144 TensorElliprd,
145 elliprf,
146 TensorElliprf,
147 elliprg,
148 TensorElliprg,
149 elliprj,
150 TensorElliprj,
151 )
152 except ImportError: # pragma: no cover
153 pass
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mars/tensor/special/__init__.py b/mars/tensor/special/__init__.py
--- a/mars/tensor/special/__init__.py
+++ b/mars/tensor/special/__init__.py
@@ -151,3 +151,7 @@
)
except ImportError: # pragma: no cover
pass
+
+_names_to_del = [_name for _name, _val in globals().items() if _val is None]
+[globals().pop(_name) for _name in _names_to_del]
+del _names_to_del
diff --git a/mars/tensor/special/ellip_func_integrals.py b/mars/tensor/special/ellip_func_integrals.py
--- a/mars/tensor/special/ellip_func_integrals.py
+++ b/mars/tensor/special/ellip_func_integrals.py
@@ -154,4 +154,4 @@
except AttributeError:
# These functions are not implemented before scipy v1.8 so
# spsecial.func may cause AttributeError
- pass
+ elliprc = elliprd = elliprf = elliprg = elliprj = None
|
{"golden_diff": "diff --git a/mars/tensor/special/__init__.py b/mars/tensor/special/__init__.py\n--- a/mars/tensor/special/__init__.py\n+++ b/mars/tensor/special/__init__.py\n@@ -151,3 +151,7 @@\n )\n except ImportError: # pragma: no cover\n pass\n+\n+_names_to_del = [_name for _name, _val in globals().items() if _val is None]\n+[globals().pop(_name) for _name in _names_to_del]\n+del _names_to_del\ndiff --git a/mars/tensor/special/ellip_func_integrals.py b/mars/tensor/special/ellip_func_integrals.py\n--- a/mars/tensor/special/ellip_func_integrals.py\n+++ b/mars/tensor/special/ellip_func_integrals.py\n@@ -154,4 +154,4 @@\n except AttributeError:\n # These functions are not implemented before scipy v1.8 so\n # spsecial.func may cause AttributeError\n- pass\n+ elliprc = elliprd = elliprf = elliprg = elliprj = None\n", "issue": "[BUG] ImportError: cannot import name 'elliprc' from 'mars.tensor.special.ellip_func_integrals' \n<!--\r\nThank you for your contribution!\r\n\r\nPlease review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.\r\n-->\r\n\r\n**Describe the bug**\r\nA clear and concise description of what the bug is.\r\n\r\n```python\r\nTraceback:\r\n/usr/local/python3/lib/python3.7/importlib/__init__.py:127: in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\nmars/tensor/special/tests/test_special.py:57: in <module>\r\n from ..ellip_func_integrals import (\r\nE ImportError: cannot import name 'elliprc' from 'mars.tensor.special.ellip_func_integrals' (/home/jenkins/agent/aci/mars/tensor/special/ellip_func_integrals.py)\r\n```\r\n\r\n\r\n```python\r\nTraceback:\r\n/usr/local/python3/lib/python3.7/importlib/__init__.py:127: in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\nmars/tensor/special/tests/test_special.py:18: in <module>\r\n from scipy.special import (\r\nE ImportError: cannot import name 'elliprd' from 'scipy.special' (/home/admin/py3/lib/python3.7/site-packages/scipy/special/__init__.py)\r\n```\r\n\r\n**To Reproduce**\r\nTo help us reproducing this bug, please provide information below:\r\n1. Your Python version 3.7.7\r\n2. The version of Mars you use Latest master\r\n3. Versions of crucial packages, such as numpy, scipy and pandas `scipy==1.5.0`\r\n4. Full stack of the error.\r\n5. 
Minimized code to reproduce the error.\r\n\r\n**Expected behavior**\r\nA clear and concise description of what you expected to happen.\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "# Copyright 1999-2021 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport scipy.special as spspecial\n\nfrom ..arithmetic.utils import arithmetic_operand\nfrom ..utils import infer_dtype, implement_scipy\nfrom .core import (\n _register_special_op,\n TensorSpecialBinOp,\n TensorSpecialUnaryOp,\n TensorSpecialMultiOp,\n)\n\n\n@_register_special_op\n@arithmetic_operand(sparse_mode=\"unary\")\nclass TensorEllipk(TensorSpecialUnaryOp):\n _func_name = \"ellipk\"\n\n\n@_register_special_op\n@arithmetic_operand(sparse_mode=\"unary\")\nclass TensorEllipkm1(TensorSpecialUnaryOp):\n _func_name = \"ellipkm1\"\n\n\n@_register_special_op\n@arithmetic_operand(sparse_mode=\"binary_and\")\nclass TensorEllipkinc(TensorSpecialBinOp):\n _func_name = \"ellipkinc\"\n\n\n@_register_special_op\n@arithmetic_operand(sparse_mode=\"unary\")\nclass TensorEllipe(TensorSpecialUnaryOp):\n _func_name = \"ellipe\"\n\n\n@_register_special_op\n@arithmetic_operand(sparse_mode=\"binary_and\")\nclass TensorEllipeinc(TensorSpecialBinOp):\n _func_name = \"ellipeinc\"\n\n\n@_register_special_op\n@arithmetic_operand(sparse_mode=\"binary_and\")\nclass TensorElliprc(TensorSpecialBinOp):\n _func_name = \"elliprc\"\n\n\n@_register_special_op\nclass TensorElliprd(TensorSpecialMultiOp):\n _ARG_COUNT = 3\n _func_name = \"elliprd\"\n\n\n@_register_special_op\nclass TensorElliprf(TensorSpecialMultiOp):\n _ARG_COUNT = 3\n _func_name = \"elliprf\"\n\n\n@_register_special_op\nclass TensorElliprg(TensorSpecialMultiOp):\n _ARG_COUNT = 3\n _func_name = \"elliprg\"\n\n\n@_register_special_op\nclass TensorElliprj(TensorSpecialMultiOp):\n _ARG_COUNT = 4\n _func_name = \"elliprj\"\n\n\n@implement_scipy(spspecial.ellipk)\n@infer_dtype(spspecial.ellipk)\ndef ellipk(x, **kwargs):\n op = TensorEllipk(**kwargs)\n return op(x)\n\n\n@implement_scipy(spspecial.ellipkm1)\n@infer_dtype(spspecial.ellipkm1)\ndef ellipkm1(x, **kwargs):\n op = TensorEllipkm1(**kwargs)\n return op(x)\n\n\n@implement_scipy(spspecial.ellipkinc)\n@infer_dtype(spspecial.ellipkinc)\ndef ellipkinc(phi, m, **kwargs):\n op = TensorEllipkinc(**kwargs)\n return op(phi, m)\n\n\n@implement_scipy(spspecial.ellipe)\n@infer_dtype(spspecial.ellipe)\ndef ellipe(x, **kwargs):\n op = TensorEllipe(**kwargs)\n return op(x)\n\n\n@implement_scipy(spspecial.ellipeinc)\n@infer_dtype(spspecial.ellipeinc)\ndef ellipeinc(phi, m, **kwargs):\n op = TensorEllipeinc(**kwargs)\n return op(phi, m)\n\n\ntry:\n\n @implement_scipy(spspecial.elliprc)\n @infer_dtype(spspecial.elliprc)\n def elliprc(x, y, **kwargs):\n op = TensorElliprc(**kwargs)\n return op(x, y)\n\n @implement_scipy(spspecial.elliprd)\n @infer_dtype(spspecial.elliprd)\n def elliprd(x, y, z, **kwargs):\n op = TensorElliprd(**kwargs)\n return op(x, y, z)\n\n 
@implement_scipy(spspecial.elliprf)\n @infer_dtype(spspecial.elliprf)\n def elliprf(x, y, z, **kwargs):\n op = TensorElliprf(**kwargs)\n return op(x, y, z)\n\n @implement_scipy(spspecial.elliprg)\n @infer_dtype(spspecial.elliprg)\n def elliprg(x, y, z, **kwargs):\n op = TensorElliprg(**kwargs)\n return op(x, y, z)\n\n @implement_scipy(spspecial.elliprj)\n @infer_dtype(spspecial.elliprj)\n def elliprj(x, y, z, p, **kwargs):\n op = TensorElliprj(**kwargs)\n return op(x, y, z, p)\n\nexcept AttributeError:\n # These functions are not implemented before scipy v1.8 so\n # spsecial.func may cause AttributeError\n pass\n", "path": "mars/tensor/special/ellip_func_integrals.py"}, {"content": "# Copyright 1999-2021 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\ntry:\n import scipy\n\n from .err_fresnel import (\n erf,\n TensorErf,\n erfc,\n TensorErfc,\n erfcx,\n TensorErfcx,\n erfi,\n TensorErfi,\n erfinv,\n TensorErfinv,\n erfcinv,\n TensorErfcinv,\n )\n from .gamma_funcs import (\n gamma,\n TensorGamma,\n gammaln,\n TensorGammaln,\n loggamma,\n TensorLogGamma,\n gammasgn,\n TensorGammaSgn,\n gammainc,\n TensorGammaInc,\n gammaincinv,\n TensorGammaIncInv,\n gammaincc,\n TensorGammaIncc,\n gammainccinv,\n TensorGammaInccInv,\n beta,\n TensorBeta,\n betaln,\n TensorBetaLn,\n betainc,\n TensorBetaInc,\n betaincinv,\n TensorBetaIncInv,\n psi,\n TensorPsi,\n rgamma,\n TensorRGamma,\n polygamma,\n TensorPolyGamma,\n multigammaln,\n TensorMultiGammaLn,\n digamma,\n TensorDiGamma,\n poch,\n TensorPoch,\n )\n from .info_theory import (\n entr,\n TensorEntr,\n rel_entr,\n TensorRelEntr,\n kl_div,\n TensorKlDiv,\n )\n from .convenience import (\n xlogy,\n TensorXLogY,\n )\n from .bessel import (\n jv,\n TensorJV,\n jve,\n TensorJVE,\n yn,\n TensorYN,\n yv,\n TensorYV,\n yve,\n TensorYVE,\n kn,\n TensorKN,\n kv,\n TensorKV,\n kve,\n TensorKVE,\n iv,\n TensorIV,\n ive,\n TensorIVE,\n hankel1,\n TensorHankel1,\n hankel1e,\n TensorHankel1e,\n hankel2,\n TensorHankel2,\n hankel2e,\n TensorHankel2e,\n )\n from .hypergeometric_funcs import (\n hyp2f1,\n TensorHYP2F1,\n hyp1f1,\n TensorHYP1F1,\n hyperu,\n TensorHYPERU,\n hyp0f1,\n TensorHYP0F1,\n )\n from .ellip_harm import (\n ellip_harm,\n TensorEllipHarm,\n ellip_harm_2,\n TensorEllipHarm2,\n ellip_normal,\n TensorEllipNormal,\n )\n from .ellip_func_integrals import (\n ellipk,\n TensorEllipk,\n ellipkm1,\n TensorEllipkm1,\n ellipkinc,\n TensorEllipkinc,\n ellipe,\n TensorEllipe,\n ellipeinc,\n TensorEllipeinc,\n elliprc,\n TensorElliprc,\n elliprd,\n TensorElliprd,\n elliprf,\n TensorElliprf,\n elliprg,\n TensorElliprg,\n elliprj,\n TensorElliprj,\n )\nexcept ImportError: # pragma: no cover\n pass\n", "path": "mars/tensor/special/__init__.py"}], "after_files": [{"content": "# Copyright 1999-2021 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# 
http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport scipy.special as spspecial\n\nfrom ..arithmetic.utils import arithmetic_operand\nfrom ..utils import infer_dtype, implement_scipy\nfrom .core import (\n _register_special_op,\n TensorSpecialBinOp,\n TensorSpecialUnaryOp,\n TensorSpecialMultiOp,\n)\n\n\n@_register_special_op\n@arithmetic_operand(sparse_mode=\"unary\")\nclass TensorEllipk(TensorSpecialUnaryOp):\n _func_name = \"ellipk\"\n\n\n@_register_special_op\n@arithmetic_operand(sparse_mode=\"unary\")\nclass TensorEllipkm1(TensorSpecialUnaryOp):\n _func_name = \"ellipkm1\"\n\n\n@_register_special_op\n@arithmetic_operand(sparse_mode=\"binary_and\")\nclass TensorEllipkinc(TensorSpecialBinOp):\n _func_name = \"ellipkinc\"\n\n\n@_register_special_op\n@arithmetic_operand(sparse_mode=\"unary\")\nclass TensorEllipe(TensorSpecialUnaryOp):\n _func_name = \"ellipe\"\n\n\n@_register_special_op\n@arithmetic_operand(sparse_mode=\"binary_and\")\nclass TensorEllipeinc(TensorSpecialBinOp):\n _func_name = \"ellipeinc\"\n\n\n@_register_special_op\n@arithmetic_operand(sparse_mode=\"binary_and\")\nclass TensorElliprc(TensorSpecialBinOp):\n _func_name = \"elliprc\"\n\n\n@_register_special_op\nclass TensorElliprd(TensorSpecialMultiOp):\n _ARG_COUNT = 3\n _func_name = \"elliprd\"\n\n\n@_register_special_op\nclass TensorElliprf(TensorSpecialMultiOp):\n _ARG_COUNT = 3\n _func_name = \"elliprf\"\n\n\n@_register_special_op\nclass TensorElliprg(TensorSpecialMultiOp):\n _ARG_COUNT = 3\n _func_name = \"elliprg\"\n\n\n@_register_special_op\nclass TensorElliprj(TensorSpecialMultiOp):\n _ARG_COUNT = 4\n _func_name = \"elliprj\"\n\n\n@implement_scipy(spspecial.ellipk)\n@infer_dtype(spspecial.ellipk)\ndef ellipk(x, **kwargs):\n op = TensorEllipk(**kwargs)\n return op(x)\n\n\n@implement_scipy(spspecial.ellipkm1)\n@infer_dtype(spspecial.ellipkm1)\ndef ellipkm1(x, **kwargs):\n op = TensorEllipkm1(**kwargs)\n return op(x)\n\n\n@implement_scipy(spspecial.ellipkinc)\n@infer_dtype(spspecial.ellipkinc)\ndef ellipkinc(phi, m, **kwargs):\n op = TensorEllipkinc(**kwargs)\n return op(phi, m)\n\n\n@implement_scipy(spspecial.ellipe)\n@infer_dtype(spspecial.ellipe)\ndef ellipe(x, **kwargs):\n op = TensorEllipe(**kwargs)\n return op(x)\n\n\n@implement_scipy(spspecial.ellipeinc)\n@infer_dtype(spspecial.ellipeinc)\ndef ellipeinc(phi, m, **kwargs):\n op = TensorEllipeinc(**kwargs)\n return op(phi, m)\n\n\ntry:\n\n @implement_scipy(spspecial.elliprc)\n @infer_dtype(spspecial.elliprc)\n def elliprc(x, y, **kwargs):\n op = TensorElliprc(**kwargs)\n return op(x, y)\n\n @implement_scipy(spspecial.elliprd)\n @infer_dtype(spspecial.elliprd)\n def elliprd(x, y, z, **kwargs):\n op = TensorElliprd(**kwargs)\n return op(x, y, z)\n\n @implement_scipy(spspecial.elliprf)\n @infer_dtype(spspecial.elliprf)\n def elliprf(x, y, z, **kwargs):\n op = TensorElliprf(**kwargs)\n return op(x, y, z)\n\n @implement_scipy(spspecial.elliprg)\n @infer_dtype(spspecial.elliprg)\n def elliprg(x, y, z, **kwargs):\n op = TensorElliprg(**kwargs)\n return op(x, y, z)\n\n @implement_scipy(spspecial.elliprj)\n @infer_dtype(spspecial.elliprj)\n def elliprj(x, y, z, p, **kwargs):\n op = TensorElliprj(**kwargs)\n return op(x, y, z, p)\n\nexcept 
AttributeError:\n # These functions are not implemented before scipy v1.8 so\n # spsecial.func may cause AttributeError\n elliprc = elliprd = elliprf = elliprg = elliprj = None\n", "path": "mars/tensor/special/ellip_func_integrals.py"}, {"content": "# Copyright 1999-2021 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\ntry:\n import scipy\n\n from .err_fresnel import (\n erf,\n TensorErf,\n erfc,\n TensorErfc,\n erfcx,\n TensorErfcx,\n erfi,\n TensorErfi,\n erfinv,\n TensorErfinv,\n erfcinv,\n TensorErfcinv,\n )\n from .gamma_funcs import (\n gamma,\n TensorGamma,\n gammaln,\n TensorGammaln,\n loggamma,\n TensorLogGamma,\n gammasgn,\n TensorGammaSgn,\n gammainc,\n TensorGammaInc,\n gammaincinv,\n TensorGammaIncInv,\n gammaincc,\n TensorGammaIncc,\n gammainccinv,\n TensorGammaInccInv,\n beta,\n TensorBeta,\n betaln,\n TensorBetaLn,\n betainc,\n TensorBetaInc,\n betaincinv,\n TensorBetaIncInv,\n psi,\n TensorPsi,\n rgamma,\n TensorRGamma,\n polygamma,\n TensorPolyGamma,\n multigammaln,\n TensorMultiGammaLn,\n digamma,\n TensorDiGamma,\n poch,\n TensorPoch,\n )\n from .info_theory import (\n entr,\n TensorEntr,\n rel_entr,\n TensorRelEntr,\n kl_div,\n TensorKlDiv,\n )\n from .convenience import (\n xlogy,\n TensorXLogY,\n )\n from .bessel import (\n jv,\n TensorJV,\n jve,\n TensorJVE,\n yn,\n TensorYN,\n yv,\n TensorYV,\n yve,\n TensorYVE,\n kn,\n TensorKN,\n kv,\n TensorKV,\n kve,\n TensorKVE,\n iv,\n TensorIV,\n ive,\n TensorIVE,\n hankel1,\n TensorHankel1,\n hankel1e,\n TensorHankel1e,\n hankel2,\n TensorHankel2,\n hankel2e,\n TensorHankel2e,\n )\n from .hypergeometric_funcs import (\n hyp2f1,\n TensorHYP2F1,\n hyp1f1,\n TensorHYP1F1,\n hyperu,\n TensorHYPERU,\n hyp0f1,\n TensorHYP0F1,\n )\n from .ellip_harm import (\n ellip_harm,\n TensorEllipHarm,\n ellip_harm_2,\n TensorEllipHarm2,\n ellip_normal,\n TensorEllipNormal,\n )\n from .ellip_func_integrals import (\n ellipk,\n TensorEllipk,\n ellipkm1,\n TensorEllipkm1,\n ellipkinc,\n TensorEllipkinc,\n ellipe,\n TensorEllipe,\n ellipeinc,\n TensorEllipeinc,\n elliprc,\n TensorElliprc,\n elliprd,\n TensorElliprd,\n elliprf,\n TensorElliprf,\n elliprg,\n TensorElliprg,\n elliprj,\n TensorElliprj,\n )\nexcept ImportError: # pragma: no cover\n pass\n\n_names_to_del = [_name for _name, _val in globals().items() if _val is None]\n[globals().pop(_name) for _name in _names_to_del]\ndel _names_to_del\n", "path": "mars/tensor/special/__init__.py"}]}
| 3,461 | 268 |
gh_patches_debug_42303
|
rasdani/github-patches
|
git_diff
|
sublimelsp__LSP-1690
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Format JSON on save
Is there a way to get this LSP to format JSON files on save?
It works for other LSPs, but just not JSON.
Here are my configs:
Preferences
```
{
...
"lsp_format_on_save": true
...
}
```
LSP
```
"lsp_code_actions_on_save": {
"source.organizeImports": true,
"source.fixAll.eslint": true,
}
```
All LSP-JSON settings are default
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugin/formatting.py`
Content:
```
1 from .core.edit import parse_text_edit
2 from .core.protocol import TextEdit
3 from .core.registry import LspTextCommand
4 from .core.sessions import Session
5 from .core.settings import userprefs
6 from .core.typing import Any, Callable, List, Optional, Iterator
7 from .core.views import entire_content_region
8 from .core.views import first_selection_region
9 from .core.views import text_document_formatting
10 from .core.views import text_document_range_formatting
11 from .core.views import will_save_wait_until
12 from .save_command import LspSaveCommand, SaveTask
13 import sublime
14
15
16 def apply_response_to_view(response: Optional[List[TextEdit]], view: sublime.View) -> None:
17 edits = list(parse_text_edit(change) for change in response) if response else []
18 view.run_command('lsp_apply_document_edit', {'changes': edits})
19
20
21 class WillSaveWaitTask(SaveTask):
22 @classmethod
23 def is_applicable(cls, view: sublime.View) -> bool:
24 return bool(view.file_name())
25
26 def __init__(self, task_runner: LspTextCommand, on_complete: Callable[[], None]) -> None:
27 super().__init__(task_runner, on_complete)
28 self._session_iterator = None # type: Optional[Iterator[Session]]
29
30 def run_async(self) -> None:
31 super().run_async()
32 self._session_iterator = self._task_runner.sessions('textDocumentSync.willSaveWaitUntil')
33 self._handle_next_session_async()
34
35 def _handle_next_session_async(self) -> None:
36 session = next(self._session_iterator, None) if self._session_iterator else None
37 if session:
38 self._purge_changes_async()
39 self._will_save_wait_until_async(session)
40 else:
41 self._on_complete()
42
43 def _will_save_wait_until_async(self, session: Session) -> None:
44 session.send_request_async(
45 will_save_wait_until(self._task_runner.view, reason=1), # TextDocumentSaveReason.Manual
46 self._on_response,
47 lambda error: self._on_response(None))
48
49 def _on_response(self, response: Any) -> None:
50 if response and not self._cancelled:
51 apply_response_to_view(response, self._task_runner.view)
52 sublime.set_timeout_async(self._handle_next_session_async)
53
54
55 class FormattingTask(SaveTask):
56 @classmethod
57 def is_applicable(cls, view: sublime.View) -> bool:
58 settings = view.settings()
59 view_format_on_save = settings.get('lsp_format_on_save', None)
60 enabled = view_format_on_save if isinstance(view_format_on_save, bool) else userprefs().lsp_format_on_save
61 return enabled and bool(view.window()) and bool(view.file_name())
62
63 def run_async(self) -> None:
64 super().run_async()
65 self._purge_changes_async()
66 session = self._task_runner.best_session(LspFormatDocumentCommand.capability)
67 if session:
68 session.send_request_async(
69 text_document_formatting(self._task_runner.view), self._on_response,
70 lambda error: self._on_response(None))
71 else:
72 self._on_complete()
73
74 def _on_response(self, response: Any) -> None:
75 if response and not self._cancelled:
76 apply_response_to_view(response, self._task_runner.view)
77 sublime.set_timeout_async(self._on_complete)
78
79
80 LspSaveCommand.register_task(WillSaveWaitTask)
81 LspSaveCommand.register_task(FormattingTask)
82
83
84 class LspFormatDocumentCommand(LspTextCommand):
85
86 capability = 'documentFormattingProvider'
87
88 def is_enabled(self, event: Optional[dict] = None, point: Optional[int] = None) -> bool:
89 return super().is_enabled() or bool(self.best_session(LspFormatDocumentRangeCommand.capability))
90
91 def run(self, edit: sublime.Edit, event: Optional[dict] = None) -> None:
92 session = self.best_session(self.capability)
93 if session:
94 # Either use the documentFormattingProvider ...
95 session.send_request(text_document_formatting(self.view), self.on_result)
96 else:
97 session = self.best_session(LspFormatDocumentRangeCommand.capability)
98 if session:
99 # ... or use the documentRangeFormattingProvider and format the entire range.
100 req = text_document_range_formatting(self.view, entire_content_region(self.view))
101 session.send_request(req, self.on_result)
102
103 def on_result(self, params: Any) -> None:
104 apply_response_to_view(params, self.view)
105
106
107 class LspFormatDocumentRangeCommand(LspTextCommand):
108
109 capability = 'documentRangeFormattingProvider'
110
111 def is_enabled(self, event: Optional[dict] = None, point: Optional[int] = None) -> bool:
112 if super().is_enabled(event, point):
113 if len(self.view.sel()) == 1:
114 region = self.view.sel()[0]
115 if region.begin() != region.end():
116 return True
117 return False
118
119 def run(self, edit: sublime.Edit, event: Optional[dict] = None) -> None:
120 session = self.best_session(self.capability)
121 selection = first_selection_region(self.view)
122 if session and selection is not None:
123 req = text_document_range_formatting(self.view, selection)
124 session.send_request(req, lambda response: apply_response_to_view(response, self.view))
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugin/formatting.py b/plugin/formatting.py
--- a/plugin/formatting.py
+++ b/plugin/formatting.py
@@ -1,9 +1,11 @@
from .core.edit import parse_text_edit
+from .core.promise import Promise
+from .core.protocol import Error
from .core.protocol import TextEdit
from .core.registry import LspTextCommand
from .core.sessions import Session
from .core.settings import userprefs
-from .core.typing import Any, Callable, List, Optional, Iterator
+from .core.typing import Any, Callable, List, Optional, Iterator, Union
from .core.views import entire_content_region
from .core.views import first_selection_region
from .core.views import text_document_formatting
@@ -13,6 +15,22 @@
import sublime
+FormatResponse = Union[List[TextEdit], None, Error]
+
+
+def format_document(text_command: LspTextCommand) -> Promise[FormatResponse]:
+ view = text_command.view
+ session = text_command.best_session(LspFormatDocumentCommand.capability)
+ if session:
+ # Either use the documentFormattingProvider ...
+ return session.send_request_task(text_document_formatting(view))
+ session = text_command.best_session(LspFormatDocumentRangeCommand.capability)
+ if session:
+ # ... or use the documentRangeFormattingProvider and format the entire range.
+ return session.send_request_task(text_document_range_formatting(view, entire_content_region(view)))
+ return Promise.resolve(None)
+
+
def apply_response_to_view(response: Optional[List[TextEdit]], view: sublime.View) -> None:
edits = list(parse_text_edit(change) for change in response) if response else []
view.run_command('lsp_apply_document_edit', {'changes': edits})
@@ -63,16 +81,10 @@
def run_async(self) -> None:
super().run_async()
self._purge_changes_async()
- session = self._task_runner.best_session(LspFormatDocumentCommand.capability)
- if session:
- session.send_request_async(
- text_document_formatting(self._task_runner.view), self._on_response,
- lambda error: self._on_response(None))
- else:
- self._on_complete()
+ format_document(self._task_runner).then(self._on_response)
- def _on_response(self, response: Any) -> None:
- if response and not self._cancelled:
+ def _on_response(self, response: FormatResponse) -> None:
+ if response and not isinstance(response, Error) and not self._cancelled:
apply_response_to_view(response, self._task_runner.view)
sublime.set_timeout_async(self._on_complete)
@@ -89,19 +101,11 @@
return super().is_enabled() or bool(self.best_session(LspFormatDocumentRangeCommand.capability))
def run(self, edit: sublime.Edit, event: Optional[dict] = None) -> None:
- session = self.best_session(self.capability)
- if session:
- # Either use the documentFormattingProvider ...
- session.send_request(text_document_formatting(self.view), self.on_result)
- else:
- session = self.best_session(LspFormatDocumentRangeCommand.capability)
- if session:
- # ... or use the documentRangeFormattingProvider and format the entire range.
- req = text_document_range_formatting(self.view, entire_content_region(self.view))
- session.send_request(req, self.on_result)
-
- def on_result(self, params: Any) -> None:
- apply_response_to_view(params, self.view)
+ format_document(self).then(self.on_result)
+
+ def on_result(self, result: FormatResponse) -> None:
+ if result and not isinstance(result, Error):
+ apply_response_to_view(result, self.view)
class LspFormatDocumentRangeCommand(LspTextCommand):
|
{"golden_diff": "diff --git a/plugin/formatting.py b/plugin/formatting.py\n--- a/plugin/formatting.py\n+++ b/plugin/formatting.py\n@@ -1,9 +1,11 @@\n from .core.edit import parse_text_edit\n+from .core.promise import Promise\n+from .core.protocol import Error\n from .core.protocol import TextEdit\n from .core.registry import LspTextCommand\n from .core.sessions import Session\n from .core.settings import userprefs\n-from .core.typing import Any, Callable, List, Optional, Iterator\n+from .core.typing import Any, Callable, List, Optional, Iterator, Union\n from .core.views import entire_content_region\n from .core.views import first_selection_region\n from .core.views import text_document_formatting\n@@ -13,6 +15,22 @@\n import sublime\n \n \n+FormatResponse = Union[List[TextEdit], None, Error]\n+\n+\n+def format_document(text_command: LspTextCommand) -> Promise[FormatResponse]:\n+ view = text_command.view\n+ session = text_command.best_session(LspFormatDocumentCommand.capability)\n+ if session:\n+ # Either use the documentFormattingProvider ...\n+ return session.send_request_task(text_document_formatting(view))\n+ session = text_command.best_session(LspFormatDocumentRangeCommand.capability)\n+ if session:\n+ # ... or use the documentRangeFormattingProvider and format the entire range.\n+ return session.send_request_task(text_document_range_formatting(view, entire_content_region(view)))\n+ return Promise.resolve(None)\n+\n+\n def apply_response_to_view(response: Optional[List[TextEdit]], view: sublime.View) -> None:\n edits = list(parse_text_edit(change) for change in response) if response else []\n view.run_command('lsp_apply_document_edit', {'changes': edits})\n@@ -63,16 +81,10 @@\n def run_async(self) -> None:\n super().run_async()\n self._purge_changes_async()\n- session = self._task_runner.best_session(LspFormatDocumentCommand.capability)\n- if session:\n- session.send_request_async(\n- text_document_formatting(self._task_runner.view), self._on_response,\n- lambda error: self._on_response(None))\n- else:\n- self._on_complete()\n+ format_document(self._task_runner).then(self._on_response)\n \n- def _on_response(self, response: Any) -> None:\n- if response and not self._cancelled:\n+ def _on_response(self, response: FormatResponse) -> None:\n+ if response and not isinstance(response, Error) and not self._cancelled:\n apply_response_to_view(response, self._task_runner.view)\n sublime.set_timeout_async(self._on_complete)\n \n@@ -89,19 +101,11 @@\n return super().is_enabled() or bool(self.best_session(LspFormatDocumentRangeCommand.capability))\n \n def run(self, edit: sublime.Edit, event: Optional[dict] = None) -> None:\n- session = self.best_session(self.capability)\n- if session:\n- # Either use the documentFormattingProvider ...\n- session.send_request(text_document_formatting(self.view), self.on_result)\n- else:\n- session = self.best_session(LspFormatDocumentRangeCommand.capability)\n- if session:\n- # ... 
or use the documentRangeFormattingProvider and format the entire range.\n- req = text_document_range_formatting(self.view, entire_content_region(self.view))\n- session.send_request(req, self.on_result)\n-\n- def on_result(self, params: Any) -> None:\n- apply_response_to_view(params, self.view)\n+ format_document(self).then(self.on_result)\n+\n+ def on_result(self, result: FormatResponse) -> None:\n+ if result and not isinstance(result, Error):\n+ apply_response_to_view(result, self.view)\n \n \n class LspFormatDocumentRangeCommand(LspTextCommand):\n", "issue": "Format JSON on save\nIs there a way to get this LSP to format json files ons save?\r\n\r\nIt works for other LSPs but just not JSON,\r\n\r\nHere are my configs:\r\n\r\nPreferences\r\n```\r\n{\r\n ...\r\n\t\"lsp_format_on_save\": true\r\n ...\r\n}\r\n```\r\n\r\nLSP\r\n```\r\n\t\"lsp_code_actions_on_save\": {\r\n\t\t\"source.organizeImports\": true,\r\n\t\t\"source.fixAll.eslint\": true,\r\n\t}\r\n```\r\n\r\nAll LSP-JSON settings are default\n", "before_files": [{"content": "from .core.edit import parse_text_edit\nfrom .core.protocol import TextEdit\nfrom .core.registry import LspTextCommand\nfrom .core.sessions import Session\nfrom .core.settings import userprefs\nfrom .core.typing import Any, Callable, List, Optional, Iterator\nfrom .core.views import entire_content_region\nfrom .core.views import first_selection_region\nfrom .core.views import text_document_formatting\nfrom .core.views import text_document_range_formatting\nfrom .core.views import will_save_wait_until\nfrom .save_command import LspSaveCommand, SaveTask\nimport sublime\n\n\ndef apply_response_to_view(response: Optional[List[TextEdit]], view: sublime.View) -> None:\n edits = list(parse_text_edit(change) for change in response) if response else []\n view.run_command('lsp_apply_document_edit', {'changes': edits})\n\n\nclass WillSaveWaitTask(SaveTask):\n @classmethod\n def is_applicable(cls, view: sublime.View) -> bool:\n return bool(view.file_name())\n\n def __init__(self, task_runner: LspTextCommand, on_complete: Callable[[], None]) -> None:\n super().__init__(task_runner, on_complete)\n self._session_iterator = None # type: Optional[Iterator[Session]]\n\n def run_async(self) -> None:\n super().run_async()\n self._session_iterator = self._task_runner.sessions('textDocumentSync.willSaveWaitUntil')\n self._handle_next_session_async()\n\n def _handle_next_session_async(self) -> None:\n session = next(self._session_iterator, None) if self._session_iterator else None\n if session:\n self._purge_changes_async()\n self._will_save_wait_until_async(session)\n else:\n self._on_complete()\n\n def _will_save_wait_until_async(self, session: Session) -> None:\n session.send_request_async(\n will_save_wait_until(self._task_runner.view, reason=1), # TextDocumentSaveReason.Manual\n self._on_response,\n lambda error: self._on_response(None))\n\n def _on_response(self, response: Any) -> None:\n if response and not self._cancelled:\n apply_response_to_view(response, self._task_runner.view)\n sublime.set_timeout_async(self._handle_next_session_async)\n\n\nclass FormattingTask(SaveTask):\n @classmethod\n def is_applicable(cls, view: sublime.View) -> bool:\n settings = view.settings()\n view_format_on_save = settings.get('lsp_format_on_save', None)\n enabled = view_format_on_save if isinstance(view_format_on_save, bool) else userprefs().lsp_format_on_save\n return enabled and bool(view.window()) and bool(view.file_name())\n\n def run_async(self) -> None:\n super().run_async()\n 
self._purge_changes_async()\n session = self._task_runner.best_session(LspFormatDocumentCommand.capability)\n if session:\n session.send_request_async(\n text_document_formatting(self._task_runner.view), self._on_response,\n lambda error: self._on_response(None))\n else:\n self._on_complete()\n\n def _on_response(self, response: Any) -> None:\n if response and not self._cancelled:\n apply_response_to_view(response, self._task_runner.view)\n sublime.set_timeout_async(self._on_complete)\n\n\nLspSaveCommand.register_task(WillSaveWaitTask)\nLspSaveCommand.register_task(FormattingTask)\n\n\nclass LspFormatDocumentCommand(LspTextCommand):\n\n capability = 'documentFormattingProvider'\n\n def is_enabled(self, event: Optional[dict] = None, point: Optional[int] = None) -> bool:\n return super().is_enabled() or bool(self.best_session(LspFormatDocumentRangeCommand.capability))\n\n def run(self, edit: sublime.Edit, event: Optional[dict] = None) -> None:\n session = self.best_session(self.capability)\n if session:\n # Either use the documentFormattingProvider ...\n session.send_request(text_document_formatting(self.view), self.on_result)\n else:\n session = self.best_session(LspFormatDocumentRangeCommand.capability)\n if session:\n # ... or use the documentRangeFormattingProvider and format the entire range.\n req = text_document_range_formatting(self.view, entire_content_region(self.view))\n session.send_request(req, self.on_result)\n\n def on_result(self, params: Any) -> None:\n apply_response_to_view(params, self.view)\n\n\nclass LspFormatDocumentRangeCommand(LspTextCommand):\n\n capability = 'documentRangeFormattingProvider'\n\n def is_enabled(self, event: Optional[dict] = None, point: Optional[int] = None) -> bool:\n if super().is_enabled(event, point):\n if len(self.view.sel()) == 1:\n region = self.view.sel()[0]\n if region.begin() != region.end():\n return True\n return False\n\n def run(self, edit: sublime.Edit, event: Optional[dict] = None) -> None:\n session = self.best_session(self.capability)\n selection = first_selection_region(self.view)\n if session and selection is not None:\n req = text_document_range_formatting(self.view, selection)\n session.send_request(req, lambda response: apply_response_to_view(response, self.view))\n", "path": "plugin/formatting.py"}], "after_files": [{"content": "from .core.edit import parse_text_edit\nfrom .core.promise import Promise\nfrom .core.protocol import Error\nfrom .core.protocol import TextEdit\nfrom .core.registry import LspTextCommand\nfrom .core.sessions import Session\nfrom .core.settings import userprefs\nfrom .core.typing import Any, Callable, List, Optional, Iterator, Union\nfrom .core.views import entire_content_region\nfrom .core.views import first_selection_region\nfrom .core.views import text_document_formatting\nfrom .core.views import text_document_range_formatting\nfrom .core.views import will_save_wait_until\nfrom .save_command import LspSaveCommand, SaveTask\nimport sublime\n\n\nFormatResponse = Union[List[TextEdit], None, Error]\n\n\ndef format_document(text_command: LspTextCommand) -> Promise[FormatResponse]:\n view = text_command.view\n session = text_command.best_session(LspFormatDocumentCommand.capability)\n if session:\n # Either use the documentFormattingProvider ...\n return session.send_request_task(text_document_formatting(view))\n session = text_command.best_session(LspFormatDocumentRangeCommand.capability)\n if session:\n # ... 
or use the documentRangeFormattingProvider and format the entire range.\n return session.send_request_task(text_document_range_formatting(view, entire_content_region(view)))\n return Promise.resolve(None)\n\n\ndef apply_response_to_view(response: Optional[List[TextEdit]], view: sublime.View) -> None:\n edits = list(parse_text_edit(change) for change in response) if response else []\n view.run_command('lsp_apply_document_edit', {'changes': edits})\n\n\nclass WillSaveWaitTask(SaveTask):\n @classmethod\n def is_applicable(cls, view: sublime.View) -> bool:\n return bool(view.file_name())\n\n def __init__(self, task_runner: LspTextCommand, on_complete: Callable[[], None]) -> None:\n super().__init__(task_runner, on_complete)\n self._session_iterator = None # type: Optional[Iterator[Session]]\n\n def run_async(self) -> None:\n super().run_async()\n self._session_iterator = self._task_runner.sessions('textDocumentSync.willSaveWaitUntil')\n self._handle_next_session_async()\n\n def _handle_next_session_async(self) -> None:\n session = next(self._session_iterator, None) if self._session_iterator else None\n if session:\n self._purge_changes_async()\n self._will_save_wait_until_async(session)\n else:\n self._on_complete()\n\n def _will_save_wait_until_async(self, session: Session) -> None:\n session.send_request_async(\n will_save_wait_until(self._task_runner.view, reason=1), # TextDocumentSaveReason.Manual\n self._on_response,\n lambda error: self._on_response(None))\n\n def _on_response(self, response: Any) -> None:\n if response and not self._cancelled:\n apply_response_to_view(response, self._task_runner.view)\n sublime.set_timeout_async(self._handle_next_session_async)\n\n\nclass FormattingTask(SaveTask):\n @classmethod\n def is_applicable(cls, view: sublime.View) -> bool:\n settings = view.settings()\n view_format_on_save = settings.get('lsp_format_on_save', None)\n enabled = view_format_on_save if isinstance(view_format_on_save, bool) else userprefs().lsp_format_on_save\n return enabled and bool(view.window()) and bool(view.file_name())\n\n def run_async(self) -> None:\n super().run_async()\n self._purge_changes_async()\n format_document(self._task_runner).then(self._on_response)\n\n def _on_response(self, response: FormatResponse) -> None:\n if response and not isinstance(response, Error) and not self._cancelled:\n apply_response_to_view(response, self._task_runner.view)\n sublime.set_timeout_async(self._on_complete)\n\n\nLspSaveCommand.register_task(WillSaveWaitTask)\nLspSaveCommand.register_task(FormattingTask)\n\n\nclass LspFormatDocumentCommand(LspTextCommand):\n\n capability = 'documentFormattingProvider'\n\n def is_enabled(self, event: Optional[dict] = None, point: Optional[int] = None) -> bool:\n return super().is_enabled() or bool(self.best_session(LspFormatDocumentRangeCommand.capability))\n\n def run(self, edit: sublime.Edit, event: Optional[dict] = None) -> None:\n format_document(self).then(self.on_result)\n\n def on_result(self, result: FormatResponse) -> None:\n if result and not isinstance(result, Error):\n apply_response_to_view(result, self.view)\n\n\nclass LspFormatDocumentRangeCommand(LspTextCommand):\n\n capability = 'documentRangeFormattingProvider'\n\n def is_enabled(self, event: Optional[dict] = None, point: Optional[int] = None) -> bool:\n if super().is_enabled(event, point):\n if len(self.view.sel()) == 1:\n region = self.view.sel()[0]\n if region.begin() != region.end():\n return True\n return False\n\n def run(self, edit: sublime.Edit, event: Optional[dict] = None) 
-> None:\n session = self.best_session(self.capability)\n selection = first_selection_region(self.view)\n if session and selection is not None:\n req = text_document_range_formatting(self.view, selection)\n session.send_request(req, lambda response: apply_response_to_view(response, self.view))\n", "path": "plugin/formatting.py"}]}
| 1,775 | 847 |
gh_patches_debug_40724
|
rasdani/github-patches
|
git_diff
|
readthedocs__readthedocs.org-3175
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Enable users to delete their own account
Currently, people are posting issues if they need their account deleted. This leads to extra noise in the issues, and to some privacy concerns.
I propose we add an email account and notice to the README, letting users know where they can email to request account deletion. This will allow better triaging and chunking of deletions.
@ericholscher What do you think?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `readthedocs/core/forms.py`
Content:
```
1 """Forms for core app."""
2
3 from __future__ import absolute_import
4 from builtins import object
5 import logging
6
7 from haystack.forms import SearchForm
8 from haystack.query import SearchQuerySet
9 from django import forms
10 from django.forms.fields import CharField
11 from django.utils.translation import ugettext_lazy as _
12
13 from .models import UserProfile
14
15 log = logging.getLogger(__name__)
16
17
18 class UserProfileForm(forms.ModelForm):
19 first_name = CharField(label=_('First name'), required=False)
20 last_name = CharField(label=_('Last name'), required=False)
21
22 class Meta(object):
23 model = UserProfile
24 # Don't allow users edit someone else's user page,
25 fields = ['first_name', 'last_name', 'homepage', 'allow_ads']
26
27 def __init__(self, *args, **kwargs):
28 super(UserProfileForm, self).__init__(*args, **kwargs)
29 try:
30 self.fields['first_name'].initial = self.instance.user.first_name
31 self.fields['last_name'].initial = self.instance.user.last_name
32 except AttributeError:
33 pass
34
35 def save(self, commit=True):
36 first_name = self.cleaned_data.pop('first_name', None)
37 last_name = self.cleaned_data.pop('last_name', None)
38 profile = super(UserProfileForm, self).save(commit=commit)
39 if commit:
40 user = profile.user
41 user.first_name = first_name
42 user.last_name = last_name
43 user.save()
44 return profile
45
46
47 class FacetField(forms.MultipleChoiceField):
48
49 """
50 For filtering searches on a facet.
51
52 Has validation for the format of facet values.
53 """
54
55 def valid_value(self, value):
56 """
57 Although this is a choice field, no choices need to be supplied.
58
59 Instead, we just validate that the value is in the correct format
60 for facet filtering (facet_name:value)
61 """
62 if ":" not in value:
63 return False
64 return True
65
66
67 class FacetedSearchForm(SearchForm):
68
69 """
70 Supports fetching faceted results with a corresponding query.
71
72 `facets`
73 A list of facet names for which to get facet counts
74 `models`
75 Limit the search to one or more models
76 """
77
78 selected_facets = FacetField(required=False)
79
80 def __init__(self, *args, **kwargs):
81 facets = kwargs.pop('facets', [])
82 models = kwargs.pop('models', [])
83 super(FacetedSearchForm, self).__init__(*args, **kwargs)
84
85 for facet in facets:
86 self.searchqueryset = self.searchqueryset.facet(facet)
87 if models:
88 self.searchqueryset = self.searchqueryset.models(*models)
89
90 def clean_selected_facets(self):
91 facets = self.cleaned_data['selected_facets']
92 cleaned_facets = []
93 clean = SearchQuerySet().query.clean
94 for facet in facets:
95 field, value = facet.split(":", 1)
96 if not value: # Ignore empty values
97 continue
98 value = clean(value)
99 cleaned_facets.append(u'%s:"%s"' % (field, value))
100 return cleaned_facets
101
102 def search(self):
103 sqs = super(FacetedSearchForm, self).search()
104 for facet in self.cleaned_data['selected_facets']:
105 sqs = sqs.narrow(facet)
106 self.searchqueryset = sqs
107 return sqs
108
```
Path: `readthedocs/profiles/views.py`
Content:
```
1 """Views for creating, editing and viewing site-specific user profiles."""
2
3 from __future__ import absolute_import
4 from django.contrib.auth.decorators import login_required
5 from django.contrib.auth.models import User
6 from django.core.exceptions import ObjectDoesNotExist
7 from django.core.urlresolvers import reverse
8 from django.http import Http404
9 from django.http import HttpResponseRedirect
10 from django.shortcuts import get_object_or_404
11 from django.shortcuts import render_to_response
12 from django.template import RequestContext
13
14
15 def create_profile(request, form_class, success_url=None,
16 template_name='profiles/private/create_profile.html',
17 extra_context=None):
18 """
19 Create a profile for the current user, if one doesn't already exist.
20
21 If the user already has a profile, a redirect will be issued to the
22 :view:`profiles.views.edit_profile` view.
23
24 **Optional arguments:**
25
26 ``extra_context``
27 A dictionary of variables to add to the template context. Any
28 callable object in this dictionary will be called to produce
29 the end result which appears in the context.
30
31 ``form_class``
32 The form class to use for validating and creating the user
33 profile. This form class must define a method named
34 ``save()``, implementing the same argument signature as the
35 ``save()`` method of a standard Django ``ModelForm`` (this
36 view will call ``save(commit=False)`` to obtain the profile
37 object, and fill in the user before the final save). If the
38 profile object includes many-to-many relations, the convention
39 established by ``ModelForm`` of using a method named
40 ``save_m2m()`` will be used, and so your form class should
41 also define this method.
42
43 ``success_url``
44 The URL to redirect to after successful profile creation. If
45 this argument is not supplied, this will default to the URL of
46 :view:`profiles.views.profile_detail` for the newly-created
47 profile object.
48
49 ``template_name``
50 The template to use when displaying the profile-creation
51 form. If not supplied, this will default to
52 :template:`profiles/create_profile.html`.
53
54 **Context:**
55
56 ``form``
57 The profile-creation form.
58
59 **Template:**
60
61 ``template_name`` keyword argument, or
62 :template:`profiles/create_profile.html`.
63
64 """
65 try:
66 profile_obj = request.user.profile
67 return HttpResponseRedirect(reverse('profiles_edit_profile'))
68 except ObjectDoesNotExist:
69 pass
70
71 #
72 # We set up success_url here, rather than as the default value for
73 # the argument. Trying to do it as the argument's default would
74 # mean evaluating the call to reverse() at the time this module is
75 # first imported, which introduces a circular dependency: to
76 # perform the reverse lookup we need access to profiles/urls.py,
77 # but profiles/urls.py in turn imports this module.
78 #
79
80 if success_url is None:
81 success_url = reverse('profiles_profile_detail',
82 kwargs={'username': request.user.username})
83 if request.method == 'POST':
84 form = form_class(data=request.POST, files=request.FILES)
85 if form.is_valid():
86 profile_obj = form.save(commit=False)
87 profile_obj.user = request.user
88 profile_obj.save()
89 if hasattr(form, 'save_m2m'):
90 form.save_m2m()
91 return HttpResponseRedirect(success_url)
92 else:
93 form = form_class()
94
95 if extra_context is None:
96 extra_context = {}
97 context = RequestContext(request)
98 for key, value in list(extra_context.items()):
99 context[key] = (value() if callable(value) else value)
100
101 return render_to_response(template_name,
102 {'form': form},
103 context_instance=context)
104 create_profile = login_required(create_profile)
105
106
107 def edit_profile(request, form_class, success_url=None,
108 template_name='profiles/private/edit_profile.html',
109 extra_context=None):
110 """
111 Edit the current user's profile.
112
113 If the user does not already have a profile, a redirect will be issued to
114 the :view:`profiles.views.create_profile` view.
115
116 **Optional arguments:**
117
118 ``extra_context``
119 A dictionary of variables to add to the template context. Any
120 callable object in this dictionary will be called to produce
121 the end result which appears in the context.
122
123 ``form_class``
124 The form class to use for validating and editing the user
125 profile. This form class must operate similarly to a standard
126 Django ``ModelForm`` in that it must accept an instance of the
127 object to be edited as the keyword argument ``instance`` to
128 its constructor, and it must implement a method named
129 ``save()`` which will save the updates to the object.
130
131 ``success_url``
132 The URL to redirect to following a successful edit. If not
133 specified, this will default to the URL of
134 :view:`profiles.views.profile_detail` for the profile object
135 being edited.
136
137 ``template_name``
138 The template to use when displaying the profile-editing
139 form. If not specified, this will default to
140 :template:`profiles/edit_profile.html`.
141
142 **Context:**
143
144 ``form``
145 The form for editing the profile.
146
147 ``profile``
148 The user's current profile.
149
150 **Template:**
151
152 ``template_name`` keyword argument or
153 :template:`profiles/edit_profile.html`.
154
155 """
156 try:
157 profile_obj = request.user.profile
158 except ObjectDoesNotExist:
159 return HttpResponseRedirect(reverse('profiles_profile_create'))
160
161 if success_url is None:
162 success_url = reverse('profiles_profile_detail',
163 kwargs={'username': request.user.username})
164 if request.method == 'POST':
165 form = form_class(data=request.POST, files=request.FILES, instance=profile_obj)
166 if form.is_valid():
167 form.save()
168 return HttpResponseRedirect(success_url)
169 else:
170 form = form_class(instance=profile_obj)
171
172 if extra_context is None:
173 extra_context = {}
174 context = RequestContext(request)
175 for key, value in list(extra_context.items()):
176 context[key] = (value() if callable(value) else value)
177
178 return render_to_response(template_name, {
179 'form': form,
180 'profile': profile_obj,
181 'user': profile_obj.user,
182 }, context_instance=context)
183 edit_profile = login_required(edit_profile)
184
185
186 def profile_detail(request, username, public_profile_field=None,
187 template_name='profiles/public/profile_detail.html',
188 extra_context=None):
189 """
190 Detail view of a user's profile.
191
192 If the user has not yet created a profile, ``Http404`` will be
193 raised.
194
195 **Required arguments:**
196
197 ``username``
198 The username of the user whose profile is being displayed.
199
200 **Optional arguments:**
201
202 ``extra_context``
203 A dictionary of variables to add to the template context. Any
204 callable object in this dictionary will be called to produce
205 the end result which appears in the context.
206
207 ``public_profile_field``
208 The name of a ``BooleanField`` on the profile model; if the
209 value of that field on the user's profile is ``False``, the
210 ``profile`` variable in the template will be ``None``. Use
211 this feature to allow users to mark their profiles as not
212 being publicly viewable.
213
214 If this argument is not specified, it will be assumed that all
215 users' profiles are publicly viewable.
216
217 ``template_name``
218 The name of the template to use for displaying the profile. If
219 not specified, this will default to
220 :template:`profiles/profile_detail.html`.
221
222 **Context:**
223
224 ``profile``
225 The user's profile, or ``None`` if the user's profile is not
226 publicly viewable (see the description of
227 ``public_profile_field`` above).
228
229 **Template:**
230
231 ``template_name`` keyword argument or
232 :template:`profiles/profile_detail.html`.
233
234 """
235 user = get_object_or_404(User, username=username)
236 try:
237 profile_obj = user.profile
238 except ObjectDoesNotExist:
239 raise Http404
240 if public_profile_field is not None and \
241 not getattr(profile_obj, public_profile_field):
242 profile_obj = None
243
244 if extra_context is None:
245 extra_context = {}
246 context = RequestContext(request)
247 for key, value in list(extra_context.items()):
248 context[key] = (value() if callable(value) else value)
249
250 return render_to_response(template_name,
251 {'profile': profile_obj},
252 context_instance=context)
253
```
Path: `readthedocs/profiles/urls/private.py`
Content:
```
1 """URL patterns for views to modify user profiles."""
2
3 from __future__ import absolute_import
4 from django.conf.urls import url
5
6 from readthedocs.core.forms import UserProfileForm
7 from readthedocs.profiles import views
8
9 urlpatterns = [
10 url(r'^create/', views.create_profile,
11 {
12 'form_class': UserProfileForm,
13 },
14 name='profiles_profile_create'),
15 url(r'^edit/', views.edit_profile,
16 {
17 'form_class': UserProfileForm,
18 'template_name': 'profiles/private/edit_profile.html',
19 },
20 name='profiles_profile_edit'),
21 ]
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/readthedocs/core/forms.py b/readthedocs/core/forms.py
--- a/readthedocs/core/forms.py
+++ b/readthedocs/core/forms.py
@@ -4,6 +4,7 @@
from builtins import object
import logging
+from django.contrib.auth.models import User
from haystack.forms import SearchForm
from haystack.query import SearchQuerySet
from django import forms
@@ -44,6 +45,22 @@
return profile
+class UserDeleteForm(forms.ModelForm):
+ username = CharField(label=_('Username'), help_text=_('Please type your username to confirm.'))
+
+ class Meta(object):
+ model = User
+ fields = ['username']
+
+ def clean_username(self):
+ data = self.cleaned_data['username']
+
+ if self.instance.username != data:
+ raise forms.ValidationError(_("Username does not match!"))
+
+ return data
+
+
class FacetField(forms.MultipleChoiceField):
"""
diff --git a/readthedocs/profiles/urls/private.py b/readthedocs/profiles/urls/private.py
--- a/readthedocs/profiles/urls/private.py
+++ b/readthedocs/profiles/urls/private.py
@@ -18,4 +18,5 @@
'template_name': 'profiles/private/edit_profile.html',
},
name='profiles_profile_edit'),
+ url(r'^delete/', views.delete_account, name='delete_account')
]
diff --git a/readthedocs/profiles/views.py b/readthedocs/profiles/views.py
--- a/readthedocs/profiles/views.py
+++ b/readthedocs/profiles/views.py
@@ -1,16 +1,21 @@
"""Views for creating, editing and viewing site-specific user profiles."""
from __future__ import absolute_import
+
+from django.contrib import messages
+from django.contrib.auth import logout
from django.contrib.auth.decorators import login_required
from django.contrib.auth.models import User
from django.core.exceptions import ObjectDoesNotExist
from django.core.urlresolvers import reverse
from django.http import Http404
from django.http import HttpResponseRedirect
-from django.shortcuts import get_object_or_404
+from django.shortcuts import get_object_or_404, render, redirect
from django.shortcuts import render_to_response
from django.template import RequestContext
+from readthedocs.core.forms import UserDeleteForm
+
def create_profile(request, form_class, success_url=None,
template_name='profiles/private/create_profile.html',
@@ -183,6 +188,27 @@
edit_profile = login_required(edit_profile)
+@login_required()
+def delete_account(request):
+ form = UserDeleteForm()
+ template_name = 'profiles/private/delete_account.html'
+
+ if request.method == 'POST':
+ form = UserDeleteForm(instance=request.user, data=request.POST)
+ if form.is_valid():
+
+ # Do not delete the account permanently because it may create disaster
+ # Inactive the user instead.
+ request.user.is_active = False
+ request.user.save()
+ logout(request)
+ messages.info(request, 'You have successfully deleted your account')
+
+ return redirect('homepage')
+
+ return render(request, template_name, {'form': form})
+
+
def profile_detail(request, username, public_profile_field=None,
template_name='profiles/public/profile_detail.html',
extra_context=None):
|
{"golden_diff": "diff --git a/readthedocs/core/forms.py b/readthedocs/core/forms.py\n--- a/readthedocs/core/forms.py\n+++ b/readthedocs/core/forms.py\n@@ -4,6 +4,7 @@\n from builtins import object\n import logging\n \n+from django.contrib.auth.models import User\n from haystack.forms import SearchForm\n from haystack.query import SearchQuerySet\n from django import forms\n@@ -44,6 +45,22 @@\n return profile\n \n \n+class UserDeleteForm(forms.ModelForm):\n+ username = CharField(label=_('Username'), help_text=_('Please type your username to confirm.'))\n+\n+ class Meta(object):\n+ model = User\n+ fields = ['username']\n+\n+ def clean_username(self):\n+ data = self.cleaned_data['username']\n+\n+ if self.instance.username != data:\n+ raise forms.ValidationError(_(\"Username does not match!\"))\n+\n+ return data\n+\n+\n class FacetField(forms.MultipleChoiceField):\n \n \"\"\"\ndiff --git a/readthedocs/profiles/urls/private.py b/readthedocs/profiles/urls/private.py\n--- a/readthedocs/profiles/urls/private.py\n+++ b/readthedocs/profiles/urls/private.py\n@@ -18,4 +18,5 @@\n 'template_name': 'profiles/private/edit_profile.html',\n },\n name='profiles_profile_edit'),\n+ url(r'^delete/', views.delete_account, name='delete_account')\n ]\ndiff --git a/readthedocs/profiles/views.py b/readthedocs/profiles/views.py\n--- a/readthedocs/profiles/views.py\n+++ b/readthedocs/profiles/views.py\n@@ -1,16 +1,21 @@\n \"\"\"Views for creating, editing and viewing site-specific user profiles.\"\"\"\n \n from __future__ import absolute_import\n+\n+from django.contrib import messages\n+from django.contrib.auth import logout\n from django.contrib.auth.decorators import login_required\n from django.contrib.auth.models import User\n from django.core.exceptions import ObjectDoesNotExist\n from django.core.urlresolvers import reverse\n from django.http import Http404\n from django.http import HttpResponseRedirect\n-from django.shortcuts import get_object_or_404\n+from django.shortcuts import get_object_or_404, render, redirect\n from django.shortcuts import render_to_response\n from django.template import RequestContext\n \n+from readthedocs.core.forms import UserDeleteForm\n+\n \n def create_profile(request, form_class, success_url=None,\n template_name='profiles/private/create_profile.html',\n@@ -183,6 +188,27 @@\n edit_profile = login_required(edit_profile)\n \n \n+@login_required()\n+def delete_account(request):\n+ form = UserDeleteForm()\n+ template_name = 'profiles/private/delete_account.html'\n+\n+ if request.method == 'POST':\n+ form = UserDeleteForm(instance=request.user, data=request.POST)\n+ if form.is_valid():\n+\n+ # Do not delete the account permanently because it may create disaster\n+ # Inactive the user instead.\n+ request.user.is_active = False\n+ request.user.save()\n+ logout(request)\n+ messages.info(request, 'You have successfully deleted your account')\n+\n+ return redirect('homepage')\n+\n+ return render(request, template_name, {'form': form})\n+\n+\n def profile_detail(request, username, public_profile_field=None,\n template_name='profiles/public/profile_detail.html',\n extra_context=None):\n", "issue": "Enable users to delete their own account\nCurrently, people are posting issues if they need their account deleted. This leads to extra noise in the issues, and to some privacy concerns. \r\n\r\nI propose we add an email account and notice to the README, letting users know where they can email to request account deletion. This will allow better triaging and chunking of deletions. 
\r\n\r\n@ericholscher What do you think? \n", "before_files": [{"content": "\"\"\"Forms for core app.\"\"\"\n\nfrom __future__ import absolute_import\nfrom builtins import object\nimport logging\n\nfrom haystack.forms import SearchForm\nfrom haystack.query import SearchQuerySet\nfrom django import forms\nfrom django.forms.fields import CharField\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom .models import UserProfile\n\nlog = logging.getLogger(__name__)\n\n\nclass UserProfileForm(forms.ModelForm):\n first_name = CharField(label=_('First name'), required=False)\n last_name = CharField(label=_('Last name'), required=False)\n\n class Meta(object):\n model = UserProfile\n # Don't allow users edit someone else's user page,\n fields = ['first_name', 'last_name', 'homepage', 'allow_ads']\n\n def __init__(self, *args, **kwargs):\n super(UserProfileForm, self).__init__(*args, **kwargs)\n try:\n self.fields['first_name'].initial = self.instance.user.first_name\n self.fields['last_name'].initial = self.instance.user.last_name\n except AttributeError:\n pass\n\n def save(self, commit=True):\n first_name = self.cleaned_data.pop('first_name', None)\n last_name = self.cleaned_data.pop('last_name', None)\n profile = super(UserProfileForm, self).save(commit=commit)\n if commit:\n user = profile.user\n user.first_name = first_name\n user.last_name = last_name\n user.save()\n return profile\n\n\nclass FacetField(forms.MultipleChoiceField):\n\n \"\"\"\n For filtering searches on a facet.\n\n Has validation for the format of facet values.\n \"\"\"\n\n def valid_value(self, value):\n \"\"\"\n Although this is a choice field, no choices need to be supplied.\n\n Instead, we just validate that the value is in the correct format\n for facet filtering (facet_name:value)\n \"\"\"\n if \":\" not in value:\n return False\n return True\n\n\nclass FacetedSearchForm(SearchForm):\n\n \"\"\"\n Supports fetching faceted results with a corresponding query.\n\n `facets`\n A list of facet names for which to get facet counts\n `models`\n Limit the search to one or more models\n \"\"\"\n\n selected_facets = FacetField(required=False)\n\n def __init__(self, *args, **kwargs):\n facets = kwargs.pop('facets', [])\n models = kwargs.pop('models', [])\n super(FacetedSearchForm, self).__init__(*args, **kwargs)\n\n for facet in facets:\n self.searchqueryset = self.searchqueryset.facet(facet)\n if models:\n self.searchqueryset = self.searchqueryset.models(*models)\n\n def clean_selected_facets(self):\n facets = self.cleaned_data['selected_facets']\n cleaned_facets = []\n clean = SearchQuerySet().query.clean\n for facet in facets:\n field, value = facet.split(\":\", 1)\n if not value: # Ignore empty values\n continue\n value = clean(value)\n cleaned_facets.append(u'%s:\"%s\"' % (field, value))\n return cleaned_facets\n\n def search(self):\n sqs = super(FacetedSearchForm, self).search()\n for facet in self.cleaned_data['selected_facets']:\n sqs = sqs.narrow(facet)\n self.searchqueryset = sqs\n return sqs\n", "path": "readthedocs/core/forms.py"}, {"content": "\"\"\"Views for creating, editing and viewing site-specific user profiles.\"\"\"\n\nfrom __future__ import absolute_import\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.auth.models import User\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.core.urlresolvers import reverse\nfrom django.http import Http404\nfrom django.http import HttpResponseRedirect\nfrom django.shortcuts import get_object_or_404\nfrom 
django.shortcuts import render_to_response\nfrom django.template import RequestContext\n\n\ndef create_profile(request, form_class, success_url=None,\n template_name='profiles/private/create_profile.html',\n extra_context=None):\n \"\"\"\n Create a profile for the current user, if one doesn't already exist.\n\n If the user already has a profile, a redirect will be issued to the\n :view:`profiles.views.edit_profile` view.\n\n **Optional arguments:**\n\n ``extra_context``\n A dictionary of variables to add to the template context. Any\n callable object in this dictionary will be called to produce\n the end result which appears in the context.\n\n ``form_class``\n The form class to use for validating and creating the user\n profile. This form class must define a method named\n ``save()``, implementing the same argument signature as the\n ``save()`` method of a standard Django ``ModelForm`` (this\n view will call ``save(commit=False)`` to obtain the profile\n object, and fill in the user before the final save). If the\n profile object includes many-to-many relations, the convention\n established by ``ModelForm`` of using a method named\n ``save_m2m()`` will be used, and so your form class should\n also define this method.\n\n ``success_url``\n The URL to redirect to after successful profile creation. If\n this argument is not supplied, this will default to the URL of\n :view:`profiles.views.profile_detail` for the newly-created\n profile object.\n\n ``template_name``\n The template to use when displaying the profile-creation\n form. If not supplied, this will default to\n :template:`profiles/create_profile.html`.\n\n **Context:**\n\n ``form``\n The profile-creation form.\n\n **Template:**\n\n ``template_name`` keyword argument, or\n :template:`profiles/create_profile.html`.\n\n \"\"\"\n try:\n profile_obj = request.user.profile\n return HttpResponseRedirect(reverse('profiles_edit_profile'))\n except ObjectDoesNotExist:\n pass\n\n #\n # We set up success_url here, rather than as the default value for\n # the argument. 
Trying to do it as the argument's default would\n # mean evaluating the call to reverse() at the time this module is\n # first imported, which introduces a circular dependency: to\n # perform the reverse lookup we need access to profiles/urls.py,\n # but profiles/urls.py in turn imports this module.\n #\n\n if success_url is None:\n success_url = reverse('profiles_profile_detail',\n kwargs={'username': request.user.username})\n if request.method == 'POST':\n form = form_class(data=request.POST, files=request.FILES)\n if form.is_valid():\n profile_obj = form.save(commit=False)\n profile_obj.user = request.user\n profile_obj.save()\n if hasattr(form, 'save_m2m'):\n form.save_m2m()\n return HttpResponseRedirect(success_url)\n else:\n form = form_class()\n\n if extra_context is None:\n extra_context = {}\n context = RequestContext(request)\n for key, value in list(extra_context.items()):\n context[key] = (value() if callable(value) else value)\n\n return render_to_response(template_name,\n {'form': form},\n context_instance=context)\ncreate_profile = login_required(create_profile)\n\n\ndef edit_profile(request, form_class, success_url=None,\n template_name='profiles/private/edit_profile.html',\n extra_context=None):\n \"\"\"\n Edit the current user's profile.\n\n If the user does not already have a profile, a redirect will be issued to\n the :view:`profiles.views.create_profile` view.\n\n **Optional arguments:**\n\n ``extra_context``\n A dictionary of variables to add to the template context. Any\n callable object in this dictionary will be called to produce\n the end result which appears in the context.\n\n ``form_class``\n The form class to use for validating and editing the user\n profile. This form class must operate similarly to a standard\n Django ``ModelForm`` in that it must accept an instance of the\n object to be edited as the keyword argument ``instance`` to\n its constructor, and it must implement a method named\n ``save()`` which will save the updates to the object.\n\n ``success_url``\n The URL to redirect to following a successful edit. If not\n specified, this will default to the URL of\n :view:`profiles.views.profile_detail` for the profile object\n being edited.\n\n ``template_name``\n The template to use when displaying the profile-editing\n form. 
If not specified, this will default to\n :template:`profiles/edit_profile.html`.\n\n **Context:**\n\n ``form``\n The form for editing the profile.\n\n ``profile``\n The user's current profile.\n\n **Template:**\n\n ``template_name`` keyword argument or\n :template:`profiles/edit_profile.html`.\n\n \"\"\"\n try:\n profile_obj = request.user.profile\n except ObjectDoesNotExist:\n return HttpResponseRedirect(reverse('profiles_profile_create'))\n\n if success_url is None:\n success_url = reverse('profiles_profile_detail',\n kwargs={'username': request.user.username})\n if request.method == 'POST':\n form = form_class(data=request.POST, files=request.FILES, instance=profile_obj)\n if form.is_valid():\n form.save()\n return HttpResponseRedirect(success_url)\n else:\n form = form_class(instance=profile_obj)\n\n if extra_context is None:\n extra_context = {}\n context = RequestContext(request)\n for key, value in list(extra_context.items()):\n context[key] = (value() if callable(value) else value)\n\n return render_to_response(template_name, {\n 'form': form,\n 'profile': profile_obj,\n 'user': profile_obj.user,\n }, context_instance=context)\nedit_profile = login_required(edit_profile)\n\n\ndef profile_detail(request, username, public_profile_field=None,\n template_name='profiles/public/profile_detail.html',\n extra_context=None):\n \"\"\"\n Detail view of a user's profile.\n\n If the user has not yet created a profile, ``Http404`` will be\n raised.\n\n **Required arguments:**\n\n ``username``\n The username of the user whose profile is being displayed.\n\n **Optional arguments:**\n\n ``extra_context``\n A dictionary of variables to add to the template context. Any\n callable object in this dictionary will be called to produce\n the end result which appears in the context.\n\n ``public_profile_field``\n The name of a ``BooleanField`` on the profile model; if the\n value of that field on the user's profile is ``False``, the\n ``profile`` variable in the template will be ``None``. Use\n this feature to allow users to mark their profiles as not\n being publicly viewable.\n\n If this argument is not specified, it will be assumed that all\n users' profiles are publicly viewable.\n\n ``template_name``\n The name of the template to use for displaying the profile. 
If\n not specified, this will default to\n :template:`profiles/profile_detail.html`.\n\n **Context:**\n\n ``profile``\n The user's profile, or ``None`` if the user's profile is not\n publicly viewable (see the description of\n ``public_profile_field`` above).\n\n **Template:**\n\n ``template_name`` keyword argument or\n :template:`profiles/profile_detail.html`.\n\n \"\"\"\n user = get_object_or_404(User, username=username)\n try:\n profile_obj = user.profile\n except ObjectDoesNotExist:\n raise Http404\n if public_profile_field is not None and \\\n not getattr(profile_obj, public_profile_field):\n profile_obj = None\n\n if extra_context is None:\n extra_context = {}\n context = RequestContext(request)\n for key, value in list(extra_context.items()):\n context[key] = (value() if callable(value) else value)\n\n return render_to_response(template_name,\n {'profile': profile_obj},\n context_instance=context)\n", "path": "readthedocs/profiles/views.py"}, {"content": "\"\"\"URL patterns for views to modify user profiles.\"\"\"\n\nfrom __future__ import absolute_import\nfrom django.conf.urls import url\n\nfrom readthedocs.core.forms import UserProfileForm\nfrom readthedocs.profiles import views\n\nurlpatterns = [\n url(r'^create/', views.create_profile,\n {\n 'form_class': UserProfileForm,\n },\n name='profiles_profile_create'),\n url(r'^edit/', views.edit_profile,\n {\n 'form_class': UserProfileForm,\n 'template_name': 'profiles/private/edit_profile.html',\n },\n name='profiles_profile_edit'),\n]\n", "path": "readthedocs/profiles/urls/private.py"}], "after_files": [{"content": "\"\"\"Forms for core app.\"\"\"\n\nfrom __future__ import absolute_import\nfrom builtins import object\nimport logging\n\nfrom django.contrib.auth.models import User\nfrom haystack.forms import SearchForm\nfrom haystack.query import SearchQuerySet\nfrom django import forms\nfrom django.forms.fields import CharField\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom .models import UserProfile\n\nlog = logging.getLogger(__name__)\n\n\nclass UserProfileForm(forms.ModelForm):\n first_name = CharField(label=_('First name'), required=False)\n last_name = CharField(label=_('Last name'), required=False)\n\n class Meta(object):\n model = UserProfile\n # Don't allow users edit someone else's user page,\n fields = ['first_name', 'last_name', 'homepage', 'allow_ads']\n\n def __init__(self, *args, **kwargs):\n super(UserProfileForm, self).__init__(*args, **kwargs)\n try:\n self.fields['first_name'].initial = self.instance.user.first_name\n self.fields['last_name'].initial = self.instance.user.last_name\n except AttributeError:\n pass\n\n def save(self, commit=True):\n first_name = self.cleaned_data.pop('first_name', None)\n last_name = self.cleaned_data.pop('last_name', None)\n profile = super(UserProfileForm, self).save(commit=commit)\n if commit:\n user = profile.user\n user.first_name = first_name\n user.last_name = last_name\n user.save()\n return profile\n\n\nclass UserDeleteForm(forms.ModelForm):\n username = CharField(label=_('Username'), help_text=_('Please type your username to confirm.'))\n\n class Meta(object):\n model = User\n fields = ['username']\n\n def clean_username(self):\n data = self.cleaned_data['username']\n\n if self.instance.username != data:\n raise forms.ValidationError(_(\"Username does not match!\"))\n\n return data\n\n\nclass FacetField(forms.MultipleChoiceField):\n\n \"\"\"\n For filtering searches on a facet.\n\n Has validation for the format of facet values.\n \"\"\"\n\n def 
valid_value(self, value):\n \"\"\"\n Although this is a choice field, no choices need to be supplied.\n\n Instead, we just validate that the value is in the correct format\n for facet filtering (facet_name:value)\n \"\"\"\n if \":\" not in value:\n return False\n return True\n\n\nclass FacetedSearchForm(SearchForm):\n\n \"\"\"\n Supports fetching faceted results with a corresponding query.\n\n `facets`\n A list of facet names for which to get facet counts\n `models`\n Limit the search to one or more models\n \"\"\"\n\n selected_facets = FacetField(required=False)\n\n def __init__(self, *args, **kwargs):\n facets = kwargs.pop('facets', [])\n models = kwargs.pop('models', [])\n super(FacetedSearchForm, self).__init__(*args, **kwargs)\n\n for facet in facets:\n self.searchqueryset = self.searchqueryset.facet(facet)\n if models:\n self.searchqueryset = self.searchqueryset.models(*models)\n\n def clean_selected_facets(self):\n facets = self.cleaned_data['selected_facets']\n cleaned_facets = []\n clean = SearchQuerySet().query.clean\n for facet in facets:\n field, value = facet.split(\":\", 1)\n if not value: # Ignore empty values\n continue\n value = clean(value)\n cleaned_facets.append(u'%s:\"%s\"' % (field, value))\n return cleaned_facets\n\n def search(self):\n sqs = super(FacetedSearchForm, self).search()\n for facet in self.cleaned_data['selected_facets']:\n sqs = sqs.narrow(facet)\n self.searchqueryset = sqs\n return sqs\n", "path": "readthedocs/core/forms.py"}, {"content": "\"\"\"Views for creating, editing and viewing site-specific user profiles.\"\"\"\n\nfrom __future__ import absolute_import\n\nfrom django.contrib import messages\nfrom django.contrib.auth import logout\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.auth.models import User\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.core.urlresolvers import reverse\nfrom django.http import Http404\nfrom django.http import HttpResponseRedirect\nfrom django.shortcuts import get_object_or_404, render, redirect\nfrom django.shortcuts import render_to_response\nfrom django.template import RequestContext\n\nfrom readthedocs.core.forms import UserDeleteForm\n\n\ndef create_profile(request, form_class, success_url=None,\n template_name='profiles/private/create_profile.html',\n extra_context=None):\n \"\"\"\n Create a profile for the current user, if one doesn't already exist.\n\n If the user already has a profile, a redirect will be issued to the\n :view:`profiles.views.edit_profile` view.\n\n **Optional arguments:**\n\n ``extra_context``\n A dictionary of variables to add to the template context. Any\n callable object in this dictionary will be called to produce\n the end result which appears in the context.\n\n ``form_class``\n The form class to use for validating and creating the user\n profile. This form class must define a method named\n ``save()``, implementing the same argument signature as the\n ``save()`` method of a standard Django ``ModelForm`` (this\n view will call ``save(commit=False)`` to obtain the profile\n object, and fill in the user before the final save). If the\n profile object includes many-to-many relations, the convention\n established by ``ModelForm`` of using a method named\n ``save_m2m()`` will be used, and so your form class should\n also define this method.\n\n ``success_url``\n The URL to redirect to after successful profile creation. 
If\n this argument is not supplied, this will default to the URL of\n :view:`profiles.views.profile_detail` for the newly-created\n profile object.\n\n ``template_name``\n The template to use when displaying the profile-creation\n form. If not supplied, this will default to\n :template:`profiles/create_profile.html`.\n\n **Context:**\n\n ``form``\n The profile-creation form.\n\n **Template:**\n\n ``template_name`` keyword argument, or\n :template:`profiles/create_profile.html`.\n\n \"\"\"\n try:\n profile_obj = request.user.profile\n return HttpResponseRedirect(reverse('profiles_edit_profile'))\n except ObjectDoesNotExist:\n pass\n\n #\n # We set up success_url here, rather than as the default value for\n # the argument. Trying to do it as the argument's default would\n # mean evaluating the call to reverse() at the time this module is\n # first imported, which introduces a circular dependency: to\n # perform the reverse lookup we need access to profiles/urls.py,\n # but profiles/urls.py in turn imports this module.\n #\n\n if success_url is None:\n success_url = reverse('profiles_profile_detail',\n kwargs={'username': request.user.username})\n if request.method == 'POST':\n form = form_class(data=request.POST, files=request.FILES)\n if form.is_valid():\n profile_obj = form.save(commit=False)\n profile_obj.user = request.user\n profile_obj.save()\n if hasattr(form, 'save_m2m'):\n form.save_m2m()\n return HttpResponseRedirect(success_url)\n else:\n form = form_class()\n\n if extra_context is None:\n extra_context = {}\n context = RequestContext(request)\n for key, value in list(extra_context.items()):\n context[key] = (value() if callable(value) else value)\n\n return render_to_response(template_name,\n {'form': form},\n context_instance=context)\ncreate_profile = login_required(create_profile)\n\n\ndef edit_profile(request, form_class, success_url=None,\n template_name='profiles/private/edit_profile.html',\n extra_context=None):\n \"\"\"\n Edit the current user's profile.\n\n If the user does not already have a profile, a redirect will be issued to\n the :view:`profiles.views.create_profile` view.\n\n **Optional arguments:**\n\n ``extra_context``\n A dictionary of variables to add to the template context. Any\n callable object in this dictionary will be called to produce\n the end result which appears in the context.\n\n ``form_class``\n The form class to use for validating and editing the user\n profile. This form class must operate similarly to a standard\n Django ``ModelForm`` in that it must accept an instance of the\n object to be edited as the keyword argument ``instance`` to\n its constructor, and it must implement a method named\n ``save()`` which will save the updates to the object.\n\n ``success_url``\n The URL to redirect to following a successful edit. If not\n specified, this will default to the URL of\n :view:`profiles.views.profile_detail` for the profile object\n being edited.\n\n ``template_name``\n The template to use when displaying the profile-editing\n form. 
If not specified, this will default to\n :template:`profiles/edit_profile.html`.\n\n **Context:**\n\n ``form``\n The form for editing the profile.\n\n ``profile``\n The user's current profile.\n\n **Template:**\n\n ``template_name`` keyword argument or\n :template:`profiles/edit_profile.html`.\n\n \"\"\"\n try:\n profile_obj = request.user.profile\n except ObjectDoesNotExist:\n return HttpResponseRedirect(reverse('profiles_profile_create'))\n\n if success_url is None:\n success_url = reverse('profiles_profile_detail',\n kwargs={'username': request.user.username})\n if request.method == 'POST':\n form = form_class(data=request.POST, files=request.FILES, instance=profile_obj)\n if form.is_valid():\n form.save()\n return HttpResponseRedirect(success_url)\n else:\n form = form_class(instance=profile_obj)\n\n if extra_context is None:\n extra_context = {}\n context = RequestContext(request)\n for key, value in list(extra_context.items()):\n context[key] = (value() if callable(value) else value)\n\n return render_to_response(template_name, {\n 'form': form,\n 'profile': profile_obj,\n 'user': profile_obj.user,\n }, context_instance=context)\nedit_profile = login_required(edit_profile)\n\n\n@login_required()\ndef delete_account(request):\n form = UserDeleteForm()\n template_name = 'profiles/private/delete_account.html'\n\n if request.method == 'POST':\n form = UserDeleteForm(instance=request.user, data=request.POST)\n if form.is_valid():\n\n # Do not delete the account permanently because it may create disaster\n # Inactive the user instead.\n request.user.is_active = False\n request.user.save()\n logout(request)\n messages.info(request, 'You have successfully deleted your account')\n\n return redirect('homepage')\n\n return render(request, template_name, {'form': form})\n\n\ndef profile_detail(request, username, public_profile_field=None,\n template_name='profiles/public/profile_detail.html',\n extra_context=None):\n \"\"\"\n Detail view of a user's profile.\n\n If the user has not yet created a profile, ``Http404`` will be\n raised.\n\n **Required arguments:**\n\n ``username``\n The username of the user whose profile is being displayed.\n\n **Optional arguments:**\n\n ``extra_context``\n A dictionary of variables to add to the template context. Any\n callable object in this dictionary will be called to produce\n the end result which appears in the context.\n\n ``public_profile_field``\n The name of a ``BooleanField`` on the profile model; if the\n value of that field on the user's profile is ``False``, the\n ``profile`` variable in the template will be ``None``. Use\n this feature to allow users to mark their profiles as not\n being publicly viewable.\n\n If this argument is not specified, it will be assumed that all\n users' profiles are publicly viewable.\n\n ``template_name``\n The name of the template to use for displaying the profile. 
If\n not specified, this will default to\n :template:`profiles/profile_detail.html`.\n\n **Context:**\n\n ``profile``\n The user's profile, or ``None`` if the user's profile is not\n publicly viewable (see the description of\n ``public_profile_field`` above).\n\n **Template:**\n\n ``template_name`` keyword argument or\n :template:`profiles/profile_detail.html`.\n\n \"\"\"\n user = get_object_or_404(User, username=username)\n try:\n profile_obj = user.profile\n except ObjectDoesNotExist:\n raise Http404\n if public_profile_field is not None and \\\n not getattr(profile_obj, public_profile_field):\n profile_obj = None\n\n if extra_context is None:\n extra_context = {}\n context = RequestContext(request)\n for key, value in list(extra_context.items()):\n context[key] = (value() if callable(value) else value)\n\n return render_to_response(template_name,\n {'profile': profile_obj},\n context_instance=context)\n", "path": "readthedocs/profiles/views.py"}, {"content": "\"\"\"URL patterns for views to modify user profiles.\"\"\"\n\nfrom __future__ import absolute_import\nfrom django.conf.urls import url\n\nfrom readthedocs.core.forms import UserProfileForm\nfrom readthedocs.profiles import views\n\nurlpatterns = [\n url(r'^create/', views.create_profile,\n {\n 'form_class': UserProfileForm,\n },\n name='profiles_profile_create'),\n url(r'^edit/', views.edit_profile,\n {\n 'form_class': UserProfileForm,\n 'template_name': 'profiles/private/edit_profile.html',\n },\n name='profiles_profile_edit'),\n url(r'^delete/', views.delete_account, name='delete_account')\n]\n", "path": "readthedocs/profiles/urls/private.py"}]}
| 3,982 | 727 |
gh_patches_debug_23027
|
rasdani/github-patches
|
git_diff
|
mirumee__ariadne-172
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unbound enum values are None when used in arguments
When used as a mutation input, an enum parameter should be passed as a `str`, but it is actually `None`.
```python
def test_executing_mutation_takes_enum():
    type_defs = """
        type Query {
            _: String
        }
        type Mutation {
            eat(meal: Meal!): Int!
        }
        enum Meal {
            SPAM
        }
    """
    mutation = MutationType()
    @mutation.field("eat")
    def resolve_eat(*_, meal):  # pylint: disable=unused-variable
        assert meal == "SPAM"
        return 42
    schema = make_executable_schema(type_defs, mutation)
    result = graphql_sync(schema, 'mutation { eat(meal: SPAM) }')
    assert result.errors is None
    assert result.data == {"eat": 42}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ariadne/enums.py`
Content:
```
1 import enum
2
3 from typing import Any, Dict, Optional, Union, cast
4
5 from graphql.type import GraphQLEnumType, GraphQLNamedType, GraphQLSchema
6
7 from .types import SchemaBindable
8
9
10 class EnumType(SchemaBindable):
11 def __init__(
12 self, name: str, values=Union[Dict[str, Any], enum.Enum, enum.IntEnum]
13 ) -> None:
14 self.name = name
15 try:
16 self.values = values.__members__ # pylint: disable=no-member
17 except AttributeError:
18 self.values = values
19
20 def bind_to_schema(self, schema: GraphQLSchema) -> None:
21 graphql_type = schema.type_map.get(self.name)
22 self.validate_graphql_type(graphql_type)
23 graphql_type = cast(GraphQLEnumType, graphql_type)
24
25 for key, value in self.values.items():
26 if key not in graphql_type.values:
27 raise ValueError(
28 "Value %s is not defined on enum %s" % (key, self.name)
29 )
30 graphql_type.values[key].value = value
31
32 def validate_graphql_type(self, graphql_type: Optional[GraphQLNamedType]) -> None:
33 if not graphql_type:
34 raise ValueError("Enum %s is not defined in the schema" % self.name)
35 if not isinstance(graphql_type, GraphQLEnumType):
36 raise ValueError(
37 "%s is defined in the schema, but it is instance of %s (expected %s)"
38 % (self.name, type(graphql_type).__name__, GraphQLEnumType.__name__)
39 )
40
```
Path: `ariadne/executable_schema.py`
Content:
```
1 from typing import List, Union
2
3 from graphql import DocumentNode, GraphQLSchema, build_ast_schema, extend_schema, parse
4
5 from .types import SchemaBindable
6
7
8 def make_executable_schema(
9 type_defs: Union[str, List[str]],
10 bindables: Union[SchemaBindable, List[SchemaBindable], None] = None,
11 ) -> GraphQLSchema:
12 if isinstance(type_defs, list):
13 type_defs = join_type_defs(type_defs)
14
15 ast_document = parse(type_defs)
16 schema = build_and_extend_schema(ast_document)
17
18 if isinstance(bindables, list):
19 for obj in bindables:
20 obj.bind_to_schema(schema)
21 elif bindables:
22 bindables.bind_to_schema(schema)
23
24 return schema
25
26
27 def join_type_defs(type_defs: List[str]) -> str:
28 return "\n\n".join(t.strip() for t in type_defs)
29
30
31 def build_and_extend_schema(ast: DocumentNode) -> GraphQLSchema:
32 schema = build_ast_schema(ast)
33 extension_ast = extract_extensions(ast)
34
35 if extension_ast.definitions:
36 schema = extend_schema(schema, extension_ast)
37
38 return schema
39
40
41 EXTENSION_KINDS = [
42 "scalar_type_extension",
43 "object_type_extension",
44 "interface_type_extension",
45 "union_type_extension",
46 "enum_type_extension",
47 "input_object_type_extension",
48 ]
49
50
51 def extract_extensions(ast: DocumentNode) -> DocumentNode:
52 extensions = [node for node in ast.definitions if node.kind in EXTENSION_KINDS]
53 return DocumentNode(definitions=extensions)
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ariadne/enums.py b/ariadne/enums.py
--- a/ariadne/enums.py
+++ b/ariadne/enums.py
@@ -37,3 +37,15 @@
"%s is defined in the schema, but it is instance of %s (expected %s)"
% (self.name, type(graphql_type).__name__, GraphQLEnumType.__name__)
)
+
+
+def set_default_enum_values_on_schema(schema: GraphQLSchema):
+ for type_object in schema.type_map.values():
+ if isinstance(type_object, GraphQLEnumType):
+ set_default_enum_values(type_object)
+
+
+def set_default_enum_values(graphql_type: GraphQLEnumType):
+ for key in graphql_type.values:
+ if graphql_type.values[key].value is None:
+ graphql_type.values[key].value = key
diff --git a/ariadne/executable_schema.py b/ariadne/executable_schema.py
--- a/ariadne/executable_schema.py
+++ b/ariadne/executable_schema.py
@@ -2,6 +2,7 @@
from graphql import DocumentNode, GraphQLSchema, build_ast_schema, extend_schema, parse
+from .enums import set_default_enum_values_on_schema
from .types import SchemaBindable
@@ -21,6 +22,8 @@
elif bindables:
bindables.bind_to_schema(schema)
+ set_default_enum_values_on_schema(schema)
+
return schema
|
{"golden_diff": "diff --git a/ariadne/enums.py b/ariadne/enums.py\n--- a/ariadne/enums.py\n+++ b/ariadne/enums.py\n@@ -37,3 +37,15 @@\n \"%s is defined in the schema, but it is instance of %s (expected %s)\"\n % (self.name, type(graphql_type).__name__, GraphQLEnumType.__name__)\n )\n+\n+\n+def set_default_enum_values_on_schema(schema: GraphQLSchema):\n+ for type_object in schema.type_map.values():\n+ if isinstance(type_object, GraphQLEnumType):\n+ set_default_enum_values(type_object)\n+\n+\n+def set_default_enum_values(graphql_type: GraphQLEnumType):\n+ for key in graphql_type.values:\n+ if graphql_type.values[key].value is None:\n+ graphql_type.values[key].value = key\ndiff --git a/ariadne/executable_schema.py b/ariadne/executable_schema.py\n--- a/ariadne/executable_schema.py\n+++ b/ariadne/executable_schema.py\n@@ -2,6 +2,7 @@\n \n from graphql import DocumentNode, GraphQLSchema, build_ast_schema, extend_schema, parse\n \n+from .enums import set_default_enum_values_on_schema\n from .types import SchemaBindable\n \n \n@@ -21,6 +22,8 @@\n elif bindables:\n bindables.bind_to_schema(schema)\n \n+ set_default_enum_values_on_schema(schema)\n+\n return schema\n", "issue": "Unbound enum values are None when used in arguments\nWhen used as a mutation input, enum parameter should be `str`, but actually is `None`.\r\n\r\n```python\r\ndef test_executing_mutation_takes_enum():\r\n type_defs = \"\"\"\r\n type Query {\r\n _: String\r\n }\r\n\r\n type Mutation {\r\n eat(meal: Meal!): Int!\r\n }\r\n\r\n enum Meal {\r\n SPAM\r\n }\r\n \"\"\"\r\n\r\n mutation = MutationType()\r\n\r\n @mutation.field(\"eat\")\r\n def resolve_eat(*_, meal): # pylint: disable=unused-variable\r\n assert meal == \"SPAM\"\r\n return 42\r\n\r\n schema = make_executable_schema(type_defs, mutation)\r\n\r\n result = graphql_sync(schema, 'mutation { eat(meal: SPAM) }')\r\n assert result.errors is None\r\n assert result.data == {\"eat\": 42}\r\n```\n", "before_files": [{"content": "import enum\n\nfrom typing import Any, Dict, Optional, Union, cast\n\nfrom graphql.type import GraphQLEnumType, GraphQLNamedType, GraphQLSchema\n\nfrom .types import SchemaBindable\n\n\nclass EnumType(SchemaBindable):\n def __init__(\n self, name: str, values=Union[Dict[str, Any], enum.Enum, enum.IntEnum]\n ) -> None:\n self.name = name\n try:\n self.values = values.__members__ # pylint: disable=no-member\n except AttributeError:\n self.values = values\n\n def bind_to_schema(self, schema: GraphQLSchema) -> None:\n graphql_type = schema.type_map.get(self.name)\n self.validate_graphql_type(graphql_type)\n graphql_type = cast(GraphQLEnumType, graphql_type)\n\n for key, value in self.values.items():\n if key not in graphql_type.values:\n raise ValueError(\n \"Value %s is not defined on enum %s\" % (key, self.name)\n )\n graphql_type.values[key].value = value\n\n def validate_graphql_type(self, graphql_type: Optional[GraphQLNamedType]) -> None:\n if not graphql_type:\n raise ValueError(\"Enum %s is not defined in the schema\" % self.name)\n if not isinstance(graphql_type, GraphQLEnumType):\n raise ValueError(\n \"%s is defined in the schema, but it is instance of %s (expected %s)\"\n % (self.name, type(graphql_type).__name__, GraphQLEnumType.__name__)\n )\n", "path": "ariadne/enums.py"}, {"content": "from typing import List, Union\n\nfrom graphql import DocumentNode, GraphQLSchema, build_ast_schema, extend_schema, parse\n\nfrom .types import SchemaBindable\n\n\ndef make_executable_schema(\n type_defs: Union[str, List[str]],\n bindables: Union[SchemaBindable, 
List[SchemaBindable], None] = None,\n) -> GraphQLSchema:\n if isinstance(type_defs, list):\n type_defs = join_type_defs(type_defs)\n\n ast_document = parse(type_defs)\n schema = build_and_extend_schema(ast_document)\n\n if isinstance(bindables, list):\n for obj in bindables:\n obj.bind_to_schema(schema)\n elif bindables:\n bindables.bind_to_schema(schema)\n\n return schema\n\n\ndef join_type_defs(type_defs: List[str]) -> str:\n return \"\\n\\n\".join(t.strip() for t in type_defs)\n\n\ndef build_and_extend_schema(ast: DocumentNode) -> GraphQLSchema:\n schema = build_ast_schema(ast)\n extension_ast = extract_extensions(ast)\n\n if extension_ast.definitions:\n schema = extend_schema(schema, extension_ast)\n\n return schema\n\n\nEXTENSION_KINDS = [\n \"scalar_type_extension\",\n \"object_type_extension\",\n \"interface_type_extension\",\n \"union_type_extension\",\n \"enum_type_extension\",\n \"input_object_type_extension\",\n]\n\n\ndef extract_extensions(ast: DocumentNode) -> DocumentNode:\n extensions = [node for node in ast.definitions if node.kind in EXTENSION_KINDS]\n return DocumentNode(definitions=extensions)\n", "path": "ariadne/executable_schema.py"}], "after_files": [{"content": "import enum\n\nfrom typing import Any, Dict, Optional, Union, cast\n\nfrom graphql.type import GraphQLEnumType, GraphQLNamedType, GraphQLSchema\n\nfrom .types import SchemaBindable\n\n\nclass EnumType(SchemaBindable):\n def __init__(\n self, name: str, values=Union[Dict[str, Any], enum.Enum, enum.IntEnum]\n ) -> None:\n self.name = name\n try:\n self.values = values.__members__ # pylint: disable=no-member\n except AttributeError:\n self.values = values\n\n def bind_to_schema(self, schema: GraphQLSchema) -> None:\n graphql_type = schema.type_map.get(self.name)\n self.validate_graphql_type(graphql_type)\n graphql_type = cast(GraphQLEnumType, graphql_type)\n\n for key, value in self.values.items():\n if key not in graphql_type.values:\n raise ValueError(\n \"Value %s is not defined on enum %s\" % (key, self.name)\n )\n graphql_type.values[key].value = value\n\n def validate_graphql_type(self, graphql_type: Optional[GraphQLNamedType]) -> None:\n if not graphql_type:\n raise ValueError(\"Enum %s is not defined in the schema\" % self.name)\n if not isinstance(graphql_type, GraphQLEnumType):\n raise ValueError(\n \"%s is defined in the schema, but it is instance of %s (expected %s)\"\n % (self.name, type(graphql_type).__name__, GraphQLEnumType.__name__)\n )\n\n\ndef set_default_enum_values_on_schema(schema: GraphQLSchema):\n for type_object in schema.type_map.values():\n if isinstance(type_object, GraphQLEnumType):\n set_default_enum_values(type_object)\n\n\ndef set_default_enum_values(graphql_type: GraphQLEnumType):\n for key in graphql_type.values:\n if graphql_type.values[key].value is None:\n graphql_type.values[key].value = key\n", "path": "ariadne/enums.py"}, {"content": "from typing import List, Union\n\nfrom graphql import DocumentNode, GraphQLSchema, build_ast_schema, extend_schema, parse\n\nfrom .enums import set_default_enum_values_on_schema\nfrom .types import SchemaBindable\n\n\ndef make_executable_schema(\n type_defs: Union[str, List[str]],\n bindables: Union[SchemaBindable, List[SchemaBindable], None] = None,\n) -> GraphQLSchema:\n if isinstance(type_defs, list):\n type_defs = join_type_defs(type_defs)\n\n ast_document = parse(type_defs)\n schema = build_and_extend_schema(ast_document)\n\n if isinstance(bindables, list):\n for obj in bindables:\n obj.bind_to_schema(schema)\n elif bindables:\n 
bindables.bind_to_schema(schema)\n\n set_default_enum_values_on_schema(schema)\n\n return schema\n\n\ndef join_type_defs(type_defs: List[str]) -> str:\n return \"\\n\\n\".join(t.strip() for t in type_defs)\n\n\ndef build_and_extend_schema(ast: DocumentNode) -> GraphQLSchema:\n schema = build_ast_schema(ast)\n extension_ast = extract_extensions(ast)\n\n if extension_ast.definitions:\n schema = extend_schema(schema, extension_ast)\n\n return schema\n\n\nEXTENSION_KINDS = [\n \"scalar_type_extension\",\n \"object_type_extension\",\n \"interface_type_extension\",\n \"union_type_extension\",\n \"enum_type_extension\",\n \"input_object_type_extension\",\n]\n\n\ndef extract_extensions(ast: DocumentNode) -> DocumentNode:\n extensions = [node for node in ast.definitions if node.kind in EXTENSION_KINDS]\n return DocumentNode(definitions=extensions)\n", "path": "ariadne/executable_schema.py"}]}
| 1,296 | 329 |
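The defaulting logic in the ariadne patch above is small enough to mock without any GraphQL library. A minimal, self-contained sketch of that idea (the class names are illustrative stand-ins for graphql-core's enum types, not part of the record):

```python
class EnumValue:
    def __init__(self, value=None):
        self.value = value


class EnumType:
    def __init__(self, values):
        self.values = values  # GraphQL value name -> EnumValue


def set_default_enum_values(enum_type):
    # Mirrors the patch: values left unbound by the SDL fall back to their own name.
    for key, enum_value in enum_type.values.items():
        if enum_value.value is None:
            enum_value.value = key


meal = EnumType({"SPAM": EnumValue(), "EGGS": EnumValue(value=42)})
set_default_enum_values(meal)
assert meal.values["SPAM"].value == "SPAM"  # was None, now defaults to its name
assert meal.values["EGGS"].value == 42      # explicitly bound values stay untouched
```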
gh_patches_debug_12133
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-504
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
get_func_args() from scrapy.utils.python crashes on partial functions
I use Python 2.7 and used a partial function (`functools.partial`) as part of an input_processor for the ItemLoader. For instance,
``` python
price = Field(input_processor=Compose(some_partial_func, some_other_func))
```
During execution, `get_func_args()` from `scrapy.utils.python` goes into unbounded recursion until
``` python
/usr/local/lib/python2.7/dist-packages/scrapy/utils/python.pyc in get_func_args(func, stripself)
161 return []
162 else:
--> 163 return get_func_args(func.__call__, True)
164 else:
165 raise TypeError('%s is not callable' % type(func))
/usr/local/lib/python2.7/dist-packages/scrapy/utils/python.pyc in get_func_args(func, stripself)
161 return []
162 else:
--> 163 return get_func_args(func.__call__, True)
164 else:
165 raise TypeError('%s is not callable' % type(func))
/usr/local/lib/python2.7/dist-packages/scrapy/utils/python.pyc in get_func_args(func, stripself)
151 if inspect.isfunction(func):
152 func_args, _, _, _ = inspect.getargspec(func)
--> 153 elif inspect.isclass(func):
154 return get_func_args(func.__init__, True)
155 elif inspect.ismethod(func):
/usr/lib/python2.7/inspect.pyc in isclass(object)
63 __doc__ documentation string
64 __module__ name of module in which this class was defined"""
---> 65 return isinstance(object, (type, types.ClassType))
66
67 def ismethod(object):
RuntimeError: maximum recursion depth exceeded while calling a Python object
```
happens. Looks like `get_func_args` needs to handle partial functions as a separate case.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/utils/python.py`
Content:
```
1 """
2 This module contains essential stuff that should've come with Python itself ;)
3
4 It also contains functions (or functionality) which is in Python versions
5 higher than 2.5 which used to be the lowest version supported by Scrapy.
6
7 """
8 import os
9 import re
10 import inspect
11 import weakref
12 import errno
13 from functools import wraps
14 from sgmllib import SGMLParser
15
16
17 class FixedSGMLParser(SGMLParser):
18 """The SGMLParser that comes with Python has a bug in the convert_charref()
19 method. This is the same class with the bug fixed"""
20
21 def convert_charref(self, name):
22 """This method fixes a bug in Python's SGMLParser."""
23 try:
24 n = int(name)
25 except ValueError:
26 return
27 if not 0 <= n <= 127 : # ASCII ends at 127, not 255
28 return
29 return self.convert_codepoint(n)
30
31
32 def flatten(x):
33 """flatten(sequence) -> list
34
35 Returns a single, flat list which contains all elements retrieved
36 from the sequence and all recursively contained sub-sequences
37 (iterables).
38
39 Examples:
40 >>> [1, 2, [3,4], (5,6)]
41 [1, 2, [3, 4], (5, 6)]
42 >>> flatten([[[1,2,3], (42,None)], [4,5], [6], 7, (8,9,10)])
43 [1, 2, 3, 42, None, 4, 5, 6, 7, 8, 9, 10]"""
44
45 result = []
46 for el in x:
47 if hasattr(el, "__iter__"):
48 result.extend(flatten(el))
49 else:
50 result.append(el)
51 return result
52
53
54 def unique(list_, key=lambda x: x):
55 """efficient function to uniquify a list preserving item order"""
56 seen = set()
57 result = []
58 for item in list_:
59 seenkey = key(item)
60 if seenkey in seen:
61 continue
62 seen.add(seenkey)
63 result.append(item)
64 return result
65
66
67 def str_to_unicode(text, encoding=None, errors='strict'):
68 """Return the unicode representation of text in the given encoding. Unlike
69 .encode(encoding) this function can be applied directly to a unicode
70 object without the risk of double-decoding problems (which can happen if
71 you don't use the default 'ascii' encoding)
72 """
73
74 if encoding is None:
75 encoding = 'utf-8'
76 if isinstance(text, str):
77 return text.decode(encoding, errors)
78 elif isinstance(text, unicode):
79 return text
80 else:
81 raise TypeError('str_to_unicode must receive a str or unicode object, got %s' % type(text).__name__)
82
83 def unicode_to_str(text, encoding=None, errors='strict'):
84 """Return the str representation of text in the given encoding. Unlike
85 .encode(encoding) this function can be applied directly to a str
86 object without the risk of double-decoding problems (which can happen if
87 you don't use the default 'ascii' encoding)
88 """
89
90 if encoding is None:
91 encoding = 'utf-8'
92 if isinstance(text, unicode):
93 return text.encode(encoding, errors)
94 elif isinstance(text, str):
95 return text
96 else:
97 raise TypeError('unicode_to_str must receive a unicode or str object, got %s' % type(text).__name__)
98
99 def re_rsearch(pattern, text, chunk_size=1024):
100 """
101 This function does a reverse search in a text using a regular expression
102 given in the attribute 'pattern'.
103 Since the re module does not provide this functionality, we have to find for
104 the expression into chunks of text extracted from the end (for the sake of efficiency).
105 At first, a chunk of 'chunk_size' kilobytes is extracted from the end, and searched for
106 the pattern. If the pattern is not found, another chunk is extracted, and another
107 search is performed.
108 This process continues until a match is found, or until the whole file is read.
109 In case the pattern wasn't found, None is returned, otherwise it returns a tuple containing
110 the start position of the match, and the ending (regarding the entire text).
111 """
112 def _chunk_iter():
113 offset = len(text)
114 while True:
115 offset -= (chunk_size * 1024)
116 if offset <= 0:
117 break
118 yield (text[offset:], offset)
119 yield (text, 0)
120
121 pattern = re.compile(pattern) if isinstance(pattern, basestring) else pattern
122 for chunk, offset in _chunk_iter():
123 matches = [match for match in pattern.finditer(chunk)]
124 if matches:
125 return (offset + matches[-1].span()[0], offset + matches[-1].span()[1])
126 return None
127
128 def memoizemethod_noargs(method):
129 """Decorator to cache the result of a method (without arguments) using a
130 weak reference to its object
131 """
132 cache = weakref.WeakKeyDictionary()
133 @wraps(method)
134 def new_method(self, *args, **kwargs):
135 if self not in cache:
136 cache[self] = method(self, *args, **kwargs)
137 return cache[self]
138 return new_method
139
140 _BINARYCHARS = set(map(chr, range(32))) - set(["\0", "\t", "\n", "\r"])
141
142 def isbinarytext(text):
143 """Return True if the given text is considered binary, or false
144 otherwise, by looking for binary bytes at their chars
145 """
146 assert isinstance(text, str), "text must be str, got '%s'" % type(text).__name__
147 return any(c in _BINARYCHARS for c in text)
148
149 def get_func_args(func, stripself=False):
150 """Return the argument name list of a callable"""
151 if inspect.isfunction(func):
152 func_args, _, _, _ = inspect.getargspec(func)
153 elif inspect.isclass(func):
154 return get_func_args(func.__init__, True)
155 elif inspect.ismethod(func):
156 return get_func_args(func.__func__, True)
157 elif inspect.ismethoddescriptor(func):
158 return []
159 elif hasattr(func, '__call__'):
160 if inspect.isroutine(func):
161 return []
162 else:
163 return get_func_args(func.__call__, True)
164 else:
165 raise TypeError('%s is not callable' % type(func))
166 if stripself:
167 func_args.pop(0)
168 return func_args
169
170 def get_spec(func):
171 """Returns (args, kwargs) tuple for a function
172 >>> import re
173 >>> get_spec(re.match)
174 (['pattern', 'string'], {'flags': 0})
175
176 >>> class Test(object):
177 ... def __call__(self, val):
178 ... pass
179 ... def method(self, val, flags=0):
180 ... pass
181
182 >>> get_spec(Test)
183 (['self', 'val'], {})
184
185 >>> get_spec(Test.method)
186 (['self', 'val'], {'flags': 0})
187
188 >>> get_spec(Test().method)
189 (['self', 'val'], {'flags': 0})
190 """
191
192 if inspect.isfunction(func) or inspect.ismethod(func):
193 spec = inspect.getargspec(func)
194 elif hasattr(func, '__call__'):
195 spec = inspect.getargspec(func.__call__)
196 else:
197 raise TypeError('%s is not callable' % type(func))
198
199 defaults = spec.defaults or []
200
201 firstdefault = len(spec.args) - len(defaults)
202 args = spec.args[:firstdefault]
203 kwargs = dict(zip(spec.args[firstdefault:], defaults))
204 return args, kwargs
205
206 def equal_attributes(obj1, obj2, attributes):
207 """Compare two objects attributes"""
208 # not attributes given return False by default
209 if not attributes:
210 return False
211
212 for attr in attributes:
213 # support callables like itemgetter
214 if callable(attr):
215 if not attr(obj1) == attr(obj2):
216 return False
217 else:
218 # check that objects has attribute
219 if not hasattr(obj1, attr):
220 return False
221 if not hasattr(obj2, attr):
222 return False
223 # compare object attributes
224 if not getattr(obj1, attr) == getattr(obj2, attr):
225 return False
226 # all attributes equal
227 return True
228
229
230 class WeakKeyCache(object):
231
232 def __init__(self, default_factory):
233 self.default_factory = default_factory
234 self._weakdict = weakref.WeakKeyDictionary()
235
236 def __getitem__(self, key):
237 if key not in self._weakdict:
238 self._weakdict[key] = self.default_factory(key)
239 return self._weakdict[key]
240
241
242 def stringify_dict(dct_or_tuples, encoding='utf-8', keys_only=True):
243 """Return a (new) dict with the unicode keys (and values if, keys_only is
244 False) of the given dict converted to strings. `dct_or_tuples` can be a
245 dict or a list of tuples, like any dict constructor supports.
246 """
247 d = {}
248 for k, v in dict(dct_or_tuples).iteritems():
249 k = k.encode(encoding) if isinstance(k, unicode) else k
250 if not keys_only:
251 v = v.encode(encoding) if isinstance(v, unicode) else v
252 d[k] = v
253 return d
254
255 def is_writable(path):
256 """Return True if the given path can be written (if it exists) or created
257 (if it doesn't exist)
258 """
259 if os.path.exists(path):
260 return os.access(path, os.W_OK)
261 else:
262 return os.access(os.path.dirname(path), os.W_OK)
263
264 def setattr_default(obj, name, value):
265 """Set attribute value, but only if it's not already set. Similar to
266 setdefault() for dicts.
267 """
268 if not hasattr(obj, name):
269 setattr(obj, name, value)
270
271
272 def retry_on_eintr(function, *args, **kw):
273 """Run a function and retry it while getting EINTR errors"""
274 while True:
275 try:
276 return function(*args, **kw)
277 except IOError as e:
278 if e.errno != errno.EINTR:
279 raise
280
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/utils/python.py b/scrapy/utils/python.py
--- a/scrapy/utils/python.py
+++ b/scrapy/utils/python.py
@@ -10,7 +10,7 @@
import inspect
import weakref
import errno
-from functools import wraps
+from functools import partial, wraps
from sgmllib import SGMLParser
@@ -156,6 +156,8 @@
return get_func_args(func.__func__, True)
elif inspect.ismethoddescriptor(func):
return []
+ elif isinstance(func, partial):
+ return get_func_args(func.func)
elif hasattr(func, '__call__'):
if inspect.isroutine(func):
return []
|
{"golden_diff": "diff --git a/scrapy/utils/python.py b/scrapy/utils/python.py\n--- a/scrapy/utils/python.py\n+++ b/scrapy/utils/python.py\n@@ -10,7 +10,7 @@\n import inspect\n import weakref\n import errno\n-from functools import wraps\n+from functools import partial, wraps\n from sgmllib import SGMLParser\n \n \n@@ -156,6 +156,8 @@\n return get_func_args(func.__func__, True)\n elif inspect.ismethoddescriptor(func):\n return []\n+ elif isinstance(func, partial):\n+ return get_func_args(func.func)\n elif hasattr(func, '__call__'):\n if inspect.isroutine(func):\n return []\n", "issue": "get_func_args() from scrapy.utils.python crashes on partial functions\nI use Python 2.7 and used a partial function (`functools.partial`) as part of an input_processor for the ItemLoader. For instance,\n\n``` python\nprice = Field(input_processor=Compose(some_partial_func, some_other_func))\n```\n\nDuring execution, `get_func_args() from scrapy.util.python` goes into an infinite loop until\n\n``` python\n/usr/local/lib/python2.7/dist-packages/scrapy/utils/python.pyc in get_func_args(func, stripself)\n 161 return []\n 162 else:\n--> 163 return get_func_args(func.__call__, True)\n 164 else:\n 165 raise TypeError('%s is not callable' % type(func))\n\n/usr/local/lib/python2.7/dist-packages/scrapy/utils/python.pyc in get_func_args(func, stripself)\n 161 return []\n 162 else:\n--> 163 return get_func_args(func.__call__, True)\n 164 else:\n 165 raise TypeError('%s is not callable' % type(func))\n\n/usr/local/lib/python2.7/dist-packages/scrapy/utils/python.pyc in get_func_args(func, stripself)\n 151 if inspect.isfunction(func):\n 152 func_args, _, _, _ = inspect.getargspec(func)\n--> 153 elif inspect.isclass(func):\n 154 return get_func_args(func.__init__, True)\n 155 elif inspect.ismethod(func):\n\n/usr/lib/python2.7/inspect.pyc in isclass(object)\n 63 __doc__ documentation string\n 64 __module__ name of module in which this class was defined\"\"\"\n---> 65 return isinstance(object, (type, types.ClassType))\n 66 \n 67 def ismethod(object):\n\nRuntimeError: maximum recursion depth exceeded while calling a Python object\n```\n\nhappens. Looks like `get_func_args` needs to handle partial functions as a separate case.\n\n", "before_files": [{"content": "\"\"\"\nThis module contains essential stuff that should've come with Python itself ;)\n\nIt also contains functions (or functionality) which is in Python versions\nhigher than 2.5 which used to be the lowest version supported by Scrapy.\n\n\"\"\"\nimport os\nimport re\nimport inspect\nimport weakref\nimport errno\nfrom functools import wraps\nfrom sgmllib import SGMLParser\n\n\nclass FixedSGMLParser(SGMLParser):\n \"\"\"The SGMLParser that comes with Python has a bug in the convert_charref()\n method. 
This is the same class with the bug fixed\"\"\"\n\n def convert_charref(self, name):\n \"\"\"This method fixes a bug in Python's SGMLParser.\"\"\"\n try:\n n = int(name)\n except ValueError:\n return\n if not 0 <= n <= 127 : # ASCII ends at 127, not 255\n return\n return self.convert_codepoint(n)\n\n\ndef flatten(x):\n \"\"\"flatten(sequence) -> list\n\n Returns a single, flat list which contains all elements retrieved\n from the sequence and all recursively contained sub-sequences\n (iterables).\n\n Examples:\n >>> [1, 2, [3,4], (5,6)]\n [1, 2, [3, 4], (5, 6)]\n >>> flatten([[[1,2,3], (42,None)], [4,5], [6], 7, (8,9,10)])\n [1, 2, 3, 42, None, 4, 5, 6, 7, 8, 9, 10]\"\"\"\n\n result = []\n for el in x:\n if hasattr(el, \"__iter__\"):\n result.extend(flatten(el))\n else:\n result.append(el)\n return result\n\n\ndef unique(list_, key=lambda x: x):\n \"\"\"efficient function to uniquify a list preserving item order\"\"\"\n seen = set()\n result = []\n for item in list_:\n seenkey = key(item)\n if seenkey in seen:\n continue\n seen.add(seenkey)\n result.append(item)\n return result\n\n\ndef str_to_unicode(text, encoding=None, errors='strict'):\n \"\"\"Return the unicode representation of text in the given encoding. Unlike\n .encode(encoding) this function can be applied directly to a unicode\n object without the risk of double-decoding problems (which can happen if\n you don't use the default 'ascii' encoding)\n \"\"\"\n\n if encoding is None:\n encoding = 'utf-8'\n if isinstance(text, str):\n return text.decode(encoding, errors)\n elif isinstance(text, unicode):\n return text\n else:\n raise TypeError('str_to_unicode must receive a str or unicode object, got %s' % type(text).__name__)\n\ndef unicode_to_str(text, encoding=None, errors='strict'):\n \"\"\"Return the str representation of text in the given encoding. Unlike\n .encode(encoding) this function can be applied directly to a str\n object without the risk of double-decoding problems (which can happen if\n you don't use the default 'ascii' encoding)\n \"\"\"\n\n if encoding is None:\n encoding = 'utf-8'\n if isinstance(text, unicode):\n return text.encode(encoding, errors)\n elif isinstance(text, str):\n return text\n else:\n raise TypeError('unicode_to_str must receive a unicode or str object, got %s' % type(text).__name__)\n\ndef re_rsearch(pattern, text, chunk_size=1024):\n \"\"\"\n This function does a reverse search in a text using a regular expression\n given in the attribute 'pattern'.\n Since the re module does not provide this functionality, we have to find for\n the expression into chunks of text extracted from the end (for the sake of efficiency).\n At first, a chunk of 'chunk_size' kilobytes is extracted from the end, and searched for\n the pattern. 
If the pattern is not found, another chunk is extracted, and another\n search is performed.\n This process continues until a match is found, or until the whole file is read.\n In case the pattern wasn't found, None is returned, otherwise it returns a tuple containing\n the start position of the match, and the ending (regarding the entire text).\n \"\"\"\n def _chunk_iter():\n offset = len(text)\n while True:\n offset -= (chunk_size * 1024)\n if offset <= 0:\n break\n yield (text[offset:], offset)\n yield (text, 0)\n\n pattern = re.compile(pattern) if isinstance(pattern, basestring) else pattern\n for chunk, offset in _chunk_iter():\n matches = [match for match in pattern.finditer(chunk)]\n if matches:\n return (offset + matches[-1].span()[0], offset + matches[-1].span()[1])\n return None\n\ndef memoizemethod_noargs(method):\n \"\"\"Decorator to cache the result of a method (without arguments) using a\n weak reference to its object\n \"\"\"\n cache = weakref.WeakKeyDictionary()\n @wraps(method)\n def new_method(self, *args, **kwargs):\n if self not in cache:\n cache[self] = method(self, *args, **kwargs)\n return cache[self]\n return new_method\n\n_BINARYCHARS = set(map(chr, range(32))) - set([\"\\0\", \"\\t\", \"\\n\", \"\\r\"])\n\ndef isbinarytext(text):\n \"\"\"Return True if the given text is considered binary, or false\n otherwise, by looking for binary bytes at their chars\n \"\"\"\n assert isinstance(text, str), \"text must be str, got '%s'\" % type(text).__name__\n return any(c in _BINARYCHARS for c in text)\n\ndef get_func_args(func, stripself=False):\n \"\"\"Return the argument name list of a callable\"\"\"\n if inspect.isfunction(func):\n func_args, _, _, _ = inspect.getargspec(func)\n elif inspect.isclass(func):\n return get_func_args(func.__init__, True)\n elif inspect.ismethod(func):\n return get_func_args(func.__func__, True)\n elif inspect.ismethoddescriptor(func):\n return []\n elif hasattr(func, '__call__'):\n if inspect.isroutine(func):\n return []\n else:\n return get_func_args(func.__call__, True)\n else:\n raise TypeError('%s is not callable' % type(func))\n if stripself:\n func_args.pop(0)\n return func_args\n\ndef get_spec(func):\n \"\"\"Returns (args, kwargs) tuple for a function\n >>> import re\n >>> get_spec(re.match)\n (['pattern', 'string'], {'flags': 0})\n\n >>> class Test(object):\n ... def __call__(self, val):\n ... pass\n ... def method(self, val, flags=0):\n ... 
pass\n\n >>> get_spec(Test)\n (['self', 'val'], {})\n\n >>> get_spec(Test.method)\n (['self', 'val'], {'flags': 0})\n\n >>> get_spec(Test().method)\n (['self', 'val'], {'flags': 0})\n \"\"\"\n\n if inspect.isfunction(func) or inspect.ismethod(func):\n spec = inspect.getargspec(func)\n elif hasattr(func, '__call__'):\n spec = inspect.getargspec(func.__call__)\n else:\n raise TypeError('%s is not callable' % type(func))\n\n defaults = spec.defaults or []\n\n firstdefault = len(spec.args) - len(defaults)\n args = spec.args[:firstdefault]\n kwargs = dict(zip(spec.args[firstdefault:], defaults))\n return args, kwargs\n\ndef equal_attributes(obj1, obj2, attributes):\n \"\"\"Compare two objects attributes\"\"\"\n # not attributes given return False by default\n if not attributes:\n return False\n\n for attr in attributes:\n # support callables like itemgetter\n if callable(attr):\n if not attr(obj1) == attr(obj2):\n return False\n else:\n # check that objects has attribute\n if not hasattr(obj1, attr):\n return False\n if not hasattr(obj2, attr):\n return False\n # compare object attributes\n if not getattr(obj1, attr) == getattr(obj2, attr):\n return False\n # all attributes equal\n return True\n\n\nclass WeakKeyCache(object):\n\n def __init__(self, default_factory):\n self.default_factory = default_factory\n self._weakdict = weakref.WeakKeyDictionary()\n\n def __getitem__(self, key):\n if key not in self._weakdict:\n self._weakdict[key] = self.default_factory(key)\n return self._weakdict[key]\n\n\ndef stringify_dict(dct_or_tuples, encoding='utf-8', keys_only=True):\n \"\"\"Return a (new) dict with the unicode keys (and values if, keys_only is\n False) of the given dict converted to strings. `dct_or_tuples` can be a\n dict or a list of tuples, like any dict constructor supports.\n \"\"\"\n d = {}\n for k, v in dict(dct_or_tuples).iteritems():\n k = k.encode(encoding) if isinstance(k, unicode) else k\n if not keys_only:\n v = v.encode(encoding) if isinstance(v, unicode) else v\n d[k] = v\n return d\n\ndef is_writable(path):\n \"\"\"Return True if the given path can be written (if it exists) or created\n (if it doesn't exist)\n \"\"\"\n if os.path.exists(path):\n return os.access(path, os.W_OK)\n else:\n return os.access(os.path.dirname(path), os.W_OK)\n\ndef setattr_default(obj, name, value):\n \"\"\"Set attribute value, but only if it's not already set. Similar to\n setdefault() for dicts.\n \"\"\"\n if not hasattr(obj, name):\n setattr(obj, name, value)\n\n\ndef retry_on_eintr(function, *args, **kw):\n \"\"\"Run a function and retry it while getting EINTR errors\"\"\"\n while True:\n try:\n return function(*args, **kw)\n except IOError as e:\n if e.errno != errno.EINTR:\n raise\n", "path": "scrapy/utils/python.py"}], "after_files": [{"content": "\"\"\"\nThis module contains essential stuff that should've come with Python itself ;)\n\nIt also contains functions (or functionality) which is in Python versions\nhigher than 2.5 which used to be the lowest version supported by Scrapy.\n\n\"\"\"\nimport os\nimport re\nimport inspect\nimport weakref\nimport errno\nfrom functools import partial, wraps\nfrom sgmllib import SGMLParser\n\n\nclass FixedSGMLParser(SGMLParser):\n \"\"\"The SGMLParser that comes with Python has a bug in the convert_charref()\n method. 
This is the same class with the bug fixed\"\"\"\n\n def convert_charref(self, name):\n \"\"\"This method fixes a bug in Python's SGMLParser.\"\"\"\n try:\n n = int(name)\n except ValueError:\n return\n if not 0 <= n <= 127 : # ASCII ends at 127, not 255\n return\n return self.convert_codepoint(n)\n\n\ndef flatten(x):\n \"\"\"flatten(sequence) -> list\n\n Returns a single, flat list which contains all elements retrieved\n from the sequence and all recursively contained sub-sequences\n (iterables).\n\n Examples:\n >>> [1, 2, [3,4], (5,6)]\n [1, 2, [3, 4], (5, 6)]\n >>> flatten([[[1,2,3], (42,None)], [4,5], [6], 7, (8,9,10)])\n [1, 2, 3, 42, None, 4, 5, 6, 7, 8, 9, 10]\"\"\"\n\n result = []\n for el in x:\n if hasattr(el, \"__iter__\"):\n result.extend(flatten(el))\n else:\n result.append(el)\n return result\n\n\ndef unique(list_, key=lambda x: x):\n \"\"\"efficient function to uniquify a list preserving item order\"\"\"\n seen = set()\n result = []\n for item in list_:\n seenkey = key(item)\n if seenkey in seen:\n continue\n seen.add(seenkey)\n result.append(item)\n return result\n\n\ndef str_to_unicode(text, encoding=None, errors='strict'):\n \"\"\"Return the unicode representation of text in the given encoding. Unlike\n .encode(encoding) this function can be applied directly to a unicode\n object without the risk of double-decoding problems (which can happen if\n you don't use the default 'ascii' encoding)\n \"\"\"\n\n if encoding is None:\n encoding = 'utf-8'\n if isinstance(text, str):\n return text.decode(encoding, errors)\n elif isinstance(text, unicode):\n return text\n else:\n raise TypeError('str_to_unicode must receive a str or unicode object, got %s' % type(text).__name__)\n\ndef unicode_to_str(text, encoding=None, errors='strict'):\n \"\"\"Return the str representation of text in the given encoding. Unlike\n .encode(encoding) this function can be applied directly to a str\n object without the risk of double-decoding problems (which can happen if\n you don't use the default 'ascii' encoding)\n \"\"\"\n\n if encoding is None:\n encoding = 'utf-8'\n if isinstance(text, unicode):\n return text.encode(encoding, errors)\n elif isinstance(text, str):\n return text\n else:\n raise TypeError('unicode_to_str must receive a unicode or str object, got %s' % type(text).__name__)\n\ndef re_rsearch(pattern, text, chunk_size=1024):\n \"\"\"\n This function does a reverse search in a text using a regular expression\n given in the attribute 'pattern'.\n Since the re module does not provide this functionality, we have to find for\n the expression into chunks of text extracted from the end (for the sake of efficiency).\n At first, a chunk of 'chunk_size' kilobytes is extracted from the end, and searched for\n the pattern. 
If the pattern is not found, another chunk is extracted, and another\n search is performed.\n This process continues until a match is found, or until the whole file is read.\n In case the pattern wasn't found, None is returned, otherwise it returns a tuple containing\n the start position of the match, and the ending (regarding the entire text).\n \"\"\"\n def _chunk_iter():\n offset = len(text)\n while True:\n offset -= (chunk_size * 1024)\n if offset <= 0:\n break\n yield (text[offset:], offset)\n yield (text, 0)\n\n pattern = re.compile(pattern) if isinstance(pattern, basestring) else pattern\n for chunk, offset in _chunk_iter():\n matches = [match for match in pattern.finditer(chunk)]\n if matches:\n return (offset + matches[-1].span()[0], offset + matches[-1].span()[1])\n return None\n\ndef memoizemethod_noargs(method):\n \"\"\"Decorator to cache the result of a method (without arguments) using a\n weak reference to its object\n \"\"\"\n cache = weakref.WeakKeyDictionary()\n @wraps(method)\n def new_method(self, *args, **kwargs):\n if self not in cache:\n cache[self] = method(self, *args, **kwargs)\n return cache[self]\n return new_method\n\n_BINARYCHARS = set(map(chr, range(32))) - set([\"\\0\", \"\\t\", \"\\n\", \"\\r\"])\n\ndef isbinarytext(text):\n \"\"\"Return True if the given text is considered binary, or false\n otherwise, by looking for binary bytes at their chars\n \"\"\"\n assert isinstance(text, str), \"text must be str, got '%s'\" % type(text).__name__\n return any(c in _BINARYCHARS for c in text)\n\ndef get_func_args(func, stripself=False):\n \"\"\"Return the argument name list of a callable\"\"\"\n if inspect.isfunction(func):\n func_args, _, _, _ = inspect.getargspec(func)\n elif inspect.isclass(func):\n return get_func_args(func.__init__, True)\n elif inspect.ismethod(func):\n return get_func_args(func.__func__, True)\n elif inspect.ismethoddescriptor(func):\n return []\n elif isinstance(func, partial):\n return get_func_args(func.func)\n elif hasattr(func, '__call__'):\n if inspect.isroutine(func):\n return []\n else:\n return get_func_args(func.__call__, True)\n else:\n raise TypeError('%s is not callable' % type(func))\n if stripself:\n func_args.pop(0)\n return func_args\n\ndef get_spec(func):\n \"\"\"Returns (args, kwargs) tuple for a function\n >>> import re\n >>> get_spec(re.match)\n (['pattern', 'string'], {'flags': 0})\n\n >>> class Test(object):\n ... def __call__(self, val):\n ... pass\n ... def method(self, val, flags=0):\n ... 
pass\n\n >>> get_spec(Test)\n (['self', 'val'], {})\n\n >>> get_spec(Test.method)\n (['self', 'val'], {'flags': 0})\n\n >>> get_spec(Test().method)\n (['self', 'val'], {'flags': 0})\n \"\"\"\n\n if inspect.isfunction(func) or inspect.ismethod(func):\n spec = inspect.getargspec(func)\n elif hasattr(func, '__call__'):\n spec = inspect.getargspec(func.__call__)\n else:\n raise TypeError('%s is not callable' % type(func))\n\n defaults = spec.defaults or []\n\n firstdefault = len(spec.args) - len(defaults)\n args = spec.args[:firstdefault]\n kwargs = dict(zip(spec.args[firstdefault:], defaults))\n return args, kwargs\n\ndef equal_attributes(obj1, obj2, attributes):\n \"\"\"Compare two objects attributes\"\"\"\n # not attributes given return False by default\n if not attributes:\n return False\n\n for attr in attributes:\n # support callables like itemgetter\n if callable(attr):\n if not attr(obj1) == attr(obj2):\n return False\n else:\n # check that objects has attribute\n if not hasattr(obj1, attr):\n return False\n if not hasattr(obj2, attr):\n return False\n # compare object attributes\n if not getattr(obj1, attr) == getattr(obj2, attr):\n return False\n # all attributes equal\n return True\n\n\nclass WeakKeyCache(object):\n\n def __init__(self, default_factory):\n self.default_factory = default_factory\n self._weakdict = weakref.WeakKeyDictionary()\n\n def __getitem__(self, key):\n if key not in self._weakdict:\n self._weakdict[key] = self.default_factory(key)\n return self._weakdict[key]\n\n\ndef stringify_dict(dct_or_tuples, encoding='utf-8', keys_only=True):\n \"\"\"Return a (new) dict with the unicode keys (and values if, keys_only is\n False) of the given dict converted to strings. `dct_or_tuples` can be a\n dict or a list of tuples, like any dict constructor supports.\n \"\"\"\n d = {}\n for k, v in dict(dct_or_tuples).iteritems():\n k = k.encode(encoding) if isinstance(k, unicode) else k\n if not keys_only:\n v = v.encode(encoding) if isinstance(v, unicode) else v\n d[k] = v\n return d\n\ndef is_writable(path):\n \"\"\"Return True if the given path can be written (if it exists) or created\n (if it doesn't exist)\n \"\"\"\n if os.path.exists(path):\n return os.access(path, os.W_OK)\n else:\n return os.access(os.path.dirname(path), os.W_OK)\n\ndef setattr_default(obj, name, value):\n \"\"\"Set attribute value, but only if it's not already set. Similar to\n setdefault() for dicts.\n \"\"\"\n if not hasattr(obj, name):\n setattr(obj, name, value)\n\n\ndef retry_on_eintr(function, *args, **kw):\n \"\"\"Run a function and retry it while getting EINTR errors\"\"\"\n while True:\n try:\n return function(*args, **kw)\n except IOError as e:\n if e.errno != errno.EINTR:\n raise\n", "path": "scrapy/utils/python.py"}]}
| 3,699 | 149 |
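The essence of the Scrapy fix above is simply to unwrap `functools.partial` objects before any other introspection, which breaks the `func.__call__` recursion. A minimal Python 3 sketch of that behaviour (it uses `inspect.signature` instead of the Python 2 `getargspec` seen in the record, and the price-cleaning function is an illustrative stand-in):

```python
import inspect
from functools import partial


def get_func_args(func):
    # Mirrors the patch: recurse into the wrapped callable of a partial first.
    if isinstance(func, partial):
        return get_func_args(func.func)
    return list(inspect.signature(func).parameters)


def clean_price(value, decimals, strip_currency=False):  # hypothetical helper
    return value


some_partial_func = partial(clean_price, decimals=2)
print(get_func_args(clean_price))        # ['value', 'decimals', 'strip_currency']
print(get_func_args(some_partial_func))  # same list, no infinite recursion
```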
gh_patches_debug_11707
|
rasdani/github-patches
|
git_diff
|
elastic__ecs-1164
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect output of the "tracing" fields in the Beats yml file
Just like the `base` fields, the `tracing` fields are not nested under the name of the field set. So it's not `base.@timestamp`, it's `@timestamp`, and it's not `tracing.trace.id`, it's `trace.id`.
In the Beats field yaml file the ECS project generates, the tracing fields are incorrectly nested under a `tracing` section, which means Beats interprets the field names incorrectly (`tracing.trace.id`).
This is a bug; these fields shouldn't be nested this way.
In order to fix this issue, we should remove this nesting in the Beats yml output, just like `@timestamp` and the other base fields are not nested under a field group.
I think this bug fix will be backported at minimum to 1.7. Thoughts welcome on this: is there a need to backport to 1.6 as well?
The Beats PR https://github.com/elastic/beats/pull/22571 to import ECS 1.7 should be adjusted with these changes, once the bug fix is ready. cc @andrewstucki
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/generators/beats.py`
Content:
```
1 from os.path import join
2 from collections import OrderedDict
3 from generators import ecs_helpers
4
5
6 def generate(ecs_nested, ecs_version, out_dir):
7 # Load temporary whitelist for default_fields workaround.
8 df_whitelist = ecs_helpers.yaml_load('scripts/generators/beats_default_fields_whitelist.yml')
9
10 # base first
11 beats_fields = fieldset_field_array(ecs_nested['base']['fields'], df_whitelist, ecs_nested['base']['prefix'])
12
13 allowed_fieldset_keys = ['name', 'title', 'group', 'description', 'footnote', 'type']
14 # other fieldsets
15 for fieldset_name in sorted(ecs_nested):
16 if 'base' == fieldset_name:
17 continue
18 fieldset = ecs_nested[fieldset_name]
19
20 beats_field = ecs_helpers.dict_copy_keys_ordered(fieldset, allowed_fieldset_keys)
21 beats_field['fields'] = fieldset_field_array(fieldset['fields'], df_whitelist, fieldset['prefix'])
22 beats_fields.append(beats_field)
23
24 beats_file = OrderedDict()
25 beats_file['key'] = 'ecs'
26 beats_file['title'] = 'ECS'
27 beats_file['description'] = 'ECS Fields.'
28 beats_file['fields'] = beats_fields
29
30 write_beats_yaml(beats_file, ecs_version, out_dir)
31
32
33 def fieldset_field_array(source_fields, df_whitelist, fieldset_prefix):
34 allowed_keys = ['name', 'level', 'required', 'type', 'object_type',
35 'ignore_above', 'multi_fields', 'format', 'input_format',
36 'output_format', 'output_precision', 'description',
37 'example', 'enabled', 'index', 'path', 'scaling_factor']
38 multi_fields_allowed_keys = ['name', 'type', 'norms', 'default_field', 'normalizer', 'ignore_above']
39
40 fields = []
41 for nested_field_name in source_fields:
42 ecs_field = source_fields[nested_field_name]
43 beats_field = ecs_helpers.dict_copy_keys_ordered(ecs_field, allowed_keys)
44 if '' == fieldset_prefix:
45 contextual_name = nested_field_name
46 else:
47 contextual_name = '.'.join(nested_field_name.split('.')[1:])
48
49 cleaned_multi_fields = []
50 if 'multi_fields' in ecs_field:
51 for mf in ecs_field['multi_fields']:
52 # Set default_field if necessary. Avoid adding the key if the parent
53 # field already is marked with default_field: false.
54 if not mf['flat_name'] in df_whitelist and ecs_field['flat_name'] in df_whitelist:
55 mf['default_field'] = False
56 cleaned_multi_fields.append(
57 ecs_helpers.dict_copy_keys_ordered(mf, multi_fields_allowed_keys))
58 beats_field['multi_fields'] = cleaned_multi_fields
59
60 beats_field['name'] = contextual_name
61
62 if not ecs_field['flat_name'] in df_whitelist:
63 beats_field['default_field'] = False
64
65 fields.append(beats_field)
66 return sorted(fields, key=lambda x: x['name'])
67
68 # Helpers
69
70
71 def write_beats_yaml(beats_file, ecs_version, out_dir):
72 ecs_helpers.make_dirs(join(out_dir, 'beats'))
73 warning = file_header().format(version=ecs_version)
74 ecs_helpers.yaml_dump(join(out_dir, 'beats/fields.ecs.yml'), [beats_file], preamble=warning)
75
76
77 # Templates
78
79
80 def file_header():
81 return '''
82 # WARNING! Do not edit this file directly, it was generated by the ECS project,
83 # based on ECS version {version}.
84 # Please visit https://github.com/elastic/ecs to suggest changes to ECS fields.
85
86 '''.lstrip()
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/generators/beats.py b/scripts/generators/beats.py
--- a/scripts/generators/beats.py
+++ b/scripts/generators/beats.py
@@ -17,6 +17,11 @@
continue
fieldset = ecs_nested[fieldset_name]
+ # Handle when `root:true`
+ if fieldset.get('root', False):
+ beats_fields.extend(fieldset_field_array(fieldset['fields'], df_whitelist, fieldset['prefix']))
+ continue
+
beats_field = ecs_helpers.dict_copy_keys_ordered(fieldset, allowed_fieldset_keys)
beats_field['fields'] = fieldset_field_array(fieldset['fields'], df_whitelist, fieldset['prefix'])
beats_fields.append(beats_field)
|
{"golden_diff": "diff --git a/scripts/generators/beats.py b/scripts/generators/beats.py\n--- a/scripts/generators/beats.py\n+++ b/scripts/generators/beats.py\n@@ -17,6 +17,11 @@\n continue\n fieldset = ecs_nested[fieldset_name]\n \n+ # Handle when `root:true`\n+ if fieldset.get('root', False):\n+ beats_fields.extend(fieldset_field_array(fieldset['fields'], df_whitelist, fieldset['prefix']))\n+ continue\n+\n beats_field = ecs_helpers.dict_copy_keys_ordered(fieldset, allowed_fieldset_keys)\n beats_field['fields'] = fieldset_field_array(fieldset['fields'], df_whitelist, fieldset['prefix'])\n beats_fields.append(beats_field)\n", "issue": "Incorrect output of the \"tracing\" fields in the Beats yml file\nJust like the `base` fields, the `tracing` fields are not nested under the name of the field set. So it's not `base.@timestamp`, it's `@timestamp`, and it's not `tracing.trace.id`, it's `trace.id`.\r\n\r\nIn the Beats field yaml file the ECS project generates, the tracing fields are incorrectly nested under a `tracing` section, which means Beats interprets the field names incorrectly (`tracing.trace.id`).\r\n\r\nThis is a bug, these fields shouldn't be nested this way.\r\n\r\nIn order to fix this issue, we should remove this nesting in the Beats yml output. Just like `@timestamp` and other base fields are not nested under a field group.\r\n\r\nI think this bug fix will be at minimum backported to 1.7. Thoughts welcome on this, is there a need to backport to 1.6 as well?\r\n\r\nThe Beats PR https://github.com/elastic/beats/pull/22571 to import ECS 1.7 should be adjusted with these changes, once the bug fix is ready. cc @andrewstucki \r\n\n", "before_files": [{"content": "from os.path import join\nfrom collections import OrderedDict\nfrom generators import ecs_helpers\n\n\ndef generate(ecs_nested, ecs_version, out_dir):\n # Load temporary whitelist for default_fields workaround.\n df_whitelist = ecs_helpers.yaml_load('scripts/generators/beats_default_fields_whitelist.yml')\n\n # base first\n beats_fields = fieldset_field_array(ecs_nested['base']['fields'], df_whitelist, ecs_nested['base']['prefix'])\n\n allowed_fieldset_keys = ['name', 'title', 'group', 'description', 'footnote', 'type']\n # other fieldsets\n for fieldset_name in sorted(ecs_nested):\n if 'base' == fieldset_name:\n continue\n fieldset = ecs_nested[fieldset_name]\n\n beats_field = ecs_helpers.dict_copy_keys_ordered(fieldset, allowed_fieldset_keys)\n beats_field['fields'] = fieldset_field_array(fieldset['fields'], df_whitelist, fieldset['prefix'])\n beats_fields.append(beats_field)\n\n beats_file = OrderedDict()\n beats_file['key'] = 'ecs'\n beats_file['title'] = 'ECS'\n beats_file['description'] = 'ECS Fields.'\n beats_file['fields'] = beats_fields\n\n write_beats_yaml(beats_file, ecs_version, out_dir)\n\n\ndef fieldset_field_array(source_fields, df_whitelist, fieldset_prefix):\n allowed_keys = ['name', 'level', 'required', 'type', 'object_type',\n 'ignore_above', 'multi_fields', 'format', 'input_format',\n 'output_format', 'output_precision', 'description',\n 'example', 'enabled', 'index', 'path', 'scaling_factor']\n multi_fields_allowed_keys = ['name', 'type', 'norms', 'default_field', 'normalizer', 'ignore_above']\n\n fields = []\n for nested_field_name in source_fields:\n ecs_field = source_fields[nested_field_name]\n beats_field = ecs_helpers.dict_copy_keys_ordered(ecs_field, allowed_keys)\n if '' == fieldset_prefix:\n contextual_name = nested_field_name\n else:\n contextual_name = 
'.'.join(nested_field_name.split('.')[1:])\n\n cleaned_multi_fields = []\n if 'multi_fields' in ecs_field:\n for mf in ecs_field['multi_fields']:\n # Set default_field if necessary. Avoid adding the key if the parent\n # field already is marked with default_field: false.\n if not mf['flat_name'] in df_whitelist and ecs_field['flat_name'] in df_whitelist:\n mf['default_field'] = False\n cleaned_multi_fields.append(\n ecs_helpers.dict_copy_keys_ordered(mf, multi_fields_allowed_keys))\n beats_field['multi_fields'] = cleaned_multi_fields\n\n beats_field['name'] = contextual_name\n\n if not ecs_field['flat_name'] in df_whitelist:\n beats_field['default_field'] = False\n\n fields.append(beats_field)\n return sorted(fields, key=lambda x: x['name'])\n\n# Helpers\n\n\ndef write_beats_yaml(beats_file, ecs_version, out_dir):\n ecs_helpers.make_dirs(join(out_dir, 'beats'))\n warning = file_header().format(version=ecs_version)\n ecs_helpers.yaml_dump(join(out_dir, 'beats/fields.ecs.yml'), [beats_file], preamble=warning)\n\n\n# Templates\n\n\ndef file_header():\n return '''\n# WARNING! Do not edit this file directly, it was generated by the ECS project,\n# based on ECS version {version}.\n# Please visit https://github.com/elastic/ecs to suggest changes to ECS fields.\n\n'''.lstrip()\n", "path": "scripts/generators/beats.py"}], "after_files": [{"content": "from os.path import join\nfrom collections import OrderedDict\nfrom generators import ecs_helpers\n\n\ndef generate(ecs_nested, ecs_version, out_dir):\n # Load temporary whitelist for default_fields workaround.\n df_whitelist = ecs_helpers.yaml_load('scripts/generators/beats_default_fields_whitelist.yml')\n\n # base first\n beats_fields = fieldset_field_array(ecs_nested['base']['fields'], df_whitelist, ecs_nested['base']['prefix'])\n\n allowed_fieldset_keys = ['name', 'title', 'group', 'description', 'footnote', 'type']\n # other fieldsets\n for fieldset_name in sorted(ecs_nested):\n if 'base' == fieldset_name:\n continue\n fieldset = ecs_nested[fieldset_name]\n\n # Handle when `root:true`\n if fieldset.get('root', False):\n beats_fields.extend(fieldset_field_array(fieldset['fields'], df_whitelist, fieldset['prefix']))\n continue\n\n beats_field = ecs_helpers.dict_copy_keys_ordered(fieldset, allowed_fieldset_keys)\n beats_field['fields'] = fieldset_field_array(fieldset['fields'], df_whitelist, fieldset['prefix'])\n beats_fields.append(beats_field)\n\n beats_file = OrderedDict()\n beats_file['key'] = 'ecs'\n beats_file['title'] = 'ECS'\n beats_file['description'] = 'ECS Fields.'\n beats_file['fields'] = beats_fields\n\n write_beats_yaml(beats_file, ecs_version, out_dir)\n\n\ndef fieldset_field_array(source_fields, df_whitelist, fieldset_prefix):\n allowed_keys = ['name', 'level', 'required', 'type', 'object_type',\n 'ignore_above', 'multi_fields', 'format', 'input_format',\n 'output_format', 'output_precision', 'description',\n 'example', 'enabled', 'index', 'path', 'scaling_factor']\n multi_fields_allowed_keys = ['name', 'type', 'norms', 'default_field', 'normalizer', 'ignore_above']\n\n fields = []\n for nested_field_name in source_fields:\n ecs_field = source_fields[nested_field_name]\n beats_field = ecs_helpers.dict_copy_keys_ordered(ecs_field, allowed_keys)\n if '' == fieldset_prefix:\n contextual_name = nested_field_name\n else:\n contextual_name = '.'.join(nested_field_name.split('.')[1:])\n\n cleaned_multi_fields = []\n if 'multi_fields' in ecs_field:\n for mf in ecs_field['multi_fields']:\n # Set default_field if necessary. 
Avoid adding the key if the parent\n # field already is marked with default_field: false.\n if not mf['flat_name'] in df_whitelist and ecs_field['flat_name'] in df_whitelist:\n mf['default_field'] = False\n cleaned_multi_fields.append(\n ecs_helpers.dict_copy_keys_ordered(mf, multi_fields_allowed_keys))\n beats_field['multi_fields'] = cleaned_multi_fields\n\n beats_field['name'] = contextual_name\n\n if not ecs_field['flat_name'] in df_whitelist:\n beats_field['default_field'] = False\n\n fields.append(beats_field)\n return sorted(fields, key=lambda x: x['name'])\n\n# Helpers\n\n\ndef write_beats_yaml(beats_file, ecs_version, out_dir):\n ecs_helpers.make_dirs(join(out_dir, 'beats'))\n warning = file_header().format(version=ecs_version)\n ecs_helpers.yaml_dump(join(out_dir, 'beats/fields.ecs.yml'), [beats_file], preamble=warning)\n\n\n# Templates\n\n\ndef file_header():\n return '''\n# WARNING! Do not edit this file directly, it was generated by the ECS project,\n# based on ECS version {version}.\n# Please visit https://github.com/elastic/ecs to suggest changes to ECS fields.\n\n'''.lstrip()\n", "path": "scripts/generators/beats.py"}]}
| 1,461 | 166 |
gh_patches_debug_34022
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-594
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
engine status util references removed engine.slots attribute
```
Traceback (most recent call last):
File "/usr/lib/pymodules/python2.7/scrapy/xlib/pydispatch/robustapply.py", line 54, in robustApply
return receiver(*arguments, **named)
File "/usr/lib/pymodules/python2.7/scrapy/contrib/memusage.py", line 63, in engine_started
tsk.start(60.0, now=True)
File "/usr/lib/python2.7/dist-packages/twisted/internet/task.py", line 163, in start
self()
File "/usr/lib/python2.7/dist-packages/twisted/internet/task.py", line 208, in __call__
d = defer.maybeDeferred(self.f, *self.a, **self.kw)
--- <exception caught here> ---
File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 134, in maybeDeferred
result = f(*args, **kw)
File "/usr/lib/pymodules/python2.7/scrapy/contrib/memusage.py", line 103, in _check_warning
self._send_report(self.notify_mails, subj)
File "/usr/lib/pymodules/python2.7/scrapy/contrib/memusage.py", line 116, in _send_report
s += pformat(get_engine_status(self.crawler.engine))
File "/usr/lib/pymodules/python2.7/scrapy/utils/engine.py", line 33, in get_engine_status
for spider in engine.slots.keys():
exceptions.AttributeError: 'ExecutionEngine' object has no attribute 'slots'
```
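A minimal stand-alone sketch of the mismatch (illustrative only, not taken from the Scrapy source; the attribute names follow the report and the diff further down): newer engines keep a single `spider`/`slot` pair, while the old status helper still iterates a per-spider `slots` mapping that no longer exists.

```python
# Hypothetical, simplified stand-ins for the real classes.
class ExecutionEngine:
    def __init__(self, spider, slot):
        self.spider = spider   # exactly one spider per engine
        self.slot = slot       # exactly one slot per engine; no `slots` dict anymore


def get_engine_status(engine):
    # The old helper still assumes the removed {spider: slot} mapping:
    return [spider for spider in engine.slots.keys()]


try:
    get_engine_status(ExecutionEngine(spider=object(), slot=object()))
except AttributeError as exc:
    print(exc)  # 'ExecutionEngine' object has no attribute 'slots'
```

The diff further down addresses this by reworking `get_engine_status` around the single `engine.slot` instead of the removed mapping.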
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/utils/engine.py`
Content:
```
1 """Some debugging functions for working with the Scrapy engine"""
2
3 from __future__ import print_function
4 from time import time # used in global tests code
5
6 def get_engine_status(engine):
7 """Return a report of the current engine status"""
8 global_tests = [
9 "time()-engine.start_time",
10 "engine.has_capacity()",
11 "len(engine.downloader.active)",
12 "engine.scraper.is_idle()",
13 ]
14 spider_tests = [
15 "engine.spider_is_idle(spider)",
16 "engine.slot.closing",
17 "len(engine.slot.inprogress)",
18 "len(engine.slot.scheduler.dqs or [])",
19 "len(engine.slot.scheduler.mqs)",
20 "len(engine.scraper.slot.queue)",
21 "len(engine.scraper.slot.active)",
22 "engine.scraper.slot.active_size",
23 "engine.scraper.slot.itemproc_size",
24 "engine.scraper.slot.needs_backout()",
25 ]
26
27 status = {'global': [], 'spiders': {}}
28 for test in global_tests:
29 try:
30 status['global'] += [(test, eval(test))]
31 except Exception as e:
32 status['global'] += [(test, "%s (exception)" % type(e).__name__)]
33 for spider in engine.slots.keys():
34 x = []
35 for test in spider_tests:
36 try:
37 x += [(test, eval(test))]
38 except Exception as e:
39 x += [(test, "%s (exception)" % type(e).__name__)]
40 status['spiders'][spider] = x
41 return status
42
43 def format_engine_status(engine=None):
44 status = get_engine_status(engine)
45 s = "Execution engine status\n\n"
46 for test, result in status['global']:
47 s += "%-47s : %s\n" % (test, result)
48 s += "\n"
49 for spider, tests in status['spiders'].items():
50 s += "Spider: %s\n" % spider
51 for test, result in tests:
52 s += " %-50s : %s\n" % (test, result)
53 return s
54
55 def print_engine_status(engine):
56 print(format_engine_status(engine))
57
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/utils/engine.py b/scrapy/utils/engine.py
--- a/scrapy/utils/engine.py
+++ b/scrapy/utils/engine.py
@@ -5,14 +5,13 @@
def get_engine_status(engine):
"""Return a report of the current engine status"""
- global_tests = [
+ tests = [
"time()-engine.start_time",
"engine.has_capacity()",
"len(engine.downloader.active)",
"engine.scraper.is_idle()",
- ]
- spider_tests = [
- "engine.spider_is_idle(spider)",
+ "engine.spider.name",
+ "engine.spider_is_idle(engine.spider)",
"engine.slot.closing",
"len(engine.slot.inprogress)",
"len(engine.slot.scheduler.dqs or [])",
@@ -24,34 +23,23 @@
"engine.scraper.slot.needs_backout()",
]
- status = {'global': [], 'spiders': {}}
- for test in global_tests:
+ checks = []
+ for test in tests:
try:
- status['global'] += [(test, eval(test))]
+ checks += [(test, eval(test))]
except Exception as e:
- status['global'] += [(test, "%s (exception)" % type(e).__name__)]
- for spider in engine.slots.keys():
- x = []
- for test in spider_tests:
- try:
- x += [(test, eval(test))]
- except Exception as e:
- x += [(test, "%s (exception)" % type(e).__name__)]
- status['spiders'][spider] = x
- return status
+ checks += [(test, "%s (exception)" % type(e).__name__)]
+
+ return checks
def format_engine_status(engine=None):
- status = get_engine_status(engine)
+ checks = get_engine_status(engine)
s = "Execution engine status\n\n"
- for test, result in status['global']:
+ for test, result in checks:
s += "%-47s : %s\n" % (test, result)
s += "\n"
- for spider, tests in status['spiders'].items():
- s += "Spider: %s\n" % spider
- for test, result in tests:
- s += " %-50s : %s\n" % (test, result)
+
return s
def print_engine_status(engine):
print(format_engine_status(engine))
-
|
{"golden_diff": "diff --git a/scrapy/utils/engine.py b/scrapy/utils/engine.py\n--- a/scrapy/utils/engine.py\n+++ b/scrapy/utils/engine.py\n@@ -5,14 +5,13 @@\n \n def get_engine_status(engine):\n \"\"\"Return a report of the current engine status\"\"\"\n- global_tests = [\n+ tests = [\n \"time()-engine.start_time\",\n \"engine.has_capacity()\",\n \"len(engine.downloader.active)\",\n \"engine.scraper.is_idle()\",\n- ]\n- spider_tests = [\n- \"engine.spider_is_idle(spider)\",\n+ \"engine.spider.name\",\n+ \"engine.spider_is_idle(engine.spider)\",\n \"engine.slot.closing\",\n \"len(engine.slot.inprogress)\",\n \"len(engine.slot.scheduler.dqs or [])\",\n@@ -24,34 +23,23 @@\n \"engine.scraper.slot.needs_backout()\",\n ]\n \n- status = {'global': [], 'spiders': {}}\n- for test in global_tests:\n+ checks = []\n+ for test in tests:\n try:\n- status['global'] += [(test, eval(test))]\n+ checks += [(test, eval(test))]\n except Exception as e:\n- status['global'] += [(test, \"%s (exception)\" % type(e).__name__)]\n- for spider in engine.slots.keys():\n- x = []\n- for test in spider_tests:\n- try:\n- x += [(test, eval(test))]\n- except Exception as e:\n- x += [(test, \"%s (exception)\" % type(e).__name__)]\n- status['spiders'][spider] = x\n- return status\n+ checks += [(test, \"%s (exception)\" % type(e).__name__)]\n+\n+ return checks\n \n def format_engine_status(engine=None):\n- status = get_engine_status(engine)\n+ checks = get_engine_status(engine)\n s = \"Execution engine status\\n\\n\"\n- for test, result in status['global']:\n+ for test, result in checks:\n s += \"%-47s : %s\\n\" % (test, result)\n s += \"\\n\"\n- for spider, tests in status['spiders'].items():\n- s += \"Spider: %s\\n\" % spider\n- for test, result in tests:\n- s += \" %-50s : %s\\n\" % (test, result)\n+\n return s\n \n def print_engine_status(engine):\n print(format_engine_status(engine))\n-\n", "issue": "engine status util references removed engine.slots attribute\n```\nTraceback (most recent call last): Less\n File \"/usr/lib/pymodules/python2.7/scrapy/xlib/pydispatch/robustapply.py\", line 54, in robustApply\n return receiver(*arguments, **named)\n File \"/usr/lib/pymodules/python2.7/scrapy/contrib/memusage.py\", line 63, in engine_started\n tsk.start(60.0, now=True)\n File \"/usr/lib/python2.7/dist-packages/twisted/internet/task.py\", line 163, in start\n self()\n File \"/usr/lib/python2.7/dist-packages/twisted/internet/task.py\", line 208, in __call__\n d = defer.maybeDeferred(self.f, *self.a, **self.kw)\n --- <exception caught here> ---\n File \"/usr/lib/python2.7/dist-packages/twisted/internet/defer.py\", line 134, in maybeDeferred\n result = f(*args, **kw)\n File \"/usr/lib/pymodules/python2.7/scrapy/contrib/memusage.py\", line 103, in _check_warning\n self._send_report(self.notify_mails, subj)\n File \"/usr/lib/pymodules/python2.7/scrapy/contrib/memusage.py\", line 116, in _send_report\n s += pformat(get_engine_status(self.crawler.engine))\n File \"/usr/lib/pymodules/python2.7/scrapy/utils/engine.py\", line 33, in get_engine_status\n for spider in engine.slots.keys():\n exceptions.AttributeError: 'ExecutionEngine' object has no attribute 'slots'\n```\n\n", "before_files": [{"content": "\"\"\"Some debugging functions for working with the Scrapy engine\"\"\"\n\nfrom __future__ import print_function\nfrom time import time # used in global tests code\n\ndef get_engine_status(engine):\n \"\"\"Return a report of the current engine status\"\"\"\n global_tests = [\n \"time()-engine.start_time\",\n \"engine.has_capacity()\",\n 
\"len(engine.downloader.active)\",\n \"engine.scraper.is_idle()\",\n ]\n spider_tests = [\n \"engine.spider_is_idle(spider)\",\n \"engine.slot.closing\",\n \"len(engine.slot.inprogress)\",\n \"len(engine.slot.scheduler.dqs or [])\",\n \"len(engine.slot.scheduler.mqs)\",\n \"len(engine.scraper.slot.queue)\",\n \"len(engine.scraper.slot.active)\",\n \"engine.scraper.slot.active_size\",\n \"engine.scraper.slot.itemproc_size\",\n \"engine.scraper.slot.needs_backout()\",\n ]\n\n status = {'global': [], 'spiders': {}}\n for test in global_tests:\n try:\n status['global'] += [(test, eval(test))]\n except Exception as e:\n status['global'] += [(test, \"%s (exception)\" % type(e).__name__)]\n for spider in engine.slots.keys():\n x = []\n for test in spider_tests:\n try:\n x += [(test, eval(test))]\n except Exception as e:\n x += [(test, \"%s (exception)\" % type(e).__name__)]\n status['spiders'][spider] = x\n return status\n\ndef format_engine_status(engine=None):\n status = get_engine_status(engine)\n s = \"Execution engine status\\n\\n\"\n for test, result in status['global']:\n s += \"%-47s : %s\\n\" % (test, result)\n s += \"\\n\"\n for spider, tests in status['spiders'].items():\n s += \"Spider: %s\\n\" % spider\n for test, result in tests:\n s += \" %-50s : %s\\n\" % (test, result)\n return s\n\ndef print_engine_status(engine):\n print(format_engine_status(engine))\n\n", "path": "scrapy/utils/engine.py"}], "after_files": [{"content": "\"\"\"Some debugging functions for working with the Scrapy engine\"\"\"\n\nfrom __future__ import print_function\nfrom time import time # used in global tests code\n\ndef get_engine_status(engine):\n \"\"\"Return a report of the current engine status\"\"\"\n tests = [\n \"time()-engine.start_time\",\n \"engine.has_capacity()\",\n \"len(engine.downloader.active)\",\n \"engine.scraper.is_idle()\",\n \"engine.spider.name\",\n \"engine.spider_is_idle(engine.spider)\",\n \"engine.slot.closing\",\n \"len(engine.slot.inprogress)\",\n \"len(engine.slot.scheduler.dqs or [])\",\n \"len(engine.slot.scheduler.mqs)\",\n \"len(engine.scraper.slot.queue)\",\n \"len(engine.scraper.slot.active)\",\n \"engine.scraper.slot.active_size\",\n \"engine.scraper.slot.itemproc_size\",\n \"engine.scraper.slot.needs_backout()\",\n ]\n\n checks = []\n for test in tests:\n try:\n checks += [(test, eval(test))]\n except Exception as e:\n checks += [(test, \"%s (exception)\" % type(e).__name__)]\n\n return checks\n\ndef format_engine_status(engine=None):\n checks = get_engine_status(engine)\n s = \"Execution engine status\\n\\n\"\n for test, result in checks:\n s += \"%-47s : %s\\n\" % (test, result)\n s += \"\\n\"\n\n return s\n\ndef print_engine_status(engine):\n print(format_engine_status(engine))\n", "path": "scrapy/utils/engine.py"}]}
| 1,208 | 551 |
gh_patches_debug_44681
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-3316
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add viewer query
With PR #3202 we've added the ability to get the data of the currently logged in user with the `user` query. I would recommend refactoring it a little bit and introducing a separate query for that. The problem with the `user` query is that it expects the `ID!` argument, but it also accepts passing `""` as the ID value to resolve the logged in user. This breaks the single-responsibility rule (a good practice for GraphQL queries), is a bit unintuitive, and makes the code harder to maintain.
I would propose changing the schema to have the following queries:
`viewer: User` - returns currently authenticated user
`user(id: ID!): User` - resolves a user by ID, where ID is required
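A rough graphene sketch of that split (illustrative only; the field names follow this proposal, and the merged change may use different names or decorators):

```python
import graphene
from graphene import relay


class User(graphene.ObjectType):
    # Minimal stand-in for Saleor's real User type.
    class Meta:
        interfaces = (relay.Node,)

    email = graphene.String()

    @classmethod
    def get_node(cls, info, id):
        return cls(email=f'user-{id}@example.com')  # stand-in lookup


class Query(graphene.ObjectType):
    # No argument: always means "the request's authenticated user".
    viewer = graphene.Field(User, description='Currently authenticated user.')
    # ID is now required and never falls back to the logged-in user.
    user = graphene.Field(
        User, id=graphene.Argument(graphene.ID, required=True),
        description='Look up a user by ID.')

    def resolve_viewer(self, info):
        return info.context.user  # would be guarded by @login_required

    def resolve_user(self, info, id):
        # would be guarded by a manage_users permission check
        return relay.Node.get_node_from_global_id(info, id, User)


schema = graphene.Schema(query=Query)
```

Note that the accepted change in the diff below exposes the logged-in user as `me` rather than `viewer`, but the split into two single-purpose queries is the same.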
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/graphql/account/resolvers.py`
Content:
```
1 import graphene
2 import graphene_django_optimizer as gql_optimizer
3 from django.db.models import Q
4 from i18naddress import get_validation_rules
5
6 from ...account import models
7 from ...core.utils import get_client_ip, get_country_by_ip
8 from ..utils import filter_by_query_param
9 from .types import AddressValidationData, ChoiceValue, User
10
11 USER_SEARCH_FIELDS = (
12 'email', 'default_shipping_address__first_name',
13 'default_shipping_address__last_name', 'default_shipping_address__city',
14 'default_shipping_address__country')
15
16
17 def resolve_user(info, id):
18 logged_user = info.context.user
19 if not id:
20 return logged_user
21 user = graphene.Node.get_node_from_global_id(info, id, User)
22 if logged_user.has_perm('account.manage_users') or user == logged_user:
23 return user
24 return None
25
26
27 def resolve_customers(info, query):
28 qs = models.User.objects.filter(
29 Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False)))
30 qs = filter_by_query_param(
31 queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)
32 qs = qs.order_by('email')
33 qs = qs.distinct()
34 return gql_optimizer.query(qs, info)
35
36
37 def resolve_staff_users(info, query):
38 qs = models.User.objects.filter(is_staff=True)
39 qs = filter_by_query_param(
40 queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)
41 qs = qs.order_by('email')
42 qs = qs.distinct()
43 return gql_optimizer.query(qs, info)
44
45
46 def resolve_address_validator(info, input):
47 country_code = input['country_code']
48 if not country_code:
49 client_ip = get_client_ip(info.context)
50 country = get_country_by_ip(client_ip)
51 if country:
52 country_code = country.code
53 else:
54 return None
55 params = {
56 'country_code': country_code,
57 'country_area': input['country_area'],
58 'city_area': input['city_area']}
59 rules = get_validation_rules(params)
60 return AddressValidationData(
61 country_code=rules.country_code,
62 country_name=rules.country_name,
63 address_format=rules.address_format,
64 address_latin_format=rules.address_latin_format,
65 allowed_fields=rules.allowed_fields,
66 required_fields=rules.required_fields,
67 upper_fields=rules.upper_fields,
68 country_area_type=rules.country_area_type,
69 country_area_choices=[
70 ChoiceValue(area[0], area[1])
71 for area in rules.country_area_choices],
72 city_type=rules.city_type,
73 city_area_choices=[
74 ChoiceValue(area[0], area[1]) for area in rules.city_area_choices],
75 postal_code_type=rules.postal_code_type,
76 postal_code_matchers=[
77 compiled.pattern for compiled in rules.postal_code_matchers],
78 postal_code_examples=rules.postal_code_examples,
79 postal_code_prefix=rules.postal_code_prefix)
80
```
Path: `saleor/graphql/account/types.py`
Content:
```
1 import graphene
2 import graphene_django_optimizer as gql_optimizer
3 from django.contrib.auth import get_user_model
4 from graphene import relay
5
6 from ...account import models
7 from ...core.permissions import get_permissions
8 from ..core.types.common import (
9 CountableDjangoObjectType, CountryDisplay, PermissionDisplay)
10 from ..utils import format_permissions_for_display
11
12
13 class AddressInput(graphene.InputObjectType):
14 first_name = graphene.String(description='Given name.')
15 last_name = graphene.String(description='Family name.')
16 company_name = graphene.String(description='Company or organization.')
17 street_address_1 = graphene.String(description='Address.')
18 street_address_2 = graphene.String(description='Address.')
19 city = graphene.String(description='City.')
20 city_area = graphene.String(description='District.')
21 postal_code = graphene.String(description='Postal code.')
22 country = graphene.String(required=True, description='Country.')
23 country_area = graphene.String(description='State or province.')
24 phone = graphene.String(description='Phone number.')
25
26
27 class Address(CountableDjangoObjectType):
28 country = graphene.Field(
29 CountryDisplay, required=True, description='Default shop\'s country')
30
31 class Meta:
32 exclude_fields = ['user_set', 'user_addresses']
33 description = 'Represents user address data.'
34 interfaces = [relay.Node]
35 model = models.Address
36
37 def resolve_country(self, info):
38 return CountryDisplay(
39 code=self.country.code, country=self.country.name)
40
41
42 class User(CountableDjangoObjectType):
43 permissions = graphene.List(
44 PermissionDisplay, description='List of user\'s permissions.')
45 addresses = gql_optimizer.field(
46 graphene.List(
47 Address, description='List of all user\'s addresses.'),
48 model_field='addresses')
49
50 class Meta:
51 exclude_fields = ['password', 'is_superuser', 'OrderEvent_set']
52 description = 'Represents user data.'
53 interfaces = [relay.Node]
54 model = get_user_model()
55
56 def resolve_permissions(self, info, **kwargs):
57 if self.is_superuser:
58 permissions = get_permissions()
59 else:
60 permissions = self.user_permissions.prefetch_related(
61 'content_type').order_by('codename')
62 return format_permissions_for_display(permissions)
63
64 def resolve_addresses(self, info, **kwargs):
65 return self.addresses.all()
66
67
68 class AddressValidationInput(graphene.InputObjectType):
69 country_code = graphene.String()
70 country_area = graphene.String()
71 city_area = graphene.String()
72
73
74 class ChoiceValue(graphene.ObjectType):
75 raw = graphene.String()
76 verbose = graphene.String()
77
78
79 class AddressValidationData(graphene.ObjectType):
80 country_code = graphene.String()
81 country_name = graphene.String()
82 address_format = graphene.String()
83 address_latin_format = graphene.String()
84 allowed_fields = graphene.List(graphene.String)
85 required_fields = graphene.List(graphene.String)
86 upper_fields = graphene.List(graphene.String)
87 country_area_type = graphene.String()
88 country_area_choices = graphene.List(ChoiceValue)
89 city_type = graphene.String()
90 city_area_choices = graphene.List(ChoiceValue)
91 postal_code_type = graphene.String()
92 postal_code_matchers = graphene.List(graphene.String)
93 postal_code_examples = graphene.List(graphene.String)
94 postal_code_prefix = graphene.String()
95
```
Path: `saleor/graphql/account/schema.py`
Content:
```
1 import graphene
2 from graphql_jwt.decorators import login_required, permission_required
3
4 from ..core.fields import PrefetchingConnectionField
5 from ..descriptions import DESCRIPTIONS
6 from .mutations import (
7 AddressCreate, AddressDelete, AddressUpdate, CustomerCreate,
8 CustomerDelete, CustomerPasswordReset, CustomerRegister, CustomerUpdate,
9 LoggedUserUpdate, PasswordReset, SetPassword, StaffCreate, StaffDelete,
10 StaffUpdate)
11 from .resolvers import (
12 resolve_address_validator, resolve_customers, resolve_staff_users,
13 resolve_user)
14 from .types import AddressValidationData, AddressValidationInput, User
15
16
17 class AccountQueries(graphene.ObjectType):
18 address_validator = graphene.Field(
19 AddressValidationData,
20 input=graphene.Argument(AddressValidationInput, required=True))
21 customers = PrefetchingConnectionField(
22 User, description='List of the shop\'s customers.',
23 query=graphene.String(description=DESCRIPTIONS['user']))
24 staff_users = PrefetchingConnectionField(
25 User, description='List of the shop\'s staff users.',
26 query=graphene.String(description=DESCRIPTIONS['user']))
27 user = graphene.Field(
28 User, id=graphene.Argument(graphene.ID, required=True),
29 description='Lookup an user by ID.')
30
31 def resolve_address_validator(self, info, input):
32 return resolve_address_validator(info, input)
33
34 @permission_required('account.manage_users')
35 def resolve_customers(self, info, query=None, **kwargs):
36 return resolve_customers(info, query=query)
37
38 @permission_required('account.manage_staff')
39 def resolve_staff_users(self, info, query=None, **kwargs):
40 return resolve_staff_users(info, query=query)
41
42 @login_required
43 def resolve_user(self, info, id):
44 return resolve_user(info, id)
45
46
47 class AccountMutations(graphene.ObjectType):
48 password_reset = PasswordReset.Field()
49 set_password = SetPassword.Field()
50
51 customer_create = CustomerCreate.Field()
52 customer_delete = CustomerDelete.Field()
53 customer_password_reset = CustomerPasswordReset.Field()
54 customer_register = CustomerRegister.Field()
55 customer_update = CustomerUpdate.Field()
56
57 logged_user_update = LoggedUserUpdate.Field()
58
59 staff_create = StaffCreate.Field()
60 staff_delete = StaffDelete.Field()
61 staff_update = StaffUpdate.Field()
62
63 address_create = AddressCreate.Field()
64 address_delete = AddressDelete.Field()
65 address_update = AddressUpdate.Field()
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/saleor/graphql/account/resolvers.py b/saleor/graphql/account/resolvers.py
--- a/saleor/graphql/account/resolvers.py
+++ b/saleor/graphql/account/resolvers.py
@@ -1,4 +1,3 @@
-import graphene
import graphene_django_optimizer as gql_optimizer
from django.db.models import Q
from i18naddress import get_validation_rules
@@ -6,7 +5,7 @@
from ...account import models
from ...core.utils import get_client_ip, get_country_by_ip
from ..utils import filter_by_query_param
-from .types import AddressValidationData, ChoiceValue, User
+from .types import AddressValidationData, ChoiceValue
USER_SEARCH_FIELDS = (
'email', 'default_shipping_address__first_name',
@@ -14,16 +13,6 @@
'default_shipping_address__country')
-def resolve_user(info, id):
- logged_user = info.context.user
- if not id:
- return logged_user
- user = graphene.Node.get_node_from_global_id(info, id, User)
- if logged_user.has_perm('account.manage_users') or user == logged_user:
- return user
- return None
-
-
def resolve_customers(info, query):
qs = models.User.objects.filter(
Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False)))
diff --git a/saleor/graphql/account/schema.py b/saleor/graphql/account/schema.py
--- a/saleor/graphql/account/schema.py
+++ b/saleor/graphql/account/schema.py
@@ -9,8 +9,7 @@
LoggedUserUpdate, PasswordReset, SetPassword, StaffCreate, StaffDelete,
StaffUpdate)
from .resolvers import (
- resolve_address_validator, resolve_customers, resolve_staff_users,
- resolve_user)
+ resolve_address_validator, resolve_customers, resolve_staff_users)
from .types import AddressValidationData, AddressValidationInput, User
@@ -21,6 +20,8 @@
customers = PrefetchingConnectionField(
User, description='List of the shop\'s customers.',
query=graphene.String(description=DESCRIPTIONS['user']))
+ me = graphene.Field(
+ User, description='Logged in user data.')
staff_users = PrefetchingConnectionField(
User, description='List of the shop\'s staff users.',
query=graphene.String(description=DESCRIPTIONS['user']))
@@ -35,13 +36,17 @@
def resolve_customers(self, info, query=None, **kwargs):
return resolve_customers(info, query=query)
+ @login_required
+ def resolve_me(self, info):
+ return info.context.user
+
@permission_required('account.manage_staff')
def resolve_staff_users(self, info, query=None, **kwargs):
return resolve_staff_users(info, query=query)
- @login_required
+ @permission_required('account.manage_users')
def resolve_user(self, info, id):
- return resolve_user(info, id)
+ return graphene.Node.get_node_from_global_id(info, id, User)
class AccountMutations(graphene.ObjectType):
diff --git a/saleor/graphql/account/types.py b/saleor/graphql/account/types.py
--- a/saleor/graphql/account/types.py
+++ b/saleor/graphql/account/types.py
@@ -2,6 +2,7 @@
import graphene_django_optimizer as gql_optimizer
from django.contrib.auth import get_user_model
from graphene import relay
+from graphql_jwt.decorators import permission_required
from ...account import models
from ...core.permissions import get_permissions
@@ -47,6 +48,7 @@
PrefetchingConnectionField(
Address, description='List of all user\'s addresses.'),
model_field='addresses')
+ note = graphene.String(description='A note about the customer')
class Meta:
exclude_fields = ['password', 'is_superuser', 'OrderEvent_set']
@@ -65,6 +67,10 @@
def resolve_addresses(self, info, **kwargs):
return self.addresses.all()
+ @permission_required('account.manage_users')
+ def resolve_note(self, info):
+ return self.note
+
class AddressValidationInput(graphene.InputObjectType):
country_code = graphene.String()
|
{"golden_diff": "diff --git a/saleor/graphql/account/resolvers.py b/saleor/graphql/account/resolvers.py\n--- a/saleor/graphql/account/resolvers.py\n+++ b/saleor/graphql/account/resolvers.py\n@@ -1,4 +1,3 @@\n-import graphene\n import graphene_django_optimizer as gql_optimizer\n from django.db.models import Q\n from i18naddress import get_validation_rules\n@@ -6,7 +5,7 @@\n from ...account import models\n from ...core.utils import get_client_ip, get_country_by_ip\n from ..utils import filter_by_query_param\n-from .types import AddressValidationData, ChoiceValue, User\n+from .types import AddressValidationData, ChoiceValue\n \n USER_SEARCH_FIELDS = (\n 'email', 'default_shipping_address__first_name',\n@@ -14,16 +13,6 @@\n 'default_shipping_address__country')\n \n \n-def resolve_user(info, id):\n- logged_user = info.context.user\n- if not id:\n- return logged_user\n- user = graphene.Node.get_node_from_global_id(info, id, User)\n- if logged_user.has_perm('account.manage_users') or user == logged_user:\n- return user\n- return None\n-\n-\n def resolve_customers(info, query):\n qs = models.User.objects.filter(\n Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False)))\ndiff --git a/saleor/graphql/account/schema.py b/saleor/graphql/account/schema.py\n--- a/saleor/graphql/account/schema.py\n+++ b/saleor/graphql/account/schema.py\n@@ -9,8 +9,7 @@\n LoggedUserUpdate, PasswordReset, SetPassword, StaffCreate, StaffDelete,\n StaffUpdate)\n from .resolvers import (\n- resolve_address_validator, resolve_customers, resolve_staff_users,\n- resolve_user)\n+ resolve_address_validator, resolve_customers, resolve_staff_users)\n from .types import AddressValidationData, AddressValidationInput, User\n \n \n@@ -21,6 +20,8 @@\n customers = PrefetchingConnectionField(\n User, description='List of the shop\\'s customers.',\n query=graphene.String(description=DESCRIPTIONS['user']))\n+ me = graphene.Field(\n+ User, description='Logged in user data.')\n staff_users = PrefetchingConnectionField(\n User, description='List of the shop\\'s staff users.',\n query=graphene.String(description=DESCRIPTIONS['user']))\n@@ -35,13 +36,17 @@\n def resolve_customers(self, info, query=None, **kwargs):\n return resolve_customers(info, query=query)\n \n+ @login_required\n+ def resolve_me(self, info):\n+ return info.context.user\n+\n @permission_required('account.manage_staff')\n def resolve_staff_users(self, info, query=None, **kwargs):\n return resolve_staff_users(info, query=query)\n \n- @login_required\n+ @permission_required('account.manage_users')\n def resolve_user(self, info, id):\n- return resolve_user(info, id)\n+ return graphene.Node.get_node_from_global_id(info, id, User)\n \n \n class AccountMutations(graphene.ObjectType):\ndiff --git a/saleor/graphql/account/types.py b/saleor/graphql/account/types.py\n--- a/saleor/graphql/account/types.py\n+++ b/saleor/graphql/account/types.py\n@@ -2,6 +2,7 @@\n import graphene_django_optimizer as gql_optimizer\n from django.contrib.auth import get_user_model\n from graphene import relay\n+from graphql_jwt.decorators import permission_required\n \n from ...account import models\n from ...core.permissions import get_permissions\n@@ -47,6 +48,7 @@\n PrefetchingConnectionField(\n Address, description='List of all user\\'s addresses.'),\n model_field='addresses')\n+ note = graphene.String(description='A note about the customer')\n \n class Meta:\n exclude_fields = ['password', 'is_superuser', 'OrderEvent_set']\n@@ -65,6 +67,10 @@\n def resolve_addresses(self, info, **kwargs):\n return 
self.addresses.all()\n \n+ @permission_required('account.manage_users')\n+ def resolve_note(self, info):\n+ return self.note\n+\n \n class AddressValidationInput(graphene.InputObjectType):\n country_code = graphene.String()\n", "issue": "Add viewer query\nWith PR #3202 we've added ability to get the data of currently logged in user with the `user` query. I would recommend refactoring it a little bit and introducing a separate query for that. The problem with the `user` query is that it expects the `ID!` argument, but it also accepts passing `\"\"` as ID value to resolve the logged in user. This brakes the single-responsibility rule (which is a good practice for GraphQL queries), is a bit unintuitive and makes to code harder to maintain.\r\n\r\nI would propose changing the schema to have the following queries:\r\n`viewer: User` - returns currently authenticated user\r\n`user(id: ID!): User` - resolves a user by ID, where ID is required\n", "before_files": [{"content": "import graphene\nimport graphene_django_optimizer as gql_optimizer\nfrom django.db.models import Q\nfrom i18naddress import get_validation_rules\n\nfrom ...account import models\nfrom ...core.utils import get_client_ip, get_country_by_ip\nfrom ..utils import filter_by_query_param\nfrom .types import AddressValidationData, ChoiceValue, User\n\nUSER_SEARCH_FIELDS = (\n 'email', 'default_shipping_address__first_name',\n 'default_shipping_address__last_name', 'default_shipping_address__city',\n 'default_shipping_address__country')\n\n\ndef resolve_user(info, id):\n logged_user = info.context.user\n if not id:\n return logged_user\n user = graphene.Node.get_node_from_global_id(info, id, User)\n if logged_user.has_perm('account.manage_users') or user == logged_user:\n return user\n return None\n\n\ndef resolve_customers(info, query):\n qs = models.User.objects.filter(\n Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False)))\n qs = filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n qs = qs.order_by('email')\n qs = qs.distinct()\n return gql_optimizer.query(qs, info)\n\n\ndef resolve_staff_users(info, query):\n qs = models.User.objects.filter(is_staff=True)\n qs = filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n qs = qs.order_by('email')\n qs = qs.distinct()\n return gql_optimizer.query(qs, info)\n\n\ndef resolve_address_validator(info, input):\n country_code = input['country_code']\n if not country_code:\n client_ip = get_client_ip(info.context)\n country = get_country_by_ip(client_ip)\n if country:\n country_code = country.code\n else:\n return None\n params = {\n 'country_code': country_code,\n 'country_area': input['country_area'],\n 'city_area': input['city_area']}\n rules = get_validation_rules(params)\n return AddressValidationData(\n country_code=rules.country_code,\n country_name=rules.country_name,\n address_format=rules.address_format,\n address_latin_format=rules.address_latin_format,\n allowed_fields=rules.allowed_fields,\n required_fields=rules.required_fields,\n upper_fields=rules.upper_fields,\n country_area_type=rules.country_area_type,\n country_area_choices=[\n ChoiceValue(area[0], area[1])\n for area in rules.country_area_choices],\n city_type=rules.city_type,\n city_area_choices=[\n ChoiceValue(area[0], area[1]) for area in rules.city_area_choices],\n postal_code_type=rules.postal_code_type,\n postal_code_matchers=[\n compiled.pattern for compiled in rules.postal_code_matchers],\n 
postal_code_examples=rules.postal_code_examples,\n postal_code_prefix=rules.postal_code_prefix)\n", "path": "saleor/graphql/account/resolvers.py"}, {"content": "import graphene\nimport graphene_django_optimizer as gql_optimizer\nfrom django.contrib.auth import get_user_model\nfrom graphene import relay\n\nfrom ...account import models\nfrom ...core.permissions import get_permissions\nfrom ..core.types.common import (\n CountableDjangoObjectType, CountryDisplay, PermissionDisplay)\nfrom ..utils import format_permissions_for_display\n\n\nclass AddressInput(graphene.InputObjectType):\n first_name = graphene.String(description='Given name.')\n last_name = graphene.String(description='Family name.')\n company_name = graphene.String(description='Company or organization.')\n street_address_1 = graphene.String(description='Address.')\n street_address_2 = graphene.String(description='Address.')\n city = graphene.String(description='City.')\n city_area = graphene.String(description='District.')\n postal_code = graphene.String(description='Postal code.')\n country = graphene.String(required=True, description='Country.')\n country_area = graphene.String(description='State or province.')\n phone = graphene.String(description='Phone number.')\n\n\nclass Address(CountableDjangoObjectType):\n country = graphene.Field(\n CountryDisplay, required=True, description='Default shop\\'s country')\n\n class Meta:\n exclude_fields = ['user_set', 'user_addresses']\n description = 'Represents user address data.'\n interfaces = [relay.Node]\n model = models.Address\n\n def resolve_country(self, info):\n return CountryDisplay(\n code=self.country.code, country=self.country.name)\n\n\nclass User(CountableDjangoObjectType):\n permissions = graphene.List(\n PermissionDisplay, description='List of user\\'s permissions.')\n addresses = gql_optimizer.field(\n graphene.List(\n Address, description='List of all user\\'s addresses.'),\n model_field='addresses')\n\n class Meta:\n exclude_fields = ['password', 'is_superuser', 'OrderEvent_set']\n description = 'Represents user data.'\n interfaces = [relay.Node]\n model = get_user_model()\n\n def resolve_permissions(self, info, **kwargs):\n if self.is_superuser:\n permissions = get_permissions()\n else:\n permissions = self.user_permissions.prefetch_related(\n 'content_type').order_by('codename')\n return format_permissions_for_display(permissions)\n\n def resolve_addresses(self, info, **kwargs):\n return self.addresses.all()\n\n\nclass AddressValidationInput(graphene.InputObjectType):\n country_code = graphene.String()\n country_area = graphene.String()\n city_area = graphene.String()\n\n\nclass ChoiceValue(graphene.ObjectType):\n raw = graphene.String()\n verbose = graphene.String()\n\n\nclass AddressValidationData(graphene.ObjectType):\n country_code = graphene.String()\n country_name = graphene.String()\n address_format = graphene.String()\n address_latin_format = graphene.String()\n allowed_fields = graphene.List(graphene.String)\n required_fields = graphene.List(graphene.String)\n upper_fields = graphene.List(graphene.String)\n country_area_type = graphene.String()\n country_area_choices = graphene.List(ChoiceValue)\n city_type = graphene.String()\n city_area_choices = graphene.List(ChoiceValue)\n postal_code_type = graphene.String()\n postal_code_matchers = graphene.List(graphene.String)\n postal_code_examples = graphene.List(graphene.String)\n postal_code_prefix = graphene.String()\n", "path": "saleor/graphql/account/types.py"}, {"content": "import graphene\nfrom 
graphql_jwt.decorators import login_required, permission_required\n\nfrom ..core.fields import PrefetchingConnectionField\nfrom ..descriptions import DESCRIPTIONS\nfrom .mutations import (\n AddressCreate, AddressDelete, AddressUpdate, CustomerCreate,\n CustomerDelete, CustomerPasswordReset, CustomerRegister, CustomerUpdate,\n LoggedUserUpdate, PasswordReset, SetPassword, StaffCreate, StaffDelete,\n StaffUpdate)\nfrom .resolvers import (\n resolve_address_validator, resolve_customers, resolve_staff_users,\n resolve_user)\nfrom .types import AddressValidationData, AddressValidationInput, User\n\n\nclass AccountQueries(graphene.ObjectType):\n address_validator = graphene.Field(\n AddressValidationData,\n input=graphene.Argument(AddressValidationInput, required=True))\n customers = PrefetchingConnectionField(\n User, description='List of the shop\\'s customers.',\n query=graphene.String(description=DESCRIPTIONS['user']))\n staff_users = PrefetchingConnectionField(\n User, description='List of the shop\\'s staff users.',\n query=graphene.String(description=DESCRIPTIONS['user']))\n user = graphene.Field(\n User, id=graphene.Argument(graphene.ID, required=True),\n description='Lookup an user by ID.')\n\n def resolve_address_validator(self, info, input):\n return resolve_address_validator(info, input)\n\n @permission_required('account.manage_users')\n def resolve_customers(self, info, query=None, **kwargs):\n return resolve_customers(info, query=query)\n\n @permission_required('account.manage_staff')\n def resolve_staff_users(self, info, query=None, **kwargs):\n return resolve_staff_users(info, query=query)\n\n @login_required\n def resolve_user(self, info, id):\n return resolve_user(info, id)\n\n\nclass AccountMutations(graphene.ObjectType):\n password_reset = PasswordReset.Field()\n set_password = SetPassword.Field()\n\n customer_create = CustomerCreate.Field()\n customer_delete = CustomerDelete.Field()\n customer_password_reset = CustomerPasswordReset.Field()\n customer_register = CustomerRegister.Field()\n customer_update = CustomerUpdate.Field()\n\n logged_user_update = LoggedUserUpdate.Field()\n\n staff_create = StaffCreate.Field()\n staff_delete = StaffDelete.Field()\n staff_update = StaffUpdate.Field()\n\n address_create = AddressCreate.Field()\n address_delete = AddressDelete.Field()\n address_update = AddressUpdate.Field()\n", "path": "saleor/graphql/account/schema.py"}], "after_files": [{"content": "import graphene_django_optimizer as gql_optimizer\nfrom django.db.models import Q\nfrom i18naddress import get_validation_rules\n\nfrom ...account import models\nfrom ...core.utils import get_client_ip, get_country_by_ip\nfrom ..utils import filter_by_query_param\nfrom .types import AddressValidationData, ChoiceValue\n\nUSER_SEARCH_FIELDS = (\n 'email', 'default_shipping_address__first_name',\n 'default_shipping_address__last_name', 'default_shipping_address__city',\n 'default_shipping_address__country')\n\n\ndef resolve_customers(info, query):\n qs = models.User.objects.filter(\n Q(is_staff=False) | (Q(is_staff=True) & Q(orders__isnull=False)))\n qs = filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n qs = qs.order_by('email')\n qs = qs.distinct()\n return gql_optimizer.query(qs, info)\n\n\ndef resolve_staff_users(info, query):\n qs = models.User.objects.filter(is_staff=True)\n qs = filter_by_query_param(\n queryset=qs, query=query, search_fields=USER_SEARCH_FIELDS)\n qs = qs.order_by('email')\n qs = qs.distinct()\n return gql_optimizer.query(qs, 
info)\n\n\ndef resolve_address_validator(info, input):\n country_code = input['country_code']\n if not country_code:\n client_ip = get_client_ip(info.context)\n country = get_country_by_ip(client_ip)\n if country:\n country_code = country.code\n else:\n return None\n params = {\n 'country_code': country_code,\n 'country_area': input['country_area'],\n 'city_area': input['city_area']}\n rules = get_validation_rules(params)\n return AddressValidationData(\n country_code=rules.country_code,\n country_name=rules.country_name,\n address_format=rules.address_format,\n address_latin_format=rules.address_latin_format,\n allowed_fields=rules.allowed_fields,\n required_fields=rules.required_fields,\n upper_fields=rules.upper_fields,\n country_area_type=rules.country_area_type,\n country_area_choices=[\n ChoiceValue(area[0], area[1])\n for area in rules.country_area_choices],\n city_type=rules.city_type,\n city_area_choices=[\n ChoiceValue(area[0], area[1]) for area in rules.city_area_choices],\n postal_code_type=rules.postal_code_type,\n postal_code_matchers=[\n compiled.pattern for compiled in rules.postal_code_matchers],\n postal_code_examples=rules.postal_code_examples,\n postal_code_prefix=rules.postal_code_prefix)\n", "path": "saleor/graphql/account/resolvers.py"}, {"content": "import graphene\nimport graphene_django_optimizer as gql_optimizer\nfrom django.contrib.auth import get_user_model\nfrom graphene import relay\nfrom graphql_jwt.decorators import permission_required\n\nfrom ...account import models\nfrom ...core.permissions import get_permissions\nfrom ..core.fields import PrefetchingConnectionField\nfrom ..core.types.common import (\n CountableDjangoObjectType, CountryDisplay, PermissionDisplay)\nfrom ..utils import format_permissions_for_display\n\n\nclass AddressInput(graphene.InputObjectType):\n first_name = graphene.String(description='Given name.')\n last_name = graphene.String(description='Family name.')\n company_name = graphene.String(description='Company or organization.')\n street_address_1 = graphene.String(description='Address.')\n street_address_2 = graphene.String(description='Address.')\n city = graphene.String(description='City.')\n city_area = graphene.String(description='District.')\n postal_code = graphene.String(description='Postal code.')\n country = graphene.String(required=True, description='Country.')\n country_area = graphene.String(description='State or province.')\n phone = graphene.String(description='Phone number.')\n\n\nclass Address(CountableDjangoObjectType):\n country = graphene.Field(\n CountryDisplay, required=True, description='Default shop\\'s country')\n\n class Meta:\n exclude_fields = ['user_set', 'user_addresses']\n description = 'Represents user address data.'\n interfaces = [relay.Node]\n model = models.Address\n\n def resolve_country(self, info):\n return CountryDisplay(\n code=self.country.code, country=self.country.name)\n\n\nclass User(CountableDjangoObjectType):\n permissions = graphene.List(\n PermissionDisplay, description='List of user\\'s permissions.')\n addresses = gql_optimizer.field(\n PrefetchingConnectionField(\n Address, description='List of all user\\'s addresses.'),\n model_field='addresses')\n note = graphene.String(description='A note about the customer')\n\n class Meta:\n exclude_fields = ['password', 'is_superuser', 'OrderEvent_set']\n description = 'Represents user data.'\n interfaces = [relay.Node]\n model = get_user_model()\n\n def resolve_permissions(self, info, **kwargs):\n if self.is_superuser:\n permissions = 
get_permissions()\n else:\n permissions = self.user_permissions.prefetch_related(\n 'content_type').order_by('codename')\n return format_permissions_for_display(permissions)\n\n def resolve_addresses(self, info, **kwargs):\n return self.addresses.all()\n\n @permission_required('account.manage_users')\n def resolve_note(self, info):\n return self.note\n\n\nclass AddressValidationInput(graphene.InputObjectType):\n country_code = graphene.String()\n country_area = graphene.String()\n city_area = graphene.String()\n\n\nclass ChoiceValue(graphene.ObjectType):\n raw = graphene.String()\n verbose = graphene.String()\n\n\nclass AddressValidationData(graphene.ObjectType):\n country_code = graphene.String()\n country_name = graphene.String()\n address_format = graphene.String()\n address_latin_format = graphene.String()\n allowed_fields = graphene.List(graphene.String)\n required_fields = graphene.List(graphene.String)\n upper_fields = graphene.List(graphene.String)\n country_area_type = graphene.String()\n country_area_choices = graphene.List(ChoiceValue)\n city_type = graphene.String()\n city_area_choices = graphene.List(ChoiceValue)\n postal_code_type = graphene.String()\n postal_code_matchers = graphene.List(graphene.String)\n postal_code_examples = graphene.List(graphene.String)\n postal_code_prefix = graphene.String()\n", "path": "saleor/graphql/account/types.py"}, {"content": "import graphene\nfrom graphql_jwt.decorators import login_required, permission_required\n\nfrom ..core.fields import PrefetchingConnectionField\nfrom ..descriptions import DESCRIPTIONS\nfrom .mutations import (\n AddressCreate, AddressDelete, AddressUpdate, CustomerCreate,\n CustomerDelete, CustomerPasswordReset, CustomerRegister, CustomerUpdate,\n LoggedUserUpdate, PasswordReset, SetPassword, StaffCreate, StaffDelete,\n StaffUpdate)\nfrom .resolvers import (\n resolve_address_validator, resolve_customers, resolve_staff_users)\nfrom .types import AddressValidationData, AddressValidationInput, User\n\n\nclass AccountQueries(graphene.ObjectType):\n address_validator = graphene.Field(\n AddressValidationData,\n input=graphene.Argument(AddressValidationInput, required=True))\n customers = PrefetchingConnectionField(\n User, description='List of the shop\\'s customers.',\n query=graphene.String(description=DESCRIPTIONS['user']))\n me = graphene.Field(\n User, description='Logged in user data.')\n staff_users = PrefetchingConnectionField(\n User, description='List of the shop\\'s staff users.',\n query=graphene.String(description=DESCRIPTIONS['user']))\n user = graphene.Field(\n User, id=graphene.Argument(graphene.ID, required=True),\n description='Lookup an user by ID.')\n\n def resolve_address_validator(self, info, input):\n return resolve_address_validator(info, input)\n\n @permission_required('account.manage_users')\n def resolve_customers(self, info, query=None, **kwargs):\n return resolve_customers(info, query=query)\n\n @login_required\n def resolve_me(self, info):\n return info.context.user\n\n @permission_required('account.manage_staff')\n def resolve_staff_users(self, info, query=None, **kwargs):\n return resolve_staff_users(info, query=query)\n\n @permission_required('account.manage_users')\n def resolve_user(self, info, id):\n return graphene.Node.get_node_from_global_id(info, id, User)\n\n\nclass AccountMutations(graphene.ObjectType):\n password_reset = PasswordReset.Field()\n set_password = SetPassword.Field()\n\n customer_create = CustomerCreate.Field()\n customer_delete = CustomerDelete.Field()\n 
customer_password_reset = CustomerPasswordReset.Field()\n customer_register = CustomerRegister.Field()\n customer_update = CustomerUpdate.Field()\n\n logged_user_update = LoggedUserUpdate.Field()\n\n staff_create = StaffCreate.Field()\n staff_delete = StaffDelete.Field()\n staff_update = StaffUpdate.Field()\n\n address_create = AddressCreate.Field()\n address_delete = AddressDelete.Field()\n address_update = AddressUpdate.Field()\n", "path": "saleor/graphql/account/schema.py"}]}
| 2,727 | 930 |
gh_patches_debug_26974
|
rasdani/github-patches
|
git_diff
|
sanic-org__sanic-2072
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RecursionError on Sanic subclass initialisation
Discovered at https://github.com/sanic-org/sanic/issues/2071
version: latest sanic master (`8a2ea626c6d04a5eb1e28d071ffa56bf9ad98a12`)
description:
RecursionError occurs when initialising Sanic subclass
minimal code to reproduce:
```python
from sanic import Sanic
class Custom(Sanic):
pass
custom = Custom("custom")
```
Potential fix: https://github.com/sanic-org/sanic/pull/2072
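A stripped-down reproduction of the underlying pattern (illustrative only, not the actual Sanic code): the metaclass-generated `__init__` derives the classes to initialise from `type(self).__bases__`, so as soon as one extra inheritance level exists, `BaseApp.__init__` resolves to the very same wrapper and keeps calling itself.

```python
# Hypothetical, simplified version of the metaclass pattern described above.
class Meta(type):
    def __new__(mcs, name, bases, attrs):
        own_init = attrs.get("__init__")

        def __init__(self, *args, **kwargs):
            # Classes to initialise are computed from the *runtime* type,
            # not from the class this wrapper was generated for.
            grandparents = [
                b for base in type(self).__bases__ for b in base.__bases__
            ]
            for base in grandparents:
                base.__init__(self, *args, **kwargs)
            if own_init:
                own_init(self, *args, **kwargs)

        attrs["__init__"] = __init__
        return super().__new__(mcs, name, bases, attrs)


class Mixin:
    def __init__(self, *args, **kwargs):
        self.mixin_ready = True


class BaseApp(Mixin, metaclass=Meta):      # stands in for BaseSanic
    pass


class App(BaseApp):                        # stands in for Sanic
    def __init__(self, name):
        self.name = name


class CustomApp(App):                      # stands in for the user's subclass
    pass


App("ok")                                  # fine: grandparents == [Mixin]
try:
    CustomApp("boom")                      # grandparents == [BaseApp] -> recursion
except RecursionError:
    print("RecursionError reproduced")
```

The linked PR takes the route of dropping the metaclass and having `BaseSanic.__init__` call its own bases directly (see the diff further down).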
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sanic/blueprints.py`
Content:
```
1 from __future__ import annotations
2
3 import asyncio
4
5 from collections import defaultdict
6 from types import SimpleNamespace
7 from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Set, Union
8
9 from sanic_routing.exceptions import NotFound # type: ignore
10 from sanic_routing.route import Route # type: ignore
11
12 from sanic.base import BaseSanic
13 from sanic.blueprint_group import BlueprintGroup
14 from sanic.exceptions import SanicException
15 from sanic.models.futures import FutureRoute, FutureStatic
16 from sanic.models.handler_types import (
17 ListenerType,
18 MiddlewareType,
19 RouteHandler,
20 )
21
22
23 if TYPE_CHECKING:
24 from sanic import Sanic # noqa
25
26
27 class Blueprint(BaseSanic):
28 """
29 In *Sanic* terminology, a **Blueprint** is a logical collection of
30 URLs that perform a specific set of tasks which can be identified by
31 a unique name.
32
33 It is the main tool for grouping functionality and similar endpoints.
34
35 `See user guide re: blueprints
36 <https://sanicframework.org/guide/best-practices/blueprints.html>`__
37
38 :param name: unique name of the blueprint
39 :param url_prefix: URL to be prefixed before all route URLs
40 :param host: IP Address of FQDN for the sanic server to use.
41 :param version: Blueprint Version
42 :param strict_slashes: Enforce the API urls are requested with a
43 training */*
44 """
45
46 __fake_slots__ = (
47 "_apps",
48 "_future_routes",
49 "_future_statics",
50 "_future_middleware",
51 "_future_listeners",
52 "_future_exceptions",
53 "_future_signals",
54 "ctx",
55 "exceptions",
56 "host",
57 "listeners",
58 "middlewares",
59 "name",
60 "routes",
61 "statics",
62 "strict_slashes",
63 "url_prefix",
64 "version",
65 "websocket_routes",
66 )
67
68 def __init__(
69 self,
70 name: str,
71 url_prefix: Optional[str] = None,
72 host: Optional[str] = None,
73 version: Optional[int] = None,
74 strict_slashes: Optional[bool] = None,
75 ):
76
77 self._apps: Set[Sanic] = set()
78 self.ctx = SimpleNamespace()
79 self.exceptions: List[RouteHandler] = []
80 self.host = host
81 self.listeners: Dict[str, List[ListenerType]] = {}
82 self.middlewares: List[MiddlewareType] = []
83 self.name = name
84 self.routes: List[Route] = []
85 self.statics: List[RouteHandler] = []
86 self.strict_slashes = strict_slashes
87 self.url_prefix = url_prefix
88 self.version = version
89 self.websocket_routes: List[Route] = []
90
91 def __repr__(self) -> str:
92 args = ", ".join(
93 [
94 f'{attr}="{getattr(self, attr)}"'
95 if isinstance(getattr(self, attr), str)
96 else f"{attr}={getattr(self, attr)}"
97 for attr in (
98 "name",
99 "url_prefix",
100 "host",
101 "version",
102 "strict_slashes",
103 )
104 ]
105 )
106 return f"Blueprint({args})"
107
108 @property
109 def apps(self):
110 if not self._apps:
111 raise SanicException(
112 f"{self} has not yet been registered to an app"
113 )
114 return self._apps
115
116 def route(self, *args, **kwargs):
117 kwargs["apply"] = False
118 return super().route(*args, **kwargs)
119
120 def static(self, *args, **kwargs):
121 kwargs["apply"] = False
122 return super().static(*args, **kwargs)
123
124 def middleware(self, *args, **kwargs):
125 kwargs["apply"] = False
126 return super().middleware(*args, **kwargs)
127
128 def listener(self, *args, **kwargs):
129 kwargs["apply"] = False
130 return super().listener(*args, **kwargs)
131
132 def exception(self, *args, **kwargs):
133 kwargs["apply"] = False
134 return super().exception(*args, **kwargs)
135
136 def signal(self, event: str, *args, **kwargs):
137 kwargs["apply"] = False
138 return super().signal(event, *args, **kwargs)
139
140 @staticmethod
141 def group(*blueprints, url_prefix="", version=None, strict_slashes=None):
142 """
143 Create a list of blueprints, optionally grouping them under a
144 general URL prefix.
145
146 :param blueprints: blueprints to be registered as a group
147 :param url_prefix: URL route to be prepended to all sub-prefixes
148 :param version: API Version to be used for Blueprint group
149 :param strict_slashes: Indicate strict slash termination behavior
150 for URL
151 """
152
153 def chain(nested) -> Iterable[Blueprint]:
154 """itertools.chain() but leaves strings untouched"""
155 for i in nested:
156 if isinstance(i, (list, tuple)):
157 yield from chain(i)
158 elif isinstance(i, BlueprintGroup):
159 yield from i.blueprints
160 else:
161 yield i
162
163 bps = BlueprintGroup(
164 url_prefix=url_prefix,
165 version=version,
166 strict_slashes=strict_slashes,
167 )
168 for bp in chain(blueprints):
169 bps.append(bp)
170 return bps
171
172 def register(self, app, options):
173 """
174 Register the blueprint to the sanic app.
175
176 :param app: Instance of :class:`sanic.app.Sanic` class
177 :param options: Options to be used while registering the
178 blueprint into the app.
179 *url_prefix* - URL Prefix to override the blueprint prefix
180 """
181
182 self._apps.add(app)
183 url_prefix = options.get("url_prefix", self.url_prefix)
184
185 routes = []
186 middleware = []
187 exception_handlers = []
188 listeners = defaultdict(list)
189
190 # Routes
191 for future in self._future_routes:
192 # attach the blueprint name to the handler so that it can be
193 # prefixed properly in the router
194 future.handler.__blueprintname__ = self.name
195 # Prepend the blueprint URI prefix if available
196 uri = url_prefix + future.uri if url_prefix else future.uri
197
198 strict_slashes = (
199 self.strict_slashes
200 if future.strict_slashes is None
201 and self.strict_slashes is not None
202 else future.strict_slashes
203 )
204 name = app._generate_name(future.name)
205
206 apply_route = FutureRoute(
207 future.handler,
208 uri[1:] if uri.startswith("//") else uri,
209 future.methods,
210 future.host or self.host,
211 strict_slashes,
212 future.stream,
213 future.version or self.version,
214 name,
215 future.ignore_body,
216 future.websocket,
217 future.subprotocols,
218 future.unquote,
219 future.static,
220 )
221
222 route = app._apply_route(apply_route)
223 operation = (
224 routes.extend if isinstance(route, list) else routes.append
225 )
226 operation(route)
227
228 # Static Files
229 for future in self._future_statics:
230 # Prepend the blueprint URI prefix if available
231 uri = url_prefix + future.uri if url_prefix else future.uri
232 apply_route = FutureStatic(uri, *future[1:])
233 route = app._apply_static(apply_route)
234 routes.append(route)
235
236 route_names = [route.name for route in routes if route]
237
238 # Middleware
239 if route_names:
240 for future in self._future_middleware:
241 middleware.append(app._apply_middleware(future, route_names))
242
243 # Exceptions
244 for future in self._future_exceptions:
245 exception_handlers.append(app._apply_exception_handler(future))
246
247 # Event listeners
248 for listener in self._future_listeners:
249 listeners[listener.event].append(app._apply_listener(listener))
250
251 for signal in self._future_signals:
252 signal.condition.update({"blueprint": self.name})
253 app._apply_signal(signal)
254
255 self.routes = [route for route in routes if isinstance(route, Route)]
256
257 # Deprecate these in 21.6
258 self.websocket_routes = [
259 route for route in self.routes if route.ctx.websocket
260 ]
261 self.middlewares = middleware
262 self.exceptions = exception_handlers
263 self.listeners = dict(listeners)
264
265 async def dispatch(self, *args, **kwargs):
266 condition = kwargs.pop("condition", {})
267 condition.update({"blueprint": self.name})
268 kwargs["condition"] = condition
269 await asyncio.gather(
270 *[app.dispatch(*args, **kwargs) for app in self.apps]
271 )
272
273 def event(self, event: str, timeout: Optional[Union[int, float]] = None):
274 events = set()
275 for app in self.apps:
276 signal = app.signal_router.name_index.get(event)
277 if not signal:
278 raise NotFound("Could not find signal %s" % event)
279 events.add(signal.ctx.event)
280
281 return asyncio.wait(
282 [event.wait() for event in events],
283 return_when=asyncio.FIRST_COMPLETED,
284 timeout=timeout,
285 )
286
```
Path: `sanic/base.py`
Content:
```
1 from typing import Any, Tuple
2 from warnings import warn
3
4 from sanic.mixins.exceptions import ExceptionMixin
5 from sanic.mixins.listeners import ListenerMixin
6 from sanic.mixins.middleware import MiddlewareMixin
7 from sanic.mixins.routes import RouteMixin
8 from sanic.mixins.signals import SignalMixin
9
10
11 class Base(type):
12 def __new__(cls, name, bases, attrs):
13 init = attrs.get("__init__")
14
15 def __init__(self, *args, **kwargs):
16 nonlocal init
17 nonlocal name
18
19 bases = [
20 b for base in type(self).__bases__ for b in base.__bases__
21 ]
22
23 for base in bases:
24 base.__init__(self, *args, **kwargs)
25
26 if init:
27 init(self, *args, **kwargs)
28
29 attrs["__init__"] = __init__
30 return type.__new__(cls, name, bases, attrs)
31
32
33 class BaseSanic(
34 RouteMixin,
35 MiddlewareMixin,
36 ListenerMixin,
37 ExceptionMixin,
38 SignalMixin,
39 metaclass=Base,
40 ):
41 __fake_slots__: Tuple[str, ...]
42
43 def __str__(self) -> str:
44 return f"<{self.__class__.__name__} {self.name}>"
45
46 def __repr__(self) -> str:
47 return f'{self.__class__.__name__}(name="{self.name}")'
48
49 def __setattr__(self, name: str, value: Any) -> None:
50 # This is a temporary compat layer so we can raise a warning until
51 # setting attributes on the app instance can be removed and deprecated
52 # with a proper implementation of __slots__
53 if name not in self.__fake_slots__:
54 warn(
55 f"Setting variables on {self.__class__.__name__} instances is "
56 "deprecated and will be removed in version 21.9. You should "
57 f"change your {self.__class__.__name__} instance to use "
58 f"instance.ctx.{name} instead."
59 )
60 super().__setattr__(name, value)
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sanic/base.py b/sanic/base.py
--- a/sanic/base.py
+++ b/sanic/base.py
@@ -8,38 +8,19 @@
from sanic.mixins.signals import SignalMixin
-class Base(type):
- def __new__(cls, name, bases, attrs):
- init = attrs.get("__init__")
-
- def __init__(self, *args, **kwargs):
- nonlocal init
- nonlocal name
-
- bases = [
- b for base in type(self).__bases__ for b in base.__bases__
- ]
-
- for base in bases:
- base.__init__(self, *args, **kwargs)
-
- if init:
- init(self, *args, **kwargs)
-
- attrs["__init__"] = __init__
- return type.__new__(cls, name, bases, attrs)
-
-
class BaseSanic(
RouteMixin,
MiddlewareMixin,
ListenerMixin,
ExceptionMixin,
SignalMixin,
- metaclass=Base,
):
__fake_slots__: Tuple[str, ...]
+ def __init__(self, *args, **kwargs) -> None:
+ for base in BaseSanic.__bases__:
+ base.__init__(self, *args, **kwargs) # type: ignore
+
def __str__(self) -> str:
return f"<{self.__class__.__name__} {self.name}>"
diff --git a/sanic/blueprints.py b/sanic/blueprints.py
--- a/sanic/blueprints.py
+++ b/sanic/blueprints.py
@@ -73,6 +73,7 @@
version: Optional[int] = None,
strict_slashes: Optional[bool] = None,
):
+ super().__init__()
self._apps: Set[Sanic] = set()
self.ctx = SimpleNamespace()
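For context on why this resolves the reported `RecursionError`, roughly: the `Base` metaclass generated an `__init__` that recomputed the base-class walk from `type(self).__bases__`, so for a user subclass of `Sanic` that walk lands on `BaseSanic.__init__`, which is itself a generated `__init__` performing the same walk on the same `type(self)`, and construction never terminates. The patch removes the metaclass, gives `BaseSanic` a plain `__init__` that calls each mixin initialiser exactly once, and has `Blueprint.__init__` chain through `super().__init__()`. A quick check, assuming the patched `sanic` package is importable; it mirrors the reproduction in the issue:

```python
from sanic import Sanic


class Custom(Sanic):
    """Subclassing Sanic previously hit the recursion limit during construction."""


custom = Custom("custom")  # completes without RecursionError after the patch
print(custom.name)         # "custom"
```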
|
{"golden_diff": "diff --git a/sanic/base.py b/sanic/base.py\n--- a/sanic/base.py\n+++ b/sanic/base.py\n@@ -8,38 +8,19 @@\n from sanic.mixins.signals import SignalMixin\n \n \n-class Base(type):\n- def __new__(cls, name, bases, attrs):\n- init = attrs.get(\"__init__\")\n-\n- def __init__(self, *args, **kwargs):\n- nonlocal init\n- nonlocal name\n-\n- bases = [\n- b for base in type(self).__bases__ for b in base.__bases__\n- ]\n-\n- for base in bases:\n- base.__init__(self, *args, **kwargs)\n-\n- if init:\n- init(self, *args, **kwargs)\n-\n- attrs[\"__init__\"] = __init__\n- return type.__new__(cls, name, bases, attrs)\n-\n-\n class BaseSanic(\n RouteMixin,\n MiddlewareMixin,\n ListenerMixin,\n ExceptionMixin,\n SignalMixin,\n- metaclass=Base,\n ):\n __fake_slots__: Tuple[str, ...]\n \n+ def __init__(self, *args, **kwargs) -> None:\n+ for base in BaseSanic.__bases__:\n+ base.__init__(self, *args, **kwargs) # type: ignore\n+\n def __str__(self) -> str:\n return f\"<{self.__class__.__name__} {self.name}>\"\n \ndiff --git a/sanic/blueprints.py b/sanic/blueprints.py\n--- a/sanic/blueprints.py\n+++ b/sanic/blueprints.py\n@@ -73,6 +73,7 @@\n version: Optional[int] = None,\n strict_slashes: Optional[bool] = None,\n ):\n+ super().__init__()\n \n self._apps: Set[Sanic] = set()\n self.ctx = SimpleNamespace()\n", "issue": "RecursionError on Sanic subclass initialisation\nDiscovered at https://github.com/sanic-org/sanic/issues/2071\r\n\r\nversion: latest sanic master (`8a2ea626c6d04a5eb1e28d071ffa56bf9ad98a12`)\r\n\r\ndescription:\r\n\r\nRecursionError occurs when initialising Sanic subclass\r\n\r\nminimal code to reproduce:\r\n\r\n```python\r\nfrom sanic import Sanic\r\n\r\n\r\nclass Custom(Sanic):\r\n pass\r\n\r\ncustom = Custom(\"custom\")\r\n```\r\n\r\n\r\nPotential fix: https://github.com/sanic-org/sanic/pull/2072\n", "before_files": [{"content": "from __future__ import annotations\n\nimport asyncio\n\nfrom collections import defaultdict\nfrom types import SimpleNamespace\nfrom typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Set, Union\n\nfrom sanic_routing.exceptions import NotFound # type: ignore\nfrom sanic_routing.route import Route # type: ignore\n\nfrom sanic.base import BaseSanic\nfrom sanic.blueprint_group import BlueprintGroup\nfrom sanic.exceptions import SanicException\nfrom sanic.models.futures import FutureRoute, FutureStatic\nfrom sanic.models.handler_types import (\n ListenerType,\n MiddlewareType,\n RouteHandler,\n)\n\n\nif TYPE_CHECKING:\n from sanic import Sanic # noqa\n\n\nclass Blueprint(BaseSanic):\n \"\"\"\n In *Sanic* terminology, a **Blueprint** is a logical collection of\n URLs that perform a specific set of tasks which can be identified by\n a unique name.\n\n It is the main tool for grouping functionality and similar endpoints.\n\n `See user guide re: blueprints\n <https://sanicframework.org/guide/best-practices/blueprints.html>`__\n\n :param name: unique name of the blueprint\n :param url_prefix: URL to be prefixed before all route URLs\n :param host: IP Address of FQDN for the sanic server to use.\n :param version: Blueprint Version\n :param strict_slashes: Enforce the API urls are requested with a\n training */*\n \"\"\"\n\n __fake_slots__ = (\n \"_apps\",\n \"_future_routes\",\n \"_future_statics\",\n \"_future_middleware\",\n \"_future_listeners\",\n \"_future_exceptions\",\n \"_future_signals\",\n \"ctx\",\n \"exceptions\",\n \"host\",\n \"listeners\",\n \"middlewares\",\n \"name\",\n \"routes\",\n \"statics\",\n \"strict_slashes\",\n \"url_prefix\",\n 
\"version\",\n \"websocket_routes\",\n )\n\n def __init__(\n self,\n name: str,\n url_prefix: Optional[str] = None,\n host: Optional[str] = None,\n version: Optional[int] = None,\n strict_slashes: Optional[bool] = None,\n ):\n\n self._apps: Set[Sanic] = set()\n self.ctx = SimpleNamespace()\n self.exceptions: List[RouteHandler] = []\n self.host = host\n self.listeners: Dict[str, List[ListenerType]] = {}\n self.middlewares: List[MiddlewareType] = []\n self.name = name\n self.routes: List[Route] = []\n self.statics: List[RouteHandler] = []\n self.strict_slashes = strict_slashes\n self.url_prefix = url_prefix\n self.version = version\n self.websocket_routes: List[Route] = []\n\n def __repr__(self) -> str:\n args = \", \".join(\n [\n f'{attr}=\"{getattr(self, attr)}\"'\n if isinstance(getattr(self, attr), str)\n else f\"{attr}={getattr(self, attr)}\"\n for attr in (\n \"name\",\n \"url_prefix\",\n \"host\",\n \"version\",\n \"strict_slashes\",\n )\n ]\n )\n return f\"Blueprint({args})\"\n\n @property\n def apps(self):\n if not self._apps:\n raise SanicException(\n f\"{self} has not yet been registered to an app\"\n )\n return self._apps\n\n def route(self, *args, **kwargs):\n kwargs[\"apply\"] = False\n return super().route(*args, **kwargs)\n\n def static(self, *args, **kwargs):\n kwargs[\"apply\"] = False\n return super().static(*args, **kwargs)\n\n def middleware(self, *args, **kwargs):\n kwargs[\"apply\"] = False\n return super().middleware(*args, **kwargs)\n\n def listener(self, *args, **kwargs):\n kwargs[\"apply\"] = False\n return super().listener(*args, **kwargs)\n\n def exception(self, *args, **kwargs):\n kwargs[\"apply\"] = False\n return super().exception(*args, **kwargs)\n\n def signal(self, event: str, *args, **kwargs):\n kwargs[\"apply\"] = False\n return super().signal(event, *args, **kwargs)\n\n @staticmethod\n def group(*blueprints, url_prefix=\"\", version=None, strict_slashes=None):\n \"\"\"\n Create a list of blueprints, optionally grouping them under a\n general URL prefix.\n\n :param blueprints: blueprints to be registered as a group\n :param url_prefix: URL route to be prepended to all sub-prefixes\n :param version: API Version to be used for Blueprint group\n :param strict_slashes: Indicate strict slash termination behavior\n for URL\n \"\"\"\n\n def chain(nested) -> Iterable[Blueprint]:\n \"\"\"itertools.chain() but leaves strings untouched\"\"\"\n for i in nested:\n if isinstance(i, (list, tuple)):\n yield from chain(i)\n elif isinstance(i, BlueprintGroup):\n yield from i.blueprints\n else:\n yield i\n\n bps = BlueprintGroup(\n url_prefix=url_prefix,\n version=version,\n strict_slashes=strict_slashes,\n )\n for bp in chain(blueprints):\n bps.append(bp)\n return bps\n\n def register(self, app, options):\n \"\"\"\n Register the blueprint to the sanic app.\n\n :param app: Instance of :class:`sanic.app.Sanic` class\n :param options: Options to be used while registering the\n blueprint into the app.\n *url_prefix* - URL Prefix to override the blueprint prefix\n \"\"\"\n\n self._apps.add(app)\n url_prefix = options.get(\"url_prefix\", self.url_prefix)\n\n routes = []\n middleware = []\n exception_handlers = []\n listeners = defaultdict(list)\n\n # Routes\n for future in self._future_routes:\n # attach the blueprint name to the handler so that it can be\n # prefixed properly in the router\n future.handler.__blueprintname__ = self.name\n # Prepend the blueprint URI prefix if available\n uri = url_prefix + future.uri if url_prefix else future.uri\n\n strict_slashes = (\n 
self.strict_slashes\n if future.strict_slashes is None\n and self.strict_slashes is not None\n else future.strict_slashes\n )\n name = app._generate_name(future.name)\n\n apply_route = FutureRoute(\n future.handler,\n uri[1:] if uri.startswith(\"//\") else uri,\n future.methods,\n future.host or self.host,\n strict_slashes,\n future.stream,\n future.version or self.version,\n name,\n future.ignore_body,\n future.websocket,\n future.subprotocols,\n future.unquote,\n future.static,\n )\n\n route = app._apply_route(apply_route)\n operation = (\n routes.extend if isinstance(route, list) else routes.append\n )\n operation(route)\n\n # Static Files\n for future in self._future_statics:\n # Prepend the blueprint URI prefix if available\n uri = url_prefix + future.uri if url_prefix else future.uri\n apply_route = FutureStatic(uri, *future[1:])\n route = app._apply_static(apply_route)\n routes.append(route)\n\n route_names = [route.name for route in routes if route]\n\n # Middleware\n if route_names:\n for future in self._future_middleware:\n middleware.append(app._apply_middleware(future, route_names))\n\n # Exceptions\n for future in self._future_exceptions:\n exception_handlers.append(app._apply_exception_handler(future))\n\n # Event listeners\n for listener in self._future_listeners:\n listeners[listener.event].append(app._apply_listener(listener))\n\n for signal in self._future_signals:\n signal.condition.update({\"blueprint\": self.name})\n app._apply_signal(signal)\n\n self.routes = [route for route in routes if isinstance(route, Route)]\n\n # Deprecate these in 21.6\n self.websocket_routes = [\n route for route in self.routes if route.ctx.websocket\n ]\n self.middlewares = middleware\n self.exceptions = exception_handlers\n self.listeners = dict(listeners)\n\n async def dispatch(self, *args, **kwargs):\n condition = kwargs.pop(\"condition\", {})\n condition.update({\"blueprint\": self.name})\n kwargs[\"condition\"] = condition\n await asyncio.gather(\n *[app.dispatch(*args, **kwargs) for app in self.apps]\n )\n\n def event(self, event: str, timeout: Optional[Union[int, float]] = None):\n events = set()\n for app in self.apps:\n signal = app.signal_router.name_index.get(event)\n if not signal:\n raise NotFound(\"Could not find signal %s\" % event)\n events.add(signal.ctx.event)\n\n return asyncio.wait(\n [event.wait() for event in events],\n return_when=asyncio.FIRST_COMPLETED,\n timeout=timeout,\n )\n", "path": "sanic/blueprints.py"}, {"content": "from typing import Any, Tuple\nfrom warnings import warn\n\nfrom sanic.mixins.exceptions import ExceptionMixin\nfrom sanic.mixins.listeners import ListenerMixin\nfrom sanic.mixins.middleware import MiddlewareMixin\nfrom sanic.mixins.routes import RouteMixin\nfrom sanic.mixins.signals import SignalMixin\n\n\nclass Base(type):\n def __new__(cls, name, bases, attrs):\n init = attrs.get(\"__init__\")\n\n def __init__(self, *args, **kwargs):\n nonlocal init\n nonlocal name\n\n bases = [\n b for base in type(self).__bases__ for b in base.__bases__\n ]\n\n for base in bases:\n base.__init__(self, *args, **kwargs)\n\n if init:\n init(self, *args, **kwargs)\n\n attrs[\"__init__\"] = __init__\n return type.__new__(cls, name, bases, attrs)\n\n\nclass BaseSanic(\n RouteMixin,\n MiddlewareMixin,\n ListenerMixin,\n ExceptionMixin,\n SignalMixin,\n metaclass=Base,\n):\n __fake_slots__: Tuple[str, ...]\n\n def __str__(self) -> str:\n return f\"<{self.__class__.__name__} {self.name}>\"\n\n def __repr__(self) -> str:\n return 
f'{self.__class__.__name__}(name=\"{self.name}\")'\n\n def __setattr__(self, name: str, value: Any) -> None:\n # This is a temporary compat layer so we can raise a warning until\n # setting attributes on the app instance can be removed and deprecated\n # with a proper implementation of __slots__\n if name not in self.__fake_slots__:\n warn(\n f\"Setting variables on {self.__class__.__name__} instances is \"\n \"deprecated and will be removed in version 21.9. You should \"\n f\"change your {self.__class__.__name__} instance to use \"\n f\"instance.ctx.{name} instead.\"\n )\n super().__setattr__(name, value)\n", "path": "sanic/base.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport asyncio\n\nfrom collections import defaultdict\nfrom types import SimpleNamespace\nfrom typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Set, Union\n\nfrom sanic_routing.exceptions import NotFound # type: ignore\nfrom sanic_routing.route import Route # type: ignore\n\nfrom sanic.base import BaseSanic\nfrom sanic.blueprint_group import BlueprintGroup\nfrom sanic.exceptions import SanicException\nfrom sanic.models.futures import FutureRoute, FutureStatic\nfrom sanic.models.handler_types import (\n ListenerType,\n MiddlewareType,\n RouteHandler,\n)\n\n\nif TYPE_CHECKING:\n from sanic import Sanic # noqa\n\n\nclass Blueprint(BaseSanic):\n \"\"\"\n In *Sanic* terminology, a **Blueprint** is a logical collection of\n URLs that perform a specific set of tasks which can be identified by\n a unique name.\n\n It is the main tool for grouping functionality and similar endpoints.\n\n `See user guide re: blueprints\n <https://sanicframework.org/guide/best-practices/blueprints.html>`__\n\n :param name: unique name of the blueprint\n :param url_prefix: URL to be prefixed before all route URLs\n :param host: IP Address of FQDN for the sanic server to use.\n :param version: Blueprint Version\n :param strict_slashes: Enforce the API urls are requested with a\n training */*\n \"\"\"\n\n __fake_slots__ = (\n \"_apps\",\n \"_future_routes\",\n \"_future_statics\",\n \"_future_middleware\",\n \"_future_listeners\",\n \"_future_exceptions\",\n \"_future_signals\",\n \"ctx\",\n \"exceptions\",\n \"host\",\n \"listeners\",\n \"middlewares\",\n \"name\",\n \"routes\",\n \"statics\",\n \"strict_slashes\",\n \"url_prefix\",\n \"version\",\n \"websocket_routes\",\n )\n\n def __init__(\n self,\n name: str,\n url_prefix: Optional[str] = None,\n host: Optional[str] = None,\n version: Optional[int] = None,\n strict_slashes: Optional[bool] = None,\n ):\n super().__init__()\n\n self._apps: Set[Sanic] = set()\n self.ctx = SimpleNamespace()\n self.exceptions: List[RouteHandler] = []\n self.host = host\n self.listeners: Dict[str, List[ListenerType]] = {}\n self.middlewares: List[MiddlewareType] = []\n self.name = name\n self.routes: List[Route] = []\n self.statics: List[RouteHandler] = []\n self.strict_slashes = strict_slashes\n self.url_prefix = url_prefix\n self.version = version\n self.websocket_routes: List[Route] = []\n\n def __repr__(self) -> str:\n args = \", \".join(\n [\n f'{attr}=\"{getattr(self, attr)}\"'\n if isinstance(getattr(self, attr), str)\n else f\"{attr}={getattr(self, attr)}\"\n for attr in (\n \"name\",\n \"url_prefix\",\n \"host\",\n \"version\",\n \"strict_slashes\",\n )\n ]\n )\n return f\"Blueprint({args})\"\n\n @property\n def apps(self):\n if not self._apps:\n raise SanicException(\n f\"{self} has not yet been registered to an app\"\n )\n return self._apps\n\n def route(self, *args, 
**kwargs):\n kwargs[\"apply\"] = False\n return super().route(*args, **kwargs)\n\n def static(self, *args, **kwargs):\n kwargs[\"apply\"] = False\n return super().static(*args, **kwargs)\n\n def middleware(self, *args, **kwargs):\n kwargs[\"apply\"] = False\n return super().middleware(*args, **kwargs)\n\n def listener(self, *args, **kwargs):\n kwargs[\"apply\"] = False\n return super().listener(*args, **kwargs)\n\n def exception(self, *args, **kwargs):\n kwargs[\"apply\"] = False\n return super().exception(*args, **kwargs)\n\n def signal(self, event: str, *args, **kwargs):\n kwargs[\"apply\"] = False\n return super().signal(event, *args, **kwargs)\n\n @staticmethod\n def group(*blueprints, url_prefix=\"\", version=None, strict_slashes=None):\n \"\"\"\n Create a list of blueprints, optionally grouping them under a\n general URL prefix.\n\n :param blueprints: blueprints to be registered as a group\n :param url_prefix: URL route to be prepended to all sub-prefixes\n :param version: API Version to be used for Blueprint group\n :param strict_slashes: Indicate strict slash termination behavior\n for URL\n \"\"\"\n\n def chain(nested) -> Iterable[Blueprint]:\n \"\"\"itertools.chain() but leaves strings untouched\"\"\"\n for i in nested:\n if isinstance(i, (list, tuple)):\n yield from chain(i)\n elif isinstance(i, BlueprintGroup):\n yield from i.blueprints\n else:\n yield i\n\n bps = BlueprintGroup(\n url_prefix=url_prefix,\n version=version,\n strict_slashes=strict_slashes,\n )\n for bp in chain(blueprints):\n bps.append(bp)\n return bps\n\n def register(self, app, options):\n \"\"\"\n Register the blueprint to the sanic app.\n\n :param app: Instance of :class:`sanic.app.Sanic` class\n :param options: Options to be used while registering the\n blueprint into the app.\n *url_prefix* - URL Prefix to override the blueprint prefix\n \"\"\"\n\n self._apps.add(app)\n url_prefix = options.get(\"url_prefix\", self.url_prefix)\n\n routes = []\n middleware = []\n exception_handlers = []\n listeners = defaultdict(list)\n\n # Routes\n for future in self._future_routes:\n # attach the blueprint name to the handler so that it can be\n # prefixed properly in the router\n future.handler.__blueprintname__ = self.name\n # Prepend the blueprint URI prefix if available\n uri = url_prefix + future.uri if url_prefix else future.uri\n\n strict_slashes = (\n self.strict_slashes\n if future.strict_slashes is None\n and self.strict_slashes is not None\n else future.strict_slashes\n )\n name = app._generate_name(future.name)\n\n apply_route = FutureRoute(\n future.handler,\n uri[1:] if uri.startswith(\"//\") else uri,\n future.methods,\n future.host or self.host,\n strict_slashes,\n future.stream,\n future.version or self.version,\n name,\n future.ignore_body,\n future.websocket,\n future.subprotocols,\n future.unquote,\n future.static,\n )\n\n route = app._apply_route(apply_route)\n operation = (\n routes.extend if isinstance(route, list) else routes.append\n )\n operation(route)\n\n # Static Files\n for future in self._future_statics:\n # Prepend the blueprint URI prefix if available\n uri = url_prefix + future.uri if url_prefix else future.uri\n apply_route = FutureStatic(uri, *future[1:])\n route = app._apply_static(apply_route)\n routes.append(route)\n\n route_names = [route.name for route in routes if route]\n\n # Middleware\n if route_names:\n for future in self._future_middleware:\n middleware.append(app._apply_middleware(future, route_names))\n\n # Exceptions\n for future in self._future_exceptions:\n 
exception_handlers.append(app._apply_exception_handler(future))\n\n # Event listeners\n for listener in self._future_listeners:\n listeners[listener.event].append(app._apply_listener(listener))\n\n for signal in self._future_signals:\n signal.condition.update({\"blueprint\": self.name})\n app._apply_signal(signal)\n\n self.routes = [route for route in routes if isinstance(route, Route)]\n\n # Deprecate these in 21.6\n self.websocket_routes = [\n route for route in self.routes if route.ctx.websocket\n ]\n self.middlewares = middleware\n self.exceptions = exception_handlers\n self.listeners = dict(listeners)\n\n async def dispatch(self, *args, **kwargs):\n condition = kwargs.pop(\"condition\", {})\n condition.update({\"blueprint\": self.name})\n kwargs[\"condition\"] = condition\n await asyncio.gather(\n *[app.dispatch(*args, **kwargs) for app in self.apps]\n )\n\n def event(self, event: str, timeout: Optional[Union[int, float]] = None):\n events = set()\n for app in self.apps:\n signal = app.signal_router.name_index.get(event)\n if not signal:\n raise NotFound(\"Could not find signal %s\" % event)\n events.add(signal.ctx.event)\n\n return asyncio.wait(\n [event.wait() for event in events],\n return_when=asyncio.FIRST_COMPLETED,\n timeout=timeout,\n )\n", "path": "sanic/blueprints.py"}, {"content": "from typing import Any, Tuple\nfrom warnings import warn\n\nfrom sanic.mixins.exceptions import ExceptionMixin\nfrom sanic.mixins.listeners import ListenerMixin\nfrom sanic.mixins.middleware import MiddlewareMixin\nfrom sanic.mixins.routes import RouteMixin\nfrom sanic.mixins.signals import SignalMixin\n\n\nclass BaseSanic(\n RouteMixin,\n MiddlewareMixin,\n ListenerMixin,\n ExceptionMixin,\n SignalMixin,\n):\n __fake_slots__: Tuple[str, ...]\n\n def __init__(self, *args, **kwargs) -> None:\n for base in BaseSanic.__bases__:\n base.__init__(self, *args, **kwargs) # type: ignore\n\n def __str__(self) -> str:\n return f\"<{self.__class__.__name__} {self.name}>\"\n\n def __repr__(self) -> str:\n return f'{self.__class__.__name__}(name=\"{self.name}\")'\n\n def __setattr__(self, name: str, value: Any) -> None:\n # This is a temporary compat layer so we can raise a warning until\n # setting attributes on the app instance can be removed and deprecated\n # with a proper implementation of __slots__\n if name not in self.__fake_slots__:\n warn(\n f\"Setting variables on {self.__class__.__name__} instances is \"\n \"deprecated and will be removed in version 21.9. You should \"\n f\"change your {self.__class__.__name__} instance to use \"\n f\"instance.ctx.{name} instead.\"\n )\n super().__setattr__(name, value)\n", "path": "sanic/base.py"}]}
| 3,688 | 418 |
gh_patches_debug_38969
|
rasdani/github-patches
|
git_diff
|
sql-machine-learning__elasticdl-397
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
worker_test failure
```
python -m unittest elasticdl/worker/*_test.py
/usr/local/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
2019-05-21 13:57:07.262725: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
WARNING:tensorflow:From /usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:642: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
must be str, not NoneType
FLoss is 3.090042
Loss is 1.4608976
Loss is 0.913306
Loss is 0.5969497
Loss is 0.66515267
Loss is 0.3935135
Loss is 0.37774342
Loss is 0.289928
.
======================================================================
FAIL: test_distributed_train (elasticdl.worker.worker_test.WorkerTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/l.zou/git/elasticdl/elasticdl/worker/worker_test.py", line 96, in test_distributed_train
self.assertTrue(res)
AssertionError: False is not true
----------------------------------------------------------------------
Ran 2 tests in 0.165s
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/worker/worker.py`
Content:
```
1 import traceback
2 import tensorflow as tf
3 assert tf.executing_eagerly()
4
5 from google.protobuf import empty_pb2
6 from tensorflow.python.ops import math_ops
7 from elasticdl.proto import master_pb2_grpc
8 from elasticdl.proto import master_pb2
9 from elasticdl.common.ndarray import ndarray_to_tensor, tensor_to_ndarray
10 from elasticdl.common.model_helper import load_user_model, build_model
11 from edl_data.codec import TFExampleCodec
12 from edl_data.codec import BytesCodec
13 import itertools
14 import recordio
15
16 # the default max number of a minibatch retrain as its gradients are not accepted by master.
17 DEFAULT_MAX_MINIBATCH_RETRAIN_NUM = 64
18
19 class Worker(object):
20 """ElasticDL worker"""
21
22 def __init__(self,
23 model_file,
24 channel=None,
25 max_retrain_num=DEFAULT_MAX_MINIBATCH_RETRAIN_NUM,
26 codec_type=None):
27 """
28 Arguments:
29 model_module: A module to define the model
30 channel: grpc channel
31 max_retrain_num: max number of a minibatch retrain as its gradients are not accepted by master
32 """
33
34 model_module = load_user_model(model_file)
35 self._model = model_module.model
36 self._feature_columns = model_module.feature_columns()
37 self._all_columns = self._feature_columns + model_module.label_columns()
38 build_model(self._model, self._feature_columns)
39 self._input_fn = model_module.input_fn
40 self._opt_fn = model_module.optimizer
41 self._loss = model_module.loss
42
43 if channel is None:
44 self._stub = None
45 else:
46 self._stub = master_pb2_grpc.MasterStub(channel)
47 self._max_retrain_num = max_retrain_num
48 self._model_version = -1
49 self._codec_type = codec_type
50
51 def get_task(self):
52 """
53 get task from master
54 """
55 return self._stub.GetTask(empty_pb2.Empty())
56
57 def get_model(self, min_version):
58 """
59 get model from master, and update model_version
60 """
61 req = master_pb2.GetModelRequest()
62 req.min_version = min_version
63 model = self._stub.GetModel(req)
64
65 for var in self._model.trainable_variables:
66 # Assumes all trainable variables exist in model.param.
67 var.assign(
68 tensor_to_ndarray(model.param[var.name]))
69 self._model_version = model.version
70
71 def report_task_result(self, task_id, err_msg):
72 """
73 report task result to master
74 """
75 report = master_pb2.ReportTaskResultRequest()
76 report.task_id = task_id
77 report.err_message = err_msg
78 return self._stub.ReportTaskResult(report)
79
80 def report_gradient(self, grads):
81 """
82 report gradient to ps, return (accepted, model_version) from rpc call.
83 """
84 req = master_pb2.ReportGradientRequest()
85 for g, v in zip(grads, self._model.trainable_variables):
86 req.gradient[v.name].CopyFrom(
87 ndarray_to_tensor(g.numpy()))
88 req.model_version = self._model_version
89 res = self._stub.ReportGradient(req)
90 return res.accepted, res.model_version
91
92 def distributed_train(self):
93 """
94 Distributed training.
95 """
96 if self._codec_type == "tf_example":
97 codec = TFExampleCodec(self._all_columns)
98 elif self._codec_type == "bytes":
99 codec = BytesCodec(self._all_columns)
100 else:
101 raise ValueError("invalid codec_type: " + self._codec_type)
102 while True:
103 task = self.get_task()
104 if not task.shard_file_name:
105 # No more task
106 break
107 batch_size = task.minibatch_size
108 err_msg = ""
109 try:
110 with recordio.File(task.shard_file_name, "r", decoder=codec.decode) as rdio_r:
111 reader = rdio_r.get_reader(task.start, task.end)
112 min_model_version = task.model_version
113 while True:
114 record_buf = list(
115 itertools.islice(reader, 0, batch_size))
116 if not record_buf:
117 break
118
119 for _ in range(self._max_retrain_num):
120 # TODO: optimize the logic to avoid unnecessary get_model call.
121 self.get_model(
122 max(self._model_version, min_model_version))
123
124 batch_input_data, batch_label = self._input_fn(record_buf)
125
126 with tf.GradientTape() as tape:
127 inputs = []
128 for f_col in self._feature_columns:
129 inputs.append(batch_input_data[f_col.key])
130 if len(inputs) == 1:
131 inputs = inputs[0]
132 outputs = self._model.call(inputs, training=True)
133 loss = self._loss(outputs, batch_label.flatten())
134
135 # TODO: Add regularization loss if any,
136 # which should be divided by the number of contributing workers.
137 grads = tape.gradient(
138 loss, self._model.trainable_variables)
139 print("Loss is ", loss.numpy())
140
141 accepted, min_model_version = self.report_gradient(
142 grads)
143 if accepted:
144 break
145 else:
146 # Worker got stuck, fail the task.
147 # TODO: stop the worker if it fails to make any progress for some time.
148 raise RuntimeError("Worker got stuck")
149
150
151 except Exception as ex:
152 err_msg = str(ex)
153 traceback.print_exc()
154 self.report_task_result(task.task_id, err_msg)
155
156 def local_train(self, file_list, batch_size, epoch=1, kwargs=None):
157 """
158 Local training for local testing. Must in eager mode.
159 Argments:
160 batch_size: batch size in training
161 epoch: the number of epoch in training
162 kwargs: contains a dict of parameters used in training
163 """
164 optimizer = self._opt_fn()
165 for _ in range(epoch):
166 for f in file_list:
167 with recordio.File(f, "r") as rdio_r:
168 reader = rdio_r.get_reader(0, rdio_r.count())
169 while True:
170 record_buf = list(
171 itertools.islice(reader, 0, batch_size))
172 if not record_buf:
173 break
174
175 data, labels = self._input_fn(record_buf)
176
177 with tf.GradientTape() as tape:
178 inputs = []
179 for f_col in self._feature_columns:
180 inputs.append(data[f_col.key])
181 if len(inputs) == 1:
182 inputs = inputs[0]
183 outputs = self._model.call(inputs, training=True)
184 loss = self._loss(outputs, labels)
185
186 # Add regularization loss if any.
187 # Note: for distributed training, the regularization loss should
188 # be divided by the number of contributing workers, which
189 # might be difficult for elasticdl.
190 if self._model.losses:
191 loss += math_ops.add_n(self._model.losses)
192 grads = tape.gradient(
193 loss, self._model.trainable_variables)
194 optimizer.apply_gradients(
195 zip(grads, self._model.trainable_variables))
196 print("Loss is ", loss.numpy())
197
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/elasticdl/worker/worker.py b/elasticdl/worker/worker.py
--- a/elasticdl/worker/worker.py
+++ b/elasticdl/worker/worker.py
@@ -30,15 +30,21 @@
channel: grpc channel
max_retrain_num: max number of a minibatch retrain as its gradients are not accepted by master
"""
-
model_module = load_user_model(model_file)
self._model = model_module.model
self._feature_columns = model_module.feature_columns()
- self._all_columns = self._feature_columns + model_module.label_columns()
build_model(self._model, self._feature_columns)
self._input_fn = model_module.input_fn
self._opt_fn = model_module.optimizer
self._loss = model_module.loss
+ all_columns = self._feature_columns + model_module.label_columns()
+ if codec_type == "tf_example":
+ self._codec = TFExampleCodec(all_columns)
+ elif codec_type == "bytes":
+ self._codec = BytesCodec(all_columns)
+ else:
+ raise ValueError("invalid codec_type: " + codec_type)
+
if channel is None:
self._stub = None
@@ -93,12 +99,6 @@
"""
Distributed training.
"""
- if self._codec_type == "tf_example":
- codec = TFExampleCodec(self._all_columns)
- elif self._codec_type == "bytes":
- codec = BytesCodec(self._all_columns)
- else:
- raise ValueError("invalid codec_type: " + self._codec_type)
while True:
task = self.get_task()
if not task.shard_file_name:
@@ -107,7 +107,7 @@
batch_size = task.minibatch_size
err_msg = ""
try:
- with recordio.File(task.shard_file_name, "r", decoder=codec.decode) as rdio_r:
+ with recordio.File(task.shard_file_name, "r", decoder=self._codec.decode) as rdio_r:
reader = rdio_r.get_reader(task.start, task.end)
min_model_version = task.model_version
while True:
@@ -164,7 +164,7 @@
optimizer = self._opt_fn()
for _ in range(epoch):
for f in file_list:
- with recordio.File(f, "r") as rdio_r:
+ with recordio.File(f, "r", decoder=self._codec.decode) as rdio_r:
reader = rdio_r.get_reader(0, rdio_r.count())
while True:
record_buf = list(
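In short, the patch hoists codec construction into `Worker.__init__` (validating `codec_type` up front) and passes `self._codec.decode` to `recordio.File` in both `distributed_train` and `local_train`, so the two training paths decode records consistently. A construction sketch under the module layout shown above; the model file and recordio paths are placeholders, not real repository paths:

```python
from elasticdl.worker.worker import Worker

worker = Worker(
    model_file="examples/mnist/mnist_model.py",  # hypothetical user model module
    channel=None,               # no gRPC channel, so only local_train is usable here
    codec_type="bytes",         # now validated and turned into a codec in __init__
)
worker.local_train(["data/mnist_train.recordio"], batch_size=64, epoch=1)
```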
|
{"golden_diff": "diff --git a/elasticdl/worker/worker.py b/elasticdl/worker/worker.py\n--- a/elasticdl/worker/worker.py\n+++ b/elasticdl/worker/worker.py\n@@ -30,15 +30,21 @@\n channel: grpc channel\n max_retrain_num: max number of a minibatch retrain as its gradients are not accepted by master\n \"\"\"\n-\n model_module = load_user_model(model_file)\n self._model = model_module.model\n self._feature_columns = model_module.feature_columns()\n- self._all_columns = self._feature_columns + model_module.label_columns()\n build_model(self._model, self._feature_columns)\n self._input_fn = model_module.input_fn \n self._opt_fn = model_module.optimizer\n self._loss = model_module.loss\n+ all_columns = self._feature_columns + model_module.label_columns()\n+ if codec_type == \"tf_example\":\n+ self._codec = TFExampleCodec(all_columns)\n+ elif codec_type == \"bytes\":\n+ self._codec = BytesCodec(all_columns)\n+ else:\n+ raise ValueError(\"invalid codec_type: \" + codec_type)\n+\n \n if channel is None:\n self._stub = None\n@@ -93,12 +99,6 @@\n \"\"\"\n Distributed training.\n \"\"\"\n- if self._codec_type == \"tf_example\":\n- codec = TFExampleCodec(self._all_columns)\n- elif self._codec_type == \"bytes\":\n- codec = BytesCodec(self._all_columns)\n- else:\n- raise ValueError(\"invalid codec_type: \" + self._codec_type)\n while True:\n task = self.get_task()\n if not task.shard_file_name:\n@@ -107,7 +107,7 @@\n batch_size = task.minibatch_size\n err_msg = \"\"\n try:\n- with recordio.File(task.shard_file_name, \"r\", decoder=codec.decode) as rdio_r:\n+ with recordio.File(task.shard_file_name, \"r\", decoder=self._codec.decode) as rdio_r:\n reader = rdio_r.get_reader(task.start, task.end)\n min_model_version = task.model_version\n while True:\n@@ -164,7 +164,7 @@\n optimizer = self._opt_fn()\n for _ in range(epoch):\n for f in file_list:\n- with recordio.File(f, \"r\") as rdio_r:\n+ with recordio.File(f, \"r\", decoder=self._codec.decode) as rdio_r:\n reader = rdio_r.get_reader(0, rdio_r.count())\n while True:\n record_buf = list(\n", "issue": "worker_test failure\n```\r\npython -m unittest elasticdl/worker/*_test.py\r\n/usr/local/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. 
In future, it will be treated as `np.float64 == np.dtype(float).type`.\r\n from ._conv import register_converters as _register_converters\r\n2019-05-21 13:57:07.262725: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA\r\nWARNING:tensorflow:From /usr/local/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:642: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nColocations handled automatically by placer.\r\nmust be str, not NoneType\r\nFLoss is 3.090042\r\nLoss is 1.4608976\r\nLoss is 0.913306\r\nLoss is 0.5969497\r\nLoss is 0.66515267\r\nLoss is 0.3935135\r\nLoss is 0.37774342\r\nLoss is 0.289928\r\n.\r\n======================================================================\r\nFAIL: test_distributed_train (elasticdl.worker.worker_test.WorkerTest)\r\n----------------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"/Users/l.zou/git/elasticdl/elasticdl/worker/worker_test.py\", line 96, in test_distributed_train\r\n self.assertTrue(res)\r\nAssertionError: False is not true\r\n\r\n----------------------------------------------------------------------\r\nRan 2 tests in 0.165s\r\n\r\n```\n", "before_files": [{"content": "import traceback\nimport tensorflow as tf\nassert tf.executing_eagerly()\n\nfrom google.protobuf import empty_pb2\nfrom tensorflow.python.ops import math_ops\nfrom elasticdl.proto import master_pb2_grpc\nfrom elasticdl.proto import master_pb2\nfrom elasticdl.common.ndarray import ndarray_to_tensor, tensor_to_ndarray\nfrom elasticdl.common.model_helper import load_user_model, build_model\nfrom edl_data.codec import TFExampleCodec\nfrom edl_data.codec import BytesCodec\nimport itertools\nimport recordio\n\n# the default max number of a minibatch retrain as its gradients are not accepted by master.\nDEFAULT_MAX_MINIBATCH_RETRAIN_NUM = 64\n\nclass Worker(object):\n \"\"\"ElasticDL worker\"\"\"\n\n def __init__(self,\n model_file,\n channel=None,\n max_retrain_num=DEFAULT_MAX_MINIBATCH_RETRAIN_NUM,\n codec_type=None):\n \"\"\"\n Arguments:\n model_module: A module to define the model\n channel: grpc channel\n max_retrain_num: max number of a minibatch retrain as its gradients are not accepted by master\n \"\"\"\n\n model_module = load_user_model(model_file)\n self._model = model_module.model\n self._feature_columns = model_module.feature_columns()\n self._all_columns = self._feature_columns + model_module.label_columns()\n build_model(self._model, self._feature_columns)\n self._input_fn = model_module.input_fn \n self._opt_fn = model_module.optimizer\n self._loss = model_module.loss\n\n if channel is None:\n self._stub = None\n else:\n self._stub = master_pb2_grpc.MasterStub(channel)\n self._max_retrain_num = max_retrain_num\n self._model_version = -1\n self._codec_type = codec_type\n\n def get_task(self):\n \"\"\"\n get task from master\n \"\"\"\n return self._stub.GetTask(empty_pb2.Empty())\n\n def get_model(self, min_version):\n \"\"\"\n get model from master, and update model_version\n \"\"\"\n req = master_pb2.GetModelRequest()\n req.min_version = min_version\n model = self._stub.GetModel(req)\n\n for var in self._model.trainable_variables:\n # Assumes all trainable variables exist in model.param.\n var.assign(\n tensor_to_ndarray(model.param[var.name]))\n self._model_version = model.version\n\n def 
report_task_result(self, task_id, err_msg):\n \"\"\"\n report task result to master\n \"\"\"\n report = master_pb2.ReportTaskResultRequest()\n report.task_id = task_id\n report.err_message = err_msg\n return self._stub.ReportTaskResult(report)\n\n def report_gradient(self, grads):\n \"\"\"\n report gradient to ps, return (accepted, model_version) from rpc call.\n \"\"\"\n req = master_pb2.ReportGradientRequest()\n for g, v in zip(grads, self._model.trainable_variables):\n req.gradient[v.name].CopyFrom(\n ndarray_to_tensor(g.numpy()))\n req.model_version = self._model_version\n res = self._stub.ReportGradient(req)\n return res.accepted, res.model_version\n\n def distributed_train(self):\n \"\"\"\n Distributed training.\n \"\"\"\n if self._codec_type == \"tf_example\":\n codec = TFExampleCodec(self._all_columns)\n elif self._codec_type == \"bytes\":\n codec = BytesCodec(self._all_columns)\n else:\n raise ValueError(\"invalid codec_type: \" + self._codec_type)\n while True:\n task = self.get_task()\n if not task.shard_file_name:\n # No more task\n break\n batch_size = task.minibatch_size\n err_msg = \"\"\n try:\n with recordio.File(task.shard_file_name, \"r\", decoder=codec.decode) as rdio_r:\n reader = rdio_r.get_reader(task.start, task.end)\n min_model_version = task.model_version\n while True:\n record_buf = list(\n itertools.islice(reader, 0, batch_size))\n if not record_buf:\n break\n\n for _ in range(self._max_retrain_num):\n # TODO: optimize the logic to avoid unnecessary get_model call.\n self.get_model(\n max(self._model_version, min_model_version))\n\n batch_input_data, batch_label = self._input_fn(record_buf)\n\n with tf.GradientTape() as tape:\n inputs = []\n for f_col in self._feature_columns:\n inputs.append(batch_input_data[f_col.key])\n if len(inputs) == 1:\n inputs = inputs[0]\n outputs = self._model.call(inputs, training=True)\n loss = self._loss(outputs, batch_label.flatten())\n\n # TODO: Add regularization loss if any,\n # which should be divided by the number of contributing workers.\n grads = tape.gradient(\n loss, self._model.trainable_variables)\n print(\"Loss is \", loss.numpy())\n\n accepted, min_model_version = self.report_gradient(\n grads)\n if accepted:\n break\n else:\n # Worker got stuck, fail the task.\n # TODO: stop the worker if it fails to make any progress for some time.\n raise RuntimeError(\"Worker got stuck\")\n\n\n except Exception as ex:\n err_msg = str(ex)\n traceback.print_exc()\n self.report_task_result(task.task_id, err_msg)\n\n def local_train(self, file_list, batch_size, epoch=1, kwargs=None):\n \"\"\"\n Local training for local testing. 
Must in eager mode.\n Argments:\n batch_size: batch size in training\n epoch: the number of epoch in training\n kwargs: contains a dict of parameters used in training\n \"\"\"\n optimizer = self._opt_fn()\n for _ in range(epoch):\n for f in file_list:\n with recordio.File(f, \"r\") as rdio_r:\n reader = rdio_r.get_reader(0, rdio_r.count())\n while True:\n record_buf = list(\n itertools.islice(reader, 0, batch_size))\n if not record_buf:\n break\n\n data, labels = self._input_fn(record_buf)\n\n with tf.GradientTape() as tape:\n inputs = []\n for f_col in self._feature_columns:\n inputs.append(data[f_col.key])\n if len(inputs) == 1:\n inputs = inputs[0]\n outputs = self._model.call(inputs, training=True)\n loss = self._loss(outputs, labels)\n\n # Add regularization loss if any.\n # Note: for distributed training, the regularization loss should\n # be divided by the number of contributing workers, which\n # might be difficult for elasticdl.\n if self._model.losses:\n loss += math_ops.add_n(self._model.losses)\n grads = tape.gradient(\n loss, self._model.trainable_variables)\n optimizer.apply_gradients(\n zip(grads, self._model.trainable_variables))\n print(\"Loss is \", loss.numpy())\n", "path": "elasticdl/worker/worker.py"}], "after_files": [{"content": "import traceback\nimport tensorflow as tf\nassert tf.executing_eagerly()\n\nfrom google.protobuf import empty_pb2\nfrom tensorflow.python.ops import math_ops\nfrom elasticdl.proto import master_pb2_grpc\nfrom elasticdl.proto import master_pb2\nfrom elasticdl.common.ndarray import ndarray_to_tensor, tensor_to_ndarray\nfrom elasticdl.common.model_helper import load_user_model, build_model\nfrom edl_data.codec import TFExampleCodec\nfrom edl_data.codec import BytesCodec\nimport itertools\nimport recordio\n\n# the default max number of a minibatch retrain as its gradients are not accepted by master.\nDEFAULT_MAX_MINIBATCH_RETRAIN_NUM = 64\n\nclass Worker(object):\n \"\"\"ElasticDL worker\"\"\"\n\n def __init__(self,\n model_file,\n channel=None,\n max_retrain_num=DEFAULT_MAX_MINIBATCH_RETRAIN_NUM,\n codec_type=None):\n \"\"\"\n Arguments:\n model_module: A module to define the model\n channel: grpc channel\n max_retrain_num: max number of a minibatch retrain as its gradients are not accepted by master\n \"\"\"\n model_module = load_user_model(model_file)\n self._model = model_module.model\n self._feature_columns = model_module.feature_columns()\n build_model(self._model, self._feature_columns)\n self._input_fn = model_module.input_fn \n self._opt_fn = model_module.optimizer\n self._loss = model_module.loss\n all_columns = self._feature_columns + model_module.label_columns()\n if codec_type == \"tf_example\":\n self._codec = TFExampleCodec(all_columns)\n elif codec_type == \"bytes\":\n self._codec = BytesCodec(all_columns)\n else:\n raise ValueError(\"invalid codec_type: \" + codec_type)\n\n\n if channel is None:\n self._stub = None\n else:\n self._stub = master_pb2_grpc.MasterStub(channel)\n self._max_retrain_num = max_retrain_num\n self._model_version = -1\n self._codec_type = codec_type\n\n def get_task(self):\n \"\"\"\n get task from master\n \"\"\"\n return self._stub.GetTask(empty_pb2.Empty())\n\n def get_model(self, min_version):\n \"\"\"\n get model from master, and update model_version\n \"\"\"\n req = master_pb2.GetModelRequest()\n req.min_version = min_version\n model = self._stub.GetModel(req)\n\n for var in self._model.trainable_variables:\n # Assumes all trainable variables exist in model.param.\n var.assign(\n 
tensor_to_ndarray(model.param[var.name]))\n self._model_version = model.version\n\n def report_task_result(self, task_id, err_msg):\n \"\"\"\n report task result to master\n \"\"\"\n report = master_pb2.ReportTaskResultRequest()\n report.task_id = task_id\n report.err_message = err_msg\n return self._stub.ReportTaskResult(report)\n\n def report_gradient(self, grads):\n \"\"\"\n report gradient to ps, return (accepted, model_version) from rpc call.\n \"\"\"\n req = master_pb2.ReportGradientRequest()\n for g, v in zip(grads, self._model.trainable_variables):\n req.gradient[v.name].CopyFrom(\n ndarray_to_tensor(g.numpy()))\n req.model_version = self._model_version\n res = self._stub.ReportGradient(req)\n return res.accepted, res.model_version\n\n def distributed_train(self):\n \"\"\"\n Distributed training.\n \"\"\"\n while True:\n task = self.get_task()\n if not task.shard_file_name:\n # No more task\n break\n batch_size = task.minibatch_size\n err_msg = \"\"\n try:\n with recordio.File(task.shard_file_name, \"r\", decoder=self._codec.decode) as rdio_r:\n reader = rdio_r.get_reader(task.start, task.end)\n min_model_version = task.model_version\n while True:\n record_buf = list(\n itertools.islice(reader, 0, batch_size))\n if not record_buf:\n break\n\n for _ in range(self._max_retrain_num):\n # TODO: optimize the logic to avoid unnecessary get_model call.\n self.get_model(\n max(self._model_version, min_model_version))\n\n batch_input_data, batch_label = self._input_fn(record_buf)\n\n with tf.GradientTape() as tape:\n inputs = []\n for f_col in self._feature_columns:\n inputs.append(batch_input_data[f_col.key])\n if len(inputs) == 1:\n inputs = inputs[0]\n outputs = self._model.call(inputs, training=True)\n loss = self._loss(outputs, batch_label.flatten())\n\n # TODO: Add regularization loss if any,\n # which should be divided by the number of contributing workers.\n grads = tape.gradient(\n loss, self._model.trainable_variables)\n print(\"Loss is \", loss.numpy())\n\n accepted, min_model_version = self.report_gradient(\n grads)\n if accepted:\n break\n else:\n # Worker got stuck, fail the task.\n # TODO: stop the worker if it fails to make any progress for some time.\n raise RuntimeError(\"Worker got stuck\")\n\n\n except Exception as ex:\n err_msg = str(ex)\n traceback.print_exc()\n self.report_task_result(task.task_id, err_msg)\n\n def local_train(self, file_list, batch_size, epoch=1, kwargs=None):\n \"\"\"\n Local training for local testing. 
Must in eager mode.\n Argments:\n batch_size: batch size in training\n epoch: the number of epoch in training\n kwargs: contains a dict of parameters used in training\n \"\"\"\n optimizer = self._opt_fn()\n for _ in range(epoch):\n for f in file_list:\n with recordio.File(f, \"r\", decoder=self._codec.decode) as rdio_r:\n reader = rdio_r.get_reader(0, rdio_r.count())\n while True:\n record_buf = list(\n itertools.islice(reader, 0, batch_size))\n if not record_buf:\n break\n\n data, labels = self._input_fn(record_buf)\n\n with tf.GradientTape() as tape:\n inputs = []\n for f_col in self._feature_columns:\n inputs.append(data[f_col.key])\n if len(inputs) == 1:\n inputs = inputs[0]\n outputs = self._model.call(inputs, training=True)\n loss = self._loss(outputs, labels)\n\n # Add regularization loss if any.\n # Note: for distributed training, the regularization loss should\n # be divided by the number of contributing workers, which\n # might be difficult for elasticdl.\n if self._model.losses:\n loss += math_ops.add_n(self._model.losses)\n grads = tape.gradient(\n loss, self._model.trainable_variables)\n optimizer.apply_gradients(\n zip(grads, self._model.trainable_variables))\n print(\"Loss is \", loss.numpy())\n", "path": "elasticdl/worker/worker.py"}]}
| 2,693 | 586 |
gh_patches_debug_22797
|
rasdani/github-patches
|
git_diff
|
quantumlib__Cirq-2681
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make cirq.GridQubit + cirq.GridQubit work
```cirq.GridQubit(a, b) + (c, d)``` works
```cirq.GridQubit(a, b) + cirq.GridQubit(c, d)``` does not work
The latter should act like the former.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq/devices/grid_qubit.py`
Content:
```
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 from typing import Iterable, List, Optional, Set, Tuple, TYPE_CHECKING
17
18 from cirq import ops, protocols
19
20 if TYPE_CHECKING:
21 import cirq
22
23
24 class GridQubit(ops.Qid):
25 """A qubit on a 2d square lattice.
26
27 GridQubits use row-major ordering:
28
29 GridQubit(0, 0) < GridQubit(0, 1) < GridQubit(1, 0) < GridQubit(1, 1)
30
31 New GridQubits can be constructed by adding or subtracting tuples
32
33 >>> cirq.GridQubit(2, 3) + (3, 1)
34 cirq.GridQubit(5, 4)
35
36 >>> cirq.GridQubit(2, 3) - (1, 2)
37 cirq.GridQubit(1, 1)
38 """
39
40 def __init__(self, row: int, col: int):
41 self.row = row
42 self.col = col
43
44 def _comparison_key(self):
45 return self.row, self.col
46
47 @property
48 def dimension(self) -> int:
49 return 2
50
51 def is_adjacent(self, other: 'cirq.Qid') -> bool:
52 """Determines if two qubits are adjacent qubits."""
53 return (isinstance(other, GridQubit) and
54 abs(self.row - other.row) + abs(self.col - other.col) == 1)
55
56 def neighbors(self,
57 qids: Optional[Iterable[ops.Qid]] = None) -> Set['GridQubit']:
58 """Returns qubits that are potential neighbors to this GridQubit
59
60 Args:
61 qids: optional Iterable of qubits to constrain neighbors to.
62 """
63 neighbors = set()
64 for q in [self + (0, 1), self + (1, 0), self + (-1, 0), self + (0, -1)]:
65 if qids is None or q in qids:
66 neighbors.add(q)
67 return neighbors
68
69 @staticmethod
70 def square(diameter: int, top: int = 0, left: int = 0) -> List['GridQubit']:
71 """Returns a square of GridQubits.
72
73 Args:
74 diameter: Length of a side of the square
75 top: Row number of the topmost row
76 left: Column number of the leftmost row
77
78 Returns:
79 A list of GridQubits filling in a square grid
80 """
81 return GridQubit.rect(diameter, diameter, top=top, left=left)
82
83 @staticmethod
84 def rect(rows: int, cols: int, top: int = 0,
85 left: int = 0) -> List['GridQubit']:
86 """Returns a rectangle of GridQubits.
87
88 Args:
89 rows: Number of rows in the rectangle
90 cols: Number of columns in the rectangle
91 top: Row number of the topmost row
92 left: Column number of the leftmost row
93
94 Returns:
95 A list of GridQubits filling in a rectangular grid
96 """
97 return [
98 GridQubit(row, col)
99 for row in range(top, top + rows)
100 for col in range(left, left + cols)
101 ]
102
103 @staticmethod
104 def from_diagram(diagram: str) -> List['GridQubit']:
105 """Parse ASCII art device layout into info about qubits and
106 connectivity. As an example, the below diagram will create a list of
107 GridQubits in a pyramid structure.
108 ---A---
109 --AAA--
110 -AAAAA-
111 AAAAAAA
112
113 You can use any character other than a hyphen to mark a qubit. As an
114 example, the qubits for the Bristlecone device could be represented by
115 the below diagram. This produces a diamond-shaped grid of qubits, and
116 qubits with the same letter correspond to the same readout line.
117
118 .....AB.....
119 ....ABCD....
120 ...ABCDEF...
121 ..ABCDEFGH..
122 .ABCDEFGHIJ.
123 ABCDEFGHIJKL
124 .CDEFGHIJKL.
125 ..EFGHIJKL..
126 ...GHIJKL...
127 ....IJKL....
128 .....KL.....
129
130 Args:
131 diagram: String representing the qubit layout. Each line represents
132 a row. Alphanumeric characters are assigned as qubits.
133 Dots ('.'), dashes ('-'), and spaces (' ') are treated as
134 empty locations in the grid. If diagram has characters other
135 than alphanumerics, spacers, and newlines ('\n'), an error will
136 be thrown. The top-left corner of the diagram will be have
137 coordinate (0,0).
138
139 Returns:
140 A list of GridQubits corresponding to the provided diagram
141
142 Raises:
143 ValueError: If the input string contains an invalid character.
144 """
145 lines = diagram.strip().split('\n')
146 no_qubit_characters = ['.', '-', ' ']
147 qubits = []
148 for row, line in enumerate(lines):
149 for col, c in enumerate(line.strip()):
150 if c not in no_qubit_characters:
151 if not c.isalnum():
152 raise ValueError("Input string has invalid character")
153 qubits.append(GridQubit(row, col))
154 return qubits
155
156 def __repr__(self):
157 return 'cirq.GridQubit({}, {})'.format(self.row, self.col)
158
159 def __str__(self):
160 return '({}, {})'.format(self.row, self.col)
161
162 def _json_dict_(self):
163 return protocols.obj_to_dict_helper(self, ['row', 'col'])
164
165 def __add__(self, other: Tuple[int, int]) -> 'GridQubit':
166 if not (isinstance(other, tuple) and len(other) == 2 and
167 all(isinstance(x, int) for x in other)):
168 raise TypeError(
169 'Can only add tuples of length 2 to GridQubits. Was {}'.format(
170 other))
171 return GridQubit(row=self.row + other[0], col=self.col + other[1])
172
173 def __sub__(self, other: Tuple[int, int]) -> 'GridQubit':
174 if not (isinstance(other, tuple) and len(other) == 2 and
175 all(isinstance(x, int) for x in other)):
176 raise TypeError(
177 'Can only subtract tuples of length 2 to GridQubits. Was {}'.
178 format(other))
179 return GridQubit(row=self.row - other[0], col=self.col - other[1])
180
181 def __radd__(self, other: Tuple[int, int]) -> 'GridQubit':
182 return self + other
183
184 def __rsub__(self, other: Tuple[int, int]) -> 'GridQubit':
185 return -self + other
186
187 def __neg__(self) -> 'GridQubit':
188 return GridQubit(row=-self.row, col=-self.col)
189
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cirq/devices/grid_qubit.py b/cirq/devices/grid_qubit.py
--- a/cirq/devices/grid_qubit.py
+++ b/cirq/devices/grid_qubit.py
@@ -163,6 +163,8 @@
return protocols.obj_to_dict_helper(self, ['row', 'col'])
def __add__(self, other: Tuple[int, int]) -> 'GridQubit':
+ if isinstance(other, GridQubit):
+ return GridQubit(row=self.row + other.row, col=self.col + other.col)
if not (isinstance(other, tuple) and len(other) == 2 and
all(isinstance(x, int) for x in other)):
raise TypeError(
@@ -171,6 +173,8 @@
return GridQubit(row=self.row + other[0], col=self.col + other[1])
def __sub__(self, other: Tuple[int, int]) -> 'GridQubit':
+ if isinstance(other, GridQubit):
+ return GridQubit(row=self.row - other.row, col=self.col - other.col)
if not (isinstance(other, tuple) and len(other) == 2 and
all(isinstance(x, int) for x in other)):
raise TypeError(
|
{"golden_diff": "diff --git a/cirq/devices/grid_qubit.py b/cirq/devices/grid_qubit.py\n--- a/cirq/devices/grid_qubit.py\n+++ b/cirq/devices/grid_qubit.py\n@@ -163,6 +163,8 @@\n return protocols.obj_to_dict_helper(self, ['row', 'col'])\n \n def __add__(self, other: Tuple[int, int]) -> 'GridQubit':\n+ if isinstance(other, GridQubit):\n+ return GridQubit(row=self.row + other.row, col=self.col + other.col)\n if not (isinstance(other, tuple) and len(other) == 2 and\n all(isinstance(x, int) for x in other)):\n raise TypeError(\n@@ -171,6 +173,8 @@\n return GridQubit(row=self.row + other[0], col=self.col + other[1])\n \n def __sub__(self, other: Tuple[int, int]) -> 'GridQubit':\n+ if isinstance(other, GridQubit):\n+ return GridQubit(row=self.row - other.row, col=self.col - other.col)\n if not (isinstance(other, tuple) and len(other) == 2 and\n all(isinstance(x, int) for x in other)):\n raise TypeError(\n", "issue": "Make cirq.GridQubit + cirq.GridQubit work\n```cirq.GridQubit(a, b) + (c, d)``` works\r\n\r\n```cirq.GridQubit(a, b) + cirq.GridQubit(c, d)``` does not work\r\n\r\nThe latter should act like the former.\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom typing import Iterable, List, Optional, Set, Tuple, TYPE_CHECKING\n\nfrom cirq import ops, protocols\n\nif TYPE_CHECKING:\n import cirq\n\n\nclass GridQubit(ops.Qid):\n \"\"\"A qubit on a 2d square lattice.\n\n GridQubits use row-major ordering:\n\n GridQubit(0, 0) < GridQubit(0, 1) < GridQubit(1, 0) < GridQubit(1, 1)\n\n New GridQubits can be constructed by adding or subtracting tuples\n\n >>> cirq.GridQubit(2, 3) + (3, 1)\n cirq.GridQubit(5, 4)\n\n >>> cirq.GridQubit(2, 3) - (1, 2)\n cirq.GridQubit(1, 1)\n \"\"\"\n\n def __init__(self, row: int, col: int):\n self.row = row\n self.col = col\n\n def _comparison_key(self):\n return self.row, self.col\n\n @property\n def dimension(self) -> int:\n return 2\n\n def is_adjacent(self, other: 'cirq.Qid') -> bool:\n \"\"\"Determines if two qubits are adjacent qubits.\"\"\"\n return (isinstance(other, GridQubit) and\n abs(self.row - other.row) + abs(self.col - other.col) == 1)\n\n def neighbors(self,\n qids: Optional[Iterable[ops.Qid]] = None) -> Set['GridQubit']:\n \"\"\"Returns qubits that are potential neighbors to this GridQubit\n\n Args:\n qids: optional Iterable of qubits to constrain neighbors to.\n \"\"\"\n neighbors = set()\n for q in [self + (0, 1), self + (1, 0), self + (-1, 0), self + (0, -1)]:\n if qids is None or q in qids:\n neighbors.add(q)\n return neighbors\n\n @staticmethod\n def square(diameter: int, top: int = 0, left: int = 0) -> List['GridQubit']:\n \"\"\"Returns a square of GridQubits.\n\n Args:\n diameter: Length of a side of the square\n top: Row number of the topmost row\n left: Column number of the leftmost row\n\n Returns:\n A list of GridQubits filling in a square grid\n \"\"\"\n return GridQubit.rect(diameter, diameter, top=top, left=left)\n\n @staticmethod\n def rect(rows: int, cols: 
int, top: int = 0,\n left: int = 0) -> List['GridQubit']:\n \"\"\"Returns a rectangle of GridQubits.\n\n Args:\n rows: Number of rows in the rectangle\n cols: Number of columns in the rectangle\n top: Row number of the topmost row\n left: Column number of the leftmost row\n\n Returns:\n A list of GridQubits filling in a rectangular grid\n \"\"\"\n return [\n GridQubit(row, col)\n for row in range(top, top + rows)\n for col in range(left, left + cols)\n ]\n\n @staticmethod\n def from_diagram(diagram: str) -> List['GridQubit']:\n \"\"\"Parse ASCII art device layout into info about qubits and\n connectivity. As an example, the below diagram will create a list of\n GridQubits in a pyramid structure.\n ---A---\n --AAA--\n -AAAAA-\n AAAAAAA\n\n You can use any character other than a hyphen to mark a qubit. As an\n example, the qubits for the Bristlecone device could be represented by\n the below diagram. This produces a diamond-shaped grid of qubits, and\n qubits with the same letter correspond to the same readout line.\n\n .....AB.....\n ....ABCD....\n ...ABCDEF...\n ..ABCDEFGH..\n .ABCDEFGHIJ.\n ABCDEFGHIJKL\n .CDEFGHIJKL.\n ..EFGHIJKL..\n ...GHIJKL...\n ....IJKL....\n .....KL.....\n\n Args:\n diagram: String representing the qubit layout. Each line represents\n a row. Alphanumeric characters are assigned as qubits.\n Dots ('.'), dashes ('-'), and spaces (' ') are treated as\n empty locations in the grid. If diagram has characters other\n than alphanumerics, spacers, and newlines ('\\n'), an error will\n be thrown. The top-left corner of the diagram will be have\n coordinate (0,0).\n\n Returns:\n A list of GridQubits corresponding to the provided diagram\n\n Raises:\n ValueError: If the input string contains an invalid character.\n \"\"\"\n lines = diagram.strip().split('\\n')\n no_qubit_characters = ['.', '-', ' ']\n qubits = []\n for row, line in enumerate(lines):\n for col, c in enumerate(line.strip()):\n if c not in no_qubit_characters:\n if not c.isalnum():\n raise ValueError(\"Input string has invalid character\")\n qubits.append(GridQubit(row, col))\n return qubits\n\n def __repr__(self):\n return 'cirq.GridQubit({}, {})'.format(self.row, self.col)\n\n def __str__(self):\n return '({}, {})'.format(self.row, self.col)\n\n def _json_dict_(self):\n return protocols.obj_to_dict_helper(self, ['row', 'col'])\n\n def __add__(self, other: Tuple[int, int]) -> 'GridQubit':\n if not (isinstance(other, tuple) and len(other) == 2 and\n all(isinstance(x, int) for x in other)):\n raise TypeError(\n 'Can only add tuples of length 2 to GridQubits. Was {}'.format(\n other))\n return GridQubit(row=self.row + other[0], col=self.col + other[1])\n\n def __sub__(self, other: Tuple[int, int]) -> 'GridQubit':\n if not (isinstance(other, tuple) and len(other) == 2 and\n all(isinstance(x, int) for x in other)):\n raise TypeError(\n 'Can only subtract tuples of length 2 to GridQubits. 
Was {}'.\n format(other))\n return GridQubit(row=self.row - other[0], col=self.col - other[1])\n\n def __radd__(self, other: Tuple[int, int]) -> 'GridQubit':\n return self + other\n\n def __rsub__(self, other: Tuple[int, int]) -> 'GridQubit':\n return -self + other\n\n def __neg__(self) -> 'GridQubit':\n return GridQubit(row=-self.row, col=-self.col)\n", "path": "cirq/devices/grid_qubit.py"}], "after_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom typing import Iterable, List, Optional, Set, Tuple, TYPE_CHECKING\n\nfrom cirq import ops, protocols\n\nif TYPE_CHECKING:\n import cirq\n\n\nclass GridQubit(ops.Qid):\n \"\"\"A qubit on a 2d square lattice.\n\n GridQubits use row-major ordering:\n\n GridQubit(0, 0) < GridQubit(0, 1) < GridQubit(1, 0) < GridQubit(1, 1)\n\n New GridQubits can be constructed by adding or subtracting tuples\n\n >>> cirq.GridQubit(2, 3) + (3, 1)\n cirq.GridQubit(5, 4)\n\n >>> cirq.GridQubit(2, 3) - (1, 2)\n cirq.GridQubit(1, 1)\n \"\"\"\n\n def __init__(self, row: int, col: int):\n self.row = row\n self.col = col\n\n def _comparison_key(self):\n return self.row, self.col\n\n @property\n def dimension(self) -> int:\n return 2\n\n def is_adjacent(self, other: 'cirq.Qid') -> bool:\n \"\"\"Determines if two qubits are adjacent qubits.\"\"\"\n return (isinstance(other, GridQubit) and\n abs(self.row - other.row) + abs(self.col - other.col) == 1)\n\n def neighbors(self,\n qids: Optional[Iterable[ops.Qid]] = None) -> Set['GridQubit']:\n \"\"\"Returns qubits that are potential neighbors to this GridQubit\n\n Args:\n qids: optional Iterable of qubits to constrain neighbors to.\n \"\"\"\n neighbors = set()\n for q in [self + (0, 1), self + (1, 0), self + (-1, 0), self + (0, -1)]:\n if qids is None or q in qids:\n neighbors.add(q)\n return neighbors\n\n @staticmethod\n def square(diameter: int, top: int = 0, left: int = 0) -> List['GridQubit']:\n \"\"\"Returns a square of GridQubits.\n\n Args:\n diameter: Length of a side of the square\n top: Row number of the topmost row\n left: Column number of the leftmost row\n\n Returns:\n A list of GridQubits filling in a square grid\n \"\"\"\n return GridQubit.rect(diameter, diameter, top=top, left=left)\n\n @staticmethod\n def rect(rows: int, cols: int, top: int = 0,\n left: int = 0) -> List['GridQubit']:\n \"\"\"Returns a rectangle of GridQubits.\n\n Args:\n rows: Number of rows in the rectangle\n cols: Number of columns in the rectangle\n top: Row number of the topmost row\n left: Column number of the leftmost row\n\n Returns:\n A list of GridQubits filling in a rectangular grid\n \"\"\"\n return [\n GridQubit(row, col)\n for row in range(top, top + rows)\n for col in range(left, left + cols)\n ]\n\n @staticmethod\n def from_diagram(diagram: str) -> List['GridQubit']:\n \"\"\"Parse ASCII art device layout into info about qubits and\n connectivity. 
As an example, the below diagram will create a list of\n GridQubits in a pyramid structure.\n ---A---\n --AAA--\n -AAAAA-\n AAAAAAA\n\n You can use any character other than a hyphen to mark a qubit. As an\n example, the qubits for the Bristlecone device could be represented by\n the below diagram. This produces a diamond-shaped grid of qubits, and\n qubits with the same letter correspond to the same readout line.\n\n .....AB.....\n ....ABCD....\n ...ABCDEF...\n ..ABCDEFGH..\n .ABCDEFGHIJ.\n ABCDEFGHIJKL\n .CDEFGHIJKL.\n ..EFGHIJKL..\n ...GHIJKL...\n ....IJKL....\n .....KL.....\n\n Args:\n diagram: String representing the qubit layout. Each line represents\n a row. Alphanumeric characters are assigned as qubits.\n Dots ('.'), dashes ('-'), and spaces (' ') are treated as\n empty locations in the grid. If diagram has characters other\n than alphanumerics, spacers, and newlines ('\\n'), an error will\n be thrown. The top-left corner of the diagram will be have\n coordinate (0,0).\n\n Returns:\n A list of GridQubits corresponding to the provided diagram\n\n Raises:\n ValueError: If the input string contains an invalid character.\n \"\"\"\n lines = diagram.strip().split('\\n')\n no_qubit_characters = ['.', '-', ' ']\n qubits = []\n for row, line in enumerate(lines):\n for col, c in enumerate(line.strip()):\n if c not in no_qubit_characters:\n if not c.isalnum():\n raise ValueError(\"Input string has invalid character\")\n qubits.append(GridQubit(row, col))\n return qubits\n\n def __repr__(self):\n return 'cirq.GridQubit({}, {})'.format(self.row, self.col)\n\n def __str__(self):\n return '({}, {})'.format(self.row, self.col)\n\n def _json_dict_(self):\n return protocols.obj_to_dict_helper(self, ['row', 'col'])\n\n def __add__(self, other: Tuple[int, int]) -> 'GridQubit':\n if isinstance(other, GridQubit):\n return GridQubit(row=self.row + other.row, col=self.col + other.col)\n if not (isinstance(other, tuple) and len(other) == 2 and\n all(isinstance(x, int) for x in other)):\n raise TypeError(\n 'Can only add tuples of length 2 to GridQubits. Was {}'.format(\n other))\n return GridQubit(row=self.row + other[0], col=self.col + other[1])\n\n def __sub__(self, other: Tuple[int, int]) -> 'GridQubit':\n if isinstance(other, GridQubit):\n return GridQubit(row=self.row - other.row, col=self.col - other.col)\n if not (isinstance(other, tuple) and len(other) == 2 and\n all(isinstance(x, int) for x in other)):\n raise TypeError(\n 'Can only subtract tuples of length 2 to GridQubits. Was {}'.\n format(other))\n return GridQubit(row=self.row - other[0], col=self.col - other[1])\n\n def __radd__(self, other: Tuple[int, int]) -> 'GridQubit':\n return self + other\n\n def __rsub__(self, other: Tuple[int, int]) -> 'GridQubit':\n return -self + other\n\n def __neg__(self) -> 'GridQubit':\n return GridQubit(row=-self.row, col=-self.col)\n", "path": "cirq/devices/grid_qubit.py"}]}
| 2,438 | 276 |
gh_patches_debug_26859
|
rasdani/github-patches
|
git_diff
|
SeldonIO__MLServer-850
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MLServer to hide http health request logs to avoid polluting the logs
As part of the Seldon Core addition https://github.com/SeldonIO/seldon-core/pull/4028, which moves the TCP ready checks into proper HTTP request ready checks to `v2/health/ready`, there is now a lot of noise from the readiness checks every 5 seconds. We should explore ways to avoid this noise, perhaps making it completely silent by default, or eventually, once the Prometheus server is created as a separate server, this could also be added there (and both of them could be muted).

--- END ISSUE ---
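
For illustration, here is a minimal sketch of the kind of change the issue is asking for: attaching a logging filter to uvicorn's access logger so that GET requests to the health endpoints are dropped. The class and function names are placeholders, and the wiring into MLServer's settings is omitted.

```python
import logging


class HealthEndpointFilter(logging.Filter):
    """Drop access-log records produced by the v2 health probes."""

    def filter(self, record: logging.LogRecord) -> bool:
        # uvicorn's access logger passes (client_addr, method, path, ...) as args.
        if not isinstance(record.args, tuple) or len(record.args) < 3:
            return True
        method, path = record.args[1], record.args[2]
        if method == "GET" and path in ("/v2/health/live", "/v2/health/ready"):
            return False  # silence liveness/readiness checks
        return True


def disable_health_access_logs() -> None:
    logging.getLogger("uvicorn.access").addFilter(HealthEndpointFilter())
```

Gating this behind the existing debug setting would keep access logs verbose for local development while silencing the probes in production.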
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mlserver/rest/server.py`
Content:
```
1 import uvicorn
2
3 from ..settings import Settings
4 from ..handlers import DataPlane, ModelRepositoryHandlers, get_custom_handlers
5 from ..model import MLModel
6
7 from .utils import matches
8 from .app import create_app
9 from .logging import logger
10 from typing import Optional
11
12
13 class _NoSignalServer(uvicorn.Server):
14 def install_signal_handlers(self):
15 pass
16
17
18 class RESTServer:
19 def __init__(
20 self,
21 settings: Settings,
22 data_plane: DataPlane,
23 model_repository_handlers: ModelRepositoryHandlers,
24 ):
25 self._settings = settings
26 self._data_plane = data_plane
27 self._model_repository_handlers = model_repository_handlers
28 self._app = create_app(
29 self._settings,
30 data_plane=self._data_plane,
31 model_repository_handlers=self._model_repository_handlers,
32 )
33
34 async def add_custom_handlers(self, model: MLModel) -> MLModel:
35 handlers = get_custom_handlers(model)
36 for custom_handler, handler_method in handlers:
37 self._app.add_api_route(
38 custom_handler.rest_path,
39 handler_method,
40 methods=[custom_handler.rest_method],
41 )
42
43 return model
44
45 async def delete_custom_handlers(self, model: MLModel) -> MLModel:
46 handlers = get_custom_handlers(model)
47 if len(handlers) == 0:
48 return model
49
50 # NOTE: Loop in reverse, so that it's quicker to find all the recently
51 # added routes and we can remove routes on-the-fly
52 for i, route in reversed(list(enumerate(self._app.routes))):
53 for j, (custom_handler, handler_method) in enumerate(handlers):
54 if matches(route, custom_handler, handler_method): # type: ignore
55 self._app.routes.pop(i)
56 handlers.pop(j)
57
58 return model
59
60 async def start(self):
61 cfg = self._get_config()
62 self._server = _NoSignalServer(cfg)
63 await self._server.serve()
64
65 def _get_config(self):
66 kwargs = {}
67
68 if self._settings._custom_rest_server_settings:
69 logger.warning(
70 "REST custom configuration is out of support. Use as your own risk"
71 )
72 kwargs.update(self._settings._custom_rest_server_settings)
73
74 kwargs.update(
75 {
76 "host": self._settings.host,
77 "port": self._settings.http_port,
78 "root_path": self._settings.root_path,
79 "access_log": self._settings.debug,
80 }
81 )
82
83 if self._settings.logging_settings:
84 # If not None, use ours. Otherwise, let Uvicorn fall back on its
85 # own config.
86 kwargs.update({"log_config": self._settings.logging_settings})
87
88 return uvicorn.Config(self._app, **kwargs)
89
90 async def stop(self, sig: Optional[int] = None):
91 self._server.handle_exit(sig=sig, frame=None)
92
```
Path: `mlserver/rest/logging.py`
Content:
```
1 import logging
2
3 loggerName = "mlserver.rest"
4 logger = logging.getLogger(loggerName)
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mlserver/rest/logging.py b/mlserver/rest/logging.py
--- a/mlserver/rest/logging.py
+++ b/mlserver/rest/logging.py
@@ -1,4 +1,35 @@
import logging
+
+class HealthEndpointFilter(logging.Filter):
+ """
+ Filter to avoid logging health endpoints.
+ From:
+ https://github.com/encode/starlette/issues/864#issuecomment-653076434
+ """
+
+ def filter(self, record: logging.LogRecord) -> bool:
+ if not isinstance(record.args, tuple):
+ return True
+
+ if len(record.args) < 3:
+ return True
+
+ request_method = record.args[1]
+ query_string = record.args[2]
+ if request_method != "GET":
+ return True
+
+ if query_string in ["/v2/health/live", "/v2/health/ready"]:
+ return False
+
+ return True
+
+
+def disable_health_access_logs() -> None:
+ uvicorn_logger = logging.getLogger("uvicorn.access")
+ uvicorn_logger.addFilter(HealthEndpointFilter())
+
+
loggerName = "mlserver.rest"
logger = logging.getLogger(loggerName)
diff --git a/mlserver/rest/server.py b/mlserver/rest/server.py
--- a/mlserver/rest/server.py
+++ b/mlserver/rest/server.py
@@ -6,7 +6,7 @@
from .utils import matches
from .app import create_app
-from .logging import logger
+from .logging import logger, disable_health_access_logs
from typing import Optional
@@ -60,6 +60,9 @@
async def start(self):
cfg = self._get_config()
self._server = _NoSignalServer(cfg)
+ if not self._settings.debug:
+ disable_health_access_logs()
+
await self._server.serve()
def _get_config(self):
|
{"golden_diff": "diff --git a/mlserver/rest/logging.py b/mlserver/rest/logging.py\n--- a/mlserver/rest/logging.py\n+++ b/mlserver/rest/logging.py\n@@ -1,4 +1,35 @@\n import logging\n \n+\n+class HealthEndpointFilter(logging.Filter):\n+ \"\"\"\n+ Filter to avoid logging health endpoints.\n+ From:\n+ https://github.com/encode/starlette/issues/864#issuecomment-653076434\n+ \"\"\"\n+\n+ def filter(self, record: logging.LogRecord) -> bool:\n+ if not isinstance(record.args, tuple):\n+ return True\n+\n+ if len(record.args) < 3:\n+ return True\n+\n+ request_method = record.args[1]\n+ query_string = record.args[2]\n+ if request_method != \"GET\":\n+ return True\n+\n+ if query_string in [\"/v2/health/live\", \"/v2/health/ready\"]:\n+ return False\n+\n+ return True\n+\n+\n+def disable_health_access_logs() -> None:\n+ uvicorn_logger = logging.getLogger(\"uvicorn.access\")\n+ uvicorn_logger.addFilter(HealthEndpointFilter())\n+\n+\n loggerName = \"mlserver.rest\"\n logger = logging.getLogger(loggerName)\ndiff --git a/mlserver/rest/server.py b/mlserver/rest/server.py\n--- a/mlserver/rest/server.py\n+++ b/mlserver/rest/server.py\n@@ -6,7 +6,7 @@\n \n from .utils import matches\n from .app import create_app\n-from .logging import logger\n+from .logging import logger, disable_health_access_logs\n from typing import Optional\n \n \n@@ -60,6 +60,9 @@\n async def start(self):\n cfg = self._get_config()\n self._server = _NoSignalServer(cfg)\n+ if not self._settings.debug:\n+ disable_health_access_logs()\n+\n await self._server.serve()\n \n def _get_config(self):\n", "issue": "MLServer to hide http health request logs to avoid polluting the logs\nAs part of the Seldon Core addition https://github.com/SeldonIO/seldon-core/pull/4028 which moves the TCP ready checks into proper HTTP request ready checks to `v2/health/ready` there is now a lot of noise from the readiness checks every 5 seconds. 
We should explore ways in which we avoid this noise, perhaps making it completely silent by default, or eventually once the prometheus server is created on a separate server this could also be added (And both of them could be muted)\r\n\r\n\r\n\n", "before_files": [{"content": "import uvicorn\n\nfrom ..settings import Settings\nfrom ..handlers import DataPlane, ModelRepositoryHandlers, get_custom_handlers\nfrom ..model import MLModel\n\nfrom .utils import matches\nfrom .app import create_app\nfrom .logging import logger\nfrom typing import Optional\n\n\nclass _NoSignalServer(uvicorn.Server):\n def install_signal_handlers(self):\n pass\n\n\nclass RESTServer:\n def __init__(\n self,\n settings: Settings,\n data_plane: DataPlane,\n model_repository_handlers: ModelRepositoryHandlers,\n ):\n self._settings = settings\n self._data_plane = data_plane\n self._model_repository_handlers = model_repository_handlers\n self._app = create_app(\n self._settings,\n data_plane=self._data_plane,\n model_repository_handlers=self._model_repository_handlers,\n )\n\n async def add_custom_handlers(self, model: MLModel) -> MLModel:\n handlers = get_custom_handlers(model)\n for custom_handler, handler_method in handlers:\n self._app.add_api_route(\n custom_handler.rest_path,\n handler_method,\n methods=[custom_handler.rest_method],\n )\n\n return model\n\n async def delete_custom_handlers(self, model: MLModel) -> MLModel:\n handlers = get_custom_handlers(model)\n if len(handlers) == 0:\n return model\n\n # NOTE: Loop in reverse, so that it's quicker to find all the recently\n # added routes and we can remove routes on-the-fly\n for i, route in reversed(list(enumerate(self._app.routes))):\n for j, (custom_handler, handler_method) in enumerate(handlers):\n if matches(route, custom_handler, handler_method): # type: ignore\n self._app.routes.pop(i)\n handlers.pop(j)\n\n return model\n\n async def start(self):\n cfg = self._get_config()\n self._server = _NoSignalServer(cfg)\n await self._server.serve()\n\n def _get_config(self):\n kwargs = {}\n\n if self._settings._custom_rest_server_settings:\n logger.warning(\n \"REST custom configuration is out of support. Use as your own risk\"\n )\n kwargs.update(self._settings._custom_rest_server_settings)\n\n kwargs.update(\n {\n \"host\": self._settings.host,\n \"port\": self._settings.http_port,\n \"root_path\": self._settings.root_path,\n \"access_log\": self._settings.debug,\n }\n )\n\n if self._settings.logging_settings:\n # If not None, use ours. 
Otherwise, let Uvicorn fall back on its\n # own config.\n kwargs.update({\"log_config\": self._settings.logging_settings})\n\n return uvicorn.Config(self._app, **kwargs)\n\n async def stop(self, sig: Optional[int] = None):\n self._server.handle_exit(sig=sig, frame=None)\n", "path": "mlserver/rest/server.py"}, {"content": "import logging\n\nloggerName = \"mlserver.rest\"\nlogger = logging.getLogger(loggerName)\n", "path": "mlserver/rest/logging.py"}], "after_files": [{"content": "import uvicorn\n\nfrom ..settings import Settings\nfrom ..handlers import DataPlane, ModelRepositoryHandlers, get_custom_handlers\nfrom ..model import MLModel\n\nfrom .utils import matches\nfrom .app import create_app\nfrom .logging import logger, disable_health_access_logs\nfrom typing import Optional\n\n\nclass _NoSignalServer(uvicorn.Server):\n def install_signal_handlers(self):\n pass\n\n\nclass RESTServer:\n def __init__(\n self,\n settings: Settings,\n data_plane: DataPlane,\n model_repository_handlers: ModelRepositoryHandlers,\n ):\n self._settings = settings\n self._data_plane = data_plane\n self._model_repository_handlers = model_repository_handlers\n self._app = create_app(\n self._settings,\n data_plane=self._data_plane,\n model_repository_handlers=self._model_repository_handlers,\n )\n\n async def add_custom_handlers(self, model: MLModel) -> MLModel:\n handlers = get_custom_handlers(model)\n for custom_handler, handler_method in handlers:\n self._app.add_api_route(\n custom_handler.rest_path,\n handler_method,\n methods=[custom_handler.rest_method],\n )\n\n return model\n\n async def delete_custom_handlers(self, model: MLModel) -> MLModel:\n handlers = get_custom_handlers(model)\n if len(handlers) == 0:\n return model\n\n # NOTE: Loop in reverse, so that it's quicker to find all the recently\n # added routes and we can remove routes on-the-fly\n for i, route in reversed(list(enumerate(self._app.routes))):\n for j, (custom_handler, handler_method) in enumerate(handlers):\n if matches(route, custom_handler, handler_method): # type: ignore\n self._app.routes.pop(i)\n handlers.pop(j)\n\n return model\n\n async def start(self):\n cfg = self._get_config()\n self._server = _NoSignalServer(cfg)\n if not self._settings.debug:\n disable_health_access_logs()\n\n await self._server.serve()\n\n def _get_config(self):\n kwargs = {}\n\n if self._settings._custom_rest_server_settings:\n logger.warning(\n \"REST custom configuration is out of support. Use as your own risk\"\n )\n kwargs.update(self._settings._custom_rest_server_settings)\n\n kwargs.update(\n {\n \"host\": self._settings.host,\n \"port\": self._settings.http_port,\n \"root_path\": self._settings.root_path,\n \"access_log\": self._settings.debug,\n }\n )\n\n if self._settings.logging_settings:\n # If not None, use ours. 
Otherwise, let Uvicorn fall back on its\n # own config.\n kwargs.update({\"log_config\": self._settings.logging_settings})\n\n return uvicorn.Config(self._app, **kwargs)\n\n async def stop(self, sig: Optional[int] = None):\n self._server.handle_exit(sig=sig, frame=None)\n", "path": "mlserver/rest/server.py"}, {"content": "import logging\n\n\nclass HealthEndpointFilter(logging.Filter):\n \"\"\"\n Filter to avoid logging health endpoints.\n From:\n https://github.com/encode/starlette/issues/864#issuecomment-653076434\n \"\"\"\n\n def filter(self, record: logging.LogRecord) -> bool:\n if not isinstance(record.args, tuple):\n return True\n\n if len(record.args) < 3:\n return True\n\n request_method = record.args[1]\n query_string = record.args[2]\n if request_method != \"GET\":\n return True\n\n if query_string in [\"/v2/health/live\", \"/v2/health/ready\"]:\n return False\n\n return True\n\n\ndef disable_health_access_logs() -> None:\n uvicorn_logger = logging.getLogger(\"uvicorn.access\")\n uvicorn_logger.addFilter(HealthEndpointFilter())\n\n\nloggerName = \"mlserver.rest\"\nlogger = logging.getLogger(loggerName)\n", "path": "mlserver/rest/logging.py"}]}
| 1,279 | 424 |
gh_patches_debug_23720
|
rasdani/github-patches
|
git_diff
|
ivy-llc__ivy-22870
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
lstsq
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/jax/numpy/linalg.py`
Content:
```
1 # local
2 import ivy
3 from ivy.functional.frontends.jax import Array
4 from ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back
5 from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
6 from ivy.functional.frontends.jax.numpy import promote_types_of_jax_inputs
7
8
9 @to_ivy_arrays_and_back
10 def cholesky(a):
11 return ivy.cholesky(a)
12
13
14 @to_ivy_arrays_and_back
15 def cond(x, p=None):
16 return ivy.cond(x, p=p)
17
18
19 @to_ivy_arrays_and_back
20 def det(a):
21 return ivy.det(a)
22
23
24 @to_ivy_arrays_and_back
25 def eig(a):
26 return ivy.eig(a)
27
28
29 @to_ivy_arrays_and_back
30 def eigh(a, UPLO="L", symmetrize_input=True):
31 def symmetrize(x):
32 # TODO : Take Hermitian transpose after complex numbers added
33 return (x + ivy.swapaxes(x, -1, -2)) / 2
34
35 if symmetrize_input:
36 a = symmetrize(a)
37
38 return ivy.eigh(a, UPLO=UPLO)
39
40
41 @to_ivy_arrays_and_back
42 def eigvals(a):
43 return ivy.eigvals(a)
44
45
46 @to_ivy_arrays_and_back
47 def eigvalsh(a, UPLO="L"):
48 return ivy.eigvalsh(a, UPLO=UPLO)
49
50
51 @to_ivy_arrays_and_back
52 def inv(a):
53 return ivy.inv(a)
54
55
56 @to_ivy_arrays_and_back
57 def matrix_power(a, n):
58 return ivy.matrix_power(a, n)
59
60
61 @to_ivy_arrays_and_back
62 def matrix_rank(M, tol=None):
63 return ivy.matrix_rank(M, atol=tol)
64
65
66 @to_ivy_arrays_and_back
67 def multi_dot(arrays, *, precision=None):
68 return ivy.multi_dot(arrays)
69
70
71 @to_ivy_arrays_and_back
72 @with_supported_dtypes(
73 {"0.4.14 and below": ("float32", "float64")},
74 "jax",
75 )
76 def norm(x, ord=None, axis=None, keepdims=False):
77 if ord is None:
78 ord = 2
79 if type(axis) in [list, tuple] and len(axis) == 2:
80 return Array(ivy.matrix_norm(x, ord=ord, axis=axis, keepdims=keepdims))
81 return Array(ivy.vector_norm(x, ord=ord, axis=axis, keepdims=keepdims))
82
83
84 @to_ivy_arrays_and_back
85 def pinv(a, rcond=None):
86 return ivy.pinv(a, rtol=rcond)
87
88
89 @to_ivy_arrays_and_back
90 def qr(a, mode="reduced"):
91 return ivy.qr(a, mode=mode)
92
93
94 @to_ivy_arrays_and_back
95 def slogdet(a, method=None):
96 return ivy.slogdet(a)
97
98
99 @to_ivy_arrays_and_back
100 def solve(a, b):
101 return ivy.solve(a, b)
102
103
104 @to_ivy_arrays_and_back
105 def svd(a, /, *, full_matrices=True, compute_uv=True, hermitian=None):
106 if not compute_uv:
107 return ivy.svdvals(a)
108 return ivy.svd(a, full_matrices=full_matrices)
109
110
111 @to_ivy_arrays_and_back
112 @with_unsupported_dtypes({"0.4.14 and below": ("float16", "bfloat16")}, "jax")
113 def tensorinv(a, ind=2):
114 old_shape = ivy.shape(a)
115 prod = 1
116 if ind > 0:
117 invshape = old_shape[ind:] + old_shape[:ind]
118 for k in old_shape[ind:]:
119 prod *= k
120 else:
121 raise ValueError("Invalid ind argument.")
122 a = ivy.reshape(a, shape=(prod, -1))
123 ia = ivy.inv(a)
124 new_shape = tuple([*invshape])
125 return Array(ivy.reshape(ia, shape=new_shape))
126
127
128 @to_ivy_arrays_and_back
129 def tensorsolve(a, b, axes=None):
130 a, b = promote_types_of_jax_inputs(a, b)
131 return ivy.tensorsolve(a, b, axes=axes)
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ivy/functional/frontends/jax/numpy/linalg.py b/ivy/functional/frontends/jax/numpy/linalg.py
--- a/ivy/functional/frontends/jax/numpy/linalg.py
+++ b/ivy/functional/frontends/jax/numpy/linalg.py
@@ -4,6 +4,7 @@
from ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back
from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
from ivy.functional.frontends.jax.numpy import promote_types_of_jax_inputs
+from ivy.functional.frontends.numpy.linalg import lstsq as numpy_lstsq
@to_ivy_arrays_and_back
@@ -53,6 +54,23 @@
return ivy.inv(a)
+# TODO: replace this with function from API
+# As the composition provides numerically unstable results
+@to_ivy_arrays_and_back
+def lstsq(a, b, rcond=None, *, numpy_resid=False):
+ if numpy_resid:
+ return numpy_lstsq(a, b, rcond=rcond)
+ least_squares_solution = ivy.matmul(
+ ivy.pinv(a, rtol=1e-15).astype(ivy.float64), b.astype(ivy.float64)
+ )
+ residuals = ivy.sum((b - ivy.matmul(a, least_squares_solution)) ** 2).astype(
+ ivy.float64
+ )
+ svd_values = ivy.svd(a, compute_uv=False)
+ rank = ivy.matrix_rank(a).astype(ivy.int32)
+ return (least_squares_solution, residuals, rank, svd_values[0])
+
+
@to_ivy_arrays_and_back
def matrix_power(a, n):
return ivy.matrix_power(a, n)
|
{"golden_diff": "diff --git a/ivy/functional/frontends/jax/numpy/linalg.py b/ivy/functional/frontends/jax/numpy/linalg.py\n--- a/ivy/functional/frontends/jax/numpy/linalg.py\n+++ b/ivy/functional/frontends/jax/numpy/linalg.py\n@@ -4,6 +4,7 @@\n from ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back\n from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\n from ivy.functional.frontends.jax.numpy import promote_types_of_jax_inputs\n+from ivy.functional.frontends.numpy.linalg import lstsq as numpy_lstsq\n \n \n @to_ivy_arrays_and_back\n@@ -53,6 +54,23 @@\n return ivy.inv(a)\n \n \n+# TODO: replace this with function from API\n+# As the composition provides numerically unstable results\n+@to_ivy_arrays_and_back\n+def lstsq(a, b, rcond=None, *, numpy_resid=False):\n+ if numpy_resid:\n+ return numpy_lstsq(a, b, rcond=rcond)\n+ least_squares_solution = ivy.matmul(\n+ ivy.pinv(a, rtol=1e-15).astype(ivy.float64), b.astype(ivy.float64)\n+ )\n+ residuals = ivy.sum((b - ivy.matmul(a, least_squares_solution)) ** 2).astype(\n+ ivy.float64\n+ )\n+ svd_values = ivy.svd(a, compute_uv=False)\n+ rank = ivy.matrix_rank(a).astype(ivy.int32)\n+ return (least_squares_solution, residuals, rank, svd_values[0])\n+\n+\n @to_ivy_arrays_and_back\n def matrix_power(a, n):\n return ivy.matrix_power(a, n)\n", "issue": "lstsq\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax import Array\nfrom ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.jax.numpy import promote_types_of_jax_inputs\n\n\n@to_ivy_arrays_and_back\ndef cholesky(a):\n return ivy.cholesky(a)\n\n\n@to_ivy_arrays_and_back\ndef cond(x, p=None):\n return ivy.cond(x, p=p)\n\n\n@to_ivy_arrays_and_back\ndef det(a):\n return ivy.det(a)\n\n\n@to_ivy_arrays_and_back\ndef eig(a):\n return ivy.eig(a)\n\n\n@to_ivy_arrays_and_back\ndef eigh(a, UPLO=\"L\", symmetrize_input=True):\n def symmetrize(x):\n # TODO : Take Hermitian transpose after complex numbers added\n return (x + ivy.swapaxes(x, -1, -2)) / 2\n\n if symmetrize_input:\n a = symmetrize(a)\n\n return ivy.eigh(a, UPLO=UPLO)\n\n\n@to_ivy_arrays_and_back\ndef eigvals(a):\n return ivy.eigvals(a)\n\n\n@to_ivy_arrays_and_back\ndef eigvalsh(a, UPLO=\"L\"):\n return ivy.eigvalsh(a, UPLO=UPLO)\n\n\n@to_ivy_arrays_and_back\ndef inv(a):\n return ivy.inv(a)\n\n\n@to_ivy_arrays_and_back\ndef matrix_power(a, n):\n return ivy.matrix_power(a, n)\n\n\n@to_ivy_arrays_and_back\ndef matrix_rank(M, tol=None):\n return ivy.matrix_rank(M, atol=tol)\n\n\n@to_ivy_arrays_and_back\ndef multi_dot(arrays, *, precision=None):\n return ivy.multi_dot(arrays)\n\n\n@to_ivy_arrays_and_back\n@with_supported_dtypes(\n {\"0.4.14 and below\": (\"float32\", \"float64\")},\n \"jax\",\n)\ndef norm(x, ord=None, axis=None, keepdims=False):\n if ord is None:\n ord = 2\n if type(axis) in [list, tuple] and len(axis) == 2:\n return Array(ivy.matrix_norm(x, ord=ord, axis=axis, keepdims=keepdims))\n return Array(ivy.vector_norm(x, ord=ord, axis=axis, keepdims=keepdims))\n\n\n@to_ivy_arrays_and_back\ndef pinv(a, rcond=None):\n return ivy.pinv(a, rtol=rcond)\n\n\n@to_ivy_arrays_and_back\ndef qr(a, mode=\"reduced\"):\n return ivy.qr(a, mode=mode)\n\n\n@to_ivy_arrays_and_back\ndef slogdet(a, method=None):\n return ivy.slogdet(a)\n\n\n@to_ivy_arrays_and_back\ndef solve(a, b):\n return ivy.solve(a, b)\n\n\n@to_ivy_arrays_and_back\ndef svd(a, /, *, 
full_matrices=True, compute_uv=True, hermitian=None):\n if not compute_uv:\n return ivy.svdvals(a)\n return ivy.svd(a, full_matrices=full_matrices)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"0.4.14 and below\": (\"float16\", \"bfloat16\")}, \"jax\")\ndef tensorinv(a, ind=2):\n old_shape = ivy.shape(a)\n prod = 1\n if ind > 0:\n invshape = old_shape[ind:] + old_shape[:ind]\n for k in old_shape[ind:]:\n prod *= k\n else:\n raise ValueError(\"Invalid ind argument.\")\n a = ivy.reshape(a, shape=(prod, -1))\n ia = ivy.inv(a)\n new_shape = tuple([*invshape])\n return Array(ivy.reshape(ia, shape=new_shape))\n\n\n@to_ivy_arrays_and_back\ndef tensorsolve(a, b, axes=None):\n a, b = promote_types_of_jax_inputs(a, b)\n return ivy.tensorsolve(a, b, axes=axes)\n", "path": "ivy/functional/frontends/jax/numpy/linalg.py"}], "after_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax import Array\nfrom ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.jax.numpy import promote_types_of_jax_inputs\nfrom ivy.functional.frontends.numpy.linalg import lstsq as numpy_lstsq\n\n\n@to_ivy_arrays_and_back\ndef cholesky(a):\n return ivy.cholesky(a)\n\n\n@to_ivy_arrays_and_back\ndef cond(x, p=None):\n return ivy.cond(x, p=p)\n\n\n@to_ivy_arrays_and_back\ndef det(a):\n return ivy.det(a)\n\n\n@to_ivy_arrays_and_back\ndef eig(a):\n return ivy.eig(a)\n\n\n@to_ivy_arrays_and_back\ndef eigh(a, UPLO=\"L\", symmetrize_input=True):\n def symmetrize(x):\n # TODO : Take Hermitian transpose after complex numbers added\n return (x + ivy.swapaxes(x, -1, -2)) / 2\n\n if symmetrize_input:\n a = symmetrize(a)\n\n return ivy.eigh(a, UPLO=UPLO)\n\n\n@to_ivy_arrays_and_back\ndef eigvals(a):\n return ivy.eigvals(a)\n\n\n@to_ivy_arrays_and_back\ndef eigvalsh(a, UPLO=\"L\"):\n return ivy.eigvalsh(a, UPLO=UPLO)\n\n\n@to_ivy_arrays_and_back\ndef inv(a):\n return ivy.inv(a)\n\n\n# TODO: replace this with function from API\n# As the composition provides numerically unstable results\n@to_ivy_arrays_and_back\ndef lstsq(a, b, rcond=None, *, numpy_resid=False):\n if numpy_resid:\n return numpy_lstsq(a, b, rcond=rcond)\n least_squares_solution = ivy.matmul(\n ivy.pinv(a, rtol=1e-15).astype(ivy.float64), b.astype(ivy.float64)\n )\n residuals = ivy.sum((b - ivy.matmul(a, least_squares_solution)) ** 2).astype(\n ivy.float64\n )\n svd_values = ivy.svd(a, compute_uv=False)\n rank = ivy.matrix_rank(a).astype(ivy.int32)\n return (least_squares_solution, residuals, rank, svd_values[0])\n\n\n@to_ivy_arrays_and_back\ndef matrix_power(a, n):\n return ivy.matrix_power(a, n)\n\n\n@to_ivy_arrays_and_back\ndef matrix_rank(M, tol=None):\n return ivy.matrix_rank(M, atol=tol)\n\n\n@to_ivy_arrays_and_back\ndef multi_dot(arrays, *, precision=None):\n return ivy.multi_dot(arrays)\n\n\n@to_ivy_arrays_and_back\n@with_supported_dtypes(\n {\"0.4.14 and below\": (\"float32\", \"float64\")},\n \"jax\",\n)\ndef norm(x, ord=None, axis=None, keepdims=False):\n if ord is None:\n ord = 2\n if type(axis) in [list, tuple] and len(axis) == 2:\n return Array(ivy.matrix_norm(x, ord=ord, axis=axis, keepdims=keepdims))\n return Array(ivy.vector_norm(x, ord=ord, axis=axis, keepdims=keepdims))\n\n\n@to_ivy_arrays_and_back\ndef pinv(a, rcond=None):\n return ivy.pinv(a, rtol=rcond)\n\n\n@to_ivy_arrays_and_back\ndef qr(a, mode=\"reduced\"):\n return ivy.qr(a, mode=mode)\n\n\n@to_ivy_arrays_and_back\ndef slogdet(a, 
method=None):\n return ivy.slogdet(a)\n\n\n@to_ivy_arrays_and_back\ndef solve(a, b):\n return ivy.solve(a, b)\n\n\n@to_ivy_arrays_and_back\ndef svd(a, /, *, full_matrices=True, compute_uv=True, hermitian=None):\n if not compute_uv:\n return ivy.svdvals(a)\n return ivy.svd(a, full_matrices=full_matrices)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"0.4.14 and below\": (\"float16\", \"bfloat16\")}, \"jax\")\ndef tensorinv(a, ind=2):\n old_shape = ivy.shape(a)\n prod = 1\n if ind > 0:\n invshape = old_shape[ind:] + old_shape[:ind]\n for k in old_shape[ind:]:\n prod *= k\n else:\n raise ValueError(\"Invalid ind argument.\")\n a = ivy.reshape(a, shape=(prod, -1))\n ia = ivy.inv(a)\n new_shape = tuple([*invshape])\n return Array(ivy.reshape(ia, shape=new_shape))\n\n\n@to_ivy_arrays_and_back\ndef tensorsolve(a, b, axes=None):\n a, b = promote_types_of_jax_inputs(a, b)\n return ivy.tensorsolve(a, b, axes=axes)\n", "path": "ivy/functional/frontends/jax/numpy/linalg.py"}]}
| 1,508 | 404 |
gh_patches_debug_21108
|
rasdani/github-patches
|
git_diff
|
mesonbuild__meson-5585
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ccache cannot work on dist builds
Due to how the dist build is implemented in Meson, it will always result in ccache being unable to work.
The reason for this is the forced use of `mkdtemp` for directories within the `check_dist` function (in `dist.py`). The source is unpacked into `$tmpdir/tmp$string`, which is random and thus cannot produce cache hits for subsequent builds. If I am trying to do e.g. a CI dist build, this is not ideal since it makes the build very long.
There are two possible solutions I've come up with which could solve the issue:
* Add a `DISTBUILDDIR` (or similar) environment variable which is used instead of `mkdtemp` during a dist build; this directory would take the place of the `mkdtemp` directories and would not be deleted when the build stopped
* Add a `TMPBUILDSTRING` (or similar) environment variable which is used in place of the random part of the tempdir string (resulting in e.g., /tmp/tmp$string); this directory can still be created and deleted normally, or even deleted if it already exists since we can assume this should only be used for meson dist builds
Other solutions would be great too, as long as ccache becomes usable.
--- END ISSUE ---
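
For illustration, here is a minimal sketch of the first proposal, assuming a `DISTBUILDDIR` environment variable (the name is taken from the issue text; the helper itself is hypothetical and not part of Meson). When the variable is set, `check_dist` would reuse stable subdirectories so ccache sees the same paths on every run; otherwise it would fall back to the current `mkdtemp` behaviour.

```python
import os
import shutil
import tempfile


def dist_build_dirs():
    """Return (unpackdir, builddir, installdir) for testing a dist tarball."""
    root = os.environ.get('DISTBUILDDIR')
    if root is None:
        # Current behaviour: fresh throw-away directories, no cache reuse.
        return tempfile.mkdtemp(), tempfile.mkdtemp(), tempfile.mkdtemp()
    dirs = []
    for name in ('dist-unpack', 'dist-build', 'dist-install'):
        d = os.path.join(root, name)
        if os.path.exists(d):
            shutil.rmtree(d)  # start from a clean tree on each dist run
        os.makedirs(d)
        dirs.append(d)
    return tuple(dirs)
```

`check_dist` could then unpack, configure, build and install into these three paths exactly as it does today, and skip deleting them at the end when the directory is meant to persist between CI runs.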
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mesonbuild/scripts/dist.py`
Content:
```
1 # Copyright 2017 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 import lzma
17 import os
18 import sys
19 import shutil
20 import subprocess
21 import pickle
22 import hashlib
23 import tarfile, zipfile
24 import tempfile
25 from glob import glob
26 from mesonbuild.environment import detect_ninja
27 from mesonbuild.mesonlib import windows_proof_rmtree
28 from mesonbuild import mlog
29
30 def create_hash(fname):
31 hashname = fname + '.sha256sum'
32 m = hashlib.sha256()
33 m.update(open(fname, 'rb').read())
34 with open(hashname, 'w') as f:
35 f.write('%s %s\n' % (m.hexdigest(), os.path.basename(fname)))
36
37
38 def create_zip(zipfilename, packaging_dir):
39 prefix = os.path.dirname(packaging_dir)
40 removelen = len(prefix) + 1
41 with zipfile.ZipFile(zipfilename,
42 'w',
43 compression=zipfile.ZIP_DEFLATED,
44 allowZip64=True) as zf:
45 zf.write(packaging_dir, packaging_dir[removelen:])
46 for root, dirs, files in os.walk(packaging_dir):
47 for d in dirs:
48 dname = os.path.join(root, d)
49 zf.write(dname, dname[removelen:])
50 for f in files:
51 fname = os.path.join(root, f)
52 zf.write(fname, fname[removelen:])
53
54 def del_gitfiles(dirname):
55 for f in glob(os.path.join(dirname, '.git*')):
56 if os.path.isdir(f) and not os.path.islink(f):
57 windows_proof_rmtree(f)
58 else:
59 os.unlink(f)
60
61 def process_submodules(dirname):
62 module_file = os.path.join(dirname, '.gitmodules')
63 if not os.path.exists(module_file):
64 return
65 subprocess.check_call(['git', 'submodule', 'update', '--init', '--recursive'], cwd=dirname)
66 for line in open(module_file):
67 line = line.strip()
68 if '=' not in line:
69 continue
70 k, v = line.split('=', 1)
71 k = k.strip()
72 v = v.strip()
73 if k != 'path':
74 continue
75 del_gitfiles(os.path.join(dirname, v))
76
77
78 def run_dist_scripts(dist_root, dist_scripts):
79 assert(os.path.isabs(dist_root))
80 env = os.environ.copy()
81 env['MESON_DIST_ROOT'] = dist_root
82 for d in dist_scripts:
83 script = d['exe']
84 args = d['args']
85 name = ' '.join(script + args)
86 print('Running custom dist script {!r}'.format(name))
87 try:
88 rc = subprocess.call(script + args, env=env)
89 if rc != 0:
90 sys.exit('Dist script errored out')
91 except OSError:
92 print('Failed to run dist script {!r}'.format(name))
93 sys.exit(1)
94
95
96 def git_have_dirty_index(src_root):
97 '''Check whether there are uncommitted changes in git'''
98 ret = subprocess.call(['git', '-C', src_root, 'diff-index', '--quiet', 'HEAD'])
99 return ret == 1
100
101 def create_dist_git(dist_name, src_root, bld_root, dist_sub, dist_scripts):
102 if git_have_dirty_index(src_root):
103 mlog.warning('Repository has uncommitted changes that will not be included in the dist tarball')
104 distdir = os.path.join(dist_sub, dist_name)
105 if os.path.exists(distdir):
106 shutil.rmtree(distdir)
107 os.makedirs(distdir)
108 subprocess.check_call(['git', 'clone', '--shared', src_root, distdir])
109 process_submodules(distdir)
110 del_gitfiles(distdir)
111 run_dist_scripts(distdir, dist_scripts)
112 xzname = distdir + '.tar.xz'
113 # Should use shutil but it got xz support only in 3.5.
114 with tarfile.open(xzname, 'w:xz') as tf:
115 tf.add(distdir, dist_name)
116 # Create only .tar.xz for now.
117 # zipname = distdir + '.zip'
118 # create_zip(zipname, distdir)
119 shutil.rmtree(distdir)
120 return (xzname, )
121
122
123 def hg_have_dirty_index(src_root):
124 '''Check whether there are uncommitted changes in hg'''
125 out = subprocess.check_output(['hg', '-R', src_root, 'summary'])
126 return b'commit: (clean)' not in out
127
128 def create_dist_hg(dist_name, src_root, bld_root, dist_sub, dist_scripts):
129 if hg_have_dirty_index(src_root):
130 mlog.warning('Repository has uncommitted changes that will not be included in the dist tarball')
131
132 os.makedirs(dist_sub, exist_ok=True)
133 tarname = os.path.join(dist_sub, dist_name + '.tar')
134 xzname = tarname + '.xz'
135 subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'tar', tarname])
136 if dist_scripts:
137 mlog.warning('dist scripts are not supported in Mercurial projects')
138 with lzma.open(xzname, 'wb') as xf, open(tarname, 'rb') as tf:
139 shutil.copyfileobj(tf, xf)
140 os.unlink(tarname)
141 # Create only .tar.xz for now.
142 # zipname = os.path.join(dist_sub, dist_name + '.zip')
143 # subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'zip', zipname])
144 return (xzname, )
145
146
147 def check_dist(packagename, meson_command):
148 print('Testing distribution package %s' % packagename)
149 unpackdir = tempfile.mkdtemp()
150 builddir = tempfile.mkdtemp()
151 installdir = tempfile.mkdtemp()
152 ninja_bin = detect_ninja()
153 try:
154 tf = tarfile.open(packagename)
155 tf.extractall(unpackdir)
156 srcdir = glob(os.path.join(unpackdir, '*'))[0]
157 if subprocess.call(meson_command + ['--backend=ninja', srcdir, builddir]) != 0:
158 print('Running Meson on distribution package failed')
159 return 1
160 if subprocess.call([ninja_bin], cwd=builddir) != 0:
161 print('Compiling the distribution package failed')
162 return 1
163 if subprocess.call([ninja_bin, 'test'], cwd=builddir) != 0:
164 print('Running unit tests on the distribution package failed')
165 return 1
166 myenv = os.environ.copy()
167 myenv['DESTDIR'] = installdir
168 if subprocess.call([ninja_bin, 'install'], cwd=builddir, env=myenv) != 0:
169 print('Installing the distribution package failed')
170 return 1
171 finally:
172 shutil.rmtree(unpackdir)
173 shutil.rmtree(builddir)
174 shutil.rmtree(installdir)
175 print('Distribution package %s tested' % packagename)
176 return 0
177
178 def run(args):
179 src_root = args[0]
180 bld_root = args[1]
181 meson_command = args[2:]
182 priv_dir = os.path.join(bld_root, 'meson-private')
183 dist_sub = os.path.join(bld_root, 'meson-dist')
184
185 buildfile = os.path.join(priv_dir, 'build.dat')
186
187 build = pickle.load(open(buildfile, 'rb'))
188
189 dist_name = build.project_name + '-' + build.project_version
190
191 _git = os.path.join(src_root, '.git')
192 if os.path.isdir(_git) or os.path.isfile(_git):
193 names = create_dist_git(dist_name, src_root, bld_root, dist_sub, build.dist_scripts)
194 elif os.path.isdir(os.path.join(src_root, '.hg')):
195 names = create_dist_hg(dist_name, src_root, bld_root, dist_sub, build.dist_scripts)
196 else:
197 print('Dist currently only works with Git or Mercurial repos')
198 return 1
199 if names is None:
200 return 1
201 error_count = 0
202 for name in names:
203 rc = check_dist(name, meson_command) # Check only one.
204 if rc == 0:
205 create_hash(name)
206 error_count += rc
207 return 1 if error_count else 0
208
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mesonbuild/scripts/dist.py b/mesonbuild/scripts/dist.py
--- a/mesonbuild/scripts/dist.py
+++ b/mesonbuild/scripts/dist.py
@@ -144,11 +144,15 @@
return (xzname, )
-def check_dist(packagename, meson_command):
+def check_dist(packagename, meson_command, privdir):
print('Testing distribution package %s' % packagename)
- unpackdir = tempfile.mkdtemp()
- builddir = tempfile.mkdtemp()
- installdir = tempfile.mkdtemp()
+ unpackdir = os.path.join(privdir, 'dist-unpack')
+ builddir = os.path.join(privdir, 'dist-build')
+ installdir = os.path.join(privdir, 'dist-install')
+ for p in (unpackdir, builddir, installdir):
+ if os.path.exists(p):
+ shutil.rmtree(p)
+ os.mkdir(p)
ninja_bin = detect_ninja()
try:
tf = tarfile.open(packagename)
@@ -200,7 +204,7 @@
return 1
error_count = 0
for name in names:
- rc = check_dist(name, meson_command) # Check only one.
+ rc = check_dist(name, meson_command, priv_dir) # Check only one.
if rc == 0:
create_hash(name)
error_count += rc
|
{"golden_diff": "diff --git a/mesonbuild/scripts/dist.py b/mesonbuild/scripts/dist.py\n--- a/mesonbuild/scripts/dist.py\n+++ b/mesonbuild/scripts/dist.py\n@@ -144,11 +144,15 @@\n return (xzname, )\n \n \n-def check_dist(packagename, meson_command):\n+def check_dist(packagename, meson_command, privdir):\n print('Testing distribution package %s' % packagename)\n- unpackdir = tempfile.mkdtemp()\n- builddir = tempfile.mkdtemp()\n- installdir = tempfile.mkdtemp()\n+ unpackdir = os.path.join(privdir, 'dist-unpack')\n+ builddir = os.path.join(privdir, 'dist-build')\n+ installdir = os.path.join(privdir, 'dist-install')\n+ for p in (unpackdir, builddir, installdir):\n+ if os.path.exists(p):\n+ shutil.rmtree(p)\n+ os.mkdir(p)\n ninja_bin = detect_ninja()\n try:\n tf = tarfile.open(packagename)\n@@ -200,7 +204,7 @@\n return 1\n error_count = 0\n for name in names:\n- rc = check_dist(name, meson_command) # Check only one.\n+ rc = check_dist(name, meson_command, priv_dir) # Check only one.\n if rc == 0:\n create_hash(name)\n error_count += rc\n", "issue": "ccache cannot work on dist builds\nDue to how the dist build is implemented in Meson, it will always result in ccache being unable to work.\r\n\r\nThe reason for this is the forced use of `mkdtemp` for directories within the `check_dist` function (in `dist.py`). The source is unpacked into `$tmpdir/tmp$string` which is random and thus unable to produce cache hits for subsequent builds. If I am trying to do e.g., a CI dist build, this is not ideal since it makes the build very long.\r\n\r\nThere are two possible solutions I've come up with which could solve the issue:\r\n* Add a `DISTBUILDDIR` (or similar) environment variable which is used instead of `mkdtemp` during a dist build; this directory would take the place of the `mkdtemp` directories and would not be deleted when the build stopped\r\n* Add a `TMPBUILDSTRING` (or similar) environment variable which is used in place of the random part of the tempdir string (resulting in e.g., /tmp/tmp$string); this directory can still be created and deleted normally, or even deleted if it already exists since we can assume this should only be used for meson dist builds\r\n\r\nOther solutions would be great too, as long as ccache becomes usable.\n", "before_files": [{"content": "# Copyright 2017 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport lzma\nimport os\nimport sys\nimport shutil\nimport subprocess\nimport pickle\nimport hashlib\nimport tarfile, zipfile\nimport tempfile\nfrom glob import glob\nfrom mesonbuild.environment import detect_ninja\nfrom mesonbuild.mesonlib import windows_proof_rmtree\nfrom mesonbuild import mlog\n\ndef create_hash(fname):\n hashname = fname + '.sha256sum'\n m = hashlib.sha256()\n m.update(open(fname, 'rb').read())\n with open(hashname, 'w') as f:\n f.write('%s %s\\n' % (m.hexdigest(), os.path.basename(fname)))\n\n\ndef create_zip(zipfilename, packaging_dir):\n prefix = os.path.dirname(packaging_dir)\n removelen = len(prefix) + 1\n with 
zipfile.ZipFile(zipfilename,\n 'w',\n compression=zipfile.ZIP_DEFLATED,\n allowZip64=True) as zf:\n zf.write(packaging_dir, packaging_dir[removelen:])\n for root, dirs, files in os.walk(packaging_dir):\n for d in dirs:\n dname = os.path.join(root, d)\n zf.write(dname, dname[removelen:])\n for f in files:\n fname = os.path.join(root, f)\n zf.write(fname, fname[removelen:])\n\ndef del_gitfiles(dirname):\n for f in glob(os.path.join(dirname, '.git*')):\n if os.path.isdir(f) and not os.path.islink(f):\n windows_proof_rmtree(f)\n else:\n os.unlink(f)\n\ndef process_submodules(dirname):\n module_file = os.path.join(dirname, '.gitmodules')\n if not os.path.exists(module_file):\n return\n subprocess.check_call(['git', 'submodule', 'update', '--init', '--recursive'], cwd=dirname)\n for line in open(module_file):\n line = line.strip()\n if '=' not in line:\n continue\n k, v = line.split('=', 1)\n k = k.strip()\n v = v.strip()\n if k != 'path':\n continue\n del_gitfiles(os.path.join(dirname, v))\n\n\ndef run_dist_scripts(dist_root, dist_scripts):\n assert(os.path.isabs(dist_root))\n env = os.environ.copy()\n env['MESON_DIST_ROOT'] = dist_root\n for d in dist_scripts:\n script = d['exe']\n args = d['args']\n name = ' '.join(script + args)\n print('Running custom dist script {!r}'.format(name))\n try:\n rc = subprocess.call(script + args, env=env)\n if rc != 0:\n sys.exit('Dist script errored out')\n except OSError:\n print('Failed to run dist script {!r}'.format(name))\n sys.exit(1)\n\n\ndef git_have_dirty_index(src_root):\n '''Check whether there are uncommitted changes in git'''\n ret = subprocess.call(['git', '-C', src_root, 'diff-index', '--quiet', 'HEAD'])\n return ret == 1\n\ndef create_dist_git(dist_name, src_root, bld_root, dist_sub, dist_scripts):\n if git_have_dirty_index(src_root):\n mlog.warning('Repository has uncommitted changes that will not be included in the dist tarball')\n distdir = os.path.join(dist_sub, dist_name)\n if os.path.exists(distdir):\n shutil.rmtree(distdir)\n os.makedirs(distdir)\n subprocess.check_call(['git', 'clone', '--shared', src_root, distdir])\n process_submodules(distdir)\n del_gitfiles(distdir)\n run_dist_scripts(distdir, dist_scripts)\n xzname = distdir + '.tar.xz'\n # Should use shutil but it got xz support only in 3.5.\n with tarfile.open(xzname, 'w:xz') as tf:\n tf.add(distdir, dist_name)\n # Create only .tar.xz for now.\n # zipname = distdir + '.zip'\n # create_zip(zipname, distdir)\n shutil.rmtree(distdir)\n return (xzname, )\n\n\ndef hg_have_dirty_index(src_root):\n '''Check whether there are uncommitted changes in hg'''\n out = subprocess.check_output(['hg', '-R', src_root, 'summary'])\n return b'commit: (clean)' not in out\n\ndef create_dist_hg(dist_name, src_root, bld_root, dist_sub, dist_scripts):\n if hg_have_dirty_index(src_root):\n mlog.warning('Repository has uncommitted changes that will not be included in the dist tarball')\n\n os.makedirs(dist_sub, exist_ok=True)\n tarname = os.path.join(dist_sub, dist_name + '.tar')\n xzname = tarname + '.xz'\n subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'tar', tarname])\n if dist_scripts:\n mlog.warning('dist scripts are not supported in Mercurial projects')\n with lzma.open(xzname, 'wb') as xf, open(tarname, 'rb') as tf:\n shutil.copyfileobj(tf, xf)\n os.unlink(tarname)\n # Create only .tar.xz for now.\n # zipname = os.path.join(dist_sub, dist_name + '.zip')\n # subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'zip', zipname])\n return (xzname, )\n\n\ndef 
check_dist(packagename, meson_command):\n print('Testing distribution package %s' % packagename)\n unpackdir = tempfile.mkdtemp()\n builddir = tempfile.mkdtemp()\n installdir = tempfile.mkdtemp()\n ninja_bin = detect_ninja()\n try:\n tf = tarfile.open(packagename)\n tf.extractall(unpackdir)\n srcdir = glob(os.path.join(unpackdir, '*'))[0]\n if subprocess.call(meson_command + ['--backend=ninja', srcdir, builddir]) != 0:\n print('Running Meson on distribution package failed')\n return 1\n if subprocess.call([ninja_bin], cwd=builddir) != 0:\n print('Compiling the distribution package failed')\n return 1\n if subprocess.call([ninja_bin, 'test'], cwd=builddir) != 0:\n print('Running unit tests on the distribution package failed')\n return 1\n myenv = os.environ.copy()\n myenv['DESTDIR'] = installdir\n if subprocess.call([ninja_bin, 'install'], cwd=builddir, env=myenv) != 0:\n print('Installing the distribution package failed')\n return 1\n finally:\n shutil.rmtree(unpackdir)\n shutil.rmtree(builddir)\n shutil.rmtree(installdir)\n print('Distribution package %s tested' % packagename)\n return 0\n\ndef run(args):\n src_root = args[0]\n bld_root = args[1]\n meson_command = args[2:]\n priv_dir = os.path.join(bld_root, 'meson-private')\n dist_sub = os.path.join(bld_root, 'meson-dist')\n\n buildfile = os.path.join(priv_dir, 'build.dat')\n\n build = pickle.load(open(buildfile, 'rb'))\n\n dist_name = build.project_name + '-' + build.project_version\n\n _git = os.path.join(src_root, '.git')\n if os.path.isdir(_git) or os.path.isfile(_git):\n names = create_dist_git(dist_name, src_root, bld_root, dist_sub, build.dist_scripts)\n elif os.path.isdir(os.path.join(src_root, '.hg')):\n names = create_dist_hg(dist_name, src_root, bld_root, dist_sub, build.dist_scripts)\n else:\n print('Dist currently only works with Git or Mercurial repos')\n return 1\n if names is None:\n return 1\n error_count = 0\n for name in names:\n rc = check_dist(name, meson_command) # Check only one.\n if rc == 0:\n create_hash(name)\n error_count += rc\n return 1 if error_count else 0\n", "path": "mesonbuild/scripts/dist.py"}], "after_files": [{"content": "# Copyright 2017 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport lzma\nimport os\nimport sys\nimport shutil\nimport subprocess\nimport pickle\nimport hashlib\nimport tarfile, zipfile\nimport tempfile\nfrom glob import glob\nfrom mesonbuild.environment import detect_ninja\nfrom mesonbuild.mesonlib import windows_proof_rmtree\nfrom mesonbuild import mlog\n\ndef create_hash(fname):\n hashname = fname + '.sha256sum'\n m = hashlib.sha256()\n m.update(open(fname, 'rb').read())\n with open(hashname, 'w') as f:\n f.write('%s %s\\n' % (m.hexdigest(), os.path.basename(fname)))\n\n\ndef create_zip(zipfilename, packaging_dir):\n prefix = os.path.dirname(packaging_dir)\n removelen = len(prefix) + 1\n with zipfile.ZipFile(zipfilename,\n 'w',\n compression=zipfile.ZIP_DEFLATED,\n allowZip64=True) as zf:\n zf.write(packaging_dir, packaging_dir[removelen:])\n for 
root, dirs, files in os.walk(packaging_dir):\n for d in dirs:\n dname = os.path.join(root, d)\n zf.write(dname, dname[removelen:])\n for f in files:\n fname = os.path.join(root, f)\n zf.write(fname, fname[removelen:])\n\ndef del_gitfiles(dirname):\n for f in glob(os.path.join(dirname, '.git*')):\n if os.path.isdir(f) and not os.path.islink(f):\n windows_proof_rmtree(f)\n else:\n os.unlink(f)\n\ndef process_submodules(dirname):\n module_file = os.path.join(dirname, '.gitmodules')\n if not os.path.exists(module_file):\n return\n subprocess.check_call(['git', 'submodule', 'update', '--init', '--recursive'], cwd=dirname)\n for line in open(module_file):\n line = line.strip()\n if '=' not in line:\n continue\n k, v = line.split('=', 1)\n k = k.strip()\n v = v.strip()\n if k != 'path':\n continue\n del_gitfiles(os.path.join(dirname, v))\n\n\ndef run_dist_scripts(dist_root, dist_scripts):\n assert(os.path.isabs(dist_root))\n env = os.environ.copy()\n env['MESON_DIST_ROOT'] = dist_root\n for d in dist_scripts:\n script = d['exe']\n args = d['args']\n name = ' '.join(script + args)\n print('Running custom dist script {!r}'.format(name))\n try:\n rc = subprocess.call(script + args, env=env)\n if rc != 0:\n sys.exit('Dist script errored out')\n except OSError:\n print('Failed to run dist script {!r}'.format(name))\n sys.exit(1)\n\n\ndef git_have_dirty_index(src_root):\n '''Check whether there are uncommitted changes in git'''\n ret = subprocess.call(['git', '-C', src_root, 'diff-index', '--quiet', 'HEAD'])\n return ret == 1\n\ndef create_dist_git(dist_name, src_root, bld_root, dist_sub, dist_scripts):\n if git_have_dirty_index(src_root):\n mlog.warning('Repository has uncommitted changes that will not be included in the dist tarball')\n distdir = os.path.join(dist_sub, dist_name)\n if os.path.exists(distdir):\n shutil.rmtree(distdir)\n os.makedirs(distdir)\n subprocess.check_call(['git', 'clone', '--shared', src_root, distdir])\n process_submodules(distdir)\n del_gitfiles(distdir)\n run_dist_scripts(distdir, dist_scripts)\n xzname = distdir + '.tar.xz'\n # Should use shutil but it got xz support only in 3.5.\n with tarfile.open(xzname, 'w:xz') as tf:\n tf.add(distdir, dist_name)\n # Create only .tar.xz for now.\n # zipname = distdir + '.zip'\n # create_zip(zipname, distdir)\n shutil.rmtree(distdir)\n return (xzname, )\n\n\ndef hg_have_dirty_index(src_root):\n '''Check whether there are uncommitted changes in hg'''\n out = subprocess.check_output(['hg', '-R', src_root, 'summary'])\n return b'commit: (clean)' not in out\n\ndef create_dist_hg(dist_name, src_root, bld_root, dist_sub, dist_scripts):\n if hg_have_dirty_index(src_root):\n mlog.warning('Repository has uncommitted changes that will not be included in the dist tarball')\n\n os.makedirs(dist_sub, exist_ok=True)\n tarname = os.path.join(dist_sub, dist_name + '.tar')\n xzname = tarname + '.xz'\n subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'tar', tarname])\n if dist_scripts:\n mlog.warning('dist scripts are not supported in Mercurial projects')\n with lzma.open(xzname, 'wb') as xf, open(tarname, 'rb') as tf:\n shutil.copyfileobj(tf, xf)\n os.unlink(tarname)\n # Create only .tar.xz for now.\n # zipname = os.path.join(dist_sub, dist_name + '.zip')\n # subprocess.check_call(['hg', 'archive', '-R', src_root, '-S', '-t', 'zip', zipname])\n return (xzname, )\n\n\ndef check_dist(packagename, meson_command, privdir):\n print('Testing distribution package %s' % packagename)\n unpackdir = os.path.join(privdir, 'dist-unpack')\n 
builddir = os.path.join(privdir, 'dist-build')\n installdir = os.path.join(privdir, 'dist-install')\n for p in (unpackdir, builddir, installdir):\n if os.path.exists(p):\n shutil.rmtree(p)\n os.mkdir(p)\n ninja_bin = detect_ninja()\n try:\n tf = tarfile.open(packagename)\n tf.extractall(unpackdir)\n srcdir = glob(os.path.join(unpackdir, '*'))[0]\n if subprocess.call(meson_command + ['--backend=ninja', srcdir, builddir]) != 0:\n print('Running Meson on distribution package failed')\n return 1\n if subprocess.call([ninja_bin], cwd=builddir) != 0:\n print('Compiling the distribution package failed')\n return 1\n if subprocess.call([ninja_bin, 'test'], cwd=builddir) != 0:\n print('Running unit tests on the distribution package failed')\n return 1\n myenv = os.environ.copy()\n myenv['DESTDIR'] = installdir\n if subprocess.call([ninja_bin, 'install'], cwd=builddir, env=myenv) != 0:\n print('Installing the distribution package failed')\n return 1\n finally:\n shutil.rmtree(unpackdir)\n shutil.rmtree(builddir)\n shutil.rmtree(installdir)\n print('Distribution package %s tested' % packagename)\n return 0\n\ndef run(args):\n src_root = args[0]\n bld_root = args[1]\n meson_command = args[2:]\n priv_dir = os.path.join(bld_root, 'meson-private')\n dist_sub = os.path.join(bld_root, 'meson-dist')\n\n buildfile = os.path.join(priv_dir, 'build.dat')\n\n build = pickle.load(open(buildfile, 'rb'))\n\n dist_name = build.project_name + '-' + build.project_version\n\n _git = os.path.join(src_root, '.git')\n if os.path.isdir(_git) or os.path.isfile(_git):\n names = create_dist_git(dist_name, src_root, bld_root, dist_sub, build.dist_scripts)\n elif os.path.isdir(os.path.join(src_root, '.hg')):\n names = create_dist_hg(dist_name, src_root, bld_root, dist_sub, build.dist_scripts)\n else:\n print('Dist currently only works with Git or Mercurial repos')\n return 1\n if names is None:\n return 1\n error_count = 0\n for name in names:\n rc = check_dist(name, meson_command, priv_dir) # Check only one.\n if rc == 0:\n create_hash(name)\n error_count += rc\n return 1 if error_count else 0\n", "path": "mesonbuild/scripts/dist.py"}]}
| 2,982 | 328 |
gh_patches_debug_18977 | rasdani/github-patches | git_diff | ludwig-ai__ludwig-1650 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Inconsistent learning rates shown in output when specifying learning rate in config
When specifying the learning_rate parameter in the training section of the config, `description.json` shows a different default learning rate in the optimizer section:
config.yaml:
<img width="231" alt="Screen Shot 2021-12-21 at 4 46 42 PM" src="https://user-images.githubusercontent.com/687280/147016130-2856cfda-c399-4fc8-b79b-2fe96003b559.png">
description.json:
<img width="698" alt="Screen Shot 2021-12-21 at 4 47 04 PM" src="https://user-images.githubusercontent.com/687280/147016222-60e44fd7-fd92-474d-b73d-17c5c62d5954.png">
Expected behavior is that there is one learning rate, and if it is specified in the config it should match that.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ludwig/utils/defaults.py`
Content:
```
1 #! /usr/bin/env python
2 # Copyright (c) 2019 Uber Technologies, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 # ==============================================================================
16 import argparse
17 import copy
18 import logging
19 import sys
20
21 import yaml
22
23 from ludwig.constants import BINARY, CATEGORY, COLUMN, COMBINED, LOSS, NAME, PROC_COLUMN, TRAINING, TYPE
24 from ludwig.contrib import add_contrib_callback_args
25 from ludwig.features.feature_registries import base_type_registry, input_type_registry, output_type_registry
26 from ludwig.features.feature_utils import compute_feature_hash
27 from ludwig.globals import LUDWIG_VERSION
28 from ludwig.utils.data_utils import load_config_from_str
29 from ludwig.utils.misc_utils import get_from_registry, merge_dict, set_default_value
30 from ludwig.utils.print_utils import print_ludwig
31
32 logger = logging.getLogger(__name__)
33
34 default_random_seed = 42
35
36 default_preprocessing_force_split = False
37 default_preprocessing_split_probabilities = (0.7, 0.1, 0.2)
38 default_preprocessing_stratify = None
39
40 default_preprocessing_parameters = {
41 "force_split": default_preprocessing_force_split,
42 "split_probabilities": default_preprocessing_split_probabilities,
43 "stratify": default_preprocessing_stratify,
44 }
45 default_preprocessing_parameters.update(
46 {name: base_type.preprocessing_defaults() for name, base_type in base_type_registry.items()}
47 )
48
49 default_combiner_type = "concat"
50
51 default_training_params = {
52 "optimizer": {TYPE: "adam"},
53 "epochs": 100,
54 "regularization_lambda": 0,
55 "regularization_type": "l2",
56 "learning_rate": 0.001,
57 "batch_size": 128,
58 "eval_batch_size": None,
59 "early_stop": 5,
60 "reduce_learning_rate_on_plateau": 0,
61 "reduce_learning_rate_on_plateau_patience": 5,
62 "reduce_learning_rate_on_plateau_rate": 0.5,
63 "increase_batch_size_on_plateau": 0,
64 "increase_batch_size_on_plateau_patience": 5,
65 "increase_batch_size_on_plateau_rate": 2,
66 "increase_batch_size_on_plateau_max": 512,
67 "decay": False,
68 "decay_steps": 10000,
69 "decay_rate": 0.96,
70 "staircase": False,
71 "gradient_clipping": None,
72 "validation_field": COMBINED,
73 "validation_metric": LOSS,
74 "bucketing_field": None,
75 "learning_rate_warmup_epochs": 1,
76 }
77
78 default_optimizer_params_registry = {
79 "sgd": {"lr": 0.001},
80 "stochastic_gradient_descent": {"lr": 0.001},
81 "gd": {"lr": 0.001},
82 "gradient_descent": {"lr": 0.001},
83 "adam": {
84 "betas": (0.9, 0.999),
85 # 'beta_1': 0.9,
86 # 'beta_2': 0.999,
87 # 'epsilon': 1e-08
88 "eps": 1e-08,
89 },
90 "adadelta": {
91 "rho": 0.95,
92 "eps": 1e-08
93 # 'epsilon': 1e-08
94 },
95 "adagrad": {"initial_accumulator_value": 0.1},
96 "adamax": {},
97 "ftrl": {
98 "learning_rate_power": -0.5,
99 "initial_accumulator_value": 0.1,
100 "l1_regularization_strength": 0.0,
101 "l2_regularization_strength": 0.0,
102 },
103 "nadam": {},
104 "rmsprop": {
105 "weight_decay": 0.9,
106 "momentum": 0.0,
107 # 'epsilon': 1e-10,
108 "eps": 1e-10,
109 "centered": False,
110 },
111 }
112 default_optimizer_params_registry["stochastic_gradient_descent"] = default_optimizer_params_registry["sgd"]
113 default_optimizer_params_registry["gd"] = default_optimizer_params_registry["sgd"]
114 default_optimizer_params_registry["gradient_descent"] = default_optimizer_params_registry["sgd"]
115
116
117 def get_default_optimizer_params(optimizer_type):
118 if optimizer_type in default_optimizer_params_registry:
119 return default_optimizer_params_registry[optimizer_type]
120 else:
121 raise ValueError("Incorrect optimizer type: " + optimizer_type)
122
123
124 def _perform_sanity_checks(config):
125 assert "input_features" in config, "config does not define any input features"
126
127 assert "output_features" in config, "config does not define any output features"
128
129 assert isinstance(config["input_features"], list), (
130 "Ludwig expects input features in a list. Check your model " "config format"
131 )
132
133 assert isinstance(config["output_features"], list), (
134 "Ludwig expects output features in a list. Check your model " "config format"
135 )
136
137 assert len(config["input_features"]) > 0, "config needs to have at least one input feature"
138
139 assert len(config["output_features"]) > 0, "config needs to have at least one output feature"
140
141 if TRAINING in config:
142 assert isinstance(config[TRAINING], dict), (
143 "There is an issue while reading the training section of the "
144 "config. The parameters are expected to be"
145 "read as a dictionary. Please check your config format."
146 )
147
148 if "preprocessing" in config:
149 assert isinstance(config["preprocessing"], dict), (
150 "There is an issue while reading the preprocessing section of the "
151 "config. The parameters are expected to be read"
152 "as a dictionary. Please check your config format."
153 )
154
155 if "combiner" in config:
156 assert isinstance(config["combiner"], dict), (
157 "There is an issue while reading the combiner section of the "
158 "config. The parameters are expected to be read"
159 "as a dictionary. Please check your config format."
160 )
161
162
163 def _set_feature_column(config: dict) -> None:
164 for feature in config["input_features"] + config["output_features"]:
165 if COLUMN not in feature:
166 feature[COLUMN] = feature[NAME]
167
168
169 def _set_proc_column(config: dict) -> None:
170 for feature in config["input_features"] + config["output_features"]:
171 if PROC_COLUMN not in feature:
172 feature[PROC_COLUMN] = compute_feature_hash(feature)
173
174
175 def _merge_hyperopt_with_training(config: dict) -> None:
176 if "hyperopt" not in config:
177 return
178
179 scheduler = config["hyperopt"].get("sampler", {}).get("scheduler")
180 if not scheduler:
181 return
182
183 if TRAINING not in config:
184 config[TRAINING] = {}
185
186 # Disable early stopping when using a scheduler. We achieve this by setting the parameter
187 # to -1, which ensures the condition to apply early stopping is never met.
188 training = config[TRAINING]
189 early_stop = training.get("early_stop")
190 if early_stop is not None and early_stop != -1:
191 raise ValueError(
192 "Cannot set training parameter `early_stop` when using a hyperopt scheduler. "
193 "Unset this parameter in your config."
194 )
195 training["early_stop"] = -1
196
197 # At most one of max_t and epochs may be specified by the user, and we set them to be equal to
198 # ensure that Ludwig does not stop training before the scheduler has finished the trial, unless
199 # max_t is in time_total_s, in which case we set epochs very high to continue train until stopped.
200 max_t = scheduler.get("max_t")
201 time_attr = scheduler.get("time_attr")
202 epochs = training.get("epochs")
203 if max_t is not None and epochs is not None and max_t != epochs and time_attr != "time_total_s":
204 raise ValueError(
205 "Cannot set training parameter `epochs` when using a hyperopt scheduler with `max_t`. "
206 "Unset one of these parameters in your config."
207 )
208 elif max_t is not None:
209 if time_attr == "time_total_s":
210 training["epochs"] = sys.maxsize # essentially continue training until stopped
211 else:
212 training["epochs"] = max_t
213 elif epochs is not None:
214 scheduler["max_t"] = epochs
215
216
217 def merge_with_defaults(config):
218 config = copy.deepcopy(config)
219 _perform_sanity_checks(config)
220 _set_feature_column(config)
221 _set_proc_column(config)
222 _merge_hyperopt_with_training(config)
223
224 # ===== Preprocessing =====
225 config["preprocessing"] = merge_dict(default_preprocessing_parameters, config.get("preprocessing", {}))
226
227 stratify = config["preprocessing"]["stratify"]
228 if stratify is not None:
229 features = config["input_features"] + config["output_features"]
230 feature_names = {f[COLUMN] for f in features}
231 if stratify not in feature_names:
232 logger.warning("Stratify is not among the features. " "Cannot establish if it is a binary or category")
233 elif [f for f in features if f[COLUMN] == stratify][0][TYPE] not in {BINARY, CATEGORY}:
234 raise ValueError("Stratify feature must be binary or category")
235
236 # ===== Training =====
237 set_default_value(config, TRAINING, default_training_params)
238
239 for param, value in default_training_params.items():
240 set_default_value(config[TRAINING], param, value)
241
242 set_default_value(
243 config[TRAINING],
244 "validation_metric",
245 output_type_registry[config["output_features"][0][TYPE]].default_validation_metric,
246 )
247
248 # ===== Training Optimizer =====
249 optimizer = config[TRAINING]["optimizer"]
250 default_optimizer_params = get_default_optimizer_params(optimizer[TYPE])
251 for param in default_optimizer_params:
252 set_default_value(optimizer, param, default_optimizer_params[param])
253
254 # ===== Input Features =====
255 for input_feature in config["input_features"]:
256 get_from_registry(input_feature[TYPE], input_type_registry).populate_defaults(input_feature)
257
258 # ===== Combiner =====
259 set_default_value(config, "combiner", {TYPE: default_combiner_type})
260
261 # ===== Output features =====
262 for output_feature in config["output_features"]:
263 get_from_registry(output_feature[TYPE], output_type_registry).populate_defaults(output_feature)
264
265 return config
266
267
268 def render_config(config=None, output=None, **kwargs):
269 output_config = merge_with_defaults(config)
270 if output is None:
271 print(yaml.safe_dump(output_config, None, sort_keys=False))
272 else:
273 with open(output, "w") as f:
274 yaml.safe_dump(output_config, f, sort_keys=False)
275
276
277 def cli_render_config(sys_argv):
278 parser = argparse.ArgumentParser(
279 description="This script renders the full config from a user config.",
280 prog="ludwig render_config",
281 usage="%(prog)s [options]",
282 )
283 parser.add_argument(
284 "-c",
285 "--config",
286 type=load_config_from_str,
287 help="input user YAML config path",
288 )
289 parser.add_argument(
290 "-o",
291 "--output",
292 type=str,
293 help="output rendered YAML config path",
294 required=False,
295 )
296
297 add_contrib_callback_args(parser)
298 args = parser.parse_args(sys_argv)
299
300 args.callbacks = args.callbacks or []
301 for callback in args.callbacks:
302 callback.on_cmdline("render_config", *sys_argv)
303
304 print_ludwig("Render Config", LUDWIG_VERSION)
305 render_config(**vars(args))
306
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ludwig/utils/defaults.py b/ludwig/utils/defaults.py
--- a/ludwig/utils/defaults.py
+++ b/ludwig/utils/defaults.py
@@ -76,10 +76,10 @@
}
default_optimizer_params_registry = {
- "sgd": {"lr": 0.001},
- "stochastic_gradient_descent": {"lr": 0.001},
- "gd": {"lr": 0.001},
- "gradient_descent": {"lr": 0.001},
+ "sgd": {},
+ "stochastic_gradient_descent": {},
+ "gd": {},
+ "gradient_descent": {},
"adam": {
"betas": (0.9, 0.999),
# 'beta_1': 0.9,
@@ -247,6 +247,7 @@
# ===== Training Optimizer =====
optimizer = config[TRAINING]["optimizer"]
+ set_default_value(optimizer, "lr", config[TRAINING]["learning_rate"])
default_optimizer_params = get_default_optimizer_params(optimizer[TYPE])
for param in default_optimizer_params:
set_default_value(optimizer, param, default_optimizer_params[param])
|
{"golden_diff": "diff --git a/ludwig/utils/defaults.py b/ludwig/utils/defaults.py\n--- a/ludwig/utils/defaults.py\n+++ b/ludwig/utils/defaults.py\n@@ -76,10 +76,10 @@\n }\n \n default_optimizer_params_registry = {\n- \"sgd\": {\"lr\": 0.001},\n- \"stochastic_gradient_descent\": {\"lr\": 0.001},\n- \"gd\": {\"lr\": 0.001},\n- \"gradient_descent\": {\"lr\": 0.001},\n+ \"sgd\": {},\n+ \"stochastic_gradient_descent\": {},\n+ \"gd\": {},\n+ \"gradient_descent\": {},\n \"adam\": {\n \"betas\": (0.9, 0.999),\n # 'beta_1': 0.9,\n@@ -247,6 +247,7 @@\n \n # ===== Training Optimizer =====\n optimizer = config[TRAINING][\"optimizer\"]\n+ set_default_value(optimizer, \"lr\", config[TRAINING][\"learning_rate\"])\n default_optimizer_params = get_default_optimizer_params(optimizer[TYPE])\n for param in default_optimizer_params:\n set_default_value(optimizer, param, default_optimizer_params[param])\n", "issue": "Inconsistent learning rates shown in output when specifying learning rate in config \nWhen specifying the learning_rate parameter in the training section of the config, `description.json` shows a different default learning rate in the optimizer section:\r\n\r\nconfig.yaml:\r\n<img width=\"231\" alt=\"Screen Shot 2021-12-21 at 4 46 42 PM\" src=\"https://user-images.githubusercontent.com/687280/147016130-2856cfda-c399-4fc8-b79b-2fe96003b559.png\">\r\n\r\ndescription.json:\r\n<img width=\"698\" alt=\"Screen Shot 2021-12-21 at 4 47 04 PM\" src=\"https://user-images.githubusercontent.com/687280/147016222-60e44fd7-fd92-474d-b73d-17c5c62d5954.png\">\r\n\r\nExpected behavior is that there is one learning rate, and if it is specified in the config it should match that.\r\n\n", "before_files": [{"content": "#! /usr/bin/env python\n# Copyright (c) 2019 Uber Technologies, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\nimport argparse\nimport copy\nimport logging\nimport sys\n\nimport yaml\n\nfrom ludwig.constants import BINARY, CATEGORY, COLUMN, COMBINED, LOSS, NAME, PROC_COLUMN, TRAINING, TYPE\nfrom ludwig.contrib import add_contrib_callback_args\nfrom ludwig.features.feature_registries import base_type_registry, input_type_registry, output_type_registry\nfrom ludwig.features.feature_utils import compute_feature_hash\nfrom ludwig.globals import LUDWIG_VERSION\nfrom ludwig.utils.data_utils import load_config_from_str\nfrom ludwig.utils.misc_utils import get_from_registry, merge_dict, set_default_value\nfrom ludwig.utils.print_utils import print_ludwig\n\nlogger = logging.getLogger(__name__)\n\ndefault_random_seed = 42\n\ndefault_preprocessing_force_split = False\ndefault_preprocessing_split_probabilities = (0.7, 0.1, 0.2)\ndefault_preprocessing_stratify = None\n\ndefault_preprocessing_parameters = {\n \"force_split\": default_preprocessing_force_split,\n \"split_probabilities\": default_preprocessing_split_probabilities,\n \"stratify\": default_preprocessing_stratify,\n}\ndefault_preprocessing_parameters.update(\n {name: 
base_type.preprocessing_defaults() for name, base_type in base_type_registry.items()}\n)\n\ndefault_combiner_type = \"concat\"\n\ndefault_training_params = {\n \"optimizer\": {TYPE: \"adam\"},\n \"epochs\": 100,\n \"regularization_lambda\": 0,\n \"regularization_type\": \"l2\",\n \"learning_rate\": 0.001,\n \"batch_size\": 128,\n \"eval_batch_size\": None,\n \"early_stop\": 5,\n \"reduce_learning_rate_on_plateau\": 0,\n \"reduce_learning_rate_on_plateau_patience\": 5,\n \"reduce_learning_rate_on_plateau_rate\": 0.5,\n \"increase_batch_size_on_plateau\": 0,\n \"increase_batch_size_on_plateau_patience\": 5,\n \"increase_batch_size_on_plateau_rate\": 2,\n \"increase_batch_size_on_plateau_max\": 512,\n \"decay\": False,\n \"decay_steps\": 10000,\n \"decay_rate\": 0.96,\n \"staircase\": False,\n \"gradient_clipping\": None,\n \"validation_field\": COMBINED,\n \"validation_metric\": LOSS,\n \"bucketing_field\": None,\n \"learning_rate_warmup_epochs\": 1,\n}\n\ndefault_optimizer_params_registry = {\n \"sgd\": {\"lr\": 0.001},\n \"stochastic_gradient_descent\": {\"lr\": 0.001},\n \"gd\": {\"lr\": 0.001},\n \"gradient_descent\": {\"lr\": 0.001},\n \"adam\": {\n \"betas\": (0.9, 0.999),\n # 'beta_1': 0.9,\n # 'beta_2': 0.999,\n # 'epsilon': 1e-08\n \"eps\": 1e-08,\n },\n \"adadelta\": {\n \"rho\": 0.95,\n \"eps\": 1e-08\n # 'epsilon': 1e-08\n },\n \"adagrad\": {\"initial_accumulator_value\": 0.1},\n \"adamax\": {},\n \"ftrl\": {\n \"learning_rate_power\": -0.5,\n \"initial_accumulator_value\": 0.1,\n \"l1_regularization_strength\": 0.0,\n \"l2_regularization_strength\": 0.0,\n },\n \"nadam\": {},\n \"rmsprop\": {\n \"weight_decay\": 0.9,\n \"momentum\": 0.0,\n # 'epsilon': 1e-10,\n \"eps\": 1e-10,\n \"centered\": False,\n },\n}\ndefault_optimizer_params_registry[\"stochastic_gradient_descent\"] = default_optimizer_params_registry[\"sgd\"]\ndefault_optimizer_params_registry[\"gd\"] = default_optimizer_params_registry[\"sgd\"]\ndefault_optimizer_params_registry[\"gradient_descent\"] = default_optimizer_params_registry[\"sgd\"]\n\n\ndef get_default_optimizer_params(optimizer_type):\n if optimizer_type in default_optimizer_params_registry:\n return default_optimizer_params_registry[optimizer_type]\n else:\n raise ValueError(\"Incorrect optimizer type: \" + optimizer_type)\n\n\ndef _perform_sanity_checks(config):\n assert \"input_features\" in config, \"config does not define any input features\"\n\n assert \"output_features\" in config, \"config does not define any output features\"\n\n assert isinstance(config[\"input_features\"], list), (\n \"Ludwig expects input features in a list. Check your model \" \"config format\"\n )\n\n assert isinstance(config[\"output_features\"], list), (\n \"Ludwig expects output features in a list. Check your model \" \"config format\"\n )\n\n assert len(config[\"input_features\"]) > 0, \"config needs to have at least one input feature\"\n\n assert len(config[\"output_features\"]) > 0, \"config needs to have at least one output feature\"\n\n if TRAINING in config:\n assert isinstance(config[TRAINING], dict), (\n \"There is an issue while reading the training section of the \"\n \"config. The parameters are expected to be\"\n \"read as a dictionary. Please check your config format.\"\n )\n\n if \"preprocessing\" in config:\n assert isinstance(config[\"preprocessing\"], dict), (\n \"There is an issue while reading the preprocessing section of the \"\n \"config. The parameters are expected to be read\"\n \"as a dictionary. 
Please check your config format.\"\n )\n\n if \"combiner\" in config:\n assert isinstance(config[\"combiner\"], dict), (\n \"There is an issue while reading the combiner section of the \"\n \"config. The parameters are expected to be read\"\n \"as a dictionary. Please check your config format.\"\n )\n\n\ndef _set_feature_column(config: dict) -> None:\n for feature in config[\"input_features\"] + config[\"output_features\"]:\n if COLUMN not in feature:\n feature[COLUMN] = feature[NAME]\n\n\ndef _set_proc_column(config: dict) -> None:\n for feature in config[\"input_features\"] + config[\"output_features\"]:\n if PROC_COLUMN not in feature:\n feature[PROC_COLUMN] = compute_feature_hash(feature)\n\n\ndef _merge_hyperopt_with_training(config: dict) -> None:\n if \"hyperopt\" not in config:\n return\n\n scheduler = config[\"hyperopt\"].get(\"sampler\", {}).get(\"scheduler\")\n if not scheduler:\n return\n\n if TRAINING not in config:\n config[TRAINING] = {}\n\n # Disable early stopping when using a scheduler. We achieve this by setting the parameter\n # to -1, which ensures the condition to apply early stopping is never met.\n training = config[TRAINING]\n early_stop = training.get(\"early_stop\")\n if early_stop is not None and early_stop != -1:\n raise ValueError(\n \"Cannot set training parameter `early_stop` when using a hyperopt scheduler. \"\n \"Unset this parameter in your config.\"\n )\n training[\"early_stop\"] = -1\n\n # At most one of max_t and epochs may be specified by the user, and we set them to be equal to\n # ensure that Ludwig does not stop training before the scheduler has finished the trial, unless\n # max_t is in time_total_s, in which case we set epochs very high to continue train until stopped.\n max_t = scheduler.get(\"max_t\")\n time_attr = scheduler.get(\"time_attr\")\n epochs = training.get(\"epochs\")\n if max_t is not None and epochs is not None and max_t != epochs and time_attr != \"time_total_s\":\n raise ValueError(\n \"Cannot set training parameter `epochs` when using a hyperopt scheduler with `max_t`. \"\n \"Unset one of these parameters in your config.\"\n )\n elif max_t is not None:\n if time_attr == \"time_total_s\":\n training[\"epochs\"] = sys.maxsize # essentially continue training until stopped\n else:\n training[\"epochs\"] = max_t\n elif epochs is not None:\n scheduler[\"max_t\"] = epochs\n\n\ndef merge_with_defaults(config):\n config = copy.deepcopy(config)\n _perform_sanity_checks(config)\n _set_feature_column(config)\n _set_proc_column(config)\n _merge_hyperopt_with_training(config)\n\n # ===== Preprocessing =====\n config[\"preprocessing\"] = merge_dict(default_preprocessing_parameters, config.get(\"preprocessing\", {}))\n\n stratify = config[\"preprocessing\"][\"stratify\"]\n if stratify is not None:\n features = config[\"input_features\"] + config[\"output_features\"]\n feature_names = {f[COLUMN] for f in features}\n if stratify not in feature_names:\n logger.warning(\"Stratify is not among the features. 
\" \"Cannot establish if it is a binary or category\")\n elif [f for f in features if f[COLUMN] == stratify][0][TYPE] not in {BINARY, CATEGORY}:\n raise ValueError(\"Stratify feature must be binary or category\")\n\n # ===== Training =====\n set_default_value(config, TRAINING, default_training_params)\n\n for param, value in default_training_params.items():\n set_default_value(config[TRAINING], param, value)\n\n set_default_value(\n config[TRAINING],\n \"validation_metric\",\n output_type_registry[config[\"output_features\"][0][TYPE]].default_validation_metric,\n )\n\n # ===== Training Optimizer =====\n optimizer = config[TRAINING][\"optimizer\"]\n default_optimizer_params = get_default_optimizer_params(optimizer[TYPE])\n for param in default_optimizer_params:\n set_default_value(optimizer, param, default_optimizer_params[param])\n\n # ===== Input Features =====\n for input_feature in config[\"input_features\"]:\n get_from_registry(input_feature[TYPE], input_type_registry).populate_defaults(input_feature)\n\n # ===== Combiner =====\n set_default_value(config, \"combiner\", {TYPE: default_combiner_type})\n\n # ===== Output features =====\n for output_feature in config[\"output_features\"]:\n get_from_registry(output_feature[TYPE], output_type_registry).populate_defaults(output_feature)\n\n return config\n\n\ndef render_config(config=None, output=None, **kwargs):\n output_config = merge_with_defaults(config)\n if output is None:\n print(yaml.safe_dump(output_config, None, sort_keys=False))\n else:\n with open(output, \"w\") as f:\n yaml.safe_dump(output_config, f, sort_keys=False)\n\n\ndef cli_render_config(sys_argv):\n parser = argparse.ArgumentParser(\n description=\"This script renders the full config from a user config.\",\n prog=\"ludwig render_config\",\n usage=\"%(prog)s [options]\",\n )\n parser.add_argument(\n \"-c\",\n \"--config\",\n type=load_config_from_str,\n help=\"input user YAML config path\",\n )\n parser.add_argument(\n \"-o\",\n \"--output\",\n type=str,\n help=\"output rendered YAML config path\",\n required=False,\n )\n\n add_contrib_callback_args(parser)\n args = parser.parse_args(sys_argv)\n\n args.callbacks = args.callbacks or []\n for callback in args.callbacks:\n callback.on_cmdline(\"render_config\", *sys_argv)\n\n print_ludwig(\"Render Config\", LUDWIG_VERSION)\n render_config(**vars(args))\n", "path": "ludwig/utils/defaults.py"}], "after_files": [{"content": "#! 
/usr/bin/env python\n# Copyright (c) 2019 Uber Technologies, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\nimport argparse\nimport copy\nimport logging\nimport sys\n\nimport yaml\n\nfrom ludwig.constants import BINARY, CATEGORY, COLUMN, COMBINED, LOSS, NAME, PROC_COLUMN, TRAINING, TYPE\nfrom ludwig.contrib import add_contrib_callback_args\nfrom ludwig.features.feature_registries import base_type_registry, input_type_registry, output_type_registry\nfrom ludwig.features.feature_utils import compute_feature_hash\nfrom ludwig.globals import LUDWIG_VERSION\nfrom ludwig.utils.data_utils import load_config_from_str\nfrom ludwig.utils.misc_utils import get_from_registry, merge_dict, set_default_value\nfrom ludwig.utils.print_utils import print_ludwig\n\nlogger = logging.getLogger(__name__)\n\ndefault_random_seed = 42\n\ndefault_preprocessing_force_split = False\ndefault_preprocessing_split_probabilities = (0.7, 0.1, 0.2)\ndefault_preprocessing_stratify = None\n\ndefault_preprocessing_parameters = {\n \"force_split\": default_preprocessing_force_split,\n \"split_probabilities\": default_preprocessing_split_probabilities,\n \"stratify\": default_preprocessing_stratify,\n}\ndefault_preprocessing_parameters.update(\n {name: base_type.preprocessing_defaults() for name, base_type in base_type_registry.items()}\n)\n\ndefault_combiner_type = \"concat\"\n\ndefault_training_params = {\n \"optimizer\": {TYPE: \"adam\"},\n \"epochs\": 100,\n \"regularization_lambda\": 0,\n \"regularization_type\": \"l2\",\n \"learning_rate\": 0.001,\n \"batch_size\": 128,\n \"eval_batch_size\": None,\n \"early_stop\": 5,\n \"reduce_learning_rate_on_plateau\": 0,\n \"reduce_learning_rate_on_plateau_patience\": 5,\n \"reduce_learning_rate_on_plateau_rate\": 0.5,\n \"increase_batch_size_on_plateau\": 0,\n \"increase_batch_size_on_plateau_patience\": 5,\n \"increase_batch_size_on_plateau_rate\": 2,\n \"increase_batch_size_on_plateau_max\": 512,\n \"decay\": False,\n \"decay_steps\": 10000,\n \"decay_rate\": 0.96,\n \"staircase\": False,\n \"gradient_clipping\": None,\n \"validation_field\": COMBINED,\n \"validation_metric\": LOSS,\n \"bucketing_field\": None,\n \"learning_rate_warmup_epochs\": 1,\n}\n\ndefault_optimizer_params_registry = {\n \"sgd\": {},\n \"stochastic_gradient_descent\": {},\n \"gd\": {},\n \"gradient_descent\": {},\n \"adam\": {\n \"betas\": (0.9, 0.999),\n # 'beta_1': 0.9,\n # 'beta_2': 0.999,\n # 'epsilon': 1e-08\n \"eps\": 1e-08,\n },\n \"adadelta\": {\n \"rho\": 0.95,\n \"eps\": 1e-08\n # 'epsilon': 1e-08\n },\n \"adagrad\": {\"initial_accumulator_value\": 0.1},\n \"adamax\": {},\n \"ftrl\": {\n \"learning_rate_power\": -0.5,\n \"initial_accumulator_value\": 0.1,\n \"l1_regularization_strength\": 0.0,\n \"l2_regularization_strength\": 0.0,\n },\n \"nadam\": {},\n \"rmsprop\": {\n \"weight_decay\": 0.9,\n \"momentum\": 0.0,\n # 'epsilon': 1e-10,\n \"eps\": 1e-10,\n \"centered\": False,\n 
},\n}\ndefault_optimizer_params_registry[\"stochastic_gradient_descent\"] = default_optimizer_params_registry[\"sgd\"]\ndefault_optimizer_params_registry[\"gd\"] = default_optimizer_params_registry[\"sgd\"]\ndefault_optimizer_params_registry[\"gradient_descent\"] = default_optimizer_params_registry[\"sgd\"]\n\n\ndef get_default_optimizer_params(optimizer_type):\n if optimizer_type in default_optimizer_params_registry:\n return default_optimizer_params_registry[optimizer_type]\n else:\n raise ValueError(\"Incorrect optimizer type: \" + optimizer_type)\n\n\ndef _perform_sanity_checks(config):\n assert \"input_features\" in config, \"config does not define any input features\"\n\n assert \"output_features\" in config, \"config does not define any output features\"\n\n assert isinstance(config[\"input_features\"], list), (\n \"Ludwig expects input features in a list. Check your model \" \"config format\"\n )\n\n assert isinstance(config[\"output_features\"], list), (\n \"Ludwig expects output features in a list. Check your model \" \"config format\"\n )\n\n assert len(config[\"input_features\"]) > 0, \"config needs to have at least one input feature\"\n\n assert len(config[\"output_features\"]) > 0, \"config needs to have at least one output feature\"\n\n if TRAINING in config:\n assert isinstance(config[TRAINING], dict), (\n \"There is an issue while reading the training section of the \"\n \"config. The parameters are expected to be\"\n \"read as a dictionary. Please check your config format.\"\n )\n\n if \"preprocessing\" in config:\n assert isinstance(config[\"preprocessing\"], dict), (\n \"There is an issue while reading the preprocessing section of the \"\n \"config. The parameters are expected to be read\"\n \"as a dictionary. Please check your config format.\"\n )\n\n if \"combiner\" in config:\n assert isinstance(config[\"combiner\"], dict), (\n \"There is an issue while reading the combiner section of the \"\n \"config. The parameters are expected to be read\"\n \"as a dictionary. Please check your config format.\"\n )\n\n\ndef _set_feature_column(config: dict) -> None:\n for feature in config[\"input_features\"] + config[\"output_features\"]:\n if COLUMN not in feature:\n feature[COLUMN] = feature[NAME]\n\n\ndef _set_proc_column(config: dict) -> None:\n for feature in config[\"input_features\"] + config[\"output_features\"]:\n if PROC_COLUMN not in feature:\n feature[PROC_COLUMN] = compute_feature_hash(feature)\n\n\ndef _merge_hyperopt_with_training(config: dict) -> None:\n if \"hyperopt\" not in config:\n return\n\n scheduler = config[\"hyperopt\"].get(\"sampler\", {}).get(\"scheduler\")\n if not scheduler:\n return\n\n if TRAINING not in config:\n config[TRAINING] = {}\n\n # Disable early stopping when using a scheduler. We achieve this by setting the parameter\n # to -1, which ensures the condition to apply early stopping is never met.\n training = config[TRAINING]\n early_stop = training.get(\"early_stop\")\n if early_stop is not None and early_stop != -1:\n raise ValueError(\n \"Cannot set training parameter `early_stop` when using a hyperopt scheduler. 
\"\n \"Unset this parameter in your config.\"\n )\n training[\"early_stop\"] = -1\n\n # At most one of max_t and epochs may be specified by the user, and we set them to be equal to\n # ensure that Ludwig does not stop training before the scheduler has finished the trial, unless\n # max_t is in time_total_s, in which case we set epochs very high to continue train until stopped.\n max_t = scheduler.get(\"max_t\")\n time_attr = scheduler.get(\"time_attr\")\n epochs = training.get(\"epochs\")\n if max_t is not None and epochs is not None and max_t != epochs and time_attr != \"time_total_s\":\n raise ValueError(\n \"Cannot set training parameter `epochs` when using a hyperopt scheduler with `max_t`. \"\n \"Unset one of these parameters in your config.\"\n )\n elif max_t is not None:\n if time_attr == \"time_total_s\":\n training[\"epochs\"] = sys.maxsize # essentially continue training until stopped\n else:\n training[\"epochs\"] = max_t\n elif epochs is not None:\n scheduler[\"max_t\"] = epochs\n\n\ndef merge_with_defaults(config):\n config = copy.deepcopy(config)\n _perform_sanity_checks(config)\n _set_feature_column(config)\n _set_proc_column(config)\n _merge_hyperopt_with_training(config)\n\n # ===== Preprocessing =====\n config[\"preprocessing\"] = merge_dict(default_preprocessing_parameters, config.get(\"preprocessing\", {}))\n\n stratify = config[\"preprocessing\"][\"stratify\"]\n if stratify is not None:\n features = config[\"input_features\"] + config[\"output_features\"]\n feature_names = {f[COLUMN] for f in features}\n if stratify not in feature_names:\n logger.warning(\"Stratify is not among the features. \" \"Cannot establish if it is a binary or category\")\n elif [f for f in features if f[COLUMN] == stratify][0][TYPE] not in {BINARY, CATEGORY}:\n raise ValueError(\"Stratify feature must be binary or category\")\n\n # ===== Training =====\n set_default_value(config, TRAINING, default_training_params)\n\n for param, value in default_training_params.items():\n set_default_value(config[TRAINING], param, value)\n\n set_default_value(\n config[TRAINING],\n \"validation_metric\",\n output_type_registry[config[\"output_features\"][0][TYPE]].default_validation_metric,\n )\n\n # ===== Training Optimizer =====\n optimizer = config[TRAINING][\"optimizer\"]\n set_default_value(optimizer, \"lr\", config[TRAINING][\"learning_rate\"])\n default_optimizer_params = get_default_optimizer_params(optimizer[TYPE])\n for param in default_optimizer_params:\n set_default_value(optimizer, param, default_optimizer_params[param])\n\n # ===== Input Features =====\n for input_feature in config[\"input_features\"]:\n get_from_registry(input_feature[TYPE], input_type_registry).populate_defaults(input_feature)\n\n # ===== Combiner =====\n set_default_value(config, \"combiner\", {TYPE: default_combiner_type})\n\n # ===== Output features =====\n for output_feature in config[\"output_features\"]:\n get_from_registry(output_feature[TYPE], output_type_registry).populate_defaults(output_feature)\n\n return config\n\n\ndef render_config(config=None, output=None, **kwargs):\n output_config = merge_with_defaults(config)\n if output is None:\n print(yaml.safe_dump(output_config, None, sort_keys=False))\n else:\n with open(output, \"w\") as f:\n yaml.safe_dump(output_config, f, sort_keys=False)\n\n\ndef cli_render_config(sys_argv):\n parser = argparse.ArgumentParser(\n description=\"This script renders the full config from a user config.\",\n prog=\"ludwig render_config\",\n usage=\"%(prog)s [options]\",\n )\n 
parser.add_argument(\n \"-c\",\n \"--config\",\n type=load_config_from_str,\n help=\"input user YAML config path\",\n )\n parser.add_argument(\n \"-o\",\n \"--output\",\n type=str,\n help=\"output rendered YAML config path\",\n required=False,\n )\n\n add_contrib_callback_args(parser)\n args = parser.parse_args(sys_argv)\n\n args.callbacks = args.callbacks or []\n for callback in args.callbacks:\n callback.on_cmdline(\"render_config\", *sys_argv)\n\n print_ludwig(\"Render Config\", LUDWIG_VERSION)\n render_config(**vars(args))\n", "path": "ludwig/utils/defaults.py"}]}
| 3,995 | 279 |
gh_patches_debug_20082 | rasdani/github-patches | git_diff | deepchecks__deepchecks-600 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Top Trust Score Samples produces an error
**Describe the bug**
The `Top Trust Score Samples` check produces an error. The full stack trace of the error is attached below.
**To Reproduce**
Checkout my `urlnb` branch, which is up-to-date with `main` at the moment of opening this issue, and run the `deepchecks/deepchecks/docs/source/examples/use-cases/phishing_urls.ipynb` notebook.
The full stack trace of the error is attached below.
**Expected behavior**
The `Top Trust Score Samples` check should render correctly, as it did up until now.
**Screenshots**
If applicable, add screenshots to help explain your problem.


**Environment (please complete the following information):**
- OS: macOS Big Sur 11.6.1
- Python Version: 3.8.5
- Deepchecks Version: Latest commit on the `urlnb` branch.
**Additional context**
Log here:
```python
Top Trust Score Samples
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~/.pyenv/versions/3.8.5/envs/py3/lib/python3.8/site-packages/IPython/core/formatters.py in __call__(self, obj)
916 method = get_real_method(obj, self.print_method)
917 if method is not None:
--> 918 method()
919 return True
920
~/clones/deepchecks/deepchecks/base/suite.py in _ipython_display_(self)
42
43 def _ipython_display_(self):
---> 44 display_suite_result(self.name, self.results)
45
46 def show(self):
~/clones/deepchecks/deepchecks/base/display_suite.py in display_suite_result(suite_name, results)
140 if display_table:
141 for i, r in enumerate(display_table):
--> 142 r.show(show_conditions=False, unique_id=unique_id)
143 if i < len(display_table) - 1:
144 display_html(light_hr, raw=True)
~/clones/deepchecks/deepchecks/base/check.py in show(self, show_conditions, unique_id)
173 """Display check result."""
174 if is_ipython_display():
--> 175 self._ipython_display_(show_conditions=show_conditions, unique_id=unique_id)
176 else:
177 print(self)
~/clones/deepchecks/deepchecks/base/check.py in _ipython_display_(self, show_conditions, unique_id)
103 for item in self.display:
104 if isinstance(item, (pd.DataFrame, Styler)):
--> 105 display_dataframe(item)
106 elif isinstance(item, str):
107 display_html(item, raw=True)
~/clones/deepchecks/deepchecks/base/display_pandas.py in display_dataframe(df)
29 df (Union[pd.DataFrame, Styler]): Dataframe to display
30 """
---> 31 display_html(dataframe_to_html(df), raw=True)
32
33
~/clones/deepchecks/deepchecks/base/display_pandas.py in dataframe_to_html(df)
53 # https://developer.mozilla.org/en-US/docs/Web/CSS/white-space#values
54 df_styler.set_properties(**{'white-space': 'pre-wrap'})
---> 55 return df_styler.render()
56 # Because of MLC-154. Dataframe with Multi-index or non unique indices does not have a style
57 # attribute, hence we need to display as a regular pd html format.
~/.pyenv/versions/3.8.5/envs/py3/lib/python3.8/site-packages/pandas/io/formats/style.py in render(self, sparse_index, sparse_columns, **kwargs)
270 if sparse_columns is None:
271 sparse_columns = get_option("styler.sparse.columns")
--> 272 return self._render_html(sparse_index, sparse_columns, **kwargs)
273
274 def set_tooltips(
~/.pyenv/versions/3.8.5/envs/py3/lib/python3.8/site-packages/pandas/io/formats/style_render.py in _render_html(self, sparse_index, sparse_columns, **kwargs)
119 Generates a dict with necessary kwargs passed to jinja2 template.
120 """
--> 121 self._compute()
122 # TODO: namespace all the pandas keys
123 d = self._translate(sparse_index, sparse_columns)
~/.pyenv/versions/3.8.5/envs/py3/lib/python3.8/site-packages/pandas/io/formats/style_render.py in _compute(self)
158 r = self
159 for func, args, kwargs in self._todo:
--> 160 r = func(self)(*args, **kwargs)
161 return r
162
~/.pyenv/versions/3.8.5/envs/py3/lib/python3.8/site-packages/pandas/io/formats/style.py in _applymap(self, func, subset, **kwargs)
1171 subset = non_reducing_slice(subset)
1172 result = self.data.loc[subset].applymap(func)
-> 1173 self._update_ctx(result)
1174 return self
1175
~/.pyenv/versions/3.8.5/envs/py3/lib/python3.8/site-packages/pandas/io/formats/style.py in _update_ctx(self, attrs)
953 """
954 if not self.index.is_unique or not self.columns.is_unique:
--> 955 raise KeyError(
956 "`Styler.apply` and `.applymap` are not compatible "
957 "with non-unique index or columns."
KeyError: '`Styler.apply` and `.applymap` are not compatible with non-unique index or columns.'
Model Evaluation Suite
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deepchecks/base/display_pandas.py`
Content:
```
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """Handle displays of pandas objects."""
12 from typing import List, Union
13 import warnings
14
15 from IPython.core.display import display_html
16 import pandas as pd
17 from pandas.io.formats.style import Styler
18
19 from . import check # pylint: disable=unused-import
20
21
22 __all__ = ['display_dataframe', 'dataframe_to_html', 'display_conditions_table']
23
24
25 def display_dataframe(df: Union[pd.DataFrame, Styler]):
26 """Display in IPython given dataframe.
27
28 Args:
29 df (Union[pd.DataFrame, Styler]): Dataframe to display
30 """
31 display_html(dataframe_to_html(df), raw=True)
32
33
34 def dataframe_to_html(df: Union[pd.DataFrame, Styler]):
35 """Convert dataframe to html.
36
37 Args:
38 df (Union[pd.DataFrame, Styler]): Dataframe to convert to html
39 """
40 try:
41 if isinstance(df, pd.DataFrame):
42 df_styler = df.style
43 else:
44 df_styler = df
45 # Using deprecated pandas method so hiding the warning
46 with warnings.catch_warnings():
47 warnings.simplefilter(action='ignore', category=FutureWarning)
48 df_styler.set_precision(2)
49
50 # Align everything to the left
51 df_styler.set_table_styles([dict(selector='table,thead,tbody,th,td', props=[('text-align', 'left')])])
52 # Define how to handle white space characters (like \n)
53 # https://developer.mozilla.org/en-US/docs/Web/CSS/white-space#values
54 df_styler.set_properties(**{'white-space': 'pre-wrap'})
55 return df_styler.render()
56 # Because of MLC-154. Dataframe with Multi-index or non unique indices does not have a style
57 # attribute, hence we need to display as a regular pd html format.
58 except ValueError:
59 return df.to_html()
60
61
62 def display_conditions_table(check_results: Union['check.CheckResult', List['check.CheckResult']],
63 unique_id=None):
64 """Display the conditions table as DataFrame.
65
66 Args:
67 check_results (Union['CheckResult', List['CheckResult']]): check results to show conditions of.
68 unique_id (str): the unique id to append for the check names to create links
69 (won't create links if None/empty).
70 """
71 if not isinstance(check_results, List):
72 show_check_column = False
73 check_results = [check_results]
74 else:
75 show_check_column = True
76
77 table = []
78 for check_result in check_results:
79 for cond_result in check_result.conditions_results:
80 sort_value = cond_result.priority
81 icon = cond_result.get_icon()
82 check_header = check_result.get_header()
83 if unique_id and check_result.have_display():
84 check_id = f'{check_result.check.__class__.__name__}_{unique_id}'
85 link = f'<a href=#{check_id}>{check_header}</a>'
86 else:
87 link = check_header
88 sort_value = 1 if sort_value == 1 else 5 # if it failed but has no display still show on top
89 table.append([icon, link, cond_result.name,
90 cond_result.details, sort_value])
91
92 conditions_table = pd.DataFrame(data=table,
93 columns=['Status', 'Check', 'Condition', 'More Info', 'sort'])
94 conditions_table.sort_values(by=['sort'], inplace=True)
95 conditions_table.drop('sort', axis=1, inplace=True)
96 if show_check_column is False:
97 conditions_table.drop('Check', axis=1, inplace=True)
98 display_dataframe(conditions_table.style.hide_index())
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/deepchecks/base/display_pandas.py b/deepchecks/base/display_pandas.py
--- a/deepchecks/base/display_pandas.py
+++ b/deepchecks/base/display_pandas.py
@@ -47,11 +47,11 @@
warnings.simplefilter(action='ignore', category=FutureWarning)
df_styler.set_precision(2)
- # Align everything to the left
- df_styler.set_table_styles([dict(selector='table,thead,tbody,th,td', props=[('text-align', 'left')])])
- # Define how to handle white space characters (like \n)
- # https://developer.mozilla.org/en-US/docs/Web/CSS/white-space#values
- df_styler.set_properties(**{'white-space': 'pre-wrap'})
+ table_css_props = [
+ ('text-align', 'left'), # Align everything to the left
+ ('white-space', 'pre-wrap') # Define how to handle white space characters (like \n)
+ ]
+ df_styler.set_table_styles([dict(selector='table,thead,tbody,th,td', props=table_css_props)])
return df_styler.render()
# Because of MLC-154. Dataframe with Multi-index or non unique indices does not have a style
# attribute, hence we need to display as a regular pd html format.
|
{"golden_diff": "diff --git a/deepchecks/base/display_pandas.py b/deepchecks/base/display_pandas.py\n--- a/deepchecks/base/display_pandas.py\n+++ b/deepchecks/base/display_pandas.py\n@@ -47,11 +47,11 @@\n warnings.simplefilter(action='ignore', category=FutureWarning)\n df_styler.set_precision(2)\n \n- # Align everything to the left\n- df_styler.set_table_styles([dict(selector='table,thead,tbody,th,td', props=[('text-align', 'left')])])\n- # Define how to handle white space characters (like \\n)\n- # https://developer.mozilla.org/en-US/docs/Web/CSS/white-space#values\n- df_styler.set_properties(**{'white-space': 'pre-wrap'})\n+ table_css_props = [\n+ ('text-align', 'left'), # Align everything to the left\n+ ('white-space', 'pre-wrap') # Define how to handle white space characters (like \\n)\n+ ]\n+ df_styler.set_table_styles([dict(selector='table,thead,tbody,th,td', props=table_css_props)])\n return df_styler.render()\n # Because of MLC-154. Dataframe with Multi-index or non unique indices does not have a style\n # attribute, hence we need to display as a regular pd html format.\n", "issue": "[BUG] Top Trust Score Samples produces an error\n**Describe the bug**\r\nThe `Top Trust Score Samples` check produces an error. The full stack trace of the error is attached below.\r\n\r\n**To Reproduce**\r\nCheckout my `urlnb` branch, which is up-to-date with `main` at the moment of opening this issue, and run the `deepchecks/deepchecks/docs/source/examples/use-cases/phishing_urls.ipynb` notebook.\r\n\r\nThe full stack trace of the error is attached below.\r\n\r\n**Expected behavior**\r\nThe `Top Trust Score Samples` check should render correctly, as it did up until now.\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n\r\n\r\n\r\n**Environment (please complete the following information):**\r\n - OS: macOS Big Sur 11.6.1 \r\n - Python Version: 3.8.5\r\n - Deepchecks Version: Latest commit on the `urlnb` branch.\r\n\r\n**Additional context**\r\nLog here:\r\n```python\r\nTop Trust Score Samples\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n~/.pyenv/versions/3.8.5/envs/py3/lib/python3.8/site-packages/IPython/core/formatters.py in __call__(self, obj)\r\n 916 method = get_real_method(obj, self.print_method)\r\n 917 if method is not None:\r\n--> 918 method()\r\n 919 return True\r\n 920 \r\n\r\n~/clones/deepchecks/deepchecks/base/suite.py in _ipython_display_(self)\r\n 42 \r\n 43 def _ipython_display_(self):\r\n---> 44 display_suite_result(self.name, self.results)\r\n 45 \r\n 46 def show(self):\r\n\r\n~/clones/deepchecks/deepchecks/base/display_suite.py in display_suite_result(suite_name, results)\r\n 140 if display_table:\r\n 141 for i, r in enumerate(display_table):\r\n--> 142 r.show(show_conditions=False, unique_id=unique_id)\r\n 143 if i < len(display_table) - 1:\r\n 144 display_html(light_hr, raw=True)\r\n\r\n~/clones/deepchecks/deepchecks/base/check.py in show(self, show_conditions, unique_id)\r\n 173 \"\"\"Display check result.\"\"\"\r\n 174 if is_ipython_display():\r\n--> 175 self._ipython_display_(show_conditions=show_conditions, unique_id=unique_id)\r\n 176 else:\r\n 177 print(self)\r\n\r\n~/clones/deepchecks/deepchecks/base/check.py in _ipython_display_(self, show_conditions, unique_id)\r\n 103 for item in self.display:\r\n 104 if isinstance(item, (pd.DataFrame, Styler)):\r\n--> 105 display_dataframe(item)\r\n 106 elif isinstance(item, str):\r\n 107 display_html(item, 
raw=True)\r\n\r\n~/clones/deepchecks/deepchecks/base/display_pandas.py in display_dataframe(df)\r\n 29 df (Union[pd.DataFrame, Styler]): Dataframe to display\r\n 30 \"\"\"\r\n---> 31 display_html(dataframe_to_html(df), raw=True)\r\n 32 \r\n 33 \r\n\r\n~/clones/deepchecks/deepchecks/base/display_pandas.py in dataframe_to_html(df)\r\n 53 # https://developer.mozilla.org/en-US/docs/Web/CSS/white-space#values\r\n 54 df_styler.set_properties(**{'white-space': 'pre-wrap'})\r\n---> 55 return df_styler.render()\r\n 56 # Because of MLC-154. Dataframe with Multi-index or non unique indices does not have a style\r\n 57 # attribute, hence we need to display as a regular pd html format.\r\n\r\n~/.pyenv/versions/3.8.5/envs/py3/lib/python3.8/site-packages/pandas/io/formats/style.py in render(self, sparse_index, sparse_columns, **kwargs)\r\n 270 if sparse_columns is None:\r\n 271 sparse_columns = get_option(\"styler.sparse.columns\")\r\n--> 272 return self._render_html(sparse_index, sparse_columns, **kwargs)\r\n 273 \r\n 274 def set_tooltips(\r\n\r\n~/.pyenv/versions/3.8.5/envs/py3/lib/python3.8/site-packages/pandas/io/formats/style_render.py in _render_html(self, sparse_index, sparse_columns, **kwargs)\r\n 119 Generates a dict with necessary kwargs passed to jinja2 template.\r\n 120 \"\"\"\r\n--> 121 self._compute()\r\n 122 # TODO: namespace all the pandas keys\r\n 123 d = self._translate(sparse_index, sparse_columns)\r\n\r\n~/.pyenv/versions/3.8.5/envs/py3/lib/python3.8/site-packages/pandas/io/formats/style_render.py in _compute(self)\r\n 158 r = self\r\n 159 for func, args, kwargs in self._todo:\r\n--> 160 r = func(self)(*args, **kwargs)\r\n 161 return r\r\n 162 \r\n\r\n~/.pyenv/versions/3.8.5/envs/py3/lib/python3.8/site-packages/pandas/io/formats/style.py in _applymap(self, func, subset, **kwargs)\r\n 1171 subset = non_reducing_slice(subset)\r\n 1172 result = self.data.loc[subset].applymap(func)\r\n-> 1173 self._update_ctx(result)\r\n 1174 return self\r\n 1175 \r\n\r\n~/.pyenv/versions/3.8.5/envs/py3/lib/python3.8/site-packages/pandas/io/formats/style.py in _update_ctx(self, attrs)\r\n 953 \"\"\"\r\n 954 if not self.index.is_unique or not self.columns.is_unique:\r\n--> 955 raise KeyError(\r\n 956 \"`Styler.apply` and `.applymap` are not compatible \"\r\n 957 \"with non-unique index or columns.\"\r\n\r\nKeyError: '`Styler.apply` and `.applymap` are not compatible with non-unique index or columns.'\r\n\r\nModel Evaluation Suite\r\n```\r\n\n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"Handle displays of pandas objects.\"\"\"\nfrom typing import List, Union\nimport warnings\n\nfrom IPython.core.display import display_html\nimport pandas as pd\nfrom pandas.io.formats.style import Styler\n\nfrom . 
import check # pylint: disable=unused-import\n\n\n__all__ = ['display_dataframe', 'dataframe_to_html', 'display_conditions_table']\n\n\ndef display_dataframe(df: Union[pd.DataFrame, Styler]):\n \"\"\"Display in IPython given dataframe.\n\n Args:\n df (Union[pd.DataFrame, Styler]): Dataframe to display\n \"\"\"\n display_html(dataframe_to_html(df), raw=True)\n\n\ndef dataframe_to_html(df: Union[pd.DataFrame, Styler]):\n \"\"\"Convert dataframe to html.\n\n Args:\n df (Union[pd.DataFrame, Styler]): Dataframe to convert to html\n \"\"\"\n try:\n if isinstance(df, pd.DataFrame):\n df_styler = df.style\n else:\n df_styler = df\n # Using deprecated pandas method so hiding the warning\n with warnings.catch_warnings():\n warnings.simplefilter(action='ignore', category=FutureWarning)\n df_styler.set_precision(2)\n\n # Align everything to the left\n df_styler.set_table_styles([dict(selector='table,thead,tbody,th,td', props=[('text-align', 'left')])])\n # Define how to handle white space characters (like \\n)\n # https://developer.mozilla.org/en-US/docs/Web/CSS/white-space#values\n df_styler.set_properties(**{'white-space': 'pre-wrap'})\n return df_styler.render()\n # Because of MLC-154. Dataframe with Multi-index or non unique indices does not have a style\n # attribute, hence we need to display as a regular pd html format.\n except ValueError:\n return df.to_html()\n\n\ndef display_conditions_table(check_results: Union['check.CheckResult', List['check.CheckResult']],\n unique_id=None):\n \"\"\"Display the conditions table as DataFrame.\n\n Args:\n check_results (Union['CheckResult', List['CheckResult']]): check results to show conditions of.\n unique_id (str): the unique id to append for the check names to create links\n (won't create links if None/empty).\n \"\"\"\n if not isinstance(check_results, List):\n show_check_column = False\n check_results = [check_results]\n else:\n show_check_column = True\n\n table = []\n for check_result in check_results:\n for cond_result in check_result.conditions_results:\n sort_value = cond_result.priority\n icon = cond_result.get_icon()\n check_header = check_result.get_header()\n if unique_id and check_result.have_display():\n check_id = f'{check_result.check.__class__.__name__}_{unique_id}'\n link = f'<a href=#{check_id}>{check_header}</a>'\n else:\n link = check_header\n sort_value = 1 if sort_value == 1 else 5 # if it failed but has no display still show on top\n table.append([icon, link, cond_result.name,\n cond_result.details, sort_value])\n\n conditions_table = pd.DataFrame(data=table,\n columns=['Status', 'Check', 'Condition', 'More Info', 'sort'])\n conditions_table.sort_values(by=['sort'], inplace=True)\n conditions_table.drop('sort', axis=1, inplace=True)\n if show_check_column is False:\n conditions_table.drop('Check', axis=1, inplace=True)\n display_dataframe(conditions_table.style.hide_index())\n", "path": "deepchecks/base/display_pandas.py"}], "after_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. 
If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"Handle displays of pandas objects.\"\"\"\nfrom typing import List, Union\nimport warnings\n\nfrom IPython.core.display import display_html\nimport pandas as pd\nfrom pandas.io.formats.style import Styler\n\nfrom . import check # pylint: disable=unused-import\n\n\n__all__ = ['display_dataframe', 'dataframe_to_html', 'display_conditions_table']\n\n\ndef display_dataframe(df: Union[pd.DataFrame, Styler]):\n \"\"\"Display in IPython given dataframe.\n\n Args:\n df (Union[pd.DataFrame, Styler]): Dataframe to display\n \"\"\"\n display_html(dataframe_to_html(df), raw=True)\n\n\ndef dataframe_to_html(df: Union[pd.DataFrame, Styler]):\n \"\"\"Convert dataframe to html.\n\n Args:\n df (Union[pd.DataFrame, Styler]): Dataframe to convert to html\n \"\"\"\n try:\n if isinstance(df, pd.DataFrame):\n df_styler = df.style\n else:\n df_styler = df\n # Using deprecated pandas method so hiding the warning\n with warnings.catch_warnings():\n warnings.simplefilter(action='ignore', category=FutureWarning)\n df_styler.set_precision(2)\n\n table_css_props = [\n ('text-align', 'left'), # Align everything to the left\n ('white-space', 'pre-wrap') # Define how to handle white space characters (like \\n)\n ]\n df_styler.set_table_styles([dict(selector='table,thead,tbody,th,td', props=table_css_props)])\n return df_styler.render()\n # Because of MLC-154. Dataframe with Multi-index or non unique indices does not have a style\n # attribute, hence we need to display as a regular pd html format.\n except ValueError:\n return df.to_html()\n\n\ndef display_conditions_table(check_results: Union['check.CheckResult', List['check.CheckResult']],\n unique_id=None):\n \"\"\"Display the conditions table as DataFrame.\n\n Args:\n check_results (Union['CheckResult', List['CheckResult']]): check results to show conditions of.\n unique_id (str): the unique id to append for the check names to create links\n (won't create links if None/empty).\n \"\"\"\n if not isinstance(check_results, List):\n show_check_column = False\n check_results = [check_results]\n else:\n show_check_column = True\n\n table = []\n for check_result in check_results:\n for cond_result in check_result.conditions_results:\n sort_value = cond_result.priority\n icon = cond_result.get_icon()\n check_header = check_result.get_header()\n if unique_id and check_result.have_display():\n check_id = f'{check_result.check.__class__.__name__}_{unique_id}'\n link = f'<a href=#{check_id}>{check_header}</a>'\n else:\n link = check_header\n sort_value = 1 if sort_value == 1 else 5 # if it failed but has no display still show on top\n table.append([icon, link, cond_result.name,\n cond_result.details, sort_value])\n\n conditions_table = pd.DataFrame(data=table,\n columns=['Status', 'Check', 'Condition', 'More Info', 'sort'])\n conditions_table.sort_values(by=['sort'], inplace=True)\n conditions_table.drop('sort', axis=1, inplace=True)\n if show_check_column is False:\n conditions_table.drop('Check', axis=1, inplace=True)\n display_dataframe(conditions_table.style.hide_index())\n", "path": "deepchecks/base/display_pandas.py"}]}
| 2,852 | 307 |
gh_patches_debug_3684
|
rasdani/github-patches
|
git_diff
|
napari__napari-5474
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Visual Bug: Labels Layer Controls get squished when toggling 3D
## 🐛 Bug
When toggling to 3D, the labels layer's Layer Controls widget gains an extra `rendering` row.
However, the widget doesn't resize to accommodate it, so everything ends up squished and partially cut off:
<img width="267" alt="image" src="https://user-images.githubusercontent.com/76622105/212083289-a7333963-f66a-4875-bd11-e49965ef7a77.png">
If you manually expand the widget, it looks fine. However, unlike the 2D version of the widget, it can then be resized vertically until it is too small, squishing the contents again.
## To Reproduce
Steps to reproduce the behavior:
1. open napari
2. make a labels layer (can be empty)
3. toggle 3D
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
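
A minimal reproduction sketch (not part of the original report; it assumes napari's public Python API and that an empty labels layer is enough to trigger the squashing):

```python
import numpy as np
import napari

viewer = napari.Viewer()
# an empty labels layer is enough; the extra "rendering" row only appears in 3D
viewer.add_labels(np.zeros((64, 64), dtype=int))
viewer.dims.ndisplay = 3  # toggle to 3D; the Labels controls widget becomes squished
napari.run()
```
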
## Expected behavior
The widget should either resize to permit the extra line item or start out sufficiently large that when the line item is added the visual isn't squished.
## Environment
macOS 13.1, pyqt5, 0.4.17
## Additional context
<!-- Add any other context about the problem here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `napari/_qt/layer_controls/qt_layer_controls_container.py`
Content:
```
1 from qtpy.QtWidgets import QFrame, QStackedWidget
2
3 from napari._qt.layer_controls.qt_image_controls import QtImageControls
4 from napari._qt.layer_controls.qt_labels_controls import QtLabelsControls
5 from napari._qt.layer_controls.qt_points_controls import QtPointsControls
6 from napari._qt.layer_controls.qt_shapes_controls import QtShapesControls
7 from napari._qt.layer_controls.qt_surface_controls import QtSurfaceControls
8 from napari._qt.layer_controls.qt_tracks_controls import QtTracksControls
9 from napari._qt.layer_controls.qt_vectors_controls import QtVectorsControls
10 from napari.layers import (
11 Image,
12 Labels,
13 Points,
14 Shapes,
15 Surface,
16 Tracks,
17 Vectors,
18 )
19 from napari.utils import config
20 from napari.utils.translations import trans
21
22 layer_to_controls = {
23 Labels: QtLabelsControls,
24 Image: QtImageControls,
25 Points: QtPointsControls,
26 Shapes: QtShapesControls,
27 Surface: QtSurfaceControls,
28 Vectors: QtVectorsControls,
29 Tracks: QtTracksControls,
30 }
31
32 if config.async_loading:
33 from napari.layers.image.experimental.octree_image import _OctreeImageBase
34
35 # The user visible layer controls for OctreeImage layers are identical
36 # to the regular image layer controls, for now.
37 layer_to_controls[_OctreeImageBase] = QtImageControls
38
39
40 def create_qt_layer_controls(layer):
41 """
42 Create a qt controls widget for a layer based on its layer type.
43
44 In case of a subclass, the type higher in the layer's method resolution
45 order will be used.
46
47 Parameters
48 ----------
49 layer : napari.layers._base_layer.Layer
50 Layer that needs its controls widget created.
51
52 Returns
53 -------
54 controls : napari.layers.base.QtLayerControls
55 Qt controls widget
56 """
57 candidates = []
58 for layer_type in layer_to_controls:
59 if isinstance(layer, layer_type):
60 candidates.append(layer_type)
61
62 if not candidates:
63 raise TypeError(
64 trans._(
65 'Could not find QtControls for layer of type {type_}',
66 deferred=True,
67 type_=type(layer),
68 )
69 )
70
71 layer_cls = layer.__class__
72 # Sort the list of candidates by 'lineage'
73 candidates.sort(key=lambda layer_type: layer_cls.mro().index(layer_type))
74 controls = layer_to_controls[candidates[0]]
75 return controls(layer)
76
77
78 class QtLayerControlsContainer(QStackedWidget):
79 """Container widget for QtLayerControl widgets.
80
81 Parameters
82 ----------
83 viewer : napari.components.ViewerModel
84 Napari viewer containing the rendered scene, layers, and controls.
85
86 Attributes
87 ----------
88 empty_widget : qtpy.QtWidgets.QFrame
89 Empty placeholder frame for when no layer is selected.
90 viewer : napari.components.ViewerModel
91 Napari viewer containing the rendered scene, layers, and controls.
92 widgets : dict
93 Dictionary of key value pairs matching layer with its widget controls.
94 widgets[layer] = controls
95 """
96
97 def __init__(self, viewer):
98 super().__init__()
99 self.setProperty("emphasized", True)
100 self.viewer = viewer
101
102 self.setMouseTracking(True)
103 self.empty_widget = QFrame()
104 self.widgets = {}
105 self.addWidget(self.empty_widget)
106 self.setCurrentWidget(self.empty_widget)
107
108 self.viewer.layers.events.inserted.connect(self._add)
109 self.viewer.layers.events.removed.connect(self._remove)
110 viewer.layers.selection.events.active.connect(self._display)
111
112 def _display(self, event):
113 """Change the displayed controls to be those of the target layer.
114
115 Parameters
116 ----------
117 event : Event
118 Event with the target layer at `event.item`.
119 """
120 layer = event.value
121 if layer is None:
122 self.setCurrentWidget(self.empty_widget)
123 else:
124 controls = self.widgets[layer]
125 self.setCurrentWidget(controls)
126
127 def _add(self, event):
128 """Add the controls target layer to the list of control widgets.
129
130 Parameters
131 ----------
132 event : Event
133 Event with the target layer at `event.value`.
134 """
135 layer = event.value
136 controls = create_qt_layer_controls(layer)
137 self.addWidget(controls)
138 self.widgets[layer] = controls
139
140 def _remove(self, event):
141 """Remove the controls target layer from the list of control widgets.
142
143 Parameters
144 ----------
145 event : Event
146 Event with the target layer at `event.value`.
147 """
148 layer = event.value
149 controls = self.widgets[layer]
150 self.removeWidget(controls)
151 # controls.close()
152 controls.hide()
153 controls.deleteLater()
154 controls = None
155 del self.widgets[layer]
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/napari/_qt/layer_controls/qt_layer_controls_container.py b/napari/_qt/layer_controls/qt_layer_controls_container.py
--- a/napari/_qt/layer_controls/qt_layer_controls_container.py
+++ b/napari/_qt/layer_controls/qt_layer_controls_container.py
@@ -101,6 +101,7 @@
self.setMouseTracking(True)
self.empty_widget = QFrame()
+ self.empty_widget.setObjectName("empty_controls_widget")
self.widgets = {}
self.addWidget(self.empty_widget)
self.setCurrentWidget(self.empty_widget)
|
{"golden_diff": "diff --git a/napari/_qt/layer_controls/qt_layer_controls_container.py b/napari/_qt/layer_controls/qt_layer_controls_container.py\n--- a/napari/_qt/layer_controls/qt_layer_controls_container.py\n+++ b/napari/_qt/layer_controls/qt_layer_controls_container.py\n@@ -101,6 +101,7 @@\n \n self.setMouseTracking(True)\n self.empty_widget = QFrame()\n+ self.empty_widget.setObjectName(\"empty_controls_widget\")\n self.widgets = {}\n self.addWidget(self.empty_widget)\n self.setCurrentWidget(self.empty_widget)\n", "issue": "Visual Bug: Labels Layer Controls get squished when toggling 3D\n## \ud83d\udc1b Bug\r\n\r\nWhen toggling to 3D, the labels layer Layer Controls widget gains an extra line `rendering`.\r\nHowever the widget doesn't resize for this, so it results in a visual bug of everything squished and partially cut off:\r\n<img width=\"267\" alt=\"image\" src=\"https://user-images.githubusercontent.com/76622105/212083289-a7333963-f66a-4875-bd11-e49965ef7a77.png\">\r\n\r\nIf you manually expand the widget, it will look fine. However, in contrast to the 2D version of the widget, it will let you resize it vertically to be too small, squishing the contents again.\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. open napari\r\n2. make a labels layer (can be empty)\r\n3. toggle 3D\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\n\r\nThe widget should either resize to permit the extra line item or start out sufficiently large that when the line item is added the visual isn't squished.\r\n\r\n## Environment\r\n\r\nmacOS 13.1, pyqt5, 0.4.17\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\n", "before_files": [{"content": "from qtpy.QtWidgets import QFrame, QStackedWidget\n\nfrom napari._qt.layer_controls.qt_image_controls import QtImageControls\nfrom napari._qt.layer_controls.qt_labels_controls import QtLabelsControls\nfrom napari._qt.layer_controls.qt_points_controls import QtPointsControls\nfrom napari._qt.layer_controls.qt_shapes_controls import QtShapesControls\nfrom napari._qt.layer_controls.qt_surface_controls import QtSurfaceControls\nfrom napari._qt.layer_controls.qt_tracks_controls import QtTracksControls\nfrom napari._qt.layer_controls.qt_vectors_controls import QtVectorsControls\nfrom napari.layers import (\n Image,\n Labels,\n Points,\n Shapes,\n Surface,\n Tracks,\n Vectors,\n)\nfrom napari.utils import config\nfrom napari.utils.translations import trans\n\nlayer_to_controls = {\n Labels: QtLabelsControls,\n Image: QtImageControls,\n Points: QtPointsControls,\n Shapes: QtShapesControls,\n Surface: QtSurfaceControls,\n Vectors: QtVectorsControls,\n Tracks: QtTracksControls,\n}\n\nif config.async_loading:\n from napari.layers.image.experimental.octree_image import _OctreeImageBase\n\n # The user visible layer controls for OctreeImage layers are identical\n # to the regular image layer controls, for now.\n layer_to_controls[_OctreeImageBase] = QtImageControls\n\n\ndef create_qt_layer_controls(layer):\n \"\"\"\n Create a qt controls widget for a layer based on its layer type.\n\n In case of a subclass, the type higher in the layer's method resolution\n order will be used.\n\n Parameters\n ----------\n layer : napari.layers._base_layer.Layer\n Layer that needs its controls widget created.\n\n Returns\n -------\n controls : napari.layers.base.QtLayerControls\n Qt controls widget\n \"\"\"\n candidates = []\n for layer_type in 
layer_to_controls:\n if isinstance(layer, layer_type):\n candidates.append(layer_type)\n\n if not candidates:\n raise TypeError(\n trans._(\n 'Could not find QtControls for layer of type {type_}',\n deferred=True,\n type_=type(layer),\n )\n )\n\n layer_cls = layer.__class__\n # Sort the list of candidates by 'lineage'\n candidates.sort(key=lambda layer_type: layer_cls.mro().index(layer_type))\n controls = layer_to_controls[candidates[0]]\n return controls(layer)\n\n\nclass QtLayerControlsContainer(QStackedWidget):\n \"\"\"Container widget for QtLayerControl widgets.\n\n Parameters\n ----------\n viewer : napari.components.ViewerModel\n Napari viewer containing the rendered scene, layers, and controls.\n\n Attributes\n ----------\n empty_widget : qtpy.QtWidgets.QFrame\n Empty placeholder frame for when no layer is selected.\n viewer : napari.components.ViewerModel\n Napari viewer containing the rendered scene, layers, and controls.\n widgets : dict\n Dictionary of key value pairs matching layer with its widget controls.\n widgets[layer] = controls\n \"\"\"\n\n def __init__(self, viewer):\n super().__init__()\n self.setProperty(\"emphasized\", True)\n self.viewer = viewer\n\n self.setMouseTracking(True)\n self.empty_widget = QFrame()\n self.widgets = {}\n self.addWidget(self.empty_widget)\n self.setCurrentWidget(self.empty_widget)\n\n self.viewer.layers.events.inserted.connect(self._add)\n self.viewer.layers.events.removed.connect(self._remove)\n viewer.layers.selection.events.active.connect(self._display)\n\n def _display(self, event):\n \"\"\"Change the displayed controls to be those of the target layer.\n\n Parameters\n ----------\n event : Event\n Event with the target layer at `event.item`.\n \"\"\"\n layer = event.value\n if layer is None:\n self.setCurrentWidget(self.empty_widget)\n else:\n controls = self.widgets[layer]\n self.setCurrentWidget(controls)\n\n def _add(self, event):\n \"\"\"Add the controls target layer to the list of control widgets.\n\n Parameters\n ----------\n event : Event\n Event with the target layer at `event.value`.\n \"\"\"\n layer = event.value\n controls = create_qt_layer_controls(layer)\n self.addWidget(controls)\n self.widgets[layer] = controls\n\n def _remove(self, event):\n \"\"\"Remove the controls target layer from the list of control widgets.\n\n Parameters\n ----------\n event : Event\n Event with the target layer at `event.value`.\n \"\"\"\n layer = event.value\n controls = self.widgets[layer]\n self.removeWidget(controls)\n # controls.close()\n controls.hide()\n controls.deleteLater()\n controls = None\n del self.widgets[layer]\n", "path": "napari/_qt/layer_controls/qt_layer_controls_container.py"}], "after_files": [{"content": "from qtpy.QtWidgets import QFrame, QStackedWidget\n\nfrom napari._qt.layer_controls.qt_image_controls import QtImageControls\nfrom napari._qt.layer_controls.qt_labels_controls import QtLabelsControls\nfrom napari._qt.layer_controls.qt_points_controls import QtPointsControls\nfrom napari._qt.layer_controls.qt_shapes_controls import QtShapesControls\nfrom napari._qt.layer_controls.qt_surface_controls import QtSurfaceControls\nfrom napari._qt.layer_controls.qt_tracks_controls import QtTracksControls\nfrom napari._qt.layer_controls.qt_vectors_controls import QtVectorsControls\nfrom napari.layers import (\n Image,\n Labels,\n Points,\n Shapes,\n Surface,\n Tracks,\n Vectors,\n)\nfrom napari.utils import config\nfrom napari.utils.translations import trans\n\nlayer_to_controls = {\n Labels: QtLabelsControls,\n Image: 
QtImageControls,\n Points: QtPointsControls,\n Shapes: QtShapesControls,\n Surface: QtSurfaceControls,\n Vectors: QtVectorsControls,\n Tracks: QtTracksControls,\n}\n\nif config.async_loading:\n from napari.layers.image.experimental.octree_image import _OctreeImageBase\n\n # The user visible layer controls for OctreeImage layers are identical\n # to the regular image layer controls, for now.\n layer_to_controls[_OctreeImageBase] = QtImageControls\n\n\ndef create_qt_layer_controls(layer):\n \"\"\"\n Create a qt controls widget for a layer based on its layer type.\n\n In case of a subclass, the type higher in the layer's method resolution\n order will be used.\n\n Parameters\n ----------\n layer : napari.layers._base_layer.Layer\n Layer that needs its controls widget created.\n\n Returns\n -------\n controls : napari.layers.base.QtLayerControls\n Qt controls widget\n \"\"\"\n candidates = []\n for layer_type in layer_to_controls:\n if isinstance(layer, layer_type):\n candidates.append(layer_type)\n\n if not candidates:\n raise TypeError(\n trans._(\n 'Could not find QtControls for layer of type {type_}',\n deferred=True,\n type_=type(layer),\n )\n )\n\n layer_cls = layer.__class__\n # Sort the list of candidates by 'lineage'\n candidates.sort(key=lambda layer_type: layer_cls.mro().index(layer_type))\n controls = layer_to_controls[candidates[0]]\n return controls(layer)\n\n\nclass QtLayerControlsContainer(QStackedWidget):\n \"\"\"Container widget for QtLayerControl widgets.\n\n Parameters\n ----------\n viewer : napari.components.ViewerModel\n Napari viewer containing the rendered scene, layers, and controls.\n\n Attributes\n ----------\n empty_widget : qtpy.QtWidgets.QFrame\n Empty placeholder frame for when no layer is selected.\n viewer : napari.components.ViewerModel\n Napari viewer containing the rendered scene, layers, and controls.\n widgets : dict\n Dictionary of key value pairs matching layer with its widget controls.\n widgets[layer] = controls\n \"\"\"\n\n def __init__(self, viewer):\n super().__init__()\n self.setProperty(\"emphasized\", True)\n self.viewer = viewer\n\n self.setMouseTracking(True)\n self.empty_widget = QFrame()\n self.empty_widget.setObjectName(\"empty_controls_widget\")\n self.widgets = {}\n self.addWidget(self.empty_widget)\n self.setCurrentWidget(self.empty_widget)\n\n self.viewer.layers.events.inserted.connect(self._add)\n self.viewer.layers.events.removed.connect(self._remove)\n viewer.layers.selection.events.active.connect(self._display)\n\n def _display(self, event):\n \"\"\"Change the displayed controls to be those of the target layer.\n\n Parameters\n ----------\n event : Event\n Event with the target layer at `event.item`.\n \"\"\"\n layer = event.value\n if layer is None:\n self.setCurrentWidget(self.empty_widget)\n else:\n controls = self.widgets[layer]\n self.setCurrentWidget(controls)\n\n def _add(self, event):\n \"\"\"Add the controls target layer to the list of control widgets.\n\n Parameters\n ----------\n event : Event\n Event with the target layer at `event.value`.\n \"\"\"\n layer = event.value\n controls = create_qt_layer_controls(layer)\n self.addWidget(controls)\n self.widgets[layer] = controls\n\n def _remove(self, event):\n \"\"\"Remove the controls target layer from the list of control widgets.\n\n Parameters\n ----------\n event : Event\n Event with the target layer at `event.value`.\n \"\"\"\n layer = event.value\n controls = self.widgets[layer]\n self.removeWidget(controls)\n # controls.close()\n controls.hide()\n 
controls.deleteLater()\n controls = None\n del self.widgets[layer]\n", "path": "napari/_qt/layer_controls/qt_layer_controls_container.py"}]}
| 1,945 | 125 |
gh_patches_debug_29600
|
rasdani/github-patches
|
git_diff
|
enthought__chaco-905
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
changing TextPlot color & font
Hello,
I would like to change TextPlot color & font.
Initialization is OK, but when the color trait is changed afterwards, for instance, the color shown in the display does not update.
The same issue occurs with the font trait.
Here's the CME.
```python
#! /usr/bin/env python3
import numpy as np
from chaco.api import ArrayPlotData, Plot
from enable.api import ComponentEditor
from traits.api import (
Array,
Color,
Font,
HasTraits,
Instance,
List
)
from traitsui.api import Item, UItem, View
class Data(HasTraits):
data = Array
labels = List
color = Color
font = Font
plot = Instance(Plot)
def _data_default(self):
data = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
return (data)
def _labels_default(self):
labels = ['A', 'B', 'C', 'D']
return (labels)
def _plot_default(self):
self.plotdata = ArrayPlotData()
self.plotdata.set_data('x', self.data[:, 0])
self.plotdata.set_data('y', self.data[:, 1])
self.plotdata.set_data('labels', self.labels)
plot = Plot(self.plotdata)
plot.range2d.set_bounds((-1, -1), (2, 2))
plot.plot(("x", "y"),
type='scatter',
marker='dot',
color=self.color)
plot.plot(("x", "y", "labels"),
type='text',
text_margin=4,
h_position='right',
text_offset=(4, 4),
text_color=self.color)
return plot
traits_view = View(
UItem(
"plot",
editor=ComponentEditor(),
resizable=True
),
UItem('color',
style='simple'),
UItem('font'),
resizable=True,
buttons=["OK"],
width=900,
height=800,
)
def _color_changed(self):
self.plot.plots['plot0'][0].color = self.color
self.plot.plots['plot1'][0].text_color = self.color
def _font_changed(self):
name = self.font.family()
size = self.font.pointSize()
self.plot.plots['plot1'][0].text_font = '%s %d' % (name, size)
if __name__ == '__main__':
viewer = Data(color=(255, 128, 64))
viewer.configure_traits()
```
Thanks in advance for any help.
Regards
Debian 10 x86_64, Python 3.7.3, ETS source code from git repo
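
A possible user-side workaround, offered only as a sketch and not verified against this Chaco version: after changing the traits, invalidate the text renderer's label cache (a private trait defined in `chaco/plots/text_plot.py`, included below) and request a repaint.

```python
def _color_changed(self):
    self.plot.plots['plot0'][0].color = self.color
    text_renderer = self.plot.plots['plot1'][0]
    text_renderer.text_color = self.color
    # private trait; forces _compute_labels() to rebuild the Label instances
    text_renderer._label_cache_valid = False
    text_renderer.request_redraw()
```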
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chaco/plots/text_plot.py`
Content:
```
1 # (C) Copyright 2005-2021 Enthought, Inc., Austin, TX
2 # All rights reserved.
3 #
4 # This software is provided without warranty under the terms of the BSD
5 # license included in LICENSE.txt and may be redistributed only under
6 # the conditions described in the aforementioned license. The license
7 # is also available online at http://www.enthought.com/licenses/BSD.txt
8 #
9 # Thanks for using Enthought open source!
10
11 """
12 A plot that renders text values in two dimensions
13
14 """
15
16
17 from numpy import array, column_stack, empty, isfinite
18
19 # Enthought library imports
20 from enable.api import black_color_trait
21 from kiva.trait_defs.kiva_font_trait import KivaFont
22 from traits.api import Bool, Enum, Float, Int, Instance, List, Tuple, observe
23
24 # local imports
25 from chaco.array_data_source import ArrayDataSource
26 from chaco.label import Label
27 from chaco.base_xy_plot import BaseXYPlot
28
29
30 class TextPlot(BaseXYPlot):
31 """ A plot that positions textual labels in 2D """
32
33 #: text values corresponding to indices
34 text = Instance(ArrayDataSource)
35
36 #: The font of the tick labels.
37 text_font = KivaFont("sans-serif 10")
38
39 #: The color of the tick labels.
40 text_color = black_color_trait
41
42 #: The rotation of the tick labels.
43 text_rotate_angle = Float(0)
44
45 #: The margin around the label.
46 text_margin = Int(2)
47
48 #: horizontal position of text relative to target point
49 h_position = Enum("center", "left", "right")
50
51 #: vertical position of text relative to target point
52 v_position = Enum("center", "top", "bottom")
53
54 #: offset of text relative to non-index direction in pixels
55 text_offset = Tuple(Float, Float)
56
57 # ------------------------------------------------------------------------
58 # Private traits
59 # ------------------------------------------------------------------------
60
61 #: flag for whether the cache of Label instances is valid
62 _label_cache_valid = Bool(False, transient=True)
63
64 #: cache of Label instances for faster rendering
65 _label_cache = List(transient=True)
66
67 #: cache of bounding boxes of labels
68 _label_box_cache = List(transient=True)
69
70 # ------------------------------------------------------------------------
71 # Private methods
72 # ------------------------------------------------------------------------
73
74 def _compute_labels(self, gc):
75 """Generate the Label instances for the plot. """
76 self._label_cache = [
77 Label(
78 text=text,
79 font=self.text_font,
80 color=self.text_color,
81 rotate_angle=self.text_rotate_angle,
82 margin=self.text_margin,
83 )
84 for text in self.text.get_data()
85 ]
86 self._label_box_cache = [
87 array(label.get_bounding_box(gc), float)
88 for label in self._label_cache
89 ]
90 self._label_cache_valid = True
91
92 def _gather_points(self):
93 """Abstract method to collect data points that are within the range of
94 the plot, and cache them.
95 """
96 if self._cache_valid:
97 return
98
99 if not self.index or not self.value:
100 return
101
102 index, index_mask = self.index.get_data_mask()
103 value, value_mask = self.value.get_data_mask()
104
105 if len(index) == 0 or len(value) == 0 or len(index) != len(value):
106 self._cached_data_pts = []
107 self._cached_point_mask = []
108 self._cache_valid = True
109 return
110
111 index_range_mask = self.index_mapper.range.mask_data(index)
112 value_range_mask = self.value_mapper.range.mask_data(value)
113
114 nan_mask = isfinite(index) & index_mask & isfinite(value) & value_mask
115 point_mask = nan_mask & index_range_mask & value_range_mask
116
117 if not self._cache_valid:
118 if not point_mask.all():
119 points = column_stack([index[point_mask], value[point_mask]])
120 else:
121 points = column_stack([index, value])
122 self._cached_data_pts = points
123 self._cached_point_mask = point_mask
124 self._cache_valid = True
125
126 def _render(self, gc, pts):
127 if not self._label_cache_valid:
128 self._compute_labels(gc)
129
130 labels = [
131 label
132 for label, mask in zip(self._label_cache, self._cached_point_mask)
133 if mask
134 ]
135 boxes = [
136 label
137 for label, mask in zip(
138 self._label_box_cache, self._cached_point_mask
139 )
140 if mask
141 ]
142 offset = empty((2,), float)
143
144 with gc:
145 gc.clip_to_rect(self.x, self.y, self.width, self.height)
146 for pt, label, box in zip(pts, labels, boxes):
147 with gc:
148 if self.h_position == "center":
149 offset[0] = -box[0] / 2 + self.text_offset[0]
150 elif self.h_position == "right":
151 offset[0] = self.text_offset[0]
152 elif self.h_position == "left":
153 offset[0] = -box[0] / 2 + self.text_offset[0]
154 if self.v_position == "center":
155 offset[1] = -box[1] / 2 + self.text_offset[1]
156 elif self.v_position == "top":
157 offset[1] = self.text_offset[1]
158 elif self.v_position == "bottom":
159 offset[1] = -box[1] / 2 - self.text_offset[1]
160
161 pt += offset
162 gc.translate_ctm(*pt)
163
164 label.draw(gc)
165
166 # ------------------------------------------------------------------------
167 # Trait events
168 # ------------------------------------------------------------------------
169
170 @observe("index.data_changed")
171 def _invalidate(self, event):
172 self._cache_valid = False
173 self._screen_cache_valid = False
174 self._label_cache_valid = False
175
176 @observe("value.data_changed")
177 def _invalidate_labels(self, event):
178 self._label_cache_valid = False
179
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/chaco/plots/text_plot.py b/chaco/plots/text_plot.py
--- a/chaco/plots/text_plot.py
+++ b/chaco/plots/text_plot.py
@@ -34,25 +34,25 @@
text = Instance(ArrayDataSource)
#: The font of the tick labels.
- text_font = KivaFont("sans-serif 10")
+ text_font = KivaFont("sans-serif 10", redraw=True)
#: The color of the tick labels.
- text_color = black_color_trait
+ text_color = black_color_trait(redraw=True)
#: The rotation of the tick labels.
- text_rotate_angle = Float(0)
+ text_rotate_angle = Float(0, redraw=True)
#: The margin around the label.
- text_margin = Int(2)
+ text_margin = Int(2, redraw=True)
#: horizontal position of text relative to target point
- h_position = Enum("center", "left", "right")
+ h_position = Enum("center", "left", "right", redraw=True)
#: vertical position of text relative to target point
- v_position = Enum("center", "top", "bottom")
+ v_position = Enum("center", "top", "bottom", redraw=True)
#: offset of text relative to non-index direction in pixels
- text_offset = Tuple(Float, Float)
+ text_offset = Tuple(Float, Float, redraw=True)
# ------------------------------------------------------------------------
# Private traits
@@ -173,6 +173,6 @@
self._screen_cache_valid = False
self._label_cache_valid = False
- @observe("value.data_changed")
+ @observe("value.data_changed,+redraw")
def _invalidate_labels(self, event):
self._label_cache_valid = False
|
{"golden_diff": "diff --git a/chaco/plots/text_plot.py b/chaco/plots/text_plot.py\n--- a/chaco/plots/text_plot.py\n+++ b/chaco/plots/text_plot.py\n@@ -34,25 +34,25 @@\n text = Instance(ArrayDataSource)\n \n #: The font of the tick labels.\n- text_font = KivaFont(\"sans-serif 10\")\n+ text_font = KivaFont(\"sans-serif 10\", redraw=True)\n \n #: The color of the tick labels.\n- text_color = black_color_trait\n+ text_color = black_color_trait(redraw=True)\n \n #: The rotation of the tick labels.\n- text_rotate_angle = Float(0)\n+ text_rotate_angle = Float(0, redraw=True)\n \n #: The margin around the label.\n- text_margin = Int(2)\n+ text_margin = Int(2, redraw=True)\n \n #: horizontal position of text relative to target point\n- h_position = Enum(\"center\", \"left\", \"right\")\n+ h_position = Enum(\"center\", \"left\", \"right\", redraw=True)\n \n #: vertical position of text relative to target point\n- v_position = Enum(\"center\", \"top\", \"bottom\")\n+ v_position = Enum(\"center\", \"top\", \"bottom\", redraw=True)\n \n #: offset of text relative to non-index direction in pixels\n- text_offset = Tuple(Float, Float)\n+ text_offset = Tuple(Float, Float, redraw=True)\n \n # ------------------------------------------------------------------------\n # Private traits\n@@ -173,6 +173,6 @@\n self._screen_cache_valid = False\n self._label_cache_valid = False\n \n- @observe(\"value.data_changed\")\n+ @observe(\"value.data_changed,+redraw\")\n def _invalidate_labels(self, event):\n self._label_cache_valid = False\n", "issue": "changing TextPlot color & font\nHello,\r\n\r\nI would like to change TextPlot color & font.\r\n\r\nInitialization is ok, but when the color trait is changed for instance, the color is not changed in the display.\r\n\r\nThere is the same issue with the font trait.\r\n\r\nHere's the CME.\r\n\r\n```python\r\n#! 
/usr/bin/env python3\r\n\r\nimport numpy as np\r\n\r\nfrom chaco.api import ArrayPlotData, Plot\r\nfrom enable.api import ComponentEditor\r\nfrom traits.api import (\r\n Array,\r\n Color,\r\n Font,\r\n HasTraits,\r\n Instance,\r\n List\r\n)\r\nfrom traitsui.api import Item, UItem, View\r\n\r\n\r\nclass Data(HasTraits):\r\n\r\n data = Array\r\n labels = List\r\n color = Color\r\n font = Font\r\n plot = Instance(Plot)\r\n\r\n def _data_default(self):\r\n data = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])\r\n return (data)\r\n\r\n def _labels_default(self):\r\n labels = ['A', 'B', 'C', 'D']\r\n return (labels)\r\n\r\n def _plot_default(self):\r\n self.plotdata = ArrayPlotData()\r\n self.plotdata.set_data('x', self.data[:, 0])\r\n self.plotdata.set_data('y', self.data[:, 1])\r\n self.plotdata.set_data('labels', self.labels)\r\n\r\n plot = Plot(self.plotdata)\r\n plot.range2d.set_bounds((-1, -1), (2, 2))\r\n plot.plot((\"x\", \"y\"),\r\n type='scatter',\r\n marker='dot',\r\n color=self.color)\r\n plot.plot((\"x\", \"y\", \"labels\"),\r\n type='text',\r\n text_margin=4,\r\n h_position='right',\r\n text_offset=(4, 4),\r\n text_color=self.color)\r\n\r\n return plot\r\n\r\n traits_view = View(\r\n UItem(\r\n \"plot\",\r\n editor=ComponentEditor(),\r\n resizable=True\r\n ),\r\n UItem('color',\r\n style='simple'),\r\n UItem('font'),\r\n resizable=True,\r\n buttons=[\"OK\"],\r\n width=900,\r\n height=800,\r\n )\r\n\r\n def _color_changed(self):\r\n self.plot.plots['plot0'][0].color = self.color\r\n self.plot.plots['plot1'][0].text_color = self.color\r\n\r\n def _font_changed(self):\r\n name = self.font.family()\r\n size = self.font.pointSize()\r\n self.plot.plots['plot1'][0].text_font = '%s %d' % (name, size)\r\n\r\n\r\nif __name__ == '__main__':\r\n viewer = Data(color=(255, 128, 64))\r\n viewer.configure_traits()\r\n```\r\n\r\nThanks in advance for any help.\r\n\r\nRegards\r\n\r\nDebian 10 x86_64, Python 3.7.3, ETS source code from git repo\r\n\n", "before_files": [{"content": "# (C) Copyright 2005-2021 Enthought, Inc., Austin, TX\n# All rights reserved.\n#\n# This software is provided without warranty under the terms of the BSD\n# license included in LICENSE.txt and may be redistributed only under\n# the conditions described in the aforementioned license. 
The license\n# is also available online at http://www.enthought.com/licenses/BSD.txt\n#\n# Thanks for using Enthought open source!\n\n\"\"\"\nA plot that renders text values in two dimensions\n\n\"\"\"\n\n\nfrom numpy import array, column_stack, empty, isfinite\n\n# Enthought library imports\nfrom enable.api import black_color_trait\nfrom kiva.trait_defs.kiva_font_trait import KivaFont\nfrom traits.api import Bool, Enum, Float, Int, Instance, List, Tuple, observe\n\n# local imports\nfrom chaco.array_data_source import ArrayDataSource\nfrom chaco.label import Label\nfrom chaco.base_xy_plot import BaseXYPlot\n\n\nclass TextPlot(BaseXYPlot):\n \"\"\" A plot that positions textual labels in 2D \"\"\"\n\n #: text values corresponding to indices\n text = Instance(ArrayDataSource)\n\n #: The font of the tick labels.\n text_font = KivaFont(\"sans-serif 10\")\n\n #: The color of the tick labels.\n text_color = black_color_trait\n\n #: The rotation of the tick labels.\n text_rotate_angle = Float(0)\n\n #: The margin around the label.\n text_margin = Int(2)\n\n #: horizontal position of text relative to target point\n h_position = Enum(\"center\", \"left\", \"right\")\n\n #: vertical position of text relative to target point\n v_position = Enum(\"center\", \"top\", \"bottom\")\n\n #: offset of text relative to non-index direction in pixels\n text_offset = Tuple(Float, Float)\n\n # ------------------------------------------------------------------------\n # Private traits\n # ------------------------------------------------------------------------\n\n #: flag for whether the cache of Label instances is valid\n _label_cache_valid = Bool(False, transient=True)\n\n #: cache of Label instances for faster rendering\n _label_cache = List(transient=True)\n\n #: cache of bounding boxes of labels\n _label_box_cache = List(transient=True)\n\n # ------------------------------------------------------------------------\n # Private methods\n # ------------------------------------------------------------------------\n\n def _compute_labels(self, gc):\n \"\"\"Generate the Label instances for the plot. 
\"\"\"\n self._label_cache = [\n Label(\n text=text,\n font=self.text_font,\n color=self.text_color,\n rotate_angle=self.text_rotate_angle,\n margin=self.text_margin,\n )\n for text in self.text.get_data()\n ]\n self._label_box_cache = [\n array(label.get_bounding_box(gc), float)\n for label in self._label_cache\n ]\n self._label_cache_valid = True\n\n def _gather_points(self):\n \"\"\"Abstract method to collect data points that are within the range of\n the plot, and cache them.\n \"\"\"\n if self._cache_valid:\n return\n\n if not self.index or not self.value:\n return\n\n index, index_mask = self.index.get_data_mask()\n value, value_mask = self.value.get_data_mask()\n\n if len(index) == 0 or len(value) == 0 or len(index) != len(value):\n self._cached_data_pts = []\n self._cached_point_mask = []\n self._cache_valid = True\n return\n\n index_range_mask = self.index_mapper.range.mask_data(index)\n value_range_mask = self.value_mapper.range.mask_data(value)\n\n nan_mask = isfinite(index) & index_mask & isfinite(value) & value_mask\n point_mask = nan_mask & index_range_mask & value_range_mask\n\n if not self._cache_valid:\n if not point_mask.all():\n points = column_stack([index[point_mask], value[point_mask]])\n else:\n points = column_stack([index, value])\n self._cached_data_pts = points\n self._cached_point_mask = point_mask\n self._cache_valid = True\n\n def _render(self, gc, pts):\n if not self._label_cache_valid:\n self._compute_labels(gc)\n\n labels = [\n label\n for label, mask in zip(self._label_cache, self._cached_point_mask)\n if mask\n ]\n boxes = [\n label\n for label, mask in zip(\n self._label_box_cache, self._cached_point_mask\n )\n if mask\n ]\n offset = empty((2,), float)\n\n with gc:\n gc.clip_to_rect(self.x, self.y, self.width, self.height)\n for pt, label, box in zip(pts, labels, boxes):\n with gc:\n if self.h_position == \"center\":\n offset[0] = -box[0] / 2 + self.text_offset[0]\n elif self.h_position == \"right\":\n offset[0] = self.text_offset[0]\n elif self.h_position == \"left\":\n offset[0] = -box[0] / 2 + self.text_offset[0]\n if self.v_position == \"center\":\n offset[1] = -box[1] / 2 + self.text_offset[1]\n elif self.v_position == \"top\":\n offset[1] = self.text_offset[1]\n elif self.v_position == \"bottom\":\n offset[1] = -box[1] / 2 - self.text_offset[1]\n\n pt += offset\n gc.translate_ctm(*pt)\n\n label.draw(gc)\n\n # ------------------------------------------------------------------------\n # Trait events\n # ------------------------------------------------------------------------\n\n @observe(\"index.data_changed\")\n def _invalidate(self, event):\n self._cache_valid = False\n self._screen_cache_valid = False\n self._label_cache_valid = False\n\n @observe(\"value.data_changed\")\n def _invalidate_labels(self, event):\n self._label_cache_valid = False\n", "path": "chaco/plots/text_plot.py"}], "after_files": [{"content": "# (C) Copyright 2005-2021 Enthought, Inc., Austin, TX\n# All rights reserved.\n#\n# This software is provided without warranty under the terms of the BSD\n# license included in LICENSE.txt and may be redistributed only under\n# the conditions described in the aforementioned license. 
The license\n# is also available online at http://www.enthought.com/licenses/BSD.txt\n#\n# Thanks for using Enthought open source!\n\n\"\"\"\nA plot that renders text values in two dimensions\n\n\"\"\"\n\n\nfrom numpy import array, column_stack, empty, isfinite\n\n# Enthought library imports\nfrom enable.api import black_color_trait\nfrom kiva.trait_defs.kiva_font_trait import KivaFont\nfrom traits.api import Bool, Enum, Float, Int, Instance, List, Tuple, observe\n\n# local imports\nfrom chaco.array_data_source import ArrayDataSource\nfrom chaco.label import Label\nfrom chaco.base_xy_plot import BaseXYPlot\n\n\nclass TextPlot(BaseXYPlot):\n \"\"\" A plot that positions textual labels in 2D \"\"\"\n\n #: text values corresponding to indices\n text = Instance(ArrayDataSource)\n\n #: The font of the tick labels.\n text_font = KivaFont(\"sans-serif 10\", redraw=True)\n\n #: The color of the tick labels.\n text_color = black_color_trait(redraw=True)\n\n #: The rotation of the tick labels.\n text_rotate_angle = Float(0, redraw=True)\n\n #: The margin around the label.\n text_margin = Int(2, redraw=True)\n\n #: horizontal position of text relative to target point\n h_position = Enum(\"center\", \"left\", \"right\", redraw=True)\n\n #: vertical position of text relative to target point\n v_position = Enum(\"center\", \"top\", \"bottom\", redraw=True)\n\n #: offset of text relative to non-index direction in pixels\n text_offset = Tuple(Float, Float, redraw=True)\n\n # ------------------------------------------------------------------------\n # Private traits\n # ------------------------------------------------------------------------\n\n #: flag for whether the cache of Label instances is valid\n _label_cache_valid = Bool(False, transient=True)\n\n #: cache of Label instances for faster rendering\n _label_cache = List(transient=True)\n\n #: cache of bounding boxes of labels\n _label_box_cache = List(transient=True)\n\n # ------------------------------------------------------------------------\n # Private methods\n # ------------------------------------------------------------------------\n\n def _compute_labels(self, gc):\n \"\"\"Generate the Label instances for the plot. 
\"\"\"\n self._label_cache = [\n Label(\n text=text,\n font=self.text_font,\n color=self.text_color,\n rotate_angle=self.text_rotate_angle,\n margin=self.text_margin,\n )\n for text in self.text.get_data()\n ]\n self._label_box_cache = [\n array(label.get_bounding_box(gc), float)\n for label in self._label_cache\n ]\n self._label_cache_valid = True\n\n def _gather_points(self):\n \"\"\"Abstract method to collect data points that are within the range of\n the plot, and cache them.\n \"\"\"\n if self._cache_valid:\n return\n\n if not self.index or not self.value:\n return\n\n index, index_mask = self.index.get_data_mask()\n value, value_mask = self.value.get_data_mask()\n\n if len(index) == 0 or len(value) == 0 or len(index) != len(value):\n self._cached_data_pts = []\n self._cached_point_mask = []\n self._cache_valid = True\n return\n\n index_range_mask = self.index_mapper.range.mask_data(index)\n value_range_mask = self.value_mapper.range.mask_data(value)\n\n nan_mask = isfinite(index) & index_mask & isfinite(value) & value_mask\n point_mask = nan_mask & index_range_mask & value_range_mask\n\n if not self._cache_valid:\n if not point_mask.all():\n points = column_stack([index[point_mask], value[point_mask]])\n else:\n points = column_stack([index, value])\n self._cached_data_pts = points\n self._cached_point_mask = point_mask\n self._cache_valid = True\n\n def _render(self, gc, pts):\n if not self._label_cache_valid:\n self._compute_labels(gc)\n\n labels = [\n label\n for label, mask in zip(self._label_cache, self._cached_point_mask)\n if mask\n ]\n boxes = [\n label\n for label, mask in zip(\n self._label_box_cache, self._cached_point_mask\n )\n if mask\n ]\n offset = empty((2,), float)\n\n with gc:\n gc.clip_to_rect(self.x, self.y, self.width, self.height)\n for pt, label, box in zip(pts, labels, boxes):\n with gc:\n if self.h_position == \"center\":\n offset[0] = -box[0] / 2 + self.text_offset[0]\n elif self.h_position == \"right\":\n offset[0] = self.text_offset[0]\n elif self.h_position == \"left\":\n offset[0] = -box[0] / 2 + self.text_offset[0]\n if self.v_position == \"center\":\n offset[1] = -box[1] / 2 + self.text_offset[1]\n elif self.v_position == \"top\":\n offset[1] = self.text_offset[1]\n elif self.v_position == \"bottom\":\n offset[1] = -box[1] / 2 - self.text_offset[1]\n\n pt += offset\n gc.translate_ctm(*pt)\n\n label.draw(gc)\n\n # ------------------------------------------------------------------------\n # Trait events\n # ------------------------------------------------------------------------\n\n @observe(\"index.data_changed\")\n def _invalidate(self, event):\n self._cache_valid = False\n self._screen_cache_valid = False\n self._label_cache_valid = False\n\n @observe(\"value.data_changed,+redraw\")\n def _invalidate_labels(self, event):\n self._label_cache_valid = False\n", "path": "chaco/plots/text_plot.py"}]}
| 2,588 | 405 |
gh_patches_debug_10802
|
rasdani/github-patches
|
git_diff
|
interlegis__sapl-2147
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Audiências Públicas (Public Hearings) cannot be edited
When creating an Audiência Pública (Public Hearing) and saving it, the metadata of the legislative matter entered in the form does not appear.
When clicking Edit, only the title of the created hearing is shown.
Thanks
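
A hedged reading of the symptom, based only on the view code included below: `AudienciaPublica.materia` can be unset, and `UpdateView.get_initial` dereferences it unconditionally, so the edit form fails to populate the matéria fields. A defensive sketch (names as in the file below):

```python
def get_initial(self):
    initial = super(UpdateView, self).get_initial()
    materia = getattr(self.object, 'materia', None)
    if materia is not None:  # assumption: materia may be empty for some audiências
        initial['tipo_materia'] = materia.tipo.id
        initial['numero_materia'] = materia.numero
        initial['ano_materia'] = materia.ano
    return initial
```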
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sapl/audiencia/views.py`
Content:
```
1 from django.http import HttpResponse
2 from django.views.decorators.clickjacking import xframe_options_exempt
3 from django.views.generic import UpdateView
4 from sapl.crud.base import RP_DETAIL, RP_LIST, Crud
5
6 from .forms import AudienciaForm
7 from .models import AudienciaPublica
8
9
10 def index(request):
11 return HttpResponse("Audiência Pública")
12
13
14 class AudienciaCrud(Crud):
15 model = AudienciaPublica
16 public = [RP_LIST, RP_DETAIL, ]
17
18 class BaseMixin(Crud.BaseMixin):
19 list_field_names = ['numero', 'nome', 'tipo', 'materia',
20 'data']
21 ordering = 'nome', 'numero', 'tipo', 'data'
22
23 class ListView(Crud.ListView):
24 paginate_by = 10
25
26 class CreateView(Crud.CreateView):
27 form_class = AudienciaForm
28
29 def form_valid(self, form):
30 return super(Crud.CreateView, self).form_valid(form)
31
32 class UpdateView(Crud.UpdateView):
33 form_class = AudienciaForm
34
35 def get_initial(self):
36 initial = super(UpdateView, self).get_initial()
37 initial['tipo_materia'] = self.object.materia.tipo.id
38 initial['numero_materia'] = self.object.materia.numero
39 initial['ano_materia'] = self.object.materia.ano
40 return initial
41
42 class DeleteView(Crud.DeleteView):
43 pass
44
45 class DetailView(Crud.DetailView):
46
47 layout_key = 'AudienciaPublicaDetail'
48
49 @xframe_options_exempt
50 def get(self, request, *args, **kwargs):
51 return super().get(request, *args, **kwargs)
52
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sapl/audiencia/views.py b/sapl/audiencia/views.py
--- a/sapl/audiencia/views.py
+++ b/sapl/audiencia/views.py
@@ -34,9 +34,10 @@
def get_initial(self):
initial = super(UpdateView, self).get_initial()
- initial['tipo_materia'] = self.object.materia.tipo.id
- initial['numero_materia'] = self.object.materia.numero
- initial['ano_materia'] = self.object.materia.ano
+ if self.object.materia:
+ initial['tipo_materia'] = self.object.materia.tipo.id
+ initial['numero_materia'] = self.object.materia.numero
+ initial['ano_materia'] = self.object.materia.ano
return initial
class DeleteView(Crud.DeleteView):
|
{"golden_diff": "diff --git a/sapl/audiencia/views.py b/sapl/audiencia/views.py\n--- a/sapl/audiencia/views.py\n+++ b/sapl/audiencia/views.py\n@@ -34,9 +34,10 @@\n \n def get_initial(self):\n initial = super(UpdateView, self).get_initial()\n- initial['tipo_materia'] = self.object.materia.tipo.id\n- initial['numero_materia'] = self.object.materia.numero\n- initial['ano_materia'] = self.object.materia.ano\n+ if self.object.materia:\n+ initial['tipo_materia'] = self.object.materia.tipo.id\n+ initial['numero_materia'] = self.object.materia.numero\n+ initial['ano_materia'] = self.object.materia.ano\n return initial\n \n class DeleteView(Crud.DeleteView):\n", "issue": "Audi\u00eancias P\u00fablicas sem possibilidade de Edi\u00e7\u00e3o\nAo criar uma Audi\u00eancia P\u00fablica e salva-la, n\u00e3o aparecem os metadados da mat\u00e9ria legislativa inseridas no preenchimento. \r\nAo clicar em Editar, s\u00f3 aparece o t\u00edtulo da audi\u00eancia criada.\r\ngrato\n", "before_files": [{"content": "from django.http import HttpResponse\nfrom django.views.decorators.clickjacking import xframe_options_exempt\nfrom django.views.generic import UpdateView\nfrom sapl.crud.base import RP_DETAIL, RP_LIST, Crud\n\nfrom .forms import AudienciaForm\nfrom .models import AudienciaPublica\n\n\ndef index(request):\n return HttpResponse(\"Audi\u00eancia P\u00fablica\")\n\n\nclass AudienciaCrud(Crud):\n model = AudienciaPublica\n public = [RP_LIST, RP_DETAIL, ]\n\n class BaseMixin(Crud.BaseMixin):\n list_field_names = ['numero', 'nome', 'tipo', 'materia',\n 'data']\n ordering = 'nome', 'numero', 'tipo', 'data'\n\n class ListView(Crud.ListView):\n paginate_by = 10\n\n class CreateView(Crud.CreateView):\n form_class = AudienciaForm\n\n def form_valid(self, form):\n return super(Crud.CreateView, self).form_valid(form)\n\n class UpdateView(Crud.UpdateView):\n form_class = AudienciaForm\n\n def get_initial(self):\n initial = super(UpdateView, self).get_initial()\n initial['tipo_materia'] = self.object.materia.tipo.id\n initial['numero_materia'] = self.object.materia.numero\n initial['ano_materia'] = self.object.materia.ano\n return initial\n \n class DeleteView(Crud.DeleteView):\n pass\n\n class DetailView(Crud.DetailView):\n\n layout_key = 'AudienciaPublicaDetail'\n\n @xframe_options_exempt\n def get(self, request, *args, **kwargs):\n return super().get(request, *args, **kwargs)\n\n ", "path": "sapl/audiencia/views.py"}], "after_files": [{"content": "from django.http import HttpResponse\nfrom django.views.decorators.clickjacking import xframe_options_exempt\nfrom django.views.generic import UpdateView\nfrom sapl.crud.base import RP_DETAIL, RP_LIST, Crud\n\nfrom .forms import AudienciaForm\nfrom .models import AudienciaPublica\n\n\ndef index(request):\n return HttpResponse(\"Audi\u00eancia P\u00fablica\")\n\n\nclass AudienciaCrud(Crud):\n model = AudienciaPublica\n public = [RP_LIST, RP_DETAIL, ]\n\n class BaseMixin(Crud.BaseMixin):\n list_field_names = ['numero', 'nome', 'tipo', 'materia',\n 'data']\n ordering = 'nome', 'numero', 'tipo', 'data'\n\n class ListView(Crud.ListView):\n paginate_by = 10\n\n class CreateView(Crud.CreateView):\n form_class = AudienciaForm\n\n def form_valid(self, form):\n return super(Crud.CreateView, self).form_valid(form)\n\n class UpdateView(Crud.UpdateView):\n form_class = AudienciaForm\n\n def get_initial(self):\n initial = super(UpdateView, self).get_initial()\n if self.object.materia:\n initial['tipo_materia'] = self.object.materia.tipo.id\n initial['numero_materia'] = 
self.object.materia.numero\n initial['ano_materia'] = self.object.materia.ano\n return initial\n \n class DeleteView(Crud.DeleteView):\n pass\n\n class DetailView(Crud.DetailView):\n\n layout_key = 'AudienciaPublicaDetail'\n\n @xframe_options_exempt\n def get(self, request, *args, **kwargs):\n return super().get(request, *args, **kwargs)\n\n ", "path": "sapl/audiencia/views.py"}]}
| 788 | 188 |
gh_patches_debug_4666
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-498
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cupy.nonzero fails with some corner-case inputs
Code:
```
import sys
import cupy, numpy

shapes = [
    (),
    (0,),
    (1,),
    (0,2),
    (0,0,2,0),
]

f = sys.stdout
for xp in (numpy, cupy):
    print(xp.__name__)
    for shape in shapes:
        a = xp.ones(shape)
        f.write('shape={:<15} => '.format(str(shape)))

        try:
            b = xp.nonzero(a)
            f.write('{}\n'.format(b))
        except Exception as e:
            f.write('FAIL: {}\n'.format(e))

# get stack trace
cupy.nonzero(cupy.ones((0,)))
```
Result:
```
numpy
shape=() => (array([0]),)
shape=(0,) => (array([], dtype=int64),)
shape=(1,) => (array([0]),)
shape=(0, 2) => (array([], dtype=int64), array([], dtype=int64))
shape=(0, 0, 2, 0) => (array([], dtype=int64), array([], dtype=int64), array([], dtype=int64), array([], dtype=int64))
cupy
shape=() => (array([0]),)
shape=(0,) => FAIL: CUDA_ERROR_INVALID_VALUE: invalid argument
shape=(1,) => (array([0]),)
shape=(0, 2) => FAIL: CUDA_ERROR_INVALID_VALUE: invalid argument
shape=(0, 0, 2, 0) => FAIL: CUDA_ERROR_INVALID_VALUE: invalid argument
Traceback (most recent call last):
File "test-nonzero.py", line 26, in <module>
cupy.nonzero(cupy.ones((0,)))
File "/niboshi/repos/cupy/cupy/sorting/search.py", line 72, in nonzero
return a.nonzero()
File "cupy/core/core.pyx", line 810, in cupy.core.core.ndarray.nonzero (cupy/core/core.cpp:16210)
scan_index = scan(condition.astype(dtype).ravel())
File "cupy/core/core.pyx", line 3883, in cupy.core.core.scan (cupy/core/core.cpp:83826)
kern_scan(grid=((a.size - 1) // (2 * block_size) + 1,),
File "cupy/cuda/function.pyx", line 118, in cupy.cuda.function.Function.__call__ (cupy/cuda/function.cpp:3794)
_launch(
File "cupy/cuda/function.pyx", line 100, in cupy.cuda.function._launch (cupy/cuda/function.cpp:3431)
driver.launchKernel(
File "cupy/cuda/driver.pyx", line 170, in cupy.cuda.driver.launchKernel (cupy/cuda/driver.cpp:3262)
check_status(status)
File "cupy/cuda/driver.pyx", line 70, in cupy.cuda.driver.check_status (cupy/cuda/driver.cpp:1481)
raise CUDADriverError(status)
cupy.cuda.driver.CUDADriverError: CUDA_ERROR_INVALID_VALUE: invalid argument
```
CuPy version: latest master(v2.0.0a1)
--- END ISSUE ---
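Before the code segments below, one editorial note on why the empty shapes fail: the quoted traceback shows `scan` computing its launch grid as `(a.size - 1) // (2 * block_size) + 1`, which collapses to zero when `a.size == 0`. A minimal sketch of that arithmetic follows; the concrete block size is an assumption (the real constant lives in `cupy/core/core.pyx`), and any positive value gives the same result.
```
# Sketch of the grid-size computation from the traceback, evaluated in plain
# Python for an empty input. Not CuPy code; block_size is assumed.
block_size = 512
for size in (0, 1, 1024):
    grid_x = (size - 1) // (2 * block_size) + 1
    print(size, grid_x)
# size == 0 yields grid_x == 0, because floor-dividing -1 by a positive
# number gives -1; launching a kernel with a zero-sized grid is what raises
# CUDA_ERROR_INVALID_VALUE in cupy.cuda.driver.launchKernel.
```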
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupy/sorting/search.py`
Content:
```
1 from cupy import core
2
3
4 def argmax(a, axis=None, dtype=None, out=None, keepdims=False):
5 """Returns the indices of the maximum along an axis.
6
7 Args:
8 a (cupy.ndarray): Array to take argmax.
9 axis (int): Along which axis to find the maximum. ``a`` is flattened by
10 default.
11 dtype: Data type specifier.
12 out (cupy.ndarray): Output array.
13 keepdims (bool): If ``True``, the axis ``axis`` is preserved as an axis
14 of length one.
15
16 Returns:
17 cupy.ndarray: The indices of the maximum of ``a`` along an axis.
18
19 .. seealso:: :func:`numpy.argmax`
20
21 """
22 # TODO(okuta): check type
23 return a.argmax(axis=axis, dtype=dtype, out=out, keepdims=keepdims)
24
25
26 # TODO(okuta): Implement nanargmax
27
28
29 def argmin(a, axis=None, dtype=None, out=None, keepdims=False):
30 """Returns the indices of the minimum along an axis.
31
32 Args:
33 a (cupy.ndarray): Array to take argmin.
34 axis (int): Along which axis to find the minimum. ``a`` is flattened by
35 default.
36 dtype: Data type specifier.
37 out (cupy.ndarray): Output array.
38 keepdims (bool): If ``True``, the axis ``axis`` is preserved as an axis
39 of length one.
40
41 Returns:
42 cupy.ndarray: The indices of the minimum of ``a`` along an axis.
43
44 .. seealso:: :func:`numpy.argmin`
45
46 """
47 # TODO(okuta): check type
48 return a.argmin(axis=axis, dtype=dtype, out=out, keepdims=keepdims)
49
50
51 # TODO(okuta): Implement nanargmin
52
53
54 # TODO(okuta): Implement argwhere
55
56
57 def nonzero(a):
58 """Return the indices of the elements that are non-zero.
59
60 Returns a tuple of arrays, one for each dimension of a,
61 containing the indices of the non-zero elements in that dimension.
62
63 Args:
64 a (cupy.ndarray): array
65
66 Returns:
67 tuple of arrays: Indices of elements that are non-zero.
68
69 .. seealso:: :func:`numpy.nonzero`
70
71 """
72 return a.nonzero()
73
74
75 def flatnonzero(a):
76 """Return indices that are non-zero in the flattened version of a.
77
78 This is equivalent to a.ravel().nonzero()[0].
79
80 Args:
81 a (cupy.ndarray): input array
82
83 Returns:
84 cupy.ndarray: Output array,
85 containing the indices of the elements of a.ravel() that are non-zero.
86
87 .. seealso:: :func:`numpy.flatnonzero`
88 """
89 return a.ravel().nonzero()[0]
90
91
92 def where(condition, x=None, y=None):
93 """Return elements, either from x or y, depending on condition.
94
95 If only condition is given, return ``condition.nonzero()``.
96
97 Args:
98 condition (cupy.ndarray): When True, take x, otherwise take y.
99 x (cupy.ndarray): Values from which to choose on ``True``.
100 y (cupy.ndarray): Values from which to choose on ``False``.
101
102 Returns:
103 cupy.ndarray: Each element of output contains elements of ``x`` when
104 ``condition`` is ``True``, otherwise elements of ``y``. If only
105 ``condition`` is given, return the tuple ``condition.nonzero()``,
106 the indices where ``condition`` is True.
107
108 .. seealso:: :func:`numpy.where`
109
110 """
111
112 missing = (x is None, y is None).count(True)
113
114 if missing == 1:
115 raise ValueError("Must provide both 'x' and 'y' or neither.")
116 if missing == 2:
117 return nonzero(condition)
118
119 return _where_ufunc(condition.astype('?'), x, y)
120
121
122 _where_ufunc = core.create_ufunc(
123 'cupy_where',
124 ('???->?', '?bb->b', '?BB->B', '?hh->h', '?HH->H', '?ii->i', '?II->I',
125 '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f',
126 # On CUDA 6.5 these combinations don't work correctly (on CUDA >=7.0, it
127 # works).
128 # See issue #551.
129 '?hd->d', '?Hd->d',
130 '?dd->d'),
131 'out0 = in0 ? in1 : in2')
132
133
134 # TODO(okuta): Implement searchsorted
135
136
137 # TODO(okuta): Implement extract
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cupy/sorting/search.py b/cupy/sorting/search.py
--- a/cupy/sorting/search.py
+++ b/cupy/sorting/search.py
@@ -69,6 +69,7 @@
.. seealso:: :func:`numpy.nonzero`
"""
+ assert isinstance(a, core.ndarray)
return a.nonzero()
@@ -86,6 +87,7 @@
.. seealso:: :func:`numpy.flatnonzero`
"""
+ assert isinstance(a, core.ndarray)
return a.ravel().nonzero()[0]
|
{"golden_diff": "diff --git a/cupy/sorting/search.py b/cupy/sorting/search.py\n--- a/cupy/sorting/search.py\n+++ b/cupy/sorting/search.py\n@@ -69,6 +69,7 @@\n .. seealso:: :func:`numpy.nonzero`\n \n \"\"\"\n+ assert isinstance(a, core.ndarray)\n return a.nonzero()\n \n \n@@ -86,6 +87,7 @@\n \n .. seealso:: :func:`numpy.flatnonzero`\n \"\"\"\n+ assert isinstance(a, core.ndarray)\n return a.ravel().nonzero()[0]\n", "issue": "cupy.nonzero fails with some corner-case inputs\nCode:\r\n```\r\nimport sys\r\nimport cupy, numpy\r\n\r\nshapes = [\r\n (),\r\n (0,),\r\n (1,),\r\n (0,2),\r\n (0,0,2,0),\r\n]\r\n\r\nf = sys.stdout\r\nfor xp in (numpy, cupy):\r\n print(xp.__name__)\r\n for shape in shapes:\r\n a = xp.ones(shape)\r\n f.write('shape={:<15} => '.format(str(shape)))\r\n\r\n try:\r\n b = xp.nonzero(a)\r\n f.write('{}\\n'.format(b))\r\n except Exception as e:\r\n f.write('FAIL: {}\\n'.format(e))\r\n\r\n# get stack trace\r\ncupy.nonzero(cupy.ones((0,)))\r\n```\r\n\r\nResult:\r\n```\r\nnumpy\r\nshape=() => (array([0]),)\r\nshape=(0,) => (array([], dtype=int64),)\r\nshape=(1,) => (array([0]),)\r\nshape=(0, 2) => (array([], dtype=int64), array([], dtype=int64))\r\nshape=(0, 0, 2, 0) => (array([], dtype=int64), array([], dtype=int64), array([], dtype=int64), array([], dtype=int64))\r\ncupy\r\nshape=() => (array([0]),)\r\nshape=(0,) => FAIL: CUDA_ERROR_INVALID_VALUE: invalid argument\r\nshape=(1,) => (array([0]),)\r\nshape=(0, 2) => FAIL: CUDA_ERROR_INVALID_VALUE: invalid argument\r\nshape=(0, 0, 2, 0) => FAIL: CUDA_ERROR_INVALID_VALUE: invalid argument\r\nTraceback (most recent call last):\r\n File \"test-nonzero.py\", line 26, in <module>\r\n cupy.nonzero(cupy.ones((0,)))\r\n File \"/niboshi/repos/cupy/cupy/sorting/search.py\", line 72, in nonzero\r\n return a.nonzero()\r\n File \"cupy/core/core.pyx\", line 810, in cupy.core.core.ndarray.nonzero (cupy/core/core.cpp:16210)\r\n scan_index = scan(condition.astype(dtype).ravel())\r\n File \"cupy/core/core.pyx\", line 3883, in cupy.core.core.scan (cupy/core/core.cpp:83826)\r\n kern_scan(grid=((a.size - 1) // (2 * block_size) + 1,),\r\n File \"cupy/cuda/function.pyx\", line 118, in cupy.cuda.function.Function.__call__ (cupy/cuda/function.cpp:3794)\r\n _launch(\r\n File \"cupy/cuda/function.pyx\", line 100, in cupy.cuda.function._launch (cupy/cuda/function.cpp:3431)\r\n driver.launchKernel(\r\n File \"cupy/cuda/driver.pyx\", line 170, in cupy.cuda.driver.launchKernel (cupy/cuda/driver.cpp:3262)\r\n check_status(status)\r\n File \"cupy/cuda/driver.pyx\", line 70, in cupy.cuda.driver.check_status (cupy/cuda/driver.cpp:1481)\r\n raise CUDADriverError(status)\r\ncupy.cuda.driver.CUDADriverError: CUDA_ERROR_INVALID_VALUE: invalid argument\r\n```\r\nCuPy version: latest master(v2.0.0a1)\n", "before_files": [{"content": "from cupy import core\n\n\ndef argmax(a, axis=None, dtype=None, out=None, keepdims=False):\n \"\"\"Returns the indices of the maximum along an axis.\n\n Args:\n a (cupy.ndarray): Array to take argmax.\n axis (int): Along which axis to find the maximum. ``a`` is flattened by\n default.\n dtype: Data type specifier.\n out (cupy.ndarray): Output array.\n keepdims (bool): If ``True``, the axis ``axis`` is preserved as an axis\n of length one.\n\n Returns:\n cupy.ndarray: The indices of the maximum of ``a`` along an axis.\n\n .. 
seealso:: :func:`numpy.argmax`\n\n \"\"\"\n # TODO(okuta): check type\n return a.argmax(axis=axis, dtype=dtype, out=out, keepdims=keepdims)\n\n\n# TODO(okuta): Implement nanargmax\n\n\ndef argmin(a, axis=None, dtype=None, out=None, keepdims=False):\n \"\"\"Returns the indices of the minimum along an axis.\n\n Args:\n a (cupy.ndarray): Array to take argmin.\n axis (int): Along which axis to find the minimum. ``a`` is flattened by\n default.\n dtype: Data type specifier.\n out (cupy.ndarray): Output array.\n keepdims (bool): If ``True``, the axis ``axis`` is preserved as an axis\n of length one.\n\n Returns:\n cupy.ndarray: The indices of the minimum of ``a`` along an axis.\n\n .. seealso:: :func:`numpy.argmin`\n\n \"\"\"\n # TODO(okuta): check type\n return a.argmin(axis=axis, dtype=dtype, out=out, keepdims=keepdims)\n\n\n# TODO(okuta): Implement nanargmin\n\n\n# TODO(okuta): Implement argwhere\n\n\ndef nonzero(a):\n \"\"\"Return the indices of the elements that are non-zero.\n\n Returns a tuple of arrays, one for each dimension of a,\n containing the indices of the non-zero elements in that dimension.\n\n Args:\n a (cupy.ndarray): array\n\n Returns:\n tuple of arrays: Indices of elements that are non-zero.\n\n .. seealso:: :func:`numpy.nonzero`\n\n \"\"\"\n return a.nonzero()\n\n\ndef flatnonzero(a):\n \"\"\"Return indices that are non-zero in the flattened version of a.\n\n This is equivalent to a.ravel().nonzero()[0].\n\n Args:\n a (cupy.ndarray): input array\n\n Returns:\n cupy.ndarray: Output array,\n containing the indices of the elements of a.ravel() that are non-zero.\n\n .. seealso:: :func:`numpy.flatnonzero`\n \"\"\"\n return a.ravel().nonzero()[0]\n\n\ndef where(condition, x=None, y=None):\n \"\"\"Return elements, either from x or y, depending on condition.\n\n If only condition is given, return ``condition.nonzero()``.\n\n Args:\n condition (cupy.ndarray): When True, take x, otherwise take y.\n x (cupy.ndarray): Values from which to choose on ``True``.\n y (cupy.ndarray): Values from which to choose on ``False``.\n\n Returns:\n cupy.ndarray: Each element of output contains elements of ``x`` when\n ``condition`` is ``True``, otherwise elements of ``y``. If only\n ``condition`` is given, return the tuple ``condition.nonzero()``,\n the indices where ``condition`` is True.\n\n .. seealso:: :func:`numpy.where`\n\n \"\"\"\n\n missing = (x is None, y is None).count(True)\n\n if missing == 1:\n raise ValueError(\"Must provide both 'x' and 'y' or neither.\")\n if missing == 2:\n return nonzero(condition)\n\n return _where_ufunc(condition.astype('?'), x, y)\n\n\n_where_ufunc = core.create_ufunc(\n 'cupy_where',\n ('???->?', '?bb->b', '?BB->B', '?hh->h', '?HH->H', '?ii->i', '?II->I',\n '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f',\n # On CUDA 6.5 these combinations don't work correctly (on CUDA >=7.0, it\n # works).\n # See issue #551.\n '?hd->d', '?Hd->d',\n '?dd->d'),\n 'out0 = in0 ? in1 : in2')\n\n\n# TODO(okuta): Implement searchsorted\n\n\n# TODO(okuta): Implement extract\n", "path": "cupy/sorting/search.py"}], "after_files": [{"content": "from cupy import core\n\n\ndef argmax(a, axis=None, dtype=None, out=None, keepdims=False):\n \"\"\"Returns the indices of the maximum along an axis.\n\n Args:\n a (cupy.ndarray): Array to take argmax.\n axis (int): Along which axis to find the maximum. 
``a`` is flattened by\n default.\n dtype: Data type specifier.\n out (cupy.ndarray): Output array.\n keepdims (bool): If ``True``, the axis ``axis`` is preserved as an axis\n of length one.\n\n Returns:\n cupy.ndarray: The indices of the maximum of ``a`` along an axis.\n\n .. seealso:: :func:`numpy.argmax`\n\n \"\"\"\n # TODO(okuta): check type\n return a.argmax(axis=axis, dtype=dtype, out=out, keepdims=keepdims)\n\n\n# TODO(okuta): Implement nanargmax\n\n\ndef argmin(a, axis=None, dtype=None, out=None, keepdims=False):\n \"\"\"Returns the indices of the minimum along an axis.\n\n Args:\n a (cupy.ndarray): Array to take argmin.\n axis (int): Along which axis to find the minimum. ``a`` is flattened by\n default.\n dtype: Data type specifier.\n out (cupy.ndarray): Output array.\n keepdims (bool): If ``True``, the axis ``axis`` is preserved as an axis\n of length one.\n\n Returns:\n cupy.ndarray: The indices of the minimum of ``a`` along an axis.\n\n .. seealso:: :func:`numpy.argmin`\n\n \"\"\"\n # TODO(okuta): check type\n return a.argmin(axis=axis, dtype=dtype, out=out, keepdims=keepdims)\n\n\n# TODO(okuta): Implement nanargmin\n\n\n# TODO(okuta): Implement argwhere\n\n\ndef nonzero(a):\n \"\"\"Return the indices of the elements that are non-zero.\n\n Returns a tuple of arrays, one for each dimension of a,\n containing the indices of the non-zero elements in that dimension.\n\n Args:\n a (cupy.ndarray): array\n\n Returns:\n tuple of arrays: Indices of elements that are non-zero.\n\n .. seealso:: :func:`numpy.nonzero`\n\n \"\"\"\n assert isinstance(a, core.ndarray)\n return a.nonzero()\n\n\ndef flatnonzero(a):\n \"\"\"Return indices that are non-zero in the flattened version of a.\n\n This is equivalent to a.ravel().nonzero()[0].\n\n Args:\n a (cupy.ndarray): input array\n\n Returns:\n cupy.ndarray: Output array,\n containing the indices of the elements of a.ravel() that are non-zero.\n\n .. seealso:: :func:`numpy.flatnonzero`\n \"\"\"\n assert isinstance(a, core.ndarray)\n return a.ravel().nonzero()[0]\n\n\ndef where(condition, x=None, y=None):\n \"\"\"Return elements, either from x or y, depending on condition.\n\n If only condition is given, return ``condition.nonzero()``.\n\n Args:\n condition (cupy.ndarray): When True, take x, otherwise take y.\n x (cupy.ndarray): Values from which to choose on ``True``.\n y (cupy.ndarray): Values from which to choose on ``False``.\n\n Returns:\n cupy.ndarray: Each element of output contains elements of ``x`` when\n ``condition`` is ``True``, otherwise elements of ``y``. If only\n ``condition`` is given, return the tuple ``condition.nonzero()``,\n the indices where ``condition`` is True.\n\n .. seealso:: :func:`numpy.where`\n\n \"\"\"\n\n missing = (x is None, y is None).count(True)\n\n if missing == 1:\n raise ValueError(\"Must provide both 'x' and 'y' or neither.\")\n if missing == 2:\n return nonzero(condition)\n\n return _where_ufunc(condition.astype('?'), x, y)\n\n\n_where_ufunc = core.create_ufunc(\n 'cupy_where',\n ('???->?', '?bb->b', '?BB->B', '?hh->h', '?HH->H', '?ii->i', '?II->I',\n '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f',\n # On CUDA 6.5 these combinations don't work correctly (on CUDA >=7.0, it\n # works).\n # See issue #551.\n '?hd->d', '?Hd->d',\n '?dd->d'),\n 'out0 = in0 ? in1 : in2')\n\n\n# TODO(okuta): Implement searchsorted\n\n\n# TODO(okuta): Implement extract\n", "path": "cupy/sorting/search.py"}]}
| 2,350 | 126 |
gh_patches_debug_41733
|
rasdani/github-patches
|
git_diff
|
stephenmcd__mezzanine-1039
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Catch-all signals in mezzanine.generic disable fast deletion
Hi Steve,
Mezzanine's BaseGenericRelation connects its signals up without a sender with the following comment:
```
# For some unknown reason the signal won't be triggered
# if given a sender arg, particularly when running
# Cartridge with the field RichTextPage.keywords - so
# instead of specifying self.rel.to as the sender, we
# check for it inside the signal itself.
post_save.connect(self._related_items_changed)
post_delete.connect(self._related_items_changed)
```
Django has two paths for object deletion – a fast path which transforms the query into a `DELETE FROM`, and a slow one that retrieves the items and then deletes them. A condition of the fast path being used is that the model being deleted has no post_delete handler set up, so by listening for all post_delete signals mezzanine.generic effectively disables the fast path globally.
I've just run the tests with the sender specified on 1.4, 1.5 and 1.6 and everything is passing – if you can remember what wasn't working, or give any more info, I'd quite like to look into this further and see if we can't find another workaround.
As always thanks for your great work.
--- END ISSUE ---
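An editorial sketch before the file below, to make the fast-path point concrete: Django only skips the slow collector when the model being deleted has no `post_delete` listener, so the catch-all connection quoted above touches every model, while a connection scoped with `sender=` only touches the related model. The `Article` model and handler here are hypothetical; the patch shown later in this entry arrives at the scoped form by deferring the connection until the related model class is prepared.
```
# Editorial sketch, not Mezzanine code: contrast between a catch-all signal
# connection and one scoped to a single (hypothetical) model.
from django.db.models.signals import post_delete


def related_items_changed(sender, instance, **kwargs):
    # Recompute denormalised fields for the object the deleted item points at.
    pass


# Catch-all: every model in the project now has a post_delete listener, so
# queryset deletes of any model fall back to the slow fetch-then-delete path.
post_delete.connect(related_items_changed)

# Scoped: only deletions of Article call the handler; deletes of unrelated
# models can still be collapsed into a single DELETE FROM statement.
# post_delete.connect(related_items_changed, sender=Article)
```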
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mezzanine/generic/fields.py`
Content:
```
1 from __future__ import division, unicode_literals
2 from future.builtins import str
3
4 from copy import copy
5
6 from django.conf import settings
7 from django.contrib.contenttypes.generic import GenericRelation
8 from django.core.exceptions import ImproperlyConfigured
9 from django.db.models import get_model, IntegerField, CharField, FloatField
10 from django.db.models.signals import post_save, post_delete
11
12
13 class BaseGenericRelation(GenericRelation):
14 """
15 Extends ``GenericRelation`` to:
16
17 - Add a consistent default value for ``object_id_field`` and
18 check for a ``related_model`` attribute which can be defined
19 on subclasses as a default for the ``to`` argument.
20
21 - Add one or more custom fields to the model that the relation
22 field is applied to, and then call a ``related_items_changed``
23 method each time related items are saved or deleted, so that a
24 calculated value can be stored against the custom fields since
25 aggregates aren't available for GenericRelation instances.
26
27 """
28
29 # Mapping of field names to model fields that will be added.
30 fields = {}
31
32 def __init__(self, *args, **kwargs):
33 """
34 Set up some defaults and check for a ``related_model``
35 attribute for the ``to`` argument.
36 """
37 if kwargs.get("frozen_by_south", False):
38 raise Exception("""
39
40 Your project contains migrations that include one of the fields
41 from mezzanine.generic in its Migration.model dict: possibly
42 KeywordsField, CommentsField or RatingField. These migratons no
43 longer work with the latest versions of Django and South, so you'll
44 need to fix them by hand. This is as simple as commenting out or
45 deleting the field from the Migration.model dict.
46 See http://bit.ly/1hecVsD for an example.
47
48 """)
49
50 kwargs.setdefault("object_id_field", "object_pk")
51 to = getattr(self, "related_model", None)
52 if to:
53 kwargs.setdefault("to", to)
54 super(BaseGenericRelation, self).__init__(*args, **kwargs)
55
56 def contribute_to_class(self, cls, name):
57 """
58 Add each of the names and fields in the ``fields`` attribute
59 to the model the relationship field is applied to, and set up
60 the related item save and delete signals for calling
61 ``related_items_changed``.
62 """
63 for field in cls._meta.many_to_many:
64 if isinstance(field, self.__class__):
65 e = "Multiple %s fields are not supported (%s.%s, %s.%s)" % (
66 self.__class__.__name__, cls.__name__, cls.__name__,
67 name, field.name)
68 raise ImproperlyConfigured(e)
69 self.related_field_name = name
70 super(BaseGenericRelation, self).contribute_to_class(cls, name)
71 # Not applicable to abstract classes, and in fact will break.
72 if not cls._meta.abstract:
73 for (name_string, field) in self.fields.items():
74 if "%s" in name_string:
75 name_string = name_string % name
76 # In Django 1.6, add_to_class will be called on a
77 # parent model's field more than once, so
78 # contribute_to_class needs to be idempotent. We
79 # don't call get_all_field_names() which fill the app
80 # cache get_fields_with_model() is safe.
81 if name_string in [i.name for i, _ in
82 cls._meta.get_fields_with_model()]:
83 continue
84 if field.verbose_name is None:
85 field.verbose_name = self.verbose_name
86 cls.add_to_class(name_string, copy(field))
87 # Add a getter function to the model we can use to retrieve
88 # the field/manager by name.
89 getter_name = "get_%s_name" % self.__class__.__name__.lower()
90 cls.add_to_class(getter_name, lambda self: name)
91 # For some unknown reason the signal won't be triggered
92 # if given a sender arg, particularly when running
93 # Cartridge with the field RichTextPage.keywords - so
94 # instead of specifying self.rel.to as the sender, we
95 # check for it inside the signal itself.
96 post_save.connect(self._related_items_changed)
97 post_delete.connect(self._related_items_changed)
98
99 def _related_items_changed(self, **kwargs):
100 """
101 Ensure that the given related item is actually for the model
102 this field applies to, and pass the instance to the real
103 ``related_items_changed`` handler.
104 """
105 # Manually check that the instance matches the relation,
106 # since we don't specify a sender for the signal.
107 try:
108 to = self.rel.to
109 if isinstance(to, str):
110 to = get_model(*to.split(".", 1))
111 if not isinstance(kwargs["instance"], to):
112 raise TypeError
113 except (TypeError, ValueError):
114 return
115 for_model = kwargs["instance"].content_type.model_class()
116 if issubclass(for_model, self.model):
117 instance_id = kwargs["instance"].object_pk
118 try:
119 instance = for_model.objects.get(id=instance_id)
120 except self.model.DoesNotExist:
121 # Instance itself was deleted - signals are irrelevant.
122 return
123 if hasattr(instance, "get_content_model"):
124 instance = instance.get_content_model()
125 related_manager = getattr(instance, self.related_field_name)
126 self.related_items_changed(instance, related_manager)
127
128 def related_items_changed(self, instance, related_manager):
129 """
130 Can be implemented by subclasses - called whenever the
131 state of related items change, eg they're saved or deleted.
132 The instance for this field and the related manager for the
133 field are passed as arguments.
134 """
135 pass
136
137 def value_from_object(self, obj):
138 """
139 Returns the value of this field in the given model instance.
140 Needed for Django 1.7: https://code.djangoproject.com/ticket/22552
141 """
142 return getattr(obj, self.attname).all()
143
144
145 class CommentsField(BaseGenericRelation):
146 """
147 Stores the number of comments against the
148 ``COMMENTS_FIELD_NAME_count`` field when a comment is saved or
149 deleted.
150 """
151
152 related_model = "generic.ThreadedComment"
153 fields = {"%s_count": IntegerField(editable=False, default=0)}
154
155 def related_items_changed(self, instance, related_manager):
156 """
157 Stores the number of comments. A custom ``count_filter``
158 queryset gets checked for, allowing managers to implement
159 custom count logic.
160 """
161 try:
162 count = related_manager.count_queryset()
163 except AttributeError:
164 count = related_manager.count()
165 count_field_name = list(self.fields.keys())[0] % \
166 self.related_field_name
167 setattr(instance, count_field_name, count)
168 instance.save()
169
170
171 class KeywordsField(BaseGenericRelation):
172 """
173 Stores the keywords as a single string into the
174 ``KEYWORDS_FIELD_NAME_string`` field for convenient access when
175 searching.
176 """
177
178 related_model = "generic.AssignedKeyword"
179 fields = {"%s_string": CharField(editable=False, blank=True,
180 max_length=500)}
181
182 def __init__(self, *args, **kwargs):
183 """
184 Mark the field as editable so that it can be specified in
185 admin class fieldsets and pass validation, and also so that
186 it shows up in the admin form.
187 """
188 super(KeywordsField, self).__init__(*args, **kwargs)
189 self.editable = True
190
191 def formfield(self, **kwargs):
192 """
193 Provide the custom form widget for the admin, since there
194 isn't a form field mapped to ``GenericRelation`` model fields.
195 """
196 from mezzanine.generic.forms import KeywordsWidget
197 kwargs["widget"] = KeywordsWidget
198 return super(KeywordsField, self).formfield(**kwargs)
199
200 def save_form_data(self, instance, data):
201 """
202 The ``KeywordsWidget`` field will return data as a string of
203 comma separated IDs for the ``Keyword`` model - convert these
204 into actual ``AssignedKeyword`` instances. Also delete
205 ``Keyword`` instances if their last related ``AssignedKeyword``
206 instance is being removed.
207 """
208 from mezzanine.generic.models import AssignedKeyword, Keyword
209 related_manager = getattr(instance, self.name)
210 # Get a list of Keyword IDs being removed.
211 old_ids = [str(a.keyword_id) for a in related_manager.all()]
212 new_ids = data.split(",")
213 removed_ids = set(old_ids) - set(new_ids)
214 # Remove current AssignedKeyword instances.
215 related_manager.all().delete()
216 # Convert the data into AssignedKeyword instances.
217 if data:
218 data = [AssignedKeyword(keyword_id=i) for i in new_ids]
219 # Remove Keyword instances than no longer have a
220 # related AssignedKeyword instance.
221 existing = AssignedKeyword.objects.filter(keyword__id__in=removed_ids)
222 existing_ids = set([str(a.keyword_id) for a in existing])
223 unused_ids = removed_ids - existing_ids
224 Keyword.objects.filter(id__in=unused_ids).delete()
225 super(KeywordsField, self).save_form_data(instance, data)
226
227 def contribute_to_class(self, cls, name):
228 """
229 Swap out any reference to ``KeywordsField`` with the
230 ``KEYWORDS_FIELD_string`` field in ``search_fields``.
231 """
232 super(KeywordsField, self).contribute_to_class(cls, name)
233 string_field_name = list(self.fields.keys())[0] % \
234 self.related_field_name
235 if hasattr(cls, "search_fields") and name in cls.search_fields:
236 try:
237 weight = cls.search_fields[name]
238 except TypeError:
239 # search_fields is a sequence.
240 index = cls.search_fields.index(name)
241 search_fields_type = type(cls.search_fields)
242 cls.search_fields = list(cls.search_fields)
243 cls.search_fields[index] = string_field_name
244 cls.search_fields = search_fields_type(cls.search_fields)
245 else:
246 del cls.search_fields[name]
247 cls.search_fields[string_field_name] = weight
248
249 def related_items_changed(self, instance, related_manager):
250 """
251 Stores the keywords as a single string for searching.
252 """
253 assigned = related_manager.select_related("keyword")
254 keywords = " ".join([str(a.keyword) for a in assigned])
255 string_field_name = list(self.fields.keys())[0] % \
256 self.related_field_name
257 if getattr(instance, string_field_name) != keywords:
258 setattr(instance, string_field_name, keywords)
259 instance.save()
260
261
262 class RatingField(BaseGenericRelation):
263 """
264 Stores the rating count and average against the
265 ``RATING_FIELD_NAME_count`` and ``RATING_FIELD_NAME_average``
266 fields when a rating is saved or deleted.
267 """
268
269 related_model = "generic.Rating"
270 fields = {"%s_count": IntegerField(default=0, editable=False),
271 "%s_sum": IntegerField(default=0, editable=False),
272 "%s_average": FloatField(default=0, editable=False)}
273
274 def related_items_changed(self, instance, related_manager):
275 """
276 Calculates and saves the average rating.
277 """
278 ratings = [r.value for r in related_manager.all()]
279 count = len(ratings)
280 _sum = sum(ratings)
281 average = _sum / count if count > 0 else 0
282 setattr(instance, "%s_count" % self.related_field_name, count)
283 setattr(instance, "%s_sum" % self.related_field_name, _sum)
284 setattr(instance, "%s_average" % self.related_field_name, average)
285 instance.save()
286
287
288 # South requires custom fields to be given "rules".
289 # See http://south.aeracode.org/docs/customfields.html
290 if "south" in settings.INSTALLED_APPS:
291 try:
292 from south.modelsinspector import add_introspection_rules
293 add_introspection_rules(rules=[((BaseGenericRelation,), [], {})],
294 patterns=["mezzanine\.generic\.fields\."])
295 except ImportError:
296 pass
297
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mezzanine/generic/fields.py b/mezzanine/generic/fields.py
--- a/mezzanine/generic/fields.py
+++ b/mezzanine/generic/fields.py
@@ -7,7 +7,57 @@
from django.contrib.contenttypes.generic import GenericRelation
from django.core.exceptions import ImproperlyConfigured
from django.db.models import get_model, IntegerField, CharField, FloatField
-from django.db.models.signals import post_save, post_delete
+from django.db.models.signals import post_save, post_delete, class_prepared
+from django.utils import six
+
+
+class LazyModelOperations(object):
+ """
+ This class connects itself to Django's class_prepared signal. Pass a
+ function and a model or model name to its add() method, and the function
+ will be called with the model as its only parameter once the model has been
+ loaded. If the model is already loaded, the function is called immediately.
+
+ Adapted from code found at the top of django/db/models/fields/related.py.
+ """
+ def __init__(self):
+ self.pending_operations = {}
+ class_prepared.connect(self.signal_receiver)
+
+ @staticmethod
+ def model_key(model_or_name):
+ """
+ Returns an (app_label, model_name) tuple from a model or string.
+ """
+ if isinstance(model_or_name, six.string_types):
+ app_label, model_name = model_or_name.split(".")
+ else:
+ # it's actually a model class
+ app_label = model_or_name._meta.app_label
+ model_name = model_or_name._meta.object_name
+ return app_label, model_name
+
+ def add(self, function, model_or_name):
+ model_key = self.model_key(model_or_name)
+
+ # If the model is already loaded, pass it to the function immediately.
+ # Otherwise, delay execution until the class is prepared.
+ model = get_model(*model_key, seed_cache=False, only_installed=False)
+ if model:
+ function(model)
+ else:
+ self.pending_operations.setdefault(model_key, []).append(function)
+
+ def signal_receiver(self, sender, **_):
+ """
+ Receive class_prepared, and pass the freshly prepared model to each
+ function waiting for it.
+ """
+ key = (sender._meta.app_label, sender.__name__)
+ for function in self.pending_operations.pop(key, []):
+ function(sender)
+
+lazy_model_ops = LazyModelOperations()
class BaseGenericRelation(GenericRelation):
@@ -93,8 +143,15 @@
# Cartridge with the field RichTextPage.keywords - so
# instead of specifying self.rel.to as the sender, we
# check for it inside the signal itself.
- post_save.connect(self._related_items_changed)
- post_delete.connect(self._related_items_changed)
+
+ def connect_save(sender):
+ post_save.connect(self._related_items_changed, sender=sender)
+
+ def connect_delete(sender):
+ post_delete.connect(self._related_items_changed, sender=sender)
+
+ lazy_model_ops.add(connect_save, self.rel.to)
+ lazy_model_ops.add(connect_delete, self.rel.to)
def _related_items_changed(self, **kwargs):
"""
@@ -102,16 +159,6 @@
this field applies to, and pass the instance to the real
``related_items_changed`` handler.
"""
- # Manually check that the instance matches the relation,
- # since we don't specify a sender for the signal.
- try:
- to = self.rel.to
- if isinstance(to, str):
- to = get_model(*to.split(".", 1))
- if not isinstance(kwargs["instance"], to):
- raise TypeError
- except (TypeError, ValueError):
- return
for_model = kwargs["instance"].content_type.model_class()
if issubclass(for_model, self.model):
instance_id = kwargs["instance"].object_pk
|
{"golden_diff": "diff --git a/mezzanine/generic/fields.py b/mezzanine/generic/fields.py\n--- a/mezzanine/generic/fields.py\n+++ b/mezzanine/generic/fields.py\n@@ -7,7 +7,57 @@\n from django.contrib.contenttypes.generic import GenericRelation\n from django.core.exceptions import ImproperlyConfigured\n from django.db.models import get_model, IntegerField, CharField, FloatField\n-from django.db.models.signals import post_save, post_delete\n+from django.db.models.signals import post_save, post_delete, class_prepared\n+from django.utils import six\n+\n+\n+class LazyModelOperations(object):\n+ \"\"\"\n+ This class connects itself to Django's class_prepared signal. Pass a\n+ function and a model or model name to its add() method, and the function\n+ will be called with the model as its only parameter once the model has been\n+ loaded. If the model is already loaded, the function is called immediately.\n+\n+ Adapted from code found at the top of django/db/models/fields/related.py.\n+ \"\"\"\n+ def __init__(self):\n+ self.pending_operations = {}\n+ class_prepared.connect(self.signal_receiver)\n+\n+ @staticmethod\n+ def model_key(model_or_name):\n+ \"\"\"\n+ Returns an (app_label, model_name) tuple from a model or string.\n+ \"\"\"\n+ if isinstance(model_or_name, six.string_types):\n+ app_label, model_name = model_or_name.split(\".\")\n+ else:\n+ # it's actually a model class\n+ app_label = model_or_name._meta.app_label\n+ model_name = model_or_name._meta.object_name\n+ return app_label, model_name\n+\n+ def add(self, function, model_or_name):\n+ model_key = self.model_key(model_or_name)\n+\n+ # If the model is already loaded, pass it to the function immediately.\n+ # Otherwise, delay execution until the class is prepared.\n+ model = get_model(*model_key, seed_cache=False, only_installed=False)\n+ if model:\n+ function(model)\n+ else:\n+ self.pending_operations.setdefault(model_key, []).append(function)\n+\n+ def signal_receiver(self, sender, **_):\n+ \"\"\"\n+ Receive class_prepared, and pass the freshly prepared model to each\n+ function waiting for it.\n+ \"\"\"\n+ key = (sender._meta.app_label, sender.__name__)\n+ for function in self.pending_operations.pop(key, []):\n+ function(sender)\n+\n+lazy_model_ops = LazyModelOperations()\n \n \n class BaseGenericRelation(GenericRelation):\n@@ -93,8 +143,15 @@\n # Cartridge with the field RichTextPage.keywords - so\n # instead of specifying self.rel.to as the sender, we\n # check for it inside the signal itself.\n- post_save.connect(self._related_items_changed)\n- post_delete.connect(self._related_items_changed)\n+\n+ def connect_save(sender):\n+ post_save.connect(self._related_items_changed, sender=sender)\n+\n+ def connect_delete(sender):\n+ post_delete.connect(self._related_items_changed, sender=sender)\n+\n+ lazy_model_ops.add(connect_save, self.rel.to)\n+ lazy_model_ops.add(connect_delete, self.rel.to)\n \n def _related_items_changed(self, **kwargs):\n \"\"\"\n@@ -102,16 +159,6 @@\n this field applies to, and pass the instance to the real\n ``related_items_changed`` handler.\n \"\"\"\n- # Manually check that the instance matches the relation,\n- # since we don't specify a sender for the signal.\n- try:\n- to = self.rel.to\n- if isinstance(to, str):\n- to = get_model(*to.split(\".\", 1))\n- if not isinstance(kwargs[\"instance\"], to):\n- raise TypeError\n- except (TypeError, ValueError):\n- return\n for_model = kwargs[\"instance\"].content_type.model_class()\n if issubclass(for_model, self.model):\n instance_id = kwargs[\"instance\"].object_pk\n", 
"issue": "Catch-all signals in mezzanine.generic disable fast deletion\nHi Steve,\n\nMezzanine's BaseGenericRelation connects its signals up without a sender with the following comment:\n\n```\n# For some unknown reason the signal won't be triggered\n# if given a sender arg, particularly when running\n# Cartridge with the field RichTextPage.keywords - so\n# instead of specifying self.rel.to as the sender, we\n# check for it inside the signal itself.\npost_save.connect(self._related_items_changed)\npost_delete.connect(self._related_items_changed)\n```\n\nDjango has two paths for object deletion \u2013\u00a0a fast path which transforms the query into a `DELETE FROM`, and a slow one that retrieves the items and then deletes them. A condition of the fast path being used is that the model being deleted has no post_delete handler set up, so by listening for all post_delete signals mezzanine.generic effectively disables the fast path globally.\n\nI've just run the tests with the sender specified on 1.4, 1.5 and 1.6 and everything is passing \u2013\u00a0if you can remember what wasn't working, or give any more info, I'd quite like to look into this further and see if we can't find another workaround.\n\nAs always thanks for your great work.\n\n", "before_files": [{"content": "from __future__ import division, unicode_literals\nfrom future.builtins import str\n\nfrom copy import copy\n\nfrom django.conf import settings\nfrom django.contrib.contenttypes.generic import GenericRelation\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.db.models import get_model, IntegerField, CharField, FloatField\nfrom django.db.models.signals import post_save, post_delete\n\n\nclass BaseGenericRelation(GenericRelation):\n \"\"\"\n Extends ``GenericRelation`` to:\n\n - Add a consistent default value for ``object_id_field`` and\n check for a ``related_model`` attribute which can be defined\n on subclasses as a default for the ``to`` argument.\n\n - Add one or more custom fields to the model that the relation\n field is applied to, and then call a ``related_items_changed``\n method each time related items are saved or deleted, so that a\n calculated value can be stored against the custom fields since\n aggregates aren't available for GenericRelation instances.\n\n \"\"\"\n\n # Mapping of field names to model fields that will be added.\n fields = {}\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Set up some defaults and check for a ``related_model``\n attribute for the ``to`` argument.\n \"\"\"\n if kwargs.get(\"frozen_by_south\", False):\n raise Exception(\"\"\"\n\n Your project contains migrations that include one of the fields\n from mezzanine.generic in its Migration.model dict: possibly\n KeywordsField, CommentsField or RatingField. These migratons no\n longer work with the latest versions of Django and South, so you'll\n need to fix them by hand. 
This is as simple as commenting out or\n deleting the field from the Migration.model dict.\n See http://bit.ly/1hecVsD for an example.\n\n \"\"\")\n\n kwargs.setdefault(\"object_id_field\", \"object_pk\")\n to = getattr(self, \"related_model\", None)\n if to:\n kwargs.setdefault(\"to\", to)\n super(BaseGenericRelation, self).__init__(*args, **kwargs)\n\n def contribute_to_class(self, cls, name):\n \"\"\"\n Add each of the names and fields in the ``fields`` attribute\n to the model the relationship field is applied to, and set up\n the related item save and delete signals for calling\n ``related_items_changed``.\n \"\"\"\n for field in cls._meta.many_to_many:\n if isinstance(field, self.__class__):\n e = \"Multiple %s fields are not supported (%s.%s, %s.%s)\" % (\n self.__class__.__name__, cls.__name__, cls.__name__,\n name, field.name)\n raise ImproperlyConfigured(e)\n self.related_field_name = name\n super(BaseGenericRelation, self).contribute_to_class(cls, name)\n # Not applicable to abstract classes, and in fact will break.\n if not cls._meta.abstract:\n for (name_string, field) in self.fields.items():\n if \"%s\" in name_string:\n name_string = name_string % name\n # In Django 1.6, add_to_class will be called on a\n # parent model's field more than once, so\n # contribute_to_class needs to be idempotent. We\n # don't call get_all_field_names() which fill the app\n # cache get_fields_with_model() is safe.\n if name_string in [i.name for i, _ in\n cls._meta.get_fields_with_model()]:\n continue\n if field.verbose_name is None:\n field.verbose_name = self.verbose_name\n cls.add_to_class(name_string, copy(field))\n # Add a getter function to the model we can use to retrieve\n # the field/manager by name.\n getter_name = \"get_%s_name\" % self.__class__.__name__.lower()\n cls.add_to_class(getter_name, lambda self: name)\n # For some unknown reason the signal won't be triggered\n # if given a sender arg, particularly when running\n # Cartridge with the field RichTextPage.keywords - so\n # instead of specifying self.rel.to as the sender, we\n # check for it inside the signal itself.\n post_save.connect(self._related_items_changed)\n post_delete.connect(self._related_items_changed)\n\n def _related_items_changed(self, **kwargs):\n \"\"\"\n Ensure that the given related item is actually for the model\n this field applies to, and pass the instance to the real\n ``related_items_changed`` handler.\n \"\"\"\n # Manually check that the instance matches the relation,\n # since we don't specify a sender for the signal.\n try:\n to = self.rel.to\n if isinstance(to, str):\n to = get_model(*to.split(\".\", 1))\n if not isinstance(kwargs[\"instance\"], to):\n raise TypeError\n except (TypeError, ValueError):\n return\n for_model = kwargs[\"instance\"].content_type.model_class()\n if issubclass(for_model, self.model):\n instance_id = kwargs[\"instance\"].object_pk\n try:\n instance = for_model.objects.get(id=instance_id)\n except self.model.DoesNotExist:\n # Instance itself was deleted - signals are irrelevant.\n return\n if hasattr(instance, \"get_content_model\"):\n instance = instance.get_content_model()\n related_manager = getattr(instance, self.related_field_name)\n self.related_items_changed(instance, related_manager)\n\n def related_items_changed(self, instance, related_manager):\n \"\"\"\n Can be implemented by subclasses - called whenever the\n state of related items change, eg they're saved or deleted.\n The instance for this field and the related manager for the\n field are passed as arguments.\n 
\"\"\"\n pass\n\n def value_from_object(self, obj):\n \"\"\"\n Returns the value of this field in the given model instance.\n Needed for Django 1.7: https://code.djangoproject.com/ticket/22552\n \"\"\"\n return getattr(obj, self.attname).all()\n\n\nclass CommentsField(BaseGenericRelation):\n \"\"\"\n Stores the number of comments against the\n ``COMMENTS_FIELD_NAME_count`` field when a comment is saved or\n deleted.\n \"\"\"\n\n related_model = \"generic.ThreadedComment\"\n fields = {\"%s_count\": IntegerField(editable=False, default=0)}\n\n def related_items_changed(self, instance, related_manager):\n \"\"\"\n Stores the number of comments. A custom ``count_filter``\n queryset gets checked for, allowing managers to implement\n custom count logic.\n \"\"\"\n try:\n count = related_manager.count_queryset()\n except AttributeError:\n count = related_manager.count()\n count_field_name = list(self.fields.keys())[0] % \\\n self.related_field_name\n setattr(instance, count_field_name, count)\n instance.save()\n\n\nclass KeywordsField(BaseGenericRelation):\n \"\"\"\n Stores the keywords as a single string into the\n ``KEYWORDS_FIELD_NAME_string`` field for convenient access when\n searching.\n \"\"\"\n\n related_model = \"generic.AssignedKeyword\"\n fields = {\"%s_string\": CharField(editable=False, blank=True,\n max_length=500)}\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Mark the field as editable so that it can be specified in\n admin class fieldsets and pass validation, and also so that\n it shows up in the admin form.\n \"\"\"\n super(KeywordsField, self).__init__(*args, **kwargs)\n self.editable = True\n\n def formfield(self, **kwargs):\n \"\"\"\n Provide the custom form widget for the admin, since there\n isn't a form field mapped to ``GenericRelation`` model fields.\n \"\"\"\n from mezzanine.generic.forms import KeywordsWidget\n kwargs[\"widget\"] = KeywordsWidget\n return super(KeywordsField, self).formfield(**kwargs)\n\n def save_form_data(self, instance, data):\n \"\"\"\n The ``KeywordsWidget`` field will return data as a string of\n comma separated IDs for the ``Keyword`` model - convert these\n into actual ``AssignedKeyword`` instances. 
Also delete\n ``Keyword`` instances if their last related ``AssignedKeyword``\n instance is being removed.\n \"\"\"\n from mezzanine.generic.models import AssignedKeyword, Keyword\n related_manager = getattr(instance, self.name)\n # Get a list of Keyword IDs being removed.\n old_ids = [str(a.keyword_id) for a in related_manager.all()]\n new_ids = data.split(\",\")\n removed_ids = set(old_ids) - set(new_ids)\n # Remove current AssignedKeyword instances.\n related_manager.all().delete()\n # Convert the data into AssignedKeyword instances.\n if data:\n data = [AssignedKeyword(keyword_id=i) for i in new_ids]\n # Remove Keyword instances than no longer have a\n # related AssignedKeyword instance.\n existing = AssignedKeyword.objects.filter(keyword__id__in=removed_ids)\n existing_ids = set([str(a.keyword_id) for a in existing])\n unused_ids = removed_ids - existing_ids\n Keyword.objects.filter(id__in=unused_ids).delete()\n super(KeywordsField, self).save_form_data(instance, data)\n\n def contribute_to_class(self, cls, name):\n \"\"\"\n Swap out any reference to ``KeywordsField`` with the\n ``KEYWORDS_FIELD_string`` field in ``search_fields``.\n \"\"\"\n super(KeywordsField, self).contribute_to_class(cls, name)\n string_field_name = list(self.fields.keys())[0] % \\\n self.related_field_name\n if hasattr(cls, \"search_fields\") and name in cls.search_fields:\n try:\n weight = cls.search_fields[name]\n except TypeError:\n # search_fields is a sequence.\n index = cls.search_fields.index(name)\n search_fields_type = type(cls.search_fields)\n cls.search_fields = list(cls.search_fields)\n cls.search_fields[index] = string_field_name\n cls.search_fields = search_fields_type(cls.search_fields)\n else:\n del cls.search_fields[name]\n cls.search_fields[string_field_name] = weight\n\n def related_items_changed(self, instance, related_manager):\n \"\"\"\n Stores the keywords as a single string for searching.\n \"\"\"\n assigned = related_manager.select_related(\"keyword\")\n keywords = \" \".join([str(a.keyword) for a in assigned])\n string_field_name = list(self.fields.keys())[0] % \\\n self.related_field_name\n if getattr(instance, string_field_name) != keywords:\n setattr(instance, string_field_name, keywords)\n instance.save()\n\n\nclass RatingField(BaseGenericRelation):\n \"\"\"\n Stores the rating count and average against the\n ``RATING_FIELD_NAME_count`` and ``RATING_FIELD_NAME_average``\n fields when a rating is saved or deleted.\n \"\"\"\n\n related_model = \"generic.Rating\"\n fields = {\"%s_count\": IntegerField(default=0, editable=False),\n \"%s_sum\": IntegerField(default=0, editable=False),\n \"%s_average\": FloatField(default=0, editable=False)}\n\n def related_items_changed(self, instance, related_manager):\n \"\"\"\n Calculates and saves the average rating.\n \"\"\"\n ratings = [r.value for r in related_manager.all()]\n count = len(ratings)\n _sum = sum(ratings)\n average = _sum / count if count > 0 else 0\n setattr(instance, \"%s_count\" % self.related_field_name, count)\n setattr(instance, \"%s_sum\" % self.related_field_name, _sum)\n setattr(instance, \"%s_average\" % self.related_field_name, average)\n instance.save()\n\n\n# South requires custom fields to be given \"rules\".\n# See http://south.aeracode.org/docs/customfields.html\nif \"south\" in settings.INSTALLED_APPS:\n try:\n from south.modelsinspector import add_introspection_rules\n add_introspection_rules(rules=[((BaseGenericRelation,), [], {})],\n patterns=[\"mezzanine\\.generic\\.fields\\.\"])\n except ImportError:\n pass\n", 
"path": "mezzanine/generic/fields.py"}], "after_files": [{"content": "from __future__ import division, unicode_literals\nfrom future.builtins import str\n\nfrom copy import copy\n\nfrom django.conf import settings\nfrom django.contrib.contenttypes.generic import GenericRelation\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.db.models import get_model, IntegerField, CharField, FloatField\nfrom django.db.models.signals import post_save, post_delete, class_prepared\nfrom django.utils import six\n\n\nclass LazyModelOperations(object):\n \"\"\"\n This class connects itself to Django's class_prepared signal. Pass a\n function and a model or model name to its add() method, and the function\n will be called with the model as its only parameter once the model has been\n loaded. If the model is already loaded, the function is called immediately.\n\n Adapted from code found at the top of django/db/models/fields/related.py.\n \"\"\"\n def __init__(self):\n self.pending_operations = {}\n class_prepared.connect(self.signal_receiver)\n\n @staticmethod\n def model_key(model_or_name):\n \"\"\"\n Returns an (app_label, model_name) tuple from a model or string.\n \"\"\"\n if isinstance(model_or_name, six.string_types):\n app_label, model_name = model_or_name.split(\".\")\n else:\n # it's actually a model class\n app_label = model_or_name._meta.app_label\n model_name = model_or_name._meta.object_name\n return app_label, model_name\n\n def add(self, function, model_or_name):\n model_key = self.model_key(model_or_name)\n\n # If the model is already loaded, pass it to the function immediately.\n # Otherwise, delay execution until the class is prepared.\n model = get_model(*model_key, seed_cache=False, only_installed=False)\n if model:\n function(model)\n else:\n self.pending_operations.setdefault(model_key, []).append(function)\n\n def signal_receiver(self, sender, **_):\n \"\"\"\n Receive class_prepared, and pass the freshly prepared model to each\n function waiting for it.\n \"\"\"\n key = (sender._meta.app_label, sender.__name__)\n for function in self.pending_operations.pop(key, []):\n function(sender)\n\nlazy_model_ops = LazyModelOperations()\n\n\nclass BaseGenericRelation(GenericRelation):\n \"\"\"\n Extends ``GenericRelation`` to:\n\n - Add a consistent default value for ``object_id_field`` and\n check for a ``related_model`` attribute which can be defined\n on subclasses as a default for the ``to`` argument.\n\n - Add one or more custom fields to the model that the relation\n field is applied to, and then call a ``related_items_changed``\n method each time related items are saved or deleted, so that a\n calculated value can be stored against the custom fields since\n aggregates aren't available for GenericRelation instances.\n\n \"\"\"\n\n # Mapping of field names to model fields that will be added.\n fields = {}\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Set up some defaults and check for a ``related_model``\n attribute for the ``to`` argument.\n \"\"\"\n if kwargs.get(\"frozen_by_south\", False):\n raise Exception(\"\"\"\n\n Your project contains migrations that include one of the fields\n from mezzanine.generic in its Migration.model dict: possibly\n KeywordsField, CommentsField or RatingField. These migratons no\n longer work with the latest versions of Django and South, so you'll\n need to fix them by hand. 
This is as simple as commenting out or\n deleting the field from the Migration.model dict.\n See http://bit.ly/1hecVsD for an example.\n\n \"\"\")\n\n kwargs.setdefault(\"object_id_field\", \"object_pk\")\n to = getattr(self, \"related_model\", None)\n if to:\n kwargs.setdefault(\"to\", to)\n super(BaseGenericRelation, self).__init__(*args, **kwargs)\n\n def contribute_to_class(self, cls, name):\n \"\"\"\n Add each of the names and fields in the ``fields`` attribute\n to the model the relationship field is applied to, and set up\n the related item save and delete signals for calling\n ``related_items_changed``.\n \"\"\"\n for field in cls._meta.many_to_many:\n if isinstance(field, self.__class__):\n e = \"Multiple %s fields are not supported (%s.%s, %s.%s)\" % (\n self.__class__.__name__, cls.__name__, cls.__name__,\n name, field.name)\n raise ImproperlyConfigured(e)\n self.related_field_name = name\n super(BaseGenericRelation, self).contribute_to_class(cls, name)\n # Not applicable to abstract classes, and in fact will break.\n if not cls._meta.abstract:\n for (name_string, field) in self.fields.items():\n if \"%s\" in name_string:\n name_string = name_string % name\n # In Django 1.6, add_to_class will be called on a\n # parent model's field more than once, so\n # contribute_to_class needs to be idempotent. We\n # don't call get_all_field_names() which fill the app\n # cache get_fields_with_model() is safe.\n if name_string in [i.name for i, _ in\n cls._meta.get_fields_with_model()]:\n continue\n if field.verbose_name is None:\n field.verbose_name = self.verbose_name\n cls.add_to_class(name_string, copy(field))\n # Add a getter function to the model we can use to retrieve\n # the field/manager by name.\n getter_name = \"get_%s_name\" % self.__class__.__name__.lower()\n cls.add_to_class(getter_name, lambda self: name)\n # For some unknown reason the signal won't be triggered\n # if given a sender arg, particularly when running\n # Cartridge with the field RichTextPage.keywords - so\n # instead of specifying self.rel.to as the sender, we\n # check for it inside the signal itself.\n\n def connect_save(sender):\n post_save.connect(self._related_items_changed, sender=sender)\n\n def connect_delete(sender):\n post_delete.connect(self._related_items_changed, sender=sender)\n\n lazy_model_ops.add(connect_save, self.rel.to)\n lazy_model_ops.add(connect_delete, self.rel.to)\n\n def _related_items_changed(self, **kwargs):\n \"\"\"\n Ensure that the given related item is actually for the model\n this field applies to, and pass the instance to the real\n ``related_items_changed`` handler.\n \"\"\"\n for_model = kwargs[\"instance\"].content_type.model_class()\n if issubclass(for_model, self.model):\n instance_id = kwargs[\"instance\"].object_pk\n try:\n instance = for_model.objects.get(id=instance_id)\n except self.model.DoesNotExist:\n # Instance itself was deleted - signals are irrelevant.\n return\n if hasattr(instance, \"get_content_model\"):\n instance = instance.get_content_model()\n related_manager = getattr(instance, self.related_field_name)\n self.related_items_changed(instance, related_manager)\n\n def related_items_changed(self, instance, related_manager):\n \"\"\"\n Can be implemented by subclasses - called whenever the\n state of related items change, eg they're saved or deleted.\n The instance for this field and the related manager for the\n field are passed as arguments.\n \"\"\"\n pass\n\n def value_from_object(self, obj):\n \"\"\"\n Returns the value of this field in the given model 
instance.\n Needed for Django 1.7: https://code.djangoproject.com/ticket/22552\n \"\"\"\n return getattr(obj, self.attname).all()\n\n\nclass CommentsField(BaseGenericRelation):\n \"\"\"\n Stores the number of comments against the\n ``COMMENTS_FIELD_NAME_count`` field when a comment is saved or\n deleted.\n \"\"\"\n\n related_model = \"generic.ThreadedComment\"\n fields = {\"%s_count\": IntegerField(editable=False, default=0)}\n\n def related_items_changed(self, instance, related_manager):\n \"\"\"\n Stores the number of comments. A custom ``count_filter``\n queryset gets checked for, allowing managers to implement\n custom count logic.\n \"\"\"\n try:\n count = related_manager.count_queryset()\n except AttributeError:\n count = related_manager.count()\n count_field_name = list(self.fields.keys())[0] % \\\n self.related_field_name\n setattr(instance, count_field_name, count)\n instance.save()\n\n\nclass KeywordsField(BaseGenericRelation):\n \"\"\"\n Stores the keywords as a single string into the\n ``KEYWORDS_FIELD_NAME_string`` field for convenient access when\n searching.\n \"\"\"\n\n related_model = \"generic.AssignedKeyword\"\n fields = {\"%s_string\": CharField(editable=False, blank=True,\n max_length=500)}\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Mark the field as editable so that it can be specified in\n admin class fieldsets and pass validation, and also so that\n it shows up in the admin form.\n \"\"\"\n super(KeywordsField, self).__init__(*args, **kwargs)\n self.editable = True\n\n def formfield(self, **kwargs):\n \"\"\"\n Provide the custom form widget for the admin, since there\n isn't a form field mapped to ``GenericRelation`` model fields.\n \"\"\"\n from mezzanine.generic.forms import KeywordsWidget\n kwargs[\"widget\"] = KeywordsWidget\n return super(KeywordsField, self).formfield(**kwargs)\n\n def save_form_data(self, instance, data):\n \"\"\"\n The ``KeywordsWidget`` field will return data as a string of\n comma separated IDs for the ``Keyword`` model - convert these\n into actual ``AssignedKeyword`` instances. 
Also delete\n ``Keyword`` instances if their last related ``AssignedKeyword``\n instance is being removed.\n \"\"\"\n from mezzanine.generic.models import AssignedKeyword, Keyword\n related_manager = getattr(instance, self.name)\n # Get a list of Keyword IDs being removed.\n old_ids = [str(a.keyword_id) for a in related_manager.all()]\n new_ids = data.split(\",\")\n removed_ids = set(old_ids) - set(new_ids)\n # Remove current AssignedKeyword instances.\n related_manager.all().delete()\n # Convert the data into AssignedKeyword instances.\n if data:\n data = [AssignedKeyword(keyword_id=i) for i in new_ids]\n # Remove Keyword instances than no longer have a\n # related AssignedKeyword instance.\n existing = AssignedKeyword.objects.filter(keyword__id__in=removed_ids)\n existing_ids = set([str(a.keyword_id) for a in existing])\n unused_ids = removed_ids - existing_ids\n Keyword.objects.filter(id__in=unused_ids).delete()\n super(KeywordsField, self).save_form_data(instance, data)\n\n def contribute_to_class(self, cls, name):\n \"\"\"\n Swap out any reference to ``KeywordsField`` with the\n ``KEYWORDS_FIELD_string`` field in ``search_fields``.\n \"\"\"\n super(KeywordsField, self).contribute_to_class(cls, name)\n string_field_name = list(self.fields.keys())[0] % \\\n self.related_field_name\n if hasattr(cls, \"search_fields\") and name in cls.search_fields:\n try:\n weight = cls.search_fields[name]\n except TypeError:\n # search_fields is a sequence.\n index = cls.search_fields.index(name)\n search_fields_type = type(cls.search_fields)\n cls.search_fields = list(cls.search_fields)\n cls.search_fields[index] = string_field_name\n cls.search_fields = search_fields_type(cls.search_fields)\n else:\n del cls.search_fields[name]\n cls.search_fields[string_field_name] = weight\n\n def related_items_changed(self, instance, related_manager):\n \"\"\"\n Stores the keywords as a single string for searching.\n \"\"\"\n assigned = related_manager.select_related(\"keyword\")\n keywords = \" \".join([str(a.keyword) for a in assigned])\n string_field_name = list(self.fields.keys())[0] % \\\n self.related_field_name\n if getattr(instance, string_field_name) != keywords:\n setattr(instance, string_field_name, keywords)\n instance.save()\n\n\nclass RatingField(BaseGenericRelation):\n \"\"\"\n Stores the rating count and average against the\n ``RATING_FIELD_NAME_count`` and ``RATING_FIELD_NAME_average``\n fields when a rating is saved or deleted.\n \"\"\"\n\n related_model = \"generic.Rating\"\n fields = {\"%s_count\": IntegerField(default=0, editable=False),\n \"%s_sum\": IntegerField(default=0, editable=False),\n \"%s_average\": FloatField(default=0, editable=False)}\n\n def related_items_changed(self, instance, related_manager):\n \"\"\"\n Calculates and saves the average rating.\n \"\"\"\n ratings = [r.value for r in related_manager.all()]\n count = len(ratings)\n _sum = sum(ratings)\n average = _sum / count if count > 0 else 0\n setattr(instance, \"%s_count\" % self.related_field_name, count)\n setattr(instance, \"%s_sum\" % self.related_field_name, _sum)\n setattr(instance, \"%s_average\" % self.related_field_name, average)\n instance.save()\n\n\n# South requires custom fields to be given \"rules\".\n# See http://south.aeracode.org/docs/customfields.html\nif \"south\" in settings.INSTALLED_APPS:\n try:\n from south.modelsinspector import add_introspection_rules\n add_introspection_rules(rules=[((BaseGenericRelation,), [], {})],\n patterns=[\"mezzanine\\.generic\\.fields\\.\"])\n except ImportError:\n pass\n", 
"path": "mezzanine/generic/fields.py"}]}
| 3,893 | 892 |
gh_patches_debug_21969
|
rasdani/github-patches
|
git_diff
|
easybuilders__easybuild-easyblocks-1663
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CUDA_{ROOT,HOME,PATH} are PATH-like?
See: https://github.com/easybuilders/easybuild-easyblocks/blob/e3a4cd70357103e1fa463751df13378f217c7ad1/easybuild/easyblocks/c/cuda.py#L191-L193
This creates `prepend_path` module commands, instead of `setenv`! Is there a particular reason to do so?
--- END ISSUE ---
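For context (an editorial illustration, not part of the quoted issue): the distinction matters because the two module commands have different semantics. A `prepend_path`-style variable accumulates colon-separated entries every time something prepends to it, which is what you want for `PATH` or `LD_LIBRARY_PATH` but wrong for `CUDA_HOME`/`CUDA_ROOT`/`CUDA_PATH`, which tools expect to hold a single directory. A minimal Python sketch of the difference, using made-up install paths:

```python
import os

def prepend_path(var, value):
    # PATH-like semantics: stack entries in front, separated by os.pathsep (':' on Linux)
    old = os.environ.get(var)
    os.environ[var] = value if not old else value + os.pathsep + old

def setenv(var, value):
    # plain variable: exactly one value, later calls overwrite
    os.environ[var] = value

prepend_path("CUDA_HOME", "/opt/cuda-10.0")
prepend_path("CUDA_HOME", "/opt/cuda-10.1")   # e.g. a second load that is not unloaded cleanly
print(os.environ["CUDA_HOME"])                # /opt/cuda-10.1:/opt/cuda-10.0 -- not a usable directory

setenv("CUDA_HOME", "/opt/cuda-10.1")
print(os.environ["CUDA_HOME"])                # /opt/cuda-10.1
```

The accepted fix further down follows the second pattern: it drops the three variables from `make_module_req_guess` (which emits path-like statements) and sets them via `module_generator.set_environment` instead.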
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `easybuild/easyblocks/c/cuda.py`
Content:
```
1 ##
2 # This file is an EasyBuild reciPY as per https://github.com/easybuilders/easybuild
3 #
4 # Copyright:: Copyright 2012-2019 Cyprus Institute / CaSToRC, Uni.Lu, NTUA, Ghent University, Forschungszentrum Juelich GmbH
5 # Authors:: George Tsouloupas <[email protected]>, Fotis Georgatos <[email protected]>, Kenneth Hoste, Damian Alvarez
6 # License:: MIT/GPL
7 # $Id$
8 #
9 # This work implements a part of the HPCBIOS project and is a component of the policy:
10 # http://hpcbios.readthedocs.org/en/latest/HPCBIOS_2012-99.html
11 ##
12 """
13 EasyBuild support for CUDA, implemented as an easyblock
14
15 Ref: https://speakerdeck.com/ajdecon/introduction-to-the-cuda-toolkit-for-building-applications
16
17 @author: George Tsouloupas (Cyprus Institute)
18 @author: Fotis Georgatos (Uni.lu)
19 @author: Kenneth Hoste (Ghent University)
20 @author: Damian Alvarez (Forschungszentrum Juelich)
21 @author: Ward Poelmans (Free University of Brussels)
22 """
23 import os
24 import re
25 import stat
26
27 from distutils.version import LooseVersion
28
29 from easybuild.easyblocks.generic.binary import Binary
30 from easybuild.framework.easyconfig import CUSTOM
31 from easybuild.tools.build_log import EasyBuildError
32 from easybuild.tools.filetools import adjust_permissions, patch_perl_script_autoflush, read_file, write_file
33 from easybuild.tools.run import run_cmd, run_cmd_qa
34 from easybuild.tools.systemtools import get_shared_lib_ext
35
36 # Wrapper script definition
37 WRAPPER_TEMPLATE = """#!/bin/sh
38 echo "$@" | grep -e '-ccbin' -e '--compiler-bindir' > /dev/null
39 if [ $? -eq 0 ];
40 then
41 echo "ERROR: do not set -ccbin or --compiler-bindir when using the `basename $0` wrapper"
42 else
43 nvcc -ccbin=%s "$@"
44 exit $?
45 fi """
46
47 class EB_CUDA(Binary):
48 """
49 Support for installing CUDA.
50 """
51
52 @staticmethod
53 def extra_options():
54 """Create a set of wrappers based on a list determined by the easyconfig file"""
55 extra_vars = {
56 'host_compilers': [None, "Host compilers for which a wrapper will be generated", CUSTOM]
57 }
58 return Binary.extra_options(extra_vars)
59
60 def extract_step(self):
61 """Extract installer to have more control, e.g. options, patching Perl scripts, etc."""
62 execpath = self.src[0]['path']
63 run_cmd("/bin/sh " + execpath + " --noexec --nox11 --target " + self.builddir)
64 self.src[0]['finalpath'] = self.builddir
65
66 def install_step(self):
67 """Install CUDA using Perl install script."""
68
69 # define how to run the installer
70 # script has /usr/bin/perl hardcoded, but we want to have control over which perl is being used
71 if LooseVersion(self.version) <= LooseVersion("5"):
72 install_interpreter = "perl"
73 install_script = "install-linux.pl"
74 self.cfg.update('installopts', '--prefix=%s' % self.installdir)
75 elif LooseVersion(self.version) > LooseVersion("5") and LooseVersion(self.version) < LooseVersion("10.1"):
76 install_interpreter = "perl"
77 install_script = "cuda-installer.pl"
78 # note: also including samples (via "-samplespath=%(installdir)s -samples") would require libglut
79 self.cfg.update('installopts', "-verbose -silent -toolkitpath=%s -toolkit" % self.installdir)
80 else:
81 install_interpreter = ""
82 install_script = "./cuda-installer"
83 # note: also including samples (via "-samplespath=%(installdir)s -samples") would require libglut
84 self.cfg.update('installopts', "--silent --toolkit --toolkitpath=%s --defaultroot=%s" % (
85 self.installdir, self.installdir))
86
87 cmd = "%(preinstallopts)s %(interpreter)s %(script)s %(installopts)s" % {
88 'preinstallopts': self.cfg['preinstallopts'],
89 'interpreter': install_interpreter,
90 'script': install_script,
91 'installopts': self.cfg['installopts']
92 }
93
94 # prepare for running install script autonomously
95 qanda = {}
96 stdqa = {
97 # this question is only asked if CUDA tools are already available system-wide
98 r"Would you like to remove all CUDA files under .*? (yes/no/abort): ": "no",
99 }
100 noqanda = [
101 r"^Configuring",
102 r"Installation Complete",
103 r"Verifying archive integrity.*",
104 r"^Uncompressing NVIDIA CUDA",
105 r".* -> .*",
106 ]
107
108 # patch install script to handle Q&A autonomously
109 if install_interpreter == "perl":
110 patch_perl_script_autoflush(os.path.join(self.builddir, install_script))
111
112 # make sure $DISPLAY is not defined, which may lead to (weird) problems
113 # this is workaround for not being able to specify --nox11 to the Perl install scripts
114 if 'DISPLAY' in os.environ:
115 os.environ.pop('DISPLAY')
116
117 # overriding maxhits default value to 300 (300s wait for nothing to change in the output without seeing a known
118 # question)
119 run_cmd_qa(cmd, qanda, std_qa=stdqa, no_qa=noqanda, log_all=True, simple=True, maxhits=300)
120
121 # check if there are patches to apply
122 if len(self.src) > 1:
123 for patch in self.src[1:]:
124 self.log.debug("Running patch %s", patch['name'])
125 run_cmd("/bin/sh " + patch['path'] + " --accept-eula --silent --installdir=" + self.installdir)
126
127 def post_install_step(self):
128 """Create wrappers for the specified host compilers and generate the appropriate stub symlinks"""
129 def create_wrapper(wrapper_name, wrapper_comp):
130 """Create for a particular compiler, with a particular name"""
131 wrapper_f = os.path.join(self.installdir, 'bin', wrapper_name)
132 write_file(wrapper_f, WRAPPER_TEMPLATE % wrapper_comp)
133 adjust_permissions(wrapper_f, stat.S_IXUSR|stat.S_IRUSR|stat.S_IXGRP|stat.S_IRGRP|stat.S_IXOTH|stat.S_IROTH)
134
135 # Prepare wrappers to handle a default host compiler other than g++
136 for comp in (self.cfg['host_compilers'] or []):
137 create_wrapper('nvcc_%s' % comp, comp)
138
139 # Run ldconfig to create missing symlinks in the stubs directory (libcuda.so.1, etc)
140 run_cmd("ldconfig -N " + os.path.join(self.installdir, 'lib64', 'stubs'))
141
142 super(EB_CUDA, self).post_install_step()
143
144 def sanity_check_step(self):
145 """Custom sanity check for CUDA."""
146
147 if LooseVersion(self.version) > LooseVersion("9"):
148 versionfile = read_file(os.path.join(self.installdir, "version.txt"))
149 if not re.search("Version %s$" % self.version, versionfile):
150 raise EasyBuildError("Unable to find the correct version (%s) in the version.txt file", self.version)
151
152 shlib_ext = get_shared_lib_ext()
153
154 chk_libdir = ["lib64"]
155
156 # Versions higher than 6 do not provide 32 bit libraries
157 if LooseVersion(self.version) < LooseVersion("6"):
158 chk_libdir += ["lib"]
159
160 culibs = ["cublas", "cudart", "cufft", "curand", "cusparse"]
161 custom_paths = {
162 'files': [os.path.join("bin", x) for x in ["fatbinary", "nvcc", "nvlink", "ptxas"]] +
163 [os.path.join("%s", "lib%s.%s") % (x, y, shlib_ext) for x in chk_libdir for y in culibs],
164 'dirs': ["include"],
165 }
166
167 if LooseVersion(self.version) < LooseVersion('7'):
168 custom_paths['files'].append(os.path.join('open64', 'bin', 'nvopencc'))
169 if LooseVersion(self.version) >= LooseVersion('7'):
170 custom_paths['files'].append(os.path.join("extras", "CUPTI", "lib64", "libcupti.%s") % shlib_ext)
171 custom_paths['dirs'].append(os.path.join("extras", "CUPTI", "include"))
172
173
174 super(EB_CUDA, self).sanity_check_step(custom_paths=custom_paths)
175
176 def make_module_req_guess(self):
177 """Specify CUDA custom values for PATH etc."""
178
179 guesses = super(EB_CUDA, self).make_module_req_guess()
180
181 # The dirs should be in the order ['open64/bin', 'bin']
182 bin_path = []
183 if LooseVersion(self.version) < LooseVersion('7'):
184 bin_path.append(os.path.join('open64', 'bin'))
185 bin_path.append('bin')
186
187 lib_path = ['lib64']
188 inc_path = ['include']
189 if LooseVersion(self.version) >= LooseVersion('7'):
190 lib_path.append(os.path.join('extras', 'CUPTI', 'lib64'))
191 inc_path.append(os.path.join('extras', 'CUPTI', 'include'))
192 bin_path.append(os.path.join('nvvm', 'bin'))
193 lib_path.append(os.path.join('nvvm', 'lib64'))
194 inc_path.append(os.path.join('nvvm', 'include'))
195
196 guesses.update({
197 'PATH': bin_path,
198 'LD_LIBRARY_PATH': lib_path,
199 'LIBRARY_PATH': ['lib64', os.path.join('lib64', 'stubs')],
200 'CPATH': inc_path,
201 'CUDA_HOME': [''],
202 'CUDA_ROOT': [''],
203 'CUDA_PATH': [''],
204 })
205
206 return guesses
207
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/easybuild/easyblocks/c/cuda.py b/easybuild/easyblocks/c/cuda.py
--- a/easybuild/easyblocks/c/cuda.py
+++ b/easybuild/easyblocks/c/cuda.py
@@ -173,6 +173,15 @@
super(EB_CUDA, self).sanity_check_step(custom_paths=custom_paths)
+ def make_module_extra(self):
+ """Set the install directory as CUDA_HOME, CUDA_ROOT, CUDA_PATH."""
+ txt = super(EB_CUDA, self).make_module_extra()
+ txt += self.module_generator.set_environment('CUDA_HOME', self.installdir)
+ txt += self.module_generator.set_environment('CUDA_ROOT', self.installdir)
+ txt += self.module_generator.set_environment('CUDA_PATH', self.installdir)
+ self.log.debug("make_module_extra added this: %s", txt)
+ return txt
+
def make_module_req_guess(self):
"""Specify CUDA custom values for PATH etc."""
@@ -198,9 +207,6 @@
'LD_LIBRARY_PATH': lib_path,
'LIBRARY_PATH': ['lib64', os.path.join('lib64', 'stubs')],
'CPATH': inc_path,
- 'CUDA_HOME': [''],
- 'CUDA_ROOT': [''],
- 'CUDA_PATH': [''],
})
return guesses
|
{"golden_diff": "diff --git a/easybuild/easyblocks/c/cuda.py b/easybuild/easyblocks/c/cuda.py\n--- a/easybuild/easyblocks/c/cuda.py\n+++ b/easybuild/easyblocks/c/cuda.py\n@@ -173,6 +173,15 @@\n \n super(EB_CUDA, self).sanity_check_step(custom_paths=custom_paths)\n \n+ def make_module_extra(self):\n+ \"\"\"Set the install directory as CUDA_HOME, CUDA_ROOT, CUDA_PATH.\"\"\"\n+ txt = super(EB_CUDA, self).make_module_extra()\n+ txt += self.module_generator.set_environment('CUDA_HOME', self.installdir)\n+ txt += self.module_generator.set_environment('CUDA_ROOT', self.installdir)\n+ txt += self.module_generator.set_environment('CUDA_PATH', self.installdir)\n+ self.log.debug(\"make_module_extra added this: %s\", txt)\n+ return txt\n+\n def make_module_req_guess(self):\n \"\"\"Specify CUDA custom values for PATH etc.\"\"\"\n \n@@ -198,9 +207,6 @@\n 'LD_LIBRARY_PATH': lib_path,\n 'LIBRARY_PATH': ['lib64', os.path.join('lib64', 'stubs')],\n 'CPATH': inc_path,\n- 'CUDA_HOME': [''],\n- 'CUDA_ROOT': [''],\n- 'CUDA_PATH': [''],\n })\n \n return guesses\n", "issue": "CUDA_{ROOT,HOME,PATH} are PATH-like?\nSee: https://github.com/easybuilders/easybuild-easyblocks/blob/e3a4cd70357103e1fa463751df13378f217c7ad1/easybuild/easyblocks/c/cuda.py#L191-L193\r\n\r\nThis creates `prepend_path` module commands, instead of `setenv`! Is there a particular reason to do so?\n", "before_files": [{"content": "##\n# This file is an EasyBuild reciPY as per https://github.com/easybuilders/easybuild\n#\n# Copyright:: Copyright 2012-2019 Cyprus Institute / CaSToRC, Uni.Lu, NTUA, Ghent University, Forschungszentrum Juelich GmbH\n# Authors:: George Tsouloupas <[email protected]>, Fotis Georgatos <[email protected]>, Kenneth Hoste, Damian Alvarez\n# License:: MIT/GPL\n# $Id$\n#\n# This work implements a part of the HPCBIOS project and is a component of the policy:\n# http://hpcbios.readthedocs.org/en/latest/HPCBIOS_2012-99.html\n##\n\"\"\"\nEasyBuild support for CUDA, implemented as an easyblock\n\nRef: https://speakerdeck.com/ajdecon/introduction-to-the-cuda-toolkit-for-building-applications\n\n@author: George Tsouloupas (Cyprus Institute)\n@author: Fotis Georgatos (Uni.lu)\n@author: Kenneth Hoste (Ghent University)\n@author: Damian Alvarez (Forschungszentrum Juelich)\n@author: Ward Poelmans (Free University of Brussels)\n\"\"\"\nimport os\nimport re\nimport stat\n\nfrom distutils.version import LooseVersion\n\nfrom easybuild.easyblocks.generic.binary import Binary\nfrom easybuild.framework.easyconfig import CUSTOM\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.filetools import adjust_permissions, patch_perl_script_autoflush, read_file, write_file\nfrom easybuild.tools.run import run_cmd, run_cmd_qa\nfrom easybuild.tools.systemtools import get_shared_lib_ext\n\n# Wrapper script definition\nWRAPPER_TEMPLATE = \"\"\"#!/bin/sh\necho \"$@\" | grep -e '-ccbin' -e '--compiler-bindir' > /dev/null\nif [ $? -eq 0 ];\nthen\n echo \"ERROR: do not set -ccbin or --compiler-bindir when using the `basename $0` wrapper\"\nelse\n nvcc -ccbin=%s \"$@\"\n exit $?\nfi \"\"\"\n\nclass EB_CUDA(Binary):\n \"\"\"\n Support for installing CUDA.\n \"\"\"\n\n @staticmethod\n def extra_options():\n \"\"\"Create a set of wrappers based on a list determined by the easyconfig file\"\"\"\n extra_vars = {\n 'host_compilers': [None, \"Host compilers for which a wrapper will be generated\", CUSTOM]\n }\n return Binary.extra_options(extra_vars)\n\n def extract_step(self):\n \"\"\"Extract installer to have more control, e.g. 
options, patching Perl scripts, etc.\"\"\"\n execpath = self.src[0]['path']\n run_cmd(\"/bin/sh \" + execpath + \" --noexec --nox11 --target \" + self.builddir)\n self.src[0]['finalpath'] = self.builddir\n\n def install_step(self):\n \"\"\"Install CUDA using Perl install script.\"\"\"\n\n # define how to run the installer\n # script has /usr/bin/perl hardcoded, but we want to have control over which perl is being used\n if LooseVersion(self.version) <= LooseVersion(\"5\"):\n install_interpreter = \"perl\"\n install_script = \"install-linux.pl\"\n self.cfg.update('installopts', '--prefix=%s' % self.installdir)\n elif LooseVersion(self.version) > LooseVersion(\"5\") and LooseVersion(self.version) < LooseVersion(\"10.1\"):\n install_interpreter = \"perl\"\n install_script = \"cuda-installer.pl\"\n # note: also including samples (via \"-samplespath=%(installdir)s -samples\") would require libglut\n self.cfg.update('installopts', \"-verbose -silent -toolkitpath=%s -toolkit\" % self.installdir)\n else:\n install_interpreter = \"\"\n install_script = \"./cuda-installer\"\n # note: also including samples (via \"-samplespath=%(installdir)s -samples\") would require libglut\n self.cfg.update('installopts', \"--silent --toolkit --toolkitpath=%s --defaultroot=%s\" % (\n self.installdir, self.installdir))\n\n cmd = \"%(preinstallopts)s %(interpreter)s %(script)s %(installopts)s\" % {\n 'preinstallopts': self.cfg['preinstallopts'],\n 'interpreter': install_interpreter,\n 'script': install_script,\n 'installopts': self.cfg['installopts']\n }\n\n # prepare for running install script autonomously\n qanda = {}\n stdqa = {\n # this question is only asked if CUDA tools are already available system-wide\n r\"Would you like to remove all CUDA files under .*? (yes/no/abort): \": \"no\",\n }\n noqanda = [\n r\"^Configuring\",\n r\"Installation Complete\",\n r\"Verifying archive integrity.*\",\n r\"^Uncompressing NVIDIA CUDA\",\n r\".* -> .*\",\n ]\n\n # patch install script to handle Q&A autonomously\n if install_interpreter == \"perl\":\n patch_perl_script_autoflush(os.path.join(self.builddir, install_script))\n\n # make sure $DISPLAY is not defined, which may lead to (weird) problems\n # this is workaround for not being able to specify --nox11 to the Perl install scripts\n if 'DISPLAY' in os.environ:\n os.environ.pop('DISPLAY')\n\n # overriding maxhits default value to 300 (300s wait for nothing to change in the output without seeing a known\n # question)\n run_cmd_qa(cmd, qanda, std_qa=stdqa, no_qa=noqanda, log_all=True, simple=True, maxhits=300)\n\n # check if there are patches to apply\n if len(self.src) > 1:\n for patch in self.src[1:]:\n self.log.debug(\"Running patch %s\", patch['name'])\n run_cmd(\"/bin/sh \" + patch['path'] + \" --accept-eula --silent --installdir=\" + self.installdir)\n\n def post_install_step(self):\n \"\"\"Create wrappers for the specified host compilers and generate the appropriate stub symlinks\"\"\"\n def create_wrapper(wrapper_name, wrapper_comp):\n \"\"\"Create for a particular compiler, with a particular name\"\"\"\n wrapper_f = os.path.join(self.installdir, 'bin', wrapper_name)\n write_file(wrapper_f, WRAPPER_TEMPLATE % wrapper_comp)\n adjust_permissions(wrapper_f, stat.S_IXUSR|stat.S_IRUSR|stat.S_IXGRP|stat.S_IRGRP|stat.S_IXOTH|stat.S_IROTH)\n\n # Prepare wrappers to handle a default host compiler other than g++\n for comp in (self.cfg['host_compilers'] or []):\n create_wrapper('nvcc_%s' % comp, comp)\n\n # Run ldconfig to create missing symlinks in the stubs directory 
(libcuda.so.1, etc)\n run_cmd(\"ldconfig -N \" + os.path.join(self.installdir, 'lib64', 'stubs'))\n\n super(EB_CUDA, self).post_install_step()\n\n def sanity_check_step(self):\n \"\"\"Custom sanity check for CUDA.\"\"\"\n\n if LooseVersion(self.version) > LooseVersion(\"9\"):\n versionfile = read_file(os.path.join(self.installdir, \"version.txt\"))\n if not re.search(\"Version %s$\" % self.version, versionfile):\n raise EasyBuildError(\"Unable to find the correct version (%s) in the version.txt file\", self.version)\n\n shlib_ext = get_shared_lib_ext()\n\n chk_libdir = [\"lib64\"]\n\n # Versions higher than 6 do not provide 32 bit libraries\n if LooseVersion(self.version) < LooseVersion(\"6\"):\n chk_libdir += [\"lib\"]\n\n culibs = [\"cublas\", \"cudart\", \"cufft\", \"curand\", \"cusparse\"]\n custom_paths = {\n 'files': [os.path.join(\"bin\", x) for x in [\"fatbinary\", \"nvcc\", \"nvlink\", \"ptxas\"]] +\n [os.path.join(\"%s\", \"lib%s.%s\") % (x, y, shlib_ext) for x in chk_libdir for y in culibs],\n 'dirs': [\"include\"],\n }\n\n if LooseVersion(self.version) < LooseVersion('7'):\n custom_paths['files'].append(os.path.join('open64', 'bin', 'nvopencc'))\n if LooseVersion(self.version) >= LooseVersion('7'):\n custom_paths['files'].append(os.path.join(\"extras\", \"CUPTI\", \"lib64\", \"libcupti.%s\") % shlib_ext)\n custom_paths['dirs'].append(os.path.join(\"extras\", \"CUPTI\", \"include\"))\n\n\n super(EB_CUDA, self).sanity_check_step(custom_paths=custom_paths)\n\n def make_module_req_guess(self):\n \"\"\"Specify CUDA custom values for PATH etc.\"\"\"\n\n guesses = super(EB_CUDA, self).make_module_req_guess()\n\n # The dirs should be in the order ['open64/bin', 'bin']\n bin_path = []\n if LooseVersion(self.version) < LooseVersion('7'):\n bin_path.append(os.path.join('open64', 'bin'))\n bin_path.append('bin')\n\n lib_path = ['lib64']\n inc_path = ['include']\n if LooseVersion(self.version) >= LooseVersion('7'):\n lib_path.append(os.path.join('extras', 'CUPTI', 'lib64'))\n inc_path.append(os.path.join('extras', 'CUPTI', 'include'))\n bin_path.append(os.path.join('nvvm', 'bin'))\n lib_path.append(os.path.join('nvvm', 'lib64'))\n inc_path.append(os.path.join('nvvm', 'include'))\n\n guesses.update({\n 'PATH': bin_path,\n 'LD_LIBRARY_PATH': lib_path,\n 'LIBRARY_PATH': ['lib64', os.path.join('lib64', 'stubs')],\n 'CPATH': inc_path,\n 'CUDA_HOME': [''],\n 'CUDA_ROOT': [''],\n 'CUDA_PATH': [''],\n })\n\n return guesses\n", "path": "easybuild/easyblocks/c/cuda.py"}], "after_files": [{"content": "##\n# This file is an EasyBuild reciPY as per https://github.com/easybuilders/easybuild\n#\n# Copyright:: Copyright 2012-2019 Cyprus Institute / CaSToRC, Uni.Lu, NTUA, Ghent University, Forschungszentrum Juelich GmbH\n# Authors:: George Tsouloupas <[email protected]>, Fotis Georgatos <[email protected]>, Kenneth Hoste, Damian Alvarez\n# License:: MIT/GPL\n# $Id$\n#\n# This work implements a part of the HPCBIOS project and is a component of the policy:\n# http://hpcbios.readthedocs.org/en/latest/HPCBIOS_2012-99.html\n##\n\"\"\"\nEasyBuild support for CUDA, implemented as an easyblock\n\nRef: https://speakerdeck.com/ajdecon/introduction-to-the-cuda-toolkit-for-building-applications\n\n@author: George Tsouloupas (Cyprus Institute)\n@author: Fotis Georgatos (Uni.lu)\n@author: Kenneth Hoste (Ghent University)\n@author: Damian Alvarez (Forschungszentrum Juelich)\n@author: Ward Poelmans (Free University of Brussels)\n\"\"\"\nimport os\nimport re\nimport stat\n\nfrom distutils.version import LooseVersion\n\nfrom 
easybuild.easyblocks.generic.binary import Binary\nfrom easybuild.framework.easyconfig import CUSTOM\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.filetools import adjust_permissions, patch_perl_script_autoflush, read_file, write_file\nfrom easybuild.tools.run import run_cmd, run_cmd_qa\nfrom easybuild.tools.systemtools import get_shared_lib_ext\n\n# Wrapper script definition\nWRAPPER_TEMPLATE = \"\"\"#!/bin/sh\necho \"$@\" | grep -e '-ccbin' -e '--compiler-bindir' > /dev/null\nif [ $? -eq 0 ];\nthen\n echo \"ERROR: do not set -ccbin or --compiler-bindir when using the `basename $0` wrapper\"\nelse\n nvcc -ccbin=%s \"$@\"\n exit $?\nfi \"\"\"\n\nclass EB_CUDA(Binary):\n \"\"\"\n Support for installing CUDA.\n \"\"\"\n\n @staticmethod\n def extra_options():\n \"\"\"Create a set of wrappers based on a list determined by the easyconfig file\"\"\"\n extra_vars = {\n 'host_compilers': [None, \"Host compilers for which a wrapper will be generated\", CUSTOM]\n }\n return Binary.extra_options(extra_vars)\n\n def extract_step(self):\n \"\"\"Extract installer to have more control, e.g. options, patching Perl scripts, etc.\"\"\"\n execpath = self.src[0]['path']\n run_cmd(\"/bin/sh \" + execpath + \" --noexec --nox11 --target \" + self.builddir)\n self.src[0]['finalpath'] = self.builddir\n\n def install_step(self):\n \"\"\"Install CUDA using Perl install script.\"\"\"\n\n # define how to run the installer\n # script has /usr/bin/perl hardcoded, but we want to have control over which perl is being used\n if LooseVersion(self.version) <= LooseVersion(\"5\"):\n install_interpreter = \"perl\"\n install_script = \"install-linux.pl\"\n self.cfg.update('installopts', '--prefix=%s' % self.installdir)\n elif LooseVersion(self.version) > LooseVersion(\"5\") and LooseVersion(self.version) < LooseVersion(\"10.1\"):\n install_interpreter = \"perl\"\n install_script = \"cuda-installer.pl\"\n # note: also including samples (via \"-samplespath=%(installdir)s -samples\") would require libglut\n self.cfg.update('installopts', \"-verbose -silent -toolkitpath=%s -toolkit\" % self.installdir)\n else:\n install_interpreter = \"\"\n install_script = \"./cuda-installer\"\n # note: also including samples (via \"-samplespath=%(installdir)s -samples\") would require libglut\n self.cfg.update('installopts', \"--silent --toolkit --toolkitpath=%s --defaultroot=%s\" % (\n self.installdir, self.installdir))\n\n cmd = \"%(preinstallopts)s %(interpreter)s %(script)s %(installopts)s\" % {\n 'preinstallopts': self.cfg['preinstallopts'],\n 'interpreter': install_interpreter,\n 'script': install_script,\n 'installopts': self.cfg['installopts']\n }\n\n # prepare for running install script autonomously\n qanda = {}\n stdqa = {\n # this question is only asked if CUDA tools are already available system-wide\n r\"Would you like to remove all CUDA files under .*? 
(yes/no/abort): \": \"no\",\n }\n noqanda = [\n r\"^Configuring\",\n r\"Installation Complete\",\n r\"Verifying archive integrity.*\",\n r\"^Uncompressing NVIDIA CUDA\",\n r\".* -> .*\",\n ]\n\n # patch install script to handle Q&A autonomously\n if install_interpreter == \"perl\":\n patch_perl_script_autoflush(os.path.join(self.builddir, install_script))\n\n # make sure $DISPLAY is not defined, which may lead to (weird) problems\n # this is workaround for not being able to specify --nox11 to the Perl install scripts\n if 'DISPLAY' in os.environ:\n os.environ.pop('DISPLAY')\n\n # overriding maxhits default value to 300 (300s wait for nothing to change in the output without seeing a known\n # question)\n run_cmd_qa(cmd, qanda, std_qa=stdqa, no_qa=noqanda, log_all=True, simple=True, maxhits=300)\n\n # check if there are patches to apply\n if len(self.src) > 1:\n for patch in self.src[1:]:\n self.log.debug(\"Running patch %s\", patch['name'])\n run_cmd(\"/bin/sh \" + patch['path'] + \" --accept-eula --silent --installdir=\" + self.installdir)\n\n def post_install_step(self):\n \"\"\"Create wrappers for the specified host compilers and generate the appropriate stub symlinks\"\"\"\n def create_wrapper(wrapper_name, wrapper_comp):\n \"\"\"Create for a particular compiler, with a particular name\"\"\"\n wrapper_f = os.path.join(self.installdir, 'bin', wrapper_name)\n write_file(wrapper_f, WRAPPER_TEMPLATE % wrapper_comp)\n adjust_permissions(wrapper_f, stat.S_IXUSR|stat.S_IRUSR|stat.S_IXGRP|stat.S_IRGRP|stat.S_IXOTH|stat.S_IROTH)\n\n # Prepare wrappers to handle a default host compiler other than g++\n for comp in (self.cfg['host_compilers'] or []):\n create_wrapper('nvcc_%s' % comp, comp)\n\n # Run ldconfig to create missing symlinks in the stubs directory (libcuda.so.1, etc)\n run_cmd(\"ldconfig -N \" + os.path.join(self.installdir, 'lib64', 'stubs'))\n\n super(EB_CUDA, self).post_install_step()\n\n def sanity_check_step(self):\n \"\"\"Custom sanity check for CUDA.\"\"\"\n\n if LooseVersion(self.version) > LooseVersion(\"9\"):\n versionfile = read_file(os.path.join(self.installdir, \"version.txt\"))\n if not re.search(\"Version %s$\" % self.version, versionfile):\n raise EasyBuildError(\"Unable to find the correct version (%s) in the version.txt file\", self.version)\n\n shlib_ext = get_shared_lib_ext()\n\n chk_libdir = [\"lib64\"]\n\n # Versions higher than 6 do not provide 32 bit libraries\n if LooseVersion(self.version) < LooseVersion(\"6\"):\n chk_libdir += [\"lib\"]\n\n culibs = [\"cublas\", \"cudart\", \"cufft\", \"curand\", \"cusparse\"]\n custom_paths = {\n 'files': [os.path.join(\"bin\", x) for x in [\"fatbinary\", \"nvcc\", \"nvlink\", \"ptxas\"]] +\n [os.path.join(\"%s\", \"lib%s.%s\") % (x, y, shlib_ext) for x in chk_libdir for y in culibs],\n 'dirs': [\"include\"],\n }\n\n if LooseVersion(self.version) < LooseVersion('7'):\n custom_paths['files'].append(os.path.join('open64', 'bin', 'nvopencc'))\n if LooseVersion(self.version) >= LooseVersion('7'):\n custom_paths['files'].append(os.path.join(\"extras\", \"CUPTI\", \"lib64\", \"libcupti.%s\") % shlib_ext)\n custom_paths['dirs'].append(os.path.join(\"extras\", \"CUPTI\", \"include\"))\n\n\n super(EB_CUDA, self).sanity_check_step(custom_paths=custom_paths)\n\n def make_module_extra(self):\n \"\"\"Set the install directory as CUDA_HOME, CUDA_ROOT, CUDA_PATH.\"\"\"\n txt = super(EB_CUDA, self).make_module_extra()\n txt += self.module_generator.set_environment('CUDA_HOME', self.installdir)\n txt += 
self.module_generator.set_environment('CUDA_ROOT', self.installdir)\n txt += self.module_generator.set_environment('CUDA_PATH', self.installdir)\n self.log.debug(\"make_module_extra added this: %s\", txt)\n return txt\n\n def make_module_req_guess(self):\n \"\"\"Specify CUDA custom values for PATH etc.\"\"\"\n\n guesses = super(EB_CUDA, self).make_module_req_guess()\n\n # The dirs should be in the order ['open64/bin', 'bin']\n bin_path = []\n if LooseVersion(self.version) < LooseVersion('7'):\n bin_path.append(os.path.join('open64', 'bin'))\n bin_path.append('bin')\n\n lib_path = ['lib64']\n inc_path = ['include']\n if LooseVersion(self.version) >= LooseVersion('7'):\n lib_path.append(os.path.join('extras', 'CUPTI', 'lib64'))\n inc_path.append(os.path.join('extras', 'CUPTI', 'include'))\n bin_path.append(os.path.join('nvvm', 'bin'))\n lib_path.append(os.path.join('nvvm', 'lib64'))\n inc_path.append(os.path.join('nvvm', 'include'))\n\n guesses.update({\n 'PATH': bin_path,\n 'LD_LIBRARY_PATH': lib_path,\n 'LIBRARY_PATH': ['lib64', os.path.join('lib64', 'stubs')],\n 'CPATH': inc_path,\n })\n\n return guesses\n", "path": "easybuild/easyblocks/c/cuda.py"}]}
| 3,140 | 311 |
gh_patches_debug_5571
|
rasdani/github-patches
|
git_diff
|
certbot__certbot-6099
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
certbot-nginx requires acme >= 0.25
Because of the import of symbols `from acme.magic_typing`, the nginx plugin released in 0.25 depends on acme 0.25 or better. However, setup.py only lists `acme>0.21.1`, leading to a failure to build from source (and potential run-time failures).
--- END ISSUE ---
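For context (an editorial illustration, not part of the quoted issue): the mismatch only surfaces when an older acme happens to be installed, so a quick standalone check of an environment is useful. A minimal sketch, assuming setuptools' `pkg_resources` is available:

```python
import pkg_resources

try:
    # the minimum certbot-nginx actually needs, mirroring the fix below
    pkg_resources.require("acme>=0.25.0")
    # acme.magic_typing is the module whose import pulls in that requirement
    import acme.magic_typing  # noqa: F401
except (pkg_resources.DistributionNotFound, pkg_resources.VersionConflict, ImportError) as exc:
    print("this environment cannot support certbot-nginx 0.25+:", exc)
```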
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `certbot-nginx/setup.py`
Content:
```
1 from setuptools import setup
2 from setuptools import find_packages
3
4
5 version = '0.26.0.dev0'
6
7 # Remember to update local-oldest-requirements.txt when changing the minimum
8 # acme/certbot version.
9 install_requires = [
10 # This plugin works with an older version of acme, but Certbot does not.
11 # 0.22.0 is specified here to work around
12 # https://github.com/pypa/pip/issues/988.
13 'acme>0.21.1',
14 'certbot>0.21.1',
15 'mock',
16 'PyOpenSSL',
17 'pyparsing>=1.5.5', # Python3 support; perhaps unnecessary?
18 'setuptools',
19 'zope.interface',
20 ]
21
22 docs_extras = [
23 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags
24 'sphinx_rtd_theme',
25 ]
26
27 setup(
28 name='certbot-nginx',
29 version=version,
30 description="Nginx plugin for Certbot",
31 url='https://github.com/letsencrypt/letsencrypt',
32 author="Certbot Project",
33 author_email='[email protected]',
34 license='Apache License 2.0',
35 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
36 classifiers=[
37 'Development Status :: 3 - Alpha',
38 'Environment :: Plugins',
39 'Intended Audience :: System Administrators',
40 'License :: OSI Approved :: Apache Software License',
41 'Operating System :: POSIX :: Linux',
42 'Programming Language :: Python',
43 'Programming Language :: Python :: 2',
44 'Programming Language :: Python :: 2.7',
45 'Programming Language :: Python :: 3',
46 'Programming Language :: Python :: 3.4',
47 'Programming Language :: Python :: 3.5',
48 'Programming Language :: Python :: 3.6',
49 'Topic :: Internet :: WWW/HTTP',
50 'Topic :: Security',
51 'Topic :: System :: Installation/Setup',
52 'Topic :: System :: Networking',
53 'Topic :: System :: Systems Administration',
54 'Topic :: Utilities',
55 ],
56
57 packages=find_packages(),
58 include_package_data=True,
59 install_requires=install_requires,
60 extras_require={
61 'docs': docs_extras,
62 },
63 entry_points={
64 'certbot.plugins': [
65 'nginx = certbot_nginx.configurator:NginxConfigurator',
66 ],
67 },
68 test_suite='certbot_nginx',
69 )
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/certbot-nginx/setup.py b/certbot-nginx/setup.py
--- a/certbot-nginx/setup.py
+++ b/certbot-nginx/setup.py
@@ -7,10 +7,7 @@
# Remember to update local-oldest-requirements.txt when changing the minimum
# acme/certbot version.
install_requires = [
- # This plugin works with an older version of acme, but Certbot does not.
- # 0.22.0 is specified here to work around
- # https://github.com/pypa/pip/issues/988.
- 'acme>0.21.1',
+ 'acme>=0.25.0',
'certbot>0.21.1',
'mock',
'PyOpenSSL',
|
{"golden_diff": "diff --git a/certbot-nginx/setup.py b/certbot-nginx/setup.py\n--- a/certbot-nginx/setup.py\n+++ b/certbot-nginx/setup.py\n@@ -7,10 +7,7 @@\n # Remember to update local-oldest-requirements.txt when changing the minimum\n # acme/certbot version.\n install_requires = [\n- # This plugin works with an older version of acme, but Certbot does not.\n- # 0.22.0 is specified here to work around\n- # https://github.com/pypa/pip/issues/988.\n- 'acme>0.21.1',\n+ 'acme>=0.25.0',\n 'certbot>0.21.1',\n 'mock',\n 'PyOpenSSL',\n", "issue": "certbot-nginx requires acme >= 0.25\nBecause of the import of symbols `from acme.magic_typing`, the nginx plugin released in 0.25 depends on acme 0.25 or better. However, setup.py only lists `acme>0.21.1`, leading to a failure to build from source (and potential run-time failures).\n", "before_files": [{"content": "from setuptools import setup\nfrom setuptools import find_packages\n\n\nversion = '0.26.0.dev0'\n\n# Remember to update local-oldest-requirements.txt when changing the minimum\n# acme/certbot version.\ninstall_requires = [\n # This plugin works with an older version of acme, but Certbot does not.\n # 0.22.0 is specified here to work around\n # https://github.com/pypa/pip/issues/988.\n 'acme>0.21.1',\n 'certbot>0.21.1',\n 'mock',\n 'PyOpenSSL',\n 'pyparsing>=1.5.5', # Python3 support; perhaps unnecessary?\n 'setuptools',\n 'zope.interface',\n]\n\ndocs_extras = [\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n]\n\nsetup(\n name='certbot-nginx',\n version=version,\n description=\"Nginx plugin for Certbot\",\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Certbot Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Environment :: Plugins',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n 'Topic :: System :: Installation/Setup',\n 'Topic :: System :: Networking',\n 'Topic :: System :: Systems Administration',\n 'Topic :: Utilities',\n ],\n\n packages=find_packages(),\n include_package_data=True,\n install_requires=install_requires,\n extras_require={\n 'docs': docs_extras,\n },\n entry_points={\n 'certbot.plugins': [\n 'nginx = certbot_nginx.configurator:NginxConfigurator',\n ],\n },\n test_suite='certbot_nginx',\n)\n", "path": "certbot-nginx/setup.py"}], "after_files": [{"content": "from setuptools import setup\nfrom setuptools import find_packages\n\n\nversion = '0.26.0.dev0'\n\n# Remember to update local-oldest-requirements.txt when changing the minimum\n# acme/certbot version.\ninstall_requires = [\n 'acme>=0.25.0',\n 'certbot>0.21.1',\n 'mock',\n 'PyOpenSSL',\n 'pyparsing>=1.5.5', # Python3 support; perhaps unnecessary?\n 'setuptools',\n 'zope.interface',\n]\n\ndocs_extras = [\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n]\n\nsetup(\n name='certbot-nginx',\n version=version,\n description=\"Nginx plugin for Certbot\",\n 
url='https://github.com/letsencrypt/letsencrypt',\n author=\"Certbot Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Environment :: Plugins',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n 'Topic :: System :: Installation/Setup',\n 'Topic :: System :: Networking',\n 'Topic :: System :: Systems Administration',\n 'Topic :: Utilities',\n ],\n\n packages=find_packages(),\n include_package_data=True,\n install_requires=install_requires,\n extras_require={\n 'docs': docs_extras,\n },\n entry_points={\n 'certbot.plugins': [\n 'nginx = certbot_nginx.configurator:NginxConfigurator',\n ],\n },\n test_suite='certbot_nginx',\n)\n", "path": "certbot-nginx/setup.py"}]}
| 1,038 | 182 |
gh_patches_debug_20003
|
rasdani/github-patches
|
git_diff
|
searx__searx-925
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add a software category and add the Free software directory search engine
Shame on me I forgot to ask this.
I am a volunteer on the [FSD](https://directory.fsf.org/wiki/Main_Page) (Free software directory)
It would be nice if people could look for free/libre software in the searx engine.
When possible could someone please add the free software directory so that people can easily find free software.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/mediawiki.py`
Content:
```
1 """
2 general mediawiki-engine (Web)
3
4 @website websites built on mediawiki (https://www.mediawiki.org)
5 @provide-api yes (http://www.mediawiki.org/wiki/API:Search)
6
7 @using-api yes
8 @results JSON
9 @stable yes
10 @parse url, title
11
12 @todo content
13 """
14
15 from json import loads
16 from string import Formatter
17 from searx.url_utils import urlencode, quote
18
19 # engine dependent config
20 categories = ['general']
21 language_support = True
22 paging = True
23 number_of_results = 1
24
25 # search-url
26 base_url = 'https://{language}.wikipedia.org/'
27 search_postfix = 'w/api.php?action=query'\
28 '&list=search'\
29 '&{query}'\
30 '&format=json'\
31 '&sroffset={offset}'\
32 '&srlimit={limit}'\
33 '&srwhat=nearmatch' # search for a near match in the title
34
35
36 # do search-request
37 def request(query, params):
38 offset = (params['pageno'] - 1) * number_of_results
39
40 string_args = dict(query=urlencode({'srsearch': query}),
41 offset=offset,
42 limit=number_of_results)
43
44 format_strings = list(Formatter().parse(base_url))
45
46 if params['language'] == 'all':
47 language = 'en'
48 else:
49 language = params['language'].split('-')[0]
50
51 # format_string [('https://', 'language', '', None), ('.wikipedia.org/', None, None, None)]
52 if any(x[1] == 'language' for x in format_strings):
53 string_args['language'] = language
54
55 # write search-language back to params, required in response
56 params['language'] = language
57
58 search_url = base_url + search_postfix
59
60 params['url'] = search_url.format(**string_args)
61
62 return params
63
64
65 # get response from search-request
66 def response(resp):
67 results = []
68
69 search_results = loads(resp.text)
70
71 # return empty array if there are no results
72 if not search_results.get('query', {}).get('search'):
73 return []
74
75 # parse results
76 for result in search_results['query']['search']:
77 if result.get('snippet', '').startswith('#REDIRECT'):
78 continue
79 url = base_url.format(language=resp.search_params['language']) +\
80 'wiki/' + quote(result['title'].replace(' ', '_').encode('utf-8'))
81
82 # append result
83 results.append({'url': url,
84 'title': result['title'],
85 'content': ''})
86
87 # return results
88 return results
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/searx/engines/mediawiki.py b/searx/engines/mediawiki.py
--- a/searx/engines/mediawiki.py
+++ b/searx/engines/mediawiki.py
@@ -21,6 +21,7 @@
language_support = True
paging = True
number_of_results = 1
+search_type = 'nearmatch' # possible values: title, text, nearmatch
# search-url
base_url = 'https://{language}.wikipedia.org/'
@@ -30,7 +31,7 @@
'&format=json'\
'&sroffset={offset}'\
'&srlimit={limit}'\
- '&srwhat=nearmatch' # search for a near match in the title
+ '&srwhat={searchtype}'
# do search-request
@@ -39,7 +40,8 @@
string_args = dict(query=urlencode({'srsearch': query}),
offset=offset,
- limit=number_of_results)
+ limit=number_of_results,
+ searchtype=search_type)
format_strings = list(Formatter().parse(base_url))
|
{"golden_diff": "diff --git a/searx/engines/mediawiki.py b/searx/engines/mediawiki.py\n--- a/searx/engines/mediawiki.py\n+++ b/searx/engines/mediawiki.py\n@@ -21,6 +21,7 @@\n language_support = True\n paging = True\n number_of_results = 1\n+search_type = 'nearmatch' # possible values: title, text, nearmatch\n \n # search-url\n base_url = 'https://{language}.wikipedia.org/'\n@@ -30,7 +31,7 @@\n '&format=json'\\\n '&sroffset={offset}'\\\n '&srlimit={limit}'\\\n- '&srwhat=nearmatch' # search for a near match in the title\n+ '&srwhat={searchtype}'\n \n \n # do search-request\n@@ -39,7 +40,8 @@\n \n string_args = dict(query=urlencode({'srsearch': query}),\n offset=offset,\n- limit=number_of_results)\n+ limit=number_of_results,\n+ searchtype=search_type)\n \n format_strings = list(Formatter().parse(base_url))\n", "issue": "Add a software categorie and add the Free software directory search engine\nShame on me I forgot to ask this.\r\nI am a volunteer on the [FSD](https://directory.fsf.org/wiki/Main_Page) (Free software directory)\r\nIt would be nice if people could look for free/libre software in the searx engine.\r\nWhen possible could someone please add the free software directory so that people can easily find free software.\n", "before_files": [{"content": "\"\"\"\n general mediawiki-engine (Web)\n\n @website websites built on mediawiki (https://www.mediawiki.org)\n @provide-api yes (http://www.mediawiki.org/wiki/API:Search)\n\n @using-api yes\n @results JSON\n @stable yes\n @parse url, title\n\n @todo content\n\"\"\"\n\nfrom json import loads\nfrom string import Formatter\nfrom searx.url_utils import urlencode, quote\n\n# engine dependent config\ncategories = ['general']\nlanguage_support = True\npaging = True\nnumber_of_results = 1\n\n# search-url\nbase_url = 'https://{language}.wikipedia.org/'\nsearch_postfix = 'w/api.php?action=query'\\\n '&list=search'\\\n '&{query}'\\\n '&format=json'\\\n '&sroffset={offset}'\\\n '&srlimit={limit}'\\\n '&srwhat=nearmatch' # search for a near match in the title\n\n\n# do search-request\ndef request(query, params):\n offset = (params['pageno'] - 1) * number_of_results\n\n string_args = dict(query=urlencode({'srsearch': query}),\n offset=offset,\n limit=number_of_results)\n\n format_strings = list(Formatter().parse(base_url))\n\n if params['language'] == 'all':\n language = 'en'\n else:\n language = params['language'].split('-')[0]\n\n # format_string [('https://', 'language', '', None), ('.wikipedia.org/', None, None, None)]\n if any(x[1] == 'language' for x in format_strings):\n string_args['language'] = language\n\n # write search-language back to params, required in response\n params['language'] = language\n\n search_url = base_url + search_postfix\n\n params['url'] = search_url.format(**string_args)\n\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n search_results = loads(resp.text)\n\n # return empty array if there are no results\n if not search_results.get('query', {}).get('search'):\n return []\n\n # parse results\n for result in search_results['query']['search']:\n if result.get('snippet', '').startswith('#REDIRECT'):\n continue\n url = base_url.format(language=resp.search_params['language']) +\\\n 'wiki/' + quote(result['title'].replace(' ', '_').encode('utf-8'))\n\n # append result\n results.append({'url': url,\n 'title': result['title'],\n 'content': ''})\n\n # return results\n return results\n", "path": "searx/engines/mediawiki.py"}], "after_files": [{"content": "\"\"\"\n general mediawiki-engine 
(Web)\n\n @website websites built on mediawiki (https://www.mediawiki.org)\n @provide-api yes (http://www.mediawiki.org/wiki/API:Search)\n\n @using-api yes\n @results JSON\n @stable yes\n @parse url, title\n\n @todo content\n\"\"\"\n\nfrom json import loads\nfrom string import Formatter\nfrom searx.url_utils import urlencode, quote\n\n# engine dependent config\ncategories = ['general']\nlanguage_support = True\npaging = True\nnumber_of_results = 1\nsearch_type = 'nearmatch' # possible values: title, text, nearmatch\n\n# search-url\nbase_url = 'https://{language}.wikipedia.org/'\nsearch_postfix = 'w/api.php?action=query'\\\n '&list=search'\\\n '&{query}'\\\n '&format=json'\\\n '&sroffset={offset}'\\\n '&srlimit={limit}'\\\n '&srwhat={searchtype}'\n\n\n# do search-request\ndef request(query, params):\n offset = (params['pageno'] - 1) * number_of_results\n\n string_args = dict(query=urlencode({'srsearch': query}),\n offset=offset,\n limit=number_of_results,\n searchtype=search_type)\n\n format_strings = list(Formatter().parse(base_url))\n\n if params['language'] == 'all':\n language = 'en'\n else:\n language = params['language'].split('-')[0]\n\n # format_string [('https://', 'language', '', None), ('.wikipedia.org/', None, None, None)]\n if any(x[1] == 'language' for x in format_strings):\n string_args['language'] = language\n\n # write search-language back to params, required in response\n params['language'] = language\n\n search_url = base_url + search_postfix\n\n params['url'] = search_url.format(**string_args)\n\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n search_results = loads(resp.text)\n\n # return empty array if there are no results\n if not search_results.get('query', {}).get('search'):\n return []\n\n # parse results\n for result in search_results['query']['search']:\n if result.get('snippet', '').startswith('#REDIRECT'):\n continue\n url = base_url.format(language=resp.search_params['language']) +\\\n 'wiki/' + quote(result['title'].replace(' ', '_').encode('utf-8'))\n\n # append result\n results.append({'url': url,\n 'title': result['title'],\n 'content': ''})\n\n # return results\n return results\n", "path": "searx/engines/mediawiki.py"}]}
| 1,112 | 258 |
gh_patches_debug_30277
|
rasdani/github-patches
|
git_diff
|
plone__Products.CMFPlone-3719
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't edit the modal property of an action in Plone UI
Most settings of actions can be edited in the actions control panel in the Plone UI. See the classic demo:
https://6-classic.demo.plone.org/@@actions-controlpanel
Some actions have a modal property, for example the `object_buttons/delete` action. You can see it in the ZMI:
https://6-classic.demo.plone.org/portal_actions/object_buttons/delete/manage_propertiesForm
The property value is this:
```
{"actionOptions": {"disableAjaxFormSubmit":true, "redirectOnResponse":true}}
```
It would be nice if this could also be edited in the control panel. Currently this property is not shown at all.
I guess there could be other non-standard properties as well, so bonus points if this shows all properties.
My use case today was actually on Plone 5.2 where I wanted to change the modal property of `user/login`. In Plone 6.0.0 this is an empty dictionary, but on 5.2 it was this:
```
{"prependContent": ".portalMessage", "title": "Log in", "width": "26em", "actionOptions": {"redirectOnResponse": true}}
```
I had to change the width to 18em in a client project today because their base font size was a lot bigger, which led to the login modal being only half visible on mobile. :-)
With my release manager hat on: no I don't want this changed in 5.2, people will have to use the ZMI there. Depending on scale and impact of the needed changes, a fix for this could be done either in a 6.0.x bugfix release, or in 6.1.
--- END ISSUE ---
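For context (an editorial illustration, not part of the quoted issue): the `modal` property holds a JSON string of options for the front-end modal, so until the control panel exposes it the workaround is a round-trip through the ZMI properties form. A minimal sketch of that round-trip in plain Python (not Plone code):

```python
import json

# the Plone 5.2 value for user/login quoted above
modal = ('{"prependContent": ".portalMessage", "title": "Log in", "width": "26em", '
         '"actionOptions": {"redirectOnResponse": true}}')

options = json.loads(modal)
options["width"] = "18em"          # the tweak described in the issue
print(json.dumps(options))         # paste this back into the 'modal' property in the ZMI
```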
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Products/CMFPlone/controlpanel/browser/actions.py`
Content:
```
1 from plone.autoform.form import AutoExtensibleForm
2 from plone.base.interfaces import IActionSchema
3 from plone.base.interfaces import INewActionSchema
4 from Products.CMFCore.ActionInformation import Action
5 from Products.CMFCore.interfaces import IAction
6 from Products.CMFCore.interfaces import IActionCategory
7 from Products.CMFCore.utils import getToolByName
8 from Products.CMFPlone import PloneMessageFactory as _
9 from Products.Five import BrowserView
10 from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
11 from z3c.form import form
12 from zope.component import adapts
13 from zope.event import notify
14 from zope.interface import implementer
15 from zope.lifecycleevent import ObjectCreatedEvent
16
17
18 class ActionListControlPanel(BrowserView):
19 """Control panel for the portal actions."""
20
21 template = ViewPageTemplateFile("actions.pt")
22
23 def __init__(self, context, request):
24 self.context = context
25 self.request = request
26 self.portal_actions = getToolByName(self.context, 'portal_actions')
27
28 def display(self):
29 actions = []
30 for category in self.portal_actions.objectValues():
31 if category.id == 'controlpanel':
32 continue
33 if not IActionCategory.providedBy(category):
34 continue
35 cat_infos = {
36 'id': category.id,
37 'title': category.title or category.id,
38 }
39 action_list = []
40 for action in category.objectValues():
41 if IAction.providedBy(action):
42 action_list.append({
43 'id': action.id,
44 'title': action.title,
45 'url': action.absolute_url(),
46 'visible': action.visible,
47 })
48 cat_infos['actions'] = action_list
49 actions.append(cat_infos)
50
51 self.actions = actions
52 return self.template()
53
54 def __call__(self):
55 if self.request.get('delete'):
56 action_id = self.request['actionid']
57 category = self.portal_actions[self.request['category']]
58 category.manage_delObjects([action_id])
59 self.request.RESPONSE.redirect('@@actions-controlpanel')
60 if self.request.get('hide'):
61 action_id = self.request['actionid']
62 category = self.portal_actions[self.request['category']]
63 category[action_id].visible = False
64 self.request.RESPONSE.redirect('@@actions-controlpanel')
65 if self.request.get('show'):
66 action_id = self.request['actionid']
67 category = self.portal_actions[self.request['category']]
68 category[action_id].visible = True
69 self.request.RESPONSE.redirect('@@actions-controlpanel')
70 return self.display()
71
72
73 @implementer(IActionSchema)
74 class ActionControlPanelAdapter:
75 """Adapter for action form."""
76
77 adapts(IAction)
78
79 def __init__(self, context):
80 self.context = context
81 self.current_category = self.context.getParentNode()
82
83 def get_category(self):
84 return self.current_category.id
85
86 def set_category(self, value):
87 portal_actions = getToolByName(self.context, 'portal_actions')
88 new_category = portal_actions.get(value)
89 cookie = self.current_category.manage_cutObjects(ids=[self.context.id])
90 new_category.manage_pasteObjects(cookie)
91
92 category = property(get_category, set_category)
93
94 def get_title(self):
95 return self.context.title
96
97 def set_title(self, value):
98 self.context._setPropValue('title', value)
99
100 title = property(get_title, set_title)
101
102 def get_description(self):
103 return self.context.description
104
105 def set_description(self, value):
106 self.context._setPropValue('description', value)
107
108 description = property(get_description, set_description)
109
110 def get_i18n_domain(self):
111 return self.context.i18n_domain
112
113 def set_i18n_domain(self, value):
114 self.context._setPropValue('i18n_domain', value)
115
116 i18n_domain = property(get_i18n_domain, set_i18n_domain)
117
118 def get_url_expr(self):
119 return self.context.url_expr
120
121 def set_url_expr(self, value):
122 self.context._setPropValue('url_expr', value)
123
124 url_expr = property(get_url_expr, set_url_expr)
125
126 def get_available_expr(self):
127 return self.context.available_expr
128
129 def set_available_expr(self, value):
130 self.context._setPropValue('available_expr', value)
131
132 available_expr = property(get_available_expr, set_available_expr)
133
134 def get_permissions(self):
135 return self.context.permissions
136
137 def set_permissions(self, value):
138 self.context._setPropValue('permissions', value)
139
140 permissions = property(get_permissions, set_permissions)
141
142 def get_visible(self):
143 return self.context.visible
144
145 def set_visible(self, value):
146 self.context._setPropValue('visible', value)
147
148 visible = property(get_visible, set_visible)
149
150 def get_position(self):
151 position = self.current_category.objectIds().index(self.context.id)
152 return position + 1
153
154 def set_position(self, value):
155 current_position = self.current_category.objectIds().index(
156 self.context.id)
157 all_actions = list(self.current_category._objects)
158 current_action = all_actions.pop(current_position)
159 new_position = value - 1
160 all_actions = all_actions[0:new_position] + [current_action] + \
161 all_actions[new_position:]
162 self.current_category._objects = tuple(all_actions)
163
164 position = property(get_position, set_position)
165
166
167 class ActionControlPanel(AutoExtensibleForm, form.EditForm):
168 """A form to edit a portal action."""
169
170 schema = IActionSchema
171 ignoreContext = False
172 label = _('Action Settings')
173
174
175 class NewActionControlPanel(AutoExtensibleForm, form.AddForm):
176 """A form to add a new portal action."""
177
178 schema = INewActionSchema
179 ignoreContext = True
180 label = _('New action')
181
182 def createAndAdd(self, data):
183 portal_actions = getToolByName(self.context, 'portal_actions')
184 category = portal_actions.get(data['category'])
185 action_id = data['id']
186 action = Action(
187 action_id,
188 title=action_id,
189 i18n_domain='plone',
190 permissions=['View'],
191 )
192 category[action_id] = action
193 notify(ObjectCreatedEvent(action))
194
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/Products/CMFPlone/controlpanel/browser/actions.py b/Products/CMFPlone/controlpanel/browser/actions.py
--- a/Products/CMFPlone/controlpanel/browser/actions.py
+++ b/Products/CMFPlone/controlpanel/browser/actions.py
@@ -1,6 +1,7 @@
from plone.autoform.form import AutoExtensibleForm
from plone.base.interfaces import IActionSchema
from plone.base.interfaces import INewActionSchema
+from plone.base.utils import base_hasattr
from Products.CMFCore.ActionInformation import Action
from Products.CMFCore.interfaces import IAction
from Products.CMFCore.interfaces import IActionCategory
@@ -14,6 +15,8 @@
from zope.interface import implementer
from zope.lifecycleevent import ObjectCreatedEvent
+import json
+
class ActionListControlPanel(BrowserView):
"""Control panel for the portal actions."""
@@ -163,6 +166,22 @@
position = property(get_position, set_position)
+ def get_modal(self):
+ return self.context.modal
+
+ def set_modal(self, value):
+ # This property may not exist yet on the context.
+ if not self.context.hasProperty("modal"):
+ if base_hasattr(self.context, "modal"):
+ # We cannot define a property when an attribute with the same
+ # name already exists.
+ delattr(self.context, "modal")
+ self.context._setProperty('modal', value, 'string')
+ else:
+ self.context._setPropValue('modal', value)
+
+ modal = property(get_modal, set_modal)
+
class ActionControlPanel(AutoExtensibleForm, form.EditForm):
"""A form to edit a portal action."""
|
{"golden_diff": "diff --git a/Products/CMFPlone/controlpanel/browser/actions.py b/Products/CMFPlone/controlpanel/browser/actions.py\n--- a/Products/CMFPlone/controlpanel/browser/actions.py\n+++ b/Products/CMFPlone/controlpanel/browser/actions.py\n@@ -1,6 +1,7 @@\n from plone.autoform.form import AutoExtensibleForm\n from plone.base.interfaces import IActionSchema\n from plone.base.interfaces import INewActionSchema\n+from plone.base.utils import base_hasattr\n from Products.CMFCore.ActionInformation import Action\n from Products.CMFCore.interfaces import IAction\n from Products.CMFCore.interfaces import IActionCategory\n@@ -14,6 +15,8 @@\n from zope.interface import implementer\n from zope.lifecycleevent import ObjectCreatedEvent\n \n+import json\n+\n \n class ActionListControlPanel(BrowserView):\n \"\"\"Control panel for the portal actions.\"\"\"\n@@ -163,6 +166,22 @@\n \n position = property(get_position, set_position)\n \n+ def get_modal(self):\n+ return self.context.modal\n+\n+ def set_modal(self, value):\n+ # This property may not exist yet on the context.\n+ if not self.context.hasProperty(\"modal\"):\n+ if base_hasattr(self.context, \"modal\"):\n+ # We cannot define a property when an attribute with the same\n+ # name already exists.\n+ delattr(self.context, \"modal\")\n+ self.context._setProperty('modal', value, 'string')\n+ else:\n+ self.context._setPropValue('modal', value)\n+\n+ modal = property(get_modal, set_modal)\n+\n \n class ActionControlPanel(AutoExtensibleForm, form.EditForm):\n \"\"\"A form to edit a portal action.\"\"\"\n", "issue": "Can't edit the modal property of an action in Plone UI\nMost settings of actions can be edited in the actions control panel in the Plone UI. See the classic demo:\r\nhttps://6-classic.demo.plone.org/@@actions-controlpanel\r\n\r\nSome actions have a modal property, for example the `object_buttons/delete` action. You can see it in the ZMI:\r\nhttps://6-classic.demo.plone.org/portal_actions/object_buttons/delete/manage_propertiesForm\r\nThe property value is this:\r\n\r\n```\r\n{\"actionOptions\": {\"disableAjaxFormSubmit\":true, \"redirectOnResponse\":true}}\r\n```\r\n\r\nIt would be nice if this could also be edited in the control panel. Currently this property is now shown at all.\r\nI guess there could be other non-standard properties as well, so bonus points if this shows all properties.\r\n\r\nMy use case today was actually on Plone 5.2 where I wanted to change the modal property of `user/login`. In Plone 6.0.0 this is an empty dictionary, but on 5.2 it was this:\r\n\r\n```\r\n{\"prependContent\": \".portalMessage\", \"title\": \"Log in\", \"width\": \"26em\", \"actionOptions\": {\"redirectOnResponse\": true}}\r\n```\r\n\r\nI had to change the width to 18em in a client project today because their base font size was a lot bigger, which led to the login modal being only half visible on mobile. :-)\r\n\r\nWith my release manager hat on: no I don't want this changed in 5.2, people will have to use the ZMI there. 
Depending on scale and impact of the needed changes, a fix for this could be done either in a 6.0.x bugfix release, or in 6.1.\n", "before_files": [{"content": "from plone.autoform.form import AutoExtensibleForm\nfrom plone.base.interfaces import IActionSchema\nfrom plone.base.interfaces import INewActionSchema\nfrom Products.CMFCore.ActionInformation import Action\nfrom Products.CMFCore.interfaces import IAction\nfrom Products.CMFCore.interfaces import IActionCategory\nfrom Products.CMFCore.utils import getToolByName\nfrom Products.CMFPlone import PloneMessageFactory as _\nfrom Products.Five import BrowserView\nfrom Products.Five.browser.pagetemplatefile import ViewPageTemplateFile\nfrom z3c.form import form\nfrom zope.component import adapts\nfrom zope.event import notify\nfrom zope.interface import implementer\nfrom zope.lifecycleevent import ObjectCreatedEvent\n\n\nclass ActionListControlPanel(BrowserView):\n \"\"\"Control panel for the portal actions.\"\"\"\n\n template = ViewPageTemplateFile(\"actions.pt\")\n\n def __init__(self, context, request):\n self.context = context\n self.request = request\n self.portal_actions = getToolByName(self.context, 'portal_actions')\n\n def display(self):\n actions = []\n for category in self.portal_actions.objectValues():\n if category.id == 'controlpanel':\n continue\n if not IActionCategory.providedBy(category):\n continue\n cat_infos = {\n 'id': category.id,\n 'title': category.title or category.id,\n }\n action_list = []\n for action in category.objectValues():\n if IAction.providedBy(action):\n action_list.append({\n 'id': action.id,\n 'title': action.title,\n 'url': action.absolute_url(),\n 'visible': action.visible,\n })\n cat_infos['actions'] = action_list\n actions.append(cat_infos)\n\n self.actions = actions\n return self.template()\n\n def __call__(self):\n if self.request.get('delete'):\n action_id = self.request['actionid']\n category = self.portal_actions[self.request['category']]\n category.manage_delObjects([action_id])\n self.request.RESPONSE.redirect('@@actions-controlpanel')\n if self.request.get('hide'):\n action_id = self.request['actionid']\n category = self.portal_actions[self.request['category']]\n category[action_id].visible = False\n self.request.RESPONSE.redirect('@@actions-controlpanel')\n if self.request.get('show'):\n action_id = self.request['actionid']\n category = self.portal_actions[self.request['category']]\n category[action_id].visible = True\n self.request.RESPONSE.redirect('@@actions-controlpanel')\n return self.display()\n\n\n@implementer(IActionSchema)\nclass ActionControlPanelAdapter:\n \"\"\"Adapter for action form.\"\"\"\n\n adapts(IAction)\n\n def __init__(self, context):\n self.context = context\n self.current_category = self.context.getParentNode()\n\n def get_category(self):\n return self.current_category.id\n\n def set_category(self, value):\n portal_actions = getToolByName(self.context, 'portal_actions')\n new_category = portal_actions.get(value)\n cookie = self.current_category.manage_cutObjects(ids=[self.context.id])\n new_category.manage_pasteObjects(cookie)\n\n category = property(get_category, set_category)\n\n def get_title(self):\n return self.context.title\n\n def set_title(self, value):\n self.context._setPropValue('title', value)\n\n title = property(get_title, set_title)\n\n def get_description(self):\n return self.context.description\n\n def set_description(self, value):\n self.context._setPropValue('description', value)\n\n description = property(get_description, set_description)\n\n def 
get_i18n_domain(self):\n return self.context.i18n_domain\n\n def set_i18n_domain(self, value):\n self.context._setPropValue('i18n_domain', value)\n\n i18n_domain = property(get_i18n_domain, set_i18n_domain)\n\n def get_url_expr(self):\n return self.context.url_expr\n\n def set_url_expr(self, value):\n self.context._setPropValue('url_expr', value)\n\n url_expr = property(get_url_expr, set_url_expr)\n\n def get_available_expr(self):\n return self.context.available_expr\n\n def set_available_expr(self, value):\n self.context._setPropValue('available_expr', value)\n\n available_expr = property(get_available_expr, set_available_expr)\n\n def get_permissions(self):\n return self.context.permissions\n\n def set_permissions(self, value):\n self.context._setPropValue('permissions', value)\n\n permissions = property(get_permissions, set_permissions)\n\n def get_visible(self):\n return self.context.visible\n\n def set_visible(self, value):\n self.context._setPropValue('visible', value)\n\n visible = property(get_visible, set_visible)\n\n def get_position(self):\n position = self.current_category.objectIds().index(self.context.id)\n return position + 1\n\n def set_position(self, value):\n current_position = self.current_category.objectIds().index(\n self.context.id)\n all_actions = list(self.current_category._objects)\n current_action = all_actions.pop(current_position)\n new_position = value - 1\n all_actions = all_actions[0:new_position] + [current_action] + \\\n all_actions[new_position:]\n self.current_category._objects = tuple(all_actions)\n\n position = property(get_position, set_position)\n\n\nclass ActionControlPanel(AutoExtensibleForm, form.EditForm):\n \"\"\"A form to edit a portal action.\"\"\"\n\n schema = IActionSchema\n ignoreContext = False\n label = _('Action Settings')\n\n\nclass NewActionControlPanel(AutoExtensibleForm, form.AddForm):\n \"\"\"A form to add a new portal action.\"\"\"\n\n schema = INewActionSchema\n ignoreContext = True\n label = _('New action')\n\n def createAndAdd(self, data):\n portal_actions = getToolByName(self.context, 'portal_actions')\n category = portal_actions.get(data['category'])\n action_id = data['id']\n action = Action(\n action_id,\n title=action_id,\n i18n_domain='plone',\n permissions=['View'],\n )\n category[action_id] = action\n notify(ObjectCreatedEvent(action))\n", "path": "Products/CMFPlone/controlpanel/browser/actions.py"}], "after_files": [{"content": "from plone.autoform.form import AutoExtensibleForm\nfrom plone.base.interfaces import IActionSchema\nfrom plone.base.interfaces import INewActionSchema\nfrom plone.base.utils import base_hasattr\nfrom Products.CMFCore.ActionInformation import Action\nfrom Products.CMFCore.interfaces import IAction\nfrom Products.CMFCore.interfaces import IActionCategory\nfrom Products.CMFCore.utils import getToolByName\nfrom Products.CMFPlone import PloneMessageFactory as _\nfrom Products.Five import BrowserView\nfrom Products.Five.browser.pagetemplatefile import ViewPageTemplateFile\nfrom z3c.form import form\nfrom zope.component import adapts\nfrom zope.event import notify\nfrom zope.interface import implementer\nfrom zope.lifecycleevent import ObjectCreatedEvent\n\nimport json\n\n\nclass ActionListControlPanel(BrowserView):\n \"\"\"Control panel for the portal actions.\"\"\"\n\n template = ViewPageTemplateFile(\"actions.pt\")\n\n def __init__(self, context, request):\n self.context = context\n self.request = request\n self.portal_actions = getToolByName(self.context, 'portal_actions')\n\n def display(self):\n 
actions = []\n for category in self.portal_actions.objectValues():\n if category.id == 'controlpanel':\n continue\n if not IActionCategory.providedBy(category):\n continue\n cat_infos = {\n 'id': category.id,\n 'title': category.title or category.id,\n }\n action_list = []\n for action in category.objectValues():\n if IAction.providedBy(action):\n action_list.append({\n 'id': action.id,\n 'title': action.title,\n 'url': action.absolute_url(),\n 'visible': action.visible,\n })\n cat_infos['actions'] = action_list\n actions.append(cat_infos)\n\n self.actions = actions\n return self.template()\n\n def __call__(self):\n if self.request.get('delete'):\n action_id = self.request['actionid']\n category = self.portal_actions[self.request['category']]\n category.manage_delObjects([action_id])\n self.request.RESPONSE.redirect('@@actions-controlpanel')\n if self.request.get('hide'):\n action_id = self.request['actionid']\n category = self.portal_actions[self.request['category']]\n category[action_id].visible = False\n self.request.RESPONSE.redirect('@@actions-controlpanel')\n if self.request.get('show'):\n action_id = self.request['actionid']\n category = self.portal_actions[self.request['category']]\n category[action_id].visible = True\n self.request.RESPONSE.redirect('@@actions-controlpanel')\n return self.display()\n\n\n@implementer(IActionSchema)\nclass ActionControlPanelAdapter:\n \"\"\"Adapter for action form.\"\"\"\n\n adapts(IAction)\n\n def __init__(self, context):\n self.context = context\n self.current_category = self.context.getParentNode()\n\n def get_category(self):\n return self.current_category.id\n\n def set_category(self, value):\n portal_actions = getToolByName(self.context, 'portal_actions')\n new_category = portal_actions.get(value)\n cookie = self.current_category.manage_cutObjects(ids=[self.context.id])\n new_category.manage_pasteObjects(cookie)\n\n category = property(get_category, set_category)\n\n def get_title(self):\n return self.context.title\n\n def set_title(self, value):\n self.context._setPropValue('title', value)\n\n title = property(get_title, set_title)\n\n def get_description(self):\n return self.context.description\n\n def set_description(self, value):\n self.context._setPropValue('description', value)\n\n description = property(get_description, set_description)\n\n def get_i18n_domain(self):\n return self.context.i18n_domain\n\n def set_i18n_domain(self, value):\n self.context._setPropValue('i18n_domain', value)\n\n i18n_domain = property(get_i18n_domain, set_i18n_domain)\n\n def get_url_expr(self):\n return self.context.url_expr\n\n def set_url_expr(self, value):\n self.context._setPropValue('url_expr', value)\n\n url_expr = property(get_url_expr, set_url_expr)\n\n def get_available_expr(self):\n return self.context.available_expr\n\n def set_available_expr(self, value):\n self.context._setPropValue('available_expr', value)\n\n available_expr = property(get_available_expr, set_available_expr)\n\n def get_permissions(self):\n return self.context.permissions\n\n def set_permissions(self, value):\n self.context._setPropValue('permissions', value)\n\n permissions = property(get_permissions, set_permissions)\n\n def get_visible(self):\n return self.context.visible\n\n def set_visible(self, value):\n self.context._setPropValue('visible', value)\n\n visible = property(get_visible, set_visible)\n\n def get_position(self):\n position = self.current_category.objectIds().index(self.context.id)\n return position + 1\n\n def set_position(self, value):\n current_position = 
self.current_category.objectIds().index(\n self.context.id)\n all_actions = list(self.current_category._objects)\n current_action = all_actions.pop(current_position)\n new_position = value - 1\n all_actions = all_actions[0:new_position] + [current_action] + \\\n all_actions[new_position:]\n self.current_category._objects = tuple(all_actions)\n\n position = property(get_position, set_position)\n\n def get_modal(self):\n return self.context.modal\n\n def set_modal(self, value):\n # This property may not exist yet on the context.\n if not self.context.hasProperty(\"modal\"):\n if base_hasattr(self.context, \"modal\"):\n # We cannot define a property when an attribute with the same\n # name already exists.\n delattr(self.context, \"modal\")\n self.context._setProperty('modal', value, 'string')\n else:\n self.context._setPropValue('modal', value)\n\n modal = property(get_modal, set_modal)\n\n\nclass ActionControlPanel(AutoExtensibleForm, form.EditForm):\n \"\"\"A form to edit a portal action.\"\"\"\n\n schema = IActionSchema\n ignoreContext = False\n label = _('Action Settings')\n\n\nclass NewActionControlPanel(AutoExtensibleForm, form.AddForm):\n \"\"\"A form to add a new portal action.\"\"\"\n\n schema = INewActionSchema\n ignoreContext = True\n label = _('New action')\n\n def createAndAdd(self, data):\n portal_actions = getToolByName(self.context, 'portal_actions')\n category = portal_actions.get(data['category'])\n action_id = data['id']\n action = Action(\n action_id,\n title=action_id,\n i18n_domain='plone',\n permissions=['View'],\n )\n category[action_id] = action\n notify(ObjectCreatedEvent(action))\n", "path": "Products/CMFPlone/controlpanel/browser/actions.py"}]}
| 2,463 | 387 |
gh_patches_debug_6698
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-8286
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PDF Invoice: Show Discounts in the invoice
The PDF invoice for ticket buyers does not show discount codes clearly. Please implement the following:
* Add the discount code into the description with "Discount code: samplecodehere, Discount: 10% or USD 2.00 etc."
* Show the discount in the item price list with the original price struck through
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/helpers/order.py`
Content:
```
1 import logging
2 from datetime import datetime, timedelta, timezone
3
4 from flask import render_template
5 from flask_rest_jsonapi.exceptions import ObjectNotFound
6
7 from app.api.helpers.db import (
8 get_count,
9 safe_query_without_soft_deleted_entries,
10 save_to_db,
11 )
12 from app.api.helpers.errors import ConflictError, UnprocessableEntityError
13 from app.api.helpers.files import create_save_pdf
14 from app.api.helpers.mail import (
15 send_email_to_attendees,
16 send_order_purchase_organizer_email,
17 )
18 from app.api.helpers.notification import (
19 notify_ticket_purchase_attendee,
20 notify_ticket_purchase_organizer,
21 )
22 from app.api.helpers.storage import UPLOAD_PATHS
23 from app.models import db
24 from app.models.order import OrderTicket
25 from app.models.ticket import Ticket
26 from app.models.ticket_fee import TicketFees
27 from app.models.ticket_holder import TicketHolder
28 from app.models.setting import Setting
29 from app.settings import get_settings
30
31
32 def delete_related_attendees_for_order(order):
33 """
34 Delete the associated attendees of an order when it is cancelled/deleted/expired
35 :param order: Order whose attendees have to be deleted.
36 :return:
37 """
38 for ticket_holder in order.ticket_holders:
39 db.session.delete(ticket_holder)
40 try:
41 db.session.commit()
42 except Exception:
43 logging.exception('DB Exception!')
44 db.session.rollback()
45
46
47 def set_expiry_for_order(order, override=False):
48 """
49 Expire the order after the time slot(10 minutes) if the order is initializing.
50 Also expires the order if we want to expire an order regardless of the state and time.
51 :param order: Order to be expired.
52 :param override: flag to force expiry.
53 :return:
54 """
55 order_expiry_time = get_settings()['order_expiry_time']
56 if (
57 order
58 and not order.paid_via
59 and (
60 override
61 or (
62 order.status == 'initializing'
63 and (order.created_at + timedelta(minutes=order_expiry_time))
64 < datetime.now(timezone.utc)
65 )
66 )
67 ):
68 order.status = 'expired'
69 delete_related_attendees_for_order(order)
70 save_to_db(order)
71 return order
72
73
74 def create_pdf_tickets_for_holder(order):
75 """
76 Create tickets and invoices for the holders of an order.
77 :param order: The order for which to create tickets for.
78 """
79 if order.status == 'completed' or order.status == 'placed':
80 pdf = create_save_pdf(
81 render_template('pdf/ticket_purchaser.html', order=order),
82 UPLOAD_PATHS['pdf']['tickets_all'],
83 dir_path='/static/uploads/pdf/tickets/',
84 identifier=order.identifier,
85 extra_identifiers={'extra_identifier': order.identifier},
86 upload_dir='generated/tickets/',
87 )
88
89 order.tickets_pdf_url = pdf
90
91 for holder in order.ticket_holders:
92 # create attendee pdf for every ticket holder
93 pdf = create_save_pdf(
94 render_template('pdf/ticket_attendee.html', order=order, holder=holder),
95 UPLOAD_PATHS['pdf']['tickets_all'],
96 dir_path='/static/uploads/pdf/tickets/',
97 identifier=order.identifier,
98 extra_identifiers={'extra_identifier': holder.id},
99 upload_dir='generated/tickets/',
100 )
101 holder.pdf_url = pdf
102 save_to_db(holder)
103
104 admin_info = Setting.query.first()
105
106 # create order invoices pdf
107 order_tickets = OrderTicket.query.filter_by(order_id=order.id).all()
108
109 tickets = []
110 for order_ticket in order_tickets:
111 ticket = dict(
112 id=order_ticket.ticket.id,
113 price=order_ticket.ticket.price,
114 quantity=order_ticket.quantity
115 )
116 tickets.append(ticket)
117
118 # calculate order amount using helper function
119 order_amount = calculate_order_amount(tickets, discount_code=order.discount_code)
120
121 create_save_pdf(
122 render_template(
123 'pdf/order_invoice.html',
124 order=order,
125 event=order.event,
126 tax=order.event.tax,
127 order_tickets=order_tickets,
128 event_starts_at=order.event.starts_at_tz.strftime('%d %B %Y'),
129 created_at=order.created_at.strftime('%d %B %Y'),
130 admin_info=admin_info,
131 order_amount=order_amount
132 ),
133 UPLOAD_PATHS['pdf']['order'],
134 dir_path='/static/uploads/pdf/tickets/',
135 identifier=order.identifier,
136 upload_dir='generated/invoices/',
137 new_renderer=True,
138 )
139 save_to_db(order)
140
141
142 def create_onsite_attendees_for_order(data):
143 """
144 Creates on site ticket holders for an order and adds it into the request data.
145 :param data: data initially passed in the POST request for order.
146 :return:
147 """
148 on_site_tickets = data.get('on_site_tickets')
149
150 if not on_site_tickets:
151 raise UnprocessableEntityError(
152 {'pointer': 'data/attributes/on_site_tickets'}, 'on_site_tickets info missing'
153 )
154
155 data['ticket_holders'] = []
156
157 for on_site_ticket in on_site_tickets:
158 ticket_id = on_site_ticket['id']
159 quantity = int(on_site_ticket['quantity'])
160
161 ticket = safe_query_without_soft_deleted_entries(
162 Ticket, 'id', ticket_id, 'ticket_id'
163 )
164
165 ticket_sold_count = get_count(
166 db.session.query(TicketHolder.id).filter_by(
167 ticket_id=int(ticket.id), deleted_at=None
168 )
169 )
170
171 # Check if the ticket is already sold out or not.
172 if ticket_sold_count + quantity > ticket.quantity:
173 # delete the already created attendees.
174 for holder in data['ticket_holders']:
175 ticket_holder = (
176 db.session.query(TicketHolder).filter(id == int(holder)).one()
177 )
178 db.session.delete(ticket_holder)
179 try:
180 db.session.commit()
181 except Exception:
182 logging.exception('DB Exception!')
183 db.session.rollback()
184
185 raise ConflictError(
186 {'pointer': '/data/attributes/on_site_tickets'},
187 "Ticket with id: {} already sold out. You can buy at most {} tickets".format(
188 ticket_id, ticket.quantity - ticket_sold_count
189 ),
190 )
191
192 for _ in range(1, quantity):
193 ticket_holder = TicketHolder(
194 firstname='onsite',
195 lastname='attendee',
196 email='[email protected]',
197 ticket_id=ticket.id,
198 event_id=data.get('event'),
199 )
200 save_to_db(ticket_holder)
201 data['ticket_holders'].append(ticket_holder.id)
202
203 # delete from the data.
204 del data['on_site_tickets']
205
206
207 def calculate_order_amount(tickets, discount_code=None):
208 from app.api.helpers.ticketing import validate_discount_code, validate_tickets
209 from app.models.discount_code import DiscountCode
210
211 ticket_ids = {ticket['id'] for ticket in tickets}
212 ticket_map = {int(ticket['id']): ticket for ticket in tickets}
213 fetched_tickets = validate_tickets(ticket_ids)
214
215 if tickets and discount_code:
216 discount_code = validate_discount_code(discount_code, tickets=tickets)
217
218 event = tax = tax_included = fees = None
219 total_amount = total_tax = total_discount = 0.0
220 ticket_list = []
221 for ticket in fetched_tickets:
222 ticket_tax = discounted_tax = 0.0
223 ticket_info = ticket_map[ticket.id]
224 discount_amount = 0.0
225 discount_data = None
226 ticket_fee = 0.0
227
228 quantity = ticket_info.get('quantity', 1) # Default to single ticket
229 if not event:
230 event = ticket.event
231
232 if event.deleted_at:
233 raise ObjectNotFound(
234 {'pointer': 'tickets/event'}, f'Event: {event.id} not found'
235 )
236
237 fees = TicketFees.query.filter_by(currency=event.payment_currency).first()
238
239 if not tax and event.tax:
240 tax = event.tax
241 tax_included = tax.is_tax_included_in_price
242
243 if ticket.type == 'donation':
244 price = ticket_info.get('price')
245 if not price or price > ticket.max_price or price < ticket.min_price:
246 raise UnprocessableEntityError(
247 {'pointer': 'tickets/price'},
248 f"Price for donation ticket should be present and within range "
249 f"{ticket.min_price} to {ticket.max_price}",
250 )
251 else:
252 price = ticket.price if ticket.type != 'free' else 0.0
253
254 if tax:
255 if tax_included:
256 ticket_tax = price - price / (1 + tax.rate / 100)
257 else:
258 ticket_tax = price * tax.rate / 100
259
260 if discount_code and ticket.type != 'free':
261 code = (
262 DiscountCode.query.with_parent(ticket)
263 .filter_by(id=discount_code.id)
264 .first()
265 )
266 if code:
267 if discount_code.id == code.id:
268 if code.type == 'amount':
269 discount_amount = min(code.value, price)
270 discount_percent = (discount_amount / price) * 100
271 if tax:
272 if tax_included:
273 discounted_tax = (price - discount_amount) - (price - discount_amount) / (1 + tax.rate / 100)
274 else:
275 discounted_tax = (price - discount_amount) * tax.rate / 100
276 else:
277 discount_amount = (price * code.value) / 100
278 if tax:
279 discounted_tax = ticket_tax - (ticket_tax * code.value / 100)
280 discount_percent = code.value
281 discount_data = {
282 'code': discount_code.code,
283 'percent': round(discount_percent, 2),
284 'amount': round(discount_amount, 2),
285 'total': round(discount_amount * quantity, 2),
286 }
287
288 total_discount += round(discount_amount * quantity, 2)
289 if fees and not ticket.is_fee_absorbed:
290 ticket_fee = fees.service_fee * (price * quantity) / 100
291 if ticket_fee > fees.maximum_fee:
292 ticket_fee = fees.maximum_fee
293 sub_total = ticket_fee + (price - discount_amount) * quantity
294 total_amount = total_amount + sub_total
295 ticket_list.append(
296 {
297 'id': ticket.id,
298 'name': ticket.name,
299 'price': price,
300 'quantity': quantity,
301 'discount': discount_data,
302 'ticket_fee': round(ticket_fee, 2),
303 'sub_total': round(sub_total, 2),
304 'ticket_tax': round(ticket_tax, 2),
305 'discounted_tax': round(discounted_tax, 2)
306 }
307 )
308
309 sub_total = total_amount
310 tax_dict = None
311 if tax:
312 if tax_included:
313 total_tax = total_amount - total_amount / (1 + tax.rate / 100)
314 else:
315 total_tax = total_amount * tax.rate / 100
316 total_amount += total_tax
317 tax_dict = dict(
318 included=tax_included,
319 amount=round(total_tax, 2),
320 percent=tax.rate if tax else 0.0,
321 name=tax.name,
322 )
323
324 return dict(
325 tax=tax_dict,
326 sub_total=round(sub_total, 2),
327 total=round(total_amount, 2),
328 discount=round(total_discount, 2),
329 tickets=ticket_list,
330 )
331
332
333 def on_order_completed(order):
334 # send e-mail and notifications if the order status is completed
335 if not (order.status == 'completed' or order.status == 'placed'):
336 return
337
338 create_pdf_tickets_for_holder(order)
339
340 # send email and notifications.
341 send_email_to_attendees(order)
342 notify_ticket_purchase_attendee(order)
343
344 if order.payment_mode in ['free', 'bank', 'cheque', 'onsite']:
345 order.completed_at = datetime.utcnow()
346
347 organizer_set = set(
348 filter(
349 bool, order.event.organizers + order.event.coorganizers + [order.event.owner]
350 )
351 )
352 send_order_purchase_organizer_email(order, organizer_set)
353 notify_ticket_purchase_organizer(order)
354
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/api/helpers/order.py b/app/api/helpers/order.py
--- a/app/api/helpers/order.py
+++ b/app/api/helpers/order.py
@@ -283,6 +283,7 @@
'percent': round(discount_percent, 2),
'amount': round(discount_amount, 2),
'total': round(discount_amount * quantity, 2),
+ 'type': code.type
}
total_discount += round(discount_amount * quantity, 2)
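The functional change here is the extra `type` key on `discount_data`, which lets the invoice rendering distinguish percentage discounts from fixed-amount ones when composing the wording requested in the issue ("Discount code: samplecodehere, Discount: 10% or USD 2.00"). The template `pdf/order_invoice.html` is not part of this record, so the helper below is only a hedged sketch of how that key could be consumed; the function name and formatting are assumptions, not code from the repository:

```python
# Hypothetical helper, not part of the golden diff: formats one of the
# 'discount' dicts built in calculate_order_amount into the label requested
# in the issue, using the new 'type' key to choose percent vs. fixed amount.
def format_discount_label(discount, currency):
    if not discount:
        return ''
    if discount['type'] == 'amount':
        value = f"{currency} {discount['amount']:.2f}"
    else:
        value = f"{discount['percent']}%"
    return f"Discount code: {discount['code']}, Discount: {value}"


# Example: a 10% code that takes 2.00 off a 20.00 USD ticket
print(format_discount_label(
    {'code': 'samplecodehere', 'percent': 10.0, 'amount': 2.0,
     'total': 4.0, 'type': 'percent'},
    'USD',
))  # Discount code: samplecodehere, Discount: 10.0%
```

The strike-through of the original price would then be a template concern, pairing each item's `price` with `price - discount['amount']` from the same dict.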
|
{"golden_diff": "diff --git a/app/api/helpers/order.py b/app/api/helpers/order.py\n--- a/app/api/helpers/order.py\n+++ b/app/api/helpers/order.py\n@@ -283,6 +283,7 @@\n 'percent': round(discount_percent, 2),\n 'amount': round(discount_amount, 2),\n 'total': round(discount_amount * quantity, 2),\n+ 'type': code.type\n }\n \n total_discount += round(discount_amount * quantity, 2)\n", "issue": "PDF Invoice: Show Discounts in the invoice\nThe PDF invoice for ticket buyers does not show discount codes clearly. Please implement the following:\r\n* Add the discount code into the description with \"Discount code: samplecodehere, Discount: 10% or USD 2.00 etc.\"\r\n* Show the discount in the item price list with the original price strike through\n", "before_files": [{"content": "import logging\nfrom datetime import datetime, timedelta, timezone\n\nfrom flask import render_template\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.helpers.db import (\n get_count,\n safe_query_without_soft_deleted_entries,\n save_to_db,\n)\nfrom app.api.helpers.errors import ConflictError, UnprocessableEntityError\nfrom app.api.helpers.files import create_save_pdf\nfrom app.api.helpers.mail import (\n send_email_to_attendees,\n send_order_purchase_organizer_email,\n)\nfrom app.api.helpers.notification import (\n notify_ticket_purchase_attendee,\n notify_ticket_purchase_organizer,\n)\nfrom app.api.helpers.storage import UPLOAD_PATHS\nfrom app.models import db\nfrom app.models.order import OrderTicket\nfrom app.models.ticket import Ticket\nfrom app.models.ticket_fee import TicketFees\nfrom app.models.ticket_holder import TicketHolder\nfrom app.models.setting import Setting\nfrom app.settings import get_settings\n\n\ndef delete_related_attendees_for_order(order):\n \"\"\"\n Delete the associated attendees of an order when it is cancelled/deleted/expired\n :param order: Order whose attendees have to be deleted.\n :return:\n \"\"\"\n for ticket_holder in order.ticket_holders:\n db.session.delete(ticket_holder)\n try:\n db.session.commit()\n except Exception:\n logging.exception('DB Exception!')\n db.session.rollback()\n\n\ndef set_expiry_for_order(order, override=False):\n \"\"\"\n Expire the order after the time slot(10 minutes) if the order is initializing.\n Also expires the order if we want to expire an order regardless of the state and time.\n :param order: Order to be expired.\n :param override: flag to force expiry.\n :return:\n \"\"\"\n order_expiry_time = get_settings()['order_expiry_time']\n if (\n order\n and not order.paid_via\n and (\n override\n or (\n order.status == 'initializing'\n and (order.created_at + timedelta(minutes=order_expiry_time))\n < datetime.now(timezone.utc)\n )\n )\n ):\n order.status = 'expired'\n delete_related_attendees_for_order(order)\n save_to_db(order)\n return order\n\n\ndef create_pdf_tickets_for_holder(order):\n \"\"\"\n Create tickets and invoices for the holders of an order.\n :param order: The order for which to create tickets for.\n \"\"\"\n if order.status == 'completed' or order.status == 'placed':\n pdf = create_save_pdf(\n render_template('pdf/ticket_purchaser.html', order=order),\n UPLOAD_PATHS['pdf']['tickets_all'],\n dir_path='/static/uploads/pdf/tickets/',\n identifier=order.identifier,\n extra_identifiers={'extra_identifier': order.identifier},\n upload_dir='generated/tickets/',\n )\n\n order.tickets_pdf_url = pdf\n\n for holder in order.ticket_holders:\n # create attendee pdf for every ticket holder\n pdf = create_save_pdf(\n 
render_template('pdf/ticket_attendee.html', order=order, holder=holder),\n UPLOAD_PATHS['pdf']['tickets_all'],\n dir_path='/static/uploads/pdf/tickets/',\n identifier=order.identifier,\n extra_identifiers={'extra_identifier': holder.id},\n upload_dir='generated/tickets/',\n )\n holder.pdf_url = pdf\n save_to_db(holder)\n \n admin_info = Setting.query.first()\n\n # create order invoices pdf\n order_tickets = OrderTicket.query.filter_by(order_id=order.id).all()\n\n tickets = []\n for order_ticket in order_tickets:\n ticket = dict(\n id=order_ticket.ticket.id,\n price=order_ticket.ticket.price,\n quantity=order_ticket.quantity\n )\n tickets.append(ticket)\n \n # calculate order amount using helper function\n order_amount = calculate_order_amount(tickets, discount_code=order.discount_code) \n\n create_save_pdf(\n render_template(\n 'pdf/order_invoice.html',\n order=order,\n event=order.event,\n tax=order.event.tax,\n order_tickets=order_tickets,\n event_starts_at=order.event.starts_at_tz.strftime('%d %B %Y'),\n created_at=order.created_at.strftime('%d %B %Y'),\n admin_info=admin_info,\n order_amount=order_amount\n ),\n UPLOAD_PATHS['pdf']['order'],\n dir_path='/static/uploads/pdf/tickets/',\n identifier=order.identifier,\n upload_dir='generated/invoices/',\n new_renderer=True,\n )\n save_to_db(order)\n\n\ndef create_onsite_attendees_for_order(data):\n \"\"\"\n Creates on site ticket holders for an order and adds it into the request data.\n :param data: data initially passed in the POST request for order.\n :return:\n \"\"\"\n on_site_tickets = data.get('on_site_tickets')\n\n if not on_site_tickets:\n raise UnprocessableEntityError(\n {'pointer': 'data/attributes/on_site_tickets'}, 'on_site_tickets info missing'\n )\n\n data['ticket_holders'] = []\n\n for on_site_ticket in on_site_tickets:\n ticket_id = on_site_ticket['id']\n quantity = int(on_site_ticket['quantity'])\n\n ticket = safe_query_without_soft_deleted_entries(\n Ticket, 'id', ticket_id, 'ticket_id'\n )\n\n ticket_sold_count = get_count(\n db.session.query(TicketHolder.id).filter_by(\n ticket_id=int(ticket.id), deleted_at=None\n )\n )\n\n # Check if the ticket is already sold out or not.\n if ticket_sold_count + quantity > ticket.quantity:\n # delete the already created attendees.\n for holder in data['ticket_holders']:\n ticket_holder = (\n db.session.query(TicketHolder).filter(id == int(holder)).one()\n )\n db.session.delete(ticket_holder)\n try:\n db.session.commit()\n except Exception:\n logging.exception('DB Exception!')\n db.session.rollback()\n\n raise ConflictError(\n {'pointer': '/data/attributes/on_site_tickets'},\n \"Ticket with id: {} already sold out. 
You can buy at most {} tickets\".format(\n ticket_id, ticket.quantity - ticket_sold_count\n ),\n )\n\n for _ in range(1, quantity):\n ticket_holder = TicketHolder(\n firstname='onsite',\n lastname='attendee',\n email='[email protected]',\n ticket_id=ticket.id,\n event_id=data.get('event'),\n )\n save_to_db(ticket_holder)\n data['ticket_holders'].append(ticket_holder.id)\n\n # delete from the data.\n del data['on_site_tickets']\n\n\ndef calculate_order_amount(tickets, discount_code=None):\n from app.api.helpers.ticketing import validate_discount_code, validate_tickets\n from app.models.discount_code import DiscountCode\n\n ticket_ids = {ticket['id'] for ticket in tickets}\n ticket_map = {int(ticket['id']): ticket for ticket in tickets}\n fetched_tickets = validate_tickets(ticket_ids)\n\n if tickets and discount_code:\n discount_code = validate_discount_code(discount_code, tickets=tickets)\n\n event = tax = tax_included = fees = None\n total_amount = total_tax = total_discount = 0.0\n ticket_list = []\n for ticket in fetched_tickets:\n ticket_tax = discounted_tax = 0.0\n ticket_info = ticket_map[ticket.id]\n discount_amount = 0.0\n discount_data = None\n ticket_fee = 0.0\n\n quantity = ticket_info.get('quantity', 1) # Default to single ticket\n if not event:\n event = ticket.event\n\n if event.deleted_at:\n raise ObjectNotFound(\n {'pointer': 'tickets/event'}, f'Event: {event.id} not found'\n )\n\n fees = TicketFees.query.filter_by(currency=event.payment_currency).first()\n\n if not tax and event.tax:\n tax = event.tax\n tax_included = tax.is_tax_included_in_price\n\n if ticket.type == 'donation':\n price = ticket_info.get('price')\n if not price or price > ticket.max_price or price < ticket.min_price:\n raise UnprocessableEntityError(\n {'pointer': 'tickets/price'},\n f\"Price for donation ticket should be present and within range \"\n f\"{ticket.min_price} to {ticket.max_price}\",\n )\n else:\n price = ticket.price if ticket.type != 'free' else 0.0\n\n if tax:\n if tax_included:\n ticket_tax = price - price / (1 + tax.rate / 100)\n else:\n ticket_tax = price * tax.rate / 100\n\n if discount_code and ticket.type != 'free':\n code = (\n DiscountCode.query.with_parent(ticket)\n .filter_by(id=discount_code.id)\n .first()\n )\n if code:\n if discount_code.id == code.id:\n if code.type == 'amount':\n discount_amount = min(code.value, price)\n discount_percent = (discount_amount / price) * 100\n if tax:\n if tax_included:\n discounted_tax = (price - discount_amount) - (price - discount_amount) / (1 + tax.rate / 100)\n else:\n discounted_tax = (price - discount_amount) * tax.rate / 100\n else:\n discount_amount = (price * code.value) / 100\n if tax:\n discounted_tax = ticket_tax - (ticket_tax * code.value / 100)\n discount_percent = code.value\n discount_data = {\n 'code': discount_code.code,\n 'percent': round(discount_percent, 2),\n 'amount': round(discount_amount, 2),\n 'total': round(discount_amount * quantity, 2),\n }\n\n total_discount += round(discount_amount * quantity, 2)\n if fees and not ticket.is_fee_absorbed:\n ticket_fee = fees.service_fee * (price * quantity) / 100\n if ticket_fee > fees.maximum_fee:\n ticket_fee = fees.maximum_fee\n sub_total = ticket_fee + (price - discount_amount) * quantity\n total_amount = total_amount + sub_total\n ticket_list.append(\n {\n 'id': ticket.id,\n 'name': ticket.name,\n 'price': price,\n 'quantity': quantity,\n 'discount': discount_data,\n 'ticket_fee': round(ticket_fee, 2),\n 'sub_total': round(sub_total, 2),\n 'ticket_tax': round(ticket_tax, 
2),\n 'discounted_tax': round(discounted_tax, 2)\n }\n )\n\n sub_total = total_amount\n tax_dict = None\n if tax:\n if tax_included:\n total_tax = total_amount - total_amount / (1 + tax.rate / 100)\n else:\n total_tax = total_amount * tax.rate / 100\n total_amount += total_tax\n tax_dict = dict(\n included=tax_included,\n amount=round(total_tax, 2),\n percent=tax.rate if tax else 0.0,\n name=tax.name,\n )\n\n return dict(\n tax=tax_dict,\n sub_total=round(sub_total, 2),\n total=round(total_amount, 2),\n discount=round(total_discount, 2),\n tickets=ticket_list,\n )\n\n\ndef on_order_completed(order):\n # send e-mail and notifications if the order status is completed\n if not (order.status == 'completed' or order.status == 'placed'):\n return\n\n create_pdf_tickets_for_holder(order)\n\n # send email and notifications.\n send_email_to_attendees(order)\n notify_ticket_purchase_attendee(order)\n\n if order.payment_mode in ['free', 'bank', 'cheque', 'onsite']:\n order.completed_at = datetime.utcnow()\n\n organizer_set = set(\n filter(\n bool, order.event.organizers + order.event.coorganizers + [order.event.owner]\n )\n )\n send_order_purchase_organizer_email(order, organizer_set)\n notify_ticket_purchase_organizer(order)\n", "path": "app/api/helpers/order.py"}], "after_files": [{"content": "import logging\nfrom datetime import datetime, timedelta, timezone\n\nfrom flask import render_template\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.helpers.db import (\n get_count,\n safe_query_without_soft_deleted_entries,\n save_to_db,\n)\nfrom app.api.helpers.errors import ConflictError, UnprocessableEntityError\nfrom app.api.helpers.files import create_save_pdf\nfrom app.api.helpers.mail import (\n send_email_to_attendees,\n send_order_purchase_organizer_email,\n)\nfrom app.api.helpers.notification import (\n notify_ticket_purchase_attendee,\n notify_ticket_purchase_organizer,\n)\nfrom app.api.helpers.storage import UPLOAD_PATHS\nfrom app.models import db\nfrom app.models.order import OrderTicket\nfrom app.models.ticket import Ticket\nfrom app.models.ticket_fee import TicketFees\nfrom app.models.ticket_holder import TicketHolder\nfrom app.models.setting import Setting\nfrom app.settings import get_settings\n\n\ndef delete_related_attendees_for_order(order):\n \"\"\"\n Delete the associated attendees of an order when it is cancelled/deleted/expired\n :param order: Order whose attendees have to be deleted.\n :return:\n \"\"\"\n for ticket_holder in order.ticket_holders:\n db.session.delete(ticket_holder)\n try:\n db.session.commit()\n except Exception:\n logging.exception('DB Exception!')\n db.session.rollback()\n\n\ndef set_expiry_for_order(order, override=False):\n \"\"\"\n Expire the order after the time slot(10 minutes) if the order is initializing.\n Also expires the order if we want to expire an order regardless of the state and time.\n :param order: Order to be expired.\n :param override: flag to force expiry.\n :return:\n \"\"\"\n order_expiry_time = get_settings()['order_expiry_time']\n if (\n order\n and not order.paid_via\n and (\n override\n or (\n order.status == 'initializing'\n and (order.created_at + timedelta(minutes=order_expiry_time))\n < datetime.now(timezone.utc)\n )\n )\n ):\n order.status = 'expired'\n delete_related_attendees_for_order(order)\n save_to_db(order)\n return order\n\n\ndef create_pdf_tickets_for_holder(order):\n \"\"\"\n Create tickets and invoices for the holders of an order.\n :param order: The order for which to create tickets for.\n 
\"\"\"\n if order.status == 'completed' or order.status == 'placed':\n pdf = create_save_pdf(\n render_template('pdf/ticket_purchaser.html', order=order),\n UPLOAD_PATHS['pdf']['tickets_all'],\n dir_path='/static/uploads/pdf/tickets/',\n identifier=order.identifier,\n extra_identifiers={'extra_identifier': order.identifier},\n upload_dir='generated/tickets/',\n )\n\n order.tickets_pdf_url = pdf\n\n for holder in order.ticket_holders:\n # create attendee pdf for every ticket holder\n pdf = create_save_pdf(\n render_template('pdf/ticket_attendee.html', order=order, holder=holder),\n UPLOAD_PATHS['pdf']['tickets_all'],\n dir_path='/static/uploads/pdf/tickets/',\n identifier=order.identifier,\n extra_identifiers={'extra_identifier': holder.id},\n upload_dir='generated/tickets/',\n )\n holder.pdf_url = pdf\n save_to_db(holder)\n \n admin_info = Setting.query.first()\n\n # create order invoices pdf\n order_tickets = OrderTicket.query.filter_by(order_id=order.id).all()\n\n tickets = []\n for order_ticket in order_tickets:\n ticket = dict(\n id=order_ticket.ticket.id,\n price=order_ticket.ticket.price,\n quantity=order_ticket.quantity\n )\n tickets.append(ticket)\n \n # calculate order amount using helper function\n order_amount = calculate_order_amount(tickets, discount_code=order.discount_code) \n\n create_save_pdf(\n render_template(\n 'pdf/order_invoice.html',\n order=order,\n event=order.event,\n tax=order.event.tax,\n order_tickets=order_tickets,\n event_starts_at=order.event.starts_at_tz.strftime('%d %B %Y'),\n created_at=order.created_at.strftime('%d %B %Y'),\n admin_info=admin_info,\n order_amount=order_amount\n ),\n UPLOAD_PATHS['pdf']['order'],\n dir_path='/static/uploads/pdf/tickets/',\n identifier=order.identifier,\n upload_dir='generated/invoices/',\n new_renderer=True,\n )\n save_to_db(order)\n\n\ndef create_onsite_attendees_for_order(data):\n \"\"\"\n Creates on site ticket holders for an order and adds it into the request data.\n :param data: data initially passed in the POST request for order.\n :return:\n \"\"\"\n on_site_tickets = data.get('on_site_tickets')\n\n if not on_site_tickets:\n raise UnprocessableEntityError(\n {'pointer': 'data/attributes/on_site_tickets'}, 'on_site_tickets info missing'\n )\n\n data['ticket_holders'] = []\n\n for on_site_ticket in on_site_tickets:\n ticket_id = on_site_ticket['id']\n quantity = int(on_site_ticket['quantity'])\n\n ticket = safe_query_without_soft_deleted_entries(\n Ticket, 'id', ticket_id, 'ticket_id'\n )\n\n ticket_sold_count = get_count(\n db.session.query(TicketHolder.id).filter_by(\n ticket_id=int(ticket.id), deleted_at=None\n )\n )\n\n # Check if the ticket is already sold out or not.\n if ticket_sold_count + quantity > ticket.quantity:\n # delete the already created attendees.\n for holder in data['ticket_holders']:\n ticket_holder = (\n db.session.query(TicketHolder).filter(id == int(holder)).one()\n )\n db.session.delete(ticket_holder)\n try:\n db.session.commit()\n except Exception:\n logging.exception('DB Exception!')\n db.session.rollback()\n\n raise ConflictError(\n {'pointer': '/data/attributes/on_site_tickets'},\n \"Ticket with id: {} already sold out. 
You can buy at most {} tickets\".format(\n ticket_id, ticket.quantity - ticket_sold_count\n ),\n )\n\n for _ in range(1, quantity):\n ticket_holder = TicketHolder(\n firstname='onsite',\n lastname='attendee',\n email='[email protected]',\n ticket_id=ticket.id,\n event_id=data.get('event'),\n )\n save_to_db(ticket_holder)\n data['ticket_holders'].append(ticket_holder.id)\n\n # delete from the data.\n del data['on_site_tickets']\n\n\ndef calculate_order_amount(tickets, discount_code=None):\n from app.api.helpers.ticketing import validate_discount_code, validate_tickets\n from app.models.discount_code import DiscountCode\n\n ticket_ids = {ticket['id'] for ticket in tickets}\n ticket_map = {int(ticket['id']): ticket for ticket in tickets}\n fetched_tickets = validate_tickets(ticket_ids)\n\n if tickets and discount_code:\n discount_code = validate_discount_code(discount_code, tickets=tickets)\n\n event = tax = tax_included = fees = None\n total_amount = total_tax = total_discount = 0.0\n ticket_list = []\n for ticket in fetched_tickets:\n ticket_tax = discounted_tax = 0.0\n ticket_info = ticket_map[ticket.id]\n discount_amount = 0.0\n discount_data = None\n ticket_fee = 0.0\n\n quantity = ticket_info.get('quantity', 1) # Default to single ticket\n if not event:\n event = ticket.event\n\n if event.deleted_at:\n raise ObjectNotFound(\n {'pointer': 'tickets/event'}, f'Event: {event.id} not found'\n )\n\n fees = TicketFees.query.filter_by(currency=event.payment_currency).first()\n\n if not tax and event.tax:\n tax = event.tax\n tax_included = tax.is_tax_included_in_price\n\n if ticket.type == 'donation':\n price = ticket_info.get('price')\n if not price or price > ticket.max_price or price < ticket.min_price:\n raise UnprocessableEntityError(\n {'pointer': 'tickets/price'},\n f\"Price for donation ticket should be present and within range \"\n f\"{ticket.min_price} to {ticket.max_price}\",\n )\n else:\n price = ticket.price if ticket.type != 'free' else 0.0\n\n if tax:\n if tax_included:\n ticket_tax = price - price / (1 + tax.rate / 100)\n else:\n ticket_tax = price * tax.rate / 100\n\n if discount_code and ticket.type != 'free':\n code = (\n DiscountCode.query.with_parent(ticket)\n .filter_by(id=discount_code.id)\n .first()\n )\n if code:\n if discount_code.id == code.id:\n if code.type == 'amount':\n discount_amount = min(code.value, price)\n discount_percent = (discount_amount / price) * 100\n if tax:\n if tax_included:\n discounted_tax = (price - discount_amount) - (price - discount_amount) / (1 + tax.rate / 100)\n else:\n discounted_tax = (price - discount_amount) * tax.rate / 100\n else:\n discount_amount = (price * code.value) / 100\n if tax:\n discounted_tax = ticket_tax - (ticket_tax * code.value / 100)\n discount_percent = code.value\n discount_data = {\n 'code': discount_code.code,\n 'percent': round(discount_percent, 2),\n 'amount': round(discount_amount, 2),\n 'total': round(discount_amount * quantity, 2),\n 'type': code.type\n }\n\n total_discount += round(discount_amount * quantity, 2)\n if fees and not ticket.is_fee_absorbed:\n ticket_fee = fees.service_fee * (price * quantity) / 100\n if ticket_fee > fees.maximum_fee:\n ticket_fee = fees.maximum_fee\n sub_total = ticket_fee + (price - discount_amount) * quantity\n total_amount = total_amount + sub_total\n ticket_list.append(\n {\n 'id': ticket.id,\n 'name': ticket.name,\n 'price': price,\n 'quantity': quantity,\n 'discount': discount_data,\n 'ticket_fee': round(ticket_fee, 2),\n 'sub_total': round(sub_total, 2),\n 'ticket_tax': 
round(ticket_tax, 2),\n 'discounted_tax': round(discounted_tax, 2)\n }\n )\n\n sub_total = total_amount\n tax_dict = None\n if tax:\n if tax_included:\n total_tax = total_amount - total_amount / (1 + tax.rate / 100)\n else:\n total_tax = total_amount * tax.rate / 100\n total_amount += total_tax\n tax_dict = dict(\n included=tax_included,\n amount=round(total_tax, 2),\n percent=tax.rate if tax else 0.0,\n name=tax.name,\n )\n\n return dict(\n tax=tax_dict,\n sub_total=round(sub_total, 2),\n total=round(total_amount, 2),\n discount=round(total_discount, 2),\n tickets=ticket_list,\n )\n\n\ndef on_order_completed(order):\n # send e-mail and notifications if the order status is completed\n if not (order.status == 'completed' or order.status == 'placed'):\n return\n\n create_pdf_tickets_for_holder(order)\n\n # send email and notifications.\n send_email_to_attendees(order)\n notify_ticket_purchase_attendee(order)\n\n if order.payment_mode in ['free', 'bank', 'cheque', 'onsite']:\n order.completed_at = datetime.utcnow()\n\n organizer_set = set(\n filter(\n bool, order.event.organizers + order.event.coorganizers + [order.event.owner]\n )\n )\n send_order_purchase_organizer_email(order, organizer_set)\n notify_ticket_purchase_organizer(order)\n", "path": "app/api/helpers/order.py"}]}
| 3,888 | 109 |
gh_patches_debug_6063
|
rasdani/github-patches
|
git_diff
|
openstates__openstates-scrapers-1627
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MI scraper failing since at least 2017-04-01
MI has been failing since 2017-04-01
Based on automated runs it appears that MI has not run successfully in 5 days (2017-04-01).
```
06:00:31 INFO billy: billy-update abbr=mi
actions=scrape,import,report
types=bills,legislators,votes,committees,alldata,events
sessions=2017-2018
terms=2017-2018
06:00:31 INFO scrapelib: GET - http://www.senate.michigan.gov/senatorinfo.html
File "/usr/local/bin/billy-update", line 9, in <module>
load_entry_point('billy==1.9.0', 'console_scripts', 'billy-update')()
File "/opt/openstates/billy/billy/bin/update.py", line 368, in main
Traceback (most recent call last):
run_record += _run_scraper(stype, args, metadata)
File "/opt/openstates/billy/billy/bin/update.py", line 102, in _run_scraper
response = self.get(url)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 501, in get
return self.request('GET', url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/scrapelib/__init__.py", line 272, in request
raise HTTPError(resp)
scrapelib.HTTPError: 500 while retrieving http://www.senate.michigan.gov/senatorinfo.html
File "/srv/openstates-web/openstates/mi/legislators.py", line 77, in scrape_upper
scraper.scrape(chamber, time)
File "/srv/openstates-web/openstates/mi/legislators.py", line 16, in scrape
return self.scrape_upper(chamber, term)
doc = self.lxmlize(url)
File "/srv/openstates-web/openstates/utils/lxmlize.py", line 19, in lxmlize
```
Visit http://bobsled.openstates.org/ for more info.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/mi/legislators.py`
Content:
```
1 import re
2
3 from billy.scrape.legislators import LegislatorScraper, Legislator
4 from openstates.utils import LXMLMixin
5
6 abbr = {'D': 'Democratic', 'R': 'Republican'}
7
8
9 class MILegislatorScraper(LegislatorScraper, LXMLMixin):
10 jurisdiction = 'mi'
11
12 def scrape(self, chamber, term):
13 self.validate_term(term, latest_only=True)
14 if chamber == 'lower':
15 return self.scrape_lower(chamber, term)
16 return self.scrape_upper(chamber, term)
17
18 def scrape_lower(self, chamber, term):
19 url = 'http://www.house.mi.gov/mhrpublic/frmRepList.aspx'
20 table = [
21 "website",
22 "district",
23 "name",
24 "party",
25 "location",
26 "phone",
27 "email"
28 ]
29 doc = self.lxmlize(url)
30 # skip two rows at top
31 for row in doc.xpath('//table[@id="grvRepInfo"]/*'):
32 tds = row.xpath('.//td')
33 if len(tds) == 0:
34 continue
35 metainf = {}
36 for i in range(0, len(table)):
37 metainf[table[i]] = tds[i]
38 district = str(int(metainf['district'].text_content().strip()))
39 party = metainf['party'].text_content().strip()
40 phone = metainf['phone'].text_content().strip()
41 email = metainf['email'].text_content().strip()
42 leg_url = metainf['website'].xpath("./a")[0].attrib['href']
43 name = metainf['name'].text_content().strip()
44 if name == 'Vacant' or re.match(r'^District \d{1,3}$', name):
45 self.warning('District {} appears vacant, and will be skipped'.format(district))
46 continue
47
48 office = metainf['location'].text_content().strip()
49 office = re.sub(
50 ' HOB',
51 ' Anderson House Office Building\n124 North Capitol Avenue\nLansing, MI 48933',
52 office
53 )
54 office = re.sub(
55 ' CB',
56 ' State Capitol Building\nLansing, MI 48909',
57 office
58 )
59
60 leg = Legislator(term=term,
61 chamber=chamber,
62 full_name=name,
63 district=district,
64 party=abbr[party],
65 url=leg_url)
66
67 leg.add_office('capitol', 'Capitol Office',
68 address=office,
69 phone=phone,
70 email=email)
71
72 leg.add_source(url)
73 self.save_legislator(leg)
74
75 def scrape_upper(self, chamber, term):
76 url = 'http://www.senate.michigan.gov/senatorinfo.html'
77 doc = self.lxmlize(url)
78 for row in doc.xpath('//table[not(@class="calendar")]//tr')[3:]:
79 if len(row) != 7:
80 continue
81
82 # party, dist, member, office_phone, office_fax, office_loc
83 party, dist, member, contact, phone, fax, loc = row.getchildren()
84 if (party.text_content().strip() == "" or
85 'Lieutenant Governor' in member.text_content()):
86 continue
87
88 party = abbr[party.text]
89 district = dist.text_content().strip()
90 name = member.text_content().strip()
91 name = re.sub(r'\s+', " ", name)
92
93 if name == 'Vacant':
94 self.info('district %s is vacant', district)
95 continue
96
97 leg_url = member.xpath('a/@href')[0]
98 office_phone = phone.text
99 office_fax = fax.text
100
101 office_loc = loc.text
102 office_loc = re.sub(
103 ' Farnum Bldg',
104 ' Farnum Office Building\n125 West Allegan Street\nLansing, MI 48933',
105 office_loc
106 )
107 office_loc = re.sub(
108 ' Capitol Bldg',
109 ' State Capitol Building\nLansing, MI 48909',
110 office_loc
111 )
112
113 leg = Legislator(term=term, chamber=chamber,
114 district=district,
115 full_name=name,
116 party=party,
117 url=leg_url)
118
119 leg.add_office('capitol', 'Capitol Office',
120 address=office_loc,
121 fax=office_fax,
122 phone=office_phone)
123
124 leg.add_source(url)
125 self.save_legislator(leg)
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/openstates/mi/legislators.py b/openstates/mi/legislators.py
--- a/openstates/mi/legislators.py
+++ b/openstates/mi/legislators.py
@@ -73,7 +73,7 @@
self.save_legislator(leg)
def scrape_upper(self, chamber, term):
- url = 'http://www.senate.michigan.gov/senatorinfo.html'
+ url = 'http://www.senate.michigan.gov/senatorinfo_list.html'
doc = self.lxmlize(url)
for row in doc.xpath('//table[not(@class="calendar")]//tr')[3:]:
if len(row) != 7:
|
{"golden_diff": "diff --git a/openstates/mi/legislators.py b/openstates/mi/legislators.py\n--- a/openstates/mi/legislators.py\n+++ b/openstates/mi/legislators.py\n@@ -73,7 +73,7 @@\n self.save_legislator(leg)\n \n def scrape_upper(self, chamber, term):\n- url = 'http://www.senate.michigan.gov/senatorinfo.html'\n+ url = 'http://www.senate.michigan.gov/senatorinfo_list.html'\n doc = self.lxmlize(url)\n for row in doc.xpath('//table[not(@class=\"calendar\")]//tr')[3:]:\n if len(row) != 7:\n", "issue": "MI scraper failing since at least 2017-04-01\nMI has been failing since 2017-04-01\n\nBased on automated runs it appears that MI has not run successfully in 5 days (2017-04-01).\n\n\n```\n 06:00:31 INFO billy: billy-update abbr=mi\n actions=scrape,import,report\n types=bills,legislators,votes,committees,alldata,events\n sessions=2017-2018\n terms=2017-2018\n06:00:31 INFO scrapelib: GET - http://www.senate.michigan.gov/senatorinfo.html\n File \"/usr/local/bin/billy-update\", line 9, in <module>\n load_entry_point('billy==1.9.0', 'console_scripts', 'billy-update')()\n File \"/opt/openstates/billy/billy/bin/update.py\", line 368, in main\nTraceback (most recent call last):\n run_record += _run_scraper(stype, args, metadata)\n File \"/opt/openstates/billy/billy/bin/update.py\", line 102, in _run_scraper\n response = self.get(url)\n File \"/usr/local/lib/python2.7/dist-packages/requests/sessions.py\", line 501, in get\n return self.request('GET', url, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/scrapelib/__init__.py\", line 272, in request\n raise HTTPError(resp)\nscrapelib.HTTPError: 500 while retrieving http://www.senate.michigan.gov/senatorinfo.html\n File \"/srv/openstates-web/openstates/mi/legislators.py\", line 77, in scrape_upper\n scraper.scrape(chamber, time)\n File \"/srv/openstates-web/openstates/mi/legislators.py\", line 16, in scrape\n return self.scrape_upper(chamber, term)\n doc = self.lxmlize(url)\n File \"/srv/openstates-web/openstates/utils/lxmlize.py\", line 19, in lxmlize\n```\n\nVisit http://bobsled.openstates.org/ for more info.\n\n", "before_files": [{"content": "import re\n\nfrom billy.scrape.legislators import LegislatorScraper, Legislator\nfrom openstates.utils import LXMLMixin\n\nabbr = {'D': 'Democratic', 'R': 'Republican'}\n\n\nclass MILegislatorScraper(LegislatorScraper, LXMLMixin):\n jurisdiction = 'mi'\n\n def scrape(self, chamber, term):\n self.validate_term(term, latest_only=True)\n if chamber == 'lower':\n return self.scrape_lower(chamber, term)\n return self.scrape_upper(chamber, term)\n\n def scrape_lower(self, chamber, term):\n url = 'http://www.house.mi.gov/mhrpublic/frmRepList.aspx'\n table = [\n \"website\",\n \"district\",\n \"name\",\n \"party\",\n \"location\",\n \"phone\",\n \"email\"\n ]\n doc = self.lxmlize(url)\n # skip two rows at top\n for row in doc.xpath('//table[@id=\"grvRepInfo\"]/*'):\n tds = row.xpath('.//td')\n if len(tds) == 0:\n continue\n metainf = {}\n for i in range(0, len(table)):\n metainf[table[i]] = tds[i]\n district = str(int(metainf['district'].text_content().strip()))\n party = metainf['party'].text_content().strip()\n phone = metainf['phone'].text_content().strip()\n email = metainf['email'].text_content().strip()\n leg_url = metainf['website'].xpath(\"./a\")[0].attrib['href']\n name = metainf['name'].text_content().strip()\n if name == 'Vacant' or re.match(r'^District \\d{1,3}$', name):\n self.warning('District {} appears vacant, and will be skipped'.format(district))\n continue\n\n office = 
metainf['location'].text_content().strip()\n office = re.sub(\n ' HOB',\n ' Anderson House Office Building\\n124 North Capitol Avenue\\nLansing, MI 48933',\n office\n )\n office = re.sub(\n ' CB',\n ' State Capitol Building\\nLansing, MI 48909',\n office\n )\n\n leg = Legislator(term=term,\n chamber=chamber,\n full_name=name,\n district=district,\n party=abbr[party],\n url=leg_url)\n\n leg.add_office('capitol', 'Capitol Office',\n address=office,\n phone=phone,\n email=email)\n\n leg.add_source(url)\n self.save_legislator(leg)\n\n def scrape_upper(self, chamber, term):\n url = 'http://www.senate.michigan.gov/senatorinfo.html'\n doc = self.lxmlize(url)\n for row in doc.xpath('//table[not(@class=\"calendar\")]//tr')[3:]:\n if len(row) != 7:\n continue\n\n # party, dist, member, office_phone, office_fax, office_loc\n party, dist, member, contact, phone, fax, loc = row.getchildren()\n if (party.text_content().strip() == \"\" or\n 'Lieutenant Governor' in member.text_content()):\n continue\n\n party = abbr[party.text]\n district = dist.text_content().strip()\n name = member.text_content().strip()\n name = re.sub(r'\\s+', \" \", name)\n\n if name == 'Vacant':\n self.info('district %s is vacant', district)\n continue\n\n leg_url = member.xpath('a/@href')[0]\n office_phone = phone.text\n office_fax = fax.text\n\n office_loc = loc.text\n office_loc = re.sub(\n ' Farnum Bldg',\n ' Farnum Office Building\\n125 West Allegan Street\\nLansing, MI 48933',\n office_loc\n )\n office_loc = re.sub(\n ' Capitol Bldg',\n ' State Capitol Building\\nLansing, MI 48909',\n office_loc\n )\n\n leg = Legislator(term=term, chamber=chamber,\n district=district,\n full_name=name,\n party=party,\n url=leg_url)\n\n leg.add_office('capitol', 'Capitol Office',\n address=office_loc,\n fax=office_fax,\n phone=office_phone)\n\n leg.add_source(url)\n self.save_legislator(leg)\n", "path": "openstates/mi/legislators.py"}], "after_files": [{"content": "import re\n\nfrom billy.scrape.legislators import LegislatorScraper, Legislator\nfrom openstates.utils import LXMLMixin\n\nabbr = {'D': 'Democratic', 'R': 'Republican'}\n\n\nclass MILegislatorScraper(LegislatorScraper, LXMLMixin):\n jurisdiction = 'mi'\n\n def scrape(self, chamber, term):\n self.validate_term(term, latest_only=True)\n if chamber == 'lower':\n return self.scrape_lower(chamber, term)\n return self.scrape_upper(chamber, term)\n\n def scrape_lower(self, chamber, term):\n url = 'http://www.house.mi.gov/mhrpublic/frmRepList.aspx'\n table = [\n \"website\",\n \"district\",\n \"name\",\n \"party\",\n \"location\",\n \"phone\",\n \"email\"\n ]\n doc = self.lxmlize(url)\n # skip two rows at top\n for row in doc.xpath('//table[@id=\"grvRepInfo\"]/*'):\n tds = row.xpath('.//td')\n if len(tds) == 0:\n continue\n metainf = {}\n for i in range(0, len(table)):\n metainf[table[i]] = tds[i]\n district = str(int(metainf['district'].text_content().strip()))\n party = metainf['party'].text_content().strip()\n phone = metainf['phone'].text_content().strip()\n email = metainf['email'].text_content().strip()\n leg_url = metainf['website'].xpath(\"./a\")[0].attrib['href']\n name = metainf['name'].text_content().strip()\n if name == 'Vacant' or re.match(r'^District \\d{1,3}$', name):\n self.warning('District {} appears vacant, and will be skipped'.format(district))\n continue\n\n office = metainf['location'].text_content().strip()\n office = re.sub(\n ' HOB',\n ' Anderson House Office Building\\n124 North Capitol Avenue\\nLansing, MI 48933',\n office\n )\n office = re.sub(\n ' CB',\n ' State 
Capitol Building\\nLansing, MI 48909',\n office\n )\n\n leg = Legislator(term=term,\n chamber=chamber,\n full_name=name,\n district=district,\n party=abbr[party],\n url=leg_url)\n\n leg.add_office('capitol', 'Capitol Office',\n address=office,\n phone=phone,\n email=email)\n\n leg.add_source(url)\n self.save_legislator(leg)\n\n def scrape_upper(self, chamber, term):\n url = 'http://www.senate.michigan.gov/senatorinfo_list.html'\n doc = self.lxmlize(url)\n for row in doc.xpath('//table[not(@class=\"calendar\")]//tr')[3:]:\n if len(row) != 7:\n continue\n\n # party, dist, member, office_phone, office_fax, office_loc\n party, dist, member, contact, phone, fax, loc = row.getchildren()\n if (party.text_content().strip() == \"\" or\n 'Lieutenant Governor' in member.text_content()):\n continue\n\n party = abbr[party.text]\n district = dist.text_content().strip()\n name = member.text_content().strip()\n name = re.sub(r'\\s+', \" \", name)\n\n if name == 'Vacant':\n self.info('district %s is vacant', district)\n continue\n\n leg_url = member.xpath('a/@href')[0]\n office_phone = phone.text\n office_fax = fax.text\n\n office_loc = loc.text\n office_loc = re.sub(\n ' Farnum Bldg',\n ' Farnum Office Building\\n125 West Allegan Street\\nLansing, MI 48933',\n office_loc\n )\n office_loc = re.sub(\n ' Capitol Bldg',\n ' State Capitol Building\\nLansing, MI 48909',\n office_loc\n )\n\n leg = Legislator(term=term, chamber=chamber,\n district=district,\n full_name=name,\n party=party,\n url=leg_url)\n\n leg.add_office('capitol', 'Capitol Office',\n address=office_loc,\n fax=office_fax,\n phone=office_phone)\n\n leg.add_source(url)\n self.save_legislator(leg)\n", "path": "openstates/mi/legislators.py"}]}
| 2,042 | 159 |
gh_patches_debug_25714
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-443
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`mkdocs new` broken under python2
current master, python 2.7.9 virtualenv
only top directory and mkdocs.yml created, no docs dir or index.md
```
(karasu)[lashni@orphan src]$ mkdocs new karasu
Creating project directory: karasu
Writing config file: karasu/mkdocs.yml
Traceback (most recent call last):
File "/home/lashni/dev/karasu/bin/mkdocs", line 9, in <module>
load_entry_point('mkdocs==0.11.1', 'console_scripts', 'mkdocs')()
File "/home/lashni/dev/karasu/src/mkdocs/mkdocs/main.py", line 74, in run_main
main(cmd, args=sys.argv[2:], options=dict(opts))
File "/home/lashni/dev/karasu/src/mkdocs/mkdocs/main.py", line 58, in main
new(args, options)
File "/home/lashni/dev/karasu/src/mkdocs/mkdocs/new.py", line 47, in new
open(config_path, 'w', encoding='utf-8').write(config_text)
TypeError: must be unicode, not str
```
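For context, the failing call is `io.open(config_path, 'w', encoding='utf-8').write(config_text)`: under Python 2, `io.open` in text mode only accepts `unicode`, so writing a plain `str` literal raises exactly this `TypeError`. A minimal standalone sketch of that behaviour (throwaway file name, not mkdocs code):

```
# Python 2 behaviour of io.open in text mode: write() wants unicode, not str.
from io import open

with open('example.yml', 'w', encoding='utf-8') as handle:
    handle.write(u'site_name: My Docs\n')    # ok: unicode literal
    # handle.write('site_name: My Docs\n')   # Python 2: TypeError: must be unicode, not str
```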
current master, python 3.4.3 virtualenv, files/dirs created successfully
```
(test)[lashni@orphan src]$ mkdocs new karasu
Creating project directory: karasu
Writing config file: karasu/mkdocs.yml
Writing initial docs: karasu/docs/index.md
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/new.py`
Content:
```
1 # coding: utf-8
2 from __future__ import print_function
3 import os
4 from io import open
5
6 config_text = 'site_name: My Docs\n'
7 index_text = """# Welcome to MkDocs
8
9 For full documentation visit [mkdocs.org](http://mkdocs.org).
10
11 ## Commands
12
13 * `mkdocs new [dir-name]` - Create a new project.
14 * `mkdocs serve` - Start the live-reloading docs server.
15 * `mkdocs build` - Build the documentation site.
16 * `mkdocs help` - Print this help message.
17
18 ## Project layout
19
20 mkdocs.yml # The configuration file.
21 docs/
22 index.md # The documentation homepage.
23 ... # Other markdown pages, images and other files.
24 """
25
26
27 def new(args, options):
28 if len(args) != 1:
29 print("Usage 'mkdocs new [directory-name]'")
30 return
31
32 output_dir = args[0]
33
34 docs_dir = os.path.join(output_dir, 'docs')
35 config_path = os.path.join(output_dir, 'mkdocs.yml')
36 index_path = os.path.join(docs_dir, 'index.md')
37
38 if os.path.exists(config_path):
39 print('Project already exists.')
40 return
41
42 if not os.path.exists(output_dir):
43 print('Creating project directory: %s' % output_dir)
44 os.mkdir(output_dir)
45
46 print('Writing config file: %s' % config_path)
47 open(config_path, 'w', encoding='utf-8').write(config_text)
48
49 if os.path.exists(index_path):
50 return
51
52 print('Writing initial docs: %s' % index_path)
53 if not os.path.exists(docs_dir):
54 os.mkdir(docs_dir)
55 open(index_path, 'w', encoding='utf-8').write(index_text)
56
```
Path: `mkdocs/main.py`
Content:
```
1 #!/usr/bin/env python
2 # coding: utf-8
3 from __future__ import print_function
4
5 import logging
6 import sys
7
8 from mkdocs import __version__
9 from mkdocs.build import build
10 from mkdocs.config import load_config
11 from mkdocs.exceptions import MkDocsException
12 from mkdocs.gh_deploy import gh_deploy
13 from mkdocs.new import new
14 from mkdocs.serve import serve
15
16
17 def configure_logging(options):
18 '''When a --verbose flag is passed, increase the verbosity of mkdocs'''
19 logger = logging.getLogger('mkdocs')
20 logger.addHandler(logging.StreamHandler())
21 if 'verbose' in options:
22 logger.setLevel(logging.DEBUG)
23 else:
24 logger.setLevel(logging.WARNING)
25
26
27 def arg_to_option(arg):
28 """
29 Convert command line arguments into two-tuples of config key/value pairs.
30 """
31 arg = arg.lstrip('--')
32 option = True
33 if '=' in arg:
34 arg, option = arg.split('=', 1)
35 return (arg.replace('-', '_'), option)
36
37
38 def main(cmd, args, options=None):
39 """
40 Build the documentation, and optionally start the devserver.
41 """
42 configure_logging(options)
43 clean_site_dir = 'clean' in options
44 if cmd == 'serve':
45 config = load_config(options=options)
46 serve(config, options=options)
47 elif cmd == 'build':
48 config = load_config(options=options)
49 build(config, clean_site_dir=clean_site_dir)
50 elif cmd == 'json':
51 config = load_config(options=options)
52 build(config, dump_json=True, clean_site_dir=clean_site_dir)
53 elif cmd == 'gh-deploy':
54 config = load_config(options=options)
55 build(config, clean_site_dir=clean_site_dir)
56 gh_deploy(config)
57 elif cmd == 'new':
58 new(args, options)
59 else:
60 print('MkDocs (version {0})'.format(__version__))
61 print('mkdocs [help|new|build|serve|gh-deploy|json] {options}')
62
63
64 def run_main():
65 """
66 Invokes main() with the contents of sys.argv
67
68 This is a separate function so it can be invoked
69 by a setuptools console_script.
70 """
71 cmd = sys.argv[1] if len(sys.argv) >= 2 else None
72 opts = [arg_to_option(arg) for arg in sys.argv[2:] if arg.startswith('--')]
73 try:
74 main(cmd, args=sys.argv[2:], options=dict(opts))
75 except MkDocsException as e:
76 print(e.args[0], file=sys.stderr)
77
78
79 if __name__ == '__main__':
80 run_main()
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mkdocs/main.py b/mkdocs/main.py
--- a/mkdocs/main.py
+++ b/mkdocs/main.py
@@ -55,7 +55,7 @@
build(config, clean_site_dir=clean_site_dir)
gh_deploy(config)
elif cmd == 'new':
- new(args, options)
+ new(args)
else:
print('MkDocs (version {0})'.format(__version__))
print('mkdocs [help|new|build|serve|gh-deploy|json] {options}')
diff --git a/mkdocs/new.py b/mkdocs/new.py
--- a/mkdocs/new.py
+++ b/mkdocs/new.py
@@ -1,10 +1,13 @@
# coding: utf-8
from __future__ import print_function
+
import os
from io import open
-config_text = 'site_name: My Docs\n'
-index_text = """# Welcome to MkDocs
+from mkdocs import compat
+
+config_text = compat.unicode('site_name: My Docs\n')
+index_text = compat.unicode("""# Welcome to MkDocs
For full documentation visit [mkdocs.org](http://mkdocs.org).
@@ -21,10 +24,11 @@
docs/
index.md # The documentation homepage.
... # Other markdown pages, images and other files.
-"""
+""")
+
+def new(args):
-def new(args, options):
if len(args) != 1:
print("Usage 'mkdocs new [directory-name]'")
return
|
{"golden_diff": "diff --git a/mkdocs/main.py b/mkdocs/main.py\n--- a/mkdocs/main.py\n+++ b/mkdocs/main.py\n@@ -55,7 +55,7 @@\n build(config, clean_site_dir=clean_site_dir)\n gh_deploy(config)\n elif cmd == 'new':\n- new(args, options)\n+ new(args)\n else:\n print('MkDocs (version {0})'.format(__version__))\n print('mkdocs [help|new|build|serve|gh-deploy|json] {options}')\ndiff --git a/mkdocs/new.py b/mkdocs/new.py\n--- a/mkdocs/new.py\n+++ b/mkdocs/new.py\n@@ -1,10 +1,13 @@\n # coding: utf-8\n from __future__ import print_function\n+\n import os\n from io import open\n \n-config_text = 'site_name: My Docs\\n'\n-index_text = \"\"\"# Welcome to MkDocs\n+from mkdocs import compat\n+\n+config_text = compat.unicode('site_name: My Docs\\n')\n+index_text = compat.unicode(\"\"\"# Welcome to MkDocs\n \n For full documentation visit [mkdocs.org](http://mkdocs.org).\n \n@@ -21,10 +24,11 @@\n docs/\n index.md # The documentation homepage.\n ... # Other markdown pages, images and other files.\n-\"\"\"\n+\"\"\")\n+\n \n+def new(args):\n \n-def new(args, options):\n if len(args) != 1:\n print(\"Usage 'mkdocs new [directory-name]'\")\n return\n", "issue": "`mkdocs new` broken under python2\ncurrent master, python 2.7.9 virtualenv\nonly top directory and mkdocs.yml created, no docs dir or index.md\n\n```\n(karasu)[lashni@orphan src]$ mkdocs new karasu\nCreating project directory: karasu\nWriting config file: karasu/mkdocs.yml\nTraceback (most recent call last):\n File \"/home/lashni/dev/karasu/bin/mkdocs\", line 9, in <module>\n load_entry_point('mkdocs==0.11.1', 'console_scripts', 'mkdocs')()\n File \"/home/lashni/dev/karasu/src/mkdocs/mkdocs/main.py\", line 74, in run_main\n main(cmd, args=sys.argv[2:], options=dict(opts))\n File \"/home/lashni/dev/karasu/src/mkdocs/mkdocs/main.py\", line 58, in main\n new(args, options)\n File \"/home/lashni/dev/karasu/src/mkdocs/mkdocs/new.py\", line 47, in new\n open(config_path, 'w', encoding='utf-8').write(config_text)\nTypeError: must be unicode, not str\n```\n\ncurrent master, python 3.4.3 virtualenv, files/dirs created successfully\n\n```\n(test)[lashni@orphan src]$ mkdocs new karasu\nCreating project directory: karasu\nWriting config file: karasu/mkdocs.yml\nWriting initial docs: karasu/docs/index.md\n```\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import print_function\nimport os\nfrom io import open\n\nconfig_text = 'site_name: My Docs\\n'\nindex_text = \"\"\"# Welcome to MkDocs\n\nFor full documentation visit [mkdocs.org](http://mkdocs.org).\n\n## Commands\n\n* `mkdocs new [dir-name]` - Create a new project.\n* `mkdocs serve` - Start the live-reloading docs server.\n* `mkdocs build` - Build the documentation site.\n* `mkdocs help` - Print this help message.\n\n## Project layout\n\n mkdocs.yml # The configuration file.\n docs/\n index.md # The documentation homepage.\n ... 
# Other markdown pages, images and other files.\n\"\"\"\n\n\ndef new(args, options):\n if len(args) != 1:\n print(\"Usage 'mkdocs new [directory-name]'\")\n return\n\n output_dir = args[0]\n\n docs_dir = os.path.join(output_dir, 'docs')\n config_path = os.path.join(output_dir, 'mkdocs.yml')\n index_path = os.path.join(docs_dir, 'index.md')\n\n if os.path.exists(config_path):\n print('Project already exists.')\n return\n\n if not os.path.exists(output_dir):\n print('Creating project directory: %s' % output_dir)\n os.mkdir(output_dir)\n\n print('Writing config file: %s' % config_path)\n open(config_path, 'w', encoding='utf-8').write(config_text)\n\n if os.path.exists(index_path):\n return\n\n print('Writing initial docs: %s' % index_path)\n if not os.path.exists(docs_dir):\n os.mkdir(docs_dir)\n open(index_path, 'w', encoding='utf-8').write(index_text)\n", "path": "mkdocs/new.py"}, {"content": "#!/usr/bin/env python\n# coding: utf-8\nfrom __future__ import print_function\n\nimport logging\nimport sys\n\nfrom mkdocs import __version__\nfrom mkdocs.build import build\nfrom mkdocs.config import load_config\nfrom mkdocs.exceptions import MkDocsException\nfrom mkdocs.gh_deploy import gh_deploy\nfrom mkdocs.new import new\nfrom mkdocs.serve import serve\n\n\ndef configure_logging(options):\n '''When a --verbose flag is passed, increase the verbosity of mkdocs'''\n logger = logging.getLogger('mkdocs')\n logger.addHandler(logging.StreamHandler())\n if 'verbose' in options:\n logger.setLevel(logging.DEBUG)\n else:\n logger.setLevel(logging.WARNING)\n\n\ndef arg_to_option(arg):\n \"\"\"\n Convert command line arguments into two-tuples of config key/value pairs.\n \"\"\"\n arg = arg.lstrip('--')\n option = True\n if '=' in arg:\n arg, option = arg.split('=', 1)\n return (arg.replace('-', '_'), option)\n\n\ndef main(cmd, args, options=None):\n \"\"\"\n Build the documentation, and optionally start the devserver.\n \"\"\"\n configure_logging(options)\n clean_site_dir = 'clean' in options\n if cmd == 'serve':\n config = load_config(options=options)\n serve(config, options=options)\n elif cmd == 'build':\n config = load_config(options=options)\n build(config, clean_site_dir=clean_site_dir)\n elif cmd == 'json':\n config = load_config(options=options)\n build(config, dump_json=True, clean_site_dir=clean_site_dir)\n elif cmd == 'gh-deploy':\n config = load_config(options=options)\n build(config, clean_site_dir=clean_site_dir)\n gh_deploy(config)\n elif cmd == 'new':\n new(args, options)\n else:\n print('MkDocs (version {0})'.format(__version__))\n print('mkdocs [help|new|build|serve|gh-deploy|json] {options}')\n\n\ndef run_main():\n \"\"\"\n Invokes main() with the contents of sys.argv\n\n This is a separate function so it can be invoked\n by a setuptools console_script.\n \"\"\"\n cmd = sys.argv[1] if len(sys.argv) >= 2 else None\n opts = [arg_to_option(arg) for arg in sys.argv[2:] if arg.startswith('--')]\n try:\n main(cmd, args=sys.argv[2:], options=dict(opts))\n except MkDocsException as e:\n print(e.args[0], file=sys.stderr)\n\n\nif __name__ == '__main__':\n run_main()\n", "path": "mkdocs/main.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import print_function\n\nimport os\nfrom io import open\n\nfrom mkdocs import compat\n\nconfig_text = compat.unicode('site_name: My Docs\\n')\nindex_text = compat.unicode(\"\"\"# Welcome to MkDocs\n\nFor full documentation visit [mkdocs.org](http://mkdocs.org).\n\n## Commands\n\n* `mkdocs new [dir-name]` - Create a new project.\n* `mkdocs serve` 
- Start the live-reloading docs server.\n* `mkdocs build` - Build the documentation site.\n* `mkdocs help` - Print this help message.\n\n## Project layout\n\n mkdocs.yml # The configuration file.\n docs/\n index.md # The documentation homepage.\n ... # Other markdown pages, images and other files.\n\"\"\")\n\n\ndef new(args):\n\n if len(args) != 1:\n print(\"Usage 'mkdocs new [directory-name]'\")\n return\n\n output_dir = args[0]\n\n docs_dir = os.path.join(output_dir, 'docs')\n config_path = os.path.join(output_dir, 'mkdocs.yml')\n index_path = os.path.join(docs_dir, 'index.md')\n\n if os.path.exists(config_path):\n print('Project already exists.')\n return\n\n if not os.path.exists(output_dir):\n print('Creating project directory: %s' % output_dir)\n os.mkdir(output_dir)\n\n print('Writing config file: %s' % config_path)\n open(config_path, 'w', encoding='utf-8').write(config_text)\n\n if os.path.exists(index_path):\n return\n\n print('Writing initial docs: %s' % index_path)\n if not os.path.exists(docs_dir):\n os.mkdir(docs_dir)\n open(index_path, 'w', encoding='utf-8').write(index_text)\n", "path": "mkdocs/new.py"}, {"content": "#!/usr/bin/env python\n# coding: utf-8\nfrom __future__ import print_function\n\nimport logging\nimport sys\n\nfrom mkdocs import __version__\nfrom mkdocs.build import build\nfrom mkdocs.config import load_config\nfrom mkdocs.exceptions import MkDocsException\nfrom mkdocs.gh_deploy import gh_deploy\nfrom mkdocs.new import new\nfrom mkdocs.serve import serve\n\n\ndef configure_logging(options):\n '''When a --verbose flag is passed, increase the verbosity of mkdocs'''\n logger = logging.getLogger('mkdocs')\n logger.addHandler(logging.StreamHandler())\n if 'verbose' in options:\n logger.setLevel(logging.DEBUG)\n else:\n logger.setLevel(logging.WARNING)\n\n\ndef arg_to_option(arg):\n \"\"\"\n Convert command line arguments into two-tuples of config key/value pairs.\n \"\"\"\n arg = arg.lstrip('--')\n option = True\n if '=' in arg:\n arg, option = arg.split('=', 1)\n return (arg.replace('-', '_'), option)\n\n\ndef main(cmd, args, options=None):\n \"\"\"\n Build the documentation, and optionally start the devserver.\n \"\"\"\n configure_logging(options)\n clean_site_dir = 'clean' in options\n if cmd == 'serve':\n config = load_config(options=options)\n serve(config, options=options)\n elif cmd == 'build':\n config = load_config(options=options)\n build(config, clean_site_dir=clean_site_dir)\n elif cmd == 'json':\n config = load_config(options=options)\n build(config, dump_json=True, clean_site_dir=clean_site_dir)\n elif cmd == 'gh-deploy':\n config = load_config(options=options)\n build(config, clean_site_dir=clean_site_dir)\n gh_deploy(config)\n elif cmd == 'new':\n new(args)\n else:\n print('MkDocs (version {0})'.format(__version__))\n print('mkdocs [help|new|build|serve|gh-deploy|json] {options}')\n\n\ndef run_main():\n \"\"\"\n Invokes main() with the contents of sys.argv\n\n This is a separate function so it can be invoked\n by a setuptools console_script.\n \"\"\"\n cmd = sys.argv[1] if len(sys.argv) >= 2 else None\n opts = [arg_to_option(arg) for arg in sys.argv[2:] if arg.startswith('--')]\n try:\n main(cmd, args=sys.argv[2:], options=dict(opts))\n except MkDocsException as e:\n print(e.args[0], file=sys.stderr)\n\n\nif __name__ == '__main__':\n run_main()\n", "path": "mkdocs/main.py"}]}
| 1,816 | 346 |
gh_patches_debug_10062
|
rasdani/github-patches
|
git_diff
|
celery__celery-3752
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Celery Worker crashing after first task with TypeError: 'NoneType' object is not callable
## Checklist
- [X] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
```
software -> celery:4.0.0 (latentcall) kombu:4.0.0 py:3.4.3
billiard:3.5.0.2 py-amqp:2.1.1
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.default.Loader
settings -> transport:amqp results:disabled
```
- [X] I have verified that the issue exists against the `master` branch of Celery.
Yes I've tested and it behaves the same using master.
## Steps to reproduce
Not exactly sure, because other machines with the same specs and requirements are working.
## Expected behavior
Should consume tasks.
## Actual behavior
A task is accepted, then a traceback is logged, then the worker reconnects to the broker for some reason. This repeats forever:
```
[2016-11-23 23:09:00,468: INFO/MainProcess] Connected to amqp://user:**@10.136.131.6:5672//
[2016-11-23 23:09:00,484: INFO/MainProcess] mingle: searching for neighbors
[2016-11-23 23:09:01,921: INFO/MainProcess] mingle: sync with 1 nodes
[2016-11-23 23:09:01,922: INFO/MainProcess] mingle: sync complete
[2016-11-23 23:09:01,970: INFO/MainProcess] Received task: tasks.calculate_user_running_total[ddd103af-d527-4564-83f8-96b747767a0c]
[2016-11-23 23:09:01,972: CRITICAL/MainProcess] Unrecoverable error: TypeError("'NoneType' object is not callable",)
Traceback (most recent call last):
File "./venv/lib/python3.4/site-packages/celery/worker/worker.py", line 203, in start
self.blueprint.start(self)
File "./venv/lib/python3.4/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "./venv/lib/python3.4/site-packages/celery/bootsteps.py", line 370, in start
return self.obj.start()
File "./venv/lib/python3.4/site-packages/celery/worker/consumer/consumer.py", line 318, in start
blueprint.start(self)
File "./venv/lib/python3.4/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "./venv/lib/python3.4/site-packages/celery/worker/consumer/consumer.py", line 584, in start
c.loop(*c.loop_args())
File "./venv/lib/python3.4/site-packages/celery/worker/loops.py", line 47, in asynloop
consumer.consume()
File "./venv/lib/python3.4/site-packages/kombu/messaging.py", line 470, in consume
self._basic_consume(T, no_ack=no_ack, nowait=False)
File "./venv/lib/python3.4/site-packages/kombu/messaging.py", line 591, in _basic_consume
no_ack=no_ack, nowait=nowait)
File "./venv/lib/python3.4/site-packages/kombu/entity.py", line 737, in consume
arguments=self.consumer_arguments)
File "./venv/lib/python3.4/site-packages/amqp/channel.py", line 1578, in basic_consume
wait=None if nowait else spec.Basic.ConsumeOk,
File "./venv/lib/python3.4/site-packages/amqp/abstract_channel.py", line 73, in send_method
return self.wait(wait, returns_tuple=returns_tuple)
File "./venv/lib/python3.4/site-packages/amqp/abstract_channel.py", line 93, in wait
self.connection.drain_events(timeout=timeout)
File "./venv/lib/python3.4/site-packages/amqp/connection.py", line 464, in drain_events
return self.blocking_read(timeout)
File "./venv/lib/python3.4/site-packages/amqp/connection.py", line 469, in blocking_read
return self.on_inbound_frame(frame)
File "./venv/lib/python3.4/site-packages/amqp/method_framing.py", line 88, in on_frame
callback(channel, msg.frame_method, msg.frame_args, msg)
File "./venv/lib/python3.4/site-packages/amqp/connection.py", line 473, in on_inbound_method
method_sig, payload, content,
File "./venv/lib/python3.4/site-packages/amqp/abstract_channel.py", line 142, in dispatch_method
listener(*args)
File "./venv/lib/python3.4/site-packages/amqp/channel.py", line 1613, in _on_basic_deliver
fun(msg)
File "./venv/lib/python3.4/site-packages/kombu/messaging.py", line 617, in _receive_callback
return on_m(message) if on_m else self.receive(decoded, message)
File "./venv/lib/python3.4/site-packages/celery/worker/consumer/consumer.py", line 558, in on_task_received
callbacks,
File "./venv/lib/python3.4/site-packages/celery/worker/strategy.py", line 145, in task_message_handler
handle(req)
File "./venv/lib/python3.4/site-packages/celery/worker/worker.py", line 221, in _process_task_sem
return self._quick_acquire(self._process_task, req)
File "./venv/lib/python3.4/site-packages/kombu/async/semaphore.py", line 62, in acquire
callback(*partial_args, **partial_kwargs)
File "./venv/lib/python3.4/site-packages/celery/worker/worker.py", line 226, in _process_task
req.execute_using_pool(self.pool)
File "./venv/lib/python3.4/site-packages/celery/worker/request.py", line 532, in execute_using_pool
correlation_id=task_id,
File "./venv/lib/python3.4/site-packages/celery/concurrency/base.py", line 155, in apply_async
**options)
File "./venv/lib/python3.4/site-packages/billiard/pool.py", line 1487, in apply_async
self._quick_put((TASK, (result._job, None, func, args, kwds)))
TypeError: 'NoneType' object is not callable
```
The above lines keep repeating every few seconds, and no tasks are consumed from the queue.
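The last frame points at `self._quick_put` being `None`: the pool's write handler appears to be wired up only when the worker registers with the event loop, so a task delivered before that registration hits an unset callable. A toy illustration of that kind of race (plain Python, not celery's actual classes; the handler name mirrors the traceback):

```
# Toy model: a handler slot that is only assigned during event-loop registration.
class Pool(object):
    def __init__(self):
        self._quick_put = None              # assigned later, on registration

    def register_with_event_loop(self):
        self._quick_put = lambda job: None  # the real pool wires up its outqueue here

pool = Pool()
try:
    pool._quick_put(('TASK', 1))            # task arrives before registration
except TypeError as exc:
    print(exc)                              # 'NoneType' object is not callable
pool.register_with_event_loop()
pool._quick_put(('TASK', 1))                # fine once registration has happened
```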
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `celery/worker/loops.py`
Content:
```
1 """The consumers highly-optimized inner loop."""
2 from __future__ import absolute_import, unicode_literals
3 import errno
4 import socket
5 from celery import bootsteps
6 from celery.exceptions import WorkerShutdown, WorkerTerminate, WorkerLostError
7 from celery.utils.log import get_logger
8 from . import state
9
10 __all__ = ['asynloop', 'synloop']
11
12 # pylint: disable=redefined-outer-name
13 # We cache globals and attribute lookups, so disable this warning.
14
15 logger = get_logger(__name__)
16
17
18 def _quick_drain(connection, timeout=0.1):
19 try:
20 connection.drain_events(timeout=timeout)
21 except Exception as exc: # pylint: disable=broad-except
22 exc_errno = getattr(exc, 'errno', None)
23 if exc_errno is not None and exc_errno != errno.EAGAIN:
24 raise
25
26
27 def _enable_amqheartbeats(timer, connection, rate=2.0):
28 if connection:
29 tick = connection.heartbeat_check
30 heartbeat = connection.get_heartbeat_interval() # negotiated
31 if heartbeat and connection.supports_heartbeats:
32 timer.call_repeatedly(heartbeat / rate, tick, (rate,))
33
34
35 def asynloop(obj, connection, consumer, blueprint, hub, qos,
36 heartbeat, clock, hbrate=2.0):
37 """Non-blocking event loop."""
38 RUN = bootsteps.RUN
39 update_qos = qos.update
40 errors = connection.connection_errors
41
42 on_task_received = obj.create_task_handler()
43
44 _enable_amqheartbeats(hub.timer, connection, rate=hbrate)
45
46 consumer.on_message = on_task_received
47 consumer.consume()
48 obj.on_ready()
49 obj.controller.register_with_event_loop(hub)
50 obj.register_with_event_loop(hub)
51
52 # did_start_ok will verify that pool processes were able to start,
53 # but this will only work the first time we start, as
54 # maxtasksperchild will mess up metrics.
55 if not obj.restart_count and not obj.pool.did_start_ok():
56 raise WorkerLostError('Could not start worker processes')
57
58 # consumer.consume() may have prefetched up to our
59 # limit - drain an event so we're in a clean state
60 # prior to starting our event loop.
61 if connection.transport.driver_type == 'amqp':
62 hub.call_soon(_quick_drain, connection)
63
64 # FIXME: Use loop.run_forever
65 # Tried and works, but no time to test properly before release.
66 hub.propagate_errors = errors
67 loop = hub.create_loop()
68
69 try:
70 while blueprint.state == RUN and obj.connection:
71 # shutdown if signal handlers told us to.
72 should_stop, should_terminate = (
73 state.should_stop, state.should_terminate,
74 )
75 # False == EX_OK, so must use is not False
76 if should_stop is not None and should_stop is not False:
77 raise WorkerShutdown(should_stop)
78 elif should_terminate is not None and should_stop is not False:
79 raise WorkerTerminate(should_terminate)
80
81 # We only update QoS when there's no more messages to read.
82 # This groups together qos calls, and makes sure that remote
83 # control commands will be prioritized over task messages.
84 if qos.prev != qos.value:
85 update_qos()
86
87 try:
88 next(loop)
89 except StopIteration:
90 loop = hub.create_loop()
91 finally:
92 try:
93 hub.reset()
94 except Exception as exc: # pylint: disable=broad-except
95 logger.exception(
96 'Error cleaning up after event loop: %r', exc)
97
98
99 def synloop(obj, connection, consumer, blueprint, hub, qos,
100 heartbeat, clock, hbrate=2.0, **kwargs):
101 """Fallback blocking event loop for transports that doesn't support AIO."""
102 RUN = bootsteps.RUN
103 on_task_received = obj.create_task_handler()
104 perform_pending_operations = obj.perform_pending_operations
105 if getattr(obj.pool, 'is_green', False):
106 _enable_amqheartbeats(obj.timer, connection, rate=hbrate)
107 consumer.on_message = on_task_received
108 consumer.consume()
109
110 obj.on_ready()
111
112 while blueprint.state == RUN and obj.connection:
113 state.maybe_shutdown()
114 if qos.prev != qos.value:
115 qos.update()
116 try:
117 perform_pending_operations()
118 connection.drain_events(timeout=2.0)
119 except socket.timeout:
120 pass
121 except socket.error:
122 if blueprint.state == RUN:
123 raise
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/celery/worker/loops.py b/celery/worker/loops.py
--- a/celery/worker/loops.py
+++ b/celery/worker/loops.py
@@ -44,10 +44,10 @@
_enable_amqheartbeats(hub.timer, connection, rate=hbrate)
consumer.on_message = on_task_received
- consumer.consume()
- obj.on_ready()
obj.controller.register_with_event_loop(hub)
obj.register_with_event_loop(hub)
+ consumer.consume()
+ obj.on_ready()
# did_start_ok will verify that pool processes were able to start,
# but this will only work the first time we start, as
|
{"golden_diff": "diff --git a/celery/worker/loops.py b/celery/worker/loops.py\n--- a/celery/worker/loops.py\n+++ b/celery/worker/loops.py\n@@ -44,10 +44,10 @@\n _enable_amqheartbeats(hub.timer, connection, rate=hbrate)\n \n consumer.on_message = on_task_received\n- consumer.consume()\n- obj.on_ready()\n obj.controller.register_with_event_loop(hub)\n obj.register_with_event_loop(hub)\n+ consumer.consume()\n+ obj.on_ready()\n \n # did_start_ok will verify that pool processes were able to start,\n # but this will only work the first time we start, as\n", "issue": "Celery Worker crashing after first task with TypeError: 'NoneType' object is not callable\n## Checklist\r\n\r\n- [X] I have included the output of ``celery -A proj report`` in the issue.\r\n (if you are not able to do this, then at least specify the Celery\r\n version affected).\r\n```\r\nsoftware -> celery:4.0.0 (latentcall) kombu:4.0.0 py:3.4.3\r\n billiard:3.5.0.2 py-amqp:2.1.1\r\nplatform -> system:Linux arch:64bit, ELF imp:CPython\r\nloader -> celery.loaders.default.Loader\r\nsettings -> transport:amqp results:disabled\r\n```\r\n- [X] I have verified that the issue exists against the `master` branch of Celery.\r\nYes I've tested and it behaves the same using master.\r\n\r\n## Steps to reproduce\r\nNot exactly sure, because other machines with the same specs and requirements are working.\r\n\r\n## Expected behavior\r\nShould consume tasks.\r\n\r\n## Actual behavior\r\nA task is accepted, then a traceback is logged, then the worker reconnects to the broker for some reason. This repeats forever:\r\n\r\n```\r\n[2016-11-23 23:09:00,468: INFO/MainProcess] Connected to amqp://user:**@10.136.131.6:5672//\r\n[2016-11-23 23:09:00,484: INFO/MainProcess] mingle: searching for neighbors\r\n[2016-11-23 23:09:01,921: INFO/MainProcess] mingle: sync with 1 nodes\r\n[2016-11-23 23:09:01,922: INFO/MainProcess] mingle: sync complete\r\n[2016-11-23 23:09:01,970: INFO/MainProcess] Received task: tasks.calculate_user_running_total[ddd103af-d527-4564-83f8-96b747767a0c]\r\n[2016-11-23 23:09:01,972: CRITICAL/MainProcess] Unrecoverable error: TypeError(\"'NoneType' object is not callable\",)\r\nTraceback (most recent call last):\r\n File \"./venv/lib/python3.4/site-packages/celery/worker/worker.py\", line 203, in start\r\n self.blueprint.start(self)\r\n File \"./venv/lib/python3.4/site-packages/celery/bootsteps.py\", line 119, in start\r\n step.start(parent)\r\n File \"./venv/lib/python3.4/site-packages/celery/bootsteps.py\", line 370, in start\r\n return self.obj.start()\r\n File \"./venv/lib/python3.4/site-packages/celery/worker/consumer/consumer.py\", line 318, in start\r\n blueprint.start(self)\r\n File \"./venv/lib/python3.4/site-packages/celery/bootsteps.py\", line 119, in start\r\n step.start(parent)\r\n File \"./venv/lib/python3.4/site-packages/celery/worker/consumer/consumer.py\", line 584, in start\r\n c.loop(*c.loop_args())\r\n File \"./venv/lib/python3.4/site-packages/celery/worker/loops.py\", line 47, in asynloop\r\n consumer.consume()\r\n File \"./venv/lib/python3.4/site-packages/kombu/messaging.py\", line 470, in consume\r\n self._basic_consume(T, no_ack=no_ack, nowait=False)\r\n File \"./venv/lib/python3.4/site-packages/kombu/messaging.py\", line 591, in _basic_consume\r\n no_ack=no_ack, nowait=nowait)\r\n File \"./venv/lib/python3.4/site-packages/kombu/entity.py\", line 737, in consume\r\n arguments=self.consumer_arguments)\r\n File \"./venv/lib/python3.4/site-packages/amqp/channel.py\", line 1578, in basic_consume\r\n wait=None if 
nowait else spec.Basic.ConsumeOk,\r\n File \"./venv/lib/python3.4/site-packages/amqp/abstract_channel.py\", line 73, in send_method\r\n return self.wait(wait, returns_tuple=returns_tuple)\r\n File \"./venv/lib/python3.4/site-packages/amqp/abstract_channel.py\", line 93, in wait\r\n self.connection.drain_events(timeout=timeout)\r\n File \"./venv/lib/python3.4/site-packages/amqp/connection.py\", line 464, in drain_events\r\n return self.blocking_read(timeout)\r\n File \"./venv/lib/python3.4/site-packages/amqp/connection.py\", line 469, in blocking_read\r\n return self.on_inbound_frame(frame)\r\n File \"./venv/lib/python3.4/site-packages/amqp/method_framing.py\", line 88, in on_frame\r\n callback(channel, msg.frame_method, msg.frame_args, msg)\r\n File \"./venv/lib/python3.4/site-packages/amqp/connection.py\", line 473, in on_inbound_method\r\n method_sig, payload, content,\r\n File \"./venv/lib/python3.4/site-packages/amqp/abstract_channel.py\", line 142, in dispatch_method\r\n listener(*args)\r\n File \"./venv/lib/python3.4/site-packages/amqp/channel.py\", line 1613, in _on_basic_deliver\r\n fun(msg)\r\n File \"./venv/lib/python3.4/site-packages/kombu/messaging.py\", line 617, in _receive_callback\r\n return on_m(message) if on_m else self.receive(decoded, message)\r\n File \"./venv/lib/python3.4/site-packages/celery/worker/consumer/consumer.py\", line 558, in on_task_received\r\n callbacks,\r\n File \"./venv/lib/python3.4/site-packages/celery/worker/strategy.py\", line 145, in task_message_handler\r\n handle(req)\r\n File \"./venv/lib/python3.4/site-packages/celery/worker/worker.py\", line 221, in _process_task_sem\r\n return self._quick_acquire(self._process_task, req)\r\n File \"./venv/lib/python3.4/site-packages/kombu/async/semaphore.py\", line 62, in acquire\r\n callback(*partial_args, **partial_kwargs)\r\n File \"./venv/lib/python3.4/site-packages/celery/worker/worker.py\", line 226, in _process_task\r\n req.execute_using_pool(self.pool)\r\n File \"./venv/lib/python3.4/site-packages/celery/worker/request.py\", line 532, in execute_using_pool\r\n correlation_id=task_id,\r\n File \"./venv/lib/python3.4/site-packages/celery/concurrency/base.py\", line 155, in apply_async\r\n **options)\r\n File \"./venv/lib/python3.4/site-packages/billiard/pool.py\", line 1487, in apply_async\r\n self._quick_put((TASK, (result._job, None, func, args, kwds)))\r\nTypeError: 'NoneType' object is not callable\r\n```\r\nThe above lines are keep repeating every few seconds and no tasks are consumed from the queue.\n", "before_files": [{"content": "\"\"\"The consumers highly-optimized inner loop.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\nimport errno\nimport socket\nfrom celery import bootsteps\nfrom celery.exceptions import WorkerShutdown, WorkerTerminate, WorkerLostError\nfrom celery.utils.log import get_logger\nfrom . 
import state\n\n__all__ = ['asynloop', 'synloop']\n\n# pylint: disable=redefined-outer-name\n# We cache globals and attribute lookups, so disable this warning.\n\nlogger = get_logger(__name__)\n\n\ndef _quick_drain(connection, timeout=0.1):\n try:\n connection.drain_events(timeout=timeout)\n except Exception as exc: # pylint: disable=broad-except\n exc_errno = getattr(exc, 'errno', None)\n if exc_errno is not None and exc_errno != errno.EAGAIN:\n raise\n\n\ndef _enable_amqheartbeats(timer, connection, rate=2.0):\n if connection:\n tick = connection.heartbeat_check\n heartbeat = connection.get_heartbeat_interval() # negotiated\n if heartbeat and connection.supports_heartbeats:\n timer.call_repeatedly(heartbeat / rate, tick, (rate,))\n\n\ndef asynloop(obj, connection, consumer, blueprint, hub, qos,\n heartbeat, clock, hbrate=2.0):\n \"\"\"Non-blocking event loop.\"\"\"\n RUN = bootsteps.RUN\n update_qos = qos.update\n errors = connection.connection_errors\n\n on_task_received = obj.create_task_handler()\n\n _enable_amqheartbeats(hub.timer, connection, rate=hbrate)\n\n consumer.on_message = on_task_received\n consumer.consume()\n obj.on_ready()\n obj.controller.register_with_event_loop(hub)\n obj.register_with_event_loop(hub)\n\n # did_start_ok will verify that pool processes were able to start,\n # but this will only work the first time we start, as\n # maxtasksperchild will mess up metrics.\n if not obj.restart_count and not obj.pool.did_start_ok():\n raise WorkerLostError('Could not start worker processes')\n\n # consumer.consume() may have prefetched up to our\n # limit - drain an event so we're in a clean state\n # prior to starting our event loop.\n if connection.transport.driver_type == 'amqp':\n hub.call_soon(_quick_drain, connection)\n\n # FIXME: Use loop.run_forever\n # Tried and works, but no time to test properly before release.\n hub.propagate_errors = errors\n loop = hub.create_loop()\n\n try:\n while blueprint.state == RUN and obj.connection:\n # shutdown if signal handlers told us to.\n should_stop, should_terminate = (\n state.should_stop, state.should_terminate,\n )\n # False == EX_OK, so must use is not False\n if should_stop is not None and should_stop is not False:\n raise WorkerShutdown(should_stop)\n elif should_terminate is not None and should_stop is not False:\n raise WorkerTerminate(should_terminate)\n\n # We only update QoS when there's no more messages to read.\n # This groups together qos calls, and makes sure that remote\n # control commands will be prioritized over task messages.\n if qos.prev != qos.value:\n update_qos()\n\n try:\n next(loop)\n except StopIteration:\n loop = hub.create_loop()\n finally:\n try:\n hub.reset()\n except Exception as exc: # pylint: disable=broad-except\n logger.exception(\n 'Error cleaning up after event loop: %r', exc)\n\n\ndef synloop(obj, connection, consumer, blueprint, hub, qos,\n heartbeat, clock, hbrate=2.0, **kwargs):\n \"\"\"Fallback blocking event loop for transports that doesn't support AIO.\"\"\"\n RUN = bootsteps.RUN\n on_task_received = obj.create_task_handler()\n perform_pending_operations = obj.perform_pending_operations\n if getattr(obj.pool, 'is_green', False):\n _enable_amqheartbeats(obj.timer, connection, rate=hbrate)\n consumer.on_message = on_task_received\n consumer.consume()\n\n obj.on_ready()\n\n while blueprint.state == RUN and obj.connection:\n state.maybe_shutdown()\n if qos.prev != qos.value:\n qos.update()\n try:\n perform_pending_operations()\n connection.drain_events(timeout=2.0)\n except 
socket.timeout:\n pass\n except socket.error:\n if blueprint.state == RUN:\n raise\n", "path": "celery/worker/loops.py"}], "after_files": [{"content": "\"\"\"The consumers highly-optimized inner loop.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\nimport errno\nimport socket\nfrom celery import bootsteps\nfrom celery.exceptions import WorkerShutdown, WorkerTerminate, WorkerLostError\nfrom celery.utils.log import get_logger\nfrom . import state\n\n__all__ = ['asynloop', 'synloop']\n\n# pylint: disable=redefined-outer-name\n# We cache globals and attribute lookups, so disable this warning.\n\nlogger = get_logger(__name__)\n\n\ndef _quick_drain(connection, timeout=0.1):\n try:\n connection.drain_events(timeout=timeout)\n except Exception as exc: # pylint: disable=broad-except\n exc_errno = getattr(exc, 'errno', None)\n if exc_errno is not None and exc_errno != errno.EAGAIN:\n raise\n\n\ndef _enable_amqheartbeats(timer, connection, rate=2.0):\n if connection:\n tick = connection.heartbeat_check\n heartbeat = connection.get_heartbeat_interval() # negotiated\n if heartbeat and connection.supports_heartbeats:\n timer.call_repeatedly(heartbeat / rate, tick, (rate,))\n\n\ndef asynloop(obj, connection, consumer, blueprint, hub, qos,\n heartbeat, clock, hbrate=2.0):\n \"\"\"Non-blocking event loop.\"\"\"\n RUN = bootsteps.RUN\n update_qos = qos.update\n errors = connection.connection_errors\n\n on_task_received = obj.create_task_handler()\n\n _enable_amqheartbeats(hub.timer, connection, rate=hbrate)\n\n consumer.on_message = on_task_received\n obj.controller.register_with_event_loop(hub)\n obj.register_with_event_loop(hub)\n consumer.consume()\n obj.on_ready()\n\n # did_start_ok will verify that pool processes were able to start,\n # but this will only work the first time we start, as\n # maxtasksperchild will mess up metrics.\n if not obj.restart_count and not obj.pool.did_start_ok():\n raise WorkerLostError('Could not start worker processes')\n\n # consumer.consume() may have prefetched up to our\n # limit - drain an event so we're in a clean state\n # prior to starting our event loop.\n if connection.transport.driver_type == 'amqp':\n hub.call_soon(_quick_drain, connection)\n\n # FIXME: Use loop.run_forever\n # Tried and works, but no time to test properly before release.\n hub.propagate_errors = errors\n loop = hub.create_loop()\n\n try:\n while blueprint.state == RUN and obj.connection:\n # shutdown if signal handlers told us to.\n should_stop, should_terminate = (\n state.should_stop, state.should_terminate,\n )\n # False == EX_OK, so must use is not False\n if should_stop is not None and should_stop is not False:\n raise WorkerShutdown(should_stop)\n elif should_terminate is not None and should_stop is not False:\n raise WorkerTerminate(should_terminate)\n\n # We only update QoS when there's no more messages to read.\n # This groups together qos calls, and makes sure that remote\n # control commands will be prioritized over task messages.\n if qos.prev != qos.value:\n update_qos()\n\n try:\n next(loop)\n except StopIteration:\n loop = hub.create_loop()\n finally:\n try:\n hub.reset()\n except Exception as exc: # pylint: disable=broad-except\n logger.exception(\n 'Error cleaning up after event loop: %r', exc)\n\n\ndef synloop(obj, connection, consumer, blueprint, hub, qos,\n heartbeat, clock, hbrate=2.0, **kwargs):\n \"\"\"Fallback blocking event loop for transports that doesn't support AIO.\"\"\"\n RUN = bootsteps.RUN\n on_task_received = obj.create_task_handler()\n 
perform_pending_operations = obj.perform_pending_operations\n if getattr(obj.pool, 'is_green', False):\n _enable_amqheartbeats(obj.timer, connection, rate=hbrate)\n consumer.on_message = on_task_received\n consumer.consume()\n\n obj.on_ready()\n\n while blueprint.state == RUN and obj.connection:\n state.maybe_shutdown()\n if qos.prev != qos.value:\n qos.update()\n try:\n perform_pending_operations()\n connection.drain_events(timeout=2.0)\n except socket.timeout:\n pass\n except socket.error:\n if blueprint.state == RUN:\n raise\n", "path": "celery/worker/loops.py"}]}
| 3,137 | 158 |
gh_patches_debug_31150
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-834
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
W2030 - Number expected as String
cfn-lint 0.19.0
W2030 - Number expected as String
The following warning has started to appear in the latest version:
W2030 You must specify a valid allowed value for NAME_OF_THE_PARAMETER
Template example:
```
LogsRetentionLength:
AllowedValues:
- 1
- 3
Default: 1
Description: The retention length of the logs.
Type: Number
```
```
LogGroup:
Type: AWS::Logs::LogGroup
Properties:
RetentionInDays: !Ref LogsRetentionLength
LogGroupName: '/name'
```
Error:
```
W2030 You must specify a valid allowed value for LogsRetentionLength (1).
Valid values are ['1', '3', '5', '7', '14', '30', '60', '90', '120', '150', '180', '365', '400', '545', '731', '1827', '3653']
W2030 You must specify a valid allowed value for LogsRetentionLength (3).
Valid values are ['1', '3', '5', '7', '14', '30', '60', '90', '120', '150', '180', '365', '400', '545', '731', '1827', '3653']
W2030 You must specify a valid Default value for LogsRetentionLength (1).
Valid values are ['1', '3', '5', '7', '14', '30', '60', '90', '120', '150', '180', '365', '400', '545', '731', '1827', '3653']
```
Changing the templates as follows fixes the issue, but it's incorrect:
```
LogsRetentionLength:
AllowedValues:
- '1'
- '3'
Default: '1'
Description: The retention length of the logs.
Type: Number
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/parameters/AllowedValue.py`
Content:
```
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import six
18 from cfnlint import CloudFormationLintRule
19 from cfnlint import RuleMatch
20
21 from cfnlint.helpers import RESOURCE_SPECS
22
23
24 class AllowedValue(CloudFormationLintRule):
25 """Check if parameters have a valid value"""
26 id = 'W2030'
27 shortdesc = 'Check if parameters have a valid value'
28 description = 'Check if parameters have a valid value in case of an enumator. The Parameter''s allowed values is based on the usages in property (Ref)'
29 source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'
30 tags = ['resources', 'property', 'allowed value']
31
32 def initialize(self, cfn):
33 """Initialize the rule"""
34 for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):
35 self.resource_property_types.append(resource_type_spec)
36 for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):
37 self.resource_sub_property_types.append(property_type_spec)
38
39 def check_value_ref(self, value, **kwargs):
40 """Check Ref"""
41 matches = []
42
43 allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})
44 cfn = kwargs.get('cfn')
45
46 if allowed_value_specs:
47 if value in cfn.template.get('Parameters', {}):
48 param = cfn.template.get('Parameters').get(value, {})
49 parameter_values = param.get('AllowedValues')
50 default_value = param.get('Default')
51 parameter_type = param.get('Type')
52 if isinstance(parameter_type, six.string_types):
53 if ((not parameter_type.startswith('List<')) and
54 (not parameter_type.startswith('AWS::SSM::Parameter::Value<')) and
55 parameter_type not in ['CommaDelimitedList']):
56 # Check Allowed Values
57 if parameter_values:
58 for index, allowed_value in enumerate(parameter_values):
59 if allowed_value not in allowed_value_specs:
60 param_path = ['Parameters', value, 'AllowedValues', index]
61 message = 'You must specify a valid allowed value for {0} ({1}).\nValid values are {2}'
62 matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))
63 elif default_value:
64 # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)
65 if default_value not in allowed_value_specs:
66 param_path = ['Parameters', value, 'Default']
67 message = 'You must specify a valid Default value for {0} ({1}).\nValid values are {2}'
68 matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))
69
70 return matches
71
72 def check(self, cfn, properties, value_specs, property_specs, path):
73 """Check itself"""
74 matches = list()
75 for p_value, p_path in properties.items_safe(path[:]):
76 for prop in p_value:
77 if prop in value_specs:
78 value = value_specs.get(prop).get('Value', {})
79 if value:
80 value_type = value.get('ValueType', '')
81 property_type = property_specs.get('Properties').get(prop).get('Type')
82 matches.extend(
83 cfn.check_value(
84 p_value, prop, p_path,
85 check_ref=self.check_value_ref,
86 value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),
87 cfn=cfn, property_type=property_type, property_name=prop
88 )
89 )
90
91 return matches
92
93 def match_resource_sub_properties(self, properties, property_type, path, cfn):
94 """Match for sub properties"""
95 matches = list()
96
97 specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})
98 property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)
99 matches.extend(self.check(cfn, properties, specs, property_specs, path))
100
101 return matches
102
103 def match_resource_properties(self, properties, resource_type, path, cfn):
104 """Check CloudFormation Properties"""
105 matches = list()
106
107 specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})
108 resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)
109 matches.extend(self.check(cfn, properties, specs, resource_specs, path))
110
111 return matches
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/cfnlint/rules/parameters/AllowedValue.py b/src/cfnlint/rules/parameters/AllowedValue.py
--- a/src/cfnlint/rules/parameters/AllowedValue.py
+++ b/src/cfnlint/rules/parameters/AllowedValue.py
@@ -56,13 +56,13 @@
# Check Allowed Values
if parameter_values:
for index, allowed_value in enumerate(parameter_values):
- if allowed_value not in allowed_value_specs:
+ if str(allowed_value) not in allowed_value_specs:
param_path = ['Parameters', value, 'AllowedValues', index]
message = 'You must specify a valid allowed value for {0} ({1}).\nValid values are {2}'
matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))
- elif default_value:
+ if default_value:
# Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)
- if default_value not in allowed_value_specs:
+ if str(default_value) not in allowed_value_specs:
param_path = ['Parameters', value, 'Default']
message = 'You must specify a valid Default value for {0} ({1}).\nValid values are {2}'
matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))
|
{"golden_diff": "diff --git a/src/cfnlint/rules/parameters/AllowedValue.py b/src/cfnlint/rules/parameters/AllowedValue.py\n--- a/src/cfnlint/rules/parameters/AllowedValue.py\n+++ b/src/cfnlint/rules/parameters/AllowedValue.py\n@@ -56,13 +56,13 @@\n # Check Allowed Values\n if parameter_values:\n for index, allowed_value in enumerate(parameter_values):\n- if allowed_value not in allowed_value_specs:\n+ if str(allowed_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'AllowedValues', index]\n message = 'You must specify a valid allowed value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))\n- elif default_value:\n+ if default_value:\n # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)\n- if default_value not in allowed_value_specs:\n+ if str(default_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'Default']\n message = 'You must specify a valid Default value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))\n", "issue": "W2030 - Number expected as String \ncfn-lint 0.19.0\r\n\r\nW2030 - Number expected as String\r\n\r\nThe following warning has started to appear in the latest version:\r\nW2030 You must specify a valid allowed value for NAME_OF_THE_PARAMETER\r\n\r\nTemplate example:\r\n```\r\n LogsRetentionLength:\r\n AllowedValues:\r\n - 1\r\n - 3\r\n Default: 1\r\n Description: The retention length of the logs.\r\n Type: Number\r\n```\r\n\r\n```\r\n LogGroup:\r\n Type: AWS::Logs::LogGroup\r\n Properties:\r\n RetentionInDays: !Ref LogsRetentionLength\r\n LogGroupName: '/name'\r\n```\r\n\r\nError:\r\n```\r\nW2030 You must specify a valid allowed value for LogsRetentionLength (1).\r\nValid values are ['1', '3', '5', '7', '14', '30', '60', '90', '120', '150', '180', '365', '400', '545', '731', '1827', '3653']\r\nW2030 You must specify a valid allowed value for LogsRetentionLength (3).\r\nValid values are ['1', '3', '5', '7', '14', '30', '60', '90', '120', '150', '180', '365', '400', '545', '731', '1827', '3653']\r\nW2030 You must specify a valid Default value for LogsRetentionLength (1).\r\nValid values are ['1', '3', '5', '7', '14', '30', '60', '90', '120', '150', '180', '365', '400', '545', '731', '1827', '3653']\r\n```\r\n\r\nChanging the templates as follow fixes the issue but it's incorrect:\r\n\r\n```\r\n LogsRetentionLength:\r\n AllowedValues:\r\n - '1'\r\n - '3'\r\n Default: '1'\r\n Description: The retention length of the logs.\r\n Type: Number\r\n```\n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport six\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\nfrom cfnlint.helpers import RESOURCE_SPECS\n\n\nclass AllowedValue(CloudFormationLintRule):\n \"\"\"Check if parameters have a valid value\"\"\"\n id = 'W2030'\n shortdesc = 'Check if parameters have a valid value'\n description = 'Check if parameters have a valid value in case of an enumator. The Parameter''s allowed values is based on the usages in property (Ref)'\n source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'\n tags = ['resources', 'property', 'allowed value']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def check_value_ref(self, value, **kwargs):\n \"\"\"Check Ref\"\"\"\n matches = []\n\n allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})\n cfn = kwargs.get('cfn')\n\n if allowed_value_specs:\n if value in cfn.template.get('Parameters', {}):\n param = cfn.template.get('Parameters').get(value, {})\n parameter_values = param.get('AllowedValues')\n default_value = param.get('Default')\n parameter_type = param.get('Type')\n if isinstance(parameter_type, six.string_types):\n if ((not parameter_type.startswith('List<')) and\n (not parameter_type.startswith('AWS::SSM::Parameter::Value<')) and\n parameter_type not in ['CommaDelimitedList']):\n # Check Allowed Values\n if parameter_values:\n for index, allowed_value in enumerate(parameter_values):\n if allowed_value not in allowed_value_specs:\n param_path = ['Parameters', value, 'AllowedValues', index]\n message = 'You must specify a valid allowed value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))\n elif default_value:\n # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)\n if default_value not in allowed_value_specs:\n param_path = ['Parameters', value, 'Default']\n message = 'You must specify a valid Default value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))\n\n return matches\n\n def check(self, cfn, properties, value_specs, property_specs, path):\n \"\"\"Check itself\"\"\"\n matches = list()\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in value_specs:\n value = value_specs.get(prop).get('Value', {})\n if value:\n value_type = value.get('ValueType', '')\n property_type = property_specs.get('Properties').get(prop).get('Type')\n matches.extend(\n cfn.check_value(\n p_value, prop, p_path,\n check_ref=self.check_value_ref,\n value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),\n cfn=cfn, property_type=property_type, property_name=prop\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub 
properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})\n property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)\n matches.extend(self.check(cfn, properties, specs, property_specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})\n resource_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)\n matches.extend(self.check(cfn, properties, specs, resource_specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/parameters/AllowedValue.py"}], "after_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport six\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\nfrom cfnlint.helpers import RESOURCE_SPECS\n\n\nclass AllowedValue(CloudFormationLintRule):\n \"\"\"Check if parameters have a valid value\"\"\"\n id = 'W2030'\n shortdesc = 'Check if parameters have a valid value'\n description = 'Check if parameters have a valid value in case of an enumator. 
The Parameter''s allowed values is based on the usages in property (Ref)'\n source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'\n tags = ['resources', 'property', 'allowed value']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def check_value_ref(self, value, **kwargs):\n \"\"\"Check Ref\"\"\"\n matches = []\n\n allowed_value_specs = kwargs.get('value_specs', {}).get('AllowedValues', {})\n cfn = kwargs.get('cfn')\n\n if allowed_value_specs:\n if value in cfn.template.get('Parameters', {}):\n param = cfn.template.get('Parameters').get(value, {})\n parameter_values = param.get('AllowedValues')\n default_value = param.get('Default')\n parameter_type = param.get('Type')\n if isinstance(parameter_type, six.string_types):\n if ((not parameter_type.startswith('List<')) and\n (not parameter_type.startswith('AWS::SSM::Parameter::Value<')) and\n parameter_type not in ['CommaDelimitedList']):\n # Check Allowed Values\n if parameter_values:\n for index, allowed_value in enumerate(parameter_values):\n if str(allowed_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'AllowedValues', index]\n message = 'You must specify a valid allowed value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, allowed_value, allowed_value_specs)))\n if default_value:\n # Check Default, only if no allowed Values are specified in the parameter (that's covered by E2015)\n if str(default_value) not in allowed_value_specs:\n param_path = ['Parameters', value, 'Default']\n message = 'You must specify a valid Default value for {0} ({1}).\\nValid values are {2}'\n matches.append(RuleMatch(param_path, message.format(value, default_value, allowed_value_specs)))\n\n return matches\n\n def check(self, cfn, properties, value_specs, property_specs, path):\n \"\"\"Check itself\"\"\"\n matches = list()\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in value_specs:\n value = value_specs.get(prop).get('Value', {})\n if value:\n value_type = value.get('ValueType', '')\n property_type = property_specs.get('Properties').get(prop).get('Type')\n matches.extend(\n cfn.check_value(\n p_value, prop, p_path,\n check_ref=self.check_value_ref,\n value_specs=RESOURCE_SPECS.get(cfn.regions[0]).get('ValueTypes').get(value_type, {}),\n cfn=cfn, property_type=property_type, property_name=prop\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type, {}).get('Properties', {})\n property_specs = RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes').get(property_type)\n matches.extend(self.check(cfn, properties, specs, property_specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type, {}).get('Properties', {})\n resource_specs = 
RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes').get(resource_type)\n matches.extend(self.check(cfn, properties, specs, resource_specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/parameters/AllowedValue.py"}]}
| 2,195 | 298 |
gh_patches_debug_63962
|
rasdani/github-patches
|
git_diff
|
redis__redis-py-1678
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CI run to install the built package
In light of bug #1645 we should amend our CI run to install the built package, in a new virtual env and run something simple like a redis.Redis().ping(). Eventually we could build up to running the full integration test against the package.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tasks.py`
Content:
```
1 import os
2 import shutil
3 from invoke import task, run
4
5 with open('tox.ini') as fp:
6 lines = fp.read().split("\n")
7 dockers = [line.split("=")[1].strip() for line in lines
8 if line.find("name") != -1]
9
10
11 @task
12 def devenv(c):
13 """Builds a development environment: downloads, and starts all dockers
14 specified in the tox.ini file.
15 """
16 clean(c)
17 cmd = 'tox -e devenv'
18 for d in dockers:
19 cmd += " --docker-dont-stop={}".format(d)
20 run(cmd)
21
22
23 @task
24 def linters(c):
25 """Run code linters"""
26 run("tox -e linters")
27
28
29 @task
30 def all_tests(c):
31 """Run all linters, and tests in redis-py. This assumes you have all
32 the python versions specified in the tox.ini file.
33 """
34 linters(c)
35 tests(c)
36
37
38 @task
39 def tests(c):
40 """Run the redis-py test suite against the current python,
41 with and without hiredis.
42 """
43 run("tox -e plain -e hiredis")
44
45
46 @task
47 def clean(c):
48 """Stop all dockers, and clean up the built binaries, if generated."""
49 if os.path.isdir("build"):
50 shutil.rmtree("build")
51 if os.path.isdir("dist"):
52 shutil.rmtree("dist")
53 run("docker rm -f {}".format(' '.join(dockers)))
54
55
56 @task
57 def package(c):
58 """Create the python packages"""
59 run("python setup.py build install")
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tasks.py b/tasks.py
--- a/tasks.py
+++ b/tasks.py
@@ -56,4 +56,4 @@
@task
def package(c):
"""Create the python packages"""
- run("python setup.py build install")
+ run("python setup.py sdist bdist_wheel")
|
{"golden_diff": "diff --git a/tasks.py b/tasks.py\n--- a/tasks.py\n+++ b/tasks.py\n@@ -56,4 +56,4 @@\n @task\n def package(c):\n \"\"\"Create the python packages\"\"\"\n- run(\"python setup.py build install\")\n+ run(\"python setup.py sdist bdist_wheel\")\n", "issue": "CI run to install the built package\nIn light of bug #1645 we should amend our CI run to install the built package, in a new virtual env and run something simple like a redis.Redis().ping(). Eventually we could build up to running the full integration test against the package.\nCI run to install the built package\nIn light of bug #1645 we should amend our CI run to install the built package, in a new virtual env and run something simple like a redis.Redis().ping(). Eventually we could build up to running the full integration test against the package.\n", "before_files": [{"content": "import os\nimport shutil\nfrom invoke import task, run\n\nwith open('tox.ini') as fp:\n lines = fp.read().split(\"\\n\")\n dockers = [line.split(\"=\")[1].strip() for line in lines\n if line.find(\"name\") != -1]\n\n\n@task\ndef devenv(c):\n \"\"\"Builds a development environment: downloads, and starts all dockers\n specified in the tox.ini file.\n \"\"\"\n clean(c)\n cmd = 'tox -e devenv'\n for d in dockers:\n cmd += \" --docker-dont-stop={}\".format(d)\n run(cmd)\n\n\n@task\ndef linters(c):\n \"\"\"Run code linters\"\"\"\n run(\"tox -e linters\")\n\n\n@task\ndef all_tests(c):\n \"\"\"Run all linters, and tests in redis-py. This assumes you have all\n the python versions specified in the tox.ini file.\n \"\"\"\n linters(c)\n tests(c)\n\n\n@task\ndef tests(c):\n \"\"\"Run the redis-py test suite against the current python,\n with and without hiredis.\n \"\"\"\n run(\"tox -e plain -e hiredis\")\n\n\n@task\ndef clean(c):\n \"\"\"Stop all dockers, and clean up the built binaries, if generated.\"\"\"\n if os.path.isdir(\"build\"):\n shutil.rmtree(\"build\")\n if os.path.isdir(\"dist\"):\n shutil.rmtree(\"dist\")\n run(\"docker rm -f {}\".format(' '.join(dockers)))\n\n\n@task\ndef package(c):\n \"\"\"Create the python packages\"\"\"\n run(\"python setup.py build install\")\n", "path": "tasks.py"}], "after_files": [{"content": "import os\nimport shutil\nfrom invoke import task, run\n\nwith open('tox.ini') as fp:\n lines = fp.read().split(\"\\n\")\n dockers = [line.split(\"=\")[1].strip() for line in lines\n if line.find(\"name\") != -1]\n\n\n@task\ndef devenv(c):\n \"\"\"Builds a development environment: downloads, and starts all dockers\n specified in the tox.ini file.\n \"\"\"\n clean(c)\n cmd = 'tox -e devenv'\n for d in dockers:\n cmd += \" --docker-dont-stop={}\".format(d)\n run(cmd)\n\n\n@task\ndef linters(c):\n \"\"\"Run code linters\"\"\"\n run(\"tox -e linters\")\n\n\n@task\ndef all_tests(c):\n \"\"\"Run all linters, and tests in redis-py. This assumes you have all\n the python versions specified in the tox.ini file.\n \"\"\"\n linters(c)\n tests(c)\n\n\n@task\ndef tests(c):\n \"\"\"Run the redis-py test suite against the current python,\n with and without hiredis.\n \"\"\"\n run(\"tox -e plain -e hiredis\")\n\n\n@task\ndef clean(c):\n \"\"\"Stop all dockers, and clean up the built binaries, if generated.\"\"\"\n if os.path.isdir(\"build\"):\n shutil.rmtree(\"build\")\n if os.path.isdir(\"dist\"):\n shutil.rmtree(\"dist\")\n run(\"docker rm -f {}\".format(' '.join(dockers)))\n\n\n@task\ndef package(c):\n \"\"\"Create the python packages\"\"\"\n run(\"python setup.py sdist bdist_wheel\")\n", "path": "tasks.py"}]}
| 849 | 69 |
gh_patches_debug_32160
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-3322
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider holiday_stationstores is broken
During the global build at 2021-08-18-14-42-26, spider **holiday_stationstores** failed with **552 features** and **10 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/logs/holiday_stationstores.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/holiday_stationstores.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/holiday_stationstores.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/holiday_stationstores.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4 import re
5
6 from locations.items import GeojsonPointItem
7
8
9 class HolidayStationstoreSpider(scrapy.Spider):
10 name = "holiday_stationstores"
11 item_attributes = {'brand': 'Holiday Stationstores',
12 'brand_wikidata': 'Q5880490'}
13 allowed_domains = ["www.holidaystationstores.com"]
14 download_delay = 0.2
15
16 def start_requests(self):
17 yield scrapy.Request('https://www.holidaystationstores.com/Locations/GetAllStores',
18 method='POST',
19 callback=self.parse_all_stores)
20
21 def parse_all_stores(self, response):
22 all_stores = json.loads(response.text)
23
24 for store_id, store in all_stores.items():
25 # GET requests get blocked by their Incapsula bot protection, but POST works fine
26 yield scrapy.Request(f"https://www.holidaystationstores.com/Locations/Detail?storeNumber={store_id}",
27 method='POST',
28 meta={'store': store})
29
30 def parse(self, response):
31 store = response.meta['store']
32
33 address = response.xpath('//div[@class="col-lg-4 col-sm-12"]/text()')[1].extract().strip()
34 phone = response.xpath('//div[@class="HolidayFontColorRed"]/text()').extract_first().strip()
35 services = '|'.join(response.xpath('//ul[@style="list-style-type: none; padding-left: 1.0em; font-size: 12px;"]/li/text()').extract()).lower()
36 open_24_hours = '24 hours' in response.css(
37 '.body-content .col-lg-4').get().lower()
38
39 properties = {
40 'name': f"Holiday #{store['Name']}",
41 'lon': store['Lng'],
42 'lat': store['Lat'],
43 'addr_full': address,
44 'phone': phone,
45 'ref': store['ID'],
46 'opening_hours': '24/7' if open_24_hours else self.opening_hours(response),
47 'extras': {
48 'amenity:fuel': True,
49 'fuel:diesel': 'diesel' in services or None,
50 'atm': 'atm' in services or None,
51 'fuel:e85': 'e85' in services or None,
52 'hgv': 'truck' in services or None,
53 'fuel:propane': 'propane' in services or None,
54 'car_wash': 'car wash' in services or None,
55 'fuel:cng': 'cng' in services or None
56 }
57 }
58
59 yield GeojsonPointItem(**properties)
60
61 def opening_hours(self, response):
62 hour_part_elems = response.xpath('//div[@class="row"][@style="font-size: 12px;"]')
63 day_groups = []
64 this_day_group = None
65
66 if hour_part_elems:
67 for hour_part_elem in hour_part_elems:
68 day = hour_part_elem.xpath('.//div[@class="col-3"]/text()').extract_first()
69 hours = hour_part_elem.xpath('.//div[@class="col-9"]/text()').extract_first()
70
71 if not hours:
72 continue
73
74 day = day[:2]
75 match = re.search(
76 r'^(\d{1,2}):(\d{2})\s*(a|p)m - (\d{1,2}):(\d{2})\s*(a|p)m?$', hours.lower())
77 (f_hr, f_min, f_ampm, t_hr, t_min, t_ampm) = match.groups()
78
79 f_hr = int(f_hr)
80 if f_ampm == 'p':
81 f_hr += 12
82 elif f_ampm == 'a' and f_hr == 12:
83 f_hr = 0
84 t_hr = int(t_hr)
85 if t_ampm == 'p':
86 t_hr += 12
87 elif t_ampm == 'a' and t_hr == 12:
88 t_hr = 0
89
90 hours = '{:02d}:{}-{:02d}:{}'.format(
91 f_hr,
92 f_min,
93 t_hr,
94 t_min,
95 )
96
97 if not this_day_group:
98 this_day_group = {
99 'from_day': day,
100 'to_day': day,
101 'hours': hours
102 }
103 elif this_day_group['hours'] != hours:
104 day_groups.append(this_day_group)
105 this_day_group = {
106 'from_day': day,
107 'to_day': day,
108 'hours': hours
109 }
110 elif this_day_group['hours'] == hours:
111 this_day_group['to_day'] = day
112
113 if this_day_group:
114 day_groups.append(this_day_group)
115
116 hour_part_elems = response.xpath('//span[@style="font-size:90%"]/text()').extract()
117 if hour_part_elems:
118 day_groups.append({'from_day': 'Mo', 'to_day': 'Su', 'hours': '00:00-23:59'})
119
120 opening_hours = ""
121 if len(day_groups) == 1 and day_groups[0]['hours'] in ('00:00-23:59', '00:00-00:00'):
122 opening_hours = '24/7'
123 else:
124 for day_group in day_groups:
125 if day_group['from_day'] == day_group['to_day']:
126 opening_hours += '{from_day} {hours}; '.format(**day_group)
127 elif day_group['from_day'] == 'Su' and day_group['to_day'] == 'Sa':
128 opening_hours += '{hours}; '.format(**day_group)
129 else:
130 opening_hours += '{from_day}-{to_day} {hours}; '.format(
131 **day_group)
132 opening_hours = opening_hours[:-2]
133
134 return opening_hours
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/holiday_stationstores.py b/locations/spiders/holiday_stationstores.py
--- a/locations/spiders/holiday_stationstores.py
+++ b/locations/spiders/holiday_stationstores.py
@@ -31,6 +31,8 @@
store = response.meta['store']
address = response.xpath('//div[@class="col-lg-4 col-sm-12"]/text()')[1].extract().strip()
+ city_state = response.xpath('//div[@class="col-lg-4 col-sm-12"]/text()')[2].extract().strip()
+ city, state = city_state.split(", ")
phone = response.xpath('//div[@class="HolidayFontColorRed"]/text()').extract_first().strip()
services = '|'.join(response.xpath('//ul[@style="list-style-type: none; padding-left: 1.0em; font-size: 12px;"]/li/text()').extract()).lower()
open_24_hours = '24 hours' in response.css(
@@ -43,6 +45,9 @@
'addr_full': address,
'phone': phone,
'ref': store['ID'],
+ 'city': city.strip(),
+ 'state': state.strip(),
+ 'website': response.url,
'opening_hours': '24/7' if open_24_hours else self.opening_hours(response),
'extras': {
'amenity:fuel': True,
@@ -68,7 +73,7 @@
day = hour_part_elem.xpath('.//div[@class="col-3"]/text()').extract_first()
hours = hour_part_elem.xpath('.//div[@class="col-9"]/text()').extract_first()
- if not hours:
+ if not hours or hours.lower() == 'closed':
continue
day = day[:2]
|
{"golden_diff": "diff --git a/locations/spiders/holiday_stationstores.py b/locations/spiders/holiday_stationstores.py\n--- a/locations/spiders/holiday_stationstores.py\n+++ b/locations/spiders/holiday_stationstores.py\n@@ -31,6 +31,8 @@\n store = response.meta['store']\n \n address = response.xpath('//div[@class=\"col-lg-4 col-sm-12\"]/text()')[1].extract().strip()\n+ city_state = response.xpath('//div[@class=\"col-lg-4 col-sm-12\"]/text()')[2].extract().strip()\n+ city, state = city_state.split(\", \")\n phone = response.xpath('//div[@class=\"HolidayFontColorRed\"]/text()').extract_first().strip()\n services = '|'.join(response.xpath('//ul[@style=\"list-style-type: none; padding-left: 1.0em; font-size: 12px;\"]/li/text()').extract()).lower()\n open_24_hours = '24 hours' in response.css(\n@@ -43,6 +45,9 @@\n 'addr_full': address,\n 'phone': phone,\n 'ref': store['ID'],\n+ 'city': city.strip(),\n+ 'state': state.strip(),\n+ 'website': response.url,\n 'opening_hours': '24/7' if open_24_hours else self.opening_hours(response),\n 'extras': {\n 'amenity:fuel': True,\n@@ -68,7 +73,7 @@\n day = hour_part_elem.xpath('.//div[@class=\"col-3\"]/text()').extract_first()\n hours = hour_part_elem.xpath('.//div[@class=\"col-9\"]/text()').extract_first()\n \n- if not hours:\n+ if not hours or hours.lower() == 'closed':\n continue\n \n day = day[:2]\n", "issue": "Spider holiday_stationstores is broken\nDuring the global build at 2021-08-18-14-42-26, spider **holiday_stationstores** failed with **552 features** and **10 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/logs/holiday_stationstores.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/holiday_stationstores.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/holiday_stationstores.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nimport re\n\nfrom locations.items import GeojsonPointItem\n\n\nclass HolidayStationstoreSpider(scrapy.Spider):\n name = \"holiday_stationstores\"\n item_attributes = {'brand': 'Holiday Stationstores',\n 'brand_wikidata': 'Q5880490'}\n allowed_domains = [\"www.holidaystationstores.com\"]\n download_delay = 0.2\n\n def start_requests(self):\n yield scrapy.Request('https://www.holidaystationstores.com/Locations/GetAllStores',\n method='POST',\n callback=self.parse_all_stores)\n\n def parse_all_stores(self, response):\n all_stores = json.loads(response.text)\n\n for store_id, store in all_stores.items():\n # GET requests get blocked by their Incapsula bot protection, but POST works fine\n yield scrapy.Request(f\"https://www.holidaystationstores.com/Locations/Detail?storeNumber={store_id}\",\n method='POST',\n meta={'store': store})\n\n def parse(self, response):\n store = response.meta['store']\n\n address = response.xpath('//div[@class=\"col-lg-4 col-sm-12\"]/text()')[1].extract().strip()\n phone = response.xpath('//div[@class=\"HolidayFontColorRed\"]/text()').extract_first().strip()\n services = '|'.join(response.xpath('//ul[@style=\"list-style-type: none; padding-left: 1.0em; font-size: 12px;\"]/li/text()').extract()).lower()\n open_24_hours = '24 hours' in response.css(\n '.body-content .col-lg-4').get().lower()\n\n properties = {\n 'name': f\"Holiday #{store['Name']}\",\n 'lon': store['Lng'],\n 'lat': store['Lat'],\n 'addr_full': address,\n 'phone': phone,\n 'ref': store['ID'],\n 'opening_hours': '24/7' if open_24_hours else 
self.opening_hours(response),\n 'extras': {\n 'amenity:fuel': True,\n 'fuel:diesel': 'diesel' in services or None,\n 'atm': 'atm' in services or None,\n 'fuel:e85': 'e85' in services or None,\n 'hgv': 'truck' in services or None,\n 'fuel:propane': 'propane' in services or None,\n 'car_wash': 'car wash' in services or None,\n 'fuel:cng': 'cng' in services or None\n }\n }\n\n yield GeojsonPointItem(**properties)\n\n def opening_hours(self, response):\n hour_part_elems = response.xpath('//div[@class=\"row\"][@style=\"font-size: 12px;\"]')\n day_groups = []\n this_day_group = None\n\n if hour_part_elems:\n for hour_part_elem in hour_part_elems:\n day = hour_part_elem.xpath('.//div[@class=\"col-3\"]/text()').extract_first()\n hours = hour_part_elem.xpath('.//div[@class=\"col-9\"]/text()').extract_first()\n\n if not hours:\n continue\n\n day = day[:2]\n match = re.search(\n r'^(\\d{1,2}):(\\d{2})\\s*(a|p)m - (\\d{1,2}):(\\d{2})\\s*(a|p)m?$', hours.lower())\n (f_hr, f_min, f_ampm, t_hr, t_min, t_ampm) = match.groups()\n\n f_hr = int(f_hr)\n if f_ampm == 'p':\n f_hr += 12\n elif f_ampm == 'a' and f_hr == 12:\n f_hr = 0\n t_hr = int(t_hr)\n if t_ampm == 'p':\n t_hr += 12\n elif t_ampm == 'a' and t_hr == 12:\n t_hr = 0\n\n hours = '{:02d}:{}-{:02d}:{}'.format(\n f_hr,\n f_min,\n t_hr,\n t_min,\n )\n\n if not this_day_group:\n this_day_group = {\n 'from_day': day,\n 'to_day': day,\n 'hours': hours\n }\n elif this_day_group['hours'] != hours:\n day_groups.append(this_day_group)\n this_day_group = {\n 'from_day': day,\n 'to_day': day,\n 'hours': hours\n }\n elif this_day_group['hours'] == hours:\n this_day_group['to_day'] = day\n\n if this_day_group:\n day_groups.append(this_day_group)\n\n hour_part_elems = response.xpath('//span[@style=\"font-size:90%\"]/text()').extract()\n if hour_part_elems:\n day_groups.append({'from_day': 'Mo', 'to_day': 'Su', 'hours': '00:00-23:59'})\n\n opening_hours = \"\"\n if len(day_groups) == 1 and day_groups[0]['hours'] in ('00:00-23:59', '00:00-00:00'):\n opening_hours = '24/7'\n else:\n for day_group in day_groups:\n if day_group['from_day'] == day_group['to_day']:\n opening_hours += '{from_day} {hours}; '.format(**day_group)\n elif day_group['from_day'] == 'Su' and day_group['to_day'] == 'Sa':\n opening_hours += '{hours}; '.format(**day_group)\n else:\n opening_hours += '{from_day}-{to_day} {hours}; '.format(\n **day_group)\n opening_hours = opening_hours[:-2]\n\n return opening_hours\n", "path": "locations/spiders/holiday_stationstores.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nimport re\n\nfrom locations.items import GeojsonPointItem\n\n\nclass HolidayStationstoreSpider(scrapy.Spider):\n name = \"holiday_stationstores\"\n item_attributes = {'brand': 'Holiday Stationstores',\n 'brand_wikidata': 'Q5880490'}\n allowed_domains = [\"www.holidaystationstores.com\"]\n download_delay = 0.2\n\n def start_requests(self):\n yield scrapy.Request('https://www.holidaystationstores.com/Locations/GetAllStores',\n method='POST',\n callback=self.parse_all_stores)\n\n def parse_all_stores(self, response):\n all_stores = json.loads(response.text)\n\n for store_id, store in all_stores.items():\n # GET requests get blocked by their Incapsula bot protection, but POST works fine\n yield scrapy.Request(f\"https://www.holidaystationstores.com/Locations/Detail?storeNumber={store_id}\",\n method='POST',\n meta={'store': store})\n\n def parse(self, response):\n store = response.meta['store']\n\n address = response.xpath('//div[@class=\"col-lg-4 
col-sm-12\"]/text()')[1].extract().strip()\n city_state = response.xpath('//div[@class=\"col-lg-4 col-sm-12\"]/text()')[2].extract().strip()\n city, state = city_state.split(\", \")\n phone = response.xpath('//div[@class=\"HolidayFontColorRed\"]/text()').extract_first().strip()\n services = '|'.join(response.xpath('//ul[@style=\"list-style-type: none; padding-left: 1.0em; font-size: 12px;\"]/li/text()').extract()).lower()\n open_24_hours = '24 hours' in response.css(\n '.body-content .col-lg-4').get().lower()\n\n properties = {\n 'name': f\"Holiday #{store['Name']}\",\n 'lon': store['Lng'],\n 'lat': store['Lat'],\n 'addr_full': address,\n 'phone': phone,\n 'ref': store['ID'],\n 'city': city.strip(),\n 'state': state.strip(),\n 'website': response.url,\n 'opening_hours': '24/7' if open_24_hours else self.opening_hours(response),\n 'extras': {\n 'amenity:fuel': True,\n 'fuel:diesel': 'diesel' in services or None,\n 'atm': 'atm' in services or None,\n 'fuel:e85': 'e85' in services or None,\n 'hgv': 'truck' in services or None,\n 'fuel:propane': 'propane' in services or None,\n 'car_wash': 'car wash' in services or None,\n 'fuel:cng': 'cng' in services or None\n }\n }\n\n yield GeojsonPointItem(**properties)\n\n def opening_hours(self, response):\n hour_part_elems = response.xpath('//div[@class=\"row\"][@style=\"font-size: 12px;\"]')\n day_groups = []\n this_day_group = None\n\n if hour_part_elems:\n for hour_part_elem in hour_part_elems:\n day = hour_part_elem.xpath('.//div[@class=\"col-3\"]/text()').extract_first()\n hours = hour_part_elem.xpath('.//div[@class=\"col-9\"]/text()').extract_first()\n\n if not hours or hours.lower() == 'closed':\n continue\n\n day = day[:2]\n match = re.search(\n r'^(\\d{1,2}):(\\d{2})\\s*(a|p)m - (\\d{1,2}):(\\d{2})\\s*(a|p)m?$', hours.lower())\n (f_hr, f_min, f_ampm, t_hr, t_min, t_ampm) = match.groups()\n\n f_hr = int(f_hr)\n if f_ampm == 'p':\n f_hr += 12\n elif f_ampm == 'a' and f_hr == 12:\n f_hr = 0\n t_hr = int(t_hr)\n if t_ampm == 'p':\n t_hr += 12\n elif t_ampm == 'a' and t_hr == 12:\n t_hr = 0\n\n hours = '{:02d}:{}-{:02d}:{}'.format(\n f_hr,\n f_min,\n t_hr,\n t_min,\n )\n\n if not this_day_group:\n this_day_group = {\n 'from_day': day,\n 'to_day': day,\n 'hours': hours\n }\n elif this_day_group['hours'] != hours:\n day_groups.append(this_day_group)\n this_day_group = {\n 'from_day': day,\n 'to_day': day,\n 'hours': hours\n }\n elif this_day_group['hours'] == hours:\n this_day_group['to_day'] = day\n\n if this_day_group:\n day_groups.append(this_day_group)\n\n hour_part_elems = response.xpath('//span[@style=\"font-size:90%\"]/text()').extract()\n if hour_part_elems:\n day_groups.append({'from_day': 'Mo', 'to_day': 'Su', 'hours': '00:00-23:59'})\n\n opening_hours = \"\"\n if len(day_groups) == 1 and day_groups[0]['hours'] in ('00:00-23:59', '00:00-00:00'):\n opening_hours = '24/7'\n else:\n for day_group in day_groups:\n if day_group['from_day'] == day_group['to_day']:\n opening_hours += '{from_day} {hours}; '.format(**day_group)\n elif day_group['from_day'] == 'Su' and day_group['to_day'] == 'Sa':\n opening_hours += '{hours}; '.format(**day_group)\n else:\n opening_hours += '{from_day}-{to_day} {hours}; '.format(\n **day_group)\n opening_hours = opening_hours[:-2]\n\n return opening_hours\n", "path": "locations/spiders/holiday_stationstores.py"}]}
| 2,040 | 407 |
gh_patches_debug_49047
|
rasdani/github-patches
|
git_diff
|
arviz-devs__arviz-1403
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error plotting a single variable with plot_density and bokeh backend
**Describe the bug**
Over in ArviZ.jl, we use the Julia equivalent of the snippet below to test Bokeh integration for `plot_density`. It worked fine until recently; now we get an error with the bokeh backend only, not with matplotlib, though I'm not certain whether a change in arviz or bokeh is responsible.
**To Reproduce**
```python
>>> import arviz
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> arr1 = np.random.randn(4, 100)
>>> arr2 = np.random.randn(4, 100)
>>> arviz.plot_density([{"x": arr1}, {"x": arr2}], var_names = ["x"]) # matplotlib works fine
>>> plt.show()
```
<img src=https://user-images.githubusercontent.com/8673634/94775414-9bce2480-0374-11eb-8938-f74a486f97de.png width=400></img>
```python
>>> arviz.plot_density([{"x": arr1}, {"x": arr2}], var_names = ["x"], backend="bokeh") # errors
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/saxen/.julia/conda/3/lib/python3.8/site-packages/arviz/plots/densityplot.py", line 252, in plot_density
ax = plot(**plot_density_kwargs)
File "/Users/saxen/.julia/conda/3/lib/python3.8/site-packages/arviz/plots/backends/bokeh/densityplot.py", line 74, in plot_density
for label, ax_ in zip(all_labels, (item for item in ax.flatten() if item is not None))
AttributeError: 'Figure' object has no attribute 'flatten'
```
**Additional context**
Relevant package versions in the conda environment used:
```
arviz 0.10.0 py_0 conda-forge
bokeh 2.2.1 py38h32f6830_0 conda-forge
matplotlib 3.1.3 py38_0 conda-forge
numpy 1.19.1 py38h3b9f5b6_0
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `arviz/plots/backends/bokeh/densityplot.py`
Content:
```
1 """Bokeh Densityplot."""
2 from collections import defaultdict
3 from itertools import cycle
4
5 import matplotlib.pyplot as plt
6 import numpy as np
7 from bokeh.models.annotations import Legend, Title
8
9 from ....stats import hdi
10 from ....stats.density_utils import get_bins, histogram, kde
11 from ...plot_utils import _scale_fig_size, calculate_point_estimate, make_label, vectorized_to_hex
12 from .. import show_layout
13 from . import backend_kwarg_defaults, create_axes_grid
14
15
16 def plot_density(
17 ax,
18 all_labels,
19 to_plot,
20 colors,
21 bw,
22 circular,
23 figsize,
24 length_plotters,
25 rows,
26 cols,
27 textsize,
28 hdi_prob,
29 point_estimate,
30 hdi_markers,
31 outline,
32 shade,
33 n_data,
34 data_labels,
35 backend_kwargs,
36 show,
37 ):
38 """Bokeh density plot."""
39 if backend_kwargs is None:
40 backend_kwargs = {}
41
42 backend_kwargs = {
43 **backend_kwarg_defaults(),
44 **backend_kwargs,
45 }
46
47 if colors == "cycle":
48 colors = [
49 prop
50 for _, prop in zip(
51 range(n_data), cycle(plt.rcParams["axes.prop_cycle"].by_key()["color"])
52 )
53 ]
54 elif isinstance(colors, str):
55 colors = [colors for _ in range(n_data)]
56 colors = vectorized_to_hex(colors)
57
58 (figsize, _, _, _, line_width, markersize) = _scale_fig_size(figsize, textsize, rows, cols)
59
60 if ax is None:
61 ax = create_axes_grid(
62 length_plotters,
63 rows,
64 cols,
65 figsize=figsize,
66 squeeze=True,
67 backend_kwargs=backend_kwargs,
68 )
69 else:
70 ax = np.atleast_2d(ax)
71
72 axis_map = {
73 label: ax_
74 for label, ax_ in zip(all_labels, (item for item in ax.flatten() if item is not None))
75 }
76 if data_labels is None:
77 data_labels = {}
78
79 legend_items = defaultdict(list)
80 for m_idx, plotters in enumerate(to_plot):
81 for var_name, selection, values in plotters:
82 label = make_label(var_name, selection)
83
84 if data_labels:
85 data_label = data_labels[m_idx]
86 else:
87 data_label = None
88
89 plotted = _d_helper(
90 values.flatten(),
91 label,
92 colors[m_idx],
93 bw,
94 circular,
95 line_width,
96 markersize,
97 hdi_prob,
98 point_estimate,
99 hdi_markers,
100 outline,
101 shade,
102 axis_map[label],
103 )
104 if data_label is not None:
105 legend_items[axis_map[label]].append((data_label, plotted))
106
107 for ax1, legend in legend_items.items():
108 legend = Legend(
109 items=legend,
110 location="center_right",
111 orientation="horizontal",
112 )
113 ax1.add_layout(legend, "above")
114 ax1.legend.click_policy = "hide"
115
116 show_layout(ax, show)
117
118 return ax
119
120
121 def _d_helper(
122 vec,
123 vname,
124 color,
125 bw,
126 circular,
127 line_width,
128 markersize,
129 hdi_prob,
130 point_estimate,
131 hdi_markers,
132 outline,
133 shade,
134 ax,
135 ):
136
137 extra = dict()
138 plotted = []
139
140 if vec.dtype.kind == "f":
141 if hdi_prob != 1:
142 hdi_ = hdi(vec, hdi_prob, multimodal=False)
143 new_vec = vec[(vec >= hdi_[0]) & (vec <= hdi_[1])]
144 else:
145 new_vec = vec
146
147 x, density = kde(new_vec, circular=circular, bw=bw)
148 density *= hdi_prob
149 xmin, xmax = x[0], x[-1]
150 ymin, ymax = density[0], density[-1]
151
152 if outline:
153 plotted.append(ax.line(x, density, line_color=color, line_width=line_width, **extra))
154 plotted.append(
155 ax.line(
156 [xmin, xmin],
157 [-ymin / 100, ymin],
158 line_color=color,
159 line_dash="solid",
160 line_width=line_width,
161 muted_color=color,
162 muted_alpha=0.2,
163 )
164 )
165 plotted.append(
166 ax.line(
167 [xmax, xmax],
168 [-ymax / 100, ymax],
169 line_color=color,
170 line_dash="solid",
171 line_width=line_width,
172 muted_color=color,
173 muted_alpha=0.2,
174 )
175 )
176
177 if shade:
178 plotted.append(
179 ax.patch(
180 np.r_[x[::-1], x, x[-1:]],
181 np.r_[np.zeros_like(x), density, [0]],
182 fill_color=color,
183 fill_alpha=shade,
184 muted_color=color,
185 muted_alpha=0.2,
186 **extra
187 )
188 )
189
190 else:
191 xmin, xmax = hdi(vec, hdi_prob, multimodal=False)
192 bins = get_bins(vec)
193
194 _, hist, edges = histogram(vec, bins=bins)
195
196 if outline:
197 plotted.append(
198 ax.quad(
199 top=hist,
200 bottom=0,
201 left=edges[:-1],
202 right=edges[1:],
203 line_color=color,
204 fill_color=None,
205 muted_color=color,
206 muted_alpha=0.2,
207 **extra
208 )
209 )
210 else:
211 plotted.append(
212 ax.quad(
213 top=hist,
214 bottom=0,
215 left=edges[:-1],
216 right=edges[1:],
217 line_color=color,
218 fill_color=color,
219 fill_alpha=shade,
220 muted_color=color,
221 muted_alpha=0.2,
222 **extra
223 )
224 )
225
226 if hdi_markers:
227 plotted.append(ax.diamond(xmin, 0, line_color="black", fill_color=color, size=markersize))
228 plotted.append(ax.diamond(xmax, 0, line_color="black", fill_color=color, size=markersize))
229
230 if point_estimate is not None:
231 est = calculate_point_estimate(point_estimate, vec, bw, circular)
232 plotted.append(ax.circle(est, 0, fill_color=color, line_color="black", size=markersize))
233
234 _title = Title()
235 _title.text = vname
236 ax.title = _title
237 ax.title.text_font_size = "13pt"
238
239 return plotted
240
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/arviz/plots/backends/bokeh/densityplot.py b/arviz/plots/backends/bokeh/densityplot.py
--- a/arviz/plots/backends/bokeh/densityplot.py
+++ b/arviz/plots/backends/bokeh/densityplot.py
@@ -63,7 +63,7 @@
rows,
cols,
figsize=figsize,
- squeeze=True,
+ squeeze=False,
backend_kwargs=backend_kwargs,
)
else:
|
{"golden_diff": "diff --git a/arviz/plots/backends/bokeh/densityplot.py b/arviz/plots/backends/bokeh/densityplot.py\n--- a/arviz/plots/backends/bokeh/densityplot.py\n+++ b/arviz/plots/backends/bokeh/densityplot.py\n@@ -63,7 +63,7 @@\n rows,\n cols,\n figsize=figsize,\n- squeeze=True,\n+ squeeze=False,\n backend_kwargs=backend_kwargs,\n )\n else:\n", "issue": "Error plotting a single variable with plot_density and bokeh backend\n**Describe the bug**\r\nOver in ArviZ.jl, we use the Julia equivalent to the below snippet to test Bokeh integration for `plot_density`. It worked fine until recently, where we now get an error with bokeh only but not matplotlib, though I'm not certain whether a change in arviz or bokeh is responsible.\r\n\r\n**To Reproduce**\r\n```python\r\n>>> import arviz\r\n>>> import numpy as np\r\n>>> import matplotlib.pyplot as plt\r\n>>> arr1 = np.random.randn(4, 100)\r\n>>> arr2 = np.random.randn(4, 100)\r\n>>> arviz.plot_density([{\"x\": arr1}, {\"x\": arr2}], var_names = [\"x\"]) # matplotlib works fine\r\n>>> plt.show()\r\n```\r\n<img src=https://user-images.githubusercontent.com/8673634/94775414-9bce2480-0374-11eb-8938-f74a486f97de.png width=400></img>\r\n```python\r\n>>> arviz.plot_density([{\"x\": arr1}, {\"x\": arr2}], var_names = [\"x\"], backend=\"bokeh\") # errors\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/saxen/.julia/conda/3/lib/python3.8/site-packages/arviz/plots/densityplot.py\", line 252, in plot_density\r\n ax = plot(**plot_density_kwargs)\r\n File \"/Users/saxen/.julia/conda/3/lib/python3.8/site-packages/arviz/plots/backends/bokeh/densityplot.py\", line 74, in plot_density\r\n for label, ax_ in zip(all_labels, (item for item in ax.flatten() if item is not None))\r\nAttributeError: 'Figure' object has no attribute 'flatten'\r\n```\r\n\r\n**Additional context**\r\nRelevant package versions in the conda environment used:\r\n```\r\narviz 0.10.0 py_0 conda-forge\r\nbokeh 2.2.1 py38h32f6830_0 conda-forge\r\nmatplotlib 3.1.3 py38_0 conda-forge\r\nnumpy 1.19.1 py38h3b9f5b6_0 \r\n```\n", "before_files": [{"content": "\"\"\"Bokeh Densityplot.\"\"\"\nfrom collections import defaultdict\nfrom itertools import cycle\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom bokeh.models.annotations import Legend, Title\n\nfrom ....stats import hdi\nfrom ....stats.density_utils import get_bins, histogram, kde\nfrom ...plot_utils import _scale_fig_size, calculate_point_estimate, make_label, vectorized_to_hex\nfrom .. import show_layout\nfrom . 
import backend_kwarg_defaults, create_axes_grid\n\n\ndef plot_density(\n ax,\n all_labels,\n to_plot,\n colors,\n bw,\n circular,\n figsize,\n length_plotters,\n rows,\n cols,\n textsize,\n hdi_prob,\n point_estimate,\n hdi_markers,\n outline,\n shade,\n n_data,\n data_labels,\n backend_kwargs,\n show,\n):\n \"\"\"Bokeh density plot.\"\"\"\n if backend_kwargs is None:\n backend_kwargs = {}\n\n backend_kwargs = {\n **backend_kwarg_defaults(),\n **backend_kwargs,\n }\n\n if colors == \"cycle\":\n colors = [\n prop\n for _, prop in zip(\n range(n_data), cycle(plt.rcParams[\"axes.prop_cycle\"].by_key()[\"color\"])\n )\n ]\n elif isinstance(colors, str):\n colors = [colors for _ in range(n_data)]\n colors = vectorized_to_hex(colors)\n\n (figsize, _, _, _, line_width, markersize) = _scale_fig_size(figsize, textsize, rows, cols)\n\n if ax is None:\n ax = create_axes_grid(\n length_plotters,\n rows,\n cols,\n figsize=figsize,\n squeeze=True,\n backend_kwargs=backend_kwargs,\n )\n else:\n ax = np.atleast_2d(ax)\n\n axis_map = {\n label: ax_\n for label, ax_ in zip(all_labels, (item for item in ax.flatten() if item is not None))\n }\n if data_labels is None:\n data_labels = {}\n\n legend_items = defaultdict(list)\n for m_idx, plotters in enumerate(to_plot):\n for var_name, selection, values in plotters:\n label = make_label(var_name, selection)\n\n if data_labels:\n data_label = data_labels[m_idx]\n else:\n data_label = None\n\n plotted = _d_helper(\n values.flatten(),\n label,\n colors[m_idx],\n bw,\n circular,\n line_width,\n markersize,\n hdi_prob,\n point_estimate,\n hdi_markers,\n outline,\n shade,\n axis_map[label],\n )\n if data_label is not None:\n legend_items[axis_map[label]].append((data_label, plotted))\n\n for ax1, legend in legend_items.items():\n legend = Legend(\n items=legend,\n location=\"center_right\",\n orientation=\"horizontal\",\n )\n ax1.add_layout(legend, \"above\")\n ax1.legend.click_policy = \"hide\"\n\n show_layout(ax, show)\n\n return ax\n\n\ndef _d_helper(\n vec,\n vname,\n color,\n bw,\n circular,\n line_width,\n markersize,\n hdi_prob,\n point_estimate,\n hdi_markers,\n outline,\n shade,\n ax,\n):\n\n extra = dict()\n plotted = []\n\n if vec.dtype.kind == \"f\":\n if hdi_prob != 1:\n hdi_ = hdi(vec, hdi_prob, multimodal=False)\n new_vec = vec[(vec >= hdi_[0]) & (vec <= hdi_[1])]\n else:\n new_vec = vec\n\n x, density = kde(new_vec, circular=circular, bw=bw)\n density *= hdi_prob\n xmin, xmax = x[0], x[-1]\n ymin, ymax = density[0], density[-1]\n\n if outline:\n plotted.append(ax.line(x, density, line_color=color, line_width=line_width, **extra))\n plotted.append(\n ax.line(\n [xmin, xmin],\n [-ymin / 100, ymin],\n line_color=color,\n line_dash=\"solid\",\n line_width=line_width,\n muted_color=color,\n muted_alpha=0.2,\n )\n )\n plotted.append(\n ax.line(\n [xmax, xmax],\n [-ymax / 100, ymax],\n line_color=color,\n line_dash=\"solid\",\n line_width=line_width,\n muted_color=color,\n muted_alpha=0.2,\n )\n )\n\n if shade:\n plotted.append(\n ax.patch(\n np.r_[x[::-1], x, x[-1:]],\n np.r_[np.zeros_like(x), density, [0]],\n fill_color=color,\n fill_alpha=shade,\n muted_color=color,\n muted_alpha=0.2,\n **extra\n )\n )\n\n else:\n xmin, xmax = hdi(vec, hdi_prob, multimodal=False)\n bins = get_bins(vec)\n\n _, hist, edges = histogram(vec, bins=bins)\n\n if outline:\n plotted.append(\n ax.quad(\n top=hist,\n bottom=0,\n left=edges[:-1],\n right=edges[1:],\n line_color=color,\n fill_color=None,\n muted_color=color,\n muted_alpha=0.2,\n **extra\n )\n )\n else:\n 
plotted.append(\n ax.quad(\n top=hist,\n bottom=0,\n left=edges[:-1],\n right=edges[1:],\n line_color=color,\n fill_color=color,\n fill_alpha=shade,\n muted_color=color,\n muted_alpha=0.2,\n **extra\n )\n )\n\n if hdi_markers:\n plotted.append(ax.diamond(xmin, 0, line_color=\"black\", fill_color=color, size=markersize))\n plotted.append(ax.diamond(xmax, 0, line_color=\"black\", fill_color=color, size=markersize))\n\n if point_estimate is not None:\n est = calculate_point_estimate(point_estimate, vec, bw, circular)\n plotted.append(ax.circle(est, 0, fill_color=color, line_color=\"black\", size=markersize))\n\n _title = Title()\n _title.text = vname\n ax.title = _title\n ax.title.text_font_size = \"13pt\"\n\n return plotted\n", "path": "arviz/plots/backends/bokeh/densityplot.py"}], "after_files": [{"content": "\"\"\"Bokeh Densityplot.\"\"\"\nfrom collections import defaultdict\nfrom itertools import cycle\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom bokeh.models.annotations import Legend, Title\n\nfrom ....stats import hdi\nfrom ....stats.density_utils import get_bins, histogram, kde\nfrom ...plot_utils import _scale_fig_size, calculate_point_estimate, make_label, vectorized_to_hex\nfrom .. import show_layout\nfrom . import backend_kwarg_defaults, create_axes_grid\n\n\ndef plot_density(\n ax,\n all_labels,\n to_plot,\n colors,\n bw,\n circular,\n figsize,\n length_plotters,\n rows,\n cols,\n textsize,\n hdi_prob,\n point_estimate,\n hdi_markers,\n outline,\n shade,\n n_data,\n data_labels,\n backend_kwargs,\n show,\n):\n \"\"\"Bokeh density plot.\"\"\"\n if backend_kwargs is None:\n backend_kwargs = {}\n\n backend_kwargs = {\n **backend_kwarg_defaults(),\n **backend_kwargs,\n }\n\n if colors == \"cycle\":\n colors = [\n prop\n for _, prop in zip(\n range(n_data), cycle(plt.rcParams[\"axes.prop_cycle\"].by_key()[\"color\"])\n )\n ]\n elif isinstance(colors, str):\n colors = [colors for _ in range(n_data)]\n colors = vectorized_to_hex(colors)\n\n (figsize, _, _, _, line_width, markersize) = _scale_fig_size(figsize, textsize, rows, cols)\n\n if ax is None:\n ax = create_axes_grid(\n length_plotters,\n rows,\n cols,\n figsize=figsize,\n squeeze=False,\n backend_kwargs=backend_kwargs,\n )\n else:\n ax = np.atleast_2d(ax)\n\n axis_map = {\n label: ax_\n for label, ax_ in zip(all_labels, (item for item in ax.flatten() if item is not None))\n }\n if data_labels is None:\n data_labels = {}\n\n legend_items = defaultdict(list)\n for m_idx, plotters in enumerate(to_plot):\n for var_name, selection, values in plotters:\n label = make_label(var_name, selection)\n\n if data_labels:\n data_label = data_labels[m_idx]\n else:\n data_label = None\n\n plotted = _d_helper(\n values.flatten(),\n label,\n colors[m_idx],\n bw,\n circular,\n line_width,\n markersize,\n hdi_prob,\n point_estimate,\n hdi_markers,\n outline,\n shade,\n axis_map[label],\n )\n if data_label is not None:\n legend_items[axis_map[label]].append((data_label, plotted))\n\n for ax1, legend in legend_items.items():\n legend = Legend(\n items=legend,\n location=\"center_right\",\n orientation=\"horizontal\",\n )\n ax1.add_layout(legend, \"above\")\n ax1.legend.click_policy = \"hide\"\n\n show_layout(ax, show)\n\n return ax\n\n\ndef _d_helper(\n vec,\n vname,\n color,\n bw,\n circular,\n line_width,\n markersize,\n hdi_prob,\n point_estimate,\n hdi_markers,\n outline,\n shade,\n ax,\n):\n\n extra = dict()\n plotted = []\n\n if vec.dtype.kind == \"f\":\n if hdi_prob != 1:\n hdi_ = hdi(vec, hdi_prob, multimodal=False)\n new_vec = 
vec[(vec >= hdi_[0]) & (vec <= hdi_[1])]\n else:\n new_vec = vec\n\n x, density = kde(new_vec, circular=circular, bw=bw)\n density *= hdi_prob\n xmin, xmax = x[0], x[-1]\n ymin, ymax = density[0], density[-1]\n\n if outline:\n plotted.append(ax.line(x, density, line_color=color, line_width=line_width, **extra))\n plotted.append(\n ax.line(\n [xmin, xmin],\n [-ymin / 100, ymin],\n line_color=color,\n line_dash=\"solid\",\n line_width=line_width,\n muted_color=color,\n muted_alpha=0.2,\n )\n )\n plotted.append(\n ax.line(\n [xmax, xmax],\n [-ymax / 100, ymax],\n line_color=color,\n line_dash=\"solid\",\n line_width=line_width,\n muted_color=color,\n muted_alpha=0.2,\n )\n )\n\n if shade:\n plotted.append(\n ax.patch(\n np.r_[x[::-1], x, x[-1:]],\n np.r_[np.zeros_like(x), density, [0]],\n fill_color=color,\n fill_alpha=shade,\n muted_color=color,\n muted_alpha=0.2,\n **extra\n )\n )\n\n else:\n xmin, xmax = hdi(vec, hdi_prob, multimodal=False)\n bins = get_bins(vec)\n\n _, hist, edges = histogram(vec, bins=bins)\n\n if outline:\n plotted.append(\n ax.quad(\n top=hist,\n bottom=0,\n left=edges[:-1],\n right=edges[1:],\n line_color=color,\n fill_color=None,\n muted_color=color,\n muted_alpha=0.2,\n **extra\n )\n )\n else:\n plotted.append(\n ax.quad(\n top=hist,\n bottom=0,\n left=edges[:-1],\n right=edges[1:],\n line_color=color,\n fill_color=color,\n fill_alpha=shade,\n muted_color=color,\n muted_alpha=0.2,\n **extra\n )\n )\n\n if hdi_markers:\n plotted.append(ax.diamond(xmin, 0, line_color=\"black\", fill_color=color, size=markersize))\n plotted.append(ax.diamond(xmax, 0, line_color=\"black\", fill_color=color, size=markersize))\n\n if point_estimate is not None:\n est = calculate_point_estimate(point_estimate, vec, bw, circular)\n plotted.append(ax.circle(est, 0, fill_color=color, line_color=\"black\", size=markersize))\n\n _title = Title()\n _title.text = vname\n ax.title = _title\n ax.title.text_font_size = \"13pt\"\n\n return plotted\n", "path": "arviz/plots/backends/bokeh/densityplot.py"}]}
| 2,807 | 110 |
gh_patches_debug_34978 | rasdani/github-patches | git_diff | horovod__horovod-915 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mxnet+hvd with different random seed won't work with Gluon API
**Environment:**
1. Framework: (MXNet)
2. Framework version: 1.4
3. Horovod version: v0.16.0
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version: 2.7/3.6
8. OS and version:
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.md)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.md)?
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
If the shape of a parameter has to be inferred after a batch of data is seen, the parameter will not be broadcast. Therefore, the correctness of the program depends on whether all workers are initialized with the same random seed. So the following program will not work:
```
random.seed(hvd.local_rank())
data = ..
model = mx.gluon.nn.Dense(10)
model.initialize()
// no parameters are broadcast, because initialization is deferred.
hvd.broadcast_parameters(model.collect_parameters())
for batch in data:
// params are initialized after shape is known
pred = model(batch.data[0])
...
...
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/mxnet_mnist.py`
Content:
```
1 import argparse
2 import logging
3 import os
4 import zipfile
5 import time
6
7 import mxnet as mx
8 import horovod.mxnet as hvd
9 from mxnet import autograd, gluon, nd
10 from mxnet.test_utils import download
11
12 # Training settings
13 parser = argparse.ArgumentParser(description='MXNet MNIST Example')
14
15 parser.add_argument('--batch-size', type=int, default=64,
16 help='training batch size (default: 64)')
17 parser.add_argument('--dtype', type=str, default='float32',
18 help='training data type (default: float32)')
19 parser.add_argument('--epochs', type=int, default=5,
20 help='number of training epochs (default: 5)')
21 parser.add_argument('--lr', type=float, default=0.01,
22 help='learning rate (default: 0.01)')
23 parser.add_argument('--momentum', type=float, default=0.9,
24 help='SGD momentum (default: 0.9)')
25 parser.add_argument('--no-cuda', action='store_true', default=False,
26 help='disable training on GPU (default: False)')
27 args = parser.parse_args()
28
29 if not args.no_cuda:
30 # Disable CUDA if there are no GPUs.
31 if not mx.test_utils.list_gpus():
32 args.no_cuda = True
33
34 logging.basicConfig(level=logging.INFO)
35 logging.info(args)
36
37
38 # Function to get mnist iterator given a rank
39 def get_mnist_iterator(rank):
40 data_dir = "data-%d" % rank
41 if not os.path.isdir(data_dir):
42 os.makedirs(data_dir)
43 zip_file_path = download('http://data.mxnet.io/mxnet/data/mnist.zip',
44 dirname=data_dir)
45 with zipfile.ZipFile(zip_file_path) as zf:
46 zf.extractall(data_dir)
47
48 input_shape = (1, 28, 28)
49 batch_size = args.batch_size
50
51 train_iter = mx.io.MNISTIter(
52 image="%s/train-images-idx3-ubyte" % data_dir,
53 label="%s/train-labels-idx1-ubyte" % data_dir,
54 input_shape=input_shape,
55 batch_size=batch_size,
56 shuffle=True,
57 flat=False,
58 num_parts=hvd.size(),
59 part_index=hvd.rank()
60 )
61
62 val_iter = mx.io.MNISTIter(
63 image="%s/t10k-images-idx3-ubyte" % data_dir,
64 label="%s/t10k-labels-idx1-ubyte" % data_dir,
65 input_shape=input_shape,
66 batch_size=batch_size,
67 flat=False,
68 )
69
70 return train_iter, val_iter
71
72
73 # Function to define neural network
74 def conv_nets():
75 net = gluon.nn.HybridSequential()
76 with net.name_scope():
77 net.add(gluon.nn.Conv2D(channels=20, kernel_size=5, activation='relu'))
78 net.add(gluon.nn.MaxPool2D(pool_size=2, strides=2))
79 net.add(gluon.nn.Conv2D(channels=50, kernel_size=5, activation='relu'))
80 net.add(gluon.nn.MaxPool2D(pool_size=2, strides=2))
81 net.add(gluon.nn.Flatten())
82 net.add(gluon.nn.Dense(512, activation="relu"))
83 net.add(gluon.nn.Dense(10))
84 return net
85
86
87 # Function to evaluate accuracy for a model
88 def evaluate(model, data_iter, context):
89 data_iter.reset()
90 metric = mx.metric.Accuracy()
91 for _, batch in enumerate(data_iter):
92 data = batch.data[0].as_in_context(context)
93 label = batch.label[0].as_in_context(context)
94 output = model(data.astype(args.dtype, copy=False))
95 metric.update([label], [output])
96
97 return metric.get()
98
99
100 # Initialize Horovod
101 hvd.init()
102
103 # Horovod: pin context to local rank
104 context = mx.cpu(hvd.local_rank()) if args.no_cuda else mx.gpu(hvd.local_rank())
105 num_workers = hvd.size()
106
107 # Load training and validation data
108 train_data, val_data = get_mnist_iterator(hvd.rank())
109
110 # Build model
111 model = conv_nets()
112 model.cast(args.dtype)
113 model.hybridize()
114
115 # Define hyper parameters
116 optimizer_params = {'momentum': args.momentum,
117 'learning_rate': args.lr * hvd.size(),
118 'rescale_grad': 1.0 / args.batch_size}
119
120 # Add Horovod Distributed Optimizer
121 opt = mx.optimizer.create('sgd', **optimizer_params)
122 opt = hvd.DistributedOptimizer(opt)
123
124 # Initialize parameters
125 initializer = mx.init.Xavier(rnd_type='gaussian', factor_type="in",
126 magnitude=2)
127 model.initialize(initializer, ctx=context)
128
129 # Fetch and broadcast parameters
130 params = model.collect_params()
131 if params is not None:
132 hvd.broadcast_parameters(params, root_rank=0)
133
134 # Create trainer, loss function and train metric
135 trainer = gluon.Trainer(params, opt, kvstore=None)
136 loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
137 metric = mx.metric.Accuracy()
138
139 # Train model
140 for epoch in range(args.epochs):
141 tic = time.time()
142 train_data.reset()
143 metric.reset()
144 for nbatch, batch in enumerate(train_data, start=1):
145 data = batch.data[0].as_in_context(context)
146 label = batch.label[0].as_in_context(context)
147 with autograd.record():
148 output = model(data.astype(args.dtype, copy=False))
149 loss = loss_fn(output, label)
150 loss.backward()
151 trainer.step(args.batch_size)
152 metric.update([label], [output])
153
154 if nbatch % 100 == 0:
155 name, acc = metric.get()
156 logging.info('[Epoch %d Batch %d] Training: %s=%f' %
157 (epoch, nbatch, name, acc))
158
159 if hvd.rank() == 0:
160 elapsed = time.time() - tic
161 speed = nbatch * args.batch_size * hvd.size() / elapsed
162 logging.info('Epoch[%d]\tSpeed=%.2f samples/s\tTime cost=%f',
163 epoch, speed, elapsed)
164
165 # Evaluate model accuracy
166 _, train_acc = metric.get()
167 name, val_acc = evaluate(model, val_data, context)
168 if hvd.rank() == 0:
169 logging.info('Epoch[%d]\tTrain: %s=%f\tValidation: %s=%f', epoch, name,
170 train_acc, name, val_acc)
171
172 if hvd.rank() == 0 and epoch == args.epochs - 1:
173 assert val_acc > 0.96, "Achieved accuracy (%f) is lower than expected\
174 (0.96)" % val_acc
175
```
Path: `horovod/mxnet/__init__.py`
Content:
```
1 # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15
16 from __future__ import absolute_import
17 from __future__ import division
18 from __future__ import print_function
19
20 from horovod.common import check_extension
21
22 check_extension('horovod.mxnet', 'HOROVOD_WITH_MXNET',
23 __file__, 'mpi_lib')
24
25 from horovod.mxnet.mpi_ops import allgather
26 from horovod.mxnet.mpi_ops import allreduce, allreduce_
27 from horovod.mxnet.mpi_ops import broadcast, broadcast_
28 from horovod.mxnet.mpi_ops import init, shutdown
29 from horovod.mxnet.mpi_ops import size, local_size, rank, local_rank
30 from horovod.mxnet.mpi_ops import mpi_threads_supported
31
32 import mxnet as mx
33
34
35 # This is where Horovod's DistributedOptimizer wrapper for MXNet goes
36 class DistributedOptimizer(mx.optimizer.Optimizer):
37 def __init__(self, optimizer):
38 self._optimizer = optimizer
39
40 def __getattr__(self, item):
41 return getattr(self._optimizer, item)
42
43 def create_state_multi_precision(self, index, weight):
44 return self._optimizer.create_state_multi_precision(index, weight)
45
46 def _do_allreduce(self, index, grad):
47 if isinstance(index, (tuple, list)):
48 for i in range(len(index)):
49 allreduce_(grad[i], average=True, name=str(index[i]))
50 else:
51 allreduce_(grad, average=True, name=str(index))
52
53 def update(self, index, weight, grad, state):
54 self._do_allreduce(index, grad)
55 self._optimizer.update(index, weight, grad, state)
56
57 def update_multi_precision(self, index, weight, grad, state):
58 self._do_allreduce(index, grad)
59 self._optimizer.update_multi_precision(index, weight, grad, state)
60
61 def set_learning_rate(self, lr):
62 self._optimizer.set_learning_rate(lr)
63
64 def set_lr_mult(self, args_lr_mult):
65 self._optimizer.set_lr_mult(args_lr_mult)
66
67 def set_wd_mult(self, args_wd_mult):
68 self._optimizer.set_wd_mult(args_wd_mult)
69
70
71 def broadcast_parameters(params, root_rank=0):
72 """
73 Broadcasts the parameters from root rank to all other processes.
74 Typical usage is to broadcast the `Module.get_params()` or the
75 `Block.collect_params()`.
76
77 Arguments:
78 params: One of the following:
79 - dict of parameters to broadcast
80 - ParameterDict to broadcast
81 root_rank: The rank of the process from which parameters will be
82 broadcasted to all other processes.
83 """
84 tensors = []
85 if isinstance(params, dict):
86 tensors = [p for _, p in sorted(params.items())]
87 elif isinstance(params, mx.gluon.parameter.ParameterDict):
88 for _, p in sorted(params.items()):
89 try:
90 tensors.append(p.data())
91 except mx.gluon.parameter.DeferredInitializationError:
92 # skip broadcasting deferred init param
93 pass
94 else:
95 raise ValueError('invalid params of type: %s' % type(params))
96
97 # Run broadcasts.
98 for i, tensor in enumerate(tensors):
99 broadcast_(tensor, root_rank, str(i))
100
101 # Make sure tensors pushed to MXNet engine get processed such that all
102 # workers are synced before starting training.
103 for tensor in tensors:
104 tensor.wait_to_read()
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/mxnet_mnist.py b/examples/mxnet_mnist.py
--- a/examples/mxnet_mnist.py
+++ b/examples/mxnet_mnist.py
@@ -112,13 +112,12 @@
model.cast(args.dtype)
model.hybridize()
-# Define hyper parameters
+# Create optimizer
optimizer_params = {'momentum': args.momentum,
'learning_rate': args.lr * hvd.size(),
'rescale_grad': 1.0 / args.batch_size}
-
-# Add Horovod Distributed Optimizer
opt = mx.optimizer.create('sgd', **optimizer_params)
+# Horovod: wrap optimizer with DistributedOptimizer
opt = hvd.DistributedOptimizer(opt)
# Initialize parameters
diff --git a/horovod/mxnet/__init__.py b/horovod/mxnet/__init__.py
--- a/horovod/mxnet/__init__.py
+++ b/horovod/mxnet/__init__.py
@@ -30,6 +30,7 @@
from horovod.mxnet.mpi_ops import mpi_threads_supported
import mxnet as mx
+import types
# This is where Horovod's DistributedOptimizer wrapper for MXNet goes
@@ -68,6 +69,16 @@
self._optimizer.set_wd_mult(args_wd_mult)
+# Wrapper to inject Horovod broadcast after parameter initialization
+def _append_broadcast_init(param, root_rank):
+ init_impl = getattr(param, '_init_impl')
+ def wrapped_init_impl(self, *args, **kwargs):
+ init_impl(*args, **kwargs)
+ broadcast_(self.data(), root_rank=root_rank)
+ self.data().wait_to_read()
+ return wrapped_init_impl
+
+
def broadcast_parameters(params, root_rank=0):
"""
Broadcasts the parameters from root rank to all other processes.
@@ -89,8 +100,10 @@
try:
tensors.append(p.data())
except mx.gluon.parameter.DeferredInitializationError:
- # skip broadcasting deferred init param
- pass
+ # Inject wrapper method with post-initialization broadcast to
+ # handle parameters with deferred initialization
+ new_init = _append_broadcast_init(p, root_rank)
+ p._init_impl = types.MethodType(new_init, p)
else:
raise ValueError('invalid params of type: %s' % type(params))
|
{"golden_diff": "diff --git a/examples/mxnet_mnist.py b/examples/mxnet_mnist.py\n--- a/examples/mxnet_mnist.py\n+++ b/examples/mxnet_mnist.py\n@@ -112,13 +112,12 @@\n model.cast(args.dtype)\n model.hybridize()\n \n-# Define hyper parameters\n+# Create optimizer\n optimizer_params = {'momentum': args.momentum,\n 'learning_rate': args.lr * hvd.size(),\n 'rescale_grad': 1.0 / args.batch_size}\n-\n-# Add Horovod Distributed Optimizer\n opt = mx.optimizer.create('sgd', **optimizer_params)\n+# Horovod: wrap optimizer with DistributedOptimizer\n opt = hvd.DistributedOptimizer(opt)\n \n # Initialize parameters\ndiff --git a/horovod/mxnet/__init__.py b/horovod/mxnet/__init__.py\n--- a/horovod/mxnet/__init__.py\n+++ b/horovod/mxnet/__init__.py\n@@ -30,6 +30,7 @@\n from horovod.mxnet.mpi_ops import mpi_threads_supported\n \n import mxnet as mx\n+import types\n \n \n # This is where Horovod's DistributedOptimizer wrapper for MXNet goes\n@@ -68,6 +69,16 @@\n self._optimizer.set_wd_mult(args_wd_mult)\n \n \n+# Wrapper to inject Horovod broadcast after parameter initialization\n+def _append_broadcast_init(param, root_rank):\n+ init_impl = getattr(param, '_init_impl')\n+ def wrapped_init_impl(self, *args, **kwargs):\n+ init_impl(*args, **kwargs)\n+ broadcast_(self.data(), root_rank=root_rank)\n+ self.data().wait_to_read()\n+ return wrapped_init_impl\n+\n+\n def broadcast_parameters(params, root_rank=0):\n \"\"\"\n Broadcasts the parameters from root rank to all other processes.\n@@ -89,8 +100,10 @@\n try:\n tensors.append(p.data())\n except mx.gluon.parameter.DeferredInitializationError:\n- # skip broadcasting deferred init param\n- pass\n+ # Inject wrapper method with post-initialization broadcast to\n+ # handle parameters with deferred initialization\n+ new_init = _append_broadcast_init(p, root_rank)\n+ p._init_impl = types.MethodType(new_init, p)\n else:\n raise ValueError('invalid params of type: %s' % type(params))\n", "issue": "mxnet+hvd with different random seed won't work with Gluon API\n**Environment:**\r\n1. Framework: (MXNet)\r\n2. Framework version: 1.4\r\n3. Horovod version: v0.16.0 \r\n4. MPI version: \r\n5. CUDA version:\r\n6. NCCL version:\r\n7. Python version: 2.7/3.6\r\n8. OS and version: \r\n\r\n**Checklist:**\r\n1. Did you search issues to find if somebody asked this question before?\r\n2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.md)?\r\n3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.md)?\r\n\r\n**Bug report:**\r\nPlease describe errorneous behavior you're observing and steps to reproduce it.\r\n\r\nIf the shape of a parameter has to be inferred after a batch of data is seen, the parameter will not be broadcast. Therefore, the correctness of the program depends on whether all workers are initialized with the same random seed. 
So the following program will not work:\r\n\r\n```\r\nrandom.seed(hvd.local_rank())\r\ndata = ..\r\nmodel = mx.gluon.nn.Dense(10)\r\nmodel.initialize()\r\n\r\n// no parameters are broadcast, because initialization is deferred.\r\nhvd.broadcast_parameters(model.collect_parameters())\r\n\r\nfor batch in data:\r\n // params are initialized after shape is known\r\n pred = model(batch.data[0])\r\n ...\r\n...\r\n```\r\n\n", "before_files": [{"content": "import argparse\nimport logging\nimport os\nimport zipfile\nimport time\n\nimport mxnet as mx\nimport horovod.mxnet as hvd\nfrom mxnet import autograd, gluon, nd\nfrom mxnet.test_utils import download\n\n# Training settings\nparser = argparse.ArgumentParser(description='MXNet MNIST Example')\n\nparser.add_argument('--batch-size', type=int, default=64,\n help='training batch size (default: 64)')\nparser.add_argument('--dtype', type=str, default='float32',\n help='training data type (default: float32)')\nparser.add_argument('--epochs', type=int, default=5,\n help='number of training epochs (default: 5)')\nparser.add_argument('--lr', type=float, default=0.01,\n help='learning rate (default: 0.01)')\nparser.add_argument('--momentum', type=float, default=0.9,\n help='SGD momentum (default: 0.9)')\nparser.add_argument('--no-cuda', action='store_true', default=False,\n help='disable training on GPU (default: False)')\nargs = parser.parse_args()\n\nif not args.no_cuda:\n # Disable CUDA if there are no GPUs.\n if not mx.test_utils.list_gpus():\n args.no_cuda = True\n\nlogging.basicConfig(level=logging.INFO)\nlogging.info(args)\n\n\n# Function to get mnist iterator given a rank\ndef get_mnist_iterator(rank):\n data_dir = \"data-%d\" % rank\n if not os.path.isdir(data_dir):\n os.makedirs(data_dir)\n zip_file_path = download('http://data.mxnet.io/mxnet/data/mnist.zip',\n dirname=data_dir)\n with zipfile.ZipFile(zip_file_path) as zf:\n zf.extractall(data_dir)\n\n input_shape = (1, 28, 28)\n batch_size = args.batch_size\n\n train_iter = mx.io.MNISTIter(\n image=\"%s/train-images-idx3-ubyte\" % data_dir,\n label=\"%s/train-labels-idx1-ubyte\" % data_dir,\n input_shape=input_shape,\n batch_size=batch_size,\n shuffle=True,\n flat=False,\n num_parts=hvd.size(),\n part_index=hvd.rank()\n )\n\n val_iter = mx.io.MNISTIter(\n image=\"%s/t10k-images-idx3-ubyte\" % data_dir,\n label=\"%s/t10k-labels-idx1-ubyte\" % data_dir,\n input_shape=input_shape,\n batch_size=batch_size,\n flat=False,\n )\n\n return train_iter, val_iter\n\n\n# Function to define neural network\ndef conv_nets():\n net = gluon.nn.HybridSequential()\n with net.name_scope():\n net.add(gluon.nn.Conv2D(channels=20, kernel_size=5, activation='relu'))\n net.add(gluon.nn.MaxPool2D(pool_size=2, strides=2))\n net.add(gluon.nn.Conv2D(channels=50, kernel_size=5, activation='relu'))\n net.add(gluon.nn.MaxPool2D(pool_size=2, strides=2))\n net.add(gluon.nn.Flatten())\n net.add(gluon.nn.Dense(512, activation=\"relu\"))\n net.add(gluon.nn.Dense(10))\n return net\n\n\n# Function to evaluate accuracy for a model\ndef evaluate(model, data_iter, context):\n data_iter.reset()\n metric = mx.metric.Accuracy()\n for _, batch in enumerate(data_iter):\n data = batch.data[0].as_in_context(context)\n label = batch.label[0].as_in_context(context)\n output = model(data.astype(args.dtype, copy=False))\n metric.update([label], [output])\n\n return metric.get()\n\n\n# Initialize Horovod\nhvd.init()\n\n# Horovod: pin context to local rank\ncontext = mx.cpu(hvd.local_rank()) if args.no_cuda else mx.gpu(hvd.local_rank())\nnum_workers = 
hvd.size()\n\n# Load training and validation data\ntrain_data, val_data = get_mnist_iterator(hvd.rank())\n\n# Build model\nmodel = conv_nets()\nmodel.cast(args.dtype)\nmodel.hybridize()\n\n# Define hyper parameters\noptimizer_params = {'momentum': args.momentum,\n 'learning_rate': args.lr * hvd.size(),\n 'rescale_grad': 1.0 / args.batch_size}\n\n# Add Horovod Distributed Optimizer\nopt = mx.optimizer.create('sgd', **optimizer_params)\nopt = hvd.DistributedOptimizer(opt)\n\n# Initialize parameters\ninitializer = mx.init.Xavier(rnd_type='gaussian', factor_type=\"in\",\n magnitude=2)\nmodel.initialize(initializer, ctx=context)\n\n# Fetch and broadcast parameters\nparams = model.collect_params()\nif params is not None:\n hvd.broadcast_parameters(params, root_rank=0)\n\n# Create trainer, loss function and train metric\ntrainer = gluon.Trainer(params, opt, kvstore=None)\nloss_fn = gluon.loss.SoftmaxCrossEntropyLoss()\nmetric = mx.metric.Accuracy()\n\n# Train model\nfor epoch in range(args.epochs):\n tic = time.time()\n train_data.reset()\n metric.reset()\n for nbatch, batch in enumerate(train_data, start=1):\n data = batch.data[0].as_in_context(context)\n label = batch.label[0].as_in_context(context)\n with autograd.record():\n output = model(data.astype(args.dtype, copy=False))\n loss = loss_fn(output, label)\n loss.backward()\n trainer.step(args.batch_size)\n metric.update([label], [output])\n\n if nbatch % 100 == 0:\n name, acc = metric.get()\n logging.info('[Epoch %d Batch %d] Training: %s=%f' %\n (epoch, nbatch, name, acc))\n\n if hvd.rank() == 0:\n elapsed = time.time() - tic\n speed = nbatch * args.batch_size * hvd.size() / elapsed\n logging.info('Epoch[%d]\\tSpeed=%.2f samples/s\\tTime cost=%f',\n epoch, speed, elapsed)\n\n # Evaluate model accuracy\n _, train_acc = metric.get()\n name, val_acc = evaluate(model, val_data, context)\n if hvd.rank() == 0:\n logging.info('Epoch[%d]\\tTrain: %s=%f\\tValidation: %s=%f', epoch, name,\n train_acc, name, val_acc)\n\n if hvd.rank() == 0 and epoch == args.epochs - 1:\n assert val_acc > 0.96, \"Achieved accuracy (%f) is lower than expected\\\n (0.96)\" % val_acc\n", "path": "examples/mxnet_mnist.py"}, {"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom horovod.common import check_extension\n\ncheck_extension('horovod.mxnet', 'HOROVOD_WITH_MXNET',\n __file__, 'mpi_lib')\n\nfrom horovod.mxnet.mpi_ops import allgather\nfrom horovod.mxnet.mpi_ops import allreduce, allreduce_\nfrom horovod.mxnet.mpi_ops import broadcast, broadcast_\nfrom horovod.mxnet.mpi_ops import init, shutdown\nfrom horovod.mxnet.mpi_ops import size, local_size, rank, local_rank\nfrom horovod.mxnet.mpi_ops import mpi_threads_supported\n\nimport mxnet as mx\n\n\n# This is where Horovod's DistributedOptimizer wrapper for MXNet goes\nclass DistributedOptimizer(mx.optimizer.Optimizer):\n def __init__(self, optimizer):\n self._optimizer = optimizer\n\n def __getattr__(self, item):\n return getattr(self._optimizer, item)\n\n def create_state_multi_precision(self, index, weight):\n return self._optimizer.create_state_multi_precision(index, weight)\n\n def _do_allreduce(self, index, grad):\n if isinstance(index, (tuple, list)):\n for i in range(len(index)):\n allreduce_(grad[i], average=True, name=str(index[i]))\n else:\n allreduce_(grad, average=True, name=str(index))\n\n def update(self, index, weight, grad, state):\n self._do_allreduce(index, grad)\n self._optimizer.update(index, weight, grad, state)\n\n def update_multi_precision(self, index, weight, grad, state):\n self._do_allreduce(index, grad)\n self._optimizer.update_multi_precision(index, weight, grad, state)\n\n def set_learning_rate(self, lr):\n self._optimizer.set_learning_rate(lr)\n\n def set_lr_mult(self, args_lr_mult):\n self._optimizer.set_lr_mult(args_lr_mult)\n\n def set_wd_mult(self, args_wd_mult):\n self._optimizer.set_wd_mult(args_wd_mult)\n\n\ndef broadcast_parameters(params, root_rank=0):\n \"\"\"\n Broadcasts the parameters from root rank to all other processes.\n Typical usage is to broadcast the `Module.get_params()` or the\n `Block.collect_params()`.\n\n Arguments:\n params: One of the following:\n - dict of parameters to broadcast\n - ParameterDict to broadcast\n root_rank: The rank of the process from which parameters will be\n broadcasted to all other processes.\n \"\"\"\n tensors = []\n if isinstance(params, dict):\n tensors = [p for _, p in sorted(params.items())]\n elif isinstance(params, mx.gluon.parameter.ParameterDict):\n for _, p in sorted(params.items()):\n try:\n tensors.append(p.data())\n except mx.gluon.parameter.DeferredInitializationError:\n # skip broadcasting deferred init param\n pass\n else:\n raise ValueError('invalid params of type: %s' % type(params))\n\n # Run broadcasts.\n for i, tensor in enumerate(tensors):\n broadcast_(tensor, root_rank, str(i))\n\n # Make sure tensors pushed to MXNet engine get processed such that all\n # workers are synced before starting training.\n for tensor in tensors:\n 
tensor.wait_to_read()\n", "path": "horovod/mxnet/__init__.py"}], "after_files": [{"content": "import argparse\nimport logging\nimport os\nimport zipfile\nimport time\n\nimport mxnet as mx\nimport horovod.mxnet as hvd\nfrom mxnet import autograd, gluon, nd\nfrom mxnet.test_utils import download\n\n# Training settings\nparser = argparse.ArgumentParser(description='MXNet MNIST Example')\n\nparser.add_argument('--batch-size', type=int, default=64,\n help='training batch size (default: 64)')\nparser.add_argument('--dtype', type=str, default='float32',\n help='training data type (default: float32)')\nparser.add_argument('--epochs', type=int, default=5,\n help='number of training epochs (default: 5)')\nparser.add_argument('--lr', type=float, default=0.01,\n help='learning rate (default: 0.01)')\nparser.add_argument('--momentum', type=float, default=0.9,\n help='SGD momentum (default: 0.9)')\nparser.add_argument('--no-cuda', action='store_true', default=False,\n help='disable training on GPU (default: False)')\nargs = parser.parse_args()\n\nif not args.no_cuda:\n # Disable CUDA if there are no GPUs.\n if not mx.test_utils.list_gpus():\n args.no_cuda = True\n\nlogging.basicConfig(level=logging.INFO)\nlogging.info(args)\n\n\n# Function to get mnist iterator given a rank\ndef get_mnist_iterator(rank):\n data_dir = \"data-%d\" % rank\n if not os.path.isdir(data_dir):\n os.makedirs(data_dir)\n zip_file_path = download('http://data.mxnet.io/mxnet/data/mnist.zip',\n dirname=data_dir)\n with zipfile.ZipFile(zip_file_path) as zf:\n zf.extractall(data_dir)\n\n input_shape = (1, 28, 28)\n batch_size = args.batch_size\n\n train_iter = mx.io.MNISTIter(\n image=\"%s/train-images-idx3-ubyte\" % data_dir,\n label=\"%s/train-labels-idx1-ubyte\" % data_dir,\n input_shape=input_shape,\n batch_size=batch_size,\n shuffle=True,\n flat=False,\n num_parts=hvd.size(),\n part_index=hvd.rank()\n )\n\n val_iter = mx.io.MNISTIter(\n image=\"%s/t10k-images-idx3-ubyte\" % data_dir,\n label=\"%s/t10k-labels-idx1-ubyte\" % data_dir,\n input_shape=input_shape,\n batch_size=batch_size,\n flat=False,\n )\n\n return train_iter, val_iter\n\n\n# Function to define neural network\ndef conv_nets():\n net = gluon.nn.HybridSequential()\n with net.name_scope():\n net.add(gluon.nn.Conv2D(channels=20, kernel_size=5, activation='relu'))\n net.add(gluon.nn.MaxPool2D(pool_size=2, strides=2))\n net.add(gluon.nn.Conv2D(channels=50, kernel_size=5, activation='relu'))\n net.add(gluon.nn.MaxPool2D(pool_size=2, strides=2))\n net.add(gluon.nn.Flatten())\n net.add(gluon.nn.Dense(512, activation=\"relu\"))\n net.add(gluon.nn.Dense(10))\n return net\n\n\n# Function to evaluate accuracy for a model\ndef evaluate(model, data_iter, context):\n data_iter.reset()\n metric = mx.metric.Accuracy()\n for _, batch in enumerate(data_iter):\n data = batch.data[0].as_in_context(context)\n label = batch.label[0].as_in_context(context)\n output = model(data.astype(args.dtype, copy=False))\n metric.update([label], [output])\n\n return metric.get()\n\n\n# Initialize Horovod\nhvd.init()\n\n# Horovod: pin context to local rank\ncontext = mx.cpu(hvd.local_rank()) if args.no_cuda else mx.gpu(hvd.local_rank())\nnum_workers = hvd.size()\n\n# Load training and validation data\ntrain_data, val_data = get_mnist_iterator(hvd.rank())\n\n# Build model\nmodel = conv_nets()\nmodel.cast(args.dtype)\nmodel.hybridize()\n\n# Create optimizer\noptimizer_params = {'momentum': args.momentum,\n 'learning_rate': args.lr * hvd.size(),\n 'rescale_grad': 1.0 / args.batch_size}\nopt = 
mx.optimizer.create('sgd', **optimizer_params)\n# Horovod: wrap optimizer with DistributedOptimizer\nopt = hvd.DistributedOptimizer(opt)\n\n# Initialize parameters\ninitializer = mx.init.Xavier(rnd_type='gaussian', factor_type=\"in\",\n magnitude=2)\nmodel.initialize(initializer, ctx=context)\n\n# Fetch and broadcast parameters\nparams = model.collect_params()\nif params is not None:\n hvd.broadcast_parameters(params, root_rank=0)\n\n# Create trainer, loss function and train metric\ntrainer = gluon.Trainer(params, opt, kvstore=None)\nloss_fn = gluon.loss.SoftmaxCrossEntropyLoss()\nmetric = mx.metric.Accuracy()\n\n# Train model\nfor epoch in range(args.epochs):\n tic = time.time()\n train_data.reset()\n metric.reset()\n for nbatch, batch in enumerate(train_data, start=1):\n data = batch.data[0].as_in_context(context)\n label = batch.label[0].as_in_context(context)\n with autograd.record():\n output = model(data.astype(args.dtype, copy=False))\n loss = loss_fn(output, label)\n loss.backward()\n trainer.step(args.batch_size)\n metric.update([label], [output])\n\n if nbatch % 100 == 0:\n name, acc = metric.get()\n logging.info('[Epoch %d Batch %d] Training: %s=%f' %\n (epoch, nbatch, name, acc))\n\n if hvd.rank() == 0:\n elapsed = time.time() - tic\n speed = nbatch * args.batch_size * hvd.size() / elapsed\n logging.info('Epoch[%d]\\tSpeed=%.2f samples/s\\tTime cost=%f',\n epoch, speed, elapsed)\n\n # Evaluate model accuracy\n _, train_acc = metric.get()\n name, val_acc = evaluate(model, val_data, context)\n if hvd.rank() == 0:\n logging.info('Epoch[%d]\\tTrain: %s=%f\\tValidation: %s=%f', epoch, name,\n train_acc, name, val_acc)\n\n if hvd.rank() == 0 and epoch == args.epochs - 1:\n assert val_acc > 0.96, \"Achieved accuracy (%f) is lower than expected\\\n (0.96)\" % val_acc\n", "path": "examples/mxnet_mnist.py"}, {"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom horovod.common import check_extension\n\ncheck_extension('horovod.mxnet', 'HOROVOD_WITH_MXNET',\n __file__, 'mpi_lib')\n\nfrom horovod.mxnet.mpi_ops import allgather\nfrom horovod.mxnet.mpi_ops import allreduce, allreduce_\nfrom horovod.mxnet.mpi_ops import broadcast, broadcast_\nfrom horovod.mxnet.mpi_ops import init, shutdown\nfrom horovod.mxnet.mpi_ops import size, local_size, rank, local_rank\nfrom horovod.mxnet.mpi_ops import mpi_threads_supported\n\nimport mxnet as mx\nimport types\n\n\n# This is where Horovod's DistributedOptimizer wrapper for MXNet goes\nclass DistributedOptimizer(mx.optimizer.Optimizer):\n def __init__(self, optimizer):\n self._optimizer = optimizer\n\n def __getattr__(self, item):\n return getattr(self._optimizer, item)\n\n def create_state_multi_precision(self, index, weight):\n return self._optimizer.create_state_multi_precision(index, weight)\n\n def _do_allreduce(self, index, grad):\n if isinstance(index, (tuple, list)):\n for i in range(len(index)):\n allreduce_(grad[i], average=True, name=str(index[i]))\n else:\n allreduce_(grad, average=True, name=str(index))\n\n def update(self, index, weight, grad, state):\n self._do_allreduce(index, grad)\n self._optimizer.update(index, weight, grad, state)\n\n def update_multi_precision(self, index, weight, grad, state):\n self._do_allreduce(index, grad)\n self._optimizer.update_multi_precision(index, weight, grad, state)\n\n def set_learning_rate(self, lr):\n self._optimizer.set_learning_rate(lr)\n\n def set_lr_mult(self, args_lr_mult):\n self._optimizer.set_lr_mult(args_lr_mult)\n\n def set_wd_mult(self, args_wd_mult):\n self._optimizer.set_wd_mult(args_wd_mult)\n\n\n# Wrapper to inject Horovod broadcast after parameter initialization\ndef _append_broadcast_init(param, root_rank):\n init_impl = getattr(param, '_init_impl')\n def wrapped_init_impl(self, *args, **kwargs):\n init_impl(*args, **kwargs)\n broadcast_(self.data(), root_rank=root_rank)\n self.data().wait_to_read()\n return wrapped_init_impl\n\n\ndef broadcast_parameters(params, root_rank=0):\n \"\"\"\n Broadcasts the parameters from root rank to all other processes.\n Typical usage is to broadcast the `Module.get_params()` or the\n `Block.collect_params()`.\n\n Arguments:\n params: One of the following:\n - dict of parameters to broadcast\n - ParameterDict to broadcast\n root_rank: The rank of the process from which parameters will be\n broadcasted to all other processes.\n \"\"\"\n tensors = []\n if isinstance(params, dict):\n tensors = [p for _, p in sorted(params.items())]\n elif isinstance(params, mx.gluon.parameter.ParameterDict):\n for _, p in sorted(params.items()):\n try:\n tensors.append(p.data())\n except mx.gluon.parameter.DeferredInitializationError:\n # Inject wrapper method 
with post-initialization broadcast to\n # handle parameters with deferred initialization\n new_init = _append_broadcast_init(p, root_rank)\n p._init_impl = types.MethodType(new_init, p)\n else:\n raise ValueError('invalid params of type: %s' % type(params))\n\n # Run broadcasts.\n for i, tensor in enumerate(tensors):\n broadcast_(tensor, root_rank, str(i))\n\n # Make sure tensors pushed to MXNet engine get processed such that all\n # workers are synced before starting training.\n for tensor in tensors:\n tensor.wait_to_read()\n", "path": "horovod/mxnet/__init__.py"}]}
| 3,572 | 532 |
gh_patches_debug_1769 | rasdani/github-patches | git_diff | googleapis__google-auth-library-python-697 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing pyOpenSSL Dependency
Thanks for stopping by to let us know something could be better!
**PLEASE READ**: If you have a support contract with Google, please create an issue in the [support console](https://cloud.google.com/support/) instead of filing on GitHub. This will ensure a timely response.
Please run down the following list and make sure you've tried the usual "quick fixes":
- Search the issues already opened: https://github.com/googleapis/google-auth-library-python/issues
If you are still having issues, please be sure to include as much information as possible:
#### Environment details
- OS:
- Python version:
- pip version:
- `google-auth` version:
#### Steps to reproduce
1. Missing pyOpenSSL dependency in setup.py
For the tests there is a requirement in https://github.com/googleapis/google-auth-library-python/blob/master/noxfile.py against pyOpenSSL. But there are imports for pyOpenSSL in multiple modules in the code. Should pyOpenSSL be added to the requirements in setup.py?
I created https://github.com/googleapis/google-auth-library-python/pull/550 with the proposal but wanted to get feedback from an issue first as I don't see this showing up in previous issues or pull requests.
Making sure to follow these steps will guarantee the quickest resolution possible.
Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2014 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16
17 from setuptools import find_packages
18 from setuptools import setup
19
20
21 DEPENDENCIES = (
22 "cachetools>=2.0.0,<5.0",
23 "pyasn1-modules>=0.2.1",
24 # rsa==4.5 is the last version to support 2.7
25 # https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233
26 'rsa<4.6; python_version < "3.6"',
27 'rsa>=3.1.4,<5; python_version >= "3.6"',
28 "setuptools>=40.3.0",
29 "six>=1.9.0",
30 )
31
32 extras = {"aiohttp": "aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'"}
33
34 with io.open("README.rst", "r") as fh:
35 long_description = fh.read()
36
37 version = "1.26.1"
38
39 setup(
40 name="google-auth",
41 version=version,
42 author="Google Cloud Platform",
43 author_email="[email protected]",
44 description="Google Authentication Library",
45 long_description=long_description,
46 url="https://github.com/googleapis/google-auth-library-python",
47 packages=find_packages(exclude=("tests*", "system_tests*")),
48 namespace_packages=("google",),
49 install_requires=DEPENDENCIES,
50 extras_require=extras,
51 python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*",
52 license="Apache 2.0",
53 keywords="google auth oauth client",
54 classifiers=[
55 "Programming Language :: Python :: 2",
56 "Programming Language :: Python :: 2.7",
57 "Programming Language :: Python :: 3",
58 "Programming Language :: Python :: 3.6",
59 "Programming Language :: Python :: 3.7",
60 "Programming Language :: Python :: 3.8",
61 "Programming Language :: Python :: 3.9",
62 "Development Status :: 5 - Production/Stable",
63 "Intended Audience :: Developers",
64 "License :: OSI Approved :: Apache Software License",
65 "Operating System :: POSIX",
66 "Operating System :: Microsoft :: Windows",
67 "Operating System :: MacOS :: MacOS X",
68 "Operating System :: OS Independent",
69 "Topic :: Internet :: WWW/HTTP",
70 ],
71 )
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -29,7 +29,10 @@
"six>=1.9.0",
)
-extras = {"aiohttp": "aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'"}
+extras = {
+ "aiohttp": "aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'",
+ "pyopenssl": "pyopenssl>=20.0.0",
+}
with io.open("README.rst", "r") as fh:
long_description = fh.read()
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -29,7 +29,10 @@\n \"six>=1.9.0\",\n )\n \n-extras = {\"aiohttp\": \"aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'\"}\n+extras = {\n+ \"aiohttp\": \"aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'\",\n+ \"pyopenssl\": \"pyopenssl>=20.0.0\",\n+}\n \n with io.open(\"README.rst\", \"r\") as fh:\n long_description = fh.read()\n", "issue": "Missing pyOpenSSL Dependency\nThanks for stopping by to let us know something could be better!\r\n\r\n**PLEASE READ**: If you have a support contract with Google, please create an issue in the [support console](https://cloud.google.com/support/) instead of filing on GitHub. This will ensure a timely response.\r\n\r\nPlease run down the following list and make sure you've tried the usual \"quick fixes\":\r\n\r\n - Search the issues already opened: https://github.com/googleapis/google-auth-library-python/issues\r\n\r\nIf you are still having issues, please be sure to include as much information as possible:\r\n\r\n#### Environment details\r\n\r\n - OS:\r\n - Python version:\r\n - pip version:\r\n - `google-auth` version:\r\n\r\n#### Steps to reproduce\r\n\r\n 1. Missing pyOpenSSL dependency in setup.py\r\n\r\nFor the tests there is a requirement in https://github.com/googleapis/google-auth-library-python/blob/master/noxfile.py against pyOpenSSL. But there are imports for pyOpenSSL in multiple modules in the code. Should pyOpenSSL be added to the requirements in setup.py?\r\n\r\nI created https://github.com/googleapis/google-auth-library-python/pull/550 with the proposal but wanted to get feedback from an issue first as I don't see this showing up in previous issues or pull requests.\r\n\r\nMaking sure to follow these steps will guarantee the quickest resolution possible.\r\n\r\nThanks!\r\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\nDEPENDENCIES = (\n \"cachetools>=2.0.0,<5.0\",\n \"pyasn1-modules>=0.2.1\",\n # rsa==4.5 is the last version to support 2.7\n # https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233\n 'rsa<4.6; python_version < \"3.6\"',\n 'rsa>=3.1.4,<5; python_version >= \"3.6\"',\n \"setuptools>=40.3.0\",\n \"six>=1.9.0\",\n)\n\nextras = {\"aiohttp\": \"aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'\"}\n\nwith io.open(\"README.rst\", \"r\") as fh:\n long_description = fh.read()\n\nversion = \"1.26.1\"\n\nsetup(\n name=\"google-auth\",\n version=version,\n author=\"Google Cloud Platform\",\n author_email=\"[email protected]\",\n description=\"Google Authentication Library\",\n long_description=long_description,\n url=\"https://github.com/googleapis/google-auth-library-python\",\n packages=find_packages(exclude=(\"tests*\", \"system_tests*\")),\n namespace_packages=(\"google\",),\n install_requires=DEPENDENCIES,\n extras_require=extras,\n 
python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*\",\n license=\"Apache 2.0\",\n keywords=\"google auth oauth client\",\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\nDEPENDENCIES = (\n \"cachetools>=2.0.0,<5.0\",\n \"pyasn1-modules>=0.2.1\",\n # rsa==4.5 is the last version to support 2.7\n # https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233\n 'rsa<4.6; python_version < \"3.6\"',\n 'rsa>=3.1.4,<5; python_version >= \"3.6\"',\n \"setuptools>=40.3.0\",\n \"six>=1.9.0\",\n)\n\nextras = {\n \"aiohttp\": \"aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'\",\n \"pyopenssl\": \"pyopenssl>=20.0.0\",\n}\n\nwith io.open(\"README.rst\", \"r\") as fh:\n long_description = fh.read()\n\nversion = \"1.26.1\"\n\nsetup(\n name=\"google-auth\",\n version=version,\n author=\"Google Cloud Platform\",\n author_email=\"[email protected]\",\n description=\"Google Authentication Library\",\n long_description=long_description,\n url=\"https://github.com/googleapis/google-auth-library-python\",\n packages=find_packages(exclude=(\"tests*\", \"system_tests*\")),\n namespace_packages=(\"google\",),\n install_requires=DEPENDENCIES,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*\",\n license=\"Apache 2.0\",\n keywords=\"google auth oauth client\",\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py"}]}
| 1,351 | 164 |
gh_patches_debug_31620 | rasdani/github-patches | git_diff | kedro-org__kedro-1822 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix case in docs
As mentioned in https://github.com/kedro-org/kedro/pull/1760#pullrequestreview-1069581386_
Change `excel`(lowercase) to `Excel`(uppercase).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kedro/extras/datasets/pandas/excel_dataset.py`
Content:
```
1 """``ExcelDataSet`` loads/saves data from/to a Excel file using an underlying
2 filesystem (e.g.: local, S3, GCS). It uses pandas to handle the Excel file.
3 """
4 import logging
5 from copy import deepcopy
6 from io import BytesIO
7 from pathlib import PurePosixPath
8 from typing import Any, Dict, Union
9
10 import fsspec
11 import pandas as pd
12
13 from kedro.io.core import (
14 PROTOCOL_DELIMITER,
15 AbstractVersionedDataSet,
16 DataSetError,
17 Version,
18 get_filepath_str,
19 get_protocol_and_path,
20 )
21
22 logger = logging.getLogger(__name__)
23
24
25 class ExcelDataSet(
26 AbstractVersionedDataSet[
27 Union[pd.DataFrame, Dict[str, pd.DataFrame]],
28 Union[pd.DataFrame, Dict[str, pd.DataFrame]],
29 ]
30 ):
31 """``ExcelDataSet`` loads/saves data from/to a Excel file using an underlying
32 filesystem (e.g.: local, S3, GCS). It uses pandas to handle the Excel file.
33
34 Example adding a catalog entry with the ``YAML API``:
35
36 .. code-block:: yaml
37
38 >>> rockets:
39 >>> type: pandas.ExcelDataSet
40 >>> filepath: gcs://your_bucket/rockets.xlsx
41 >>> fs_args:
42 >>> project: my-project
43 >>> credentials: my_gcp_credentials
44 >>> save_args:
45 >>> sheet_name: Sheet1
46 >>> load_args:
47 >>> sheet_name: Sheet1
48 >>>
49 >>> shuttles:
50 >>> type: pandas.ExcelDataSet
51 >>> filepath: data/01_raw/shuttles.xlsx
52
53 Example using Python API:
54 ::
55
56 >>> from kedro.extras.datasets.pandas import ExcelDataSet
57 >>> import pandas as pd
58 >>>
59 >>> data = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5],
60 >>> 'col3': [5, 6]})
61 >>>
62 >>> # data_set = ExcelDataSet(filepath="gcs://bucket/test.xlsx")
63 >>> data_set = ExcelDataSet(filepath="test.xlsx")
64 >>> data_set.save(data)
65 >>> reloaded = data_set.load()
66 >>> assert data.equals(reloaded)
67
68 Note: To save a multi-sheet excel file, no special ``save_args`` are required.
69 Instead, return a dictionary of ``Dict[str, pd.DataFrame]`` where the string
70 keys are your sheet names.
71
72 Example adding a catalog entry for multi-sheet excel file with the ``YAML API``:
73
74 .. code-block:: yaml
75
76 >>> trains:
77 >>> type: pandas.ExcelDataSet
78 >>> filepath: data/02_intermediate/company/trains.xlsx
79 >>> load_args:
80 >>> sheet_name: [Sheet1, Sheet2, Sheet3]
81
82 Example multi-sheet excel file using Python API:
83 ::
84
85 >>> from kedro.extras.datasets.pandas import ExcelDataSet
86 >>> import pandas as pd
87 >>>
88 >>> dataframe = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5],
89 >>> 'col3': [5, 6]})
90 >>> another_dataframe = pd.DataFrame({"x": [10, 20], "y": ["hello", "world"]})
91 >>> multiframe = {"Sheet1": dataframe, "Sheet2": another_dataframe}
92 >>> data_set = ExcelDataSet(filepath="test.xlsx", load_args = {"sheet_name": None})
93 >>> data_set.save(multiframe)
94 >>> reloaded = data_set.load()
95 >>> assert multiframe["Sheet1"].equals(reloaded["Sheet1"])
96 >>> assert multiframe["Sheet2"].equals(reloaded["Sheet2"])
97
98 """
99
100 DEFAULT_LOAD_ARGS = {"engine": "openpyxl"}
101 DEFAULT_SAVE_ARGS = {"index": False}
102
103 # pylint: disable=too-many-arguments
104 def __init__(
105 self,
106 filepath: str,
107 engine: str = "openpyxl",
108 load_args: Dict[str, Any] = None,
109 save_args: Dict[str, Any] = None,
110 version: Version = None,
111 credentials: Dict[str, Any] = None,
112 fs_args: Dict[str, Any] = None,
113 ) -> None:
114 """Creates a new instance of ``ExcelDataSet`` pointing to a concrete Excel file
115 on a specific filesystem.
116
117 Args:
118 filepath: Filepath in POSIX format to a Excel file prefixed with a protocol like
119 `s3://`. If prefix is not provided, `file` protocol (local filesystem) will be used.
120 The prefix should be any protocol supported by ``fsspec``.
121 Note: `http(s)` doesn't support versioning.
122 engine: The engine used to write to excel files. The default
123 engine is 'openpyxl'.
124 load_args: Pandas options for loading Excel files.
125 Here you can find all available arguments:
126 https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html
127 All defaults are preserved, but "engine", which is set to "openpyxl".
128 Supports multi-sheet Excel files (include `sheet_name = None` in `load_args`).
129 save_args: Pandas options for saving Excel files.
130 Here you can find all available arguments:
131 https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html
132 All defaults are preserved, but "index", which is set to False.
133 If you would like to specify options for the `ExcelWriter`,
134 you can include them under the "writer" key. Here you can
135 find all available arguments:
136 https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.ExcelWriter.html
137 version: If specified, should be an instance of
138 ``kedro.io.core.Version``. If its ``load`` attribute is
139 None, the latest version will be loaded. If its ``save``
140 attribute is None, save version will be autogenerated.
141 credentials: Credentials required to get access to the underlying filesystem.
142 E.g. for ``GCSFileSystem`` it should look like `{"token": None}`.
143 fs_args: Extra arguments to pass into underlying filesystem class constructor
144 (e.g. `{"project": "my-project"}` for ``GCSFileSystem``).
145
146 Raises:
147 DataSetError: If versioning is enabled while in append mode.
148 """
149 _fs_args = deepcopy(fs_args) or {}
150 _credentials = deepcopy(credentials) or {}
151
152 protocol, path = get_protocol_and_path(filepath, version)
153 if protocol == "file":
154 _fs_args.setdefault("auto_mkdir", True)
155
156 self._protocol = protocol
157 self._storage_options = {**_credentials, **_fs_args}
158 self._fs = fsspec.filesystem(self._protocol, **self._storage_options)
159
160 super().__init__(
161 filepath=PurePosixPath(path),
162 version=version,
163 exists_function=self._fs.exists,
164 glob_function=self._fs.glob,
165 )
166
167 # Handle default load arguments
168 self._load_args = deepcopy(self.DEFAULT_LOAD_ARGS)
169 if load_args is not None:
170 self._load_args.update(load_args)
171
172 # Handle default save arguments
173 self._save_args = deepcopy(self.DEFAULT_SAVE_ARGS)
174 if save_args is not None:
175 self._save_args.update(save_args)
176 self._writer_args = self._save_args.pop("writer", {}) # type: ignore
177 self._writer_args.setdefault("engine", engine or "openpyxl") # type: ignore
178
179 if version and self._writer_args.get("mode") == "a": # type: ignore
180 raise DataSetError(
181 "'ExcelDataSet' doesn't support versioning in append mode."
182 )
183
184 if "storage_options" in self._save_args or "storage_options" in self._load_args:
185 logger.warning(
186 "Dropping 'storage_options' for %s, "
187 "please specify them under 'fs_args' or 'credentials'.",
188 self._filepath,
189 )
190 self._save_args.pop("storage_options", None)
191 self._load_args.pop("storage_options", None)
192
193 def _describe(self) -> Dict[str, Any]:
194 return dict(
195 filepath=self._filepath,
196 protocol=self._protocol,
197 load_args=self._load_args,
198 save_args=self._save_args,
199 writer_args=self._writer_args,
200 version=self._version,
201 )
202
203 def _load(self) -> Union[pd.DataFrame, Dict[str, pd.DataFrame]]:
204 load_path = str(self._get_load_path())
205 if self._protocol == "file":
206 # file:// protocol seems to misbehave on Windows
207 # (<urlopen error file not on local host>),
208 # so we don't join that back to the filepath;
209 # storage_options also don't work with local paths
210 return pd.read_excel(load_path, **self._load_args)
211
212 load_path = f"{self._protocol}{PROTOCOL_DELIMITER}{load_path}"
213 return pd.read_excel(
214 load_path, storage_options=self._storage_options, **self._load_args
215 )
216
217 def _save(self, data: Union[pd.DataFrame, Dict[str, pd.DataFrame]]) -> None:
218 output = BytesIO()
219 save_path = get_filepath_str(self._get_save_path(), self._protocol)
220
221 # pylint: disable=abstract-class-instantiated
222 with pd.ExcelWriter(output, **self._writer_args) as writer:
223 if isinstance(data, dict):
224 for sheet_name, sheet_data in data.items():
225 sheet_data.to_excel(
226 writer, sheet_name=sheet_name, **self._save_args
227 )
228 else:
229 data.to_excel(writer, **self._save_args)
230
231 with self._fs.open(save_path, mode="wb") as fs_file:
232 fs_file.write(output.getvalue())
233
234 self._invalidate_cache()
235
236 def _exists(self) -> bool:
237 try:
238 load_path = get_filepath_str(self._get_load_path(), self._protocol)
239 except DataSetError:
240 return False
241
242 return self._fs.exists(load_path)
243
244 def _release(self) -> None:
245 super()._release()
246 self._invalidate_cache()
247
248 def _invalidate_cache(self) -> None:
249 """Invalidate underlying filesystem caches."""
250 filepath = get_filepath_str(self._filepath, self._protocol)
251 self._fs.invalidate_cache(filepath)
252
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kedro/extras/datasets/pandas/excel_dataset.py b/kedro/extras/datasets/pandas/excel_dataset.py
--- a/kedro/extras/datasets/pandas/excel_dataset.py
+++ b/kedro/extras/datasets/pandas/excel_dataset.py
@@ -65,11 +65,11 @@
>>> reloaded = data_set.load()
>>> assert data.equals(reloaded)
- Note: To save a multi-sheet excel file, no special ``save_args`` are required.
+ Note: To save a multi-sheet Excel file, no special ``save_args`` are required.
Instead, return a dictionary of ``Dict[str, pd.DataFrame]`` where the string
keys are your sheet names.
- Example adding a catalog entry for multi-sheet excel file with the ``YAML API``:
+ Example adding a catalog entry for multi-sheet Excel file with the ``YAML API``:
.. code-block:: yaml
@@ -79,7 +79,7 @@
>>> load_args:
>>> sheet_name: [Sheet1, Sheet2, Sheet3]
- Example multi-sheet excel file using Python API:
+ Example multi-sheet Excel file using Python API:
::
>>> from kedro.extras.datasets.pandas import ExcelDataSet
@@ -119,7 +119,7 @@
`s3://`. If prefix is not provided, `file` protocol (local filesystem) will be used.
The prefix should be any protocol supported by ``fsspec``.
Note: `http(s)` doesn't support versioning.
- engine: The engine used to write to excel files. The default
+ engine: The engine used to write to Excel files. The default
engine is 'openpyxl'.
load_args: Pandas options for loading Excel files.
Here you can find all available arguments:
|
{"golden_diff": "diff --git a/kedro/extras/datasets/pandas/excel_dataset.py b/kedro/extras/datasets/pandas/excel_dataset.py\n--- a/kedro/extras/datasets/pandas/excel_dataset.py\n+++ b/kedro/extras/datasets/pandas/excel_dataset.py\n@@ -65,11 +65,11 @@\n >>> reloaded = data_set.load()\n >>> assert data.equals(reloaded)\n \n- Note: To save a multi-sheet excel file, no special ``save_args`` are required.\n+ Note: To save a multi-sheet Excel file, no special ``save_args`` are required.\n Instead, return a dictionary of ``Dict[str, pd.DataFrame]`` where the string\n keys are your sheet names.\n \n- Example adding a catalog entry for multi-sheet excel file with the ``YAML API``:\n+ Example adding a catalog entry for multi-sheet Excel file with the ``YAML API``:\n \n .. code-block:: yaml\n \n@@ -79,7 +79,7 @@\n >>> load_args:\n >>> sheet_name: [Sheet1, Sheet2, Sheet3]\n \n- Example multi-sheet excel file using Python API:\n+ Example multi-sheet Excel file using Python API:\n ::\n \n >>> from kedro.extras.datasets.pandas import ExcelDataSet\n@@ -119,7 +119,7 @@\n `s3://`. If prefix is not provided, `file` protocol (local filesystem) will be used.\n The prefix should be any protocol supported by ``fsspec``.\n Note: `http(s)` doesn't support versioning.\n- engine: The engine used to write to excel files. The default\n+ engine: The engine used to write to Excel files. The default\n engine is 'openpyxl'.\n load_args: Pandas options for loading Excel files.\n Here you can find all available arguments:\n", "issue": "Fix case in docs\nAs mentioned in https://github.com/kedro-org/kedro/pull/1760#pullrequestreview-1069581386_\r\n\r\nChange `excel`(lowercase) to `Excel`(uppercase).\n", "before_files": [{"content": "\"\"\"``ExcelDataSet`` loads/saves data from/to a Excel file using an underlying\nfilesystem (e.g.: local, S3, GCS). It uses pandas to handle the Excel file.\n\"\"\"\nimport logging\nfrom copy import deepcopy\nfrom io import BytesIO\nfrom pathlib import PurePosixPath\nfrom typing import Any, Dict, Union\n\nimport fsspec\nimport pandas as pd\n\nfrom kedro.io.core import (\n PROTOCOL_DELIMITER,\n AbstractVersionedDataSet,\n DataSetError,\n Version,\n get_filepath_str,\n get_protocol_and_path,\n)\n\nlogger = logging.getLogger(__name__)\n\n\nclass ExcelDataSet(\n AbstractVersionedDataSet[\n Union[pd.DataFrame, Dict[str, pd.DataFrame]],\n Union[pd.DataFrame, Dict[str, pd.DataFrame]],\n ]\n):\n \"\"\"``ExcelDataSet`` loads/saves data from/to a Excel file using an underlying\n filesystem (e.g.: local, S3, GCS). It uses pandas to handle the Excel file.\n\n Example adding a catalog entry with the ``YAML API``:\n\n .. 
code-block:: yaml\n\n >>> rockets:\n >>> type: pandas.ExcelDataSet\n >>> filepath: gcs://your_bucket/rockets.xlsx\n >>> fs_args:\n >>> project: my-project\n >>> credentials: my_gcp_credentials\n >>> save_args:\n >>> sheet_name: Sheet1\n >>> load_args:\n >>> sheet_name: Sheet1\n >>>\n >>> shuttles:\n >>> type: pandas.ExcelDataSet\n >>> filepath: data/01_raw/shuttles.xlsx\n\n Example using Python API:\n ::\n\n >>> from kedro.extras.datasets.pandas import ExcelDataSet\n >>> import pandas as pd\n >>>\n >>> data = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5],\n >>> 'col3': [5, 6]})\n >>>\n >>> # data_set = ExcelDataSet(filepath=\"gcs://bucket/test.xlsx\")\n >>> data_set = ExcelDataSet(filepath=\"test.xlsx\")\n >>> data_set.save(data)\n >>> reloaded = data_set.load()\n >>> assert data.equals(reloaded)\n\n Note: To save a multi-sheet excel file, no special ``save_args`` are required.\n Instead, return a dictionary of ``Dict[str, pd.DataFrame]`` where the string\n keys are your sheet names.\n\n Example adding a catalog entry for multi-sheet excel file with the ``YAML API``:\n\n .. code-block:: yaml\n\n >>> trains:\n >>> type: pandas.ExcelDataSet\n >>> filepath: data/02_intermediate/company/trains.xlsx\n >>> load_args:\n >>> sheet_name: [Sheet1, Sheet2, Sheet3]\n\n Example multi-sheet excel file using Python API:\n ::\n\n >>> from kedro.extras.datasets.pandas import ExcelDataSet\n >>> import pandas as pd\n >>>\n >>> dataframe = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5],\n >>> 'col3': [5, 6]})\n >>> another_dataframe = pd.DataFrame({\"x\": [10, 20], \"y\": [\"hello\", \"world\"]})\n >>> multiframe = {\"Sheet1\": dataframe, \"Sheet2\": another_dataframe}\n >>> data_set = ExcelDataSet(filepath=\"test.xlsx\", load_args = {\"sheet_name\": None})\n >>> data_set.save(multiframe)\n >>> reloaded = data_set.load()\n >>> assert multiframe[\"Sheet1\"].equals(reloaded[\"Sheet1\"])\n >>> assert multiframe[\"Sheet2\"].equals(reloaded[\"Sheet2\"])\n\n \"\"\"\n\n DEFAULT_LOAD_ARGS = {\"engine\": \"openpyxl\"}\n DEFAULT_SAVE_ARGS = {\"index\": False}\n\n # pylint: disable=too-many-arguments\n def __init__(\n self,\n filepath: str,\n engine: str = \"openpyxl\",\n load_args: Dict[str, Any] = None,\n save_args: Dict[str, Any] = None,\n version: Version = None,\n credentials: Dict[str, Any] = None,\n fs_args: Dict[str, Any] = None,\n ) -> None:\n \"\"\"Creates a new instance of ``ExcelDataSet`` pointing to a concrete Excel file\n on a specific filesystem.\n\n Args:\n filepath: Filepath in POSIX format to a Excel file prefixed with a protocol like\n `s3://`. If prefix is not provided, `file` protocol (local filesystem) will be used.\n The prefix should be any protocol supported by ``fsspec``.\n Note: `http(s)` doesn't support versioning.\n engine: The engine used to write to excel files. The default\n engine is 'openpyxl'.\n load_args: Pandas options for loading Excel files.\n Here you can find all available arguments:\n https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html\n All defaults are preserved, but \"engine\", which is set to \"openpyxl\".\n Supports multi-sheet Excel files (include `sheet_name = None` in `load_args`).\n save_args: Pandas options for saving Excel files.\n Here you can find all available arguments:\n https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html\n All defaults are preserved, but \"index\", which is set to False.\n If you would like to specify options for the `ExcelWriter`,\n you can include them under the \"writer\" key. 
Here you can\n find all available arguments:\n https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.ExcelWriter.html\n version: If specified, should be an instance of\n ``kedro.io.core.Version``. If its ``load`` attribute is\n None, the latest version will be loaded. If its ``save``\n attribute is None, save version will be autogenerated.\n credentials: Credentials required to get access to the underlying filesystem.\n E.g. for ``GCSFileSystem`` it should look like `{\"token\": None}`.\n fs_args: Extra arguments to pass into underlying filesystem class constructor\n (e.g. `{\"project\": \"my-project\"}` for ``GCSFileSystem``).\n\n Raises:\n DataSetError: If versioning is enabled while in append mode.\n \"\"\"\n _fs_args = deepcopy(fs_args) or {}\n _credentials = deepcopy(credentials) or {}\n\n protocol, path = get_protocol_and_path(filepath, version)\n if protocol == \"file\":\n _fs_args.setdefault(\"auto_mkdir\", True)\n\n self._protocol = protocol\n self._storage_options = {**_credentials, **_fs_args}\n self._fs = fsspec.filesystem(self._protocol, **self._storage_options)\n\n super().__init__(\n filepath=PurePosixPath(path),\n version=version,\n exists_function=self._fs.exists,\n glob_function=self._fs.glob,\n )\n\n # Handle default load arguments\n self._load_args = deepcopy(self.DEFAULT_LOAD_ARGS)\n if load_args is not None:\n self._load_args.update(load_args)\n\n # Handle default save arguments\n self._save_args = deepcopy(self.DEFAULT_SAVE_ARGS)\n if save_args is not None:\n self._save_args.update(save_args)\n self._writer_args = self._save_args.pop(\"writer\", {}) # type: ignore\n self._writer_args.setdefault(\"engine\", engine or \"openpyxl\") # type: ignore\n\n if version and self._writer_args.get(\"mode\") == \"a\": # type: ignore\n raise DataSetError(\n \"'ExcelDataSet' doesn't support versioning in append mode.\"\n )\n\n if \"storage_options\" in self._save_args or \"storage_options\" in self._load_args:\n logger.warning(\n \"Dropping 'storage_options' for %s, \"\n \"please specify them under 'fs_args' or 'credentials'.\",\n self._filepath,\n )\n self._save_args.pop(\"storage_options\", None)\n self._load_args.pop(\"storage_options\", None)\n\n def _describe(self) -> Dict[str, Any]:\n return dict(\n filepath=self._filepath,\n protocol=self._protocol,\n load_args=self._load_args,\n save_args=self._save_args,\n writer_args=self._writer_args,\n version=self._version,\n )\n\n def _load(self) -> Union[pd.DataFrame, Dict[str, pd.DataFrame]]:\n load_path = str(self._get_load_path())\n if self._protocol == \"file\":\n # file:// protocol seems to misbehave on Windows\n # (<urlopen error file not on local host>),\n # so we don't join that back to the filepath;\n # storage_options also don't work with local paths\n return pd.read_excel(load_path, **self._load_args)\n\n load_path = f\"{self._protocol}{PROTOCOL_DELIMITER}{load_path}\"\n return pd.read_excel(\n load_path, storage_options=self._storage_options, **self._load_args\n )\n\n def _save(self, data: Union[pd.DataFrame, Dict[str, pd.DataFrame]]) -> None:\n output = BytesIO()\n save_path = get_filepath_str(self._get_save_path(), self._protocol)\n\n # pylint: disable=abstract-class-instantiated\n with pd.ExcelWriter(output, **self._writer_args) as writer:\n if isinstance(data, dict):\n for sheet_name, sheet_data in data.items():\n sheet_data.to_excel(\n writer, sheet_name=sheet_name, **self._save_args\n )\n else:\n data.to_excel(writer, **self._save_args)\n\n with self._fs.open(save_path, mode=\"wb\") as fs_file:\n 
fs_file.write(output.getvalue())\n\n self._invalidate_cache()\n\n def _exists(self) -> bool:\n try:\n load_path = get_filepath_str(self._get_load_path(), self._protocol)\n except DataSetError:\n return False\n\n return self._fs.exists(load_path)\n\n def _release(self) -> None:\n super()._release()\n self._invalidate_cache()\n\n def _invalidate_cache(self) -> None:\n \"\"\"Invalidate underlying filesystem caches.\"\"\"\n filepath = get_filepath_str(self._filepath, self._protocol)\n self._fs.invalidate_cache(filepath)\n", "path": "kedro/extras/datasets/pandas/excel_dataset.py"}], "after_files": [{"content": "\"\"\"``ExcelDataSet`` loads/saves data from/to a Excel file using an underlying\nfilesystem (e.g.: local, S3, GCS). It uses pandas to handle the Excel file.\n\"\"\"\nimport logging\nfrom copy import deepcopy\nfrom io import BytesIO\nfrom pathlib import PurePosixPath\nfrom typing import Any, Dict, Union\n\nimport fsspec\nimport pandas as pd\n\nfrom kedro.io.core import (\n PROTOCOL_DELIMITER,\n AbstractVersionedDataSet,\n DataSetError,\n Version,\n get_filepath_str,\n get_protocol_and_path,\n)\n\nlogger = logging.getLogger(__name__)\n\n\nclass ExcelDataSet(\n AbstractVersionedDataSet[\n Union[pd.DataFrame, Dict[str, pd.DataFrame]],\n Union[pd.DataFrame, Dict[str, pd.DataFrame]],\n ]\n):\n \"\"\"``ExcelDataSet`` loads/saves data from/to a Excel file using an underlying\n filesystem (e.g.: local, S3, GCS). It uses pandas to handle the Excel file.\n\n Example adding a catalog entry with the ``YAML API``:\n\n .. code-block:: yaml\n\n >>> rockets:\n >>> type: pandas.ExcelDataSet\n >>> filepath: gcs://your_bucket/rockets.xlsx\n >>> fs_args:\n >>> project: my-project\n >>> credentials: my_gcp_credentials\n >>> save_args:\n >>> sheet_name: Sheet1\n >>> load_args:\n >>> sheet_name: Sheet1\n >>>\n >>> shuttles:\n >>> type: pandas.ExcelDataSet\n >>> filepath: data/01_raw/shuttles.xlsx\n\n Example using Python API:\n ::\n\n >>> from kedro.extras.datasets.pandas import ExcelDataSet\n >>> import pandas as pd\n >>>\n >>> data = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5],\n >>> 'col3': [5, 6]})\n >>>\n >>> # data_set = ExcelDataSet(filepath=\"gcs://bucket/test.xlsx\")\n >>> data_set = ExcelDataSet(filepath=\"test.xlsx\")\n >>> data_set.save(data)\n >>> reloaded = data_set.load()\n >>> assert data.equals(reloaded)\n\n Note: To save a multi-sheet Excel file, no special ``save_args`` are required.\n Instead, return a dictionary of ``Dict[str, pd.DataFrame]`` where the string\n keys are your sheet names.\n\n Example adding a catalog entry for multi-sheet Excel file with the ``YAML API``:\n\n .. 
code-block:: yaml\n\n >>> trains:\n >>> type: pandas.ExcelDataSet\n >>> filepath: data/02_intermediate/company/trains.xlsx\n >>> load_args:\n >>> sheet_name: [Sheet1, Sheet2, Sheet3]\n\n Example multi-sheet Excel file using Python API:\n ::\n\n >>> from kedro.extras.datasets.pandas import ExcelDataSet\n >>> import pandas as pd\n >>>\n >>> dataframe = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5],\n >>> 'col3': [5, 6]})\n >>> another_dataframe = pd.DataFrame({\"x\": [10, 20], \"y\": [\"hello\", \"world\"]})\n >>> multiframe = {\"Sheet1\": dataframe, \"Sheet2\": another_dataframe}\n >>> data_set = ExcelDataSet(filepath=\"test.xlsx\", load_args = {\"sheet_name\": None})\n >>> data_set.save(multiframe)\n >>> reloaded = data_set.load()\n >>> assert multiframe[\"Sheet1\"].equals(reloaded[\"Sheet1\"])\n >>> assert multiframe[\"Sheet2\"].equals(reloaded[\"Sheet2\"])\n\n \"\"\"\n\n DEFAULT_LOAD_ARGS = {\"engine\": \"openpyxl\"}\n DEFAULT_SAVE_ARGS = {\"index\": False}\n\n # pylint: disable=too-many-arguments\n def __init__(\n self,\n filepath: str,\n engine: str = \"openpyxl\",\n load_args: Dict[str, Any] = None,\n save_args: Dict[str, Any] = None,\n version: Version = None,\n credentials: Dict[str, Any] = None,\n fs_args: Dict[str, Any] = None,\n ) -> None:\n \"\"\"Creates a new instance of ``ExcelDataSet`` pointing to a concrete Excel file\n on a specific filesystem.\n\n Args:\n filepath: Filepath in POSIX format to a Excel file prefixed with a protocol like\n `s3://`. If prefix is not provided, `file` protocol (local filesystem) will be used.\n The prefix should be any protocol supported by ``fsspec``.\n Note: `http(s)` doesn't support versioning.\n engine: The engine used to write to Excel files. The default\n engine is 'openpyxl'.\n load_args: Pandas options for loading Excel files.\n Here you can find all available arguments:\n https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html\n All defaults are preserved, but \"engine\", which is set to \"openpyxl\".\n Supports multi-sheet Excel files (include `sheet_name = None` in `load_args`).\n save_args: Pandas options for saving Excel files.\n Here you can find all available arguments:\n https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html\n All defaults are preserved, but \"index\", which is set to False.\n If you would like to specify options for the `ExcelWriter`,\n you can include them under the \"writer\" key. Here you can\n find all available arguments:\n https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.ExcelWriter.html\n version: If specified, should be an instance of\n ``kedro.io.core.Version``. If its ``load`` attribute is\n None, the latest version will be loaded. If its ``save``\n attribute is None, save version will be autogenerated.\n credentials: Credentials required to get access to the underlying filesystem.\n E.g. for ``GCSFileSystem`` it should look like `{\"token\": None}`.\n fs_args: Extra arguments to pass into underlying filesystem class constructor\n (e.g. 
`{\"project\": \"my-project\"}` for ``GCSFileSystem``).\n\n Raises:\n DataSetError: If versioning is enabled while in append mode.\n \"\"\"\n _fs_args = deepcopy(fs_args) or {}\n _credentials = deepcopy(credentials) or {}\n\n protocol, path = get_protocol_and_path(filepath, version)\n if protocol == \"file\":\n _fs_args.setdefault(\"auto_mkdir\", True)\n\n self._protocol = protocol\n self._storage_options = {**_credentials, **_fs_args}\n self._fs = fsspec.filesystem(self._protocol, **self._storage_options)\n\n super().__init__(\n filepath=PurePosixPath(path),\n version=version,\n exists_function=self._fs.exists,\n glob_function=self._fs.glob,\n )\n\n # Handle default load arguments\n self._load_args = deepcopy(self.DEFAULT_LOAD_ARGS)\n if load_args is not None:\n self._load_args.update(load_args)\n\n # Handle default save arguments\n self._save_args = deepcopy(self.DEFAULT_SAVE_ARGS)\n if save_args is not None:\n self._save_args.update(save_args)\n self._writer_args = self._save_args.pop(\"writer\", {}) # type: ignore\n self._writer_args.setdefault(\"engine\", engine or \"openpyxl\") # type: ignore\n\n if version and self._writer_args.get(\"mode\") == \"a\": # type: ignore\n raise DataSetError(\n \"'ExcelDataSet' doesn't support versioning in append mode.\"\n )\n\n if \"storage_options\" in self._save_args or \"storage_options\" in self._load_args:\n logger.warning(\n \"Dropping 'storage_options' for %s, \"\n \"please specify them under 'fs_args' or 'credentials'.\",\n self._filepath,\n )\n self._save_args.pop(\"storage_options\", None)\n self._load_args.pop(\"storage_options\", None)\n\n def _describe(self) -> Dict[str, Any]:\n return dict(\n filepath=self._filepath,\n protocol=self._protocol,\n load_args=self._load_args,\n save_args=self._save_args,\n writer_args=self._writer_args,\n version=self._version,\n )\n\n def _load(self) -> Union[pd.DataFrame, Dict[str, pd.DataFrame]]:\n load_path = str(self._get_load_path())\n if self._protocol == \"file\":\n # file:// protocol seems to misbehave on Windows\n # (<urlopen error file not on local host>),\n # so we don't join that back to the filepath;\n # storage_options also don't work with local paths\n return pd.read_excel(load_path, **self._load_args)\n\n load_path = f\"{self._protocol}{PROTOCOL_DELIMITER}{load_path}\"\n return pd.read_excel(\n load_path, storage_options=self._storage_options, **self._load_args\n )\n\n def _save(self, data: Union[pd.DataFrame, Dict[str, pd.DataFrame]]) -> None:\n output = BytesIO()\n save_path = get_filepath_str(self._get_save_path(), self._protocol)\n\n # pylint: disable=abstract-class-instantiated\n with pd.ExcelWriter(output, **self._writer_args) as writer:\n if isinstance(data, dict):\n for sheet_name, sheet_data in data.items():\n sheet_data.to_excel(\n writer, sheet_name=sheet_name, **self._save_args\n )\n else:\n data.to_excel(writer, **self._save_args)\n\n with self._fs.open(save_path, mode=\"wb\") as fs_file:\n fs_file.write(output.getvalue())\n\n self._invalidate_cache()\n\n def _exists(self) -> bool:\n try:\n load_path = get_filepath_str(self._get_load_path(), self._protocol)\n except DataSetError:\n return False\n\n return self._fs.exists(load_path)\n\n def _release(self) -> None:\n super()._release()\n self._invalidate_cache()\n\n def _invalidate_cache(self) -> None:\n \"\"\"Invalidate underlying filesystem caches.\"\"\"\n filepath = get_filepath_str(self._filepath, self._protocol)\n self._fs.invalidate_cache(filepath)\n", "path": "kedro/extras/datasets/pandas/excel_dataset.py"}]}
| 3,218 | 414 |
gh_patches_debug_16803 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-2836 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mitmproxy crashes, when trying to send cuts to the clipboard without copy/paste mechanism for system
##### Steps to reproduce the problem:
1. Run mitmproxy.
2. Press `Ctrl l`
3. Input `: cut.clip 1 2 `. Press `Enter`.
Traceback:
```
Traceback (most recent call last):
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/master.py", line 216, in run
self.loop.run()
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 278, in run
self._run()
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 376, in _run
self.event_loop.run()
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 682, in run
self._loop()
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 719, in _loop
self._watch_files[fd]()
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/raw_display.py", line 393, in <lambda>
event_loop, callback, self.get_available_raw_input())
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/raw_display.py", line 493, in parse_input
callback(processed, processed_codes)
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 403, in _update
self.process_input(keys)
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py", line 503, in process_input
k = self._topmost_widget.keypress(self.screen_size, k)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py", line 309, in keypress
k = super().keypress(size, k)
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py", line 1116, in keypress
return self.footer.keypress((maxcol,),key)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/statusbar.py", line 149, in keypress
return self.ab.keypress(*args, **kwargs)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/statusbar.py", line 104, in keypress
self.prompt_execute(self._w.get_edit_text())
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/statusbar.py", line 124, in prompt_execute
msg = p(txt)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/commandexecutor.py", line 17, in __call__
ret = self.master.commands.call(cmd)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py", line 221, in call
return self.call_args(parts[0], parts[1:])
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py", line 212, in call_args
return self.commands[path].call(args)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py", line 101, in call
ret = self.func(*pargs)
File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/master.py", line 68, in handlecontext
yield
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py", line 101, in call
ret = self.func(*pargs)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py", line 251, in wrapper
return function(*args, **kwargs)
File "/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/addons/cut.py", line 139, in clip
pyperclip.copy(fp.getvalue())
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/pyperclip/__init__.py", line 574, in lazy_load_stub_copy
return copy(text)
File "/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/pyperclip/__init__.py", line 284, in __call__
raise PyperclipException(EXCEPT_MSG)
pyperclip.PyperclipException:
Pyperclip could not find a copy/paste mechanism for your system.
For more information, please visit https://pyperclip.readthedocs.io/en/latest/introduction.html#not-implemented-error
```
##### Any other comments? What have you tried so far?
As traceback says, the issue is relevant for Linux without installed copy/paste mechanism and can be fixed using this https://pyperclip.readthedocs.io/en/latest/introduction.html#not-implemented-error.
I think we must just handle it properly.
##### System information
Mitmproxy: 3.0.0.dev64 (commit 6dd336f)
Python: 3.5.2
OpenSSL: OpenSSL 1.1.0g 2 Nov 2017
Platform: Linux-4.4.0-112-generic-x86_64-with-Ubuntu-16.04-xenial
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/addons/export.py`
Content:
```
1 import typing
2
3 from mitmproxy import ctx
4 from mitmproxy import command
5 from mitmproxy import flow
6 from mitmproxy import exceptions
7 from mitmproxy.utils import strutils
8 from mitmproxy.net.http.http1 import assemble
9 import mitmproxy.types
10
11 import pyperclip
12
13
14 def curl_command(f: flow.Flow) -> str:
15 if not hasattr(f, "request"):
16 raise exceptions.CommandError("Can't export flow with no request.")
17 data = "curl "
18 request = f.request.copy() # type: ignore
19 request.decode(strict=False)
20 for k, v in request.headers.items(multi=True):
21 data += "-H '%s:%s' " % (k, v)
22 if request.method != "GET":
23 data += "-X %s " % request.method
24 data += "'%s'" % request.url
25 if request.content:
26 data += " --data-binary '%s'" % strutils.bytes_to_escaped_str(
27 request.content,
28 escape_single_quotes=True
29 )
30 return data
31
32
33 def raw(f: flow.Flow) -> bytes:
34 if not hasattr(f, "request"):
35 raise exceptions.CommandError("Can't export flow with no request.")
36 return assemble.assemble_request(f.request) # type: ignore
37
38
39 formats = dict(
40 curl = curl_command,
41 raw = raw,
42 )
43
44
45 class Export():
46 @command.command("export.formats")
47 def formats(self) -> typing.Sequence[str]:
48 """
49 Return a list of the supported export formats.
50 """
51 return list(sorted(formats.keys()))
52
53 @command.command("export.file")
54 def file(self, fmt: str, f: flow.Flow, path: mitmproxy.types.Path) -> None:
55 """
56 Export a flow to path.
57 """
58 if fmt not in formats:
59 raise exceptions.CommandError("No such export format: %s" % fmt)
60 func = formats[fmt] # type: typing.Any
61 v = func(f)
62 try:
63 with open(path, "wb") as fp:
64 if isinstance(v, bytes):
65 fp.write(v)
66 else:
67 fp.write(v.encode("utf-8"))
68 except IOError as e:
69 ctx.log.error(str(e))
70
71 @command.command("export.clip")
72 def clip(self, fmt: str, f: flow.Flow) -> None:
73 """
74 Export a flow to the system clipboard.
75 """
76 if fmt not in formats:
77 raise exceptions.CommandError("No such export format: %s" % fmt)
78 func = formats[fmt] # type: typing.Any
79 v = strutils.always_str(func(f))
80 pyperclip.copy(v)
81
```
Path: `mitmproxy/addons/cut.py`
Content:
```
1 import io
2 import csv
3 import typing
4 from mitmproxy import command
5 from mitmproxy import exceptions
6 from mitmproxy import flow
7 from mitmproxy import ctx
8 from mitmproxy import certs
9 from mitmproxy.utils import strutils
10 import mitmproxy.types
11
12 import pyperclip
13
14
15 def headername(spec: str):
16 if not (spec.startswith("header[") and spec.endswith("]")):
17 raise exceptions.CommandError("Invalid header spec: %s" % spec)
18 return spec[len("header["):-1].strip()
19
20
21 def is_addr(v):
22 return isinstance(v, tuple) and len(v) > 1
23
24
25 def extract(cut: str, f: flow.Flow) -> typing.Union[str, bytes]:
26 path = cut.split(".")
27 current = f # type: typing.Any
28 for i, spec in enumerate(path):
29 if spec.startswith("_"):
30 raise exceptions.CommandError("Can't access internal attribute %s" % spec)
31
32 part = getattr(current, spec, None)
33 if i == len(path) - 1:
34 if spec == "port" and is_addr(current):
35 return str(current[1])
36 if spec == "host" and is_addr(current):
37 return str(current[0])
38 elif spec.startswith("header["):
39 if not current:
40 return ""
41 return current.headers.get(headername(spec), "")
42 elif isinstance(part, bytes):
43 return part
44 elif isinstance(part, bool):
45 return "true" if part else "false"
46 elif isinstance(part, certs.Cert):
47 return part.to_pem().decode("ascii")
48 current = part
49 return str(current or "")
50
51
52 class Cut:
53 @command.command("cut")
54 def cut(
55 self,
56 flows: typing.Sequence[flow.Flow],
57 cuts: mitmproxy.types.CutSpec,
58 ) -> mitmproxy.types.Data:
59 """
60 Cut data from a set of flows. Cut specifications are attribute paths
61 from the base of the flow object, with a few conveniences - "port"
62 and "host" retrieve parts of an address tuple, ".header[key]"
63 retrieves a header value. Return values converted to strings or
64 bytes: SSL certicates are converted to PEM format, bools are "true"
65 or "false", "bytes" are preserved, and all other values are
66 converted to strings.
67 """
68 ret = [] # type:typing.List[typing.List[typing.Union[str, bytes]]]
69 for f in flows:
70 ret.append([extract(c, f) for c in cuts])
71 return ret # type: ignore
72
73 @command.command("cut.save")
74 def save(
75 self,
76 flows: typing.Sequence[flow.Flow],
77 cuts: mitmproxy.types.CutSpec,
78 path: mitmproxy.types.Path
79 ) -> None:
80 """
81 Save cuts to file. If there are multiple flows or cuts, the format
82 is UTF-8 encoded CSV. If there is exactly one row and one column,
83 the data is written to file as-is, with raw bytes preserved. If the
84 path is prefixed with a "+", values are appended if there is an
85 existing file.
86 """
87 append = False
88 if path.startswith("+"):
89 append = True
90 path = mitmproxy.types.Path(path[1:])
91 try:
92 if len(cuts) == 1 and len(flows) == 1:
93 with open(path, "ab" if append else "wb") as fp:
94 if fp.tell() > 0:
95 # We're appending to a file that already exists and has content
96 fp.write(b"\n")
97 v = extract(cuts[0], flows[0])
98 if isinstance(v, bytes):
99 fp.write(v)
100 else:
101 fp.write(v.encode("utf8"))
102 ctx.log.alert("Saved single cut.")
103 else:
104 with open(path, "a" if append else "w", newline='', encoding="utf8") as fp:
105 writer = csv.writer(fp)
106 for f in flows:
107 vals = [extract(c, f) for c in cuts]
108 writer.writerow(
109 [strutils.always_str(x) or "" for x in vals] # type: ignore
110 )
111 ctx.log.alert("Saved %s cuts over %d flows as CSV." % (len(cuts), len(flows)))
112 except IOError as e:
113 ctx.log.error(str(e))
114
115 @command.command("cut.clip")
116 def clip(
117 self,
118 flows: typing.Sequence[flow.Flow],
119 cuts: mitmproxy.types.CutSpec,
120 ) -> None:
121 """
122 Send cuts to the clipboard. If there are multiple flows or cuts, the
123 format is UTF-8 encoded CSV. If there is exactly one row and one
124 column, the data is written to file as-is, with raw bytes preserved.
125 """
126 fp = io.StringIO(newline="")
127 if len(cuts) == 1 and len(flows) == 1:
128 v = extract(cuts[0], flows[0])
129 if isinstance(v, bytes):
130 fp.write(strutils.always_str(v))
131 else:
132 fp.write("utf8")
133 ctx.log.alert("Clipped single cut.")
134 else:
135 writer = csv.writer(fp)
136 for f in flows:
137 vals = [extract(c, f) for c in cuts]
138 writer.writerow(
139 [strutils.always_str(v) or "" for v in vals] # type: ignore
140 )
141 ctx.log.alert("Clipped %s cuts as CSV." % len(cuts))
142 pyperclip.copy(fp.getvalue())
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mitmproxy/addons/cut.py b/mitmproxy/addons/cut.py
--- a/mitmproxy/addons/cut.py
+++ b/mitmproxy/addons/cut.py
@@ -139,4 +139,7 @@
[strutils.always_str(v) or "" for v in vals] # type: ignore
)
ctx.log.alert("Clipped %s cuts as CSV." % len(cuts))
- pyperclip.copy(fp.getvalue())
+ try:
+ pyperclip.copy(fp.getvalue())
+ except pyperclip.PyperclipException as e:
+ ctx.log.error(str(e))
diff --git a/mitmproxy/addons/export.py b/mitmproxy/addons/export.py
--- a/mitmproxy/addons/export.py
+++ b/mitmproxy/addons/export.py
@@ -77,4 +77,7 @@
raise exceptions.CommandError("No such export format: %s" % fmt)
func = formats[fmt] # type: typing.Any
v = strutils.always_str(func(f))
- pyperclip.copy(v)
+ try:
+ pyperclip.copy(v)
+ except pyperclip.PyperclipException as e:
+ ctx.log.error(str(e))
|
{"golden_diff": "diff --git a/mitmproxy/addons/cut.py b/mitmproxy/addons/cut.py\n--- a/mitmproxy/addons/cut.py\n+++ b/mitmproxy/addons/cut.py\n@@ -139,4 +139,7 @@\n [strutils.always_str(v) or \"\" for v in vals] # type: ignore\n )\n ctx.log.alert(\"Clipped %s cuts as CSV.\" % len(cuts))\n- pyperclip.copy(fp.getvalue())\n+ try:\n+ pyperclip.copy(fp.getvalue())\n+ except pyperclip.PyperclipException as e:\n+ ctx.log.error(str(e))\ndiff --git a/mitmproxy/addons/export.py b/mitmproxy/addons/export.py\n--- a/mitmproxy/addons/export.py\n+++ b/mitmproxy/addons/export.py\n@@ -77,4 +77,7 @@\n raise exceptions.CommandError(\"No such export format: %s\" % fmt)\n func = formats[fmt] # type: typing.Any\n v = strutils.always_str(func(f))\n- pyperclip.copy(v)\n+ try:\n+ pyperclip.copy(v)\n+ except pyperclip.PyperclipException as e:\n+ ctx.log.error(str(e))\n", "issue": "Mitmproxy crashes, when trying to send cuts to the clipboard without copy/paste mechanism for system\n##### Steps to reproduce the problem:\r\n\r\n1. Run mitmproxy.\r\n2. Press `Ctrl l`\r\n3. Input `: cut.clip 1 2 `. Press `Enter`.\r\n\r\nTraceback:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/master.py\", line 216, in run\r\n self.loop.run()\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py\", line 278, in run\r\n self._run()\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py\", line 376, in _run\r\n self.event_loop.run()\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py\", line 682, in run\r\n self._loop()\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py\", line 719, in _loop\r\n self._watch_files[fd]()\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/raw_display.py\", line 393, in <lambda>\r\n event_loop, callback, self.get_available_raw_input())\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/raw_display.py\", line 493, in parse_input\r\n callback(processed, processed_codes)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py\", line 403, in _update\r\n self.process_input(keys)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/main_loop.py\", line 503, in process_input\r\n k = self._topmost_widget.keypress(self.screen_size, k)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/window.py\", line 309, in keypress\r\n k = super().keypress(size, k)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/urwid/container.py\", line 1116, in keypress\r\n return self.footer.keypress((maxcol,),key)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/statusbar.py\", line 149, in keypress\r\n return self.ab.keypress(*args, **kwargs)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/statusbar.py\", line 104, in keypress\r\n self.prompt_execute(self._w.get_edit_text())\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/statusbar.py\", line 124, in prompt_execute\r\n msg = p(txt)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/tools/console/commandexecutor.py\", line 17, in __call__\r\n ret = self.master.commands.call(cmd)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py\", line 221, in call\r\n return self.call_args(parts[0], parts[1:])\r\n File 
\"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py\", line 212, in call_args\r\n return self.commands[path].call(args)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py\", line 101, in call\r\n ret = self.func(*pargs)\r\n File \"/usr/lib/python3.5/contextlib.py\", line 77, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/master.py\", line 68, in handlecontext\r\n yield\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py\", line 101, in call\r\n ret = self.func(*pargs)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/command.py\", line 251, in wrapper\r\n return function(*args, **kwargs)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/mitmproxy/addons/cut.py\", line 139, in clip\r\n pyperclip.copy(fp.getvalue())\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/pyperclip/__init__.py\", line 574, in lazy_load_stub_copy\r\n return copy(text)\r\n File \"/home/kajoj/Mitmproxy/mitmproxy/venv/lib/python3.5/site-packages/pyperclip/__init__.py\", line 284, in __call__\r\n raise PyperclipException(EXCEPT_MSG)\r\npyperclip.PyperclipException: \r\n Pyperclip could not find a copy/paste mechanism for your system.\r\n For more information, please visit https://pyperclip.readthedocs.io/en/latest/introduction.html#not-implemented-error \r\n```\r\n\r\n##### Any other comments? What have you tried so far?\r\n\r\nAs traceback says, the issue is relevant for Linux without installed copy/paste mechanism and can be fixed using this https://pyperclip.readthedocs.io/en/latest/introduction.html#not-implemented-error.\r\nI think we must just handle it properly.\r\n\r\n\r\n##### System information\r\n\r\nMitmproxy: 3.0.0.dev64 (commit 6dd336f) \r\nPython: 3.5.2\r\nOpenSSL: OpenSSL 1.1.0g 2 Nov 2017\r\nPlatform: Linux-4.4.0-112-generic-x86_64-with-Ubuntu-16.04-xenial\r\n\n", "before_files": [{"content": "import typing\n\nfrom mitmproxy import ctx\nfrom mitmproxy import command\nfrom mitmproxy import flow\nfrom mitmproxy import exceptions\nfrom mitmproxy.utils import strutils\nfrom mitmproxy.net.http.http1 import assemble\nimport mitmproxy.types\n\nimport pyperclip\n\n\ndef curl_command(f: flow.Flow) -> str:\n if not hasattr(f, \"request\"):\n raise exceptions.CommandError(\"Can't export flow with no request.\")\n data = \"curl \"\n request = f.request.copy() # type: ignore\n request.decode(strict=False)\n for k, v in request.headers.items(multi=True):\n data += \"-H '%s:%s' \" % (k, v)\n if request.method != \"GET\":\n data += \"-X %s \" % request.method\n data += \"'%s'\" % request.url\n if request.content:\n data += \" --data-binary '%s'\" % strutils.bytes_to_escaped_str(\n request.content,\n escape_single_quotes=True\n )\n return data\n\n\ndef raw(f: flow.Flow) -> bytes:\n if not hasattr(f, \"request\"):\n raise exceptions.CommandError(\"Can't export flow with no request.\")\n return assemble.assemble_request(f.request) # type: ignore\n\n\nformats = dict(\n curl = curl_command,\n raw = raw,\n)\n\n\nclass Export():\n @command.command(\"export.formats\")\n def formats(self) -> typing.Sequence[str]:\n \"\"\"\n Return a list of the supported export formats.\n \"\"\"\n return list(sorted(formats.keys()))\n\n @command.command(\"export.file\")\n def file(self, fmt: str, f: flow.Flow, path: mitmproxy.types.Path) -> None:\n \"\"\"\n Export a flow to path.\n \"\"\"\n if fmt not in formats:\n raise exceptions.CommandError(\"No such export format: %s\" % fmt)\n func = formats[fmt] # type: typing.Any\n v = 
func(f)\n try:\n with open(path, \"wb\") as fp:\n if isinstance(v, bytes):\n fp.write(v)\n else:\n fp.write(v.encode(\"utf-8\"))\n except IOError as e:\n ctx.log.error(str(e))\n\n @command.command(\"export.clip\")\n def clip(self, fmt: str, f: flow.Flow) -> None:\n \"\"\"\n Export a flow to the system clipboard.\n \"\"\"\n if fmt not in formats:\n raise exceptions.CommandError(\"No such export format: %s\" % fmt)\n func = formats[fmt] # type: typing.Any\n v = strutils.always_str(func(f))\n pyperclip.copy(v)\n", "path": "mitmproxy/addons/export.py"}, {"content": "import io\nimport csv\nimport typing\nfrom mitmproxy import command\nfrom mitmproxy import exceptions\nfrom mitmproxy import flow\nfrom mitmproxy import ctx\nfrom mitmproxy import certs\nfrom mitmproxy.utils import strutils\nimport mitmproxy.types\n\nimport pyperclip\n\n\ndef headername(spec: str):\n if not (spec.startswith(\"header[\") and spec.endswith(\"]\")):\n raise exceptions.CommandError(\"Invalid header spec: %s\" % spec)\n return spec[len(\"header[\"):-1].strip()\n\n\ndef is_addr(v):\n return isinstance(v, tuple) and len(v) > 1\n\n\ndef extract(cut: str, f: flow.Flow) -> typing.Union[str, bytes]:\n path = cut.split(\".\")\n current = f # type: typing.Any\n for i, spec in enumerate(path):\n if spec.startswith(\"_\"):\n raise exceptions.CommandError(\"Can't access internal attribute %s\" % spec)\n\n part = getattr(current, spec, None)\n if i == len(path) - 1:\n if spec == \"port\" and is_addr(current):\n return str(current[1])\n if spec == \"host\" and is_addr(current):\n return str(current[0])\n elif spec.startswith(\"header[\"):\n if not current:\n return \"\"\n return current.headers.get(headername(spec), \"\")\n elif isinstance(part, bytes):\n return part\n elif isinstance(part, bool):\n return \"true\" if part else \"false\"\n elif isinstance(part, certs.Cert):\n return part.to_pem().decode(\"ascii\")\n current = part\n return str(current or \"\")\n\n\nclass Cut:\n @command.command(\"cut\")\n def cut(\n self,\n flows: typing.Sequence[flow.Flow],\n cuts: mitmproxy.types.CutSpec,\n ) -> mitmproxy.types.Data:\n \"\"\"\n Cut data from a set of flows. Cut specifications are attribute paths\n from the base of the flow object, with a few conveniences - \"port\"\n and \"host\" retrieve parts of an address tuple, \".header[key]\"\n retrieves a header value. Return values converted to strings or\n bytes: SSL certicates are converted to PEM format, bools are \"true\"\n or \"false\", \"bytes\" are preserved, and all other values are\n converted to strings.\n \"\"\"\n ret = [] # type:typing.List[typing.List[typing.Union[str, bytes]]]\n for f in flows:\n ret.append([extract(c, f) for c in cuts])\n return ret # type: ignore\n\n @command.command(\"cut.save\")\n def save(\n self,\n flows: typing.Sequence[flow.Flow],\n cuts: mitmproxy.types.CutSpec,\n path: mitmproxy.types.Path\n ) -> None:\n \"\"\"\n Save cuts to file. If there are multiple flows or cuts, the format\n is UTF-8 encoded CSV. If there is exactly one row and one column,\n the data is written to file as-is, with raw bytes preserved. 
If the\n path is prefixed with a \"+\", values are appended if there is an\n existing file.\n \"\"\"\n append = False\n if path.startswith(\"+\"):\n append = True\n path = mitmproxy.types.Path(path[1:])\n try:\n if len(cuts) == 1 and len(flows) == 1:\n with open(path, \"ab\" if append else \"wb\") as fp:\n if fp.tell() > 0:\n # We're appending to a file that already exists and has content\n fp.write(b\"\\n\")\n v = extract(cuts[0], flows[0])\n if isinstance(v, bytes):\n fp.write(v)\n else:\n fp.write(v.encode(\"utf8\"))\n ctx.log.alert(\"Saved single cut.\")\n else:\n with open(path, \"a\" if append else \"w\", newline='', encoding=\"utf8\") as fp:\n writer = csv.writer(fp)\n for f in flows:\n vals = [extract(c, f) for c in cuts]\n writer.writerow(\n [strutils.always_str(x) or \"\" for x in vals] # type: ignore\n )\n ctx.log.alert(\"Saved %s cuts over %d flows as CSV.\" % (len(cuts), len(flows)))\n except IOError as e:\n ctx.log.error(str(e))\n\n @command.command(\"cut.clip\")\n def clip(\n self,\n flows: typing.Sequence[flow.Flow],\n cuts: mitmproxy.types.CutSpec,\n ) -> None:\n \"\"\"\n Send cuts to the clipboard. If there are multiple flows or cuts, the\n format is UTF-8 encoded CSV. If there is exactly one row and one\n column, the data is written to file as-is, with raw bytes preserved.\n \"\"\"\n fp = io.StringIO(newline=\"\")\n if len(cuts) == 1 and len(flows) == 1:\n v = extract(cuts[0], flows[0])\n if isinstance(v, bytes):\n fp.write(strutils.always_str(v))\n else:\n fp.write(\"utf8\")\n ctx.log.alert(\"Clipped single cut.\")\n else:\n writer = csv.writer(fp)\n for f in flows:\n vals = [extract(c, f) for c in cuts]\n writer.writerow(\n [strutils.always_str(v) or \"\" for v in vals] # type: ignore\n )\n ctx.log.alert(\"Clipped %s cuts as CSV.\" % len(cuts))\n pyperclip.copy(fp.getvalue())\n", "path": "mitmproxy/addons/cut.py"}], "after_files": [{"content": "import typing\n\nfrom mitmproxy import ctx\nfrom mitmproxy import command\nfrom mitmproxy import flow\nfrom mitmproxy import exceptions\nfrom mitmproxy.utils import strutils\nfrom mitmproxy.net.http.http1 import assemble\nimport mitmproxy.types\n\nimport pyperclip\n\n\ndef curl_command(f: flow.Flow) -> str:\n if not hasattr(f, \"request\"):\n raise exceptions.CommandError(\"Can't export flow with no request.\")\n data = \"curl \"\n request = f.request.copy() # type: ignore\n request.decode(strict=False)\n for k, v in request.headers.items(multi=True):\n data += \"-H '%s:%s' \" % (k, v)\n if request.method != \"GET\":\n data += \"-X %s \" % request.method\n data += \"'%s'\" % request.url\n if request.content:\n data += \" --data-binary '%s'\" % strutils.bytes_to_escaped_str(\n request.content,\n escape_single_quotes=True\n )\n return data\n\n\ndef raw(f: flow.Flow) -> bytes:\n if not hasattr(f, \"request\"):\n raise exceptions.CommandError(\"Can't export flow with no request.\")\n return assemble.assemble_request(f.request) # type: ignore\n\n\nformats = dict(\n curl = curl_command,\n raw = raw,\n)\n\n\nclass Export():\n @command.command(\"export.formats\")\n def formats(self) -> typing.Sequence[str]:\n \"\"\"\n Return a list of the supported export formats.\n \"\"\"\n return list(sorted(formats.keys()))\n\n @command.command(\"export.file\")\n def file(self, fmt: str, f: flow.Flow, path: mitmproxy.types.Path) -> None:\n \"\"\"\n Export a flow to path.\n \"\"\"\n if fmt not in formats:\n raise exceptions.CommandError(\"No such export format: %s\" % fmt)\n func = formats[fmt] # type: typing.Any\n v = func(f)\n try:\n with open(path, 
\"wb\") as fp:\n if isinstance(v, bytes):\n fp.write(v)\n else:\n fp.write(v.encode(\"utf-8\"))\n except IOError as e:\n ctx.log.error(str(e))\n\n @command.command(\"export.clip\")\n def clip(self, fmt: str, f: flow.Flow) -> None:\n \"\"\"\n Export a flow to the system clipboard.\n \"\"\"\n if fmt not in formats:\n raise exceptions.CommandError(\"No such export format: %s\" % fmt)\n func = formats[fmt] # type: typing.Any\n v = strutils.always_str(func(f))\n try:\n pyperclip.copy(v)\n except pyperclip.PyperclipException as e:\n ctx.log.error(str(e))\n", "path": "mitmproxy/addons/export.py"}, {"content": "import io\nimport csv\nimport typing\nfrom mitmproxy import command\nfrom mitmproxy import exceptions\nfrom mitmproxy import flow\nfrom mitmproxy import ctx\nfrom mitmproxy import certs\nfrom mitmproxy.utils import strutils\nimport mitmproxy.types\n\nimport pyperclip\n\n\ndef headername(spec: str):\n if not (spec.startswith(\"header[\") and spec.endswith(\"]\")):\n raise exceptions.CommandError(\"Invalid header spec: %s\" % spec)\n return spec[len(\"header[\"):-1].strip()\n\n\ndef is_addr(v):\n return isinstance(v, tuple) and len(v) > 1\n\n\ndef extract(cut: str, f: flow.Flow) -> typing.Union[str, bytes]:\n path = cut.split(\".\")\n current = f # type: typing.Any\n for i, spec in enumerate(path):\n if spec.startswith(\"_\"):\n raise exceptions.CommandError(\"Can't access internal attribute %s\" % spec)\n\n part = getattr(current, spec, None)\n if i == len(path) - 1:\n if spec == \"port\" and is_addr(current):\n return str(current[1])\n if spec == \"host\" and is_addr(current):\n return str(current[0])\n elif spec.startswith(\"header[\"):\n if not current:\n return \"\"\n return current.headers.get(headername(spec), \"\")\n elif isinstance(part, bytes):\n return part\n elif isinstance(part, bool):\n return \"true\" if part else \"false\"\n elif isinstance(part, certs.Cert):\n return part.to_pem().decode(\"ascii\")\n current = part\n return str(current or \"\")\n\n\nclass Cut:\n @command.command(\"cut\")\n def cut(\n self,\n flows: typing.Sequence[flow.Flow],\n cuts: mitmproxy.types.CutSpec,\n ) -> mitmproxy.types.Data:\n \"\"\"\n Cut data from a set of flows. Cut specifications are attribute paths\n from the base of the flow object, with a few conveniences - \"port\"\n and \"host\" retrieve parts of an address tuple, \".header[key]\"\n retrieves a header value. Return values converted to strings or\n bytes: SSL certicates are converted to PEM format, bools are \"true\"\n or \"false\", \"bytes\" are preserved, and all other values are\n converted to strings.\n \"\"\"\n ret = [] # type:typing.List[typing.List[typing.Union[str, bytes]]]\n for f in flows:\n ret.append([extract(c, f) for c in cuts])\n return ret # type: ignore\n\n @command.command(\"cut.save\")\n def save(\n self,\n flows: typing.Sequence[flow.Flow],\n cuts: mitmproxy.types.CutSpec,\n path: mitmproxy.types.Path\n ) -> None:\n \"\"\"\n Save cuts to file. If there are multiple flows or cuts, the format\n is UTF-8 encoded CSV. If there is exactly one row and one column,\n the data is written to file as-is, with raw bytes preserved. 
If the\n path is prefixed with a \"+\", values are appended if there is an\n existing file.\n \"\"\"\n append = False\n if path.startswith(\"+\"):\n append = True\n path = mitmproxy.types.Path(path[1:])\n try:\n if len(cuts) == 1 and len(flows) == 1:\n with open(path, \"ab\" if append else \"wb\") as fp:\n if fp.tell() > 0:\n # We're appending to a file that already exists and has content\n fp.write(b\"\\n\")\n v = extract(cuts[0], flows[0])\n if isinstance(v, bytes):\n fp.write(v)\n else:\n fp.write(v.encode(\"utf8\"))\n ctx.log.alert(\"Saved single cut.\")\n else:\n with open(path, \"a\" if append else \"w\", newline='', encoding=\"utf8\") as fp:\n writer = csv.writer(fp)\n for f in flows:\n vals = [extract(c, f) for c in cuts]\n writer.writerow(\n [strutils.always_str(x) or \"\" for x in vals] # type: ignore\n )\n ctx.log.alert(\"Saved %s cuts over %d flows as CSV.\" % (len(cuts), len(flows)))\n except IOError as e:\n ctx.log.error(str(e))\n\n @command.command(\"cut.clip\")\n def clip(\n self,\n flows: typing.Sequence[flow.Flow],\n cuts: mitmproxy.types.CutSpec,\n ) -> None:\n \"\"\"\n Send cuts to the clipboard. If there are multiple flows or cuts, the\n format is UTF-8 encoded CSV. If there is exactly one row and one\n column, the data is written to file as-is, with raw bytes preserved.\n \"\"\"\n fp = io.StringIO(newline=\"\")\n if len(cuts) == 1 and len(flows) == 1:\n v = extract(cuts[0], flows[0])\n if isinstance(v, bytes):\n fp.write(strutils.always_str(v))\n else:\n fp.write(\"utf8\")\n ctx.log.alert(\"Clipped single cut.\")\n else:\n writer = csv.writer(fp)\n for f in flows:\n vals = [extract(c, f) for c in cuts]\n writer.writerow(\n [strutils.always_str(v) or \"\" for v in vals] # type: ignore\n )\n ctx.log.alert(\"Clipped %s cuts as CSV.\" % len(cuts))\n try:\n pyperclip.copy(fp.getvalue())\n except pyperclip.PyperclipException as e:\n ctx.log.error(str(e))\n", "path": "mitmproxy/addons/cut.py"}]}
| 3,901 | 278 |
gh_patches_debug_13430
|
rasdani/github-patches
|
git_diff
|
huggingface__text-generation-inference-201
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Request failed during generation: Server error: Expected is_sm90 || is_sm8x || is_sm75 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
Using the docker container ala these instructions:
https://github.com/huggingface/text-generation-inference#docker
in order to run the server locally. I'm using an app very similar to the one here:
https://huggingface.co/spaces/olivierdehaene/chat-llm-streaming to hit that local server.
I'm seeing this error in the server logs:
```
send_error: text_generation_router::infer: router/src/infer.rs:390: Request failed during generation: Server error: Expected is_sm90 || is_sm8x || is_sm75 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
Any ideas?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `server/text_generation_server/models/__init__.py`
Content:
```
1 import torch
2
3 from loguru import logger
4 from transformers import AutoConfig
5 from transformers.models.auto import modeling_auto
6 from typing import Optional
7
8 from text_generation_server.models.model import Model
9 from text_generation_server.models.causal_lm import CausalLM
10 from text_generation_server.models.flash_causal_lm import FlashCausalLM
11 from text_generation_server.models.bloom import BLOOM, BLOOMSharded
12 from text_generation_server.models.seq2seq_lm import Seq2SeqLM
13 from text_generation_server.models.opt import OPT, OPTSharded
14 from text_generation_server.models.galactica import Galactica, GalacticaSharded
15 from text_generation_server.models.santacoder import SantaCoder
16 from text_generation_server.models.gpt_neox import GPTNeoxSharded
17 from text_generation_server.models.t5 import T5Sharded
18
19 try:
20 from text_generation_server.models.flash_neox import FlashNeoX, FlashNeoXSharded
21 from text_generation_server.models.flash_llama import FlashLlama, FlashLlamaSharded
22 from text_generation_server.models.flash_santacoder import (
23 FlashSantacoder,
24 FlashSantacoderSharded,
25 )
26
27 FLASH_ATTENTION = torch.cuda.is_available()
28 except ImportError:
29 logger.opt(exception=True).warning("Could not import Flash Attention enabled models")
30 FLASH_ATTENTION = False
31
32 __all__ = [
33 "Model",
34 "BLOOM",
35 "BLOOMSharded",
36 "CausalLM",
37 "FlashCausalLM",
38 "Galactica",
39 "GalacticaSharded",
40 "GPTNeoxSharded",
41 "Seq2SeqLM",
42 "Galactica",
43 "GalacticaSharded",
44 "SantaCoder",
45 "OPT",
46 "OPTSharded",
47 "T5Sharded",
48 "get_model",
49 ]
50
51 if FLASH_ATTENTION:
52 __all__.append(FlashNeoX)
53 __all__.append(FlashNeoXSharded)
54 __all__.append(FlashSantacoder)
55 __all__.append(FlashSantacoderSharded)
56 __all__.append(FlashLlama)
57 __all__.append(FlashLlamaSharded)
58
59 FLASH_ATT_ERROR_MESSAGE = (
60 "{} requires Flash Attention CUDA kernels to be installed.\n"
61 "Use the official Docker image (ghcr.io/huggingface/text-generation-inference:latest) "
62 "or install flash attention with `cd server && make install install-flash-attention`"
63 )
64
65 # The flag below controls whether to allow TF32 on matmul. This flag defaults to False
66 # in PyTorch 1.12 and later.
67 torch.backends.cuda.matmul.allow_tf32 = True
68
69 # The flag below controls whether to allow TF32 on cuDNN. This flag defaults to True.
70 torch.backends.cudnn.allow_tf32 = True
71
72 # Disable gradients
73 torch.set_grad_enabled(False)
74
75
76 def get_model(
77 model_id: str, revision: Optional[str], sharded: bool, quantize: bool
78 ) -> Model:
79 if "facebook/galactica" in model_id:
80 if sharded:
81 return GalacticaSharded(model_id, revision, quantize=quantize)
82 else:
83 return Galactica(model_id, revision, quantize=quantize)
84
85 if "bigcode" in model_id:
86 if sharded:
87 if not FLASH_ATTENTION:
88 raise NotImplementedError(
89 FLASH_ATT_ERROR_MESSAGE.format(f"Sharded Santacoder")
90 )
91 return FlashSantacoderSharded(model_id, revision=revision)
92 else:
93 santacoder_cls = FlashSantacoder if FLASH_ATTENTION else SantaCoder
94 return santacoder_cls(model_id, revision, quantize)
95
96 config = AutoConfig.from_pretrained(model_id, revision=revision)
97 model_type = config.model_type
98
99 if model_type == "bloom":
100 if sharded:
101 return BLOOMSharded(model_id, revision, quantize=quantize)
102 else:
103 return BLOOM(model_id, revision, quantize=quantize)
104
105 if model_type == "gpt_neox":
106 if sharded:
107 neox_cls = FlashNeoXSharded if FLASH_ATTENTION else GPTNeoxSharded
108 return neox_cls(model_id, revision, quantize=quantize)
109 else:
110 neox_cls = FlashNeoX if FLASH_ATTENTION else CausalLM
111 return neox_cls(model_id, revision, quantize=quantize)
112
113 if model_type == "llama":
114 if sharded:
115 if FLASH_ATTENTION:
116 return FlashLlamaSharded(model_id, revision, quantize=quantize)
117 raise NotImplementedError(FLASH_ATT_ERROR_MESSAGE.format(f"Sharded Llama"))
118 else:
119 llama_cls = FlashLlama if FLASH_ATTENTION else CausalLM
120 return llama_cls(model_id, revision, quantize=quantize)
121
122 if config.model_type == "opt":
123 if sharded:
124 return OPTSharded(model_id, revision, quantize=quantize)
125 else:
126 return OPT(model_id, revision, quantize=quantize)
127
128 if model_type == "t5":
129 if sharded:
130 return T5Sharded(model_id, revision, quantize=quantize)
131 else:
132 return Seq2SeqLM(model_id, revision, quantize=quantize)
133
134 if sharded:
135 raise ValueError("sharded is not supported for AutoModel")
136
137 if model_type in modeling_auto.MODEL_FOR_CAUSAL_LM_MAPPING_NAMES:
138 return CausalLM(model_id, revision, quantize=quantize)
139 if model_type in modeling_auto.MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES:
140 return Seq2SeqLM(model_id, revision, quantize=quantize)
141
142 raise ValueError(f"Unsupported model type {model_type}")
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/server/text_generation_server/models/__init__.py b/server/text_generation_server/models/__init__.py
--- a/server/text_generation_server/models/__init__.py
+++ b/server/text_generation_server/models/__init__.py
@@ -24,7 +24,18 @@
FlashSantacoderSharded,
)
- FLASH_ATTENTION = torch.cuda.is_available()
+ if torch.cuda.is_available():
+ major, minor = torch.cuda.get_device_capability()
+ is_sm75 = major == 7 and minor == 5
+ is_sm8x = major == 8 and minor >= 0
+ is_sm90 = major == 9 and minor == 0
+
+ supported = is_sm75 or is_sm8x or is_sm90
+ if not supported:
+ raise ImportError(f"GPU with CUDA capability {major} {minor} is not supported")
+ FLASH_ATTENTION = True
+ else:
+ FLASH_ATTENTION = False
except ImportError:
logger.opt(exception=True).warning("Could not import Flash Attention enabled models")
FLASH_ATTENTION = False
|
{"golden_diff": "diff --git a/server/text_generation_server/models/__init__.py b/server/text_generation_server/models/__init__.py\n--- a/server/text_generation_server/models/__init__.py\n+++ b/server/text_generation_server/models/__init__.py\n@@ -24,7 +24,18 @@\n FlashSantacoderSharded,\n )\n \n- FLASH_ATTENTION = torch.cuda.is_available()\n+ if torch.cuda.is_available():\n+ major, minor = torch.cuda.get_device_capability()\n+ is_sm75 = major == 7 and minor == 5\n+ is_sm8x = major == 8 and minor >= 0\n+ is_sm90 = major == 9 and minor == 0\n+\n+ supported = is_sm75 or is_sm8x or is_sm90\n+ if not supported:\n+ raise ImportError(f\"GPU with CUDA capability {major} {minor} is not supported\")\n+ FLASH_ATTENTION = True\n+ else:\n+ FLASH_ATTENTION = False\n except ImportError:\n logger.opt(exception=True).warning(\"Could not import Flash Attention enabled models\")\n FLASH_ATTENTION = False\n", "issue": "Request failed during generation: Server error: Expected is_sm90 || is_sm8x || is_sm75 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)\nUsing the docker container ala these instructions:\r\nhttps://github.com/huggingface/text-generation-inference#docker\r\nin order to run the server locally. I'm using an app very similar to the one here:\r\nhttps://huggingface.co/spaces/olivierdehaene/chat-llm-streaming to hit that local server. \r\n\r\nI'm seeing this error in the server logs:\r\n\r\n```\r\nsend_error: text_generation_router::infer: router/src/infer.rs:390: Request failed during generation: Server error: Expected is_sm90 || is_sm8x || is_sm75 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)\r\n```\r\nAny ideas?\n", "before_files": [{"content": "import torch\n\nfrom loguru import logger\nfrom transformers import AutoConfig\nfrom transformers.models.auto import modeling_auto\nfrom typing import Optional\n\nfrom text_generation_server.models.model import Model\nfrom text_generation_server.models.causal_lm import CausalLM\nfrom text_generation_server.models.flash_causal_lm import FlashCausalLM\nfrom text_generation_server.models.bloom import BLOOM, BLOOMSharded\nfrom text_generation_server.models.seq2seq_lm import Seq2SeqLM\nfrom text_generation_server.models.opt import OPT, OPTSharded\nfrom text_generation_server.models.galactica import Galactica, GalacticaSharded\nfrom text_generation_server.models.santacoder import SantaCoder\nfrom text_generation_server.models.gpt_neox import GPTNeoxSharded\nfrom text_generation_server.models.t5 import T5Sharded\n\ntry:\n from text_generation_server.models.flash_neox import FlashNeoX, FlashNeoXSharded\n from text_generation_server.models.flash_llama import FlashLlama, FlashLlamaSharded\n from text_generation_server.models.flash_santacoder import (\n FlashSantacoder,\n FlashSantacoderSharded,\n )\n\n FLASH_ATTENTION = torch.cuda.is_available()\nexcept ImportError:\n logger.opt(exception=True).warning(\"Could not import Flash Attention enabled models\")\n FLASH_ATTENTION = False\n\n__all__ = [\n \"Model\",\n \"BLOOM\",\n \"BLOOMSharded\",\n \"CausalLM\",\n \"FlashCausalLM\",\n \"Galactica\",\n \"GalacticaSharded\",\n \"GPTNeoxSharded\",\n \"Seq2SeqLM\",\n \"Galactica\",\n \"GalacticaSharded\",\n \"SantaCoder\",\n \"OPT\",\n \"OPTSharded\",\n \"T5Sharded\",\n \"get_model\",\n]\n\nif FLASH_ATTENTION:\n __all__.append(FlashNeoX)\n __all__.append(FlashNeoXSharded)\n __all__.append(FlashSantacoder)\n 
__all__.append(FlashSantacoderSharded)\n __all__.append(FlashLlama)\n __all__.append(FlashLlamaSharded)\n\nFLASH_ATT_ERROR_MESSAGE = (\n \"{} requires Flash Attention CUDA kernels to be installed.\\n\"\n \"Use the official Docker image (ghcr.io/huggingface/text-generation-inference:latest) \"\n \"or install flash attention with `cd server && make install install-flash-attention`\"\n)\n\n# The flag below controls whether to allow TF32 on matmul. This flag defaults to False\n# in PyTorch 1.12 and later.\ntorch.backends.cuda.matmul.allow_tf32 = True\n\n# The flag below controls whether to allow TF32 on cuDNN. This flag defaults to True.\ntorch.backends.cudnn.allow_tf32 = True\n\n# Disable gradients\ntorch.set_grad_enabled(False)\n\n\ndef get_model(\n model_id: str, revision: Optional[str], sharded: bool, quantize: bool\n) -> Model:\n if \"facebook/galactica\" in model_id:\n if sharded:\n return GalacticaSharded(model_id, revision, quantize=quantize)\n else:\n return Galactica(model_id, revision, quantize=quantize)\n\n if \"bigcode\" in model_id:\n if sharded:\n if not FLASH_ATTENTION:\n raise NotImplementedError(\n FLASH_ATT_ERROR_MESSAGE.format(f\"Sharded Santacoder\")\n )\n return FlashSantacoderSharded(model_id, revision=revision)\n else:\n santacoder_cls = FlashSantacoder if FLASH_ATTENTION else SantaCoder\n return santacoder_cls(model_id, revision, quantize)\n\n config = AutoConfig.from_pretrained(model_id, revision=revision)\n model_type = config.model_type\n\n if model_type == \"bloom\":\n if sharded:\n return BLOOMSharded(model_id, revision, quantize=quantize)\n else:\n return BLOOM(model_id, revision, quantize=quantize)\n\n if model_type == \"gpt_neox\":\n if sharded:\n neox_cls = FlashNeoXSharded if FLASH_ATTENTION else GPTNeoxSharded\n return neox_cls(model_id, revision, quantize=quantize)\n else:\n neox_cls = FlashNeoX if FLASH_ATTENTION else CausalLM\n return neox_cls(model_id, revision, quantize=quantize)\n\n if model_type == \"llama\":\n if sharded:\n if FLASH_ATTENTION:\n return FlashLlamaSharded(model_id, revision, quantize=quantize)\n raise NotImplementedError(FLASH_ATT_ERROR_MESSAGE.format(f\"Sharded Llama\"))\n else:\n llama_cls = FlashLlama if FLASH_ATTENTION else CausalLM\n return llama_cls(model_id, revision, quantize=quantize)\n\n if config.model_type == \"opt\":\n if sharded:\n return OPTSharded(model_id, revision, quantize=quantize)\n else:\n return OPT(model_id, revision, quantize=quantize)\n\n if model_type == \"t5\":\n if sharded:\n return T5Sharded(model_id, revision, quantize=quantize)\n else:\n return Seq2SeqLM(model_id, revision, quantize=quantize)\n\n if sharded:\n raise ValueError(\"sharded is not supported for AutoModel\")\n\n if model_type in modeling_auto.MODEL_FOR_CAUSAL_LM_MAPPING_NAMES:\n return CausalLM(model_id, revision, quantize=quantize)\n if model_type in modeling_auto.MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES:\n return Seq2SeqLM(model_id, revision, quantize=quantize)\n\n raise ValueError(f\"Unsupported model type {model_type}\")\n", "path": "server/text_generation_server/models/__init__.py"}], "after_files": [{"content": "import torch\n\nfrom loguru import logger\nfrom transformers import AutoConfig\nfrom transformers.models.auto import modeling_auto\nfrom typing import Optional\n\nfrom text_generation_server.models.model import Model\nfrom text_generation_server.models.causal_lm import CausalLM\nfrom text_generation_server.models.flash_causal_lm import FlashCausalLM\nfrom text_generation_server.models.bloom import BLOOM, BLOOMSharded\nfrom 
text_generation_server.models.seq2seq_lm import Seq2SeqLM\nfrom text_generation_server.models.opt import OPT, OPTSharded\nfrom text_generation_server.models.galactica import Galactica, GalacticaSharded\nfrom text_generation_server.models.santacoder import SantaCoder\nfrom text_generation_server.models.gpt_neox import GPTNeoxSharded\nfrom text_generation_server.models.t5 import T5Sharded\n\ntry:\n from text_generation_server.models.flash_neox import FlashNeoX, FlashNeoXSharded\n from text_generation_server.models.flash_llama import FlashLlama, FlashLlamaSharded\n from text_generation_server.models.flash_santacoder import (\n FlashSantacoder,\n FlashSantacoderSharded,\n )\n\n if torch.cuda.is_available():\n major, minor = torch.cuda.get_device_capability()\n is_sm75 = major == 7 and minor == 5\n is_sm8x = major == 8 and minor >= 0\n is_sm90 = major == 9 and minor == 0\n\n supported = is_sm75 or is_sm8x or is_sm90\n if not supported:\n raise ImportError(f\"GPU with CUDA capability {major} {minor} is not supported\")\n FLASH_ATTENTION = True\n else:\n FLASH_ATTENTION = False\nexcept ImportError:\n logger.opt(exception=True).warning(\"Could not import Flash Attention enabled models\")\n FLASH_ATTENTION = False\n\n__all__ = [\n \"Model\",\n \"BLOOM\",\n \"BLOOMSharded\",\n \"CausalLM\",\n \"FlashCausalLM\",\n \"Galactica\",\n \"GalacticaSharded\",\n \"GPTNeoxSharded\",\n \"Seq2SeqLM\",\n \"Galactica\",\n \"GalacticaSharded\",\n \"SantaCoder\",\n \"OPT\",\n \"OPTSharded\",\n \"T5Sharded\",\n \"get_model\",\n]\n\nif FLASH_ATTENTION:\n __all__.append(FlashNeoX)\n __all__.append(FlashNeoXSharded)\n __all__.append(FlashSantacoder)\n __all__.append(FlashSantacoderSharded)\n __all__.append(FlashLlama)\n __all__.append(FlashLlamaSharded)\n\nFLASH_ATT_ERROR_MESSAGE = (\n \"{} requires Flash Attention CUDA kernels to be installed.\\n\"\n \"Use the official Docker image (ghcr.io/huggingface/text-generation-inference:latest) \"\n \"or install flash attention with `cd server && make install install-flash-attention`\"\n)\n\n# The flag below controls whether to allow TF32 on matmul. This flag defaults to False\n# in PyTorch 1.12 and later.\ntorch.backends.cuda.matmul.allow_tf32 = True\n\n# The flag below controls whether to allow TF32 on cuDNN. 
This flag defaults to True.\ntorch.backends.cudnn.allow_tf32 = True\n\n# Disable gradients\ntorch.set_grad_enabled(False)\n\n\ndef get_model(\n model_id: str, revision: Optional[str], sharded: bool, quantize: bool\n) -> Model:\n if \"facebook/galactica\" in model_id:\n if sharded:\n return GalacticaSharded(model_id, revision, quantize=quantize)\n else:\n return Galactica(model_id, revision, quantize=quantize)\n\n if \"bigcode\" in model_id:\n if sharded:\n if not FLASH_ATTENTION:\n raise NotImplementedError(\n FLASH_ATT_ERROR_MESSAGE.format(f\"Sharded Santacoder\")\n )\n return FlashSantacoderSharded(model_id, revision=revision)\n else:\n santacoder_cls = FlashSantacoder if FLASH_ATTENTION else SantaCoder\n return santacoder_cls(model_id, revision, quantize)\n\n config = AutoConfig.from_pretrained(model_id, revision=revision)\n model_type = config.model_type\n\n if model_type == \"bloom\":\n if sharded:\n return BLOOMSharded(model_id, revision, quantize=quantize)\n else:\n return BLOOM(model_id, revision, quantize=quantize)\n\n if model_type == \"gpt_neox\":\n if sharded:\n neox_cls = FlashNeoXSharded if FLASH_ATTENTION else GPTNeoxSharded\n return neox_cls(model_id, revision, quantize=quantize)\n else:\n neox_cls = FlashNeoX if FLASH_ATTENTION else CausalLM\n return neox_cls(model_id, revision, quantize=quantize)\n\n if model_type == \"llama\":\n if sharded:\n if FLASH_ATTENTION:\n return FlashLlamaSharded(model_id, revision, quantize=quantize)\n raise NotImplementedError(FLASH_ATT_ERROR_MESSAGE.format(f\"Sharded Llama\"))\n else:\n llama_cls = FlashLlama if FLASH_ATTENTION else CausalLM\n return llama_cls(model_id, revision, quantize=quantize)\n\n if config.model_type == \"opt\":\n if sharded:\n return OPTSharded(model_id, revision, quantize=quantize)\n else:\n return OPT(model_id, revision, quantize=quantize)\n\n if model_type == \"t5\":\n if sharded:\n return T5Sharded(model_id, revision, quantize=quantize)\n else:\n return Seq2SeqLM(model_id, revision, quantize=quantize)\n\n if sharded:\n raise ValueError(\"sharded is not supported for AutoModel\")\n\n if model_type in modeling_auto.MODEL_FOR_CAUSAL_LM_MAPPING_NAMES:\n return CausalLM(model_id, revision, quantize=quantize)\n if model_type in modeling_auto.MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES:\n return Seq2SeqLM(model_id, revision, quantize=quantize)\n\n raise ValueError(f\"Unsupported model type {model_type}\")\n", "path": "server/text_generation_server/models/__init__.py"}]}
| 2,067 | 247 |
gh_patches_debug_36382
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-center-index-5862
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Letter misprint in jemalloc recipe
https://github.com/conan-io/conan-center-index/blob/a40f8d2e097ffb1f98797011f7694ba7b9efe91d/recipes/jemalloc/all/conanfile.py#L96
Instead of --enable-initial-exec-tld must be --enbal-initial-exec-tls
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/jemalloc/all/conanfile.py`
Content:
```
1 from conans import AutoToolsBuildEnvironment, ConanFile, MSBuild, tools
2 from conans.errors import ConanInvalidConfiguration
3 import os
4 import shutil
5 import string
6
7
8 class JemallocConan(ConanFile):
9 name = "jemalloc"
10 description = "jemalloc is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support."
11 url = "https://github.com/conan-io/conan-center-index"
12 license = "BSD-2-Clause"
13 homepage = "http://jemalloc.net/"
14 topics = ("conan", "jemalloc", "malloc", "free")
15 settings = "os", "arch", "compiler", "build_type"
16 options = {
17 "shared": [True, False],
18 "fPIC": [True, False],
19 "prefix": "ANY",
20 "enable_cxx": [True, False],
21 "enable_fill": [True, False],
22 "enable_xmalloc": [True, False],
23 "enable_readlinkat": [True, False],
24 "enable_syscall": [True, False],
25 "enable_lazy_lock": [True, False],
26 "enable_debug_logging": [True, False],
27 "enable_initial_exec_tls": [True, False],
28 "enable_libdl": [True, False],
29 }
30 default_options = {
31 "shared": False,
32 "fPIC": True,
33 "prefix": "",
34 "enable_cxx": True,
35 "enable_fill": True,
36 "enable_xmalloc": False,
37 "enable_readlinkat": False,
38 "enable_syscall": True,
39 "enable_lazy_lock": False,
40 "enable_debug_logging": False,
41 "enable_initial_exec_tls": True,
42 "enable_libdl": True,
43 }
44
45 _autotools = None
46
47 _source_subfolder = "source_subfolder"
48
49 def config_options(self):
50 if self.settings.os == "Windows":
51 del self.options.fPIC
52
53 def configure(self):
54 if self.options.enable_cxx and \
55 self.settings.compiler.get_safe("libcxx") == "libc++" and \
56 self.settings.compiler == "clang" and \
57 tools.Version(self.settings.compiler.version) < "10":
58 raise ConanInvalidConfiguration("clang and libc++ version {} (< 10) is missing a mutex implementation".format(self.settings.compiler.version))
59 if self.settings.compiler == "Visual Studio" and \
60 self.options.shared and \
61 "MT" in self.settings.compiler.runtime:
62 raise ConanInvalidConfiguration("Visual Studio build for shared library with MT runtime is not supported")
63 if self.settings.compiler == "Visual Studio" and self.settings.compiler.version != "15":
64 # https://github.com/jemalloc/jemalloc/issues/1703
65 raise ConanInvalidConfiguration("Only Visual Studio 15 2017 is supported. Please fix this if other versions are supported")
66 if self.options.shared:
67 del self.options.fPIC
68 if not self.options.enable_cxx:
69 del self.settings.compiler.libcxx
70 del self.settings.compiler.cppstd
71 if self.settings.build_type not in ("Release", "Debug", None):
72 raise ConanInvalidConfiguration("Only Release and Debug build_types are supported")
73 if self.settings.compiler == "Visual Studio" and self.settings.arch not in ("x86_64", "x86"):
74 raise ConanInvalidConfiguration("Unsupported arch")
75
76 def source(self):
77 tools.get(**self.conan_data["sources"][self.version])
78 os.rename("{}-{}".format(self.name, self.version), self._source_subfolder)
79
80 def build_requirements(self):
81 if tools.os_info.is_windows and not os.environ.get("CONAN_BASH_PATH", None):
82 self.build_requires("msys2/20200517")
83
84 @property
85 def _autotools_args(self):
86 conf_args = [
87 "--with-jemalloc-prefix={}".format(self.options.prefix),
88 "--enable-debug" if self.settings.build_type == "Debug" else "--disable-debug",
89 "--enable-cxx" if self.options.enable_cxx else "--disable-cxx",
90 "--enable-fill" if self.options.enable_fill else "--disable-fill",
91 "--enable-xmalloc" if self.options.enable_cxx else "--disable-xmalloc",
92 "--enable-readlinkat" if self.options.enable_readlinkat else "--disable-readlinkat",
93 "--enable-syscall" if self.options.enable_syscall else "--disable-syscall",
94 "--enable-lazy-lock" if self.options.enable_lazy_lock else "--disable-lazy-lock",
95 "--enable-log" if self.options.enable_debug_logging else "--disable-log",
96 "--enable-initial-exec-tld" if self.options.enable_initial_exec_tls else "--disable-initial-exec-tls",
97 "--enable-libdl" if self.options.enable_libdl else "--disable-libdl",
98 ]
99 if self.options.shared:
100 conf_args.extend(["--enable-shared", "--disable-static"])
101 else:
102 conf_args.extend(["--disable-shared", "--enable-static"])
103 return conf_args
104
105 def _configure_autotools(self):
106 if self._autotools:
107 return self._autotools
108 self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)
109 self._autotools.configure(args=self._autotools_args, configure_dir=self._source_subfolder)
110 return self._autotools
111
112 @property
113 def _msvc_build_type(self):
114 build_type = str(self.settings.build_type) or "Release"
115 if not self.options.shared:
116 build_type += "-static"
117 return build_type
118
119 def _patch_sources(self):
120 if self.settings.os == "Windows":
121 makefile_in = os.path.join(self._source_subfolder, "Makefile.in")
122 tools.replace_in_file(makefile_in,
123 "DSO_LDFLAGS = @DSO_LDFLAGS@",
124 "DSO_LDFLAGS = @DSO_LDFLAGS@ -Wl,--out-implib,lib/libjemalloc.a")
125 tools.replace_in_file(makefile_in,
126 "\t$(INSTALL) -d $(LIBDIR)\n"
127 "\t$(INSTALL) -m 755 $(objroot)lib/$(LIBJEMALLOC).$(SOREV) $(LIBDIR)",
128 "\t$(INSTALL) -d $(BINDIR)\n"
129 "\t$(INSTALL) -d $(LIBDIR)\n"
130 "\t$(INSTALL) -m 755 $(objroot)lib/$(LIBJEMALLOC).$(SOREV) $(BINDIR)\n"
131 "\t$(INSTALL) -m 644 $(objroot)lib/libjemalloc.a $(LIBDIR)")
132
133 def build(self):
134 self._patch_sources()
135 if self.settings.compiler == "Visual Studio":
136 with tools.vcvars(self.settings) if self.settings.compiler == "Visual Studio" else tools.no_op():
137 with tools.environment_append({"CC": "cl", "CXX": "cl"}) if self.settings.compiler == "Visual Studio" else tools.no_op():
138 with tools.chdir(self._source_subfolder):
139 # Do not use AutoToolsBuildEnvironment because we want to run configure as ./configure
140 self.run("./configure {}".format(" ".join(self._autotools_args)), win_bash=tools.os_info.is_windows)
141 msbuild = MSBuild(self)
142 # Do not use the 2015 solution: unresolved external symbols: test_hooks_libc_hook and test_hooks_arena_new_hook
143 sln_file = os.path.join(self._source_subfolder, "msvc", "jemalloc_vc2017.sln")
144 msbuild.build(sln_file, targets=["jemalloc"], build_type=self._msvc_build_type)
145 else:
146 autotools = self._configure_autotools()
147 autotools.make()
148
149 @property
150 def _library_name(self):
151 libname = "jemalloc"
152 if self.settings.compiler == "Visual Studio":
153 if self.options.shared:
154 if self.settings.build_type == "Debug":
155 libname += "d"
156 else:
157 toolset = tools.msvs_toolset(self.settings)
158 toolset_number = "".join(c for c in toolset if c in string.digits)
159 libname += "-vc{}-{}".format(toolset_number, self._msvc_build_type)
160 else:
161 if self.settings.os == "Windows":
162 if not self.options.shared:
163 libname += "_s"
164 else:
165 if not self.options.shared and self.options.fPIC:
166 libname += "_pic"
167 return libname
168
169 def package(self):
170 self.copy(pattern="COPYING", src=self._source_subfolder, dst="licenses")
171 if self.settings.compiler == "Visual Studio":
172 arch_subdir = {
173 "x86_64": "x64",
174 "x86": "x86",
175 }[str(self.settings.arch)]
176 self.copy("*.lib", src=os.path.join(self._source_subfolder, "msvc", arch_subdir, self._msvc_build_type), dst=os.path.join(self.package_folder, "lib"))
177 self.copy("*.dll", src=os.path.join(self._source_subfolder, "msvc", arch_subdir, self._msvc_build_type), dst=os.path.join(self.package_folder, "bin"))
178 self.copy("jemalloc.h", src=os.path.join(self._source_subfolder, "include", "jemalloc"), dst=os.path.join(self.package_folder, "include", "jemalloc"), keep_path=True)
179 shutil.copytree(os.path.join(self._source_subfolder, "include", "msvc_compat"),
180 os.path.join(self.package_folder, "include", "msvc_compat"))
181 else:
182 autotools = self._configure_autotools()
183 # Use install_lib_XXX and install_include to avoid mixing binaries and dll's
184 autotools.make(target="install_lib_shared" if self.options.shared else "install_lib_static")
185 autotools.make(target="install_include")
186 if self.settings.os == "Windows" and self.settings.compiler == "gcc":
187 os.rename(os.path.join(self.package_folder, "lib", "{}.lib".format(self._library_name)),
188 os.path.join(self.package_folder, "lib", "lib{}.a".format(self._library_name)))
189 if not self.options.shared:
190 os.unlink(os.path.join(self.package_folder, "lib", "jemalloc.lib"))
191
192 def package_id(self):
193 if not self.settings.build_type:
194 self.info.settings.build_type = "Release"
195
196 def package_info(self):
197 self.cpp_info.libs = [self._library_name]
198 self.cpp_info.includedirs = [os.path.join(self.package_folder, "include"),
199 os.path.join(self.package_folder, "include", "jemalloc")]
200 if self.settings.compiler == "Visual Studio":
201 self.cpp_info.includedirs.append(os.path.join(self.package_folder, "include", "msvc_compat"))
202 if not self.options.shared:
203 self.cpp_info.defines = ["JEMALLOC_EXPORT="]
204 if self.settings.os == "Linux":
205 self.cpp_info.system_libs.extend(["dl", "pthread", "rt"])
206
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/recipes/jemalloc/all/conanfile.py b/recipes/jemalloc/all/conanfile.py
--- a/recipes/jemalloc/all/conanfile.py
+++ b/recipes/jemalloc/all/conanfile.py
@@ -41,6 +41,7 @@
"enable_initial_exec_tls": True,
"enable_libdl": True,
}
+ exports_sources = ["patches/**"]
_autotools = None
@@ -72,6 +73,8 @@
raise ConanInvalidConfiguration("Only Release and Debug build_types are supported")
if self.settings.compiler == "Visual Studio" and self.settings.arch not in ("x86_64", "x86"):
raise ConanInvalidConfiguration("Unsupported arch")
+ if self.settings.compiler == "clang" and tools.Version(self.settings.compiler.version) <= "3.9":
+ raise ConanInvalidConfiguration("Unsupported compiler version")
def source(self):
tools.get(**self.conan_data["sources"][self.version])
@@ -93,7 +96,7 @@
"--enable-syscall" if self.options.enable_syscall else "--disable-syscall",
"--enable-lazy-lock" if self.options.enable_lazy_lock else "--disable-lazy-lock",
"--enable-log" if self.options.enable_debug_logging else "--disable-log",
- "--enable-initial-exec-tld" if self.options.enable_initial_exec_tls else "--disable-initial-exec-tls",
+ "--enable-initial-exec-tls" if self.options.enable_initial_exec_tls else "--disable-initial-exec-tls",
"--enable-libdl" if self.options.enable_libdl else "--disable-libdl",
]
if self.options.shared:
@@ -130,6 +133,9 @@
"\t$(INSTALL) -m 755 $(objroot)lib/$(LIBJEMALLOC).$(SOREV) $(BINDIR)\n"
"\t$(INSTALL) -m 644 $(objroot)lib/libjemalloc.a $(LIBDIR)")
+ for patch in self.conan_data.get("patches", {}).get(self.version, []):
+ tools.patch(**patch)
+
def build(self):
self._patch_sources()
if self.settings.compiler == "Visual Studio":
|
{"golden_diff": "diff --git a/recipes/jemalloc/all/conanfile.py b/recipes/jemalloc/all/conanfile.py\n--- a/recipes/jemalloc/all/conanfile.py\n+++ b/recipes/jemalloc/all/conanfile.py\n@@ -41,6 +41,7 @@\n \"enable_initial_exec_tls\": True,\n \"enable_libdl\": True,\n }\n+ exports_sources = [\"patches/**\"]\n \n _autotools = None\n \n@@ -72,6 +73,8 @@\n raise ConanInvalidConfiguration(\"Only Release and Debug build_types are supported\")\n if self.settings.compiler == \"Visual Studio\" and self.settings.arch not in (\"x86_64\", \"x86\"):\n raise ConanInvalidConfiguration(\"Unsupported arch\")\n+ if self.settings.compiler == \"clang\" and tools.Version(self.settings.compiler.version) <= \"3.9\":\n+ raise ConanInvalidConfiguration(\"Unsupported compiler version\")\n \n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n@@ -93,7 +96,7 @@\n \"--enable-syscall\" if self.options.enable_syscall else \"--disable-syscall\",\n \"--enable-lazy-lock\" if self.options.enable_lazy_lock else \"--disable-lazy-lock\",\n \"--enable-log\" if self.options.enable_debug_logging else \"--disable-log\",\n- \"--enable-initial-exec-tld\" if self.options.enable_initial_exec_tls else \"--disable-initial-exec-tls\",\n+ \"--enable-initial-exec-tls\" if self.options.enable_initial_exec_tls else \"--disable-initial-exec-tls\",\n \"--enable-libdl\" if self.options.enable_libdl else \"--disable-libdl\",\n ]\n if self.options.shared:\n@@ -130,6 +133,9 @@\n \"\\t$(INSTALL) -m 755 $(objroot)lib/$(LIBJEMALLOC).$(SOREV) $(BINDIR)\\n\"\n \"\\t$(INSTALL) -m 644 $(objroot)lib/libjemalloc.a $(LIBDIR)\")\n \n+ for patch in self.conan_data.get(\"patches\", {}).get(self.version, []):\n+ tools.patch(**patch)\n+\n def build(self):\n self._patch_sources()\n if self.settings.compiler == \"Visual Studio\":\n", "issue": "Letter misprint in jemalloc recipe\nhttps://github.com/conan-io/conan-center-index/blob/a40f8d2e097ffb1f98797011f7694ba7b9efe91d/recipes/jemalloc/all/conanfile.py#L96\r\n\r\nInstead of --enable-initial-exec-tld must be --enbal-initial-exec-tls\n", "before_files": [{"content": "from conans import AutoToolsBuildEnvironment, ConanFile, MSBuild, tools\nfrom conans.errors import ConanInvalidConfiguration\nimport os\nimport shutil\nimport string\n\n\nclass JemallocConan(ConanFile):\n name = \"jemalloc\"\n description = \"jemalloc is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.\"\n url = \"https://github.com/conan-io/conan-center-index\"\n license = \"BSD-2-Clause\"\n homepage = \"http://jemalloc.net/\"\n topics = (\"conan\", \"jemalloc\", \"malloc\", \"free\")\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"prefix\": \"ANY\",\n \"enable_cxx\": [True, False],\n \"enable_fill\": [True, False],\n \"enable_xmalloc\": [True, False],\n \"enable_readlinkat\": [True, False],\n \"enable_syscall\": [True, False],\n \"enable_lazy_lock\": [True, False],\n \"enable_debug_logging\": [True, False],\n \"enable_initial_exec_tls\": [True, False],\n \"enable_libdl\": [True, False],\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"prefix\": \"\",\n \"enable_cxx\": True,\n \"enable_fill\": True,\n \"enable_xmalloc\": False,\n \"enable_readlinkat\": False,\n \"enable_syscall\": True,\n \"enable_lazy_lock\": False,\n \"enable_debug_logging\": False,\n \"enable_initial_exec_tls\": True,\n \"enable_libdl\": True,\n }\n\n _autotools = None\n\n 
_source_subfolder = \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.options.enable_cxx and \\\n self.settings.compiler.get_safe(\"libcxx\") == \"libc++\" and \\\n self.settings.compiler == \"clang\" and \\\n tools.Version(self.settings.compiler.version) < \"10\":\n raise ConanInvalidConfiguration(\"clang and libc++ version {} (< 10) is missing a mutex implementation\".format(self.settings.compiler.version))\n if self.settings.compiler == \"Visual Studio\" and \\\n self.options.shared and \\\n \"MT\" in self.settings.compiler.runtime:\n raise ConanInvalidConfiguration(\"Visual Studio build for shared library with MT runtime is not supported\")\n if self.settings.compiler == \"Visual Studio\" and self.settings.compiler.version != \"15\":\n # https://github.com/jemalloc/jemalloc/issues/1703\n raise ConanInvalidConfiguration(\"Only Visual Studio 15 2017 is supported. Please fix this if other versions are supported\")\n if self.options.shared:\n del self.options.fPIC\n if not self.options.enable_cxx:\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n if self.settings.build_type not in (\"Release\", \"Debug\", None):\n raise ConanInvalidConfiguration(\"Only Release and Debug build_types are supported\")\n if self.settings.compiler == \"Visual Studio\" and self.settings.arch not in (\"x86_64\", \"x86\"):\n raise ConanInvalidConfiguration(\"Unsupported arch\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n os.rename(\"{}-{}\".format(self.name, self.version), self._source_subfolder)\n\n def build_requirements(self):\n if tools.os_info.is_windows and not os.environ.get(\"CONAN_BASH_PATH\", None):\n self.build_requires(\"msys2/20200517\")\n\n @property\n def _autotools_args(self):\n conf_args = [\n \"--with-jemalloc-prefix={}\".format(self.options.prefix),\n \"--enable-debug\" if self.settings.build_type == \"Debug\" else \"--disable-debug\",\n \"--enable-cxx\" if self.options.enable_cxx else \"--disable-cxx\",\n \"--enable-fill\" if self.options.enable_fill else \"--disable-fill\",\n \"--enable-xmalloc\" if self.options.enable_cxx else \"--disable-xmalloc\",\n \"--enable-readlinkat\" if self.options.enable_readlinkat else \"--disable-readlinkat\",\n \"--enable-syscall\" if self.options.enable_syscall else \"--disable-syscall\",\n \"--enable-lazy-lock\" if self.options.enable_lazy_lock else \"--disable-lazy-lock\",\n \"--enable-log\" if self.options.enable_debug_logging else \"--disable-log\",\n \"--enable-initial-exec-tld\" if self.options.enable_initial_exec_tls else \"--disable-initial-exec-tls\",\n \"--enable-libdl\" if self.options.enable_libdl else \"--disable-libdl\",\n ]\n if self.options.shared:\n conf_args.extend([\"--enable-shared\", \"--disable-static\"])\n else:\n conf_args.extend([\"--disable-shared\", \"--enable-static\"])\n return conf_args\n\n def _configure_autotools(self):\n if self._autotools:\n return self._autotools\n self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)\n self._autotools.configure(args=self._autotools_args, configure_dir=self._source_subfolder)\n return self._autotools\n\n @property\n def _msvc_build_type(self):\n build_type = str(self.settings.build_type) or \"Release\"\n if not self.options.shared:\n build_type += \"-static\"\n return build_type\n\n def _patch_sources(self):\n if self.settings.os == \"Windows\":\n makefile_in = os.path.join(self._source_subfolder, 
\"Makefile.in\")\n tools.replace_in_file(makefile_in,\n \"DSO_LDFLAGS = @DSO_LDFLAGS@\",\n \"DSO_LDFLAGS = @DSO_LDFLAGS@ -Wl,--out-implib,lib/libjemalloc.a\")\n tools.replace_in_file(makefile_in,\n \"\\t$(INSTALL) -d $(LIBDIR)\\n\"\n \"\\t$(INSTALL) -m 755 $(objroot)lib/$(LIBJEMALLOC).$(SOREV) $(LIBDIR)\",\n \"\\t$(INSTALL) -d $(BINDIR)\\n\"\n \"\\t$(INSTALL) -d $(LIBDIR)\\n\"\n \"\\t$(INSTALL) -m 755 $(objroot)lib/$(LIBJEMALLOC).$(SOREV) $(BINDIR)\\n\"\n \"\\t$(INSTALL) -m 644 $(objroot)lib/libjemalloc.a $(LIBDIR)\")\n\n def build(self):\n self._patch_sources()\n if self.settings.compiler == \"Visual Studio\":\n with tools.vcvars(self.settings) if self.settings.compiler == \"Visual Studio\" else tools.no_op():\n with tools.environment_append({\"CC\": \"cl\", \"CXX\": \"cl\"}) if self.settings.compiler == \"Visual Studio\" else tools.no_op():\n with tools.chdir(self._source_subfolder):\n # Do not use AutoToolsBuildEnvironment because we want to run configure as ./configure\n self.run(\"./configure {}\".format(\" \".join(self._autotools_args)), win_bash=tools.os_info.is_windows)\n msbuild = MSBuild(self)\n # Do not use the 2015 solution: unresolved external symbols: test_hooks_libc_hook and test_hooks_arena_new_hook\n sln_file = os.path.join(self._source_subfolder, \"msvc\", \"jemalloc_vc2017.sln\")\n msbuild.build(sln_file, targets=[\"jemalloc\"], build_type=self._msvc_build_type)\n else:\n autotools = self._configure_autotools()\n autotools.make()\n\n @property\n def _library_name(self):\n libname = \"jemalloc\"\n if self.settings.compiler == \"Visual Studio\":\n if self.options.shared:\n if self.settings.build_type == \"Debug\":\n libname += \"d\"\n else:\n toolset = tools.msvs_toolset(self.settings)\n toolset_number = \"\".join(c for c in toolset if c in string.digits)\n libname += \"-vc{}-{}\".format(toolset_number, self._msvc_build_type)\n else:\n if self.settings.os == \"Windows\":\n if not self.options.shared:\n libname += \"_s\"\n else:\n if not self.options.shared and self.options.fPIC:\n libname += \"_pic\"\n return libname\n\n def package(self):\n self.copy(pattern=\"COPYING\", src=self._source_subfolder, dst=\"licenses\")\n if self.settings.compiler == \"Visual Studio\":\n arch_subdir = {\n \"x86_64\": \"x64\",\n \"x86\": \"x86\",\n }[str(self.settings.arch)]\n self.copy(\"*.lib\", src=os.path.join(self._source_subfolder, \"msvc\", arch_subdir, self._msvc_build_type), dst=os.path.join(self.package_folder, \"lib\"))\n self.copy(\"*.dll\", src=os.path.join(self._source_subfolder, \"msvc\", arch_subdir, self._msvc_build_type), dst=os.path.join(self.package_folder, \"bin\"))\n self.copy(\"jemalloc.h\", src=os.path.join(self._source_subfolder, \"include\", \"jemalloc\"), dst=os.path.join(self.package_folder, \"include\", \"jemalloc\"), keep_path=True)\n shutil.copytree(os.path.join(self._source_subfolder, \"include\", \"msvc_compat\"),\n os.path.join(self.package_folder, \"include\", \"msvc_compat\"))\n else:\n autotools = self._configure_autotools()\n # Use install_lib_XXX and install_include to avoid mixing binaries and dll's\n autotools.make(target=\"install_lib_shared\" if self.options.shared else \"install_lib_static\")\n autotools.make(target=\"install_include\")\n if self.settings.os == \"Windows\" and self.settings.compiler == \"gcc\":\n os.rename(os.path.join(self.package_folder, \"lib\", \"{}.lib\".format(self._library_name)),\n os.path.join(self.package_folder, \"lib\", \"lib{}.a\".format(self._library_name)))\n if not self.options.shared:\n 
os.unlink(os.path.join(self.package_folder, \"lib\", \"jemalloc.lib\"))\n\n def package_id(self):\n if not self.settings.build_type:\n self.info.settings.build_type = \"Release\"\n\n def package_info(self):\n self.cpp_info.libs = [self._library_name]\n self.cpp_info.includedirs = [os.path.join(self.package_folder, \"include\"),\n os.path.join(self.package_folder, \"include\", \"jemalloc\")]\n if self.settings.compiler == \"Visual Studio\":\n self.cpp_info.includedirs.append(os.path.join(self.package_folder, \"include\", \"msvc_compat\"))\n if not self.options.shared:\n self.cpp_info.defines = [\"JEMALLOC_EXPORT=\"]\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.extend([\"dl\", \"pthread\", \"rt\"])\n", "path": "recipes/jemalloc/all/conanfile.py"}], "after_files": [{"content": "from conans import AutoToolsBuildEnvironment, ConanFile, MSBuild, tools\nfrom conans.errors import ConanInvalidConfiguration\nimport os\nimport shutil\nimport string\n\n\nclass JemallocConan(ConanFile):\n name = \"jemalloc\"\n description = \"jemalloc is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.\"\n url = \"https://github.com/conan-io/conan-center-index\"\n license = \"BSD-2-Clause\"\n homepage = \"http://jemalloc.net/\"\n topics = (\"conan\", \"jemalloc\", \"malloc\", \"free\")\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"prefix\": \"ANY\",\n \"enable_cxx\": [True, False],\n \"enable_fill\": [True, False],\n \"enable_xmalloc\": [True, False],\n \"enable_readlinkat\": [True, False],\n \"enable_syscall\": [True, False],\n \"enable_lazy_lock\": [True, False],\n \"enable_debug_logging\": [True, False],\n \"enable_initial_exec_tls\": [True, False],\n \"enable_libdl\": [True, False],\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"prefix\": \"\",\n \"enable_cxx\": True,\n \"enable_fill\": True,\n \"enable_xmalloc\": False,\n \"enable_readlinkat\": False,\n \"enable_syscall\": True,\n \"enable_lazy_lock\": False,\n \"enable_debug_logging\": False,\n \"enable_initial_exec_tls\": True,\n \"enable_libdl\": True,\n }\n exports_sources = [\"patches/**\"]\n\n _autotools = None\n\n _source_subfolder = \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.options.enable_cxx and \\\n self.settings.compiler.get_safe(\"libcxx\") == \"libc++\" and \\\n self.settings.compiler == \"clang\" and \\\n tools.Version(self.settings.compiler.version) < \"10\":\n raise ConanInvalidConfiguration(\"clang and libc++ version {} (< 10) is missing a mutex implementation\".format(self.settings.compiler.version))\n if self.settings.compiler == \"Visual Studio\" and \\\n self.options.shared and \\\n \"MT\" in self.settings.compiler.runtime:\n raise ConanInvalidConfiguration(\"Visual Studio build for shared library with MT runtime is not supported\")\n if self.settings.compiler == \"Visual Studio\" and self.settings.compiler.version != \"15\":\n # https://github.com/jemalloc/jemalloc/issues/1703\n raise ConanInvalidConfiguration(\"Only Visual Studio 15 2017 is supported. 
Please fix this if other versions are supported\")\n if self.options.shared:\n del self.options.fPIC\n if not self.options.enable_cxx:\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n if self.settings.build_type not in (\"Release\", \"Debug\", None):\n raise ConanInvalidConfiguration(\"Only Release and Debug build_types are supported\")\n if self.settings.compiler == \"Visual Studio\" and self.settings.arch not in (\"x86_64\", \"x86\"):\n raise ConanInvalidConfiguration(\"Unsupported arch\")\n if self.settings.compiler == \"clang\" and tools.Version(self.settings.compiler.version) <= \"3.9\":\n raise ConanInvalidConfiguration(\"Unsupported compiler version\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n os.rename(\"{}-{}\".format(self.name, self.version), self._source_subfolder)\n\n def build_requirements(self):\n if tools.os_info.is_windows and not os.environ.get(\"CONAN_BASH_PATH\", None):\n self.build_requires(\"msys2/20200517\")\n\n @property\n def _autotools_args(self):\n conf_args = [\n \"--with-jemalloc-prefix={}\".format(self.options.prefix),\n \"--enable-debug\" if self.settings.build_type == \"Debug\" else \"--disable-debug\",\n \"--enable-cxx\" if self.options.enable_cxx else \"--disable-cxx\",\n \"--enable-fill\" if self.options.enable_fill else \"--disable-fill\",\n \"--enable-xmalloc\" if self.options.enable_cxx else \"--disable-xmalloc\",\n \"--enable-readlinkat\" if self.options.enable_readlinkat else \"--disable-readlinkat\",\n \"--enable-syscall\" if self.options.enable_syscall else \"--disable-syscall\",\n \"--enable-lazy-lock\" if self.options.enable_lazy_lock else \"--disable-lazy-lock\",\n \"--enable-log\" if self.options.enable_debug_logging else \"--disable-log\",\n \"--enable-initial-exec-tls\" if self.options.enable_initial_exec_tls else \"--disable-initial-exec-tls\",\n \"--enable-libdl\" if self.options.enable_libdl else \"--disable-libdl\",\n ]\n if self.options.shared:\n conf_args.extend([\"--enable-shared\", \"--disable-static\"])\n else:\n conf_args.extend([\"--disable-shared\", \"--enable-static\"])\n return conf_args\n\n def _configure_autotools(self):\n if self._autotools:\n return self._autotools\n self._autotools = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)\n self._autotools.configure(args=self._autotools_args, configure_dir=self._source_subfolder)\n return self._autotools\n\n @property\n def _msvc_build_type(self):\n build_type = str(self.settings.build_type) or \"Release\"\n if not self.options.shared:\n build_type += \"-static\"\n return build_type\n\n def _patch_sources(self):\n if self.settings.os == \"Windows\":\n makefile_in = os.path.join(self._source_subfolder, \"Makefile.in\")\n tools.replace_in_file(makefile_in,\n \"DSO_LDFLAGS = @DSO_LDFLAGS@\",\n \"DSO_LDFLAGS = @DSO_LDFLAGS@ -Wl,--out-implib,lib/libjemalloc.a\")\n tools.replace_in_file(makefile_in,\n \"\\t$(INSTALL) -d $(LIBDIR)\\n\"\n \"\\t$(INSTALL) -m 755 $(objroot)lib/$(LIBJEMALLOC).$(SOREV) $(LIBDIR)\",\n \"\\t$(INSTALL) -d $(BINDIR)\\n\"\n \"\\t$(INSTALL) -d $(LIBDIR)\\n\"\n \"\\t$(INSTALL) -m 755 $(objroot)lib/$(LIBJEMALLOC).$(SOREV) $(BINDIR)\\n\"\n \"\\t$(INSTALL) -m 644 $(objroot)lib/libjemalloc.a $(LIBDIR)\")\n\n for patch in self.conan_data.get(\"patches\", {}).get(self.version, []):\n tools.patch(**patch)\n\n def build(self):\n self._patch_sources()\n if self.settings.compiler == \"Visual Studio\":\n with tools.vcvars(self.settings) if self.settings.compiler == \"Visual Studio\" else 
tools.no_op():\n with tools.environment_append({\"CC\": \"cl\", \"CXX\": \"cl\"}) if self.settings.compiler == \"Visual Studio\" else tools.no_op():\n with tools.chdir(self._source_subfolder):\n # Do not use AutoToolsBuildEnvironment because we want to run configure as ./configure\n self.run(\"./configure {}\".format(\" \".join(self._autotools_args)), win_bash=tools.os_info.is_windows)\n msbuild = MSBuild(self)\n # Do not use the 2015 solution: unresolved external symbols: test_hooks_libc_hook and test_hooks_arena_new_hook\n sln_file = os.path.join(self._source_subfolder, \"msvc\", \"jemalloc_vc2017.sln\")\n msbuild.build(sln_file, targets=[\"jemalloc\"], build_type=self._msvc_build_type)\n else:\n autotools = self._configure_autotools()\n autotools.make()\n\n @property\n def _library_name(self):\n libname = \"jemalloc\"\n if self.settings.compiler == \"Visual Studio\":\n if self.options.shared:\n if self.settings.build_type == \"Debug\":\n libname += \"d\"\n else:\n toolset = tools.msvs_toolset(self.settings)\n toolset_number = \"\".join(c for c in toolset if c in string.digits)\n libname += \"-vc{}-{}\".format(toolset_number, self._msvc_build_type)\n else:\n if self.settings.os == \"Windows\":\n if not self.options.shared:\n libname += \"_s\"\n else:\n if not self.options.shared and self.options.fPIC:\n libname += \"_pic\"\n return libname\n\n def package(self):\n self.copy(pattern=\"COPYING\", src=self._source_subfolder, dst=\"licenses\")\n if self.settings.compiler == \"Visual Studio\":\n arch_subdir = {\n \"x86_64\": \"x64\",\n \"x86\": \"x86\",\n }[str(self.settings.arch)]\n self.copy(\"*.lib\", src=os.path.join(self._source_subfolder, \"msvc\", arch_subdir, self._msvc_build_type), dst=os.path.join(self.package_folder, \"lib\"))\n self.copy(\"*.dll\", src=os.path.join(self._source_subfolder, \"msvc\", arch_subdir, self._msvc_build_type), dst=os.path.join(self.package_folder, \"bin\"))\n self.copy(\"jemalloc.h\", src=os.path.join(self._source_subfolder, \"include\", \"jemalloc\"), dst=os.path.join(self.package_folder, \"include\", \"jemalloc\"), keep_path=True)\n shutil.copytree(os.path.join(self._source_subfolder, \"include\", \"msvc_compat\"),\n os.path.join(self.package_folder, \"include\", \"msvc_compat\"))\n else:\n autotools = self._configure_autotools()\n # Use install_lib_XXX and install_include to avoid mixing binaries and dll's\n autotools.make(target=\"install_lib_shared\" if self.options.shared else \"install_lib_static\")\n autotools.make(target=\"install_include\")\n if self.settings.os == \"Windows\" and self.settings.compiler == \"gcc\":\n os.rename(os.path.join(self.package_folder, \"lib\", \"{}.lib\".format(self._library_name)),\n os.path.join(self.package_folder, \"lib\", \"lib{}.a\".format(self._library_name)))\n if not self.options.shared:\n os.unlink(os.path.join(self.package_folder, \"lib\", \"jemalloc.lib\"))\n\n def package_id(self):\n if not self.settings.build_type:\n self.info.settings.build_type = \"Release\"\n\n def package_info(self):\n self.cpp_info.libs = [self._library_name]\n self.cpp_info.includedirs = [os.path.join(self.package_folder, \"include\"),\n os.path.join(self.package_folder, \"include\", \"jemalloc\")]\n if self.settings.compiler == \"Visual Studio\":\n self.cpp_info.includedirs.append(os.path.join(self.package_folder, \"include\", \"msvc_compat\"))\n if not self.options.shared:\n self.cpp_info.defines = [\"JEMALLOC_EXPORT=\"]\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.extend([\"dl\", \"pthread\", \"rt\"])\n", "path": 
"recipes/jemalloc/all/conanfile.py"}]}
| 3,261 | 487 |
gh_patches_debug_40269
|
rasdani/github-patches
|
git_diff
|
ddionrails__ddionrails-624
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Move import_path() method into mixin
### Subject of the issue
The System and Study model both implement the same `import_path()` method.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddionrails/base/mixins.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """ Mixins for ddionrails.base app """
4
5 from typing import Dict
6
7 from django import forms
8
9 from config.helpers import render_markdown
10
11
12 class ModelMixin:
13 """
14 Default mixins for all classes in DDI on Rails.
15
16 Requires two definition in the ``DOR`` class:
17
18 * io_fields: Fields that are used for the default form and in the default dict.
19 * id_fields: Fields that are used for the get_or_create default method.
20
21 Example:
22
23 ::
24
25 from django.db import models
26 from ddionrails.mixins import ModelMixin
27
28 class Test(models.Model, ModelMixin):
29
30 name = models.CharField(max_length=255, unique=True)
31
32 class DOR:
33 id_fields = ["name"]
34 io_fields = ["name"]
35
36 The default value for DOR is:
37
38 ::
39
40 class DOR:
41 id_fields = ["name"]
42 io_fields = ["name", "label", "description"]
43
44 The ``id_fields`` are also use to construct a default string identifier.
45 It is therefore recommended, to order them from the most general to the
46 most specific one.
47
48 """
49
50 class DOR:
51 id_fields = ["name"]
52 io_fields = ["name", "label", "description"]
53
54 @classmethod
55 def get_or_create(cls, parameters: Dict, lower_strings: bool = True):
56 """
57 Default for the get_or_create based on a dict.
58
59 The method uses only relevant identifiers based on ``DOR.id_fields``.
60
61 By default, all strings are set to lower case (option ``lower_strings``).
62 """
63 definition = {key: parameters[key] for key in cls.DOR.id_fields}
64 for key, value in definition.items():
65 if value.__class__ == str and lower_strings:
66 definition[key] = value.lower()
67 return cls.objects.get_or_create(**definition)[0]
68
69 @classmethod
70 def get(cls, parameters: Dict):
71 """
72 Default for the get_or_create based on a dict.
73
74 The method uses only relevant identifiers based on ``DOR.id_fields``.
75 """
76 try:
77 definition = {key: parameters[key] for key in cls.DOR.id_fields}
78 result = cls.objects.get(**definition)
79 except cls.DoesNotExist:
80 result = None
81 return result
82
83 @classmethod
84 def default_form(cls):
85 """
86 Creates a default form for all attributes defined in ``DOR.io_fields``.
87 """
88
89 class DefaultForm(forms.ModelForm):
90 class Meta:
91 model = cls
92 fields = cls.DOR.io_fields
93
94 return DefaultForm
95
96 def to_dict(self) -> Dict:
97 """
98 Uses the ``DOR.io_fields`` attribute to generate a default
99 dict object for the current instance.
100 """
101 dictionary = dict()
102 for field in self.DOR.io_fields:
103 value = getattr(self, field)
104 try:
105 dictionary[field] = value.pk
106 except AttributeError:
107 dictionary[field] = value
108 return dictionary
109
110 def title(self):
111 """
112 Default for the title. It first looks for a valid label, next for a
113 valid name, and otherwise returns an empty string.
114 """
115 try:
116 name = self.name
117 except AttributeError:
118 name = ""
119 try:
120 label = self.label
121 except AttributeError:
122 label = ""
123 return name if label == "" else label
124
125 def html_description(self):
126 """
127 Uses the ddionrails Markdown parser (ddionrails.helpers) to render
128 the description into HTML.
129 """
130 try:
131 html = render_markdown(self.description)
132 except AttributeError:
133 html = ""
134 return html
135
136 def __str__(self):
137 """ Returns a string reprensentation of the instance, using DOR.id_fields """
138 result = []
139 for field in self.DOR.id_fields:
140 value = getattr(self, field)
141 try:
142 result.append(value.string_id())
143 except AttributeError:
144 result.append(str(value))
145 return "/".join(result)
146
147
148 class AdminMixin:
149 """ A mixin for ModelAdmins to query related models via methods """
150
151 @staticmethod
152 def study_name(obj):
153 """ Return the name of the related study """
154 try:
155 return obj.study.name
156 except AttributeError:
157 return None
158
159 @staticmethod
160 def period_name(obj):
161 """ Return the name of the related period """
162 try:
163 return obj.period.name
164 except AttributeError:
165 return None
166
167 @staticmethod
168 def analysis_unit_name(obj):
169 """ Return the name of the related analysis_unit """
170 try:
171 return obj.analysis_unit.name
172 except AttributeError:
173 return None
174
175 @staticmethod
176 def dataset_name(obj):
177 """ Return the name of the related dataset """
178 try:
179 return obj.dataset.name
180 except AttributeError:
181 return None
182
183 @staticmethod
184 def dataset_study_name(obj):
185 """ Return the name of the related dataset.study """
186 try:
187 return obj.dataset.study.name
188 except AttributeError:
189 return None
190
191 @staticmethod
192 def instrument_name(obj):
193 """ Return the name of the related instrument """
194 try:
195 return obj.instrument.name
196 except AttributeError:
197 return None
198
199 @staticmethod
200 def instrument_study_name(obj):
201 """ Return the name of the related instrument.study """
202 try:
203 return obj.instrument.study.name
204 except AttributeError:
205 return None
206
207 @staticmethod
208 def basket_name(obj):
209 """ Return the name of the related basket """
210 try:
211 return obj.basket.name
212 except AttributeError:
213 return None
214
215 @staticmethod
216 def basket_study_name(obj):
217 """ Return the name of the related basket.study """
218 try:
219 return obj.basket.study.name
220 except AttributeError:
221 return None
222
223 @staticmethod
224 def user_name(obj):
225 """ Return the name of the related basket.user """
226 try:
227 return obj.basket.user.username
228 except AttributeError:
229 return None
230
```
Path: `ddionrails/studies/models.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """ Model definitions for ddionrails.studies app """
3
4 import os
5 from typing import List, Optional
6
7 from django.conf import settings
8 from django.contrib.postgres.fields import ArrayField, JSONField
9 from django.contrib.postgres.fields.jsonb import JSONField as JSONBField
10 from django.db import models
11 from django.urls import reverse
12 from model_utils.models import TimeStampedModel
13
14 from ddionrails.base.mixins import ModelMixin
15
16
17 class TopicList(models.Model):
18
19 # attributes
20 topiclist = JSONBField(
21 default=list,
22 null=True,
23 blank=True,
24 help_text="Topics of the related study (JSON)",
25 )
26
27 # relations
28 study = models.OneToOneField(
29 "Study",
30 blank=True,
31 null=True,
32 related_name="topiclist",
33 on_delete=models.CASCADE,
34 help_text="OneToOneField to studies.Study",
35 )
36
37
38 class Study(ModelMixin, TimeStampedModel):
39 """
40 Stores a single study,
41 related to :model:`data.Dataset`, :model:`instruments.Instrument`,
42 :model:`concepts.Period` and :model:`workspace.Basket`.
43 """
44
45 # attributes
46 name = models.CharField(
47 max_length=255, unique=True, db_index=True, help_text="Name of the study"
48 )
49 label = models.CharField(
50 max_length=255,
51 blank=True,
52 verbose_name="Label (English)",
53 help_text="Label of the study (English)",
54 )
55 label_de = models.CharField(
56 max_length=255,
57 blank=True,
58 null=True,
59 verbose_name="Label (German)",
60 help_text="Label of the study (German)",
61 )
62 description = models.TextField(
63 blank=True, help_text="Description of the study (Markdown)"
64 )
65 repo = models.CharField(
66 max_length=255,
67 blank=True,
68 help_text="Reference to the Git repository without definition of the protocol (e.g. https)",
69 )
70 current_commit = models.CharField(
71 max_length=255,
72 blank=True,
73 help_text="Commit hash of the last metadata import. This field is automatically filled by DDI on Rails",
74 )
75 config = JSONField(
76 default=dict, blank=True, null=True, help_text="Configuration of the study (JSON)"
77 )
78
79 topic_languages = ArrayField(
80 models.CharField(max_length=200),
81 blank=True,
82 default=list,
83 help_text="Topic languages of the study (Array)",
84 )
85
86 class Meta: # pylint: disable=too-few-public-methods
87 """ Django's metadata options """
88
89 verbose_name_plural = "Studies"
90
91 class DOR: # pylint: disable=too-few-public-methods
92 """ ddionrails' metadata options """
93
94 io_fields = ["name", "label", "description"]
95 id_fields = ["name"]
96
97 def __str__(self) -> str:
98 """ Returns a string representation using the "name" field """
99 return f"/{self.name}"
100
101 def get_absolute_url(self) -> str:
102 """ Returns a canonical URL for the model using the "name" field """
103 return reverse("study_detail", kwargs={"study_name": self.name})
104
105 def import_path(self):
106 path = os.path.join(
107 settings.IMPORT_REPO_PATH, self.name, settings.IMPORT_SUB_DIRECTORY
108 )
109 return path
110
111 def repo_url(self) -> str:
112 if settings.GIT_PROTOCOL == "https":
113 return f"https://{self.repo}.git"
114 elif settings.GIT_PROTOCOL == "ssh":
115 return f"git@{self.repo}.git"
116 else:
117 raise Exception("Specify a protocol for Git in your settings.")
118
119 def set_topiclist(self, body: List) -> None:
120 _topiclist, _ = TopicList.objects.get_or_create(study=self)
121 _topiclist.topiclist = body
122 _topiclist.save()
123
124 def has_topics(self) -> bool:
125 """ Returns True if the study has topics False otherwise (evaluates the length of self.topic_languages) """
126 return len(self.topic_languages) > 0
127
128 def get_topiclist(self, language: str = "en") -> Optional[List]:
129 """ Returns the list of topics for a given language or None """
130 try:
131 for topiclist in self.topiclist.topiclist:
132 if topiclist.get("language", "") == language:
133 return topiclist.get("topics")
134 except TopicList.DoesNotExist:
135 return None
136
137
138 def context(request):
139 return dict(all_studies=Study.objects.all().only("name", "label", "description"))
140
```
Path: `ddionrails/base/models.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """ Model definitions for ddionrails.base app """
4
5 from __future__ import annotations
6
7 import pathlib
8
9 from django.conf import settings
10 from django.db import models
11
12
13 class System(models.Model):
14 """ Stores a single system instance """
15
16 name = settings.SYSTEM_NAME
17 current_commit = models.CharField(max_length=255, blank=True)
18
19 @staticmethod
20 def repo_url() -> str:
21 """ Returns the system's repo url from the settings """
22 return settings.SYSTEM_REPO_URL
23
24 def import_path(self) -> pathlib.Path:
25 """ Returns the system's import path """
26 return pathlib.Path(settings.IMPORT_REPO_PATH).joinpath(
27 self.name, settings.IMPORT_SUB_DIRECTORY
28 )
29
30 @classmethod
31 def get(cls) -> System:
32 """ Returns a single system instance """
33 if cls.objects.count() == 0:
34 system = System()
35 system.save()
36 else:
37 system = System.objects.first()
38 return system
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
```diff
diff --git a/ddionrails/base/mixins.py b/ddionrails/base/mixins.py
--- a/ddionrails/base/mixins.py
+++ b/ddionrails/base/mixins.py
@@ -2,9 +2,11 @@
""" Mixins for ddionrails.base app """
+import pathlib
from typing import Dict
from django import forms
+from django.conf import settings
from config.helpers import render_markdown
@@ -145,6 +147,16 @@
return "/".join(result)
+class ImportPathMixin:
+ """ A mixin for models to return an import_path based on their name attribute """
+
+ def import_path(self) -> pathlib.Path:
+ """ Returns the instance's import path """
+ return pathlib.Path(settings.IMPORT_REPO_PATH).joinpath(
+ self.name, settings.IMPORT_SUB_DIRECTORY
+ )
+
+
class AdminMixin:
""" A mixin for ModelAdmins to query related models via methods """
diff --git a/ddionrails/base/models.py b/ddionrails/base/models.py
--- a/ddionrails/base/models.py
+++ b/ddionrails/base/models.py
@@ -4,13 +4,13 @@
from __future__ import annotations
-import pathlib
-
from django.conf import settings
from django.db import models
+from .mixins import ImportPathMixin
+
-class System(models.Model):
+class System(ImportPathMixin, models.Model):
""" Stores a single system instance """
name = settings.SYSTEM_NAME
@@ -21,12 +21,6 @@
""" Returns the system's repo url from the settings """
return settings.SYSTEM_REPO_URL
- def import_path(self) -> pathlib.Path:
- """ Returns the system's import path """
- return pathlib.Path(settings.IMPORT_REPO_PATH).joinpath(
- self.name, settings.IMPORT_SUB_DIRECTORY
- )
-
@classmethod
def get(cls) -> System:
""" Returns a single system instance """
diff --git a/ddionrails/studies/models.py b/ddionrails/studies/models.py
--- a/ddionrails/studies/models.py
+++ b/ddionrails/studies/models.py
@@ -11,7 +11,7 @@
from django.urls import reverse
from model_utils.models import TimeStampedModel
-from ddionrails.base.mixins import ModelMixin
+from ddionrails.base.mixins import ImportPathMixin, ModelMixin
class TopicList(models.Model):
@@ -35,7 +35,7 @@
)
-class Study(ModelMixin, TimeStampedModel):
+class Study(ImportPathMixin, ModelMixin, TimeStampedModel):
"""
Stores a single study,
related to :model:`data.Dataset`, :model:`instruments.Instrument`,
@@ -102,12 +102,6 @@
""" Returns a canonical URL for the model using the "name" field """
return reverse("study_detail", kwargs={"study_name": self.name})
- def import_path(self):
- path = os.path.join(
- settings.IMPORT_REPO_PATH, self.name, settings.IMPORT_SUB_DIRECTORY
- )
- return path
-
def repo_url(self) -> str:
if settings.GIT_PROTOCOL == "https":
return f"https://{self.repo}.git"
```
|
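To make the refactor shown in the diff above easier to picture, here is a small, self-contained sketch of the shared-mixin pattern it introduces. The sketch deliberately uses plain classes and a stand-in `settings` object instead of Django models and `django.conf.settings`, so apart from the `import_path()` logic the names and values below are illustrative assumptions only:

```python
import pathlib


class _FakeSettings:
    """Stand-in for django.conf.settings, for illustration only."""

    IMPORT_REPO_PATH = "/var/lib/import-repos"   # assumed example path
    IMPORT_SUB_DIRECTORY = "ddionrails"          # assumed example sub-directory


settings = _FakeSettings()


class ImportPathMixin:
    """Provides import_path() to any class that exposes a `name` attribute."""

    def import_path(self) -> pathlib.Path:
        return pathlib.Path(settings.IMPORT_REPO_PATH).joinpath(
            self.name, settings.IMPORT_SUB_DIRECTORY
        )


class Study(ImportPathMixin):
    def __init__(self, name: str) -> None:
        self.name = name


class System(ImportPathMixin):
    name = "system"


print(Study("soep-core").import_path())  # /var/lib/import-repos/soep-core/ddionrails
print(System().import_path())            # /var/lib/import-repos/system/ddionrails
```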
{"golden_diff": "diff --git a/ddionrails/base/mixins.py b/ddionrails/base/mixins.py\n--- a/ddionrails/base/mixins.py\n+++ b/ddionrails/base/mixins.py\n@@ -2,9 +2,11 @@\n \n \"\"\" Mixins for ddionrails.base app \"\"\"\n \n+import pathlib\n from typing import Dict\n \n from django import forms\n+from django.conf import settings\n \n from config.helpers import render_markdown\n \n@@ -145,6 +147,16 @@\n return \"/\".join(result)\n \n \n+class ImportPathMixin:\n+ \"\"\" A mixin for models to return an import_path based on their name attribute \"\"\"\n+\n+ def import_path(self) -> pathlib.Path:\n+ \"\"\" Returns the instance's import path \"\"\"\n+ return pathlib.Path(settings.IMPORT_REPO_PATH).joinpath(\n+ self.name, settings.IMPORT_SUB_DIRECTORY\n+ )\n+\n+\n class AdminMixin:\n \"\"\" A mixin for ModelAdmins to query related models via methods \"\"\"\n \ndiff --git a/ddionrails/base/models.py b/ddionrails/base/models.py\n--- a/ddionrails/base/models.py\n+++ b/ddionrails/base/models.py\n@@ -4,13 +4,13 @@\n \n from __future__ import annotations\n \n-import pathlib\n-\n from django.conf import settings\n from django.db import models\n \n+from .mixins import ImportPathMixin\n+\n \n-class System(models.Model):\n+class System(ImportPathMixin, models.Model):\n \"\"\" Stores a single system instance \"\"\"\n \n name = settings.SYSTEM_NAME\n@@ -21,12 +21,6 @@\n \"\"\" Returns the system's repo url from the settings \"\"\"\n return settings.SYSTEM_REPO_URL\n \n- def import_path(self) -> pathlib.Path:\n- \"\"\" Returns the system's import path \"\"\"\n- return pathlib.Path(settings.IMPORT_REPO_PATH).joinpath(\n- self.name, settings.IMPORT_SUB_DIRECTORY\n- )\n-\n @classmethod\n def get(cls) -> System:\n \"\"\" Returns a single system instance \"\"\"\ndiff --git a/ddionrails/studies/models.py b/ddionrails/studies/models.py\n--- a/ddionrails/studies/models.py\n+++ b/ddionrails/studies/models.py\n@@ -11,7 +11,7 @@\n from django.urls import reverse\n from model_utils.models import TimeStampedModel\n \n-from ddionrails.base.mixins import ModelMixin\n+from ddionrails.base.mixins import ImportPathMixin, ModelMixin\n \n \n class TopicList(models.Model):\n@@ -35,7 +35,7 @@\n )\n \n \n-class Study(ModelMixin, TimeStampedModel):\n+class Study(ImportPathMixin, ModelMixin, TimeStampedModel):\n \"\"\"\n Stores a single study,\n related to :model:`data.Dataset`, :model:`instruments.Instrument`,\n@@ -102,12 +102,6 @@\n \"\"\" Returns a canonical URL for the model using the \"name\" field \"\"\"\n return reverse(\"study_detail\", kwargs={\"study_name\": self.name})\n \n- def import_path(self):\n- path = os.path.join(\n- settings.IMPORT_REPO_PATH, self.name, settings.IMPORT_SUB_DIRECTORY\n- )\n- return path\n-\n def repo_url(self) -> str:\n if settings.GIT_PROTOCOL == \"https\":\n return f\"https://{self.repo}.git\"\n", "issue": "Move import_path() method into mixin\n### Subject of the issue\r\n\r\nThe System and Study model both implement the same `import_path()` method.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\" Mixins for ddionrails.base app \"\"\"\n\nfrom typing import Dict\n\nfrom django import forms\n\nfrom config.helpers import render_markdown\n\n\nclass ModelMixin:\n \"\"\"\n Default mixins for all classes in DDI on Rails.\n\n Requires two definition in the ``DOR`` class:\n\n * io_fields: Fields that are used for the default form and in the default dict.\n * id_fields: Fields that are used for the get_or_create default method.\n\n Example:\n\n ::\n\n from django.db import models\n from 
ddionrails.mixins import ModelMixin\n\n class Test(models.Model, ModelMixin):\n\n name = models.CharField(max_length=255, unique=True)\n\n class DOR:\n id_fields = [\"name\"]\n io_fields = [\"name\"]\n\n The default value for DOR is:\n\n ::\n\n class DOR:\n id_fields = [\"name\"]\n io_fields = [\"name\", \"label\", \"description\"]\n\n The ``id_fields`` are also use to construct a default string identifier.\n It is therefore recommended, to order them from the most general to the\n most specific one.\n\n \"\"\"\n\n class DOR:\n id_fields = [\"name\"]\n io_fields = [\"name\", \"label\", \"description\"]\n\n @classmethod\n def get_or_create(cls, parameters: Dict, lower_strings: bool = True):\n \"\"\"\n Default for the get_or_create based on a dict.\n\n The method uses only relevant identifiers based on ``DOR.id_fields``.\n\n By default, all strings are set to lower case (option ``lower_strings``).\n \"\"\"\n definition = {key: parameters[key] for key in cls.DOR.id_fields}\n for key, value in definition.items():\n if value.__class__ == str and lower_strings:\n definition[key] = value.lower()\n return cls.objects.get_or_create(**definition)[0]\n\n @classmethod\n def get(cls, parameters: Dict):\n \"\"\"\n Default for the get_or_create based on a dict.\n\n The method uses only relevant identifiers based on ``DOR.id_fields``.\n \"\"\"\n try:\n definition = {key: parameters[key] for key in cls.DOR.id_fields}\n result = cls.objects.get(**definition)\n except cls.DoesNotExist:\n result = None\n return result\n\n @classmethod\n def default_form(cls):\n \"\"\"\n Creates a default form for all attributes defined in ``DOR.io_fields``.\n \"\"\"\n\n class DefaultForm(forms.ModelForm):\n class Meta:\n model = cls\n fields = cls.DOR.io_fields\n\n return DefaultForm\n\n def to_dict(self) -> Dict:\n \"\"\"\n Uses the ``DOR.io_fields`` attribute to generate a default\n dict object for the current instance.\n \"\"\"\n dictionary = dict()\n for field in self.DOR.io_fields:\n value = getattr(self, field)\n try:\n dictionary[field] = value.pk\n except AttributeError:\n dictionary[field] = value\n return dictionary\n\n def title(self):\n \"\"\"\n Default for the title. 
It first looks for a valid label, next for a\n valid name, and otherwise returns an empty string.\n \"\"\"\n try:\n name = self.name\n except AttributeError:\n name = \"\"\n try:\n label = self.label\n except AttributeError:\n label = \"\"\n return name if label == \"\" else label\n\n def html_description(self):\n \"\"\"\n Uses the ddionrails Markdown parser (ddionrails.helpers) to render\n the description into HTML.\n \"\"\"\n try:\n html = render_markdown(self.description)\n except AttributeError:\n html = \"\"\n return html\n\n def __str__(self):\n \"\"\" Returns a string reprensentation of the instance, using DOR.id_fields \"\"\"\n result = []\n for field in self.DOR.id_fields:\n value = getattr(self, field)\n try:\n result.append(value.string_id())\n except AttributeError:\n result.append(str(value))\n return \"/\".join(result)\n\n\nclass AdminMixin:\n \"\"\" A mixin for ModelAdmins to query related models via methods \"\"\"\n\n @staticmethod\n def study_name(obj):\n \"\"\" Return the name of the related study \"\"\"\n try:\n return obj.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def period_name(obj):\n \"\"\" Return the name of the related period \"\"\"\n try:\n return obj.period.name\n except AttributeError:\n return None\n\n @staticmethod\n def analysis_unit_name(obj):\n \"\"\" Return the name of the related analysis_unit \"\"\"\n try:\n return obj.analysis_unit.name\n except AttributeError:\n return None\n\n @staticmethod\n def dataset_name(obj):\n \"\"\" Return the name of the related dataset \"\"\"\n try:\n return obj.dataset.name\n except AttributeError:\n return None\n\n @staticmethod\n def dataset_study_name(obj):\n \"\"\" Return the name of the related dataset.study \"\"\"\n try:\n return obj.dataset.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def instrument_name(obj):\n \"\"\" Return the name of the related instrument \"\"\"\n try:\n return obj.instrument.name\n except AttributeError:\n return None\n\n @staticmethod\n def instrument_study_name(obj):\n \"\"\" Return the name of the related instrument.study \"\"\"\n try:\n return obj.instrument.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def basket_name(obj):\n \"\"\" Return the name of the related basket \"\"\"\n try:\n return obj.basket.name\n except AttributeError:\n return None\n\n @staticmethod\n def basket_study_name(obj):\n \"\"\" Return the name of the related basket.study \"\"\"\n try:\n return obj.basket.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def user_name(obj):\n \"\"\" Return the name of the related basket.user \"\"\"\n try:\n return obj.basket.user.username\n except AttributeError:\n return None\n", "path": "ddionrails/base/mixins.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\" Model definitions for ddionrails.studies app \"\"\"\n\nimport os\nfrom typing import List, Optional\n\nfrom django.conf import settings\nfrom django.contrib.postgres.fields import ArrayField, JSONField\nfrom django.contrib.postgres.fields.jsonb import JSONField as JSONBField\nfrom django.db import models\nfrom django.urls import reverse\nfrom model_utils.models import TimeStampedModel\n\nfrom ddionrails.base.mixins import ModelMixin\n\n\nclass TopicList(models.Model):\n\n # attributes\n topiclist = JSONBField(\n default=list,\n null=True,\n blank=True,\n help_text=\"Topics of the related study (JSON)\",\n )\n\n # relations\n study = models.OneToOneField(\n \"Study\",\n blank=True,\n null=True,\n related_name=\"topiclist\",\n 
on_delete=models.CASCADE,\n help_text=\"OneToOneField to studies.Study\",\n )\n\n\nclass Study(ModelMixin, TimeStampedModel):\n \"\"\"\n Stores a single study,\n related to :model:`data.Dataset`, :model:`instruments.Instrument`,\n :model:`concepts.Period` and :model:`workspace.Basket`.\n \"\"\"\n\n # attributes\n name = models.CharField(\n max_length=255, unique=True, db_index=True, help_text=\"Name of the study\"\n )\n label = models.CharField(\n max_length=255,\n blank=True,\n verbose_name=\"Label (English)\",\n help_text=\"Label of the study (English)\",\n )\n label_de = models.CharField(\n max_length=255,\n blank=True,\n null=True,\n verbose_name=\"Label (German)\",\n help_text=\"Label of the study (German)\",\n )\n description = models.TextField(\n blank=True, help_text=\"Description of the study (Markdown)\"\n )\n repo = models.CharField(\n max_length=255,\n blank=True,\n help_text=\"Reference to the Git repository without definition of the protocol (e.g. https)\",\n )\n current_commit = models.CharField(\n max_length=255,\n blank=True,\n help_text=\"Commit hash of the last metadata import. This field is automatically filled by DDI on Rails\",\n )\n config = JSONField(\n default=dict, blank=True, null=True, help_text=\"Configuration of the study (JSON)\"\n )\n\n topic_languages = ArrayField(\n models.CharField(max_length=200),\n blank=True,\n default=list,\n help_text=\"Topic languages of the study (Array)\",\n )\n\n class Meta: # pylint: disable=too-few-public-methods\n \"\"\" Django's metadata options \"\"\"\n\n verbose_name_plural = \"Studies\"\n\n class DOR: # pylint: disable=too-few-public-methods\n \"\"\" ddionrails' metadata options \"\"\"\n\n io_fields = [\"name\", \"label\", \"description\"]\n id_fields = [\"name\"]\n\n def __str__(self) -> str:\n \"\"\" Returns a string representation using the \"name\" field \"\"\"\n return f\"/{self.name}\"\n\n def get_absolute_url(self) -> str:\n \"\"\" Returns a canonical URL for the model using the \"name\" field \"\"\"\n return reverse(\"study_detail\", kwargs={\"study_name\": self.name})\n\n def import_path(self):\n path = os.path.join(\n settings.IMPORT_REPO_PATH, self.name, settings.IMPORT_SUB_DIRECTORY\n )\n return path\n\n def repo_url(self) -> str:\n if settings.GIT_PROTOCOL == \"https\":\n return f\"https://{self.repo}.git\"\n elif settings.GIT_PROTOCOL == \"ssh\":\n return f\"git@{self.repo}.git\"\n else:\n raise Exception(\"Specify a protocol for Git in your settings.\")\n\n def set_topiclist(self, body: List) -> None:\n _topiclist, _ = TopicList.objects.get_or_create(study=self)\n _topiclist.topiclist = body\n _topiclist.save()\n\n def has_topics(self) -> bool:\n \"\"\" Returns True if the study has topics False otherwise (evaluates the length of self.topic_languages) \"\"\"\n return len(self.topic_languages) > 0\n\n def get_topiclist(self, language: str = \"en\") -> Optional[List]:\n \"\"\" Returns the list of topics for a given language or None \"\"\"\n try:\n for topiclist in self.topiclist.topiclist:\n if topiclist.get(\"language\", \"\") == language:\n return topiclist.get(\"topics\")\n except TopicList.DoesNotExist:\n return None\n\n\ndef context(request):\n return dict(all_studies=Study.objects.all().only(\"name\", \"label\", \"description\"))\n", "path": "ddionrails/studies/models.py"}, {"content": "# -*- coding: utf-8 -*-\n\n\"\"\" Model definitions for ddionrails.base app \"\"\"\n\nfrom __future__ import annotations\n\nimport pathlib\n\nfrom django.conf import settings\nfrom django.db import models\n\n\nclass 
System(models.Model):\n \"\"\" Stores a single system instance \"\"\"\n\n name = settings.SYSTEM_NAME\n current_commit = models.CharField(max_length=255, blank=True)\n\n @staticmethod\n def repo_url() -> str:\n \"\"\" Returns the system's repo url from the settings \"\"\"\n return settings.SYSTEM_REPO_URL\n\n def import_path(self) -> pathlib.Path:\n \"\"\" Returns the system's import path \"\"\"\n return pathlib.Path(settings.IMPORT_REPO_PATH).joinpath(\n self.name, settings.IMPORT_SUB_DIRECTORY\n )\n\n @classmethod\n def get(cls) -> System:\n \"\"\" Returns a single system instance \"\"\"\n if cls.objects.count() == 0:\n system = System()\n system.save()\n else:\n system = System.objects.first()\n return system\n", "path": "ddionrails/base/models.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\" Mixins for ddionrails.base app \"\"\"\n\nimport pathlib\nfrom typing import Dict\n\nfrom django import forms\nfrom django.conf import settings\n\nfrom config.helpers import render_markdown\n\n\nclass ModelMixin:\n \"\"\"\n Default mixins for all classes in DDI on Rails.\n\n Requires two definition in the ``DOR`` class:\n\n * io_fields: Fields that are used for the default form and in the default dict.\n * id_fields: Fields that are used for the get_or_create default method.\n\n Example:\n\n ::\n\n from django.db import models\n from ddionrails.mixins import ModelMixin\n\n class Test(models.Model, ModelMixin):\n\n name = models.CharField(max_length=255, unique=True)\n\n class DOR:\n id_fields = [\"name\"]\n io_fields = [\"name\"]\n\n The default value for DOR is:\n\n ::\n\n class DOR:\n id_fields = [\"name\"]\n io_fields = [\"name\", \"label\", \"description\"]\n\n The ``id_fields`` are also use to construct a default string identifier.\n It is therefore recommended, to order them from the most general to the\n most specific one.\n\n \"\"\"\n\n class DOR:\n id_fields = [\"name\"]\n io_fields = [\"name\", \"label\", \"description\"]\n\n @classmethod\n def get_or_create(cls, parameters: Dict, lower_strings: bool = True):\n \"\"\"\n Default for the get_or_create based on a dict.\n\n The method uses only relevant identifiers based on ``DOR.id_fields``.\n\n By default, all strings are set to lower case (option ``lower_strings``).\n \"\"\"\n definition = {key: parameters[key] for key in cls.DOR.id_fields}\n for key, value in definition.items():\n if value.__class__ == str and lower_strings:\n definition[key] = value.lower()\n return cls.objects.get_or_create(**definition)[0]\n\n @classmethod\n def get(cls, parameters: Dict):\n \"\"\"\n Default for the get_or_create based on a dict.\n\n The method uses only relevant identifiers based on ``DOR.id_fields``.\n \"\"\"\n try:\n definition = {key: parameters[key] for key in cls.DOR.id_fields}\n result = cls.objects.get(**definition)\n except cls.DoesNotExist:\n result = None\n return result\n\n @classmethod\n def default_form(cls):\n \"\"\"\n Creates a default form for all attributes defined in ``DOR.io_fields``.\n \"\"\"\n\n class DefaultForm(forms.ModelForm):\n class Meta:\n model = cls\n fields = cls.DOR.io_fields\n\n return DefaultForm\n\n def to_dict(self) -> Dict:\n \"\"\"\n Uses the ``DOR.io_fields`` attribute to generate a default\n dict object for the current instance.\n \"\"\"\n dictionary = dict()\n for field in self.DOR.io_fields:\n value = getattr(self, field)\n try:\n dictionary[field] = value.pk\n except AttributeError:\n dictionary[field] = value\n return dictionary\n\n def title(self):\n \"\"\"\n Default for the title. 
It first looks for a valid label, next for a\n valid name, and otherwise returns an empty string.\n \"\"\"\n try:\n name = self.name\n except AttributeError:\n name = \"\"\n try:\n label = self.label\n except AttributeError:\n label = \"\"\n return name if label == \"\" else label\n\n def html_description(self):\n \"\"\"\n Uses the ddionrails Markdown parser (ddionrails.helpers) to render\n the description into HTML.\n \"\"\"\n try:\n html = render_markdown(self.description)\n except AttributeError:\n html = \"\"\n return html\n\n def __str__(self):\n \"\"\" Returns a string reprensentation of the instance, using DOR.id_fields \"\"\"\n result = []\n for field in self.DOR.id_fields:\n value = getattr(self, field)\n try:\n result.append(value.string_id())\n except AttributeError:\n result.append(str(value))\n return \"/\".join(result)\n\n\nclass ImportPathMixin:\n \"\"\" A mixin for models to return an import_path based on their name attribute \"\"\"\n\n def import_path(self) -> pathlib.Path:\n \"\"\" Returns the instance's import path \"\"\"\n return pathlib.Path(settings.IMPORT_REPO_PATH).joinpath(\n self.name, settings.IMPORT_SUB_DIRECTORY\n )\n\n\nclass AdminMixin:\n \"\"\" A mixin for ModelAdmins to query related models via methods \"\"\"\n\n @staticmethod\n def study_name(obj):\n \"\"\" Return the name of the related study \"\"\"\n try:\n return obj.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def period_name(obj):\n \"\"\" Return the name of the related period \"\"\"\n try:\n return obj.period.name\n except AttributeError:\n return None\n\n @staticmethod\n def analysis_unit_name(obj):\n \"\"\" Return the name of the related analysis_unit \"\"\"\n try:\n return obj.analysis_unit.name\n except AttributeError:\n return None\n\n @staticmethod\n def dataset_name(obj):\n \"\"\" Return the name of the related dataset \"\"\"\n try:\n return obj.dataset.name\n except AttributeError:\n return None\n\n @staticmethod\n def dataset_study_name(obj):\n \"\"\" Return the name of the related dataset.study \"\"\"\n try:\n return obj.dataset.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def instrument_name(obj):\n \"\"\" Return the name of the related instrument \"\"\"\n try:\n return obj.instrument.name\n except AttributeError:\n return None\n\n @staticmethod\n def instrument_study_name(obj):\n \"\"\" Return the name of the related instrument.study \"\"\"\n try:\n return obj.instrument.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def basket_name(obj):\n \"\"\" Return the name of the related basket \"\"\"\n try:\n return obj.basket.name\n except AttributeError:\n return None\n\n @staticmethod\n def basket_study_name(obj):\n \"\"\" Return the name of the related basket.study \"\"\"\n try:\n return obj.basket.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def user_name(obj):\n \"\"\" Return the name of the related basket.user \"\"\"\n try:\n return obj.basket.user.username\n except AttributeError:\n return None\n", "path": "ddionrails/base/mixins.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\" Model definitions for ddionrails.studies app \"\"\"\n\nimport os\nfrom typing import List, Optional\n\nfrom django.conf import settings\nfrom django.contrib.postgres.fields import ArrayField, JSONField\nfrom django.contrib.postgres.fields.jsonb import JSONField as JSONBField\nfrom django.db import models\nfrom django.urls import reverse\nfrom model_utils.models import TimeStampedModel\n\nfrom ddionrails.base.mixins 
import ImportPathMixin, ModelMixin\n\n\nclass TopicList(models.Model):\n\n # attributes\n topiclist = JSONBField(\n default=list,\n null=True,\n blank=True,\n help_text=\"Topics of the related study (JSON)\",\n )\n\n # relations\n study = models.OneToOneField(\n \"Study\",\n blank=True,\n null=True,\n related_name=\"topiclist\",\n on_delete=models.CASCADE,\n help_text=\"OneToOneField to studies.Study\",\n )\n\n\nclass Study(ImportPathMixin, ModelMixin, TimeStampedModel):\n \"\"\"\n Stores a single study,\n related to :model:`data.Dataset`, :model:`instruments.Instrument`,\n :model:`concepts.Period` and :model:`workspace.Basket`.\n \"\"\"\n\n # attributes\n name = models.CharField(\n max_length=255, unique=True, db_index=True, help_text=\"Name of the study\"\n )\n label = models.CharField(\n max_length=255,\n blank=True,\n verbose_name=\"Label (English)\",\n help_text=\"Label of the study (English)\",\n )\n label_de = models.CharField(\n max_length=255,\n blank=True,\n null=True,\n verbose_name=\"Label (German)\",\n help_text=\"Label of the study (German)\",\n )\n description = models.TextField(\n blank=True, help_text=\"Description of the study (Markdown)\"\n )\n repo = models.CharField(\n max_length=255,\n blank=True,\n help_text=\"Reference to the Git repository without definition of the protocol (e.g. https)\",\n )\n current_commit = models.CharField(\n max_length=255,\n blank=True,\n help_text=\"Commit hash of the last metadata import. This field is automatically filled by DDI on Rails\",\n )\n config = JSONField(\n default=dict, blank=True, null=True, help_text=\"Configuration of the study (JSON)\"\n )\n\n topic_languages = ArrayField(\n models.CharField(max_length=200),\n blank=True,\n default=list,\n help_text=\"Topic languages of the study (Array)\",\n )\n\n class Meta: # pylint: disable=too-few-public-methods\n \"\"\" Django's metadata options \"\"\"\n\n verbose_name_plural = \"Studies\"\n\n class DOR: # pylint: disable=too-few-public-methods\n \"\"\" ddionrails' metadata options \"\"\"\n\n io_fields = [\"name\", \"label\", \"description\"]\n id_fields = [\"name\"]\n\n def __str__(self) -> str:\n \"\"\" Returns a string representation using the \"name\" field \"\"\"\n return f\"/{self.name}\"\n\n def get_absolute_url(self) -> str:\n \"\"\" Returns a canonical URL for the model using the \"name\" field \"\"\"\n return reverse(\"study_detail\", kwargs={\"study_name\": self.name})\n\n def repo_url(self) -> str:\n if settings.GIT_PROTOCOL == \"https\":\n return f\"https://{self.repo}.git\"\n elif settings.GIT_PROTOCOL == \"ssh\":\n return f\"git@{self.repo}.git\"\n else:\n raise Exception(\"Specify a protocol for Git in your settings.\")\n\n def set_topiclist(self, body: List) -> None:\n _topiclist, _ = TopicList.objects.get_or_create(study=self)\n _topiclist.topiclist = body\n _topiclist.save()\n\n def has_topics(self) -> bool:\n \"\"\" Returns True if the study has topics False otherwise (evaluates the length of self.topic_languages) \"\"\"\n return len(self.topic_languages) > 0\n\n def get_topiclist(self, language: str = \"en\") -> Optional[List]:\n \"\"\" Returns the list of topics for a given language or None \"\"\"\n try:\n for topiclist in self.topiclist.topiclist:\n if topiclist.get(\"language\", \"\") == language:\n return topiclist.get(\"topics\")\n except TopicList.DoesNotExist:\n return None\n\n\ndef context(request):\n return dict(all_studies=Study.objects.all().only(\"name\", \"label\", \"description\"))\n", "path": "ddionrails/studies/models.py"}, {"content": "# -*- 
coding: utf-8 -*-\n\n\"\"\" Model definitions for ddionrails.base app \"\"\"\n\nfrom __future__ import annotations\n\nfrom django.conf import settings\nfrom django.db import models\n\nfrom .mixins import ImportPathMixin\n\n\nclass System(ImportPathMixin, models.Model):\n \"\"\" Stores a single system instance \"\"\"\n\n name = settings.SYSTEM_NAME\n current_commit = models.CharField(max_length=255, blank=True)\n\n @staticmethod\n def repo_url() -> str:\n \"\"\" Returns the system's repo url from the settings \"\"\"\n return settings.SYSTEM_REPO_URL\n\n @classmethod\n def get(cls) -> System:\n \"\"\" Returns a single system instance \"\"\"\n if cls.objects.count() == 0:\n system = System()\n system.save()\n else:\n system = System.objects.first()\n return system\n", "path": "ddionrails/base/models.py"}]}
| 3,803 | 715 |
gh_patches_debug_20039
|
rasdani/github-patches
|
git_diff
|
googleapis__google-auth-library-python-481
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Invalid type for expiry when using GCE metadata server
#### Environment details
- OS: Linux
- Python version: 3.8.2
- pip version: 20.0.2
- `google-auth` version: 1.12.0
#### Steps to reproduce
1. Create an ID token using the GCE metadata server:
```python
from google.auth import compute_engine
import google.auth.transport.requests
request = google.auth.transport.requests.Request()
target_audience = 'https://example.com'
creds = compute_engine.IDTokenCredentials(request, target_audience, use_metadata_identity_endpoint=True)
```
2. Try to use the credentials to authorize a request:
```python
from google.auth.transport.requests import AuthorizedSession
session = AuthorizedSession(credentials)
response = session.get('https://example.com')
```
This results in an error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/site-packages/google/auth/credentials.py", line 66, in expired
skewed_expiry = self.expiry - _helpers.CLOCK_SKEW
TypeError: unsupported operand type(s) for -: 'int' and 'datetime.timedelta'
```
I think the issue is on [this line](https://github.com/googleapis/google-auth-library-python/blob/master/google/auth/compute_engine/credentials.py#L294). `payload["exp"]` should (apparently) be getting parsed into a `datetime.timedelta`
--- END ISSUE ---
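Before the files, one clarification on the type mismatch in the traceback: `self.expiry - _helpers.CLOCK_SKEW` subtracts a `datetime.timedelta`, so the stored expiry evidently has to be a `datetime.datetime` rather than the raw integer `exp` claim (epoch seconds) that comes out of the decoded JWT. A minimal illustration of the conversion involved, using a made-up payload value purely for demonstration:

```python
import datetime

# Hypothetical decoded ID-token claims; "exp" is a Unix timestamp in seconds (an int).
payload = {"exp": 1700000000}

# int - timedelta raises TypeError (the error shown in the traceback above);
# converting to a datetime first makes the clock-skew arithmetic valid.
expiry = datetime.datetime.fromtimestamp(payload["exp"])
print(expiry - datetime.timedelta(seconds=10))
```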
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `google/auth/compute_engine/credentials.py`
Content:
```
1 # Copyright 2016 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Google Compute Engine credentials.
16
17 This module provides authentication for application running on Google Compute
18 Engine using the Compute Engine metadata server.
19
20 """
21
22 import datetime
23
24 import six
25
26 from google.auth import _helpers
27 from google.auth import credentials
28 from google.auth import exceptions
29 from google.auth import iam
30 from google.auth import jwt
31 from google.auth.compute_engine import _metadata
32 from google.oauth2 import _client
33
34
35 class Credentials(credentials.ReadOnlyScoped, credentials.Credentials):
36 """Compute Engine Credentials.
37
38 These credentials use the Google Compute Engine metadata server to obtain
39 OAuth 2.0 access tokens associated with the instance's service account.
40
41 For more information about Compute Engine authentication, including how
42 to configure scopes, see the `Compute Engine authentication
43 documentation`_.
44
45 .. note:: Compute Engine instances can be created with scopes and therefore
46 these credentials are considered to be 'scoped'. However, you can
47 not use :meth:`~google.auth.credentials.ScopedCredentials.with_scopes`
48 because it is not possible to change the scopes that the instance
49 has. Also note that
50 :meth:`~google.auth.credentials.ScopedCredentials.has_scopes` will not
51 work until the credentials have been refreshed.
52
53 .. _Compute Engine authentication documentation:
54 https://cloud.google.com/compute/docs/authentication#using
55 """
56
57 def __init__(self, service_account_email="default"):
58 """
59 Args:
60 service_account_email (str): The service account email to use, or
61 'default'. A Compute Engine instance may have multiple service
62 accounts.
63 """
64 super(Credentials, self).__init__()
65 self._service_account_email = service_account_email
66
67 def _retrieve_info(self, request):
68 """Retrieve information about the service account.
69
70 Updates the scopes and retrieves the full service account email.
71
72 Args:
73 request (google.auth.transport.Request): The object used to make
74 HTTP requests.
75 """
76 info = _metadata.get_service_account_info(
77 request, service_account=self._service_account_email
78 )
79
80 self._service_account_email = info["email"]
81 self._scopes = info["scopes"]
82
83 def refresh(self, request):
84 """Refresh the access token and scopes.
85
86 Args:
87 request (google.auth.transport.Request): The object used to make
88 HTTP requests.
89
90 Raises:
91 google.auth.exceptions.RefreshError: If the Compute Engine metadata
92 service can't be reached if if the instance has not
93 credentials.
94 """
95 try:
96 self._retrieve_info(request)
97 self.token, self.expiry = _metadata.get_service_account_token(
98 request, service_account=self._service_account_email
99 )
100 except exceptions.TransportError as caught_exc:
101 new_exc = exceptions.RefreshError(caught_exc)
102 six.raise_from(new_exc, caught_exc)
103
104 @property
105 def service_account_email(self):
106 """The service account email.
107
108 .. note:: This is not guaranteed to be set until :meth:`refresh` has been
109 called.
110 """
111 return self._service_account_email
112
113 @property
114 def requires_scopes(self):
115 """False: Compute Engine credentials can not be scoped."""
116 return False
117
118
119 _DEFAULT_TOKEN_LIFETIME_SECS = 3600 # 1 hour in seconds
120 _DEFAULT_TOKEN_URI = "https://www.googleapis.com/oauth2/v4/token"
121
122
123 class IDTokenCredentials(credentials.Credentials, credentials.Signing):
124 """Open ID Connect ID Token-based service account credentials.
125
126 These credentials relies on the default service account of a GCE instance.
127
128 ID token can be requested from `GCE metadata server identity endpoint`_, IAM
129 token endpoint or other token endpoints you specify. If metadata server
130 identity endpoint is not used, the GCE instance must have been started with
131 a service account that has access to the IAM Cloud API.
132
133 .. _GCE metadata server identity endpoint:
134 https://cloud.google.com/compute/docs/instances/verifying-instance-identity
135 """
136
137 def __init__(
138 self,
139 request,
140 target_audience,
141 token_uri=None,
142 additional_claims=None,
143 service_account_email=None,
144 signer=None,
145 use_metadata_identity_endpoint=False,
146 ):
147 """
148 Args:
149 request (google.auth.transport.Request): The object used to make
150 HTTP requests.
151 target_audience (str): The intended audience for these credentials,
152 used when requesting the ID Token. The ID Token's ``aud`` claim
153 will be set to this string.
154 token_uri (str): The OAuth 2.0 Token URI.
155 additional_claims (Mapping[str, str]): Any additional claims for
156 the JWT assertion used in the authorization grant.
157 service_account_email (str): Optional explicit service account to
158 use to sign JWT tokens.
159 By default, this is the default GCE service account.
160 signer (google.auth.crypt.Signer): The signer used to sign JWTs.
161 In case the signer is specified, the request argument will be
162 ignored.
163 use_metadata_identity_endpoint (bool): Whether to use GCE metadata
164 identity endpoint. For backward compatibility the default value
165 is False. If set to True, ``token_uri``, ``additional_claims``,
166 ``service_account_email``, ``signer`` argument should not be set;
167 otherwise ValueError will be raised.
168
169 Raises:
170 ValueError:
171 If ``use_metadata_identity_endpoint`` is set to True, and one of
172 ``token_uri``, ``additional_claims``, ``service_account_email``,
173 ``signer`` arguments is set.
174 """
175 super(IDTokenCredentials, self).__init__()
176
177 self._use_metadata_identity_endpoint = use_metadata_identity_endpoint
178 self._target_audience = target_audience
179
180 if use_metadata_identity_endpoint:
181 if token_uri or additional_claims or service_account_email or signer:
182 raise ValueError(
183 "If use_metadata_identity_endpoint is set, token_uri, "
184 "additional_claims, service_account_email, signer arguments"
185 " must not be set"
186 )
187 self._token_uri = None
188 self._additional_claims = None
189 self._signer = None
190
191 if service_account_email is None:
192 sa_info = _metadata.get_service_account_info(request)
193 self._service_account_email = sa_info["email"]
194 else:
195 self._service_account_email = service_account_email
196
197 if not use_metadata_identity_endpoint:
198 if signer is None:
199 signer = iam.Signer(
200 request=request,
201 credentials=Credentials(),
202 service_account_email=self._service_account_email,
203 )
204 self._signer = signer
205 self._token_uri = token_uri or _DEFAULT_TOKEN_URI
206
207 if additional_claims is not None:
208 self._additional_claims = additional_claims
209 else:
210 self._additional_claims = {}
211
212 def with_target_audience(self, target_audience):
213 """Create a copy of these credentials with the specified target
214 audience.
215 Args:
216 target_audience (str): The intended audience for these credentials,
217 used when requesting the ID Token.
218 Returns:
219 google.auth.service_account.IDTokenCredentials: A new credentials
220 instance.
221 """
222 # since the signer is already instantiated,
223 # the request is not needed
224 if self._use_metadata_identity_endpoint:
225 return self.__class__(
226 None,
227 target_audience=target_audience,
228 use_metadata_identity_endpoint=True,
229 )
230 else:
231 return self.__class__(
232 None,
233 service_account_email=self._service_account_email,
234 token_uri=self._token_uri,
235 target_audience=target_audience,
236 additional_claims=self._additional_claims.copy(),
237 signer=self.signer,
238 use_metadata_identity_endpoint=False,
239 )
240
241 def _make_authorization_grant_assertion(self):
242 """Create the OAuth 2.0 assertion.
243 This assertion is used during the OAuth 2.0 grant to acquire an
244 ID token.
245 Returns:
246 bytes: The authorization grant assertion.
247 """
248 now = _helpers.utcnow()
249 lifetime = datetime.timedelta(seconds=_DEFAULT_TOKEN_LIFETIME_SECS)
250 expiry = now + lifetime
251
252 payload = {
253 "iat": _helpers.datetime_to_secs(now),
254 "exp": _helpers.datetime_to_secs(expiry),
255 # The issuer must be the service account email.
256 "iss": self.service_account_email,
257 # The audience must be the auth token endpoint's URI
258 "aud": self._token_uri,
259 # The target audience specifies which service the ID token is
260 # intended for.
261 "target_audience": self._target_audience,
262 }
263
264 payload.update(self._additional_claims)
265
266 token = jwt.encode(self._signer, payload)
267
268 return token
269
270 def _call_metadata_identity_endpoint(self, request):
271 """Request ID token from metadata identity endpoint.
272
273 Args:
274 request (google.auth.transport.Request): The object used to make
275 HTTP requests.
276
277 Raises:
278 google.auth.exceptions.RefreshError: If the Compute Engine metadata
279 service can't be reached or if the instance has no credentials.
280 ValueError: If extracting expiry from the obtained ID token fails.
281 """
282 try:
283 id_token = _metadata.get(
284 request,
285 "instance/service-accounts/default/identity?audience={}&format=full".format(
286 self._target_audience
287 ),
288 )
289 except exceptions.TransportError as caught_exc:
290 new_exc = exceptions.RefreshError(caught_exc)
291 six.raise_from(new_exc, caught_exc)
292
293 _, payload, _, _ = jwt._unverified_decode(id_token)
294 return id_token, payload["exp"]
295
296 def refresh(self, request):
297 """Refreshes the ID token.
298
299 Args:
300 request (google.auth.transport.Request): The object used to make
301 HTTP requests.
302
303 Raises:
304 google.auth.exceptions.RefreshError: If the credentials could
305 not be refreshed.
306 ValueError: If extracting expiry from the obtained ID token fails.
307 """
308 if self._use_metadata_identity_endpoint:
309 self.token, self.expiry = self._call_metadata_identity_endpoint(request)
310 else:
311 assertion = self._make_authorization_grant_assertion()
312 access_token, expiry, _ = _client.id_token_jwt_grant(
313 request, self._token_uri, assertion
314 )
315 self.token = access_token
316 self.expiry = expiry
317
318 @property
319 @_helpers.copy_docstring(credentials.Signing)
320 def signer(self):
321 return self._signer
322
323 def sign_bytes(self, message):
324 """Signs the given message.
325
326 Args:
327 message (bytes): The message to sign.
328
329 Returns:
330 bytes: The message's cryptographic signature.
331
332 Raises:
333 ValueError:
334 Signer is not available if metadata identity endpoint is used.
335 """
336 if self._use_metadata_identity_endpoint:
337 raise ValueError(
338 "Signer is not available if metadata identity endpoint is used"
339 )
340 return self._signer.sign(message)
341
342 @property
343 def service_account_email(self):
344 """The service account email."""
345 return self._service_account_email
346
347 @property
348 def signer_email(self):
349 return self._service_account_email
350
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
```diff
diff --git a/google/auth/compute_engine/credentials.py b/google/auth/compute_engine/credentials.py
--- a/google/auth/compute_engine/credentials.py
+++ b/google/auth/compute_engine/credentials.py
@@ -274,6 +274,9 @@
request (google.auth.transport.Request): The object used to make
HTTP requests.
+ Returns:
+ Tuple[str, datetime.datetime]: The ID token and the expiry of the ID token.
+
Raises:
google.auth.exceptions.RefreshError: If the Compute Engine metadata
service can't be reached or if the instance has no credentials.
@@ -291,7 +294,7 @@
six.raise_from(new_exc, caught_exc)
_, payload, _, _ = jwt._unverified_decode(id_token)
- return id_token, payload["exp"]
+ return id_token, datetime.datetime.fromtimestamp(payload["exp"])
def refresh(self, request):
"""Refreshes the ID token.
```
|
{"golden_diff": "diff --git a/google/auth/compute_engine/credentials.py b/google/auth/compute_engine/credentials.py\n--- a/google/auth/compute_engine/credentials.py\n+++ b/google/auth/compute_engine/credentials.py\n@@ -274,6 +274,9 @@\n request (google.auth.transport.Request): The object used to make\n HTTP requests.\n \n+ Returns:\n+ Tuple[str, datetime.datetime]: The ID token and the expiry of the ID token.\n+\n Raises:\n google.auth.exceptions.RefreshError: If the Compute Engine metadata\n service can't be reached or if the instance has no credentials.\n@@ -291,7 +294,7 @@\n six.raise_from(new_exc, caught_exc)\n \n _, payload, _, _ = jwt._unverified_decode(id_token)\n- return id_token, payload[\"exp\"]\n+ return id_token, datetime.datetime.fromtimestamp(payload[\"exp\"])\n \n def refresh(self, request):\n \"\"\"Refreshes the ID token.\n", "issue": "Invalid type for expiry when using GCE metadata server\n#### Environment details\r\n\r\n - OS: Linux\r\n - Python version: 3.8.2\r\n - pip version: 20.0.2\r\n - `google-auth` version: 1.12.0\r\n\r\n#### Steps to reproduce\r\n\r\n 1. Create an ID token using the GCE metadata server:\r\n ```python\r\n from google.auth import compute_engine\r\n import google.auth.transport.requests\r\n request = google.auth.transport.requests.Request()\r\n target_audience = 'https://example.com'\r\n creds = compute_engine.IDTokenCredentials(request, target_audience, use_metadata_identity_endpoint=True)\r\n ```\r\n 2. Try to use the credentials to authorize a request:\r\n ```python\r\n from google.auth.transport.requests import AuthorizedSession\r\n session = AuthorizedSession(credentials)\r\n response = session.get('https://example.com')\r\n ```\r\n\r\nThis results in an error:\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.8/site-packages/google/auth/credentials.py\", line 66, in expired\r\n skewed_expiry = self.expiry - _helpers.CLOCK_SKEW\r\nTypeError: unsupported operand type(s) for -: 'int' and 'datetime.timedelta'\r\n```\r\n\r\nI think the issue is on [this line](https://github.com/googleapis/google-auth-library-python/blob/master/google/auth/compute_engine/credentials.py#L294). 
`payload[\"exp\"]` should (apparently) be getting parsed into a `datetime.timedelta`\n", "before_files": [{"content": "# Copyright 2016 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Google Compute Engine credentials.\n\nThis module provides authentication for application running on Google Compute\nEngine using the Compute Engine metadata server.\n\n\"\"\"\n\nimport datetime\n\nimport six\n\nfrom google.auth import _helpers\nfrom google.auth import credentials\nfrom google.auth import exceptions\nfrom google.auth import iam\nfrom google.auth import jwt\nfrom google.auth.compute_engine import _metadata\nfrom google.oauth2 import _client\n\n\nclass Credentials(credentials.ReadOnlyScoped, credentials.Credentials):\n \"\"\"Compute Engine Credentials.\n\n These credentials use the Google Compute Engine metadata server to obtain\n OAuth 2.0 access tokens associated with the instance's service account.\n\n For more information about Compute Engine authentication, including how\n to configure scopes, see the `Compute Engine authentication\n documentation`_.\n\n .. note:: Compute Engine instances can be created with scopes and therefore\n these credentials are considered to be 'scoped'. However, you can\n not use :meth:`~google.auth.credentials.ScopedCredentials.with_scopes`\n because it is not possible to change the scopes that the instance\n has. Also note that\n :meth:`~google.auth.credentials.ScopedCredentials.has_scopes` will not\n work until the credentials have been refreshed.\n\n .. _Compute Engine authentication documentation:\n https://cloud.google.com/compute/docs/authentication#using\n \"\"\"\n\n def __init__(self, service_account_email=\"default\"):\n \"\"\"\n Args:\n service_account_email (str): The service account email to use, or\n 'default'. 
A Compute Engine instance may have multiple service\n accounts.\n \"\"\"\n super(Credentials, self).__init__()\n self._service_account_email = service_account_email\n\n def _retrieve_info(self, request):\n \"\"\"Retrieve information about the service account.\n\n Updates the scopes and retrieves the full service account email.\n\n Args:\n request (google.auth.transport.Request): The object used to make\n HTTP requests.\n \"\"\"\n info = _metadata.get_service_account_info(\n request, service_account=self._service_account_email\n )\n\n self._service_account_email = info[\"email\"]\n self._scopes = info[\"scopes\"]\n\n def refresh(self, request):\n \"\"\"Refresh the access token and scopes.\n\n Args:\n request (google.auth.transport.Request): The object used to make\n HTTP requests.\n\n Raises:\n google.auth.exceptions.RefreshError: If the Compute Engine metadata\n service can't be reached if if the instance has not\n credentials.\n \"\"\"\n try:\n self._retrieve_info(request)\n self.token, self.expiry = _metadata.get_service_account_token(\n request, service_account=self._service_account_email\n )\n except exceptions.TransportError as caught_exc:\n new_exc = exceptions.RefreshError(caught_exc)\n six.raise_from(new_exc, caught_exc)\n\n @property\n def service_account_email(self):\n \"\"\"The service account email.\n\n .. note:: This is not guaranteed to be set until :meth:`refresh` has been\n called.\n \"\"\"\n return self._service_account_email\n\n @property\n def requires_scopes(self):\n \"\"\"False: Compute Engine credentials can not be scoped.\"\"\"\n return False\n\n\n_DEFAULT_TOKEN_LIFETIME_SECS = 3600 # 1 hour in seconds\n_DEFAULT_TOKEN_URI = \"https://www.googleapis.com/oauth2/v4/token\"\n\n\nclass IDTokenCredentials(credentials.Credentials, credentials.Signing):\n \"\"\"Open ID Connect ID Token-based service account credentials.\n\n These credentials relies on the default service account of a GCE instance.\n\n ID token can be requested from `GCE metadata server identity endpoint`_, IAM\n token endpoint or other token endpoints you specify. If metadata server\n identity endpoint is not used, the GCE instance must have been started with\n a service account that has access to the IAM Cloud API.\n\n .. _GCE metadata server identity endpoint:\n https://cloud.google.com/compute/docs/instances/verifying-instance-identity\n \"\"\"\n\n def __init__(\n self,\n request,\n target_audience,\n token_uri=None,\n additional_claims=None,\n service_account_email=None,\n signer=None,\n use_metadata_identity_endpoint=False,\n ):\n \"\"\"\n Args:\n request (google.auth.transport.Request): The object used to make\n HTTP requests.\n target_audience (str): The intended audience for these credentials,\n used when requesting the ID Token. The ID Token's ``aud`` claim\n will be set to this string.\n token_uri (str): The OAuth 2.0 Token URI.\n additional_claims (Mapping[str, str]): Any additional claims for\n the JWT assertion used in the authorization grant.\n service_account_email (str): Optional explicit service account to\n use to sign JWT tokens.\n By default, this is the default GCE service account.\n signer (google.auth.crypt.Signer): The signer used to sign JWTs.\n In case the signer is specified, the request argument will be\n ignored.\n use_metadata_identity_endpoint (bool): Whether to use GCE metadata\n identity endpoint. For backward compatibility the default value\n is False. 
If set to True, ``token_uri``, ``additional_claims``,\n ``service_account_email``, ``signer`` argument should not be set;\n otherwise ValueError will be raised.\n\n Raises:\n ValueError:\n If ``use_metadata_identity_endpoint`` is set to True, and one of\n ``token_uri``, ``additional_claims``, ``service_account_email``,\n ``signer`` arguments is set.\n \"\"\"\n super(IDTokenCredentials, self).__init__()\n\n self._use_metadata_identity_endpoint = use_metadata_identity_endpoint\n self._target_audience = target_audience\n\n if use_metadata_identity_endpoint:\n if token_uri or additional_claims or service_account_email or signer:\n raise ValueError(\n \"If use_metadata_identity_endpoint is set, token_uri, \"\n \"additional_claims, service_account_email, signer arguments\"\n \" must not be set\"\n )\n self._token_uri = None\n self._additional_claims = None\n self._signer = None\n\n if service_account_email is None:\n sa_info = _metadata.get_service_account_info(request)\n self._service_account_email = sa_info[\"email\"]\n else:\n self._service_account_email = service_account_email\n\n if not use_metadata_identity_endpoint:\n if signer is None:\n signer = iam.Signer(\n request=request,\n credentials=Credentials(),\n service_account_email=self._service_account_email,\n )\n self._signer = signer\n self._token_uri = token_uri or _DEFAULT_TOKEN_URI\n\n if additional_claims is not None:\n self._additional_claims = additional_claims\n else:\n self._additional_claims = {}\n\n def with_target_audience(self, target_audience):\n \"\"\"Create a copy of these credentials with the specified target\n audience.\n Args:\n target_audience (str): The intended audience for these credentials,\n used when requesting the ID Token.\n Returns:\n google.auth.service_account.IDTokenCredentials: A new credentials\n instance.\n \"\"\"\n # since the signer is already instantiated,\n # the request is not needed\n if self._use_metadata_identity_endpoint:\n return self.__class__(\n None,\n target_audience=target_audience,\n use_metadata_identity_endpoint=True,\n )\n else:\n return self.__class__(\n None,\n service_account_email=self._service_account_email,\n token_uri=self._token_uri,\n target_audience=target_audience,\n additional_claims=self._additional_claims.copy(),\n signer=self.signer,\n use_metadata_identity_endpoint=False,\n )\n\n def _make_authorization_grant_assertion(self):\n \"\"\"Create the OAuth 2.0 assertion.\n This assertion is used during the OAuth 2.0 grant to acquire an\n ID token.\n Returns:\n bytes: The authorization grant assertion.\n \"\"\"\n now = _helpers.utcnow()\n lifetime = datetime.timedelta(seconds=_DEFAULT_TOKEN_LIFETIME_SECS)\n expiry = now + lifetime\n\n payload = {\n \"iat\": _helpers.datetime_to_secs(now),\n \"exp\": _helpers.datetime_to_secs(expiry),\n # The issuer must be the service account email.\n \"iss\": self.service_account_email,\n # The audience must be the auth token endpoint's URI\n \"aud\": self._token_uri,\n # The target audience specifies which service the ID token is\n # intended for.\n \"target_audience\": self._target_audience,\n }\n\n payload.update(self._additional_claims)\n\n token = jwt.encode(self._signer, payload)\n\n return token\n\n def _call_metadata_identity_endpoint(self, request):\n \"\"\"Request ID token from metadata identity endpoint.\n\n Args:\n request (google.auth.transport.Request): The object used to make\n HTTP requests.\n\n Raises:\n google.auth.exceptions.RefreshError: If the Compute Engine metadata\n service can't be reached or if the instance has no 
credentials.\n ValueError: If extracting expiry from the obtained ID token fails.\n \"\"\"\n try:\n id_token = _metadata.get(\n request,\n \"instance/service-accounts/default/identity?audience={}&format=full\".format(\n self._target_audience\n ),\n )\n except exceptions.TransportError as caught_exc:\n new_exc = exceptions.RefreshError(caught_exc)\n six.raise_from(new_exc, caught_exc)\n\n _, payload, _, _ = jwt._unverified_decode(id_token)\n return id_token, payload[\"exp\"]\n\n def refresh(self, request):\n \"\"\"Refreshes the ID token.\n\n Args:\n request (google.auth.transport.Request): The object used to make\n HTTP requests.\n\n Raises:\n google.auth.exceptions.RefreshError: If the credentials could\n not be refreshed.\n ValueError: If extracting expiry from the obtained ID token fails.\n \"\"\"\n if self._use_metadata_identity_endpoint:\n self.token, self.expiry = self._call_metadata_identity_endpoint(request)\n else:\n assertion = self._make_authorization_grant_assertion()\n access_token, expiry, _ = _client.id_token_jwt_grant(\n request, self._token_uri, assertion\n )\n self.token = access_token\n self.expiry = expiry\n\n @property\n @_helpers.copy_docstring(credentials.Signing)\n def signer(self):\n return self._signer\n\n def sign_bytes(self, message):\n \"\"\"Signs the given message.\n\n Args:\n message (bytes): The message to sign.\n\n Returns:\n bytes: The message's cryptographic signature.\n\n Raises:\n ValueError:\n Signer is not available if metadata identity endpoint is used.\n \"\"\"\n if self._use_metadata_identity_endpoint:\n raise ValueError(\n \"Signer is not available if metadata identity endpoint is used\"\n )\n return self._signer.sign(message)\n\n @property\n def service_account_email(self):\n \"\"\"The service account email.\"\"\"\n return self._service_account_email\n\n @property\n def signer_email(self):\n return self._service_account_email\n", "path": "google/auth/compute_engine/credentials.py"}], "after_files": [{"content": "# Copyright 2016 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Google Compute Engine credentials.\n\nThis module provides authentication for application running on Google Compute\nEngine using the Compute Engine metadata server.\n\n\"\"\"\n\nimport datetime\n\nimport six\n\nfrom google.auth import _helpers\nfrom google.auth import credentials\nfrom google.auth import exceptions\nfrom google.auth import iam\nfrom google.auth import jwt\nfrom google.auth.compute_engine import _metadata\nfrom google.oauth2 import _client\n\n\nclass Credentials(credentials.ReadOnlyScoped, credentials.Credentials):\n \"\"\"Compute Engine Credentials.\n\n These credentials use the Google Compute Engine metadata server to obtain\n OAuth 2.0 access tokens associated with the instance's service account.\n\n For more information about Compute Engine authentication, including how\n to configure scopes, see the `Compute Engine authentication\n documentation`_.\n\n .. 
note:: Compute Engine instances can be created with scopes and therefore\n these credentials are considered to be 'scoped'. However, you can\n not use :meth:`~google.auth.credentials.ScopedCredentials.with_scopes`\n because it is not possible to change the scopes that the instance\n has. Also note that\n :meth:`~google.auth.credentials.ScopedCredentials.has_scopes` will not\n work until the credentials have been refreshed.\n\n .. _Compute Engine authentication documentation:\n https://cloud.google.com/compute/docs/authentication#using\n \"\"\"\n\n def __init__(self, service_account_email=\"default\"):\n \"\"\"\n Args:\n service_account_email (str): The service account email to use, or\n 'default'. A Compute Engine instance may have multiple service\n accounts.\n \"\"\"\n super(Credentials, self).__init__()\n self._service_account_email = service_account_email\n\n def _retrieve_info(self, request):\n \"\"\"Retrieve information about the service account.\n\n Updates the scopes and retrieves the full service account email.\n\n Args:\n request (google.auth.transport.Request): The object used to make\n HTTP requests.\n \"\"\"\n info = _metadata.get_service_account_info(\n request, service_account=self._service_account_email\n )\n\n self._service_account_email = info[\"email\"]\n self._scopes = info[\"scopes\"]\n\n def refresh(self, request):\n \"\"\"Refresh the access token and scopes.\n\n Args:\n request (google.auth.transport.Request): The object used to make\n HTTP requests.\n\n Raises:\n google.auth.exceptions.RefreshError: If the Compute Engine metadata\n service can't be reached if if the instance has not\n credentials.\n \"\"\"\n try:\n self._retrieve_info(request)\n self.token, self.expiry = _metadata.get_service_account_token(\n request, service_account=self._service_account_email\n )\n except exceptions.TransportError as caught_exc:\n new_exc = exceptions.RefreshError(caught_exc)\n six.raise_from(new_exc, caught_exc)\n\n @property\n def service_account_email(self):\n \"\"\"The service account email.\n\n .. note:: This is not guaranteed to be set until :meth:`refresh` has been\n called.\n \"\"\"\n return self._service_account_email\n\n @property\n def requires_scopes(self):\n \"\"\"False: Compute Engine credentials can not be scoped.\"\"\"\n return False\n\n\n_DEFAULT_TOKEN_LIFETIME_SECS = 3600 # 1 hour in seconds\n_DEFAULT_TOKEN_URI = \"https://www.googleapis.com/oauth2/v4/token\"\n\n\nclass IDTokenCredentials(credentials.Credentials, credentials.Signing):\n \"\"\"Open ID Connect ID Token-based service account credentials.\n\n These credentials relies on the default service account of a GCE instance.\n\n ID token can be requested from `GCE metadata server identity endpoint`_, IAM\n token endpoint or other token endpoints you specify. If metadata server\n identity endpoint is not used, the GCE instance must have been started with\n a service account that has access to the IAM Cloud API.\n\n .. _GCE metadata server identity endpoint:\n https://cloud.google.com/compute/docs/instances/verifying-instance-identity\n \"\"\"\n\n def __init__(\n self,\n request,\n target_audience,\n token_uri=None,\n additional_claims=None,\n service_account_email=None,\n signer=None,\n use_metadata_identity_endpoint=False,\n ):\n \"\"\"\n Args:\n request (google.auth.transport.Request): The object used to make\n HTTP requests.\n target_audience (str): The intended audience for these credentials,\n used when requesting the ID Token. 
The ID Token's ``aud`` claim\n will be set to this string.\n token_uri (str): The OAuth 2.0 Token URI.\n additional_claims (Mapping[str, str]): Any additional claims for\n the JWT assertion used in the authorization grant.\n service_account_email (str): Optional explicit service account to\n use to sign JWT tokens.\n By default, this is the default GCE service account.\n signer (google.auth.crypt.Signer): The signer used to sign JWTs.\n In case the signer is specified, the request argument will be\n ignored.\n use_metadata_identity_endpoint (bool): Whether to use GCE metadata\n identity endpoint. For backward compatibility the default value\n is False. If set to True, ``token_uri``, ``additional_claims``,\n ``service_account_email``, ``signer`` argument should not be set;\n otherwise ValueError will be raised.\n\n Raises:\n ValueError:\n If ``use_metadata_identity_endpoint`` is set to True, and one of\n ``token_uri``, ``additional_claims``, ``service_account_email``,\n ``signer`` arguments is set.\n \"\"\"\n super(IDTokenCredentials, self).__init__()\n\n self._use_metadata_identity_endpoint = use_metadata_identity_endpoint\n self._target_audience = target_audience\n\n if use_metadata_identity_endpoint:\n if token_uri or additional_claims or service_account_email or signer:\n raise ValueError(\n \"If use_metadata_identity_endpoint is set, token_uri, \"\n \"additional_claims, service_account_email, signer arguments\"\n \" must not be set\"\n )\n self._token_uri = None\n self._additional_claims = None\n self._signer = None\n\n if service_account_email is None:\n sa_info = _metadata.get_service_account_info(request)\n self._service_account_email = sa_info[\"email\"]\n else:\n self._service_account_email = service_account_email\n\n if not use_metadata_identity_endpoint:\n if signer is None:\n signer = iam.Signer(\n request=request,\n credentials=Credentials(),\n service_account_email=self._service_account_email,\n )\n self._signer = signer\n self._token_uri = token_uri or _DEFAULT_TOKEN_URI\n\n if additional_claims is not None:\n self._additional_claims = additional_claims\n else:\n self._additional_claims = {}\n\n def with_target_audience(self, target_audience):\n \"\"\"Create a copy of these credentials with the specified target\n audience.\n Args:\n target_audience (str): The intended audience for these credentials,\n used when requesting the ID Token.\n Returns:\n google.auth.service_account.IDTokenCredentials: A new credentials\n instance.\n \"\"\"\n # since the signer is already instantiated,\n # the request is not needed\n if self._use_metadata_identity_endpoint:\n return self.__class__(\n None,\n target_audience=target_audience,\n use_metadata_identity_endpoint=True,\n )\n else:\n return self.__class__(\n None,\n service_account_email=self._service_account_email,\n token_uri=self._token_uri,\n target_audience=target_audience,\n additional_claims=self._additional_claims.copy(),\n signer=self.signer,\n use_metadata_identity_endpoint=False,\n )\n\n def _make_authorization_grant_assertion(self):\n \"\"\"Create the OAuth 2.0 assertion.\n This assertion is used during the OAuth 2.0 grant to acquire an\n ID token.\n Returns:\n bytes: The authorization grant assertion.\n \"\"\"\n now = _helpers.utcnow()\n lifetime = datetime.timedelta(seconds=_DEFAULT_TOKEN_LIFETIME_SECS)\n expiry = now + lifetime\n\n payload = {\n \"iat\": _helpers.datetime_to_secs(now),\n \"exp\": _helpers.datetime_to_secs(expiry),\n # The issuer must be the service account email.\n \"iss\": self.service_account_email,\n # 
The audience must be the auth token endpoint's URI\n \"aud\": self._token_uri,\n # The target audience specifies which service the ID token is\n # intended for.\n \"target_audience\": self._target_audience,\n }\n\n payload.update(self._additional_claims)\n\n token = jwt.encode(self._signer, payload)\n\n return token\n\n def _call_metadata_identity_endpoint(self, request):\n \"\"\"Request ID token from metadata identity endpoint.\n\n Args:\n request (google.auth.transport.Request): The object used to make\n HTTP requests.\n\n Returns:\n Tuple[str, datetime.datetime]: The ID token and the expiry of the ID token.\n\n Raises:\n google.auth.exceptions.RefreshError: If the Compute Engine metadata\n service can't be reached or if the instance has no credentials.\n ValueError: If extracting expiry from the obtained ID token fails.\n \"\"\"\n try:\n id_token = _metadata.get(\n request,\n \"instance/service-accounts/default/identity?audience={}&format=full\".format(\n self._target_audience\n ),\n )\n except exceptions.TransportError as caught_exc:\n new_exc = exceptions.RefreshError(caught_exc)\n six.raise_from(new_exc, caught_exc)\n\n _, payload, _, _ = jwt._unverified_decode(id_token)\n return id_token, datetime.datetime.fromtimestamp(payload[\"exp\"])\n\n def refresh(self, request):\n \"\"\"Refreshes the ID token.\n\n Args:\n request (google.auth.transport.Request): The object used to make\n HTTP requests.\n\n Raises:\n google.auth.exceptions.RefreshError: If the credentials could\n not be refreshed.\n ValueError: If extracting expiry from the obtained ID token fails.\n \"\"\"\n if self._use_metadata_identity_endpoint:\n self.token, self.expiry = self._call_metadata_identity_endpoint(request)\n else:\n assertion = self._make_authorization_grant_assertion()\n access_token, expiry, _ = _client.id_token_jwt_grant(\n request, self._token_uri, assertion\n )\n self.token = access_token\n self.expiry = expiry\n\n @property\n @_helpers.copy_docstring(credentials.Signing)\n def signer(self):\n return self._signer\n\n def sign_bytes(self, message):\n \"\"\"Signs the given message.\n\n Args:\n message (bytes): The message to sign.\n\n Returns:\n bytes: The message's cryptographic signature.\n\n Raises:\n ValueError:\n Signer is not available if metadata identity endpoint is used.\n \"\"\"\n if self._use_metadata_identity_endpoint:\n raise ValueError(\n \"Signer is not available if metadata identity endpoint is used\"\n )\n return self._signer.sign(message)\n\n @property\n def service_account_email(self):\n \"\"\"The service account email.\"\"\"\n return self._service_account_email\n\n @property\n def signer_email(self):\n return self._service_account_email\n", "path": "google/auth/compute_engine/credentials.py"}]}
| 4,079 | 211 |
gh_patches_debug_20706
|
rasdani/github-patches
|
git_diff
|
archlinux__archinstall-668
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
use_mirrors bug
https://github.com/archlinux/archinstall/blob/ba725517fd290a60cd4e1ea570dbbf94a47ede05/archinstall/lib/mirrors.py#L116-L123
This code doesn't open the destination file in append mode. So, if we pass a dict of mirrors with multiple regions, the file will be rewritten `len(regions)` times and only the last region's entries will be preserved.
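A minimal sketch of the behaviour (the `regions` dict and the output path here are made up for illustration, not taken from the repository):

```python
# Hypothetical input: two regions, one mirror each.
regions = {
    'Sweden': ['https://se.mirror.example/archlinux/$repo/os/$arch'],
    'Germany': ['https://de.mirror.example/archlinux/$repo/os/$arch'],
}

# Same pattern as the current use_mirrors(): the file is re-opened in 'w'
# mode for every region, so each iteration truncates what the previous
# iteration wrote.
for region, mirrors in regions.items():
    with open('/tmp/mirrorlist', 'w') as mirrorlist:
        for mirror in mirrors:
            mirrorlist.write(f'## {region}\n')
            mirrorlist.write(f'Server = {mirror}\n')

# Only the block for the last region ('Germany' here) survives in the file.
```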
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `archinstall/lib/mirrors.py`
Content:
```
1 import urllib.error
2 import urllib.request
3 from typing import Union
4
5 from .general import *
6 from .output import log
7
8 def sort_mirrorlist(raw_data :bytes, sort_order=["https", "http"]) -> bytes:
9 """
10 This function can sort /etc/pacman.d/mirrorlist according to the
11 mirror's URL prefix. By default places HTTPS before HTTP but it also
12 preserves the country/rank-order.
13
14 This assumes /etc/pacman.d/mirrorlist looks like the following:
15
16 ## Comment
17 Server = url
18
19 or
20
21 ## Comment
22 #Server = url
23
24 But the Comments need to start with double-hashmarks to be distinguished
25 from server url definitions (commented or uncommented).
26 """
27 comments_and_whitespaces = b""
28
29 categories = {key: [] for key in sort_order+["Unknown"]}
30 for line in raw_data.split(b"\n"):
31 if line[0:2] in (b'##', b''):
32 comments_and_whitespaces += line + b'\n'
33 elif line[:6].lower() == b'server' or line[:7].lower() == b'#server':
34 opening, url = line.split(b'=', 1)
35 opening, url = opening.strip(), url.strip()
36 if (category := url.split(b'://',1)[0].decode('UTF-8')) in categories:
37 categories[category].append(comments_and_whitespaces)
38 categories[category].append(opening+b' = '+url+b'\n')
39 else:
40 categories["Unknown"].append(comments_and_whitespaces)
41 categories["Unknown"].append(opening+b' = '+url+b'\n')
42
43 comments_and_whitespaces = b""
44
45
46 new_raw_data = b''
47 for category in sort_order+["Unknown"]:
48 for line in categories[category]:
49 new_raw_data += line
50
51 return new_raw_data
52
53
54 def filter_mirrors_by_region(regions, destination='/etc/pacman.d/mirrorlist', sort_order=["https", "http"], *args, **kwargs) -> Union[bool, bytes]:
55 """
56 This function will change the active mirrors on the live medium by
57 filtering which regions are active based on `regions`.
58
59 :param regions: A series of country codes separated by `,`. For instance `SE,US` for sweden and United States.
60 :type regions: str
61 """
62 region_list = [f'country={region}' for region in regions.split(',')]
63 response = urllib.request.urlopen(urllib.request.Request(f"https://archlinux.org/mirrorlist/?{'&'.join(region_list)}&protocol=https&protocol=http&ip_version=4&ip_version=6&use_mirror_status=on'", headers={'User-Agent': 'ArchInstall'}))
64 new_list = response.read().replace(b"#Server", b"Server")
65
66 if sort_order:
67 new_list = sort_mirrorlist(new_list, sort_order=sort_order)
68
69 if destination:
70 with open(destination, "wb") as mirrorlist:
71 mirrorlist.write(new_list)
72
73 return True
74 else:
75 return new_list.decode('UTF-8')
76
77
78 def add_custom_mirrors(mirrors: list, *args, **kwargs):
79 """
80 This will append custom mirror definitions in pacman.conf
81
82 :param mirrors: A list of mirror data according to: `{'url': 'http://url.com', 'signcheck': 'Optional', 'signoptions': 'TrustAll', 'name': 'testmirror'}`
83 :type mirrors: dict
84 """
85 with open('/etc/pacman.conf', 'a') as pacman:
86 for mirror in mirrors:
87 pacman.write(f"[{mirror['name']}]\n")
88 pacman.write(f"SigLevel = {mirror['signcheck']} {mirror['signoptions']}\n")
89 pacman.write(f"Server = {mirror['url']}\n")
90
91 return True
92
93
94 def insert_mirrors(mirrors, *args, **kwargs):
95 """
96 This function will insert a given mirror-list at the top of `/etc/pacman.d/mirrorlist`.
97 It will not flush any other mirrors, just insert new ones.
98
99 :param mirrors: A dictionary of `{'url' : 'country', 'url2' : 'country'}`
100 :type mirrors: dict
101 """
102 original_mirrorlist = ''
103 with open('/etc/pacman.d/mirrorlist', 'r') as original:
104 original_mirrorlist = original.read()
105
106 with open('/etc/pacman.d/mirrorlist', 'w') as new_mirrorlist:
107 for mirror, country in mirrors.items():
108 new_mirrorlist.write(f'## {country}\n')
109 new_mirrorlist.write(f'Server = {mirror}\n')
110 new_mirrorlist.write('\n')
111 new_mirrorlist.write(original_mirrorlist)
112
113 return True
114
115
116 def use_mirrors(regions: dict, destination='/etc/pacman.d/mirrorlist'):
117 log(f'A new package mirror-list has been created: {destination}', level=logging.INFO)
118 for region, mirrors in regions.items():
119 with open(destination, 'w') as mirrorlist:
120 for mirror in mirrors:
121 mirrorlist.write(f'## {region}\n')
122 mirrorlist.write(f'Server = {mirror}\n')
123 return True
124
125
126 def re_rank_mirrors(top=10, *positionals, **kwargs):
127 if SysCommand(f'/usr/bin/rankmirrors -n {top} /etc/pacman.d/mirrorlist > /etc/pacman.d/mirrorlist').exit_code == 0:
128 return True
129 return False
130
131
132 def list_mirrors(sort_order=["https", "http"]):
133 url = "https://archlinux.org/mirrorlist/?protocol=https&protocol=http&ip_version=4&ip_version=6&use_mirror_status=on"
134 regions = {}
135
136 try:
137 response = urllib.request.urlopen(url)
138 except urllib.error.URLError as err:
139 log(f'Could not fetch an active mirror-list: {err}', level=logging.WARNING, fg="yellow")
140 return regions
141
142 mirrorlist = response.read()
143 if sort_order:
144 mirrorlist = sort_mirrorlist(mirrorlist, sort_order=sort_order)
145
146 region = 'Unknown region'
147 for line in mirrorlist.split(b'\n'):
148 if len(line.strip()) == 0:
149 continue
150
151 line = line.decode('UTF-8').strip('\n').strip('\r')
152 if line[:3] == '## ':
153 region = line[3:]
154 elif line[:10] == '#Server = ':
155 regions.setdefault(region, {})
156
157 url = line.lstrip('#Server = ')
158 regions[region][url] = True
159
160 return regions
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/archinstall/lib/mirrors.py b/archinstall/lib/mirrors.py
--- a/archinstall/lib/mirrors.py
+++ b/archinstall/lib/mirrors.py
@@ -1,6 +1,6 @@
import urllib.error
import urllib.request
-from typing import Union
+from typing import Union, Mapping, Iterable
from .general import *
from .output import log
@@ -113,14 +113,16 @@
return True
-def use_mirrors(regions: dict, destination='/etc/pacman.d/mirrorlist'):
+def use_mirrors(
+ regions: Mapping[str, Iterable[str]],
+ destination: str ='/etc/pacman.d/mirrorlist'
+) -> None:
log(f'A new package mirror-list has been created: {destination}', level=logging.INFO)
- for region, mirrors in regions.items():
- with open(destination, 'w') as mirrorlist:
+ with open(destination, 'w') as mirrorlist:
+ for region, mirrors in regions.items():
for mirror in mirrors:
mirrorlist.write(f'## {region}\n')
mirrorlist.write(f'Server = {mirror}\n')
- return True
def re_rank_mirrors(top=10, *positionals, **kwargs):
|
{"golden_diff": "diff --git a/archinstall/lib/mirrors.py b/archinstall/lib/mirrors.py\n--- a/archinstall/lib/mirrors.py\n+++ b/archinstall/lib/mirrors.py\n@@ -1,6 +1,6 @@\n import urllib.error\n import urllib.request\n-from typing import Union\n+from typing import Union, Mapping, Iterable\n \n from .general import *\n from .output import log\n@@ -113,14 +113,16 @@\n \treturn True\n \n \n-def use_mirrors(regions: dict, destination='/etc/pacman.d/mirrorlist'):\n+def use_mirrors(\n+\tregions: Mapping[str, Iterable[str]],\n+\tdestination: str ='/etc/pacman.d/mirrorlist'\n+) -> None:\n \tlog(f'A new package mirror-list has been created: {destination}', level=logging.INFO)\n-\tfor region, mirrors in regions.items():\n-\t\twith open(destination, 'w') as mirrorlist:\n+\twith open(destination, 'w') as mirrorlist:\n+\t\tfor region, mirrors in regions.items():\n \t\t\tfor mirror in mirrors:\n \t\t\t\tmirrorlist.write(f'## {region}\\n')\n \t\t\t\tmirrorlist.write(f'Server = {mirror}\\n')\n-\treturn True\n \n \n def re_rank_mirrors(top=10, *positionals, **kwargs):\n", "issue": "use_mirrors bug\nhttps://github.com/archlinux/archinstall/blob/ba725517fd290a60cd4e1ea570dbbf94a47ede05/archinstall/lib/mirrors.py#L116-L123\r\n\r\nThis code doesn't open destination file in append mode. So, if we pass a dict of mirrors with multiple regions, file will be rewritten `len(regions)` times and only the last entry will be preserved.\n", "before_files": [{"content": "import urllib.error\nimport urllib.request\nfrom typing import Union\n\nfrom .general import *\nfrom .output import log\n\ndef sort_mirrorlist(raw_data :bytes, sort_order=[\"https\", \"http\"]) -> bytes:\n\t\"\"\"\n\tThis function can sort /etc/pacman.d/mirrorlist according to the\n\tmirror's URL prefix. By default places HTTPS before HTTP but it also\n\tpreserves the country/rank-order.\n\n\tThis assumes /etc/pacman.d/mirrorlist looks like the following:\n\n\t## Comment\n\tServer = url\n\n\tor\n\n\t## Comment\n\t#Server = url\n\n\tBut the Comments need to start with double-hashmarks to be distringuished\n\tfrom server url definitions (commented or uncommented).\n\t\"\"\"\n\tcomments_and_whitespaces = b\"\"\n\n\tcategories = {key: [] for key in sort_order+[\"Unknown\"]}\n\tfor line in raw_data.split(b\"\\n\"):\n\t\tif line[0:2] in (b'##', b''):\n\t\t\tcomments_and_whitespaces += line + b'\\n'\n\t\telif line[:6].lower() == b'server' or line[:7].lower() == b'#server':\n\t\t\topening, url = line.split(b'=', 1)\n\t\t\topening, url = opening.strip(), url.strip()\n\t\t\tif (category := url.split(b'://',1)[0].decode('UTF-8')) in categories:\n\t\t\t\tcategories[category].append(comments_and_whitespaces)\n\t\t\t\tcategories[category].append(opening+b' = '+url+b'\\n')\n\t\t\telse:\n\t\t\t\tcategories[\"Unknown\"].append(comments_and_whitespaces)\n\t\t\t\tcategories[\"Unknown\"].append(opening+b' = '+url+b'\\n')\n\n\t\t\tcomments_and_whitespaces = b\"\"\n\n\n\tnew_raw_data = b''\n\tfor category in sort_order+[\"Unknown\"]:\n\t\tfor line in categories[category]:\n\t\t\tnew_raw_data += line\n\n\treturn new_raw_data\n\n\ndef filter_mirrors_by_region(regions, destination='/etc/pacman.d/mirrorlist', sort_order=[\"https\", \"http\"], *args, **kwargs) -> Union[bool, bytes]:\n\t\"\"\"\n\tThis function will change the active mirrors on the live medium by\n\tfiltering which regions are active based on `regions`.\n\n\t:param regions: A series of country codes separated by `,`. 
For instance `SE,US` for sweden and United States.\n\t:type regions: str\n\t\"\"\"\n\tregion_list = [f'country={region}' for region in regions.split(',')]\n\tresponse = urllib.request.urlopen(urllib.request.Request(f\"https://archlinux.org/mirrorlist/?{'&'.join(region_list)}&protocol=https&protocol=http&ip_version=4&ip_version=6&use_mirror_status=on'\", headers={'User-Agent': 'ArchInstall'}))\n\tnew_list = response.read().replace(b\"#Server\", b\"Server\")\n\n\tif sort_order:\n\t\tnew_list = sort_mirrorlist(new_list, sort_order=sort_order)\n\n\tif destination:\n\t\twith open(destination, \"wb\") as mirrorlist:\n\t\t\tmirrorlist.write(new_list)\n\n\t\treturn True\n\telse:\n\t\treturn new_list.decode('UTF-8')\n\n\ndef add_custom_mirrors(mirrors: list, *args, **kwargs):\n\t\"\"\"\n\tThis will append custom mirror definitions in pacman.conf\n\n\t:param mirrors: A list of mirror data according to: `{'url': 'http://url.com', 'signcheck': 'Optional', 'signoptions': 'TrustAll', 'name': 'testmirror'}`\n\t:type mirrors: dict\n\t\"\"\"\n\twith open('/etc/pacman.conf', 'a') as pacman:\n\t\tfor mirror in mirrors:\n\t\t\tpacman.write(f\"[{mirror['name']}]\\n\")\n\t\t\tpacman.write(f\"SigLevel = {mirror['signcheck']} {mirror['signoptions']}\\n\")\n\t\t\tpacman.write(f\"Server = {mirror['url']}\\n\")\n\n\treturn True\n\n\ndef insert_mirrors(mirrors, *args, **kwargs):\n\t\"\"\"\n\tThis function will insert a given mirror-list at the top of `/etc/pacman.d/mirrorlist`.\n\tIt will not flush any other mirrors, just insert new ones.\n\n\t:param mirrors: A dictionary of `{'url' : 'country', 'url2' : 'country'}`\n\t:type mirrors: dict\n\t\"\"\"\n\toriginal_mirrorlist = ''\n\twith open('/etc/pacman.d/mirrorlist', 'r') as original:\n\t\toriginal_mirrorlist = original.read()\n\n\twith open('/etc/pacman.d/mirrorlist', 'w') as new_mirrorlist:\n\t\tfor mirror, country in mirrors.items():\n\t\t\tnew_mirrorlist.write(f'## {country}\\n')\n\t\t\tnew_mirrorlist.write(f'Server = {mirror}\\n')\n\t\tnew_mirrorlist.write('\\n')\n\t\tnew_mirrorlist.write(original_mirrorlist)\n\n\treturn True\n\n\ndef use_mirrors(regions: dict, destination='/etc/pacman.d/mirrorlist'):\n\tlog(f'A new package mirror-list has been created: {destination}', level=logging.INFO)\n\tfor region, mirrors in regions.items():\n\t\twith open(destination, 'w') as mirrorlist:\n\t\t\tfor mirror in mirrors:\n\t\t\t\tmirrorlist.write(f'## {region}\\n')\n\t\t\t\tmirrorlist.write(f'Server = {mirror}\\n')\n\treturn True\n\n\ndef re_rank_mirrors(top=10, *positionals, **kwargs):\n\tif SysCommand(f'/usr/bin/rankmirrors -n {top} /etc/pacman.d/mirrorlist > /etc/pacman.d/mirrorlist').exit_code == 0:\n\t\treturn True\n\treturn False\n\n\ndef list_mirrors(sort_order=[\"https\", \"http\"]):\n\turl = \"https://archlinux.org/mirrorlist/?protocol=https&protocol=http&ip_version=4&ip_version=6&use_mirror_status=on\"\n\tregions = {}\n\n\ttry:\n\t\tresponse = urllib.request.urlopen(url)\n\texcept urllib.error.URLError as err:\n\t\tlog(f'Could not fetch an active mirror-list: {err}', level=logging.WARNING, fg=\"yellow\")\n\t\treturn regions\n\n\tmirrorlist = response.read()\n\tif sort_order:\n\t\tmirrorlist = sort_mirrorlist(mirrorlist, sort_order=sort_order)\n\n\tregion = 'Unknown region'\n\tfor line in mirrorlist.split(b'\\n'):\n\t\tif len(line.strip()) == 0:\n\t\t\tcontinue\n\n\t\tline = line.decode('UTF-8').strip('\\n').strip('\\r')\n\t\tif line[:3] == '## ':\n\t\t\tregion = line[3:]\n\t\telif line[:10] == '#Server = ':\n\t\t\tregions.setdefault(region, {})\n\n\t\t\turl = 
line.lstrip('#Server = ')\n\t\t\tregions[region][url] = True\n\n\treturn regions\n", "path": "archinstall/lib/mirrors.py"}], "after_files": [{"content": "import urllib.error\nimport urllib.request\nfrom typing import Union, Mapping, Iterable\n\nfrom .general import *\nfrom .output import log\n\ndef sort_mirrorlist(raw_data :bytes, sort_order=[\"https\", \"http\"]) -> bytes:\n\t\"\"\"\n\tThis function can sort /etc/pacman.d/mirrorlist according to the\n\tmirror's URL prefix. By default places HTTPS before HTTP but it also\n\tpreserves the country/rank-order.\n\n\tThis assumes /etc/pacman.d/mirrorlist looks like the following:\n\n\t## Comment\n\tServer = url\n\n\tor\n\n\t## Comment\n\t#Server = url\n\n\tBut the Comments need to start with double-hashmarks to be distringuished\n\tfrom server url definitions (commented or uncommented).\n\t\"\"\"\n\tcomments_and_whitespaces = b\"\"\n\n\tcategories = {key: [] for key in sort_order+[\"Unknown\"]}\n\tfor line in raw_data.split(b\"\\n\"):\n\t\tif line[0:2] in (b'##', b''):\n\t\t\tcomments_and_whitespaces += line + b'\\n'\n\t\telif line[:6].lower() == b'server' or line[:7].lower() == b'#server':\n\t\t\topening, url = line.split(b'=', 1)\n\t\t\topening, url = opening.strip(), url.strip()\n\t\t\tif (category := url.split(b'://',1)[0].decode('UTF-8')) in categories:\n\t\t\t\tcategories[category].append(comments_and_whitespaces)\n\t\t\t\tcategories[category].append(opening+b' = '+url+b'\\n')\n\t\t\telse:\n\t\t\t\tcategories[\"Unknown\"].append(comments_and_whitespaces)\n\t\t\t\tcategories[\"Unknown\"].append(opening+b' = '+url+b'\\n')\n\n\t\t\tcomments_and_whitespaces = b\"\"\n\n\n\tnew_raw_data = b''\n\tfor category in sort_order+[\"Unknown\"]:\n\t\tfor line in categories[category]:\n\t\t\tnew_raw_data += line\n\n\treturn new_raw_data\n\n\ndef filter_mirrors_by_region(regions, destination='/etc/pacman.d/mirrorlist', sort_order=[\"https\", \"http\"], *args, **kwargs) -> Union[bool, bytes]:\n\t\"\"\"\n\tThis function will change the active mirrors on the live medium by\n\tfiltering which regions are active based on `regions`.\n\n\t:param regions: A series of country codes separated by `,`. 
For instance `SE,US` for sweden and United States.\n\t:type regions: str\n\t\"\"\"\n\tregion_list = [f'country={region}' for region in regions.split(',')]\n\tresponse = urllib.request.urlopen(urllib.request.Request(f\"https://archlinux.org/mirrorlist/?{'&'.join(region_list)}&protocol=https&protocol=http&ip_version=4&ip_version=6&use_mirror_status=on'\", headers={'User-Agent': 'ArchInstall'}))\n\tnew_list = response.read().replace(b\"#Server\", b\"Server\")\n\n\tif sort_order:\n\t\tnew_list = sort_mirrorlist(new_list, sort_order=sort_order)\n\n\tif destination:\n\t\twith open(destination, \"wb\") as mirrorlist:\n\t\t\tmirrorlist.write(new_list)\n\n\t\treturn True\n\telse:\n\t\treturn new_list.decode('UTF-8')\n\n\ndef add_custom_mirrors(mirrors: list, *args, **kwargs):\n\t\"\"\"\n\tThis will append custom mirror definitions in pacman.conf\n\n\t:param mirrors: A list of mirror data according to: `{'url': 'http://url.com', 'signcheck': 'Optional', 'signoptions': 'TrustAll', 'name': 'testmirror'}`\n\t:type mirrors: dict\n\t\"\"\"\n\twith open('/etc/pacman.conf', 'a') as pacman:\n\t\tfor mirror in mirrors:\n\t\t\tpacman.write(f\"[{mirror['name']}]\\n\")\n\t\t\tpacman.write(f\"SigLevel = {mirror['signcheck']} {mirror['signoptions']}\\n\")\n\t\t\tpacman.write(f\"Server = {mirror['url']}\\n\")\n\n\treturn True\n\n\ndef insert_mirrors(mirrors, *args, **kwargs):\n\t\"\"\"\n\tThis function will insert a given mirror-list at the top of `/etc/pacman.d/mirrorlist`.\n\tIt will not flush any other mirrors, just insert new ones.\n\n\t:param mirrors: A dictionary of `{'url' : 'country', 'url2' : 'country'}`\n\t:type mirrors: dict\n\t\"\"\"\n\toriginal_mirrorlist = ''\n\twith open('/etc/pacman.d/mirrorlist', 'r') as original:\n\t\toriginal_mirrorlist = original.read()\n\n\twith open('/etc/pacman.d/mirrorlist', 'w') as new_mirrorlist:\n\t\tfor mirror, country in mirrors.items():\n\t\t\tnew_mirrorlist.write(f'## {country}\\n')\n\t\t\tnew_mirrorlist.write(f'Server = {mirror}\\n')\n\t\tnew_mirrorlist.write('\\n')\n\t\tnew_mirrorlist.write(original_mirrorlist)\n\n\treturn True\n\n\ndef use_mirrors(\n\tregions: Mapping[str, Iterable[str]],\n\tdestination: str ='/etc/pacman.d/mirrorlist'\n) -> None:\n\tlog(f'A new package mirror-list has been created: {destination}', level=logging.INFO)\n\twith open(destination, 'w') as mirrorlist:\n\t\tfor region, mirrors in regions.items():\n\t\t\tfor mirror in mirrors:\n\t\t\t\tmirrorlist.write(f'## {region}\\n')\n\t\t\t\tmirrorlist.write(f'Server = {mirror}\\n')\n\n\ndef re_rank_mirrors(top=10, *positionals, **kwargs):\n\tif SysCommand(f'/usr/bin/rankmirrors -n {top} /etc/pacman.d/mirrorlist > /etc/pacman.d/mirrorlist').exit_code == 0:\n\t\treturn True\n\treturn False\n\n\ndef list_mirrors(sort_order=[\"https\", \"http\"]):\n\turl = \"https://archlinux.org/mirrorlist/?protocol=https&protocol=http&ip_version=4&ip_version=6&use_mirror_status=on\"\n\tregions = {}\n\n\ttry:\n\t\tresponse = urllib.request.urlopen(url)\n\texcept urllib.error.URLError as err:\n\t\tlog(f'Could not fetch an active mirror-list: {err}', level=logging.WARNING, fg=\"yellow\")\n\t\treturn regions\n\n\tmirrorlist = response.read()\n\tif sort_order:\n\t\tmirrorlist = sort_mirrorlist(mirrorlist, sort_order=sort_order)\n\n\tregion = 'Unknown region'\n\tfor line in mirrorlist.split(b'\\n'):\n\t\tif len(line.strip()) == 0:\n\t\t\tcontinue\n\n\t\tline = line.decode('UTF-8').strip('\\n').strip('\\r')\n\t\tif line[:3] == '## ':\n\t\t\tregion = line[3:]\n\t\telif line[:10] == '#Server = 
':\n\t\t\tregions.setdefault(region, {})\n\n\t\t\turl = line.lstrip('#Server = ')\n\t\t\tregions[region][url] = True\n\n\treturn regions\n", "path": "archinstall/lib/mirrors.py"}]}
| 2,249 | 280 |
gh_patches_debug_12271
|
rasdani/github-patches
|
git_diff
|
cloud-custodian__cloud-custodian-3464
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tagging CloudFormation - Error: Parameters must have value
Initially the stack is created with an input parameter.
**c7n policy**
```
policies:
- name: add-cfn-tag
resource: cfn
filters:
- "tag:testcfn": present
actions:
- type: tag
value: abc
key: BusinessUnit
```
**Error**
An error occurred (ValidationError) when calling the UpdateStack operation: Parameters: [input_param] must have values
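For context, the update call apparently has to re-declare the stack's existing parameters even when the template is reused; a sketch of what that looks like with boto3 (the stack name and tag values below are only illustrative):

```python
import boto3

client = boto3.client('cloudformation')
stack = client.describe_stacks(StackName='example-stack')['Stacks'][0]

# Re-declare each existing parameter so UpdateStack keeps its current value
# instead of failing with "Parameters: [...] must have values".
params = [
    {'ParameterKey': p['ParameterKey'], 'UsePreviousValue': True}
    for p in stack.get('Parameters', [])
]

client.update_stack(
    StackName=stack['StackName'],
    UsePreviousTemplate=True,
    Parameters=params,
    Tags=[{'Key': 'BusinessUnit', 'Value': 'abc'}],
)
```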
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `c7n/resources/cfn.py`
Content:
```
1 # Copyright 2015-2017 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 import logging
17
18 from concurrent.futures import as_completed
19
20 from c7n.actions import BaseAction
21 from c7n.manager import resources
22 from c7n.query import QueryResourceManager
23 from c7n.utils import local_session, type_schema
24 from c7n.tags import RemoveTag, Tag
25
26 log = logging.getLogger('custodian.cfn')
27
28
29 @resources.register('cfn')
30 class CloudFormation(QueryResourceManager):
31
32 class resource_type(object):
33 service = 'cloudformation'
34 type = 'stack'
35 enum_spec = ('describe_stacks', 'Stacks[]', None)
36 id = 'StackName'
37 filter_name = 'StackName'
38 filter_type = 'scalar'
39 name = 'StackName'
40 date = 'CreationTime'
41 dimension = None
42 config_type = 'AWS::CloudFormation::Stack'
43
44
45 @CloudFormation.action_registry.register('delete')
46 class Delete(BaseAction):
47 """Action to delete cloudformation stacks
48
49 It is recommended to use a filter to avoid unwanted deletion of stacks
50
51 :example:
52
53 .. code-block:: yaml
54
55 policies:
56 - name: cloudformation-delete-failed-stacks
57 resource: cfn
58 filters:
59 - StackStatus: ROLLBACK_COMPLETE
60 actions:
61 - delete
62 """
63
64 schema = type_schema('delete')
65 permissions = ("cloudformation:DeleteStack",)
66
67 def process(self, stacks):
68 with self.executor_factory(max_workers=10) as w:
69 list(w.map(self.process_stacks, stacks))
70
71 def process_stacks(self, stack):
72 client = local_session(
73 self.manager.session_factory).client('cloudformation')
74 client.delete_stack(StackName=stack['StackName'])
75
76
77 @CloudFormation.action_registry.register('set-protection')
78 class SetProtection(BaseAction):
79 """Action to disable termination protection
80
81 It is recommended to use a filter to avoid unwanted deletion of stacks
82
83 :example:
84
85 .. code-block:: yaml
86
87 policies:
88 - name: cloudformation-disable-protection
89 resource: cfn
90 filters:
91 - StackStatus: CREATE_COMPLETE
92 actions:
93 - type: set-protection
94 state: False
95 """
96
97 schema = type_schema(
98 'set-protection', state={'type': 'boolean', 'default': False})
99
100 permissions = ('cloudformation:UpdateStack',)
101
102 def process(self, stacks):
103 client = local_session(
104 self.manager.session_factory).client('cloudformation')
105
106 with self.executor_factory(max_workers=3) as w:
107 futures = {}
108 for s in stacks:
109 futures[w.submit(self.process_stacks, client, s)] = s
110 for f in as_completed(futures):
111 s = futures[f]
112 if f.exception():
113 self.log.error(
114 "Error updating protection stack:%s error:%s",
115 s['StackName'], f.exception())
116
117 def process_stacks(self, client, stack):
118 client.update_termination_protection(
119 EnableTerminationProtection=self.data.get('state', False),
120 StackName=stack['StackName'])
121
122
123 @CloudFormation.action_registry.register('tag')
124 class CloudFormationAddTag(Tag):
125 """Action to tag a cloudformation stack
126
127 :example:
128
129 .. code-block: yaml
130
131 policies:
132 - name: add-cfn-tag
133 resource: cfn
134 filters:
135 - 'tag:DesiredTag': absent
136 actions:
137 - type: tag
138 key: DesiredTag
139 value: DesiredValue
140 """
141 permissions = ('cloudformation:UpdateStack',)
142
143 def process_resource_set(self, stacks, tags):
144 client = local_session(
145 self.manager.session_factory).client('cloudformation')
146
147 def _tag_stacks(s):
148 client.update_stack(
149 StackName=s['StackName'],
150 UsePreviousTemplate=True,
151 Tags=tags)
152
153 with self.executor_factory(max_workers=2) as w:
154 list(w.map(_tag_stacks, stacks))
155
156
157 @CloudFormation.action_registry.register('remove-tag')
158 class CloudFormationRemoveTag(RemoveTag):
159 """Action to remove tags from a cloudformation stack
160
161 :example:
162
163 .. code-block: yaml
164
165 policies:
166 - name: add-cfn-tag
167 resource: cfn
168 filters:
169 - 'tag:DesiredTag': present
170 actions:
171 - type: remove-tag
172 tags: ['DesiredTag']
173 """
174
175 def process_resource_set(self, stacks, keys):
176 client = local_session(
177 self.manager.session_factory).client('cloudformation')
178
179 def _remove_tag(s):
180 tags = [t for t in s['Tags'] if t['Key'] not in keys]
181 client.update_stack(
182 StackName=s['StackName'],
183 UsePreviousTemplate=True,
184 Tags=tags)
185
186 with self.executor_factory(max_workers=2) as w:
187 list(w.map(_remove_tag, stacks))
188
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/c7n/resources/cfn.py b/c7n/resources/cfn.py
--- a/c7n/resources/cfn.py
+++ b/c7n/resources/cfn.py
@@ -145,9 +145,14 @@
self.manager.session_factory).client('cloudformation')
def _tag_stacks(s):
+ params = []
+ for p in s.get('Parameters', []):
+ params.append({'ParameterKey': p['ParameterKey'],
+ 'UsePreviousValue': True})
client.update_stack(
StackName=s['StackName'],
UsePreviousTemplate=True,
+ Parameters=params,
Tags=tags)
with self.executor_factory(max_workers=2) as w:
|
{"golden_diff": "diff --git a/c7n/resources/cfn.py b/c7n/resources/cfn.py\n--- a/c7n/resources/cfn.py\n+++ b/c7n/resources/cfn.py\n@@ -145,9 +145,14 @@\n self.manager.session_factory).client('cloudformation')\n \n def _tag_stacks(s):\n+ params = []\n+ for p in s.get('Parameters', []):\n+ params.append({'ParameterKey': p['ParameterKey'],\n+ 'UsePreviousValue': True})\n client.update_stack(\n StackName=s['StackName'],\n UsePreviousTemplate=True,\n+ Parameters=params,\n Tags=tags)\n \n with self.executor_factory(max_workers=2) as w:\n", "issue": "Tagging CloudFormation - Error: Parameters must have value\nInitially the stack is created with an input parameter.\r\n\r\n**c7n policy**\r\n```\r\npolicies:\r\n - name: add-cfn-tag\r\n resource: cfn\r\n filters:\r\n - \"tag:testcfn\": present\r\n actions:\r\n - type: tag\r\n value: abc\r\n key: BusinessUnit\r\n```\r\n**Error**\r\nAn error occurred (ValidationError) when calling the UpdateStack operation: Parameters: [input_param] must have values\n", "before_files": [{"content": "# Copyright 2015-2017 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\n\nfrom concurrent.futures import as_completed\n\nfrom c7n.actions import BaseAction\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager\nfrom c7n.utils import local_session, type_schema\nfrom c7n.tags import RemoveTag, Tag\n\nlog = logging.getLogger('custodian.cfn')\n\n\[email protected]('cfn')\nclass CloudFormation(QueryResourceManager):\n\n class resource_type(object):\n service = 'cloudformation'\n type = 'stack'\n enum_spec = ('describe_stacks', 'Stacks[]', None)\n id = 'StackName'\n filter_name = 'StackName'\n filter_type = 'scalar'\n name = 'StackName'\n date = 'CreationTime'\n dimension = None\n config_type = 'AWS::CloudFormation::Stack'\n\n\[email protected]_registry.register('delete')\nclass Delete(BaseAction):\n \"\"\"Action to delete cloudformation stacks\n\n It is recommended to use a filter to avoid unwanted deletion of stacks\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: cloudformation-delete-failed-stacks\n resource: cfn\n filters:\n - StackStatus: ROLLBACK_COMPLETE\n actions:\n - delete\n \"\"\"\n\n schema = type_schema('delete')\n permissions = (\"cloudformation:DeleteStack\",)\n\n def process(self, stacks):\n with self.executor_factory(max_workers=10) as w:\n list(w.map(self.process_stacks, stacks))\n\n def process_stacks(self, stack):\n client = local_session(\n self.manager.session_factory).client('cloudformation')\n client.delete_stack(StackName=stack['StackName'])\n\n\[email protected]_registry.register('set-protection')\nclass SetProtection(BaseAction):\n \"\"\"Action to disable termination protection\n\n It is recommended to use a filter to avoid unwanted deletion of stacks\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: cloudformation-disable-protection\n resource: cfn\n filters:\n - StackStatus: CREATE_COMPLETE\n actions:\n - type: set-protection\n state: False\n \"\"\"\n\n schema = type_schema(\n 'set-protection', state={'type': 'boolean', 'default': False})\n\n permissions = ('cloudformation:UpdateStack',)\n\n def process(self, stacks):\n client = local_session(\n self.manager.session_factory).client('cloudformation')\n\n with self.executor_factory(max_workers=3) as w:\n futures = {}\n for s in stacks:\n futures[w.submit(self.process_stacks, client, s)] = s\n for f in as_completed(futures):\n s = futures[f]\n if f.exception():\n self.log.error(\n \"Error updating protection stack:%s error:%s\",\n s['StackName'], f.exception())\n\n def process_stacks(self, client, stack):\n client.update_termination_protection(\n EnableTerminationProtection=self.data.get('state', False),\n StackName=stack['StackName'])\n\n\[email protected]_registry.register('tag')\nclass CloudFormationAddTag(Tag):\n \"\"\"Action to tag a cloudformation stack\n\n :example:\n\n .. code-block: yaml\n\n policies:\n - name: add-cfn-tag\n resource: cfn\n filters:\n - 'tag:DesiredTag': absent\n actions:\n - type: tag\n key: DesiredTag\n value: DesiredValue\n \"\"\"\n permissions = ('cloudformation:UpdateStack',)\n\n def process_resource_set(self, stacks, tags):\n client = local_session(\n self.manager.session_factory).client('cloudformation')\n\n def _tag_stacks(s):\n client.update_stack(\n StackName=s['StackName'],\n UsePreviousTemplate=True,\n Tags=tags)\n\n with self.executor_factory(max_workers=2) as w:\n list(w.map(_tag_stacks, stacks))\n\n\[email protected]_registry.register('remove-tag')\nclass CloudFormationRemoveTag(RemoveTag):\n \"\"\"Action to remove tags from a cloudformation stack\n\n :example:\n\n .. 
code-block: yaml\n\n policies:\n - name: add-cfn-tag\n resource: cfn\n filters:\n - 'tag:DesiredTag': present\n actions:\n - type: remove-tag\n tags: ['DesiredTag']\n \"\"\"\n\n def process_resource_set(self, stacks, keys):\n client = local_session(\n self.manager.session_factory).client('cloudformation')\n\n def _remove_tag(s):\n tags = [t for t in s['Tags'] if t['Key'] not in keys]\n client.update_stack(\n StackName=s['StackName'],\n UsePreviousTemplate=True,\n Tags=tags)\n\n with self.executor_factory(max_workers=2) as w:\n list(w.map(_remove_tag, stacks))\n", "path": "c7n/resources/cfn.py"}], "after_files": [{"content": "# Copyright 2015-2017 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\n\nfrom concurrent.futures import as_completed\n\nfrom c7n.actions import BaseAction\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager\nfrom c7n.utils import local_session, type_schema\nfrom c7n.tags import RemoveTag, Tag\n\nlog = logging.getLogger('custodian.cfn')\n\n\[email protected]('cfn')\nclass CloudFormation(QueryResourceManager):\n\n class resource_type(object):\n service = 'cloudformation'\n type = 'stack'\n enum_spec = ('describe_stacks', 'Stacks[]', None)\n id = 'StackName'\n filter_name = 'StackName'\n filter_type = 'scalar'\n name = 'StackName'\n date = 'CreationTime'\n dimension = None\n config_type = 'AWS::CloudFormation::Stack'\n\n\[email protected]_registry.register('delete')\nclass Delete(BaseAction):\n \"\"\"Action to delete cloudformation stacks\n\n It is recommended to use a filter to avoid unwanted deletion of stacks\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: cloudformation-delete-failed-stacks\n resource: cfn\n filters:\n - StackStatus: ROLLBACK_COMPLETE\n actions:\n - delete\n \"\"\"\n\n schema = type_schema('delete')\n permissions = (\"cloudformation:DeleteStack\",)\n\n def process(self, stacks):\n with self.executor_factory(max_workers=10) as w:\n list(w.map(self.process_stacks, stacks))\n\n def process_stacks(self, stack):\n client = local_session(\n self.manager.session_factory).client('cloudformation')\n client.delete_stack(StackName=stack['StackName'])\n\n\[email protected]_registry.register('set-protection')\nclass SetProtection(BaseAction):\n \"\"\"Action to disable termination protection\n\n It is recommended to use a filter to avoid unwanted deletion of stacks\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: cloudformation-disable-protection\n resource: cfn\n filters:\n - StackStatus: CREATE_COMPLETE\n actions:\n - type: set-protection\n state: False\n \"\"\"\n\n schema = type_schema(\n 'set-protection', state={'type': 'boolean', 'default': False})\n\n permissions = ('cloudformation:UpdateStack',)\n\n def process(self, stacks):\n client = local_session(\n self.manager.session_factory).client('cloudformation')\n\n with self.executor_factory(max_workers=3) as w:\n futures = {}\n for s in stacks:\n futures[w.submit(self.process_stacks, client, s)] = s\n for f in as_completed(futures):\n s = futures[f]\n if f.exception():\n self.log.error(\n \"Error updating protection stack:%s error:%s\",\n s['StackName'], f.exception())\n\n def process_stacks(self, client, stack):\n client.update_termination_protection(\n EnableTerminationProtection=self.data.get('state', False),\n StackName=stack['StackName'])\n\n\[email protected]_registry.register('tag')\nclass CloudFormationAddTag(Tag):\n \"\"\"Action to tag a cloudformation stack\n\n :example:\n\n .. code-block: yaml\n\n policies:\n - name: add-cfn-tag\n resource: cfn\n filters:\n - 'tag:DesiredTag': absent\n actions:\n - type: tag\n key: DesiredTag\n value: DesiredValue\n \"\"\"\n permissions = ('cloudformation:UpdateStack',)\n\n def process_resource_set(self, stacks, tags):\n client = local_session(\n self.manager.session_factory).client('cloudformation')\n\n def _tag_stacks(s):\n params = []\n for p in s.get('Parameters', []):\n params.append({'ParameterKey': p['ParameterKey'],\n 'UsePreviousValue': True})\n client.update_stack(\n StackName=s['StackName'],\n UsePreviousTemplate=True,\n Parameters=params,\n Tags=tags)\n\n with self.executor_factory(max_workers=2) as w:\n list(w.map(_tag_stacks, stacks))\n\n\[email protected]_registry.register('remove-tag')\nclass CloudFormationRemoveTag(RemoveTag):\n \"\"\"Action to remove tags from a cloudformation stack\n\n :example:\n\n .. code-block: yaml\n\n policies:\n - name: add-cfn-tag\n resource: cfn\n filters:\n - 'tag:DesiredTag': present\n actions:\n - type: remove-tag\n tags: ['DesiredTag']\n \"\"\"\n\n def process_resource_set(self, stacks, keys):\n client = local_session(\n self.manager.session_factory).client('cloudformation')\n\n def _remove_tag(s):\n tags = [t for t in s['Tags'] if t['Key'] not in keys]\n client.update_stack(\n StackName=s['StackName'],\n UsePreviousTemplate=True,\n Tags=tags)\n\n with self.executor_factory(max_workers=2) as w:\n list(w.map(_remove_tag, stacks))\n", "path": "c7n/resources/cfn.py"}]}
| 2,041 | 157 |
gh_patches_debug_7223
|
rasdani/github-patches
|
git_diff
|
pretix__pretix-687
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Documentation of order_paid-signal unclear
This might just be me not being a native English speaker, but I think the documentation at https://github.com/pretix/pretix/blob/a08cb3b8e4adaa300868e3f730e045bc8e6d0203/src/pretix/base/signals.py#L152-L154 is missing something.
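For reference, a receiver for one of these event-plugin signals is hooked up with Django's usual decorator; the keyword arguments below (``order``) and the handler body are assumptions based on the signal's name, not taken from the linked lines:

```python
from django.dispatch import receiver

from pretix.base.signals import order_paid


@receiver(order_paid, dispatch_uid="myplugin_order_paid")
def handle_order_paid(sender, order=None, **kwargs):
    # sender is the Event; the receiver only fires for plugins enabled on it.
    print("order {} for event {} was paid".format(order.code, sender.slug))
```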
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pretix/base/signals.py`
Content:
```
1 import warnings
2 from typing import Any, Callable, List, Tuple
3
4 import django.dispatch
5 from django.apps import apps
6 from django.conf import settings
7 from django.dispatch.dispatcher import NO_RECEIVERS
8
9 from .models import Event
10
11 app_cache = {}
12
13
14 def _populate_app_cache():
15 global app_cache
16 apps.check_apps_ready()
17 for ac in apps.app_configs.values():
18 app_cache[ac.name] = ac
19
20
21 class EventPluginSignal(django.dispatch.Signal):
22 """
23 This is an extension to Django's built-in signals which differs in a way that it sends
24 out its events only to receivers which belong to plugins that are enabled for the given
25 Event.
26 """
27
28 def send(self, sender: Event, **named) -> List[Tuple[Callable, Any]]:
29 """
30 Send signal from sender to all connected receivers that belong to
31 plugins enabled for the given Event.
32
33 sender is required to be an instance of ``pretix.base.models.Event``.
34 """
35 if sender and not isinstance(sender, Event):
36 raise ValueError("Sender needs to be an event.")
37
38 responses = []
39 if not self.receivers or self.sender_receivers_cache.get(sender) is NO_RECEIVERS:
40 return responses
41
42 if not app_cache:
43 _populate_app_cache()
44
45 for receiver in self._live_receivers(sender):
46 # Find the Django application this belongs to
47 searchpath = receiver.__module__
48 core_module = any([searchpath.startswith(cm) for cm in settings.CORE_MODULES])
49 app = None
50 if not core_module:
51 while True:
52 app = app_cache.get(searchpath)
53 if "." not in searchpath or app:
54 break
55 searchpath, _ = searchpath.rsplit(".", 1)
56
57 # Only fire receivers from active plugins and core modules
58 if core_module or (sender and app and app.name in sender.get_plugins()):
59 if not hasattr(app, 'compatibility_errors') or not app.compatibility_errors:
60 response = receiver(signal=self, sender=sender, **named)
61 responses.append((receiver, response))
62 return sorted(responses, key=lambda r: (receiver.__module__, receiver.__name__))
63
64
65 class DeprecatedSignal(django.dispatch.Signal):
66
67 def connect(self, receiver, sender=None, weak=True, dispatch_uid=None):
68 warnings.warn('This signal is deprecated and will soon be removed', stacklevel=3)
69 super().connect(receiver, sender=None, weak=True, dispatch_uid=None)
70
71
72 event_live_issues = EventPluginSignal(
73 providing_args=[]
74 )
75 """
76 This signal is sent out to determine whether an event can be taken live. If you want to
77 prevent the event from going live, return a string that will be displayed to the user
78 as the error message. If you don't, your receiver should return ``None``.
79
80 As with all event-plugin signals, the ``sender`` keyword argument will contain the event.
81 """
82
83
84 register_payment_providers = EventPluginSignal(
85 providing_args=[]
86 )
87 """
88 This signal is sent out to get all known payment providers. Receivers should return a
89 subclass of pretix.base.payment.BasePaymentProvider
90
91 As with all event-plugin signals, the ``sender`` keyword argument will contain the event.
92 """
93
94 register_invoice_renderers = EventPluginSignal(
95 providing_args=[]
96 )
97 """
98 This signal is sent out to get all known invoice renderers. Receivers should return a
99 subclass of pretix.base.invoice.BaseInvoiceRenderer
100
101 As with all event-plugin signals, the ``sender`` keyword argument will contain the event.
102 """
103
104 register_ticket_outputs = EventPluginSignal(
105 providing_args=[]
106 )
107 """
108 This signal is sent out to get all known ticket outputs. Receivers should return a
109 subclass of pretix.base.ticketoutput.BaseTicketOutput
110
111 As with all event-plugin signals, the ``sender`` keyword argument will contain the event.
112 """
113
114 register_data_exporters = EventPluginSignal(
115 providing_args=[]
116 )
117 """
118 This signal is sent out to get all known data exporters. Receivers should return a
119 subclass of pretix.base.exporter.BaseExporter
120
121 As with all event-plugin signals, the ``sender`` keyword argument will contain the event.
122 """
123
124 validate_cart = EventPluginSignal(
125 providing_args=["positions"]
126 )
127 """
128 This signal is sent out before the user starts checkout. It includes an iterable
129 with the current CartPosition objects.
130 The response of receivers will be ignored, but you can raise a CartError with an
131 appropriate exception message.
132
133 As with all event-plugin signals, the ``sender`` keyword argument will contain the event.
134 """
135
136 order_placed = EventPluginSignal(
137 providing_args=["order"]
138 )
139 """
140 This signal is sent out every time an order is placed. The order object is given
141 as the first argument. This signal is *not* sent out if an order is created through
142 splitting an existing order, so you can not expect to see all orders by listening
143 to this signal.
144
145 As with all event-plugin signals, the ``sender`` keyword argument will contain the event.
146 """
147
148 order_paid = EventPluginSignal(
149 providing_args=["order"]
150 )
151 """
152 This signal is sent out every time an order is paid. The order object is given
153 as the first argument. This signal is *not* sent out if an order is marked as paid
154 because it an already-paid order has been splitted.
155
156 As with all event-plugin signals, the ``sender`` keyword argument will contain the event.
157 """
158
159 logentry_display = EventPluginSignal(
160 providing_args=["logentry"]
161 )
162 """
163 To display an instance of the ``LogEntry`` model to a human user,
164 ``pretix.base.signals.logentry_display`` will be sent out with a ``logentry`` argument.
165
166 The first received response that is not ``None`` will be used to display the log entry
167 to the user. The receivers are expected to return plain text.
168
169 As with all event-plugin signals, the ``sender`` keyword argument will contain the event.
170 """
171
172 logentry_object_link = EventPluginSignal(
173 providing_args=["logentry"]
174 )
175 """
176 To display the relationship of an instance of the ``LogEntry`` model to another model
177 to a human user, ``pretix.base.signals.logentry_object_link`` will be sent out with a
178 ``logentry`` argument.
179
180 The first received response that is not ``None`` will be used to display the related object
181 to the user. The receivers are expected to return a HTML link. The internal implementation
182 builds the links like this::
183
184 a_text = _('Tax rule {val}')
185 a_map = {
186 'href': reverse('control:event.settings.tax.edit', kwargs={
187 'event': sender.slug,
188 'organizer': sender.organizer.slug,
189 'rule': logentry.content_object.id
190 }),
191 'val': escape(logentry.content_object.name),
192 }
193 a_map['val'] = '<a href="{href}">{val}</a>'.format_map(a_map)
194 return a_text.format_map(a_map)
195
196 Make sure that any user content in the HTML code you return is properly escaped!
197 As with all event-plugin signals, the ``sender`` keyword argument will contain the event.
198 """
199
200 requiredaction_display = EventPluginSignal(
201 providing_args=["action", "request"]
202 )
203 """
204 To display an instance of the ``RequiredAction`` model to a human user,
205 ``pretix.base.signals.requiredaction_display`` will be sent out with a ``action`` argument.
206 You will also get the current ``request`` in a different argument.
207
208 The first received response that is not ``None`` will be used to display the log entry
209 to the user. The receivers are expected to return HTML code.
210
211 As with all event-plugin signals, the ``sender`` keyword argument will contain the event.
212 """
213
214 event_copy_data = EventPluginSignal(
215 providing_args=["other"]
216 )
217 """
218 This signal is sent out when a new event is created as a clone of an existing event, i.e.
219 the settings from the older event are copied to the newer one. You can listen to this
220 signal to copy data or configuration stored within your plugin's models as well.
221
222 You don't need to copy data inside the general settings storage which is cloned automatically,
223 but you might need to modify that data.
224
225 The ``sender`` keyword argument will contain the event of the **new** event. The ``other``
226 keyword argument will contain the event to **copy from**.
227 """
228
229 periodic_task = django.dispatch.Signal()
230 """
231 This is a regular django signal (no pretix event signal) that we send out every
232 time the periodic task cronjob runs. This interval is not sharply defined, it can
233 be everything between a minute and a day. The actions you perform should be
234 idempotent, i.e. it should not make a difference if this is sent out more often
235 than expected.
236 """
237
238 register_global_settings = django.dispatch.Signal()
239 """
240 All plugins that are installed may send fields for the global settings form, as
241 an OrderedDict of (setting name, form field).
242 """
243
244 order_fee_calculation = EventPluginSignal(
245 providing_args=['request']
246 )
247 """
248 This signals allows you to add fees to an order while it is being created. You are expected to
249 return a list of ``OrderFee`` objects that are not yet saved to the database
250 (because there is no order yet).
251
252 As with all plugin signals, the ``sender`` keyword argument will contain the event. A ``positions``
253 argument will contain the cart positions and ``invoice_address`` the invoice address (useful for
254 tax calculation). The argument ``meta_info`` contains the order's meta dictionary.
255 """
256
257 order_fee_type_name = EventPluginSignal(
258 providing_args=['request', 'fee']
259 )
260 """
261 This signals allows you to return a human-readable description for a fee type based on the ``fee_type``
262 and ``internal_type`` attributes of the ``OrderFee`` model that you get as keyword arguments. You are
263 expected to return a string or None, if you don't know about this fee.
264
265 As with all plugin signals, the ``sender`` keyword argument will contain the event.
266 """
267
268 allow_ticket_download = EventPluginSignal(
269 providing_args=['order']
270 )
271 """
272 This signal is sent out to check if tickets for an order can be downloaded. If any receiver returns false,
273 a download will not be offered.
274
275 As with all event-plugin signals, the ``sender`` keyword argument will contain the event.
276 """
277
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pretix/base/signals.py b/src/pretix/base/signals.py
--- a/src/pretix/base/signals.py
+++ b/src/pretix/base/signals.py
@@ -151,7 +151,7 @@
"""
This signal is sent out every time an order is paid. The order object is given
as the first argument. This signal is *not* sent out if an order is marked as paid
-because it an already-paid order has been splitted.
+because an already-paid order has been split.
As with all event-plugin signals, the ``sender`` keyword argument will contain the event.
"""
|
{"golden_diff": "diff --git a/src/pretix/base/signals.py b/src/pretix/base/signals.py\n--- a/src/pretix/base/signals.py\n+++ b/src/pretix/base/signals.py\n@@ -151,7 +151,7 @@\n \"\"\"\n This signal is sent out every time an order is paid. The order object is given\n as the first argument. This signal is *not* sent out if an order is marked as paid\n-because it an already-paid order has been splitted.\n+because an already-paid order has been split.\n \n As with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n \"\"\"\n", "issue": "Documentation of order_paid-signal unclear\nThis might be just me not being able to english, but I think that the documentation at https://github.com/pretix/pretix/blob/a08cb3b8e4adaa300868e3f730e045bc8e6d0203/src/pretix/base/signals.py#L152-L154 is missing something.\n", "before_files": [{"content": "import warnings\nfrom typing import Any, Callable, List, Tuple\n\nimport django.dispatch\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.dispatch.dispatcher import NO_RECEIVERS\n\nfrom .models import Event\n\napp_cache = {}\n\n\ndef _populate_app_cache():\n global app_cache\n apps.check_apps_ready()\n for ac in apps.app_configs.values():\n app_cache[ac.name] = ac\n\n\nclass EventPluginSignal(django.dispatch.Signal):\n \"\"\"\n This is an extension to Django's built-in signals which differs in a way that it sends\n out it's events only to receivers which belong to plugins that are enabled for the given\n Event.\n \"\"\"\n\n def send(self, sender: Event, **named) -> List[Tuple[Callable, Any]]:\n \"\"\"\n Send signal from sender to all connected receivers that belong to\n plugins enabled for the given Event.\n\n sender is required to be an instance of ``pretix.base.models.Event``.\n \"\"\"\n if sender and not isinstance(sender, Event):\n raise ValueError(\"Sender needs to be an event.\")\n\n responses = []\n if not self.receivers or self.sender_receivers_cache.get(sender) is NO_RECEIVERS:\n return responses\n\n if not app_cache:\n _populate_app_cache()\n\n for receiver in self._live_receivers(sender):\n # Find the Django application this belongs to\n searchpath = receiver.__module__\n core_module = any([searchpath.startswith(cm) for cm in settings.CORE_MODULES])\n app = None\n if not core_module:\n while True:\n app = app_cache.get(searchpath)\n if \".\" not in searchpath or app:\n break\n searchpath, _ = searchpath.rsplit(\".\", 1)\n\n # Only fire receivers from active plugins and core modules\n if core_module or (sender and app and app.name in sender.get_plugins()):\n if not hasattr(app, 'compatibility_errors') or not app.compatibility_errors:\n response = receiver(signal=self, sender=sender, **named)\n responses.append((receiver, response))\n return sorted(responses, key=lambda r: (receiver.__module__, receiver.__name__))\n\n\nclass DeprecatedSignal(django.dispatch.Signal):\n\n def connect(self, receiver, sender=None, weak=True, dispatch_uid=None):\n warnings.warn('This signal is deprecated and will soon be removed', stacklevel=3)\n super().connect(receiver, sender=None, weak=True, dispatch_uid=None)\n\n\nevent_live_issues = EventPluginSignal(\n providing_args=[]\n)\n\"\"\"\nThis signal is sent out to determine whether an event can be taken live. If you want to\nprevent the event from going live, return a string that will be displayed to the user\nas the error message. 
If you don't, your receiver should return ``None``.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\n\nregister_payment_providers = EventPluginSignal(\n providing_args=[]\n)\n\"\"\"\nThis signal is sent out to get all known payment providers. Receivers should return a\nsubclass of pretix.base.payment.BasePaymentProvider\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nregister_invoice_renderers = EventPluginSignal(\n providing_args=[]\n)\n\"\"\"\nThis signal is sent out to get all known invoice renderers. Receivers should return a\nsubclass of pretix.base.invoice.BaseInvoiceRenderer\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nregister_ticket_outputs = EventPluginSignal(\n providing_args=[]\n)\n\"\"\"\nThis signal is sent out to get all known ticket outputs. Receivers should return a\nsubclass of pretix.base.ticketoutput.BaseTicketOutput\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nregister_data_exporters = EventPluginSignal(\n providing_args=[]\n)\n\"\"\"\nThis signal is sent out to get all known data exporters. Receivers should return a\nsubclass of pretix.base.exporter.BaseExporter\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nvalidate_cart = EventPluginSignal(\n providing_args=[\"positions\"]\n)\n\"\"\"\nThis signal is sent out before the user starts checkout. It includes an iterable\nwith the current CartPosition objects.\nThe response of receivers will be ignored, but you can raise a CartError with an\nappropriate exception message.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\norder_placed = EventPluginSignal(\n providing_args=[\"order\"]\n)\n\"\"\"\nThis signal is sent out every time an order is placed. The order object is given\nas the first argument. This signal is *not* sent out if an order is created through\nsplitting an existing order, so you can not expect to see all orders by listening\nto this signal.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\norder_paid = EventPluginSignal(\n providing_args=[\"order\"]\n)\n\"\"\"\nThis signal is sent out every time an order is paid. The order object is given\nas the first argument. This signal is *not* sent out if an order is marked as paid\nbecause it an already-paid order has been splitted.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nlogentry_display = EventPluginSignal(\n providing_args=[\"logentry\"]\n)\n\"\"\"\nTo display an instance of the ``LogEntry`` model to a human user,\n``pretix.base.signals.logentry_display`` will be sent out with a ``logentry`` argument.\n\nThe first received response that is not ``None`` will be used to display the log entry\nto the user. 
The receivers are expected to return plain text.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nlogentry_object_link = EventPluginSignal(\n providing_args=[\"logentry\"]\n)\n\"\"\"\nTo display the relationship of an instance of the ``LogEntry`` model to another model\nto a human user, ``pretix.base.signals.logentry_object_link`` will be sent out with a\n``logentry`` argument.\n\nThe first received response that is not ``None`` will be used to display the related object\nto the user. The receivers are expected to return a HTML link. The internal implementation\nbuilds the links like this::\n\n a_text = _('Tax rule {val}')\n a_map = {\n 'href': reverse('control:event.settings.tax.edit', kwargs={\n 'event': sender.slug,\n 'organizer': sender.organizer.slug,\n 'rule': logentry.content_object.id\n }),\n 'val': escape(logentry.content_object.name),\n }\n a_map['val'] = '<a href=\"{href}\">{val}</a>'.format_map(a_map)\n return a_text.format_map(a_map)\n\nMake sure that any user content in the HTML code you return is properly escaped!\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nrequiredaction_display = EventPluginSignal(\n providing_args=[\"action\", \"request\"]\n)\n\"\"\"\nTo display an instance of the ``RequiredAction`` model to a human user,\n``pretix.base.signals.requiredaction_display`` will be sent out with a ``action`` argument.\nYou will also get the current ``request`` in a different argument.\n\nThe first received response that is not ``None`` will be used to display the log entry\nto the user. The receivers are expected to return HTML code.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nevent_copy_data = EventPluginSignal(\n providing_args=[\"other\"]\n)\n\"\"\"\nThis signal is sent out when a new event is created as a clone of an existing event, i.e.\nthe settings from the older event are copied to the newer one. You can listen to this\nsignal to copy data or configuration stored within your plugin's models as well.\n\nYou don't need to copy data inside the general settings storage which is cloned automatically,\nbut you might need to modify that data.\n\nThe ``sender`` keyword argument will contain the event of the **new** event. The ``other``\nkeyword argument will contain the event to **copy from**.\n\"\"\"\n\nperiodic_task = django.dispatch.Signal()\n\"\"\"\nThis is a regular django signal (no pretix event signal) that we send out every\ntime the periodic task cronjob runs. This interval is not sharply defined, it can\nbe everything between a minute and a day. The actions you perform should be\nidempotent, i.e. it should not make a difference if this is sent out more often\nthan expected.\n\"\"\"\n\nregister_global_settings = django.dispatch.Signal()\n\"\"\"\nAll plugins that are installed may send fields for the global settings form, as\nan OrderedDict of (setting name, form field).\n\"\"\"\n\norder_fee_calculation = EventPluginSignal(\n providing_args=['request']\n)\n\"\"\"\nThis signals allows you to add fees to an order while it is being created. You are expected to\nreturn a list of ``OrderFee`` objects that are not yet saved to the database\n(because there is no order yet).\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event. A ``positions``\nargument will contain the cart positions and ``invoice_address`` the invoice address (useful for\ntax calculation). 
The argument ``meta_info`` contains the order's meta dictionary.\n\"\"\"\n\norder_fee_type_name = EventPluginSignal(\n providing_args=['request', 'fee']\n)\n\"\"\"\nThis signals allows you to return a human-readable description for a fee type based on the ``fee_type``\nand ``internal_type`` attributes of the ``OrderFee`` model that you get as keyword arguments. You are\nexpected to return a string or None, if you don't know about this fee.\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nallow_ticket_download = EventPluginSignal(\n providing_args=['order']\n)\n\"\"\"\nThis signal is sent out to check if tickets for an order can be downloaded. If any receiver returns false,\na download will not be offered.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n", "path": "src/pretix/base/signals.py"}], "after_files": [{"content": "import warnings\nfrom typing import Any, Callable, List, Tuple\n\nimport django.dispatch\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.dispatch.dispatcher import NO_RECEIVERS\n\nfrom .models import Event\n\napp_cache = {}\n\n\ndef _populate_app_cache():\n global app_cache\n apps.check_apps_ready()\n for ac in apps.app_configs.values():\n app_cache[ac.name] = ac\n\n\nclass EventPluginSignal(django.dispatch.Signal):\n \"\"\"\n This is an extension to Django's built-in signals which differs in a way that it sends\n out it's events only to receivers which belong to plugins that are enabled for the given\n Event.\n \"\"\"\n\n def send(self, sender: Event, **named) -> List[Tuple[Callable, Any]]:\n \"\"\"\n Send signal from sender to all connected receivers that belong to\n plugins enabled for the given Event.\n\n sender is required to be an instance of ``pretix.base.models.Event``.\n \"\"\"\n if sender and not isinstance(sender, Event):\n raise ValueError(\"Sender needs to be an event.\")\n\n responses = []\n if not self.receivers or self.sender_receivers_cache.get(sender) is NO_RECEIVERS:\n return responses\n\n if not app_cache:\n _populate_app_cache()\n\n for receiver in self._live_receivers(sender):\n # Find the Django application this belongs to\n searchpath = receiver.__module__\n core_module = any([searchpath.startswith(cm) for cm in settings.CORE_MODULES])\n app = None\n if not core_module:\n while True:\n app = app_cache.get(searchpath)\n if \".\" not in searchpath or app:\n break\n searchpath, _ = searchpath.rsplit(\".\", 1)\n\n # Only fire receivers from active plugins and core modules\n if core_module or (sender and app and app.name in sender.get_plugins()):\n if not hasattr(app, 'compatibility_errors') or not app.compatibility_errors:\n response = receiver(signal=self, sender=sender, **named)\n responses.append((receiver, response))\n return sorted(responses, key=lambda r: (receiver.__module__, receiver.__name__))\n\n\nclass DeprecatedSignal(django.dispatch.Signal):\n\n def connect(self, receiver, sender=None, weak=True, dispatch_uid=None):\n warnings.warn('This signal is deprecated and will soon be removed', stacklevel=3)\n super().connect(receiver, sender=None, weak=True, dispatch_uid=None)\n\n\nevent_live_issues = EventPluginSignal(\n providing_args=[]\n)\n\"\"\"\nThis signal is sent out to determine whether an event can be taken live. If you want to\nprevent the event from going live, return a string that will be displayed to the user\nas the error message. 
If you don't, your receiver should return ``None``.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\n\nregister_payment_providers = EventPluginSignal(\n providing_args=[]\n)\n\"\"\"\nThis signal is sent out to get all known payment providers. Receivers should return a\nsubclass of pretix.base.payment.BasePaymentProvider\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nregister_invoice_renderers = EventPluginSignal(\n providing_args=[]\n)\n\"\"\"\nThis signal is sent out to get all known invoice renderers. Receivers should return a\nsubclass of pretix.base.invoice.BaseInvoiceRenderer\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nregister_ticket_outputs = EventPluginSignal(\n providing_args=[]\n)\n\"\"\"\nThis signal is sent out to get all known ticket outputs. Receivers should return a\nsubclass of pretix.base.ticketoutput.BaseTicketOutput\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nregister_data_exporters = EventPluginSignal(\n providing_args=[]\n)\n\"\"\"\nThis signal is sent out to get all known data exporters. Receivers should return a\nsubclass of pretix.base.exporter.BaseExporter\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nvalidate_cart = EventPluginSignal(\n providing_args=[\"positions\"]\n)\n\"\"\"\nThis signal is sent out before the user starts checkout. It includes an iterable\nwith the current CartPosition objects.\nThe response of receivers will be ignored, but you can raise a CartError with an\nappropriate exception message.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\norder_placed = EventPluginSignal(\n providing_args=[\"order\"]\n)\n\"\"\"\nThis signal is sent out every time an order is placed. The order object is given\nas the first argument. This signal is *not* sent out if an order is created through\nsplitting an existing order, so you can not expect to see all orders by listening\nto this signal.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\norder_paid = EventPluginSignal(\n providing_args=[\"order\"]\n)\n\"\"\"\nThis signal is sent out every time an order is paid. The order object is given\nas the first argument. This signal is *not* sent out if an order is marked as paid\nbecause an already-paid order has been split.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nlogentry_display = EventPluginSignal(\n providing_args=[\"logentry\"]\n)\n\"\"\"\nTo display an instance of the ``LogEntry`` model to a human user,\n``pretix.base.signals.logentry_display`` will be sent out with a ``logentry`` argument.\n\nThe first received response that is not ``None`` will be used to display the log entry\nto the user. 
The receivers are expected to return plain text.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nlogentry_object_link = EventPluginSignal(\n providing_args=[\"logentry\"]\n)\n\"\"\"\nTo display the relationship of an instance of the ``LogEntry`` model to another model\nto a human user, ``pretix.base.signals.logentry_object_link`` will be sent out with a\n``logentry`` argument.\n\nThe first received response that is not ``None`` will be used to display the related object\nto the user. The receivers are expected to return a HTML link. The internal implementation\nbuilds the links like this::\n\n a_text = _('Tax rule {val}')\n a_map = {\n 'href': reverse('control:event.settings.tax.edit', kwargs={\n 'event': sender.slug,\n 'organizer': sender.organizer.slug,\n 'rule': logentry.content_object.id\n }),\n 'val': escape(logentry.content_object.name),\n }\n a_map['val'] = '<a href=\"{href}\">{val}</a>'.format_map(a_map)\n return a_text.format_map(a_map)\n\nMake sure that any user content in the HTML code you return is properly escaped!\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nrequiredaction_display = EventPluginSignal(\n providing_args=[\"action\", \"request\"]\n)\n\"\"\"\nTo display an instance of the ``RequiredAction`` model to a human user,\n``pretix.base.signals.requiredaction_display`` will be sent out with a ``action`` argument.\nYou will also get the current ``request`` in a different argument.\n\nThe first received response that is not ``None`` will be used to display the log entry\nto the user. The receivers are expected to return HTML code.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nevent_copy_data = EventPluginSignal(\n providing_args=[\"other\"]\n)\n\"\"\"\nThis signal is sent out when a new event is created as a clone of an existing event, i.e.\nthe settings from the older event are copied to the newer one. You can listen to this\nsignal to copy data or configuration stored within your plugin's models as well.\n\nYou don't need to copy data inside the general settings storage which is cloned automatically,\nbut you might need to modify that data.\n\nThe ``sender`` keyword argument will contain the event of the **new** event. The ``other``\nkeyword argument will contain the event to **copy from**.\n\"\"\"\n\nperiodic_task = django.dispatch.Signal()\n\"\"\"\nThis is a regular django signal (no pretix event signal) that we send out every\ntime the periodic task cronjob runs. This interval is not sharply defined, it can\nbe everything between a minute and a day. The actions you perform should be\nidempotent, i.e. it should not make a difference if this is sent out more often\nthan expected.\n\"\"\"\n\nregister_global_settings = django.dispatch.Signal()\n\"\"\"\nAll plugins that are installed may send fields for the global settings form, as\nan OrderedDict of (setting name, form field).\n\"\"\"\n\norder_fee_calculation = EventPluginSignal(\n providing_args=['request']\n)\n\"\"\"\nThis signals allows you to add fees to an order while it is being created. You are expected to\nreturn a list of ``OrderFee`` objects that are not yet saved to the database\n(because there is no order yet).\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event. A ``positions``\nargument will contain the cart positions and ``invoice_address`` the invoice address (useful for\ntax calculation). 
The argument ``meta_info`` contains the order's meta dictionary.\n\"\"\"\n\norder_fee_type_name = EventPluginSignal(\n providing_args=['request', 'fee']\n)\n\"\"\"\nThis signals allows you to return a human-readable description for a fee type based on the ``fee_type``\nand ``internal_type`` attributes of the ``OrderFee`` model that you get as keyword arguments. You are\nexpected to return a string or None, if you don't know about this fee.\n\nAs with all plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n\nallow_ticket_download = EventPluginSignal(\n providing_args=['order']\n)\n\"\"\"\nThis signal is sent out to check if tickets for an order can be downloaded. If any receiver returns false,\na download will not be offered.\n\nAs with all event-plugin signals, the ``sender`` keyword argument will contain the event.\n\"\"\"\n", "path": "src/pretix/base/signals.py"}]}
| 3,251 | 139 |
gh_patches_debug_26653
|
rasdani/github-patches
|
git_diff
|
aio-libs__aiohttp-6309
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Connecting to IPv6-only hosts fails with a `RuntimeError`
### Describe the bug
When aiohttp tries to connect to a host with only IPv6 address (with no IPv4 address), the exception `RuntimeError: coroutine raised StopIteration` is raised in [`TCPConnector._create_direct_connection`](https://github.com/aio-libs/aiohttp/blob/v3.8.0/aiohttp/connector.py#L1153) caused by the `StopIteration` in [`_DNSCacheTable.next_addrs`](https://github.com/aio-libs/aiohttp/blob/v3.8.0/aiohttp/connector.py#L716).
### To Reproduce
1. Create `aiohttp.ClientSession`.
2. Try to request an URL with IPv6-only host, e.g. `ipv6.google.com`.
E.g: `await ClientSession().get('https://ipv6.google.com')`
### Expected behavior
`await ClientSession().get('https://ipv6.google.com')` does not fail and returns a response.
### Logs/tracebacks
```python-traceback
$ python -m asyncio
asyncio REPL 3.9.7 (default, Oct 10 2021, 15:13:22)
[GCC 11.1.0] on linux
>>> import asyncio
>>> from aiohttp import ClientSession
>>> await ClientSession().get('https://ipv6.google.com')
Traceback (most recent call last):
File "/home/***/.local/lib/python3.9/site-packages/aiohttp/connector.py", line 894, in _resolve_host
return self._cached_hosts.next_addrs(key)
File "/home/***/.local/lib/python3.9/site-packages/aiohttp/connector.py", line 716, in next_addrs
next(loop)
StopIteration
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.9/concurrent/futures/_base.py", line 445, in result
return self.__get_result()
File "/usr/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
File "<console>", line 1, in <module>
File "/home/***/.local/lib/python3.9/site-packages/aiohttp/client.py", line 535, in _request
conn = await self._connector.connect(
File "/home/***/.local/lib/python3.9/site-packages/aiohttp/connector.py", line 543, in connect
proto = await self._create_connection(req, traces, timeout)
File "/home/***/.local/lib/python3.9/site-packages/aiohttp/connector.py", line 906, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
File "/home/***/.local/lib/python3.9/site-packages/aiohttp/connector.py", line 1153, in _create_direct_connection
hosts = await asyncio.shield(host_resolved)
RuntimeError: coroutine raised StopIteration
```
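For context, the confusing error text is CPython's own behaviour: a `StopIteration` that escapes a coroutine is replaced by `RuntimeError("coroutine raised StopIteration")`, which is why the bare `next(loop)` in `_DNSCacheTable.next_addrs` shows up this way rather than as a plain `StopIteration`. A minimal, self-contained illustration of that conversion (not taken from the report):

```python
import asyncio


async def demo() -> None:
    # next() on an empty iterator raises StopIteration inside the coroutine,
    # mimicking the no-addresses case hit by next_addrs().
    next(iter([]))


try:
    asyncio.run(demo())
except RuntimeError as exc:
    # CPython re-raises the escaping StopIteration as a RuntimeError.
    print(exc)  # -> coroutine raised StopIteration
```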
### Python Version
```console
$ python --version
Python 3.9.7
```
### aiohttp Version
```console
$ python -m pip show aiohttp
Name: aiohttp
Version: 3.8.0
Summary: Async http client/server framework (asyncio)
Home-page: https://github.com/aio-libs/aiohttp
Author:
Author-email:
License: Apache 2
Location: /home/***/.local/lib/python3.9/site-packages
Requires: aiosignal, async-timeout, attrs, charset-normalizer, frozenlist, multidict, yarl
Required-by:
```
### multidict Version
```console
$ python -m pip show multidict
Name: multidict
Version: 5.1.0
Summary: multidict implementation
Home-page: https://github.com/aio-libs/multidict
Author: Andrew Svetlov
Author-email: [email protected]
License: Apache 2
Location: /home/***/.local/lib/python3.9/site-packages
Requires:
Required-by: aiohttp, grpclib, yarl
```
### yarl Version
```console
$ python -m pip show yarl
Name: yarl
Version: 1.6.3
Summary: Yet another URL library
Home-page: https://github.com/aio-libs/yarl/
Author: Andrew Svetlov
Author-email: [email protected]
License: Apache 2
Location: /home/***/.local/lib/python3.9/site-packages
Requires: idna, multidict
Required-by: aiohttp
```
### OS
Linux
### Related component
Client
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aiohttp/resolver.py`
Content:
```
1 import asyncio
2 import socket
3 from typing import Any, Dict, List, Type, Union
4
5 from .abc import AbstractResolver
6
7 __all__ = ("ThreadedResolver", "AsyncResolver", "DefaultResolver")
8
9 try:
10 import aiodns
11
12 # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')
13 except ImportError: # pragma: no cover
14 aiodns = None
15
16 aiodns_default = False
17
18
19 class ThreadedResolver(AbstractResolver):
20 """Threaded resolver.
21
22 Uses an Executor for synchronous getaddrinfo() calls.
23 concurrent.futures.ThreadPoolExecutor is used by default.
24 """
25
26 def __init__(self) -> None:
27 self._loop = asyncio.get_running_loop()
28
29 async def resolve(
30 self, hostname: str, port: int = 0, family: int = socket.AF_INET
31 ) -> List[Dict[str, Any]]:
32 infos = await self._loop.getaddrinfo(
33 hostname,
34 port,
35 type=socket.SOCK_STREAM,
36 family=family,
37 flags=socket.AI_ADDRCONFIG,
38 )
39
40 hosts = []
41 for family, _, proto, _, address in infos:
42 if family == socket.AF_INET6:
43 if not (socket.has_ipv6 and address[3]): # type: ignore[misc]
44 continue
45 # This is essential for link-local IPv6 addresses.
46 # LL IPv6 is a VERY rare case. Strictly speaking, we should use
47 # getnameinfo() unconditionally, but performance makes sense.
48 host, _port = socket.getnameinfo(
49 address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV
50 )
51 port = int(_port)
52 else:
53 host, port = address[:2]
54 hosts.append(
55 {
56 "hostname": hostname,
57 "host": host,
58 "port": port,
59 "family": family,
60 "proto": proto,
61 "flags": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,
62 }
63 )
64
65 return hosts
66
67 async def close(self) -> None:
68 pass
69
70
71 class AsyncResolver(AbstractResolver):
72 """Use the `aiodns` package to make asynchronous DNS lookups"""
73
74 def __init__(self, *args: Any, **kwargs: Any) -> None:
75 if aiodns is None:
76 raise RuntimeError("Resolver requires aiodns library")
77
78 self._loop = asyncio.get_running_loop()
79 self._resolver = aiodns.DNSResolver(*args, loop=self._loop, **kwargs)
80
81 async def resolve(
82 self, host: str, port: int = 0, family: int = socket.AF_INET
83 ) -> List[Dict[str, Any]]:
84 try:
85 resp = await self._resolver.gethostbyname(host, family)
86 except aiodns.error.DNSError as exc:
87 msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
88 raise OSError(msg) from exc
89 hosts = []
90 for address in resp.addresses:
91 hosts.append(
92 {
93 "hostname": host,
94 "host": address,
95 "port": port,
96 "family": family,
97 "proto": 0,
98 "flags": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,
99 }
100 )
101
102 if not hosts:
103 raise OSError("DNS lookup failed")
104
105 return hosts
106
107 async def close(self) -> None:
108 self._resolver.cancel()
109
110
111 _DefaultType = Type[Union[AsyncResolver, ThreadedResolver]]
112 DefaultResolver: _DefaultType = AsyncResolver if aiodns_default else ThreadedResolver
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py
--- a/aiohttp/resolver.py
+++ b/aiohttp/resolver.py
@@ -40,17 +40,23 @@
hosts = []
for family, _, proto, _, address in infos:
if family == socket.AF_INET6:
- if not (socket.has_ipv6 and address[3]): # type: ignore[misc]
+ if len(address) < 3:
+ # IPv6 is not supported by Python build,
+ # or IPv6 is not enabled in the host
continue
- # This is essential for link-local IPv6 addresses.
- # LL IPv6 is a VERY rare case. Strictly speaking, we should use
- # getnameinfo() unconditionally, but performance makes sense.
- host, _port = socket.getnameinfo(
- address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV
- )
- port = int(_port)
- else:
- host, port = address[:2]
+ if address[3]: # type: ignore[misc]
+ # This is essential for link-local IPv6 addresses.
+ # LL IPv6 is a VERY rare case. Strictly speaking, we should use
+ # getnameinfo() unconditionally, but performance makes sense.
+ host, _port = socket.getnameinfo(
+ address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV
+ )
+ port = int(_port)
+ else:
+ host, port = address[:2]
+ else: # IPv4
+ assert family == socket.AF_INET
+ host, port = address # type: ignore[misc]
hosts.append(
{
"hostname": hostname,
|
{"golden_diff": "diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py\n--- a/aiohttp/resolver.py\n+++ b/aiohttp/resolver.py\n@@ -40,17 +40,23 @@\n hosts = []\n for family, _, proto, _, address in infos:\n if family == socket.AF_INET6:\n- if not (socket.has_ipv6 and address[3]): # type: ignore[misc]\n+ if len(address) < 3:\n+ # IPv6 is not supported by Python build,\n+ # or IPv6 is not enabled in the host\n continue\n- # This is essential for link-local IPv6 addresses.\n- # LL IPv6 is a VERY rare case. Strictly speaking, we should use\n- # getnameinfo() unconditionally, but performance makes sense.\n- host, _port = socket.getnameinfo(\n- address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV\n- )\n- port = int(_port)\n- else:\n- host, port = address[:2]\n+ if address[3]: # type: ignore[misc]\n+ # This is essential for link-local IPv6 addresses.\n+ # LL IPv6 is a VERY rare case. Strictly speaking, we should use\n+ # getnameinfo() unconditionally, but performance makes sense.\n+ host, _port = socket.getnameinfo(\n+ address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV\n+ )\n+ port = int(_port)\n+ else:\n+ host, port = address[:2]\n+ else: # IPv4\n+ assert family == socket.AF_INET\n+ host, port = address # type: ignore[misc]\n hosts.append(\n {\n \"hostname\": hostname,\n", "issue": "Connecting to IPv6-only hosts fails with a `RuntimeError`\n### Describe the bug\r\n\r\nWhen aiohttp tries to connect to a host with only IPv6 address (with no IPv4 address), the exception `RuntimeError: coroutine raised StopIteration` is raised in [`TCPConnector._create_direct_connection`](https://github.com/aio-libs/aiohttp/blob/v3.8.0/aiohttp/connector.py#L1153) caused by the `StopIteration` in [`_DNSCacheTable.next_addrs`](https://github.com/aio-libs/aiohttp/blob/v3.8.0/aiohttp/connector.py#L716).\r\n\r\n### To Reproduce\r\n\r\n1. Create `aiohttp.ClientSession`.\r\n2. Try to request an URL with IPv6-only host, e.g. 
`ipv6.google.com`.\r\nE.g: `await ClientSession().get('https://ipv6.google.com')`\r\n\r\n### Expected behavior\r\n\r\n`await ClientSession().get('https://ipv6.google.com')` does not fail and returns a response.\r\n\r\n### Logs/tracebacks\r\n\r\n```python-traceback\r\n$ python -m asyncio\r\nasyncio REPL 3.9.7 (default, Oct 10 2021, 15:13:22)\r\n[GCC 11.1.0] on linux\r\n>>> import asyncio\r\n>>> from aiohttp import ClientSession\r\n>>> await ClientSession().get('https://ipv6.google.com')\r\nTraceback (most recent call last):\r\n File \"/home/***/.local/lib/python3.9/site-packages/aiohttp/connector.py\", line 894, in _resolve_host\r\n return self._cached_hosts.next_addrs(key)\r\n File \"/home/***/.local/lib/python3.9/site-packages/aiohttp/connector.py\", line 716, in next_addrs\r\n next(loop)\r\nStopIteration\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.9/concurrent/futures/_base.py\", line 445, in result\r\n return self.__get_result()\r\n File \"/usr/lib/python3.9/concurrent/futures/_base.py\", line 390, in __get_result\r\n raise self._exception\r\n File \"<console>\", line 1, in <module>\r\n File \"/home/***/.local/lib/python3.9/site-packages/aiohttp/client.py\", line 535, in _request\r\n conn = await self._connector.connect(\r\n File \"/home/***/.local/lib/python3.9/site-packages/aiohttp/connector.py\", line 543, in connect\r\n proto = await self._create_connection(req, traces, timeout)\r\n File \"/home/***/.local/lib/python3.9/site-packages/aiohttp/connector.py\", line 906, in _create_connection\r\n _, proto = await self._create_direct_connection(req, traces, timeout)\r\n File \"/home/***/.local/lib/python3.9/site-packages/aiohttp/connector.py\", line 1153, in _create_direct_connection\r\n hosts = await asyncio.shield(host_resolved)\r\nRuntimeError: coroutine raised StopIteration\r\n```\r\n\r\n\r\n### Python Version\r\n\r\n```console\r\n$ python --version\r\nPython 3.9.7\r\n```\r\n\r\n\r\n### aiohttp Version\r\n\r\n```console\r\n$ python -m pip show aiohttp\r\nName: aiohttp\r\nVersion: 3.8.0\r\nSummary: Async http client/server framework (asyncio)\r\nHome-page: https://github.com/aio-libs/aiohttp\r\nAuthor:\r\nAuthor-email:\r\nLicense: Apache 2\r\nLocation: /home/***/.local/lib/python3.9/site-packages\r\nRequires: aiosignal, async-timeout, attrs, charset-normalizer, frozenlist, multidict, yarl\r\nRequired-by:\r\n```\r\n\r\n\r\n### multidict Version\r\n\r\n```console\r\n$ python -m pip show multidict\r\nName: multidict\r\nVersion: 5.1.0\r\nSummary: multidict implementation\r\nHome-page: https://github.com/aio-libs/multidict\r\nAuthor: Andrew Svetlov\r\nAuthor-email: [email protected]\r\nLicense: Apache 2\r\nLocation: /home/***/.local/lib/python3.9/site-packages\r\nRequires:\r\nRequired-by: aiohttp, grpclib, yarl\r\n```\r\n\r\n\r\n### yarl Version\r\n\r\n```console\r\n$ python -m pip show yarl\r\nName: yarl\r\nVersion: 1.6.3\r\nSummary: Yet another URL library\r\nHome-page: https://github.com/aio-libs/yarl/\r\nAuthor: Andrew Svetlov\r\nAuthor-email: [email protected]\r\nLicense: Apache 2\r\nLocation: /home/***/.local/lib/python3.9/site-packages\r\nRequires: idna, multidict\r\nRequired-by: aiohttp\r\n```\r\n\r\n\r\n### OS\r\n\r\nLinux\r\n\r\n### Related component\r\n\r\nClient\r\n\r\n### Additional context\r\n\r\n_No response_\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the aio-libs Code of Conduct\n", "before_files": [{"content": "import asyncio\nimport socket\nfrom typing 
import Any, Dict, List, Type, Union\n\nfrom .abc import AbstractResolver\n\n__all__ = (\"ThreadedResolver\", \"AsyncResolver\", \"DefaultResolver\")\n\ntry:\n import aiodns\n\n # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')\nexcept ImportError: # pragma: no cover\n aiodns = None\n\naiodns_default = False\n\n\nclass ThreadedResolver(AbstractResolver):\n \"\"\"Threaded resolver.\n\n Uses an Executor for synchronous getaddrinfo() calls.\n concurrent.futures.ThreadPoolExecutor is used by default.\n \"\"\"\n\n def __init__(self) -> None:\n self._loop = asyncio.get_running_loop()\n\n async def resolve(\n self, hostname: str, port: int = 0, family: int = socket.AF_INET\n ) -> List[Dict[str, Any]]:\n infos = await self._loop.getaddrinfo(\n hostname,\n port,\n type=socket.SOCK_STREAM,\n family=family,\n flags=socket.AI_ADDRCONFIG,\n )\n\n hosts = []\n for family, _, proto, _, address in infos:\n if family == socket.AF_INET6:\n if not (socket.has_ipv6 and address[3]): # type: ignore[misc]\n continue\n # This is essential for link-local IPv6 addresses.\n # LL IPv6 is a VERY rare case. Strictly speaking, we should use\n # getnameinfo() unconditionally, but performance makes sense.\n host, _port = socket.getnameinfo(\n address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV\n )\n port = int(_port)\n else:\n host, port = address[:2]\n hosts.append(\n {\n \"hostname\": hostname,\n \"host\": host,\n \"port\": port,\n \"family\": family,\n \"proto\": proto,\n \"flags\": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,\n }\n )\n\n return hosts\n\n async def close(self) -> None:\n pass\n\n\nclass AsyncResolver(AbstractResolver):\n \"\"\"Use the `aiodns` package to make asynchronous DNS lookups\"\"\"\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n if aiodns is None:\n raise RuntimeError(\"Resolver requires aiodns library\")\n\n self._loop = asyncio.get_running_loop()\n self._resolver = aiodns.DNSResolver(*args, loop=self._loop, **kwargs)\n\n async def resolve(\n self, host: str, port: int = 0, family: int = socket.AF_INET\n ) -> List[Dict[str, Any]]:\n try:\n resp = await self._resolver.gethostbyname(host, family)\n except aiodns.error.DNSError as exc:\n msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n raise OSError(msg) from exc\n hosts = []\n for address in resp.addresses:\n hosts.append(\n {\n \"hostname\": host,\n \"host\": address,\n \"port\": port,\n \"family\": family,\n \"proto\": 0,\n \"flags\": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,\n }\n )\n\n if not hosts:\n raise OSError(\"DNS lookup failed\")\n\n return hosts\n\n async def close(self) -> None:\n self._resolver.cancel()\n\n\n_DefaultType = Type[Union[AsyncResolver, ThreadedResolver]]\nDefaultResolver: _DefaultType = AsyncResolver if aiodns_default else ThreadedResolver\n", "path": "aiohttp/resolver.py"}], "after_files": [{"content": "import asyncio\nimport socket\nfrom typing import Any, Dict, List, Type, Union\n\nfrom .abc import AbstractResolver\n\n__all__ = (\"ThreadedResolver\", \"AsyncResolver\", \"DefaultResolver\")\n\ntry:\n import aiodns\n\n # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')\nexcept ImportError: # pragma: no cover\n aiodns = None\n\naiodns_default = False\n\n\nclass ThreadedResolver(AbstractResolver):\n \"\"\"Threaded resolver.\n\n Uses an Executor for synchronous getaddrinfo() calls.\n concurrent.futures.ThreadPoolExecutor is used by default.\n \"\"\"\n\n def __init__(self) -> None:\n self._loop = asyncio.get_running_loop()\n\n async def resolve(\n self, 
hostname: str, port: int = 0, family: int = socket.AF_INET\n ) -> List[Dict[str, Any]]:\n infos = await self._loop.getaddrinfo(\n hostname,\n port,\n type=socket.SOCK_STREAM,\n family=family,\n flags=socket.AI_ADDRCONFIG,\n )\n\n hosts = []\n for family, _, proto, _, address in infos:\n if family == socket.AF_INET6:\n if len(address) < 3:\n # IPv6 is not supported by Python build,\n # or IPv6 is not enabled in the host\n continue\n if address[3]: # type: ignore[misc]\n # This is essential for link-local IPv6 addresses.\n # LL IPv6 is a VERY rare case. Strictly speaking, we should use\n # getnameinfo() unconditionally, but performance makes sense.\n host, _port = socket.getnameinfo(\n address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV\n )\n port = int(_port)\n else:\n host, port = address[:2]\n else: # IPv4\n assert family == socket.AF_INET\n host, port = address # type: ignore[misc]\n hosts.append(\n {\n \"hostname\": hostname,\n \"host\": host,\n \"port\": port,\n \"family\": family,\n \"proto\": proto,\n \"flags\": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,\n }\n )\n\n return hosts\n\n async def close(self) -> None:\n pass\n\n\nclass AsyncResolver(AbstractResolver):\n \"\"\"Use the `aiodns` package to make asynchronous DNS lookups\"\"\"\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n if aiodns is None:\n raise RuntimeError(\"Resolver requires aiodns library\")\n\n self._loop = asyncio.get_running_loop()\n self._resolver = aiodns.DNSResolver(*args, loop=self._loop, **kwargs)\n\n async def resolve(\n self, host: str, port: int = 0, family: int = socket.AF_INET\n ) -> List[Dict[str, Any]]:\n try:\n resp = await self._resolver.gethostbyname(host, family)\n except aiodns.error.DNSError as exc:\n msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n raise OSError(msg) from exc\n hosts = []\n for address in resp.addresses:\n hosts.append(\n {\n \"hostname\": host,\n \"host\": address,\n \"port\": port,\n \"family\": family,\n \"proto\": 0,\n \"flags\": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,\n }\n )\n\n if not hosts:\n raise OSError(\"DNS lookup failed\")\n\n return hosts\n\n async def close(self) -> None:\n self._resolver.cancel()\n\n\n_DefaultType = Type[Union[AsyncResolver, ThreadedResolver]]\nDefaultResolver: _DefaultType = AsyncResolver if aiodns_default else ThreadedResolver\n", "path": "aiohttp/resolver.py"}]}
| 2,363 | 401 |
gh_patches_debug_33575
|
rasdani/github-patches
|
git_diff
|
Cog-Creators__Red-DiscordBot-1770
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[V3] CLI prefix display
### Type:
- [ ] Suggestion
- [X] Bug
### Brief description of the problem
When using the `--prefix` flag, the bot's loaded message still displays the prefix(es) it had stored prior (in some cases, this could be none)
### Expected behavior
The new prefix(es) should be displayed
### Actual behavior
New prefix is usable, but not reflected in the CLI
### Steps to reproduce
1. launch the bot using the prefix flag and observe
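For the prefix display, one plausible direction (an illustration only — not necessarily the patch the repository applied) is for `on_ready` to prefer prefixes supplied via `--prefix` over the stored ones; the `events.py` excerpt below builds its info box from `bot.db.prefix()` alone. A tiny self-contained sketch of the idea, using hypothetical placeholder values:

```python
# Hypothetical values standing in for bot.db.prefix() and cli_flags.prefix.
stored_prefixes = ["!"]   # what the config has saved
cli_prefixes = ["?"]      # what --prefix put on the command line

# Prefer the CLI-supplied prefixes when present, fall back to the stored ones.
prefixes = cli_prefixes or stored_prefixes
print("Prefixes: {}".format(", ".join(prefixes)))  # Prefixes: ?
```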
[v3] - Invalid version number.
# Other bugs
<!--
Did you find a bug with something other than a command? Fill out the following:
-->
#### What were you trying to do?
Load the bot
#### What were you expecting to happen?
Bot stays quiet, or shows a splash
#### What actually happened?
```
[29/05/2018 20:59] ERROR events on_error 181: Exception in on_ready
Traceback (most recent call last):
File "/root/.pyenv/versions/3.6.5/lib/python3.6/site-packages/discord/client.py", line 224, in _run_event
yield from coro(*args, **kwargs)
File "/root/.pyenv/versions/3.6.5/lib/python3.6/site-packages/redbot/core/events.py", line 125, in on_ready
if StrictVersion(data["info"]["version"]) > StrictVersion(red_version):
File "/root/.pyenv/versions/3.6.5/lib/python3.6/distutils/version.py", line 40, in __init__
self.parse(vstring)
File "/root/.pyenv/versions/3.6.5/lib/python3.6/distutils/version.py", line 137, in parse
raise ValueError("invalid version number '%s'" % vstring)
ValueError: invalid version number '3.0.0b15.post2'
```
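The failing call here is `StrictVersion(...)` on the value `'3.0.0b15.post2'`: `distutils.version.StrictVersion` only accepts plain `N.N[.N]` versions with an optional `aN`/`bN` pre-release tag, so a `.postN` suffix cannot be parsed. A small illustration of the parse failure (not from the report):

```python
from distutils.version import StrictVersion

print(StrictVersion("3.0.0b15"))     # parses: "b15" is an accepted pre-release tag
try:
    StrictVersion("3.0.0b15.post2")  # the value shown in the traceback
except ValueError as exc:
    print(exc)                       # invalid version number '3.0.0b15.post2'
```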
#### How can we reproduce this issue?
Unsure. I just loaded the bot on a fresh and clean server.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redbot/core/events.py`
Content:
```
1 import sys
2 import codecs
3 import datetime
4 import logging
5 from distutils.version import StrictVersion
6
7 import aiohttp
8 import pkg_resources
9 import traceback
10 from pkg_resources import DistributionNotFound
11
12
13 import discord
14 from discord.ext import commands
15
16 from . import __version__
17 from .data_manager import storage_type
18 from .utils.chat_formatting import inline, bordered, pagify, box
19 from .utils import fuzzy_command_search
20 from colorama import Fore, Style, init
21
22 log = logging.getLogger("red")
23 sentry_log = logging.getLogger("red.sentry")
24 init()
25
26 INTRO = """
27 ______ _ ______ _ _ ______ _
28 | ___ \ | | | _ (_) | | | ___ \ | |
29 | |_/ /___ __| | ______ | | | |_ ___ ___ ___ _ __ __| | | |_/ / ___ | |_
30 | // _ \/ _` | |______| | | | | / __|/ __/ _ \| '__/ _` | | ___ \/ _ \| __|
31 | |\ \ __/ (_| | | |/ /| \__ \ (_| (_) | | | (_| | | |_/ / (_) | |_
32 \_| \_\___|\__,_| |___/ |_|___/\___\___/|_| \__,_| \____/ \___/ \__|
33 """
34
35
36 def should_log_sentry(exception) -> bool:
37 e = exception
38 while e.__cause__ is not None:
39 e = e.__cause__
40
41 tb = e.__traceback__
42 tb_frame = None
43 while tb is not None:
44 tb_frame = tb.tb_frame
45 tb = tb.tb_next
46
47 module = tb_frame.f_globals.get("__name__")
48 return module.startswith("redbot")
49
50
51 def init_events(bot, cli_flags):
52 @bot.event
53 async def on_connect():
54 if bot.uptime is None:
55 print("Connected to Discord. Getting ready...")
56
57 @bot.event
58 async def on_ready():
59 if bot.uptime is not None:
60 return
61
62 bot.uptime = datetime.datetime.utcnow()
63 packages = []
64
65 if cli_flags.no_cogs is False:
66 packages.extend(await bot.db.packages())
67
68 if cli_flags.load_cogs:
69 packages.extend(cli_flags.load_cogs)
70
71 if packages:
72 to_remove = []
73 print("Loading packages...")
74 for package in packages:
75 try:
76 spec = await bot.cog_mgr.find_cog(package)
77 await bot.load_extension(spec)
78 except Exception as e:
79 log.exception("Failed to load package {}".format(package), exc_info=e)
80 await bot.remove_loaded_package(package)
81 to_remove.append(package)
82 for package in to_remove:
83 packages.remove(package)
84 if packages:
85 print("Loaded packages: " + ", ".join(packages))
86
87 guilds = len(bot.guilds)
88 users = len(set([m for m in bot.get_all_members()]))
89
90 try:
91 data = await bot.application_info()
92 invite_url = discord.utils.oauth_url(data.id)
93 except:
94 if bot.user.bot:
95 invite_url = "Could not fetch invite url"
96 else:
97 invite_url = None
98
99 prefixes = await bot.db.prefix()
100 lang = await bot.db.locale()
101 red_version = __version__
102 red_pkg = pkg_resources.get_distribution("Red-DiscordBot")
103 dpy_version = discord.__version__
104
105 INFO = [
106 str(bot.user),
107 "Prefixes: {}".format(", ".join(prefixes)),
108 "Language: {}".format(lang),
109 "Red Bot Version: {}".format(red_version),
110 "Discord.py Version: {}".format(dpy_version),
111 "Shards: {}".format(bot.shard_count),
112 ]
113
114 if guilds:
115 INFO.extend(("Servers: {}".format(guilds), "Users: {}".format(users)))
116 else:
117 print("Ready. I'm not in any server yet!")
118
119 INFO.append("{} cogs with {} commands".format(len(bot.cogs), len(bot.commands)))
120
121 async with aiohttp.ClientSession() as session:
122 async with session.get("https://pypi.python.org/pypi/red-discordbot/json") as r:
123 data = await r.json()
124 if StrictVersion(data["info"]["version"]) > StrictVersion(red_version):
125 INFO.append(
126 "Outdated version! {} is available "
127 "but you're using {}".format(data["info"]["version"], red_version)
128 )
129 owner = discord.utils.get(bot.get_all_members(), id=bot.owner_id)
130 try:
131 await owner.send(
132 "Your Red instance is out of date! {} is the current "
133 "version, however you are using {}!".format(
134 data["info"]["version"], red_version
135 )
136 )
137 except:
138 pass
139 INFO2 = []
140
141 sentry = await bot.db.enable_sentry()
142 mongo_enabled = storage_type() != "JSON"
143 reqs_installed = {"voice": None, "docs": None, "test": None}
144 for key in reqs_installed.keys():
145 reqs = [x.name for x in red_pkg._dep_map[key]]
146 try:
147 pkg_resources.require(reqs)
148 except DistributionNotFound:
149 reqs_installed[key] = False
150 else:
151 reqs_installed[key] = True
152
153 options = (
154 ("Error Reporting", sentry),
155 ("MongoDB", mongo_enabled),
156 ("Voice", reqs_installed["voice"]),
157 ("Docs", reqs_installed["docs"]),
158 ("Tests", reqs_installed["test"]),
159 )
160
161 on_symbol, off_symbol, ascii_border = _get_startup_screen_specs()
162
163 for option, enabled in options:
164 enabled = on_symbol if enabled else off_symbol
165 INFO2.append("{} {}".format(enabled, option))
166
167 print(Fore.RED + INTRO)
168 print(Style.RESET_ALL)
169 print(bordered(INFO, INFO2, ascii_border=ascii_border))
170
171 if invite_url:
172 print("\nInvite URL: {}\n".format(invite_url))
173
174 bot.color = discord.Colour(await bot.db.color())
175 if bot.rpc_enabled:
176 await bot.rpc.initialize()
177
178 @bot.event
179 async def on_error(event_method, *args, **kwargs):
180 sentry_log.exception("Exception in {}".format(event_method))
181
182 @bot.event
183 async def on_command_error(ctx, error):
184 if isinstance(error, commands.MissingRequiredArgument):
185 await ctx.send_help()
186 elif isinstance(error, commands.BadArgument):
187 await ctx.send_help()
188 elif isinstance(error, commands.DisabledCommand):
189 await ctx.send("That command is disabled.")
190 elif isinstance(error, commands.CommandInvokeError):
191 # Need to test if the following still works
192 """
193 no_dms = "Cannot send messages to this user"
194 is_help_cmd = ctx.command.qualified_name == "help"
195 is_forbidden = isinstance(error.original, discord.Forbidden)
196 if is_help_cmd and is_forbidden and error.original.text == no_dms:
197 msg = ("I couldn't send the help message to you in DM. Either"
198 " you blocked me or you disabled DMs in this server.")
199 await ctx.send(msg)
200 return
201 """
202 log.exception(
203 "Exception in command '{}'" "".format(ctx.command.qualified_name),
204 exc_info=error.original,
205 )
206 if should_log_sentry(error):
207 sentry_log.exception(
208 "Exception in command '{}'" "".format(ctx.command.qualified_name),
209 exc_info=error.original,
210 )
211
212 message = (
213 "Error in command '{}'. Check your console or "
214 "logs for details."
215 "".format(ctx.command.qualified_name)
216 )
217 exception_log = "Exception in command '{}'\n" "".format(ctx.command.qualified_name)
218 exception_log += "".join(
219 traceback.format_exception(type(error), error, error.__traceback__)
220 )
221 bot._last_exception = exception_log
222 if not hasattr(ctx.cog, "_{0.command.cog_name}__error".format(ctx)):
223 await ctx.send(inline(message))
224 elif isinstance(error, commands.CommandNotFound):
225 term = ctx.invoked_with + " "
226 if len(ctx.args) > 1:
227 term += " ".join(ctx.args[1:])
228 await ctx.maybe_send_embed(fuzzy_command_search(ctx, ctx.invoked_with))
229 elif isinstance(error, commands.CheckFailure):
230 pass
231 elif isinstance(error, commands.NoPrivateMessage):
232 await ctx.send("That command is not available in DMs.")
233 elif isinstance(error, commands.CommandOnCooldown):
234 await ctx.send(
235 "This command is on cooldown. " "Try again in {:.2f}s" "".format(error.retry_after)
236 )
237 else:
238 log.exception(type(error).__name__, exc_info=error)
239 try:
240 sentry_error = error.original
241 except AttributeError:
242 sentry_error = error
243
244 if should_log_sentry(sentry_error):
245 sentry_log.exception("Unhandled command error.", exc_info=sentry_error)
246
247 @bot.event
248 async def on_message(message):
249 bot.counter["messages_read"] += 1
250 await bot.process_commands(message)
251
252 @bot.event
253 async def on_resumed():
254 bot.counter["sessions_resumed"] += 1
255
256 @bot.event
257 async def on_command(command):
258 bot.counter["processed_commands"] += 1
259
260
261 def _get_startup_screen_specs():
262 """Get specs for displaying the startup screen on stdout.
263
264 This is so we don't get encoding errors when trying to print unicode
265 emojis to stdout (particularly with Windows Command Prompt).
266
267 Returns
268 -------
269 `tuple`
270 Tuple in the form (`str`, `str`, `bool`) containing (in order) the
271 on symbol, off symbol and whether or not the border should be pure ascii.
272
273 """
274 encoder = codecs.getencoder(sys.stdout.encoding)
275 check_mark = "\N{SQUARE ROOT}"
276 try:
277 encoder(check_mark)
278 except UnicodeEncodeError:
279 on_symbol = "[X]"
280 off_symbol = "[ ]"
281 else:
282 on_symbol = check_mark
283 off_symbol = "X"
284
285 try:
286 encoder("┌┐└┘─│") # border symbols
287 except UnicodeEncodeError:
288 ascii_border = True
289 else:
290 ascii_border = False
291
292 return on_symbol, off_symbol, ascii_border
293
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/redbot/core/events.py b/redbot/core/events.py
--- a/redbot/core/events.py
+++ b/redbot/core/events.py
@@ -96,7 +96,7 @@
else:
invite_url = None
- prefixes = await bot.db.prefix()
+ prefixes = cli_flags.prefix or (await bot.db.prefix())
lang = await bot.db.locale()
red_version = __version__
red_pkg = pkg_resources.get_distribution("Red-DiscordBot")
@@ -118,24 +118,24 @@
INFO.append("{} cogs with {} commands".format(len(bot.cogs), len(bot.commands)))
- async with aiohttp.ClientSession() as session:
- async with session.get("https://pypi.python.org/pypi/red-discordbot/json") as r:
- data = await r.json()
- if StrictVersion(data["info"]["version"]) > StrictVersion(red_version):
- INFO.append(
- "Outdated version! {} is available "
- "but you're using {}".format(data["info"]["version"], red_version)
- )
- owner = discord.utils.get(bot.get_all_members(), id=bot.owner_id)
- try:
+ try:
+ async with aiohttp.ClientSession() as session:
+ async with session.get("https://pypi.python.org/pypi/red-discordbot/json") as r:
+ data = await r.json()
+ if StrictVersion(data["info"]["version"]) > StrictVersion(red_version):
+ INFO.append(
+ "Outdated version! {} is available "
+ "but you're using {}".format(data["info"]["version"], red_version)
+ )
+ owner = discord.utils.get(bot.get_all_members(), id=bot.owner_id)
await owner.send(
"Your Red instance is out of date! {} is the current "
"version, however you are using {}!".format(
data["info"]["version"], red_version
)
)
- except:
- pass
+ except:
+ pass
INFO2 = []
sentry = await bot.db.enable_sentry()
|
{"golden_diff": "diff --git a/redbot/core/events.py b/redbot/core/events.py\n--- a/redbot/core/events.py\n+++ b/redbot/core/events.py\n@@ -96,7 +96,7 @@\n else:\n invite_url = None\n \n- prefixes = await bot.db.prefix()\n+ prefixes = cli_flags.prefix or (await bot.db.prefix())\n lang = await bot.db.locale()\n red_version = __version__\n red_pkg = pkg_resources.get_distribution(\"Red-DiscordBot\")\n@@ -118,24 +118,24 @@\n \n INFO.append(\"{} cogs with {} commands\".format(len(bot.cogs), len(bot.commands)))\n \n- async with aiohttp.ClientSession() as session:\n- async with session.get(\"https://pypi.python.org/pypi/red-discordbot/json\") as r:\n- data = await r.json()\n- if StrictVersion(data[\"info\"][\"version\"]) > StrictVersion(red_version):\n- INFO.append(\n- \"Outdated version! {} is available \"\n- \"but you're using {}\".format(data[\"info\"][\"version\"], red_version)\n- )\n- owner = discord.utils.get(bot.get_all_members(), id=bot.owner_id)\n- try:\n+ try:\n+ async with aiohttp.ClientSession() as session:\n+ async with session.get(\"https://pypi.python.org/pypi/red-discordbot/json\") as r:\n+ data = await r.json()\n+ if StrictVersion(data[\"info\"][\"version\"]) > StrictVersion(red_version):\n+ INFO.append(\n+ \"Outdated version! {} is available \"\n+ \"but you're using {}\".format(data[\"info\"][\"version\"], red_version)\n+ )\n+ owner = discord.utils.get(bot.get_all_members(), id=bot.owner_id)\n await owner.send(\n \"Your Red instance is out of date! {} is the current \"\n \"version, however you are using {}!\".format(\n data[\"info\"][\"version\"], red_version\n )\n )\n- except:\n- pass\n+ except:\n+ pass\n INFO2 = []\n \n sentry = await bot.db.enable_sentry()\n", "issue": "[V3] CLI prefix display\n\r\n\r\n### Type:\r\n\r\n- [ ] Suggestion\r\n- [X] Bug\r\n\r\n### Brief description of the problem\r\nWhen using the `--prefix` flag, the bot's loaded message still displays the prefix(es) it had stored prior (in some cases, this could be none)\r\n### Expected behavior\r\nThe new prefix(es) should be displayed\r\n### Actual behavior\r\nNew prefix is usable, but not reflected in the CLI\r\n### Steps to reproduce\r\n\r\n1. launch the bot using the prefix flag and observe\r\n\r\n\n[v3] - Invalid version number.\n# Other bugs\r\n\r\n<!-- \r\nDid you find a bug with something other than a command? Fill out the following:\r\n-->\r\n\r\n#### What were you trying to do?\r\nLoad the bot\r\n\r\n#### What were you expecting to happen?\r\n\r\nBot stays quiet, or shows a splash\r\n\r\n#### What actually happened?\r\n\r\n```[29/05/2018 20:59] ERROR events on_error 181: Exception in on_ready\r\nTraceback (most recent call last):\r\n File \"/root/.pyenv/versions/3.6.5/lib/python3.6/site-packages/discord/client.py\", line 224, in _run_event\r\n yield from coro(*args, **kwargs)\r\n File \"/root/.pyenv/versions/3.6.5/lib/python3.6/site-packages/redbot/core/events.py\", line 125, in on_ready\r\n if StrictVersion(data[\"info\"][\"version\"]) > StrictVersion(red_version):\r\n File \"/root/.pyenv/versions/3.6.5/lib/python3.6/distutils/version.py\", line 40, in __init__\r\n self.parse(vstring)\r\n File \"/root/.pyenv/versions/3.6.5/lib/python3.6/distutils/version.py\", line 137, in parse\r\n raise ValueError(\"invalid version number '%s'\" % vstring)\r\nValueError: invalid version number '3.0.0b15.post2'\r\n```\r\n\r\n#### How can we reproduce this issue?\r\n\r\nUnsure. 
I just loaded the bot on a fresh and clean server.\r\n\n", "before_files": [{"content": "import sys\nimport codecs\nimport datetime\nimport logging\nfrom distutils.version import StrictVersion\n\nimport aiohttp\nimport pkg_resources\nimport traceback\nfrom pkg_resources import DistributionNotFound\n\n\nimport discord\nfrom discord.ext import commands\n\nfrom . import __version__\nfrom .data_manager import storage_type\nfrom .utils.chat_formatting import inline, bordered, pagify, box\nfrom .utils import fuzzy_command_search\nfrom colorama import Fore, Style, init\n\nlog = logging.getLogger(\"red\")\nsentry_log = logging.getLogger(\"red.sentry\")\ninit()\n\nINTRO = \"\"\"\n______ _ ______ _ _ ______ _ \n| ___ \\ | | | _ (_) | | | ___ \\ | | \n| |_/ /___ __| | ______ | | | |_ ___ ___ ___ _ __ __| | | |_/ / ___ | |_ \n| // _ \\/ _` | |______| | | | | / __|/ __/ _ \\| '__/ _` | | ___ \\/ _ \\| __|\n| |\\ \\ __/ (_| | | |/ /| \\__ \\ (_| (_) | | | (_| | | |_/ / (_) | |_ \n\\_| \\_\\___|\\__,_| |___/ |_|___/\\___\\___/|_| \\__,_| \\____/ \\___/ \\__|\n\"\"\"\n\n\ndef should_log_sentry(exception) -> bool:\n e = exception\n while e.__cause__ is not None:\n e = e.__cause__\n\n tb = e.__traceback__\n tb_frame = None\n while tb is not None:\n tb_frame = tb.tb_frame\n tb = tb.tb_next\n\n module = tb_frame.f_globals.get(\"__name__\")\n return module.startswith(\"redbot\")\n\n\ndef init_events(bot, cli_flags):\n @bot.event\n async def on_connect():\n if bot.uptime is None:\n print(\"Connected to Discord. Getting ready...\")\n\n @bot.event\n async def on_ready():\n if bot.uptime is not None:\n return\n\n bot.uptime = datetime.datetime.utcnow()\n packages = []\n\n if cli_flags.no_cogs is False:\n packages.extend(await bot.db.packages())\n\n if cli_flags.load_cogs:\n packages.extend(cli_flags.load_cogs)\n\n if packages:\n to_remove = []\n print(\"Loading packages...\")\n for package in packages:\n try:\n spec = await bot.cog_mgr.find_cog(package)\n await bot.load_extension(spec)\n except Exception as e:\n log.exception(\"Failed to load package {}\".format(package), exc_info=e)\n await bot.remove_loaded_package(package)\n to_remove.append(package)\n for package in to_remove:\n packages.remove(package)\n if packages:\n print(\"Loaded packages: \" + \", \".join(packages))\n\n guilds = len(bot.guilds)\n users = len(set([m for m in bot.get_all_members()]))\n\n try:\n data = await bot.application_info()\n invite_url = discord.utils.oauth_url(data.id)\n except:\n if bot.user.bot:\n invite_url = \"Could not fetch invite url\"\n else:\n invite_url = None\n\n prefixes = await bot.db.prefix()\n lang = await bot.db.locale()\n red_version = __version__\n red_pkg = pkg_resources.get_distribution(\"Red-DiscordBot\")\n dpy_version = discord.__version__\n\n INFO = [\n str(bot.user),\n \"Prefixes: {}\".format(\", \".join(prefixes)),\n \"Language: {}\".format(lang),\n \"Red Bot Version: {}\".format(red_version),\n \"Discord.py Version: {}\".format(dpy_version),\n \"Shards: {}\".format(bot.shard_count),\n ]\n\n if guilds:\n INFO.extend((\"Servers: {}\".format(guilds), \"Users: {}\".format(users)))\n else:\n print(\"Ready. I'm not in any server yet!\")\n\n INFO.append(\"{} cogs with {} commands\".format(len(bot.cogs), len(bot.commands)))\n\n async with aiohttp.ClientSession() as session:\n async with session.get(\"https://pypi.python.org/pypi/red-discordbot/json\") as r:\n data = await r.json()\n if StrictVersion(data[\"info\"][\"version\"]) > StrictVersion(red_version):\n INFO.append(\n \"Outdated version! 
{} is available \"\n \"but you're using {}\".format(data[\"info\"][\"version\"], red_version)\n )\n owner = discord.utils.get(bot.get_all_members(), id=bot.owner_id)\n try:\n await owner.send(\n \"Your Red instance is out of date! {} is the current \"\n \"version, however you are using {}!\".format(\n data[\"info\"][\"version\"], red_version\n )\n )\n except:\n pass\n INFO2 = []\n\n sentry = await bot.db.enable_sentry()\n mongo_enabled = storage_type() != \"JSON\"\n reqs_installed = {\"voice\": None, \"docs\": None, \"test\": None}\n for key in reqs_installed.keys():\n reqs = [x.name for x in red_pkg._dep_map[key]]\n try:\n pkg_resources.require(reqs)\n except DistributionNotFound:\n reqs_installed[key] = False\n else:\n reqs_installed[key] = True\n\n options = (\n (\"Error Reporting\", sentry),\n (\"MongoDB\", mongo_enabled),\n (\"Voice\", reqs_installed[\"voice\"]),\n (\"Docs\", reqs_installed[\"docs\"]),\n (\"Tests\", reqs_installed[\"test\"]),\n )\n\n on_symbol, off_symbol, ascii_border = _get_startup_screen_specs()\n\n for option, enabled in options:\n enabled = on_symbol if enabled else off_symbol\n INFO2.append(\"{} {}\".format(enabled, option))\n\n print(Fore.RED + INTRO)\n print(Style.RESET_ALL)\n print(bordered(INFO, INFO2, ascii_border=ascii_border))\n\n if invite_url:\n print(\"\\nInvite URL: {}\\n\".format(invite_url))\n\n bot.color = discord.Colour(await bot.db.color())\n if bot.rpc_enabled:\n await bot.rpc.initialize()\n\n @bot.event\n async def on_error(event_method, *args, **kwargs):\n sentry_log.exception(\"Exception in {}\".format(event_method))\n\n @bot.event\n async def on_command_error(ctx, error):\n if isinstance(error, commands.MissingRequiredArgument):\n await ctx.send_help()\n elif isinstance(error, commands.BadArgument):\n await ctx.send_help()\n elif isinstance(error, commands.DisabledCommand):\n await ctx.send(\"That command is disabled.\")\n elif isinstance(error, commands.CommandInvokeError):\n # Need to test if the following still works\n \"\"\"\n no_dms = \"Cannot send messages to this user\"\n is_help_cmd = ctx.command.qualified_name == \"help\"\n is_forbidden = isinstance(error.original, discord.Forbidden)\n if is_help_cmd and is_forbidden and error.original.text == no_dms:\n msg = (\"I couldn't send the help message to you in DM. Either\"\n \" you blocked me or you disabled DMs in this server.\")\n await ctx.send(msg)\n return\n \"\"\"\n log.exception(\n \"Exception in command '{}'\" \"\".format(ctx.command.qualified_name),\n exc_info=error.original,\n )\n if should_log_sentry(error):\n sentry_log.exception(\n \"Exception in command '{}'\" \"\".format(ctx.command.qualified_name),\n exc_info=error.original,\n )\n\n message = (\n \"Error in command '{}'. 
Check your console or \"\n \"logs for details.\"\n \"\".format(ctx.command.qualified_name)\n )\n exception_log = \"Exception in command '{}'\\n\" \"\".format(ctx.command.qualified_name)\n exception_log += \"\".join(\n traceback.format_exception(type(error), error, error.__traceback__)\n )\n bot._last_exception = exception_log\n if not hasattr(ctx.cog, \"_{0.command.cog_name}__error\".format(ctx)):\n await ctx.send(inline(message))\n elif isinstance(error, commands.CommandNotFound):\n term = ctx.invoked_with + \" \"\n if len(ctx.args) > 1:\n term += \" \".join(ctx.args[1:])\n await ctx.maybe_send_embed(fuzzy_command_search(ctx, ctx.invoked_with))\n elif isinstance(error, commands.CheckFailure):\n pass\n elif isinstance(error, commands.NoPrivateMessage):\n await ctx.send(\"That command is not available in DMs.\")\n elif isinstance(error, commands.CommandOnCooldown):\n await ctx.send(\n \"This command is on cooldown. \" \"Try again in {:.2f}s\" \"\".format(error.retry_after)\n )\n else:\n log.exception(type(error).__name__, exc_info=error)\n try:\n sentry_error = error.original\n except AttributeError:\n sentry_error = error\n\n if should_log_sentry(sentry_error):\n sentry_log.exception(\"Unhandled command error.\", exc_info=sentry_error)\n\n @bot.event\n async def on_message(message):\n bot.counter[\"messages_read\"] += 1\n await bot.process_commands(message)\n\n @bot.event\n async def on_resumed():\n bot.counter[\"sessions_resumed\"] += 1\n\n @bot.event\n async def on_command(command):\n bot.counter[\"processed_commands\"] += 1\n\n\ndef _get_startup_screen_specs():\n \"\"\"Get specs for displaying the startup screen on stdout.\n\n This is so we don't get encoding errors when trying to print unicode\n emojis to stdout (particularly with Windows Command Prompt).\n\n Returns\n -------\n `tuple`\n Tuple in the form (`str`, `str`, `bool`) containing (in order) the\n on symbol, off symbol and whether or not the border should be pure ascii.\n\n \"\"\"\n encoder = codecs.getencoder(sys.stdout.encoding)\n check_mark = \"\\N{SQUARE ROOT}\"\n try:\n encoder(check_mark)\n except UnicodeEncodeError:\n on_symbol = \"[X]\"\n off_symbol = \"[ ]\"\n else:\n on_symbol = check_mark\n off_symbol = \"X\"\n\n try:\n encoder(\"\u250c\u2510\u2514\u2518\u2500\u2502\") # border symbols\n except UnicodeEncodeError:\n ascii_border = True\n else:\n ascii_border = False\n\n return on_symbol, off_symbol, ascii_border\n", "path": "redbot/core/events.py"}], "after_files": [{"content": "import sys\nimport codecs\nimport datetime\nimport logging\nfrom distutils.version import StrictVersion\n\nimport aiohttp\nimport pkg_resources\nimport traceback\nfrom pkg_resources import DistributionNotFound\n\n\nimport discord\nfrom discord.ext import commands\n\nfrom . 
import __version__\nfrom .data_manager import storage_type\nfrom .utils.chat_formatting import inline, bordered, pagify, box\nfrom .utils import fuzzy_command_search\nfrom colorama import Fore, Style, init\n\nlog = logging.getLogger(\"red\")\nsentry_log = logging.getLogger(\"red.sentry\")\ninit()\n\nINTRO = \"\"\"\n______ _ ______ _ _ ______ _ \n| ___ \\ | | | _ (_) | | | ___ \\ | | \n| |_/ /___ __| | ______ | | | |_ ___ ___ ___ _ __ __| | | |_/ / ___ | |_ \n| // _ \\/ _` | |______| | | | | / __|/ __/ _ \\| '__/ _` | | ___ \\/ _ \\| __|\n| |\\ \\ __/ (_| | | |/ /| \\__ \\ (_| (_) | | | (_| | | |_/ / (_) | |_ \n\\_| \\_\\___|\\__,_| |___/ |_|___/\\___\\___/|_| \\__,_| \\____/ \\___/ \\__|\n\"\"\"\n\n\ndef should_log_sentry(exception) -> bool:\n e = exception\n while e.__cause__ is not None:\n e = e.__cause__\n\n tb = e.__traceback__\n tb_frame = None\n while tb is not None:\n tb_frame = tb.tb_frame\n tb = tb.tb_next\n\n module = tb_frame.f_globals.get(\"__name__\")\n return module.startswith(\"redbot\")\n\n\ndef init_events(bot, cli_flags):\n @bot.event\n async def on_connect():\n if bot.uptime is None:\n print(\"Connected to Discord. Getting ready...\")\n\n @bot.event\n async def on_ready():\n if bot.uptime is not None:\n return\n\n bot.uptime = datetime.datetime.utcnow()\n packages = []\n\n if cli_flags.no_cogs is False:\n packages.extend(await bot.db.packages())\n\n if cli_flags.load_cogs:\n packages.extend(cli_flags.load_cogs)\n\n if packages:\n to_remove = []\n print(\"Loading packages...\")\n for package in packages:\n try:\n spec = await bot.cog_mgr.find_cog(package)\n await bot.load_extension(spec)\n except Exception as e:\n log.exception(\"Failed to load package {}\".format(package), exc_info=e)\n await bot.remove_loaded_package(package)\n to_remove.append(package)\n for package in to_remove:\n packages.remove(package)\n if packages:\n print(\"Loaded packages: \" + \", \".join(packages))\n\n guilds = len(bot.guilds)\n users = len(set([m for m in bot.get_all_members()]))\n\n try:\n data = await bot.application_info()\n invite_url = discord.utils.oauth_url(data.id)\n except:\n if bot.user.bot:\n invite_url = \"Could not fetch invite url\"\n else:\n invite_url = None\n\n prefixes = cli_flags.prefix or (await bot.db.prefix())\n lang = await bot.db.locale()\n red_version = __version__\n red_pkg = pkg_resources.get_distribution(\"Red-DiscordBot\")\n dpy_version = discord.__version__\n\n INFO = [\n str(bot.user),\n \"Prefixes: {}\".format(\", \".join(prefixes)),\n \"Language: {}\".format(lang),\n \"Red Bot Version: {}\".format(red_version),\n \"Discord.py Version: {}\".format(dpy_version),\n \"Shards: {}\".format(bot.shard_count),\n ]\n\n if guilds:\n INFO.extend((\"Servers: {}\".format(guilds), \"Users: {}\".format(users)))\n else:\n print(\"Ready. I'm not in any server yet!\")\n\n INFO.append(\"{} cogs with {} commands\".format(len(bot.cogs), len(bot.commands)))\n\n try:\n async with aiohttp.ClientSession() as session:\n async with session.get(\"https://pypi.python.org/pypi/red-discordbot/json\") as r:\n data = await r.json()\n if StrictVersion(data[\"info\"][\"version\"]) > StrictVersion(red_version):\n INFO.append(\n \"Outdated version! {} is available \"\n \"but you're using {}\".format(data[\"info\"][\"version\"], red_version)\n )\n owner = discord.utils.get(bot.get_all_members(), id=bot.owner_id)\n await owner.send(\n \"Your Red instance is out of date! 
{} is the current \"\n \"version, however you are using {}!\".format(\n data[\"info\"][\"version\"], red_version\n )\n )\n except:\n pass\n INFO2 = []\n\n sentry = await bot.db.enable_sentry()\n mongo_enabled = storage_type() != \"JSON\"\n reqs_installed = {\"voice\": None, \"docs\": None, \"test\": None}\n for key in reqs_installed.keys():\n reqs = [x.name for x in red_pkg._dep_map[key]]\n try:\n pkg_resources.require(reqs)\n except DistributionNotFound:\n reqs_installed[key] = False\n else:\n reqs_installed[key] = True\n\n options = (\n (\"Error Reporting\", sentry),\n (\"MongoDB\", mongo_enabled),\n (\"Voice\", reqs_installed[\"voice\"]),\n (\"Docs\", reqs_installed[\"docs\"]),\n (\"Tests\", reqs_installed[\"test\"]),\n )\n\n on_symbol, off_symbol, ascii_border = _get_startup_screen_specs()\n\n for option, enabled in options:\n enabled = on_symbol if enabled else off_symbol\n INFO2.append(\"{} {}\".format(enabled, option))\n\n print(Fore.RED + INTRO)\n print(Style.RESET_ALL)\n print(bordered(INFO, INFO2, ascii_border=ascii_border))\n\n if invite_url:\n print(\"\\nInvite URL: {}\\n\".format(invite_url))\n\n bot.color = discord.Colour(await bot.db.color())\n if bot.rpc_enabled:\n await bot.rpc.initialize()\n\n @bot.event\n async def on_error(event_method, *args, **kwargs):\n sentry_log.exception(\"Exception in {}\".format(event_method))\n\n @bot.event\n async def on_command_error(ctx, error):\n if isinstance(error, commands.MissingRequiredArgument):\n await ctx.send_help()\n elif isinstance(error, commands.BadArgument):\n await ctx.send_help()\n elif isinstance(error, commands.DisabledCommand):\n await ctx.send(\"That command is disabled.\")\n elif isinstance(error, commands.CommandInvokeError):\n # Need to test if the following still works\n \"\"\"\n no_dms = \"Cannot send messages to this user\"\n is_help_cmd = ctx.command.qualified_name == \"help\"\n is_forbidden = isinstance(error.original, discord.Forbidden)\n if is_help_cmd and is_forbidden and error.original.text == no_dms:\n msg = (\"I couldn't send the help message to you in DM. Either\"\n \" you blocked me or you disabled DMs in this server.\")\n await ctx.send(msg)\n return\n \"\"\"\n log.exception(\n \"Exception in command '{}'\" \"\".format(ctx.command.qualified_name),\n exc_info=error.original,\n )\n if should_log_sentry(error):\n sentry_log.exception(\n \"Exception in command '{}'\" \"\".format(ctx.command.qualified_name),\n exc_info=error.original,\n )\n\n message = (\n \"Error in command '{}'. Check your console or \"\n \"logs for details.\"\n \"\".format(ctx.command.qualified_name)\n )\n exception_log = \"Exception in command '{}'\\n\" \"\".format(ctx.command.qualified_name)\n exception_log += \"\".join(\n traceback.format_exception(type(error), error, error.__traceback__)\n )\n bot._last_exception = exception_log\n if not hasattr(ctx.cog, \"_{0.command.cog_name}__error\".format(ctx)):\n await ctx.send(inline(message))\n elif isinstance(error, commands.CommandNotFound):\n term = ctx.invoked_with + \" \"\n if len(ctx.args) > 1:\n term += \" \".join(ctx.args[1:])\n await ctx.maybe_send_embed(fuzzy_command_search(ctx, ctx.invoked_with))\n elif isinstance(error, commands.CheckFailure):\n pass\n elif isinstance(error, commands.NoPrivateMessage):\n await ctx.send(\"That command is not available in DMs.\")\n elif isinstance(error, commands.CommandOnCooldown):\n await ctx.send(\n \"This command is on cooldown. 
\" \"Try again in {:.2f}s\" \"\".format(error.retry_after)\n )\n else:\n log.exception(type(error).__name__, exc_info=error)\n try:\n sentry_error = error.original\n except AttributeError:\n sentry_error = error\n\n if should_log_sentry(sentry_error):\n sentry_log.exception(\"Unhandled command error.\", exc_info=sentry_error)\n\n @bot.event\n async def on_message(message):\n bot.counter[\"messages_read\"] += 1\n await bot.process_commands(message)\n\n @bot.event\n async def on_resumed():\n bot.counter[\"sessions_resumed\"] += 1\n\n @bot.event\n async def on_command(command):\n bot.counter[\"processed_commands\"] += 1\n\n\ndef _get_startup_screen_specs():\n \"\"\"Get specs for displaying the startup screen on stdout.\n\n This is so we don't get encoding errors when trying to print unicode\n emojis to stdout (particularly with Windows Command Prompt).\n\n Returns\n -------\n `tuple`\n Tuple in the form (`str`, `str`, `bool`) containing (in order) the\n on symbol, off symbol and whether or not the border should be pure ascii.\n\n \"\"\"\n encoder = codecs.getencoder(sys.stdout.encoding)\n check_mark = \"\\N{SQUARE ROOT}\"\n try:\n encoder(check_mark)\n except UnicodeEncodeError:\n on_symbol = \"[X]\"\n off_symbol = \"[ ]\"\n else:\n on_symbol = check_mark\n off_symbol = \"X\"\n\n try:\n encoder(\"\u250c\u2510\u2514\u2518\u2500\u2502\") # border symbols\n except UnicodeEncodeError:\n ascii_border = True\n else:\n ascii_border = False\n\n return on_symbol, off_symbol, ascii_border\n", "path": "redbot/core/events.py"}]}
| 3,717 | 463 |
gh_patches_debug_235
|
rasdani/github-patches
|
git_diff
|
scikit-hep__pyhf-1460
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Logging configuration in contrib/utils
# Question
`pyhf.contrib.utils` sets up logging:
https://github.com/scikit-hep/pyhf/blob/6b769fd6f5e1473deba2b4c55d49ebdb3db5b447/src/pyhf/contrib/utils.py#L9
This interferes with custom logging users may want to set up. To achieve this now, they would have to do so before `from pyhf.contrib.utils import download`. To avoid this issue, the logging should not be configured in this part of the code (and only for the CLI).
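For reference, a minimal sketch of the pattern being asked for: the library module only creates a logger, and only the command-line entry point calls `logging.basicConfig()`. The `download` body and its arguments below are placeholders for illustration, not pyhf's actual implementation.

```python
import logging

# Library module: obtain a logger, but leave global logging configuration alone.
log = logging.getLogger(__name__)


def download(archive_url, output_directory):
    # placeholder body; only the logging pattern matters here
    log.info("downloading %s into %s", archive_url, output_directory)


# CLI entry point: the only place that sets handlers, levels and formatting.
def main():
    logging.basicConfig(level=logging.INFO)
    download("https://example.org/archive.tar.gz", "output-dir")  # hypothetical arguments


if __name__ == "__main__":
    main()
```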
# Relevant Issues and Pull Requests
#865
User-defined log formatting
# Description
`pyhf` uses `logging` for outputs, and calls `logging.basicConfig()` in a few places.
This has the effect of preventing the user from setting their desired logging behavior after `pyhf` is imported.
While calling this a bug might be a bit of a stretch, I think it might be unintentional since `pyhf` does not apply any logging formatting as far as I can tell.
# Expected Behavior
I expect no calls to `logging.basicConfig()` within `pyhf`, leaving the formatting fully up to the user, no matter whether they want to set it before or after importing `pyhf`.
# Actual Behavior
User-defined `logging` formatting only works before importing `pyhf`.
# Steps to Reproduce
importing `pyhf` before formatting:
```
import logging
import pyhf
print(pyhf.__version__)
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)
log.info("message")
```
output:
```
0.4.1
```
and when applying formatting before import, the expected behavior:
```
import logging
logging.basicConfig(level=logging.INFO)
import pyhf
print(pyhf.__version__)
log = logging.getLogger(__name__)
log.info("message")
```
output:
```
0.4.1
INFO:__main__:message
```
# Checklist
- [ ] Run `git fetch` to get the most up to date version of `master`
- no, but checked code on master to confirm that the relevant part is unchanged
- [X] Searched through existing Issues to confirm this is not a duplicate issue
- [X] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pyhf/contrib/utils.py`
Content:
```
1 """Helper utilities for common tasks."""
2
3 from urllib.parse import urlparse
4 import tarfile
5 from io import BytesIO
6 import logging
7 from .. import exceptions
8
9 logging.basicConfig()
10 log = logging.getLogger(__name__)
11
12 __all__ = ["download"]
13
14
15 def __dir__():
16 return __all__
17
18
19 try:
20 import requests
21
22 def download(archive_url, output_directory, force=False, compress=False):
23 """
24 Download the patchset archive from the remote URL and extract it in a
25 directory at the path given.
26
27 Example:
28
29 >>> from pyhf.contrib.utils import download
30 >>> download("https://doi.org/10.17182/hepdata.90607.v3/r3", "1Lbb-likelihoods")
31 >>> import os
32 >>> sorted(os.listdir("1Lbb-likelihoods"))
33 ['BkgOnly.json', 'README.md', 'patchset.json']
34 >>> download("https://doi.org/10.17182/hepdata.90607.v3/r3", "1Lbb-likelihoods.tar.gz", compress=True)
35 >>> import glob
36 >>> glob.glob("1Lbb-likelihoods.tar.gz")
37 ['1Lbb-likelihoods.tar.gz']
38
39 Args:
40 archive_url (:obj:`str`): The URL of the :class:`~pyhf.patchset.PatchSet` archive to download.
41 output_directory (:obj:`str`): Name of the directory to unpack the archive into.
42 force (:obj:`bool`): Force download from non-approved host. Default is ``False``.
43 compress (:obj:`bool`): Keep the archive in a compressed ``tar.gz`` form. Default is ``False``.
44
45 Raises:
46 :class:`~pyhf.exceptions.InvalidArchiveHost`: if the provided archive host name is not known to be valid
47 """
48 if not force:
49 valid_hosts = ["www.hepdata.net", "doi.org"]
50 netloc = urlparse(archive_url).netloc
51 if netloc not in valid_hosts:
52 raise exceptions.InvalidArchiveHost(
53 f"{netloc} is not an approved archive host: {', '.join(str(host) for host in valid_hosts)}\n"
54 + "To download an archive from this host use the --force option."
55 )
56
57 with requests.get(archive_url) as response:
58 if compress:
59 with open(output_directory, "wb") as archive:
60 archive.write(response.content)
61 else:
62 with tarfile.open(
63 mode="r|gz", fileobj=BytesIO(response.content)
64 ) as archive:
65 archive.extractall(output_directory)
66
67
68 except ModuleNotFoundError:
69 log.error(
70 "\nInstallation of the contrib extra is required to use pyhf.contrib.utils.download"
71 + "\nPlease install with: python -m pip install pyhf[contrib]\n",
72 exc_info=True,
73 )
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pyhf/contrib/utils.py b/src/pyhf/contrib/utils.py
--- a/src/pyhf/contrib/utils.py
+++ b/src/pyhf/contrib/utils.py
@@ -6,7 +6,6 @@
import logging
from .. import exceptions
-logging.basicConfig()
log = logging.getLogger(__name__)
__all__ = ["download"]
|
{"golden_diff": "diff --git a/src/pyhf/contrib/utils.py b/src/pyhf/contrib/utils.py\n--- a/src/pyhf/contrib/utils.py\n+++ b/src/pyhf/contrib/utils.py\n@@ -6,7 +6,6 @@\n import logging\n from .. import exceptions\n \n-logging.basicConfig()\n log = logging.getLogger(__name__)\n \n __all__ = [\"download\"]\n", "issue": "Logging configuration in contrib/utils\n# Question\r\n\r\n`pyhf.contrib.utils` sets up logging:\r\nhttps://github.com/scikit-hep/pyhf/blob/6b769fd6f5e1473deba2b4c55d49ebdb3db5b447/src/pyhf/contrib/utils.py#L9 \r\n\r\nThis interferes with custom logging users may want to set up. To achieve this now, they would have to do so before `from pyhf.contrib.utils import download`. To avoid this issue, the logging should not be configured in this part of the code (and only for the CLI).\r\n\r\n# Relevant Issues and Pull Requests\r\n\r\n#865\r\n\nUser-defined log formatting\n# Description\r\n\r\n`pyhf` uses `logging` for outputs, and calls `logging.basicConfig()` in a few places.\r\nThis has the effect of preventing the user to set their desired logging behavior after `pyhf` import.\r\nWhile calling this a bug might be a bit of a stretch, I think it might be unintentional since `pyhf` does not apply any logging formatting as far as I can tell.\r\n\r\n# Expected Behavior\r\n\r\nI expect no calls to `logging.basicConfig()` within `pyhf` to leave the formatting fully up to the user, no matter whether they want to set it before or after importing `pyhf`.\r\n\r\n# Actual Behavior\r\n\r\nUser-defined `logging` formatting only works before importing `pyhf`.\r\n\r\n# Steps to Reproduce\r\n\r\nimporting `pyhf` before formatting:\r\n```\r\nimport logging\r\nimport pyhf\r\nprint(pyhf.__version__)\r\nlogging.basicConfig(level=logging.INFO)\r\nlog = logging.getLogger(__name__)\r\nlog.info(\"message\")\r\n```\r\noutput:\r\n```\r\n0.4.1\r\n```\r\nand when applying formatting before input, the expected behavior:\r\n```\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\nimport pyhf\r\nprint(pyhf.__version__)\r\nlog = logging.getLogger(__name__)\r\nlog.info(\"message\")\r\n```\r\noutput:\r\n```\r\n0.4.1\r\nINFO:__main__:message\r\n``` \r\n\r\n# Checklist\r\n\r\n- [ ] Run `git fetch` to get the most up to date version of `master`\r\n - no, but checked code on master to confirm that the relevant part is unchanged\r\n- [X] Searched through existing Issues to confirm this is not a duplicate issue\r\n- [X] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue\r\n\n", "before_files": [{"content": "\"\"\"Helper utilities for common tasks.\"\"\"\n\nfrom urllib.parse import urlparse\nimport tarfile\nfrom io import BytesIO\nimport logging\nfrom .. 
import exceptions\n\nlogging.basicConfig()\nlog = logging.getLogger(__name__)\n\n__all__ = [\"download\"]\n\n\ndef __dir__():\n return __all__\n\n\ntry:\n import requests\n\n def download(archive_url, output_directory, force=False, compress=False):\n \"\"\"\n Download the patchset archive from the remote URL and extract it in a\n directory at the path given.\n\n Example:\n\n >>> from pyhf.contrib.utils import download\n >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods\")\n >>> import os\n >>> sorted(os.listdir(\"1Lbb-likelihoods\"))\n ['BkgOnly.json', 'README.md', 'patchset.json']\n >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods.tar.gz\", compress=True)\n >>> import glob\n >>> glob.glob(\"1Lbb-likelihoods.tar.gz\")\n ['1Lbb-likelihoods.tar.gz']\n\n Args:\n archive_url (:obj:`str`): The URL of the :class:`~pyhf.patchset.PatchSet` archive to download.\n output_directory (:obj:`str`): Name of the directory to unpack the archive into.\n force (:obj:`bool`): Force download from non-approved host. Default is ``False``.\n compress (:obj:`bool`): Keep the archive in a compressed ``tar.gz`` form. Default is ``False``.\n\n Raises:\n :class:`~pyhf.exceptions.InvalidArchiveHost`: if the provided archive host name is not known to be valid\n \"\"\"\n if not force:\n valid_hosts = [\"www.hepdata.net\", \"doi.org\"]\n netloc = urlparse(archive_url).netloc\n if netloc not in valid_hosts:\n raise exceptions.InvalidArchiveHost(\n f\"{netloc} is not an approved archive host: {', '.join(str(host) for host in valid_hosts)}\\n\"\n + \"To download an archive from this host use the --force option.\"\n )\n\n with requests.get(archive_url) as response:\n if compress:\n with open(output_directory, \"wb\") as archive:\n archive.write(response.content)\n else:\n with tarfile.open(\n mode=\"r|gz\", fileobj=BytesIO(response.content)\n ) as archive:\n archive.extractall(output_directory)\n\n\nexcept ModuleNotFoundError:\n log.error(\n \"\\nInstallation of the contrib extra is required to use pyhf.contrib.utils.download\"\n + \"\\nPlease install with: python -m pip install pyhf[contrib]\\n\",\n exc_info=True,\n )\n", "path": "src/pyhf/contrib/utils.py"}], "after_files": [{"content": "\"\"\"Helper utilities for common tasks.\"\"\"\n\nfrom urllib.parse import urlparse\nimport tarfile\nfrom io import BytesIO\nimport logging\nfrom .. import exceptions\n\nlog = logging.getLogger(__name__)\n\n__all__ = [\"download\"]\n\n\ndef __dir__():\n return __all__\n\n\ntry:\n import requests\n\n def download(archive_url, output_directory, force=False, compress=False):\n \"\"\"\n Download the patchset archive from the remote URL and extract it in a\n directory at the path given.\n\n Example:\n\n >>> from pyhf.contrib.utils import download\n >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods\")\n >>> import os\n >>> sorted(os.listdir(\"1Lbb-likelihoods\"))\n ['BkgOnly.json', 'README.md', 'patchset.json']\n >>> download(\"https://doi.org/10.17182/hepdata.90607.v3/r3\", \"1Lbb-likelihoods.tar.gz\", compress=True)\n >>> import glob\n >>> glob.glob(\"1Lbb-likelihoods.tar.gz\")\n ['1Lbb-likelihoods.tar.gz']\n\n Args:\n archive_url (:obj:`str`): The URL of the :class:`~pyhf.patchset.PatchSet` archive to download.\n output_directory (:obj:`str`): Name of the directory to unpack the archive into.\n force (:obj:`bool`): Force download from non-approved host. 
Default is ``False``.\n compress (:obj:`bool`): Keep the archive in a compressed ``tar.gz`` form. Default is ``False``.\n\n Raises:\n :class:`~pyhf.exceptions.InvalidArchiveHost`: if the provided archive host name is not known to be valid\n \"\"\"\n if not force:\n valid_hosts = [\"www.hepdata.net\", \"doi.org\"]\n netloc = urlparse(archive_url).netloc\n if netloc not in valid_hosts:\n raise exceptions.InvalidArchiveHost(\n f\"{netloc} is not an approved archive host: {', '.join(str(host) for host in valid_hosts)}\\n\"\n + \"To download an archive from this host use the --force option.\"\n )\n\n with requests.get(archive_url) as response:\n if compress:\n with open(output_directory, \"wb\") as archive:\n archive.write(response.content)\n else:\n with tarfile.open(\n mode=\"r|gz\", fileobj=BytesIO(response.content)\n ) as archive:\n archive.extractall(output_directory)\n\n\nexcept ModuleNotFoundError:\n log.error(\n \"\\nInstallation of the contrib extra is required to use pyhf.contrib.utils.download\"\n + \"\\nPlease install with: python -m pip install pyhf[contrib]\\n\",\n exc_info=True,\n )\n", "path": "src/pyhf/contrib/utils.py"}]}
| 1,550 | 77 |
gh_patches_debug_8585
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-4759
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add a job for Python 3.9 to .travis.yml
It looks like Travis supports specifying such a Python version as `3.9-dev`.
While I’m not sure we should officially support Python 3.9 until its release, running tests on it will allow us to catch any issue early.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from os.path import dirname, join
2 from pkg_resources import parse_version
3 from setuptools import setup, find_packages, __version__ as setuptools_version
4
5
6 with open(join(dirname(__file__), 'scrapy/VERSION'), 'rb') as f:
7 version = f.read().decode('ascii').strip()
8
9
10 def has_environment_marker_platform_impl_support():
11 """Code extracted from 'pytest/setup.py'
12 https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31
13
14 The first known release to support environment marker with range operators
15 it is 18.5, see:
16 https://setuptools.readthedocs.io/en/latest/history.html#id235
17 """
18 return parse_version(setuptools_version) >= parse_version('18.5')
19
20
21 install_requires = [
22 'Twisted>=17.9.0',
23 'cryptography>=2.0',
24 'cssselect>=0.9.1',
25 'itemloaders>=1.0.1',
26 'parsel>=1.5.0',
27 'pyOpenSSL>=16.2.0',
28 'queuelib>=1.4.2',
29 'service_identity>=16.0.0',
30 'w3lib>=1.17.0',
31 'zope.interface>=4.1.3',
32 'protego>=0.1.15',
33 'itemadapter>=0.1.0',
34 ]
35 extras_require = {}
36 cpython_dependencies = [
37 'lxml>=3.5.0',
38 'PyDispatcher>=2.0.5',
39 ]
40 if has_environment_marker_platform_impl_support():
41 extras_require[':platform_python_implementation == "CPython"'] = cpython_dependencies
42 extras_require[':platform_python_implementation == "PyPy"'] = [
43 # Earlier lxml versions are affected by
44 # https://foss.heptapod.net/pypy/pypy/-/issues/2498,
45 # which was fixed in Cython 0.26, released on 2017-06-19, and used to
46 # generate the C headers of lxml release tarballs published since then, the
47 # first of which was:
48 'lxml>=4.0.0',
49 'PyPyDispatcher>=2.1.0',
50 ]
51 else:
52 install_requires.extend(cpython_dependencies)
53
54
55 setup(
56 name='Scrapy',
57 version=version,
58 url='https://scrapy.org',
59 project_urls={
60 'Documentation': 'https://docs.scrapy.org/',
61 'Source': 'https://github.com/scrapy/scrapy',
62 'Tracker': 'https://github.com/scrapy/scrapy/issues',
63 },
64 description='A high-level Web Crawling and Web Scraping framework',
65 long_description=open('README.rst').read(),
66 author='Scrapy developers',
67 maintainer='Pablo Hoffman',
68 maintainer_email='[email protected]',
69 license='BSD',
70 packages=find_packages(exclude=('tests', 'tests.*')),
71 include_package_data=True,
72 zip_safe=False,
73 entry_points={
74 'console_scripts': ['scrapy = scrapy.cmdline:execute']
75 },
76 classifiers=[
77 'Framework :: Scrapy',
78 'Development Status :: 5 - Production/Stable',
79 'Environment :: Console',
80 'Intended Audience :: Developers',
81 'License :: OSI Approved :: BSD License',
82 'Operating System :: OS Independent',
83 'Programming Language :: Python',
84 'Programming Language :: Python :: 3',
85 'Programming Language :: Python :: 3.6',
86 'Programming Language :: Python :: 3.7',
87 'Programming Language :: Python :: 3.8',
88 'Programming Language :: Python :: Implementation :: CPython',
89 'Programming Language :: Python :: Implementation :: PyPy',
90 'Topic :: Internet :: WWW/HTTP',
91 'Topic :: Software Development :: Libraries :: Application Frameworks',
92 'Topic :: Software Development :: Libraries :: Python Modules',
93 ],
94 python_requires='>=3.6',
95 install_requires=install_requires,
96 extras_require=extras_require,
97 )
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -85,6 +85,7 @@
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
+ 'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy',
'Topic :: Internet :: WWW/HTTP',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -85,6 +85,7 @@\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n+ 'Programming Language :: Python :: 3.9',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Internet :: WWW/HTTP',\n", "issue": "Add a job for Python 3.9 to .travis.yml\nIt looks like Travis support specifying such a Python version as `3.9-dev`.\r\n\r\nWhile I\u2019m not sure we should officially support Python 3.9 it until its release, running tests on it will allow us to catch any issue early.\n", "before_files": [{"content": "from os.path import dirname, join\nfrom pkg_resources import parse_version\nfrom setuptools import setup, find_packages, __version__ as setuptools_version\n\n\nwith open(join(dirname(__file__), 'scrapy/VERSION'), 'rb') as f:\n version = f.read().decode('ascii').strip()\n\n\ndef has_environment_marker_platform_impl_support():\n \"\"\"Code extracted from 'pytest/setup.py'\n https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31\n\n The first known release to support environment marker with range operators\n it is 18.5, see:\n https://setuptools.readthedocs.io/en/latest/history.html#id235\n \"\"\"\n return parse_version(setuptools_version) >= parse_version('18.5')\n\n\ninstall_requires = [\n 'Twisted>=17.9.0',\n 'cryptography>=2.0',\n 'cssselect>=0.9.1',\n 'itemloaders>=1.0.1',\n 'parsel>=1.5.0',\n 'pyOpenSSL>=16.2.0',\n 'queuelib>=1.4.2',\n 'service_identity>=16.0.0',\n 'w3lib>=1.17.0',\n 'zope.interface>=4.1.3',\n 'protego>=0.1.15',\n 'itemadapter>=0.1.0',\n]\nextras_require = {}\ncpython_dependencies = [\n 'lxml>=3.5.0',\n 'PyDispatcher>=2.0.5',\n]\nif has_environment_marker_platform_impl_support():\n extras_require[':platform_python_implementation == \"CPython\"'] = cpython_dependencies\n extras_require[':platform_python_implementation == \"PyPy\"'] = [\n # Earlier lxml versions are affected by\n # https://foss.heptapod.net/pypy/pypy/-/issues/2498,\n # which was fixed in Cython 0.26, released on 2017-06-19, and used to\n # generate the C headers of lxml release tarballs published since then, the\n # first of which was:\n 'lxml>=4.0.0',\n 'PyPyDispatcher>=2.1.0',\n ]\nelse:\n install_requires.extend(cpython_dependencies)\n\n\nsetup(\n name='Scrapy',\n version=version,\n url='https://scrapy.org',\n project_urls={\n 'Documentation': 'https://docs.scrapy.org/',\n 'Source': 'https://github.com/scrapy/scrapy',\n 'Tracker': 'https://github.com/scrapy/scrapy/issues',\n },\n description='A high-level Web Crawling and Web Scraping framework',\n long_description=open('README.rst').read(),\n author='Scrapy developers',\n maintainer='Pablo Hoffman',\n maintainer_email='[email protected]',\n license='BSD',\n packages=find_packages(exclude=('tests', 'tests.*')),\n include_package_data=True,\n zip_safe=False,\n entry_points={\n 'console_scripts': ['scrapy = scrapy.cmdline:execute']\n },\n classifiers=[\n 'Framework :: Scrapy',\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python 
:: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.6',\n install_requires=install_requires,\n extras_require=extras_require,\n)\n", "path": "setup.py"}], "after_files": [{"content": "from os.path import dirname, join\nfrom pkg_resources import parse_version\nfrom setuptools import setup, find_packages, __version__ as setuptools_version\n\n\nwith open(join(dirname(__file__), 'scrapy/VERSION'), 'rb') as f:\n version = f.read().decode('ascii').strip()\n\n\ndef has_environment_marker_platform_impl_support():\n \"\"\"Code extracted from 'pytest/setup.py'\n https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31\n\n The first known release to support environment marker with range operators\n it is 18.5, see:\n https://setuptools.readthedocs.io/en/latest/history.html#id235\n \"\"\"\n return parse_version(setuptools_version) >= parse_version('18.5')\n\n\ninstall_requires = [\n 'Twisted>=17.9.0',\n 'cryptography>=2.0',\n 'cssselect>=0.9.1',\n 'itemloaders>=1.0.1',\n 'parsel>=1.5.0',\n 'pyOpenSSL>=16.2.0',\n 'queuelib>=1.4.2',\n 'service_identity>=16.0.0',\n 'w3lib>=1.17.0',\n 'zope.interface>=4.1.3',\n 'protego>=0.1.15',\n 'itemadapter>=0.1.0',\n]\nextras_require = {}\ncpython_dependencies = [\n 'lxml>=3.5.0',\n 'PyDispatcher>=2.0.5',\n]\nif has_environment_marker_platform_impl_support():\n extras_require[':platform_python_implementation == \"CPython\"'] = cpython_dependencies\n extras_require[':platform_python_implementation == \"PyPy\"'] = [\n # Earlier lxml versions are affected by\n # https://foss.heptapod.net/pypy/pypy/-/issues/2498,\n # which was fixed in Cython 0.26, released on 2017-06-19, and used to\n # generate the C headers of lxml release tarballs published since then, the\n # first of which was:\n 'lxml>=4.0.0',\n 'PyPyDispatcher>=2.1.0',\n ]\nelse:\n install_requires.extend(cpython_dependencies)\n\n\nsetup(\n name='Scrapy',\n version=version,\n url='https://scrapy.org',\n project_urls={\n 'Documentation': 'https://docs.scrapy.org/',\n 'Source': 'https://github.com/scrapy/scrapy',\n 'Tracker': 'https://github.com/scrapy/scrapy/issues',\n },\n description='A high-level Web Crawling and Web Scraping framework',\n long_description=open('README.rst').read(),\n author='Scrapy developers',\n maintainer='Pablo Hoffman',\n maintainer_email='[email protected]',\n license='BSD',\n packages=find_packages(exclude=('tests', 'tests.*')),\n include_package_data=True,\n zip_safe=False,\n entry_points={\n 'console_scripts': ['scrapy = scrapy.cmdline:execute']\n },\n classifiers=[\n 'Framework :: Scrapy',\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n 
python_requires='>=3.6',\n install_requires=install_requires,\n extras_require=extras_require,\n)\n", "path": "setup.py"}]}
| 1,398 | 115 |
gh_patches_debug_41196
|
rasdani/github-patches
|
git_diff
|
pytorch__pytorch-4444
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
torch.nn.InstanceNorm1d has inconsistent semantics of train and eval
For `torch.nn.InstanceNorm1d` the docs say:
*At evaluation time (.eval()), the default behaviour of the InstanceNorm module stays the same i.e. running mean/variance is NOT used for normalization. One can force using stored mean and variance with .train(False) method.*
However, the source for `.eval()` shows that it simply calls `.train(False)`.
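To make the discrepancy concrete, here is a minimal sketch of what user code observes with the `_InstanceNorm.eval()` override shown in the file below (illustration only; it demonstrates the training flag, not which statistics end up being used):

```python
import torch.nn as nn

m = nn.InstanceNorm1d(100)

m.eval()           # overridden in _InstanceNorm to return self without changing the mode
print(m.training)  # still True

m.train(False)     # the plain nn.Module path that .eval() would normally delegate to
print(m.training)  # now False
```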
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torch/nn/modules/instancenorm.py`
Content:
```
1 from .batchnorm import _BatchNorm
2 from .. import functional as F
3
4
5 class _InstanceNorm(_BatchNorm):
6 def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=False):
7 super(_InstanceNorm, self).__init__(
8 num_features, eps, momentum, affine)
9
10 def forward(self, input):
11 b, c = input.size(0), input.size(1)
12
13 # Repeat stored stats and affine transform params
14 running_mean = self.running_mean.repeat(b)
15 running_var = self.running_var.repeat(b)
16
17 weight, bias = None, None
18 if self.affine:
19 weight = self.weight.repeat(b)
20 bias = self.bias.repeat(b)
21
22 # Apply instance norm
23 input_reshaped = input.contiguous().view(1, b * c, *input.size()[2:])
24
25 out = F.batch_norm(
26 input_reshaped, running_mean, running_var, weight, bias,
27 True, self.momentum, self.eps)
28
29 # Reshape back
30 self.running_mean.copy_(running_mean.view(b, c).mean(0, keepdim=False))
31 self.running_var.copy_(running_var.view(b, c).mean(0, keepdim=False))
32
33 return out.view(b, c, *input.size()[2:])
34
35 def eval(self):
36 return self
37
38
39 class InstanceNorm1d(_InstanceNorm):
40 r"""Applies Instance Normalization over a 3d input that is seen as a mini-batch.
41
42 .. math::
43
44 y = \frac{x - mean[x]}{ \sqrt{Var[x]} + \epsilon} * gamma + beta
45
46 The mean and standard-deviation are calculated per-dimension separately
47 for each object in a mini-batch. Gamma and beta are learnable parameter vectors
48 of size C (where C is the input size).
49
50 During training, this layer keeps a running estimate of its computed mean
51 and variance. The running sum is kept with a default momentum of 0.1.
52
53 At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same
54 i.e. running mean/variance is NOT used for normalization. One can force using stored
55 mean and variance with `.train(False)` method.
56
57 Args:
58 num_features: num_features from an expected input of size `batch_size x num_features x width`
59 eps: a value added to the denominator for numerical stability. Default: 1e-5
60 momentum: the value used for the running_mean and running_var computation. Default: 0.1
61 affine: a boolean value that when set to ``True``, gives the layer learnable
62 affine parameters. Default: ``False``
63
64 Shape:
65 - Input: :math:`(N, C, L)`
66 - Output: :math:`(N, C, L)` (same shape as input)
67
68 Examples:
69 >>> # Without Learnable Parameters
70 >>> m = nn.InstanceNorm1d(100)
71 >>> # With Learnable Parameters
72 >>> m = nn.InstanceNorm1d(100, affine=True)
73 >>> input = autograd.Variable(torch.randn(20, 100, 40))
74 >>> output = m(input)
75 """
76
77 def _check_input_dim(self, input):
78 if input.dim() != 3:
79 raise ValueError('expected 3D input (got {}D input)'
80 .format(input.dim()))
81 super(InstanceNorm1d, self)._check_input_dim(input)
82
83
84 class InstanceNorm2d(_InstanceNorm):
85 r"""Applies Instance Normalization over a 4d input that is seen as a mini-batch of 3d inputs
86
87 .. math::
88
89 y = \frac{x - mean[x]}{ \sqrt{Var[x]} + \epsilon} * gamma + beta
90
91 The mean and standard-deviation are calculated per-dimension separately
92 for each object in a mini-batch. Gamma and beta are learnable parameter vectors
93 of size C (where C is the input size).
94
95 During training, this layer keeps a running estimate of its computed mean
96 and variance. The running sum is kept with a default momentum of 0.1.
97
98 At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same
99 i.e. running mean/variance is NOT used for normalization. One can force using stored
100 mean and variance with `.train(False)` method.
101
102 Args:
103 num_features: num_features from an expected input of size batch_size x num_features x height x width
104 eps: a value added to the denominator for numerical stability. Default: 1e-5
105 momentum: the value used for the running_mean and running_var computation. Default: 0.1
106 affine: a boolean value that when set to ``True``, gives the layer learnable
107 affine parameters. Default: ``False``
108
109 Shape:
110 - Input: :math:`(N, C, H, W)`
111 - Output: :math:`(N, C, H, W)` (same shape as input)
112
113 Examples:
114 >>> # Without Learnable Parameters
115 >>> m = nn.InstanceNorm2d(100)
116 >>> # With Learnable Parameters
117 >>> m = nn.InstanceNorm2d(100, affine=True)
118 >>> input = autograd.Variable(torch.randn(20, 100, 35, 45))
119 >>> output = m(input)
120 """
121
122 def _check_input_dim(self, input):
123 if input.dim() != 4:
124 raise ValueError('expected 4D input (got {}D input)'
125 .format(input.dim()))
126 super(InstanceNorm2d, self)._check_input_dim(input)
127
128
129 class InstanceNorm3d(_InstanceNorm):
130 r"""Applies Instance Normalization over a 5d input that is seen as a mini-batch of 4d inputs
131
132 .. math::
133
134 y = \frac{x - mean[x]}{ \sqrt{Var[x]} + \epsilon} * gamma + beta
135
136 The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch.
137 Gamma and beta are learnable parameter vectors
138 of size C (where C is the input size).
139
140 During training, this layer keeps a running estimate of its computed mean
141 and variance. The running sum is kept with a default momentum of 0.1.
142
143 At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same
144 i.e. running mean/variance is NOT used for normalization. One can force using stored
145 mean and variance with `.train(False)` method.
146
147
148 Args:
149 num_features: num_features from an expected input of size batch_size x num_features x depth x height x width
150 eps: a value added to the denominator for numerical stability. Default: 1e-5
151 momentum: the value used for the running_mean and running_var computation. Default: 0.1
152 affine: a boolean value that when set to ``True``, gives the layer learnable
153 affine parameters. Default: ``False``
154
155 Shape:
156 - Input: :math:`(N, C, D, H, W)`
157 - Output: :math:`(N, C, D, H, W)` (same shape as input)
158
159 Examples:
160 >>> # Without Learnable Parameters
161 >>> m = nn.InstanceNorm3d(100)
162 >>> # With Learnable Parameters
163 >>> m = nn.InstanceNorm3d(100, affine=True)
164 >>> input = autograd.Variable(torch.randn(20, 100, 35, 45, 10))
165 >>> output = m(input)
166 """
167
168 def _check_input_dim(self, input):
169 if input.dim() != 5:
170 raise ValueError('expected 5D input (got {}D input)'
171 .format(input.dim()))
172 super(InstanceNorm3d, self)._check_input_dim(input)
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torch/nn/modules/instancenorm.py b/torch/nn/modules/instancenorm.py
--- a/torch/nn/modules/instancenorm.py
+++ b/torch/nn/modules/instancenorm.py
@@ -6,6 +6,7 @@
def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=False):
super(_InstanceNorm, self).__init__(
num_features, eps, momentum, affine)
+ self.use_running_stats = False
def forward(self, input):
b, c = input.size(0), input.size(1)
@@ -24,7 +25,7 @@
out = F.batch_norm(
input_reshaped, running_mean, running_var, weight, bias,
- True, self.momentum, self.eps)
+ not self.use_running_stats, self.momentum, self.eps)
# Reshape back
self.running_mean.copy_(running_mean.view(b, c).mean(0, keepdim=False))
@@ -32,8 +33,14 @@
return out.view(b, c, *input.size()[2:])
- def eval(self):
- return self
+ def use_running_stats(self, mode=True):
+ r"""Set using running statistics or instance statistics.
+
+ Instance normalization usually use instance statistics in both training
+ and evaluation modes. But users can set this method to use running
+ statistics in the fashion similar to batch normalization in eval mode.
+ """
+ self.use_running_stats = mode
class InstanceNorm1d(_InstanceNorm):
@@ -52,7 +59,8 @@
At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same
i.e. running mean/variance is NOT used for normalization. One can force using stored
- mean and variance with `.train(False)` method.
+ mean and variance with `.use_running_stats(mode=True)` method, and switch back to normal
+ behavior with `.use_running_stats(mode=False)` method.
Args:
num_features: num_features from an expected input of size `batch_size x num_features x width`
@@ -97,7 +105,8 @@
At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same
i.e. running mean/variance is NOT used for normalization. One can force using stored
- mean and variance with `.train(False)` method.
+ mean and variance with `.use_running_stats(mode=True)` method, and switch back to normal
+ behavior with `.use_running_stats(mode=False)` method.
Args:
num_features: num_features from an expected input of size batch_size x num_features x height x width
@@ -142,7 +151,8 @@
At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same
i.e. running mean/variance is NOT used for normalization. One can force using stored
- mean and variance with `.train(False)` method.
+ mean and variance with `.use_running_stats(mode=True)` method, and switch back to normal
+ behavior with `.use_running_stats(mode=False)` method.
Args:
|
{"golden_diff": "diff --git a/torch/nn/modules/instancenorm.py b/torch/nn/modules/instancenorm.py\n--- a/torch/nn/modules/instancenorm.py\n+++ b/torch/nn/modules/instancenorm.py\n@@ -6,6 +6,7 @@\n def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=False):\n super(_InstanceNorm, self).__init__(\n num_features, eps, momentum, affine)\n+ self.use_running_stats = False\n \n def forward(self, input):\n b, c = input.size(0), input.size(1)\n@@ -24,7 +25,7 @@\n \n out = F.batch_norm(\n input_reshaped, running_mean, running_var, weight, bias,\n- True, self.momentum, self.eps)\n+ not self.use_running_stats, self.momentum, self.eps)\n \n # Reshape back\n self.running_mean.copy_(running_mean.view(b, c).mean(0, keepdim=False))\n@@ -32,8 +33,14 @@\n \n return out.view(b, c, *input.size()[2:])\n \n- def eval(self):\n- return self\n+ def use_running_stats(self, mode=True):\n+ r\"\"\"Set using running statistics or instance statistics.\n+\n+ Instance normalization usually use instance statistics in both training\n+ and evaluation modes. But users can set this method to use running\n+ statistics in the fashion similar to batch normalization in eval mode.\n+ \"\"\"\n+ self.use_running_stats = mode\n \n \n class InstanceNorm1d(_InstanceNorm):\n@@ -52,7 +59,8 @@\n \n At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same\n i.e. running mean/variance is NOT used for normalization. One can force using stored\n- mean and variance with `.train(False)` method.\n+ mean and variance with `.use_running_stats(mode=True)` method, and switch back to normal\n+ behavior with `.use_running_stats(mode=False)` method.\n \n Args:\n num_features: num_features from an expected input of size `batch_size x num_features x width`\n@@ -97,7 +105,8 @@\n \n At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same\n i.e. running mean/variance is NOT used for normalization. One can force using stored\n- mean and variance with `.train(False)` method.\n+ mean and variance with `.use_running_stats(mode=True)` method, and switch back to normal\n+ behavior with `.use_running_stats(mode=False)` method.\n \n Args:\n num_features: num_features from an expected input of size batch_size x num_features x height x width\n@@ -142,7 +151,8 @@\n \n At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same\n i.e. running mean/variance is NOT used for normalization. One can force using stored\n- mean and variance with `.train(False)` method.\n+ mean and variance with `.use_running_stats(mode=True)` method, and switch back to normal\n+ behavior with `.use_running_stats(mode=False)` method.\n \n \n Args:\n", "issue": "torch.nn.InstanceNorm1d has inconsistent semantics of train and eval\nFor `torch.nn.InstanceNorm1d` the docs say:\r\n\r\n*At evaluation time (.eval()), the default behaviour of the InstanceNorm module stays the same i.e. running mean/variance is NOT used for normalization. One can force using stored mean and variance with .train(False) method.*\r\n\r\nHowever, the source for `.eval()` shows that it simply calls `.train(False)`.\n", "before_files": [{"content": "from .batchnorm import _BatchNorm\nfrom .. 
import functional as F\n\n\nclass _InstanceNorm(_BatchNorm):\n def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=False):\n super(_InstanceNorm, self).__init__(\n num_features, eps, momentum, affine)\n\n def forward(self, input):\n b, c = input.size(0), input.size(1)\n\n # Repeat stored stats and affine transform params\n running_mean = self.running_mean.repeat(b)\n running_var = self.running_var.repeat(b)\n\n weight, bias = None, None\n if self.affine:\n weight = self.weight.repeat(b)\n bias = self.bias.repeat(b)\n\n # Apply instance norm\n input_reshaped = input.contiguous().view(1, b * c, *input.size()[2:])\n\n out = F.batch_norm(\n input_reshaped, running_mean, running_var, weight, bias,\n True, self.momentum, self.eps)\n\n # Reshape back\n self.running_mean.copy_(running_mean.view(b, c).mean(0, keepdim=False))\n self.running_var.copy_(running_var.view(b, c).mean(0, keepdim=False))\n\n return out.view(b, c, *input.size()[2:])\n\n def eval(self):\n return self\n\n\nclass InstanceNorm1d(_InstanceNorm):\n r\"\"\"Applies Instance Normalization over a 3d input that is seen as a mini-batch.\n\n .. math::\n\n y = \\frac{x - mean[x]}{ \\sqrt{Var[x]} + \\epsilon} * gamma + beta\n\n The mean and standard-deviation are calculated per-dimension separately\n for each object in a mini-batch. Gamma and beta are learnable parameter vectors\n of size C (where C is the input size).\n\n During training, this layer keeps a running estimate of its computed mean\n and variance. The running sum is kept with a default momentum of 0.1.\n\n At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same\n i.e. running mean/variance is NOT used for normalization. One can force using stored\n mean and variance with `.train(False)` method.\n\n Args:\n num_features: num_features from an expected input of size `batch_size x num_features x width`\n eps: a value added to the denominator for numerical stability. Default: 1e-5\n momentum: the value used for the running_mean and running_var computation. Default: 0.1\n affine: a boolean value that when set to ``True``, gives the layer learnable\n affine parameters. Default: ``False``\n\n Shape:\n - Input: :math:`(N, C, L)`\n - Output: :math:`(N, C, L)` (same shape as input)\n\n Examples:\n >>> # Without Learnable Parameters\n >>> m = nn.InstanceNorm1d(100)\n >>> # With Learnable Parameters\n >>> m = nn.InstanceNorm1d(100, affine=True)\n >>> input = autograd.Variable(torch.randn(20, 100, 40))\n >>> output = m(input)\n \"\"\"\n\n def _check_input_dim(self, input):\n if input.dim() != 3:\n raise ValueError('expected 3D input (got {}D input)'\n .format(input.dim()))\n super(InstanceNorm1d, self)._check_input_dim(input)\n\n\nclass InstanceNorm2d(_InstanceNorm):\n r\"\"\"Applies Instance Normalization over a 4d input that is seen as a mini-batch of 3d inputs\n\n .. math::\n\n y = \\frac{x - mean[x]}{ \\sqrt{Var[x]} + \\epsilon} * gamma + beta\n\n The mean and standard-deviation are calculated per-dimension separately\n for each object in a mini-batch. Gamma and beta are learnable parameter vectors\n of size C (where C is the input size).\n\n During training, this layer keeps a running estimate of its computed mean\n and variance. The running sum is kept with a default momentum of 0.1.\n\n At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same\n i.e. running mean/variance is NOT used for normalization. 
One can force using stored\n mean and variance with `.train(False)` method.\n\n Args:\n num_features: num_features from an expected input of size batch_size x num_features x height x width\n eps: a value added to the denominator for numerical stability. Default: 1e-5\n momentum: the value used for the running_mean and running_var computation. Default: 0.1\n affine: a boolean value that when set to ``True``, gives the layer learnable\n affine parameters. Default: ``False``\n\n Shape:\n - Input: :math:`(N, C, H, W)`\n - Output: :math:`(N, C, H, W)` (same shape as input)\n\n Examples:\n >>> # Without Learnable Parameters\n >>> m = nn.InstanceNorm2d(100)\n >>> # With Learnable Parameters\n >>> m = nn.InstanceNorm2d(100, affine=True)\n >>> input = autograd.Variable(torch.randn(20, 100, 35, 45))\n >>> output = m(input)\n \"\"\"\n\n def _check_input_dim(self, input):\n if input.dim() != 4:\n raise ValueError('expected 4D input (got {}D input)'\n .format(input.dim()))\n super(InstanceNorm2d, self)._check_input_dim(input)\n\n\nclass InstanceNorm3d(_InstanceNorm):\n r\"\"\"Applies Instance Normalization over a 5d input that is seen as a mini-batch of 4d inputs\n\n .. math::\n\n y = \\frac{x - mean[x]}{ \\sqrt{Var[x]} + \\epsilon} * gamma + beta\n\n The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch.\n Gamma and beta are learnable parameter vectors\n of size C (where C is the input size).\n\n During training, this layer keeps a running estimate of its computed mean\n and variance. The running sum is kept with a default momentum of 0.1.\n\n At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same\n i.e. running mean/variance is NOT used for normalization. One can force using stored\n mean and variance with `.train(False)` method.\n\n\n Args:\n num_features: num_features from an expected input of size batch_size x num_features x depth x height x width\n eps: a value added to the denominator for numerical stability. Default: 1e-5\n momentum: the value used for the running_mean and running_var computation. Default: 0.1\n affine: a boolean value that when set to ``True``, gives the layer learnable\n affine parameters. Default: ``False``\n\n Shape:\n - Input: :math:`(N, C, D, H, W)`\n - Output: :math:`(N, C, D, H, W)` (same shape as input)\n\n Examples:\n >>> # Without Learnable Parameters\n >>> m = nn.InstanceNorm3d(100)\n >>> # With Learnable Parameters\n >>> m = nn.InstanceNorm3d(100, affine=True)\n >>> input = autograd.Variable(torch.randn(20, 100, 35, 45, 10))\n >>> output = m(input)\n \"\"\"\n\n def _check_input_dim(self, input):\n if input.dim() != 5:\n raise ValueError('expected 5D input (got {}D input)'\n .format(input.dim()))\n super(InstanceNorm3d, self)._check_input_dim(input)\n", "path": "torch/nn/modules/instancenorm.py"}], "after_files": [{"content": "from .batchnorm import _BatchNorm\nfrom .. 
import functional as F\n\n\nclass _InstanceNorm(_BatchNorm):\n def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=False):\n super(_InstanceNorm, self).__init__(\n num_features, eps, momentum, affine)\n self.use_running_stats = False\n\n def forward(self, input):\n b, c = input.size(0), input.size(1)\n\n # Repeat stored stats and affine transform params\n running_mean = self.running_mean.repeat(b)\n running_var = self.running_var.repeat(b)\n\n weight, bias = None, None\n if self.affine:\n weight = self.weight.repeat(b)\n bias = self.bias.repeat(b)\n\n # Apply instance norm\n input_reshaped = input.contiguous().view(1, b * c, *input.size()[2:])\n\n out = F.batch_norm(\n input_reshaped, running_mean, running_var, weight, bias,\n not self.use_running_stats, self.momentum, self.eps)\n\n # Reshape back\n self.running_mean.copy_(running_mean.view(b, c).mean(0, keepdim=False))\n self.running_var.copy_(running_var.view(b, c).mean(0, keepdim=False))\n\n return out.view(b, c, *input.size()[2:])\n\n def use_running_stats(self, mode=True):\n r\"\"\"Set using running statistics or instance statistics.\n\n Instance normalization usually use instance statistics in both training\n and evaluation modes. But users can set this method to use running\n statistics in the fashion similar to batch normalization in eval mode.\n \"\"\"\n self.use_running_stats = mode\n\n\nclass InstanceNorm1d(_InstanceNorm):\n r\"\"\"Applies Instance Normalization over a 3d input that is seen as a mini-batch.\n\n .. math::\n\n y = \\frac{x - mean[x]}{ \\sqrt{Var[x]} + \\epsilon} * gamma + beta\n\n The mean and standard-deviation are calculated per-dimension separately\n for each object in a mini-batch. Gamma and beta are learnable parameter vectors\n of size C (where C is the input size).\n\n During training, this layer keeps a running estimate of its computed mean\n and variance. The running sum is kept with a default momentum of 0.1.\n\n At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same\n i.e. running mean/variance is NOT used for normalization. One can force using stored\n mean and variance with `.use_running_stats(mode=True)` method, and switch back to normal\n behavior with `.use_running_stats(mode=False)` method.\n\n Args:\n num_features: num_features from an expected input of size `batch_size x num_features x width`\n eps: a value added to the denominator for numerical stability. Default: 1e-5\n momentum: the value used for the running_mean and running_var computation. Default: 0.1\n affine: a boolean value that when set to ``True``, gives the layer learnable\n affine parameters. Default: ``False``\n\n Shape:\n - Input: :math:`(N, C, L)`\n - Output: :math:`(N, C, L)` (same shape as input)\n\n Examples:\n >>> # Without Learnable Parameters\n >>> m = nn.InstanceNorm1d(100)\n >>> # With Learnable Parameters\n >>> m = nn.InstanceNorm1d(100, affine=True)\n >>> input = autograd.Variable(torch.randn(20, 100, 40))\n >>> output = m(input)\n \"\"\"\n\n def _check_input_dim(self, input):\n if input.dim() != 3:\n raise ValueError('expected 3D input (got {}D input)'\n .format(input.dim()))\n super(InstanceNorm1d, self)._check_input_dim(input)\n\n\nclass InstanceNorm2d(_InstanceNorm):\n r\"\"\"Applies Instance Normalization over a 4d input that is seen as a mini-batch of 3d inputs\n\n .. math::\n\n y = \\frac{x - mean[x]}{ \\sqrt{Var[x]} + \\epsilon} * gamma + beta\n\n The mean and standard-deviation are calculated per-dimension separately\n for each object in a mini-batch. 
Gamma and beta are learnable parameter vectors\n of size C (where C is the input size).\n\n During training, this layer keeps a running estimate of its computed mean\n and variance. The running sum is kept with a default momentum of 0.1.\n\n At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same\n i.e. running mean/variance is NOT used for normalization. One can force using stored\n mean and variance with `.use_running_stats(mode=True)` method, and switch back to normal\n behavior with `.use_running_stats(mode=False)` method.\n\n Args:\n num_features: num_features from an expected input of size batch_size x num_features x height x width\n eps: a value added to the denominator for numerical stability. Default: 1e-5\n momentum: the value used for the running_mean and running_var computation. Default: 0.1\n affine: a boolean value that when set to ``True``, gives the layer learnable\n affine parameters. Default: ``False``\n\n Shape:\n - Input: :math:`(N, C, H, W)`\n - Output: :math:`(N, C, H, W)` (same shape as input)\n\n Examples:\n >>> # Without Learnable Parameters\n >>> m = nn.InstanceNorm2d(100)\n >>> # With Learnable Parameters\n >>> m = nn.InstanceNorm2d(100, affine=True)\n >>> input = autograd.Variable(torch.randn(20, 100, 35, 45))\n >>> output = m(input)\n \"\"\"\n\n def _check_input_dim(self, input):\n if input.dim() != 4:\n raise ValueError('expected 4D input (got {}D input)'\n .format(input.dim()))\n super(InstanceNorm2d, self)._check_input_dim(input)\n\n\nclass InstanceNorm3d(_InstanceNorm):\n r\"\"\"Applies Instance Normalization over a 5d input that is seen as a mini-batch of 4d inputs\n\n .. math::\n\n y = \\frac{x - mean[x]}{ \\sqrt{Var[x]} + \\epsilon} * gamma + beta\n\n The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch.\n Gamma and beta are learnable parameter vectors\n of size C (where C is the input size).\n\n During training, this layer keeps a running estimate of its computed mean\n and variance. The running sum is kept with a default momentum of 0.1.\n\n At evaluation time (`.eval()`), the default behaviour of the InstanceNorm module stays the same\n i.e. running mean/variance is NOT used for normalization. One can force using stored\n mean and variance with `.use_running_stats(mode=True)` method, and switch back to normal\n behavior with `.use_running_stats(mode=False)` method.\n\n\n Args:\n num_features: num_features from an expected input of size batch_size x num_features x depth x height x width\n eps: a value added to the denominator for numerical stability. Default: 1e-5\n momentum: the value used for the running_mean and running_var computation. Default: 0.1\n affine: a boolean value that when set to ``True``, gives the layer learnable\n affine parameters. Default: ``False``\n\n Shape:\n - Input: :math:`(N, C, D, H, W)`\n - Output: :math:`(N, C, D, H, W)` (same shape as input)\n\n Examples:\n >>> # Without Learnable Parameters\n >>> m = nn.InstanceNorm3d(100)\n >>> # With Learnable Parameters\n >>> m = nn.InstanceNorm3d(100, affine=True)\n >>> input = autograd.Variable(torch.randn(20, 100, 35, 45, 10))\n >>> output = m(input)\n \"\"\"\n\n def _check_input_dim(self, input):\n if input.dim() != 5:\n raise ValueError('expected 5D input (got {}D input)'\n .format(input.dim()))\n super(InstanceNorm3d, self)._check_input_dim(input)\n", "path": "torch/nn/modules/instancenorm.py"}]}
| 2,518 | 715 |
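The record above turns on one detail: the `training` argument of `F.batch_norm`. With `training=True` the statistics are computed from the input itself (instance statistics here), while with `training=False` the supplied running statistics are used. A minimal, self-contained sketch of that distinction, independent of the patch and with made-up shapes and values:

```python
import torch
import torch.nn.functional as F

# Toy input shaped like an InstanceNorm1d batch: 2 samples, 3 channels, 4 steps.
x = torch.randn(2, 3, 4)
b, c = x.size(0), x.size(1)

# Running statistics chosen to differ clearly from the instance statistics.
running_mean = torch.full((b * c,), 5.0)
running_var = torch.full((b * c,), 2.0)

x_reshaped = x.contiguous().view(1, b * c, x.size(2))

# training=True: normalize with statistics computed from the input
# (what the unpatched module always did, even after .eval()).
out_instance = F.batch_norm(x_reshaped, running_mean.clone(), running_var.clone(),
                            training=True, momentum=0.1, eps=1e-5)

# training=False: normalize with the stored running statistics
# (the behaviour the docstring promised for .train(False)).
out_running = F.batch_norm(x_reshaped, running_mean.clone(), running_var.clone(),
                           training=False, momentum=0.1, eps=1e-5)

print(torch.allclose(out_instance, out_running))  # False: the two modes differ
```

The golden diff replaces the hard-coded `True` in the module's forward pass with a caller-controlled flag, which is exactly the switch this sketch toggles by hand.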
gh_patches_debug_18993
|
rasdani/github-patches
|
git_diff
|
mampfes__hacs_waste_collection_schedule-2062
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: Typo in icon specification: "varient" should be "variant"
### I Have A Problem With:
The integration in general
### What's Your Problem
There is no Material Design Icon with the spelling "varient", it should be "variant".
In the following files:
* `custom_components/waste_collection_schedule/waste_collection_schedule/source/basildon_gov_uk.py`
* `custom_components/waste_collection_schedule/waste_collection_schedule/source/bristol_gov_uk.py`

### Source (if relevant)
_No response_
### Logs
_No response_
### Relevant Configuration
_No response_
### Checklist Source Error
- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [ ] Checked that the website of your service provider is still working
- [ ] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [ ] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/basildon_gov_uk.py`
Content:
```
1 # Credit where it's due:
2 # This is predominantly a refactoring of the Bristol City Council script from the UKBinCollectionData repo
3 # https://github.com/robbrad/UKBinCollectionData
4
5 from datetime import datetime
6
7 import requests
8 from waste_collection_schedule import Collection # type: ignore[attr-defined]
9
10 TITLE = "Basildon Council"
11 DESCRIPTION = "Source for basildon.gov.uk services for Basildon Council, UK."
12 URL = "https://basildon.gov.uk"
13
14 TEST_CASES = {
15 "Test_Addres_001": {"postcode": "CM111BJ", "address": "6, HEADLEY ROAD"},
16 "Test_Addres_002": {"postcode": "SS14 1QU", "address": "25 LONG RIDING"},
17 "Test_UPRN_001": {"uprn": "100090277795"},
18 "Test_UPRN_002": {"uprn": 10024197625},
19 "Test_UPRN_003": {"uprn": "10090455610"},
20 }
21 ICON_MAP = {
22 "green_waste": "mdi:leaf",
23 "general_waste": "mdi:trash-can",
24 "food_waste": "mdi:food",
25 "glass_waste": "mdi:bottle-wine",
26 "papercard_waste": "mdi:package-varient",
27 "plasticcans_waste": "mdi:bottle-soda-classic",
28 }
29 NAME_MAP = {
30 "green_waste": "Garden",
31 "general_waste": "General",
32 "food_waste": "Food",
33 "glass_waste": "Glass",
34 "papercard_waste": "Paper/Cardboard",
35 "plasticcans_waste": "Plastic/Cans",
36 }
37 HEADERS = {
38 "Accept": "*/*",
39 "Accept-Language": "en-GB,en;q=0.9",
40 "Connection": "keep-alive",
41 "Ocp-Apim-Trace": "true",
42 "Origin": "https://mybasildon.powerappsportals.com",
43 "Referer": "https://mybasildon.powerappsportals.com/",
44 "Sec-Fetch-Dest": "empty",
45 "Sec-Fetch-Mode": "cors",
46 "Sec-Fetch-Site": "cross-site",
47 "Sec-GPC": "1",
48 "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36",
49 }
50
51
52 class Source:
53 def __init__(self, postcode=None, address=None, uprn=None):
54 if uprn is None and (postcode is None or address is None):
55 raise ValueError("Either uprn or postcode and address must be provided")
56
57 self._uprn = str(uprn).zfill(12) if uprn is not None else None
58 self._postcode = postcode
59 self._address = address
60
61 def compare_address(self, address) -> bool:
62 return (
63 self._address.replace(",", "").replace(" ", "").upper()
64 == address.replace(",", "").replace(" ", "").upper()
65 )
66
67 def get_uprn(self, s):
68 r = s.post(
69 "https://basildonportal.azurewebsites.net/api/listPropertiesByPostcode",
70 headers=HEADERS,
71 json={"postcode": self._postcode},
72 )
73 r.raise_for_status()
74 data = r.json()
75 if data["result"] != "success":
76 raise ValueError("Invalid postcode")
77 for item in data["properties"]:
78 if self.compare_address(item["line1"]):
79 self._uprn = item["uprn"]
80 break
81 if self._uprn is None:
82 raise ValueError("Invalid address")
83
84 def fetch(self):
85 s = requests.Session()
86 if self._uprn is None:
87 self.get_uprn(s)
88
89 # Retrieve the schedule
90 payload = {"uprn": self._uprn}
91 response = s.post(
92 "https://basildonportal.azurewebsites.net/api/getPropertyRefuseInformation",
93 headers=HEADERS,
94 json=payload,
95 )
96 data = response.json()["refuse"]["available_services"]
97 entries = []
98 for item in ICON_MAP:
99 for collection_date_key in [
100 "current_collection_",
101 "next_collection_",
102 "last_collection_",
103 ]:
104 if data[item][collection_date_key + "active"]:
105 date_string = data[item][collection_date_key + "date"]
106 entries.append(
107 Collection(
108 date=datetime.strptime(
109 date_string,
110 "%Y-%m-%d",
111 ).date(),
112 t=NAME_MAP[item],
113 icon=ICON_MAP.get(item),
114 )
115 )
116
117 return entries
118
```
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/bristol_gov_uk.py`
Content:
```
1 # Credit where it's due:
2 # This is predominantly a refactoring of the Bristol City Council script from the UKBinCollectionData repo
3 # https://github.com/robbrad/UKBinCollectionData
4
5 from datetime import datetime
6
7 import requests
8 from waste_collection_schedule import Collection # type: ignore[attr-defined]
9
10 TITLE = "Bristol City Council"
11 DESCRIPTION = "Source for bristol.gov.uk services for Bristol City Council, UK."
12 URL = "https://bristol.gov.uk"
13
14 TEST_CASES = {
15 "Test_001": {"uprn": "107652"},
16 "Test_002": {"uprn": "2987"},
17 "Test_003": {"uprn": 17929},
18 }
19 ICON_MAP = {
20 "90L BLUE SACK": "mdi:recycle",
21 "240L GARDEN WASTE BIN": "mdi:leaf",
22 "180L GENERAL WASTE": "mdi:trash-can",
23 "45L BLACK RECYCLING BOX": "mdi:recycle",
24 "23L FOOD WASTE BIN": "mdi:food",
25 "55L GREEN RECYCLING BOX": "mdi:recycle",
26 "140L FOOD WASTE BIN": "mdi:food",
27 "240L RECYCLING MIXED GLASS": "mdi:bottle-wine",
28 "240L RECYCLING PAPER": "mdi:newspaper",
29 "1100L GENERAL WASTE": "mdi:trash-can",
30 "1100L RECYCLING CARD": "mdi:package-varient",
31 "360L RECYCLING PLASTIC/CANS": "mdi:bottle-soda-classic",
32 }
33 HEADERS = {
34 "Accept": "*/*",
35 "Accept-Language": "en-GB,en;q=0.9",
36 "Connection": "keep-alive",
37 "Ocp-Apim-Subscription-Key": "47ffd667d69c4a858f92fc38dc24b150",
38 "Ocp-Apim-Trace": "true",
39 "Origin": "https://bristolcouncil.powerappsportals.com",
40 "Referer": "https://bristolcouncil.powerappsportals.com/",
41 "Sec-Fetch-Dest": "empty",
42 "Sec-Fetch-Mode": "cors",
43 "Sec-Fetch-Site": "cross-site",
44 "Sec-GPC": "1",
45 "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36",
46 }
47
48
49 class Source:
50 def __init__(self, uprn):
51 self._uprn = str(uprn).zfill(12)
52
53 def fetch(self):
54 s = requests.Session()
55
56 # Initialise form
57 payload = {"servicetypeid": "7dce896c-b3ba-ea11-a812-000d3a7f1cdc"}
58 response = s.get(
59 "https://bristolcouncil.powerappsportals.com/completedynamicformunauth/",
60 headers=HEADERS,
61 params=payload,
62 )
63
64 # Set the search criteria
65 payload = {"Uprn": "UPRN" + self._uprn}
66 response = s.post(
67 "https://bcprdapidyna002.azure-api.net/bcprdfundyna001-llpg/DetailedLLPG",
68 headers=HEADERS,
69 json=payload,
70 )
71
72 # Retrieve the schedule
73 payload = {"uprn": self._uprn}
74 response = s.post(
75 "https://bcprdapidyna002.azure-api.net/bcprdfundyna001-alloy/NextCollectionDates",
76 headers=HEADERS,
77 json=payload,
78 )
79 data = response.json()["data"]
80
81 entries = []
82 for item in data:
83 for collection in item["collection"]:
84 for collection_date_key in ["nextCollectionDate", "lastCollectionDate"]:
85 date_string = collection[collection_date_key].split("T")[0]
86 entries.append(
87 Collection(
88 date=datetime.strptime(
89 date_string,
90 "%Y-%m-%d",
91 ).date(),
92 t=item["containerName"],
93 icon=ICON_MAP.get(item["containerName"].upper()),
94 )
95 )
96
97 return entries
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/basildon_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/basildon_gov_uk.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/basildon_gov_uk.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/basildon_gov_uk.py
@@ -23,7 +23,7 @@
"general_waste": "mdi:trash-can",
"food_waste": "mdi:food",
"glass_waste": "mdi:bottle-wine",
- "papercard_waste": "mdi:package-varient",
+ "papercard_waste": "mdi:package-variant",
"plasticcans_waste": "mdi:bottle-soda-classic",
}
NAME_MAP = {
diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/bristol_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/bristol_gov_uk.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/bristol_gov_uk.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/bristol_gov_uk.py
@@ -27,7 +27,7 @@
"240L RECYCLING MIXED GLASS": "mdi:bottle-wine",
"240L RECYCLING PAPER": "mdi:newspaper",
"1100L GENERAL WASTE": "mdi:trash-can",
- "1100L RECYCLING CARD": "mdi:package-varient",
+ "1100L RECYCLING CARD": "mdi:package-variant",
"360L RECYCLING PLASTIC/CANS": "mdi:bottle-soda-classic",
}
HEADERS = {
|
{"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/basildon_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/basildon_gov_uk.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/basildon_gov_uk.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/basildon_gov_uk.py\n@@ -23,7 +23,7 @@\n \"general_waste\": \"mdi:trash-can\",\n \"food_waste\": \"mdi:food\",\n \"glass_waste\": \"mdi:bottle-wine\",\n- \"papercard_waste\": \"mdi:package-varient\",\n+ \"papercard_waste\": \"mdi:package-variant\",\n \"plasticcans_waste\": \"mdi:bottle-soda-classic\",\n }\n NAME_MAP = {\ndiff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/bristol_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/bristol_gov_uk.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/bristol_gov_uk.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/bristol_gov_uk.py\n@@ -27,7 +27,7 @@\n \"240L RECYCLING MIXED GLASS\": \"mdi:bottle-wine\",\n \"240L RECYCLING PAPER\": \"mdi:newspaper\",\n \"1100L GENERAL WASTE\": \"mdi:trash-can\",\n- \"1100L RECYCLING CARD\": \"mdi:package-varient\",\n+ \"1100L RECYCLING CARD\": \"mdi:package-variant\",\n \"360L RECYCLING PLASTIC/CANS\": \"mdi:bottle-soda-classic\",\n }\n HEADERS = {\n", "issue": "[Bug]: Typo in icon specification: \"varient\" should be \"variant\"\n### I Have A Problem With:\n\nThe integration in general\n\n### What's Your Problem\n\nThere is no Material Design Icon with the spelling \"varient\", it should be \"variant\".\r\n\r\nIn the following files:\r\n* `custom_components/waste_collection_schedule/waste_collection_schedule/source/basildon_gov_uk.py`\r\n* `custom_components/waste_collection_schedule/waste_collection_schedule/source/bristol_gov_uk.py`\r\n\r\n\r\n\n\n### Source (if relevant)\n\n_No response_\n\n### Logs\n\n_No response_\n\n### Relevant Configuration\n\n_No response_\n\n### Checklist Source Error\n\n- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [ ] Checked that the website of your service provider is still working\n- [ ] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [ ] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "# Credit where it's due:\n# This is predominantly a refactoring of the Bristol City Council script from the UKBinCollectionData repo\n# https://github.com/robbrad/UKBinCollectionData\n\nfrom datetime import datetime\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Basildon Council\"\nDESCRIPTION = \"Source for basildon.gov.uk services for Basildon Council, 
UK.\"\nURL = \"https://basildon.gov.uk\"\n\nTEST_CASES = {\n \"Test_Addres_001\": {\"postcode\": \"CM111BJ\", \"address\": \"6, HEADLEY ROAD\"},\n \"Test_Addres_002\": {\"postcode\": \"SS14 1QU\", \"address\": \"25 LONG RIDING\"},\n \"Test_UPRN_001\": {\"uprn\": \"100090277795\"},\n \"Test_UPRN_002\": {\"uprn\": 10024197625},\n \"Test_UPRN_003\": {\"uprn\": \"10090455610\"},\n}\nICON_MAP = {\n \"green_waste\": \"mdi:leaf\",\n \"general_waste\": \"mdi:trash-can\",\n \"food_waste\": \"mdi:food\",\n \"glass_waste\": \"mdi:bottle-wine\",\n \"papercard_waste\": \"mdi:package-varient\",\n \"plasticcans_waste\": \"mdi:bottle-soda-classic\",\n}\nNAME_MAP = {\n \"green_waste\": \"Garden\",\n \"general_waste\": \"General\",\n \"food_waste\": \"Food\",\n \"glass_waste\": \"Glass\",\n \"papercard_waste\": \"Paper/Cardboard\",\n \"plasticcans_waste\": \"Plastic/Cans\",\n}\nHEADERS = {\n \"Accept\": \"*/*\",\n \"Accept-Language\": \"en-GB,en;q=0.9\",\n \"Connection\": \"keep-alive\",\n \"Ocp-Apim-Trace\": \"true\",\n \"Origin\": \"https://mybasildon.powerappsportals.com\",\n \"Referer\": \"https://mybasildon.powerappsportals.com/\",\n \"Sec-Fetch-Dest\": \"empty\",\n \"Sec-Fetch-Mode\": \"cors\",\n \"Sec-Fetch-Site\": \"cross-site\",\n \"Sec-GPC\": \"1\",\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36\",\n}\n\n\nclass Source:\n def __init__(self, postcode=None, address=None, uprn=None):\n if uprn is None and (postcode is None or address is None):\n raise ValueError(\"Either uprn or postcode and address must be provided\")\n\n self._uprn = str(uprn).zfill(12) if uprn is not None else None\n self._postcode = postcode\n self._address = address\n\n def compare_address(self, address) -> bool:\n return (\n self._address.replace(\",\", \"\").replace(\" \", \"\").upper()\n == address.replace(\",\", \"\").replace(\" \", \"\").upper()\n )\n\n def get_uprn(self, s):\n r = s.post(\n \"https://basildonportal.azurewebsites.net/api/listPropertiesByPostcode\",\n headers=HEADERS,\n json={\"postcode\": self._postcode},\n )\n r.raise_for_status()\n data = r.json()\n if data[\"result\"] != \"success\":\n raise ValueError(\"Invalid postcode\")\n for item in data[\"properties\"]:\n if self.compare_address(item[\"line1\"]):\n self._uprn = item[\"uprn\"]\n break\n if self._uprn is None:\n raise ValueError(\"Invalid address\")\n\n def fetch(self):\n s = requests.Session()\n if self._uprn is None:\n self.get_uprn(s)\n\n # Retrieve the schedule\n payload = {\"uprn\": self._uprn}\n response = s.post(\n \"https://basildonportal.azurewebsites.net/api/getPropertyRefuseInformation\",\n headers=HEADERS,\n json=payload,\n )\n data = response.json()[\"refuse\"][\"available_services\"]\n entries = []\n for item in ICON_MAP:\n for collection_date_key in [\n \"current_collection_\",\n \"next_collection_\",\n \"last_collection_\",\n ]:\n if data[item][collection_date_key + \"active\"]:\n date_string = data[item][collection_date_key + \"date\"]\n entries.append(\n Collection(\n date=datetime.strptime(\n date_string,\n \"%Y-%m-%d\",\n ).date(),\n t=NAME_MAP[item],\n icon=ICON_MAP.get(item),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/basildon_gov_uk.py"}, {"content": "# Credit where it's due:\n# This is predominantly a refactoring of the Bristol City Council script from the UKBinCollectionData repo\n# https://github.com/robbrad/UKBinCollectionData\n\nfrom datetime import 
datetime\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Bristol City Council\"\nDESCRIPTION = \"Source for bristol.gov.uk services for Bristol City Council, UK.\"\nURL = \"https://bristol.gov.uk\"\n\nTEST_CASES = {\n \"Test_001\": {\"uprn\": \"107652\"},\n \"Test_002\": {\"uprn\": \"2987\"},\n \"Test_003\": {\"uprn\": 17929},\n}\nICON_MAP = {\n \"90L BLUE SACK\": \"mdi:recycle\",\n \"240L GARDEN WASTE BIN\": \"mdi:leaf\",\n \"180L GENERAL WASTE\": \"mdi:trash-can\",\n \"45L BLACK RECYCLING BOX\": \"mdi:recycle\",\n \"23L FOOD WASTE BIN\": \"mdi:food\",\n \"55L GREEN RECYCLING BOX\": \"mdi:recycle\",\n \"140L FOOD WASTE BIN\": \"mdi:food\",\n \"240L RECYCLING MIXED GLASS\": \"mdi:bottle-wine\",\n \"240L RECYCLING PAPER\": \"mdi:newspaper\",\n \"1100L GENERAL WASTE\": \"mdi:trash-can\",\n \"1100L RECYCLING CARD\": \"mdi:package-varient\",\n \"360L RECYCLING PLASTIC/CANS\": \"mdi:bottle-soda-classic\",\n}\nHEADERS = {\n \"Accept\": \"*/*\",\n \"Accept-Language\": \"en-GB,en;q=0.9\",\n \"Connection\": \"keep-alive\",\n \"Ocp-Apim-Subscription-Key\": \"47ffd667d69c4a858f92fc38dc24b150\",\n \"Ocp-Apim-Trace\": \"true\",\n \"Origin\": \"https://bristolcouncil.powerappsportals.com\",\n \"Referer\": \"https://bristolcouncil.powerappsportals.com/\",\n \"Sec-Fetch-Dest\": \"empty\",\n \"Sec-Fetch-Mode\": \"cors\",\n \"Sec-Fetch-Site\": \"cross-site\",\n \"Sec-GPC\": \"1\",\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36\",\n}\n\n\nclass Source:\n def __init__(self, uprn):\n self._uprn = str(uprn).zfill(12)\n\n def fetch(self):\n s = requests.Session()\n\n # Initialise form\n payload = {\"servicetypeid\": \"7dce896c-b3ba-ea11-a812-000d3a7f1cdc\"}\n response = s.get(\n \"https://bristolcouncil.powerappsportals.com/completedynamicformunauth/\",\n headers=HEADERS,\n params=payload,\n )\n\n # Set the search criteria\n payload = {\"Uprn\": \"UPRN\" + self._uprn}\n response = s.post(\n \"https://bcprdapidyna002.azure-api.net/bcprdfundyna001-llpg/DetailedLLPG\",\n headers=HEADERS,\n json=payload,\n )\n\n # Retrieve the schedule\n payload = {\"uprn\": self._uprn}\n response = s.post(\n \"https://bcprdapidyna002.azure-api.net/bcprdfundyna001-alloy/NextCollectionDates\",\n headers=HEADERS,\n json=payload,\n )\n data = response.json()[\"data\"]\n\n entries = []\n for item in data:\n for collection in item[\"collection\"]:\n for collection_date_key in [\"nextCollectionDate\", \"lastCollectionDate\"]:\n date_string = collection[collection_date_key].split(\"T\")[0]\n entries.append(\n Collection(\n date=datetime.strptime(\n date_string,\n \"%Y-%m-%d\",\n ).date(),\n t=item[\"containerName\"],\n icon=ICON_MAP.get(item[\"containerName\"].upper()),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/bristol_gov_uk.py"}], "after_files": [{"content": "# Credit where it's due:\n# This is predominantly a refactoring of the Bristol City Council script from the UKBinCollectionData repo\n# https://github.com/robbrad/UKBinCollectionData\n\nfrom datetime import datetime\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Basildon Council\"\nDESCRIPTION = \"Source for basildon.gov.uk services for Basildon Council, UK.\"\nURL = \"https://basildon.gov.uk\"\n\nTEST_CASES = {\n \"Test_Addres_001\": {\"postcode\": \"CM111BJ\", \"address\": \"6, HEADLEY ROAD\"},\n 
\"Test_Addres_002\": {\"postcode\": \"SS14 1QU\", \"address\": \"25 LONG RIDING\"},\n \"Test_UPRN_001\": {\"uprn\": \"100090277795\"},\n \"Test_UPRN_002\": {\"uprn\": 10024197625},\n \"Test_UPRN_003\": {\"uprn\": \"10090455610\"},\n}\nICON_MAP = {\n \"green_waste\": \"mdi:leaf\",\n \"general_waste\": \"mdi:trash-can\",\n \"food_waste\": \"mdi:food\",\n \"glass_waste\": \"mdi:bottle-wine\",\n \"papercard_waste\": \"mdi:package-variant\",\n \"plasticcans_waste\": \"mdi:bottle-soda-classic\",\n}\nNAME_MAP = {\n \"green_waste\": \"Garden\",\n \"general_waste\": \"General\",\n \"food_waste\": \"Food\",\n \"glass_waste\": \"Glass\",\n \"papercard_waste\": \"Paper/Cardboard\",\n \"plasticcans_waste\": \"Plastic/Cans\",\n}\nHEADERS = {\n \"Accept\": \"*/*\",\n \"Accept-Language\": \"en-GB,en;q=0.9\",\n \"Connection\": \"keep-alive\",\n \"Ocp-Apim-Trace\": \"true\",\n \"Origin\": \"https://mybasildon.powerappsportals.com\",\n \"Referer\": \"https://mybasildon.powerappsportals.com/\",\n \"Sec-Fetch-Dest\": \"empty\",\n \"Sec-Fetch-Mode\": \"cors\",\n \"Sec-Fetch-Site\": \"cross-site\",\n \"Sec-GPC\": \"1\",\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36\",\n}\n\n\nclass Source:\n def __init__(self, postcode=None, address=None, uprn=None):\n if uprn is None and (postcode is None or address is None):\n raise ValueError(\"Either uprn or postcode and address must be provided\")\n\n self._uprn = str(uprn).zfill(12) if uprn is not None else None\n self._postcode = postcode\n self._address = address\n\n def compare_address(self, address) -> bool:\n return (\n self._address.replace(\",\", \"\").replace(\" \", \"\").upper()\n == address.replace(\",\", \"\").replace(\" \", \"\").upper()\n )\n\n def get_uprn(self, s):\n r = s.post(\n \"https://basildonportal.azurewebsites.net/api/listPropertiesByPostcode\",\n headers=HEADERS,\n json={\"postcode\": self._postcode},\n )\n r.raise_for_status()\n data = r.json()\n if data[\"result\"] != \"success\":\n raise ValueError(\"Invalid postcode\")\n for item in data[\"properties\"]:\n if self.compare_address(item[\"line1\"]):\n self._uprn = item[\"uprn\"]\n break\n if self._uprn is None:\n raise ValueError(\"Invalid address\")\n\n def fetch(self):\n s = requests.Session()\n if self._uprn is None:\n self.get_uprn(s)\n\n # Retrieve the schedule\n payload = {\"uprn\": self._uprn}\n response = s.post(\n \"https://basildonportal.azurewebsites.net/api/getPropertyRefuseInformation\",\n headers=HEADERS,\n json=payload,\n )\n data = response.json()[\"refuse\"][\"available_services\"]\n entries = []\n for item in ICON_MAP:\n for collection_date_key in [\n \"current_collection_\",\n \"next_collection_\",\n \"last_collection_\",\n ]:\n if data[item][collection_date_key + \"active\"]:\n date_string = data[item][collection_date_key + \"date\"]\n entries.append(\n Collection(\n date=datetime.strptime(\n date_string,\n \"%Y-%m-%d\",\n ).date(),\n t=NAME_MAP[item],\n icon=ICON_MAP.get(item),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/basildon_gov_uk.py"}, {"content": "# Credit where it's due:\n# This is predominantly a refactoring of the Bristol City Council script from the UKBinCollectionData repo\n# https://github.com/robbrad/UKBinCollectionData\n\nfrom datetime import datetime\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Bristol City Council\"\nDESCRIPTION = 
\"Source for bristol.gov.uk services for Bristol City Council, UK.\"\nURL = \"https://bristol.gov.uk\"\n\nTEST_CASES = {\n \"Test_001\": {\"uprn\": \"107652\"},\n \"Test_002\": {\"uprn\": \"2987\"},\n \"Test_003\": {\"uprn\": 17929},\n}\nICON_MAP = {\n \"90L BLUE SACK\": \"mdi:recycle\",\n \"240L GARDEN WASTE BIN\": \"mdi:leaf\",\n \"180L GENERAL WASTE\": \"mdi:trash-can\",\n \"45L BLACK RECYCLING BOX\": \"mdi:recycle\",\n \"23L FOOD WASTE BIN\": \"mdi:food\",\n \"55L GREEN RECYCLING BOX\": \"mdi:recycle\",\n \"140L FOOD WASTE BIN\": \"mdi:food\",\n \"240L RECYCLING MIXED GLASS\": \"mdi:bottle-wine\",\n \"240L RECYCLING PAPER\": \"mdi:newspaper\",\n \"1100L GENERAL WASTE\": \"mdi:trash-can\",\n \"1100L RECYCLING CARD\": \"mdi:package-variant\",\n \"360L RECYCLING PLASTIC/CANS\": \"mdi:bottle-soda-classic\",\n}\nHEADERS = {\n \"Accept\": \"*/*\",\n \"Accept-Language\": \"en-GB,en;q=0.9\",\n \"Connection\": \"keep-alive\",\n \"Ocp-Apim-Subscription-Key\": \"47ffd667d69c4a858f92fc38dc24b150\",\n \"Ocp-Apim-Trace\": \"true\",\n \"Origin\": \"https://bristolcouncil.powerappsportals.com\",\n \"Referer\": \"https://bristolcouncil.powerappsportals.com/\",\n \"Sec-Fetch-Dest\": \"empty\",\n \"Sec-Fetch-Mode\": \"cors\",\n \"Sec-Fetch-Site\": \"cross-site\",\n \"Sec-GPC\": \"1\",\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36\",\n}\n\n\nclass Source:\n def __init__(self, uprn):\n self._uprn = str(uprn).zfill(12)\n\n def fetch(self):\n s = requests.Session()\n\n # Initialise form\n payload = {\"servicetypeid\": \"7dce896c-b3ba-ea11-a812-000d3a7f1cdc\"}\n response = s.get(\n \"https://bristolcouncil.powerappsportals.com/completedynamicformunauth/\",\n headers=HEADERS,\n params=payload,\n )\n\n # Set the search criteria\n payload = {\"Uprn\": \"UPRN\" + self._uprn}\n response = s.post(\n \"https://bcprdapidyna002.azure-api.net/bcprdfundyna001-llpg/DetailedLLPG\",\n headers=HEADERS,\n json=payload,\n )\n\n # Retrieve the schedule\n payload = {\"uprn\": self._uprn}\n response = s.post(\n \"https://bcprdapidyna002.azure-api.net/bcprdfundyna001-alloy/NextCollectionDates\",\n headers=HEADERS,\n json=payload,\n )\n data = response.json()[\"data\"]\n\n entries = []\n for item in data:\n for collection in item[\"collection\"]:\n for collection_date_key in [\"nextCollectionDate\", \"lastCollectionDate\"]:\n date_string = collection[collection_date_key].split(\"T\")[0]\n entries.append(\n Collection(\n date=datetime.strptime(\n date_string,\n \"%Y-%m-%d\",\n ).date(),\n t=item[\"containerName\"],\n icon=ICON_MAP.get(item[\"containerName\"].upper()),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/bristol_gov_uk.py"}]}
| 3,224 | 427 |
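Since the fix above is purely a string correction, a lightweight guard against the same class of typo is a check that every `ICON_MAP` value belongs to a list of Material Design Icon names known to be valid. A rough sketch; the `KNOWN_MDI_ICONS` set and the standalone usage are assumptions for illustration, not part of the repository's test suite:

```python
# Hypothetical helper: KNOWN_MDI_ICONS is a hand-maintained allow-list,
# not something the integration ships with.
KNOWN_MDI_ICONS = {
    "mdi:leaf", "mdi:trash-can", "mdi:food", "mdi:bottle-wine",
    "mdi:package-variant", "mdi:bottle-soda-classic", "mdi:recycle",
    "mdi:newspaper",
}


def unknown_icons(icon_map: dict) -> list:
    """Return icon names from the map that are not in the allow-list."""
    return [icon for icon in icon_map.values() if icon not in KNOWN_MDI_ICONS]


if __name__ == "__main__":
    # With the pre-fix map this reports the misspelled entry.
    sample_map = {"papercard_waste": "mdi:package-varient"}
    print(unknown_icons(sample_map))  # ['mdi:package-varient']
```

Run against the corrected maps from both source files, the same check would come back empty.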
gh_patches_debug_3332
|
rasdani/github-patches
|
git_diff
|
electricitymaps__electricitymaps-contrib-3241
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NL Down
ENTSO-E data seems fine, so is there a problem with energieopwek.nl?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/NL.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import arrow
4 import math
5
6 from . import statnett
7 from . import ENTSOE
8 from . import DK
9 import logging
10 import pandas as pd
11 import requests
12
13
14 def fetch_production(zone_key='NL', session=None, target_datetime=None,
15 logger=logging.getLogger(__name__), energieopwek_nl=True):
16 if target_datetime is None:
17 target_datetime = arrow.utcnow()
18 else:
19 target_datetime = arrow.get(target_datetime)
20 r = session or requests.session()
21
22 consumptions = ENTSOE.fetch_consumption(zone_key=zone_key,
23 session=r,
24 target_datetime=target_datetime,
25 logger=logger)
26 if not consumptions:
27 return
28 for c in consumptions:
29 del c['source']
30 df_consumptions = pd.DataFrame.from_dict(consumptions).set_index(
31 'datetime')
32
33 # NL has exchanges with BE, DE, NO, GB, DK-DK1
34 exchanges = []
35 for exchange_key in ['BE', 'DE', 'GB']:
36 zone_1, zone_2 = sorted([exchange_key, zone_key])
37 exchange = ENTSOE.fetch_exchange(zone_key1=zone_1,
38 zone_key2=zone_2,
39 session=r,
40 target_datetime=target_datetime,
41 logger=logger)
42 if not exchange:
43 return
44 exchanges.extend(exchange or [])
45
46 # add NO data, fetch once for every hour
47 # This introduces an error, because it doesn't use the average power flow
48 # during the hour, but rather only the value during the first minute of the
49 # hour!
50 zone_1, zone_2 = sorted(['NO', zone_key])
51 exchange_NO = [statnett.fetch_exchange(zone_key1=zone_1, zone_key2=zone_2,
52 session=r, target_datetime=dt.datetime,
53 logger=logger)
54 for dt in arrow.Arrow.range(
55 'hour',
56 arrow.get(min([e['datetime']
57 for e in exchanges])).replace(minute=0),
58 arrow.get(max([e['datetime']
59 for e in exchanges])).replace(minute=0))]
60 exchanges.extend(exchange_NO)
61
62 # add DK1 data (only for dates after operation)
63 if target_datetime > arrow.get('2019-08-24', 'YYYY-MM-DD') :
64 zone_1, zone_2 = sorted(['DK-DK1', zone_key])
65 df_dk = pd.DataFrame(DK.fetch_exchange(zone_key1=zone_1, zone_key2=zone_2,
66 session=r, target_datetime=target_datetime,
67 logger=logger))
68
69 # Because other exchanges and consumption data is only available per hour
70 # we floor the timpstamp to hour and group by hour with averaging of netFlow
71 df_dk['datetime'] = df_dk['datetime'].dt.floor('H')
72 exchange_DK = df_dk.groupby(['datetime']).aggregate({'netFlow' : 'mean',
73 'sortedZoneKeys': 'max', 'source' : 'max'}).reset_index()
74
75 # because averaging with high precision numbers leads to rounding errors
76 exchange_DK = exchange_DK.round({'netFlow': 3})
77
78 exchanges.extend(exchange_DK.to_dict(orient='records'))
79
80 # We want to know the net-imports into NL, so if NL is in zone_1 we need
81 # to flip the direction of the flow. E.g. 100MW for NL->DE means 100MW
82 # export to DE and needs to become -100MW for import to NL.
83 for e in exchanges:
84 if(e['sortedZoneKeys'].startswith('NL->')):
85 e['NL_import'] = -1 * e['netFlow']
86 else:
87 e['NL_import'] = e['netFlow']
88 del e['source']
89 del e['netFlow']
90
91 df_exchanges = pd.DataFrame.from_dict(exchanges).set_index('datetime')
92 # Sum all exchanges to NL imports
93 df_exchanges = df_exchanges.groupby('datetime').sum()
94
95 # Fill missing values by propagating the value forward
96 df_consumptions_with_exchanges = df_consumptions.join(df_exchanges).fillna(
97 method='ffill', limit=3) # Limit to 3 x 15min
98
99 # Load = Generation + netImports
100 # => Generation = Load - netImports
101 df_total_generations = (df_consumptions_with_exchanges['consumption']
102 - df_consumptions_with_exchanges['NL_import'])
103
104 # Fetch all production
105 # The energieopwek_nl parser is backwards compatible with ENTSOE parser.
106 # Because of data quality issues we switch to using energieopwek, but if
107 # data quality of ENTSOE improves we can switch back to using a single
108 # source.
109 productions_ENTSOE = ENTSOE.fetch_production(zone_key=zone_key, session=r,
110 target_datetime=target_datetime, logger=logger)
111 if energieopwek_nl:
112 productions_eopwek = fetch_production_energieopwek_nl(session=r,
113 target_datetime=target_datetime, logger=logger)
114 # For every production value we look up the corresponding ENTSOE
115 # values and copy the nuclear, gas, coal, biomass and unknown production.
116 productions = []
117 for p in productions_eopwek:
118 entsoe_value = next((pe for pe in productions_ENTSOE
119 if pe["datetime"] == p["datetime"]), None)
120 if entsoe_value:
121 p["production"]["nuclear"] = entsoe_value["production"]["nuclear"]
122 p["production"]["gas"] = entsoe_value["production"]["gas"]
123 p["production"]["coal"] = entsoe_value["production"]["coal"]
124 p["production"]["biomass"] = entsoe_value["production"]["biomass"]
125 p["production"]["unknown"] = entsoe_value["production"]["unknown"]
126 productions.append(p)
127 else:
128 productions = productions_ENTSOE
129 if not productions:
130 return
131
132 # Flatten production dictionaries (we ignore storage)
133 for p in productions:
134 # if for some reason theré's no unknown value
135 if not 'unknown' in p['production'] or p['production']['unknown'] == None:
136 p['production']['unknown'] = 0
137
138 Z = sum([x or 0 for x in p['production'].values()])
139 # Only calculate the difference if the datetime exists
140 # If total ENTSOE reported production (Z) is less than total generation
141 # (calculated from consumption and imports), then there must be some
142 # unknown production missing, so we add the difference.
143 # The difference can actually be negative, because consumption is based
144 # on TSO network load, but locally generated electricity may never leave
145 # the DSO network and be substantial (e.g. Solar).
146 if p['datetime'] in df_total_generations and Z < df_total_generations[p['datetime']]:
147 p['production']['unknown'] = round((
148 df_total_generations[p['datetime']] - Z + p['production']['unknown']), 3)
149
150 # Filter invalid
151 # We should probably add logging to this
152 return [p for p in productions if p['production']['unknown'] > 0]
153
154
155 def fetch_production_energieopwek_nl(session=None, target_datetime=None,
156 logger=logging.getLogger(__name__)) -> list:
157 if target_datetime is None:
158 target_datetime = arrow.utcnow()
159
160 # Get production values for target and target-1 day
161 df_current = get_production_data_energieopwek(
162 target_datetime, session=session)
163 df_previous = get_production_data_energieopwek(
164 target_datetime.shift(days=-1), session=session)
165
166 # Concat them, oldest first to keep chronological order intact
167 df = pd.concat([df_previous, df_current])
168
169 output = []
170 base_time = arrow.get(target_datetime.date(), 'Europe/Paris').shift(days=-1).to('utc')
171
172 for i, prod in enumerate(df.to_dict(orient='records')):
173 output.append(
174 {
175 'zoneKey': 'NL',
176 'datetime': base_time.shift(minutes=i*15).datetime,
177 'production': prod,
178 'source': 'energieopwek.nl, entsoe.eu'
179 }
180 )
181 return output
182
183 def get_production_data_energieopwek(date, session=None):
184 r = session or requests.session()
185
186 # The API returns values per day from local time midnight until the last
187 # round 10 minutes if the requested date is today or for the entire day if
188 # it's in the past. 'sid' can be anything.
189 url = 'http://energieopwek.nl/jsonData.php?sid=2ecde3&Day=%s' % date.format('YYYY-MM-DD')
190 response = r.get(url)
191 obj = response.json()
192 production_input = obj['TenMin']['Country']
193
194 # extract the power values in kW from the different production types
195 # we only need column 0, 1 and 3 contain energy sum values
196 df_solar = pd.DataFrame(production_input['Solar']) .drop(['1','3'], axis=1).astype(int).rename(columns={"0" : "solar"})
197 df_offshore = pd.DataFrame(production_input['WindOffshore']).drop(['1','3'], axis=1).astype(int)
198 df_onshore = pd.DataFrame(production_input['Wind']) .drop(['1','3'], axis=1).astype(int)
199
200 # We don't differentiate between onshore and offshore wind so we sum them
201 # toghether and build a single data frame with named columns
202 df_wind = df_onshore.add(df_offshore).rename(columns={"0": "wind"})
203 df = pd.concat([df_solar, df_wind], axis=1)
204
205 # resample from 10min resolution to 15min resolution to align with ENTSOE data
206 # we duplicate every row and then group them per 3 and take the mean
207 df = pd.concat([df]*2).sort_index(axis=0).reset_index(drop=True).groupby(by=lambda x : math.floor(x/3)).mean()
208
209 # Convert kW to MW with kW resolution
210 df = df.apply(lambda x: round(x / 1000, 3))
211
212 return df
213
214 if __name__ == '__main__':
215 print(fetch_production())
216
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsers/NL.py b/parsers/NL.py
--- a/parsers/NL.py
+++ b/parsers/NL.py
@@ -12,7 +12,7 @@
def fetch_production(zone_key='NL', session=None, target_datetime=None,
- logger=logging.getLogger(__name__), energieopwek_nl=True):
+ logger=logging.getLogger(__name__), energieopwek_nl=False):
if target_datetime is None:
target_datetime = arrow.utcnow()
else:
|
{"golden_diff": "diff --git a/parsers/NL.py b/parsers/NL.py\n--- a/parsers/NL.py\n+++ b/parsers/NL.py\n@@ -12,7 +12,7 @@\n \n \n def fetch_production(zone_key='NL', session=None, target_datetime=None,\n- logger=logging.getLogger(__name__), energieopwek_nl=True):\n+ logger=logging.getLogger(__name__), energieopwek_nl=False):\n if target_datetime is None:\n target_datetime = arrow.utcnow()\n else:\n", "issue": "NL Down\nENTSO-E data seems fine, so is there a problem with energieopwek.nl?\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport arrow\nimport math\n\nfrom . import statnett\nfrom . import ENTSOE\nfrom . import DK\nimport logging\nimport pandas as pd\nimport requests\n\n\ndef fetch_production(zone_key='NL', session=None, target_datetime=None,\n logger=logging.getLogger(__name__), energieopwek_nl=True):\n if target_datetime is None:\n target_datetime = arrow.utcnow()\n else:\n target_datetime = arrow.get(target_datetime)\n r = session or requests.session()\n\n consumptions = ENTSOE.fetch_consumption(zone_key=zone_key,\n session=r,\n target_datetime=target_datetime,\n logger=logger)\n if not consumptions:\n return\n for c in consumptions:\n del c['source']\n df_consumptions = pd.DataFrame.from_dict(consumptions).set_index(\n 'datetime')\n\n # NL has exchanges with BE, DE, NO, GB, DK-DK1\n exchanges = []\n for exchange_key in ['BE', 'DE', 'GB']:\n zone_1, zone_2 = sorted([exchange_key, zone_key])\n exchange = ENTSOE.fetch_exchange(zone_key1=zone_1,\n zone_key2=zone_2,\n session=r,\n target_datetime=target_datetime,\n logger=logger)\n if not exchange:\n return\n exchanges.extend(exchange or [])\n\n # add NO data, fetch once for every hour\n # This introduces an error, because it doesn't use the average power flow\n # during the hour, but rather only the value during the first minute of the\n # hour!\n zone_1, zone_2 = sorted(['NO', zone_key])\n exchange_NO = [statnett.fetch_exchange(zone_key1=zone_1, zone_key2=zone_2,\n session=r, target_datetime=dt.datetime,\n logger=logger)\n for dt in arrow.Arrow.range(\n 'hour',\n arrow.get(min([e['datetime']\n for e in exchanges])).replace(minute=0),\n arrow.get(max([e['datetime']\n for e in exchanges])).replace(minute=0))]\n exchanges.extend(exchange_NO)\n\n # add DK1 data (only for dates after operation)\n if target_datetime > arrow.get('2019-08-24', 'YYYY-MM-DD') :\n zone_1, zone_2 = sorted(['DK-DK1', zone_key])\n df_dk = pd.DataFrame(DK.fetch_exchange(zone_key1=zone_1, zone_key2=zone_2,\n session=r, target_datetime=target_datetime,\n logger=logger))\n\n # Because other exchanges and consumption data is only available per hour\n # we floor the timpstamp to hour and group by hour with averaging of netFlow\n df_dk['datetime'] = df_dk['datetime'].dt.floor('H')\n exchange_DK = df_dk.groupby(['datetime']).aggregate({'netFlow' : 'mean', \n 'sortedZoneKeys': 'max', 'source' : 'max'}).reset_index()\n\n # because averaging with high precision numbers leads to rounding errors\n exchange_DK = exchange_DK.round({'netFlow': 3})\n\n exchanges.extend(exchange_DK.to_dict(orient='records'))\n\n # We want to know the net-imports into NL, so if NL is in zone_1 we need\n # to flip the direction of the flow. E.g. 
100MW for NL->DE means 100MW\n # export to DE and needs to become -100MW for import to NL.\n for e in exchanges:\n if(e['sortedZoneKeys'].startswith('NL->')):\n e['NL_import'] = -1 * e['netFlow']\n else:\n e['NL_import'] = e['netFlow']\n del e['source']\n del e['netFlow']\n\n df_exchanges = pd.DataFrame.from_dict(exchanges).set_index('datetime')\n # Sum all exchanges to NL imports\n df_exchanges = df_exchanges.groupby('datetime').sum()\n\n # Fill missing values by propagating the value forward\n df_consumptions_with_exchanges = df_consumptions.join(df_exchanges).fillna(\n method='ffill', limit=3) # Limit to 3 x 15min\n\n # Load = Generation + netImports\n # => Generation = Load - netImports\n df_total_generations = (df_consumptions_with_exchanges['consumption']\n - df_consumptions_with_exchanges['NL_import'])\n\n # Fetch all production\n # The energieopwek_nl parser is backwards compatible with ENTSOE parser.\n # Because of data quality issues we switch to using energieopwek, but if\n # data quality of ENTSOE improves we can switch back to using a single\n # source.\n productions_ENTSOE = ENTSOE.fetch_production(zone_key=zone_key, session=r,\n target_datetime=target_datetime, logger=logger)\n if energieopwek_nl:\n productions_eopwek = fetch_production_energieopwek_nl(session=r,\n target_datetime=target_datetime, logger=logger)\n # For every production value we look up the corresponding ENTSOE\n # values and copy the nuclear, gas, coal, biomass and unknown production. \n productions = []\n for p in productions_eopwek:\n entsoe_value = next((pe for pe in productions_ENTSOE\n if pe[\"datetime\"] == p[\"datetime\"]), None)\n if entsoe_value:\n p[\"production\"][\"nuclear\"] = entsoe_value[\"production\"][\"nuclear\"]\n p[\"production\"][\"gas\"] = entsoe_value[\"production\"][\"gas\"]\n p[\"production\"][\"coal\"] = entsoe_value[\"production\"][\"coal\"]\n p[\"production\"][\"biomass\"] = entsoe_value[\"production\"][\"biomass\"]\n p[\"production\"][\"unknown\"] = entsoe_value[\"production\"][\"unknown\"]\n productions.append(p)\n else:\n productions = productions_ENTSOE\n if not productions:\n return\n\n # Flatten production dictionaries (we ignore storage)\n for p in productions:\n # if for some reason ther\u00e9's no unknown value\n if not 'unknown' in p['production'] or p['production']['unknown'] == None:\n p['production']['unknown'] = 0\n\n Z = sum([x or 0 for x in p['production'].values()])\n # Only calculate the difference if the datetime exists\n # If total ENTSOE reported production (Z) is less than total generation\n # (calculated from consumption and imports), then there must be some\n # unknown production missing, so we add the difference.\n # The difference can actually be negative, because consumption is based\n # on TSO network load, but locally generated electricity may never leave\n # the DSO network and be substantial (e.g. 
Solar).\n if p['datetime'] in df_total_generations and Z < df_total_generations[p['datetime']]:\n p['production']['unknown'] = round((\n df_total_generations[p['datetime']] - Z + p['production']['unknown']), 3)\n\n # Filter invalid\n # We should probably add logging to this\n return [p for p in productions if p['production']['unknown'] > 0]\n\n\ndef fetch_production_energieopwek_nl(session=None, target_datetime=None,\n logger=logging.getLogger(__name__)) -> list:\n if target_datetime is None:\n target_datetime = arrow.utcnow()\n\n # Get production values for target and target-1 day\n df_current = get_production_data_energieopwek(\n target_datetime, session=session)\n df_previous = get_production_data_energieopwek(\n target_datetime.shift(days=-1), session=session)\n\n # Concat them, oldest first to keep chronological order intact\n df = pd.concat([df_previous, df_current])\n\n output = []\n base_time = arrow.get(target_datetime.date(), 'Europe/Paris').shift(days=-1).to('utc')\n\n for i, prod in enumerate(df.to_dict(orient='records')):\n output.append(\n {\n 'zoneKey': 'NL',\n 'datetime': base_time.shift(minutes=i*15).datetime,\n 'production': prod,\n 'source': 'energieopwek.nl, entsoe.eu'\n }\n )\n return output\n\ndef get_production_data_energieopwek(date, session=None):\n r = session or requests.session()\n\n # The API returns values per day from local time midnight until the last\n # round 10 minutes if the requested date is today or for the entire day if\n # it's in the past. 'sid' can be anything.\n url = 'http://energieopwek.nl/jsonData.php?sid=2ecde3&Day=%s' % date.format('YYYY-MM-DD')\n response = r.get(url)\n obj = response.json()\n production_input = obj['TenMin']['Country']\n\n # extract the power values in kW from the different production types\n # we only need column 0, 1 and 3 contain energy sum values\n df_solar = pd.DataFrame(production_input['Solar']) .drop(['1','3'], axis=1).astype(int).rename(columns={\"0\" : \"solar\"})\n df_offshore = pd.DataFrame(production_input['WindOffshore']).drop(['1','3'], axis=1).astype(int)\n df_onshore = pd.DataFrame(production_input['Wind']) .drop(['1','3'], axis=1).astype(int)\n\n # We don't differentiate between onshore and offshore wind so we sum them\n # toghether and build a single data frame with named columns\n df_wind = df_onshore.add(df_offshore).rename(columns={\"0\": \"wind\"})\n df = pd.concat([df_solar, df_wind], axis=1)\n\n # resample from 10min resolution to 15min resolution to align with ENTSOE data\n # we duplicate every row and then group them per 3 and take the mean\n df = pd.concat([df]*2).sort_index(axis=0).reset_index(drop=True).groupby(by=lambda x : math.floor(x/3)).mean()\n\n # Convert kW to MW with kW resolution\n df = df.apply(lambda x: round(x / 1000, 3))\n\n return df\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/NL.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport arrow\nimport math\n\nfrom . import statnett\nfrom . import ENTSOE\nfrom . 
import DK\nimport logging\nimport pandas as pd\nimport requests\n\n\ndef fetch_production(zone_key='NL', session=None, target_datetime=None,\n logger=logging.getLogger(__name__), energieopwek_nl=False):\n if target_datetime is None:\n target_datetime = arrow.utcnow()\n else:\n target_datetime = arrow.get(target_datetime)\n r = session or requests.session()\n\n consumptions = ENTSOE.fetch_consumption(zone_key=zone_key,\n session=r,\n target_datetime=target_datetime,\n logger=logger)\n if not consumptions:\n return\n for c in consumptions:\n del c['source']\n df_consumptions = pd.DataFrame.from_dict(consumptions).set_index(\n 'datetime')\n\n # NL has exchanges with BE, DE, NO, GB, DK-DK1\n exchanges = []\n for exchange_key in ['BE', 'DE', 'GB']:\n zone_1, zone_2 = sorted([exchange_key, zone_key])\n exchange = ENTSOE.fetch_exchange(zone_key1=zone_1,\n zone_key2=zone_2,\n session=r,\n target_datetime=target_datetime,\n logger=logger)\n if not exchange:\n return\n exchanges.extend(exchange or [])\n\n # add NO data, fetch once for every hour\n # This introduces an error, because it doesn't use the average power flow\n # during the hour, but rather only the value during the first minute of the\n # hour!\n zone_1, zone_2 = sorted(['NO', zone_key])\n exchange_NO = [statnett.fetch_exchange(zone_key1=zone_1, zone_key2=zone_2,\n session=r, target_datetime=dt.datetime,\n logger=logger)\n for dt in arrow.Arrow.range(\n 'hour',\n arrow.get(min([e['datetime']\n for e in exchanges])).replace(minute=0),\n arrow.get(max([e['datetime']\n for e in exchanges])).replace(minute=0))]\n exchanges.extend(exchange_NO)\n\n # add DK1 data (only for dates after operation)\n if target_datetime > arrow.get('2019-08-24', 'YYYY-MM-DD') :\n zone_1, zone_2 = sorted(['DK-DK1', zone_key])\n df_dk = pd.DataFrame(DK.fetch_exchange(zone_key1=zone_1, zone_key2=zone_2,\n session=r, target_datetime=target_datetime,\n logger=logger))\n\n # Because other exchanges and consumption data is only available per hour\n # we floor the timpstamp to hour and group by hour with averaging of netFlow\n df_dk['datetime'] = df_dk['datetime'].dt.floor('H')\n exchange_DK = df_dk.groupby(['datetime']).aggregate({'netFlow' : 'mean', \n 'sortedZoneKeys': 'max', 'source' : 'max'}).reset_index()\n\n # because averaging with high precision numbers leads to rounding errors\n exchange_DK = exchange_DK.round({'netFlow': 3})\n\n exchanges.extend(exchange_DK.to_dict(orient='records'))\n\n # We want to know the net-imports into NL, so if NL is in zone_1 we need\n # to flip the direction of the flow. E.g. 
100MW for NL->DE means 100MW\n # export to DE and needs to become -100MW for import to NL.\n for e in exchanges:\n if(e['sortedZoneKeys'].startswith('NL->')):\n e['NL_import'] = -1 * e['netFlow']\n else:\n e['NL_import'] = e['netFlow']\n del e['source']\n del e['netFlow']\n\n df_exchanges = pd.DataFrame.from_dict(exchanges).set_index('datetime')\n # Sum all exchanges to NL imports\n df_exchanges = df_exchanges.groupby('datetime').sum()\n\n # Fill missing values by propagating the value forward\n df_consumptions_with_exchanges = df_consumptions.join(df_exchanges).fillna(\n method='ffill', limit=3) # Limit to 3 x 15min\n\n # Load = Generation + netImports\n # => Generation = Load - netImports\n df_total_generations = (df_consumptions_with_exchanges['consumption']\n - df_consumptions_with_exchanges['NL_import'])\n\n # Fetch all production\n # The energieopwek_nl parser is backwards compatible with ENTSOE parser.\n # Because of data quality issues we switch to using energieopwek, but if\n # data quality of ENTSOE improves we can switch back to using a single\n # source.\n productions_ENTSOE = ENTSOE.fetch_production(zone_key=zone_key, session=r,\n target_datetime=target_datetime, logger=logger)\n if energieopwek_nl:\n productions_eopwek = fetch_production_energieopwek_nl(session=r,\n target_datetime=target_datetime, logger=logger)\n # For every production value we look up the corresponding ENTSOE\n # values and copy the nuclear, gas, coal, biomass and unknown production. \n productions = []\n for p in productions_eopwek:\n entsoe_value = next((pe for pe in productions_ENTSOE\n if pe[\"datetime\"] == p[\"datetime\"]), None)\n if entsoe_value:\n p[\"production\"][\"nuclear\"] = entsoe_value[\"production\"][\"nuclear\"]\n p[\"production\"][\"gas\"] = entsoe_value[\"production\"][\"gas\"]\n p[\"production\"][\"coal\"] = entsoe_value[\"production\"][\"coal\"]\n p[\"production\"][\"biomass\"] = entsoe_value[\"production\"][\"biomass\"]\n p[\"production\"][\"unknown\"] = entsoe_value[\"production\"][\"unknown\"]\n productions.append(p)\n else:\n productions = productions_ENTSOE\n if not productions:\n return\n\n # Flatten production dictionaries (we ignore storage)\n for p in productions:\n # if for some reason ther\u00e9's no unknown value\n if not 'unknown' in p['production'] or p['production']['unknown'] == None:\n p['production']['unknown'] = 0\n\n Z = sum([x or 0 for x in p['production'].values()])\n # Only calculate the difference if the datetime exists\n # If total ENTSOE reported production (Z) is less than total generation\n # (calculated from consumption and imports), then there must be some\n # unknown production missing, so we add the difference.\n # The difference can actually be negative, because consumption is based\n # on TSO network load, but locally generated electricity may never leave\n # the DSO network and be substantial (e.g. 
Solar).\n if p['datetime'] in df_total_generations and Z < df_total_generations[p['datetime']]:\n p['production']['unknown'] = round((\n df_total_generations[p['datetime']] - Z + p['production']['unknown']), 3)\n\n # Filter invalid\n # We should probably add logging to this\n return [p for p in productions if p['production']['unknown'] > 0]\n\n\ndef fetch_production_energieopwek_nl(session=None, target_datetime=None,\n logger=logging.getLogger(__name__)) -> list:\n if target_datetime is None:\n target_datetime = arrow.utcnow()\n\n # Get production values for target and target-1 day\n df_current = get_production_data_energieopwek(\n target_datetime, session=session)\n df_previous = get_production_data_energieopwek(\n target_datetime.shift(days=-1), session=session)\n\n # Concat them, oldest first to keep chronological order intact\n df = pd.concat([df_previous, df_current])\n\n output = []\n base_time = arrow.get(target_datetime.date(), 'Europe/Paris').shift(days=-1).to('utc')\n\n for i, prod in enumerate(df.to_dict(orient='records')):\n output.append(\n {\n 'zoneKey': 'NL',\n 'datetime': base_time.shift(minutes=i*15).datetime,\n 'production': prod,\n 'source': 'energieopwek.nl, entsoe.eu'\n }\n )\n return output\n\ndef get_production_data_energieopwek(date, session=None):\n r = session or requests.session()\n\n # The API returns values per day from local time midnight until the last\n # round 10 minutes if the requested date is today or for the entire day if\n # it's in the past. 'sid' can be anything.\n url = 'http://energieopwek.nl/jsonData.php?sid=2ecde3&Day=%s' % date.format('YYYY-MM-DD')\n response = r.get(url)\n obj = response.json()\n production_input = obj['TenMin']['Country']\n\n # extract the power values in kW from the different production types\n # we only need column 0, 1 and 3 contain energy sum values\n df_solar = pd.DataFrame(production_input['Solar']) .drop(['1','3'], axis=1).astype(int).rename(columns={\"0\" : \"solar\"})\n df_offshore = pd.DataFrame(production_input['WindOffshore']).drop(['1','3'], axis=1).astype(int)\n df_onshore = pd.DataFrame(production_input['Wind']) .drop(['1','3'], axis=1).astype(int)\n\n # We don't differentiate between onshore and offshore wind so we sum them\n # toghether and build a single data frame with named columns\n df_wind = df_onshore.add(df_offshore).rename(columns={\"0\": \"wind\"})\n df = pd.concat([df_solar, df_wind], axis=1)\n\n # resample from 10min resolution to 15min resolution to align with ENTSOE data\n # we duplicate every row and then group them per 3 and take the mean\n df = pd.concat([df]*2).sort_index(axis=0).reset_index(drop=True).groupby(by=lambda x : math.floor(x/3)).mean()\n\n # Convert kW to MW with kW resolution\n df = df.apply(lambda x: round(x / 1000, 3))\n\n return df\n\nif __name__ == '__main__':\n print(fetch_production())\n", "path": "parsers/NL.py"}]}
| 3,106 | 109 |
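The resampling comment in `parsers/NL.py` above (duplicate every 10-minute row, then average groups of three rows to obtain 15-minute buckets) is easier to follow as a standalone pandas sketch. The column name and numbers below are made up for illustration; only the chained expression mirrors the record's `get_production_data_energieopwek` function:

```python
# Illustrative only: mimic the 10-min -> 15-min resampling trick from parsers/NL.py.
import math

import pandas as pd

df_10min = pd.DataFrame({"solar_kw": [0, 10, 20, 30, 40, 50]})  # six 10-minute samples = one hour

df_15min = (
    pd.concat([df_10min] * 2)                 # duplicate every row (12 rows total)
    .sort_index(axis=0)                       # keep each duplicate next to its original
    .reset_index(drop=True)
    .groupby(by=lambda i: math.floor(i / 3))  # 4 groups of 3 rows = 15-minute buckets
    .mean()
)
print(df_15min)  # four averaged rows covering the same hour
```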
gh_patches_debug_1345
|
rasdani/github-patches
|
git_diff
|
castorini__pyserini-667
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Switch to jnius_config.add_classpath
Currently, pyserini replaces any previously registered jars on the classpath in its setup code. Is there any reason to not use add_classpath() instead of set_classpath()?
Here is the pyjnius relevant code:
```python
def set_classpath(*path):
"""
Sets the classpath for the JVM to use. Replaces any existing classpath, overriding the CLASSPATH environment variable.
"""
check_vm_running()
global classpath
classpath = list(path)
def add_classpath(*path):
"""
Appends items to the classpath for the JVM to use.
Replaces any existing classpath, overriding the CLASSPATH environment variable.
"""
check_vm_running()
global classpath
if classpath is None:
classpath = list(path)
else:
classpath.extend(path)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyserini/setup.py`
Content:
```
1 #
2 # Pyserini: Reproducible IR research with sparse and dense representations
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16
17 """
18 Module for adding Anserini jar to classpath for pyjnius usage
19 """
20
21 import glob
22 import os
23
24 import jnius_config
25
26
27 def configure_classpath(anserini_root="."):
28 """
29 Parameters
30 ----------
31 anserini_root : str
32 (Optional) path to root anserini directory.
33
34 """
35 paths = glob.glob(os.path.join(anserini_root, 'anserini-*-fatjar.jar'))
36 if not paths:
37 raise Exception('No matching jar file found in {}'.format(os.path.abspath(anserini_root)))
38
39 latest = max(paths, key=os.path.getctime)
40 jnius_config.set_classpath(latest)
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyserini/setup.py b/pyserini/setup.py
--- a/pyserini/setup.py
+++ b/pyserini/setup.py
@@ -37,4 +37,4 @@
raise Exception('No matching jar file found in {}'.format(os.path.abspath(anserini_root)))
latest = max(paths, key=os.path.getctime)
- jnius_config.set_classpath(latest)
+ jnius_config.add_classpath(latest)
|
{"golden_diff": "diff --git a/pyserini/setup.py b/pyserini/setup.py\n--- a/pyserini/setup.py\n+++ b/pyserini/setup.py\n@@ -37,4 +37,4 @@\n raise Exception('No matching jar file found in {}'.format(os.path.abspath(anserini_root)))\n \n latest = max(paths, key=os.path.getctime)\n- jnius_config.set_classpath(latest)\n+ jnius_config.add_classpath(latest)\n", "issue": "Switch to jnius_config.add_classpath\nCurrently, pyserini replaces any previously registered jars on the classpath in its setup code. Is there any reason to not use add_classpath() instead of set_classpath()?\r\n\r\nHere is the pyjnius relevant code:\r\n```python\r\ndef set_classpath(*path):\r\n \"\"\"\r\n Sets the classpath for the JVM to use. Replaces any existing classpath, overriding the CLASSPATH environment variable.\r\n \"\"\"\r\n check_vm_running()\r\n global classpath\r\n classpath = list(path)\r\n\r\n\r\ndef add_classpath(*path):\r\n \"\"\"\r\n Appends items to the classpath for the JVM to use.\r\n Replaces any existing classpath, overriding the CLASSPATH environment variable.\r\n \"\"\"\r\n check_vm_running()\r\n global classpath\r\n if classpath is None:\r\n classpath = list(path)\r\n else:\r\n classpath.extend(path)\r\n```\n", "before_files": [{"content": "#\n# Pyserini: Reproducible IR research with sparse and dense representations\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\"\"\"\nModule for adding Anserini jar to classpath for pyjnius usage\n\"\"\"\n\nimport glob\nimport os\n\nimport jnius_config\n\n\ndef configure_classpath(anserini_root=\".\"):\n \"\"\"\n Parameters\n ----------\n anserini_root : str\n (Optional) path to root anserini directory.\n\n \"\"\"\n paths = glob.glob(os.path.join(anserini_root, 'anserini-*-fatjar.jar'))\n if not paths:\n raise Exception('No matching jar file found in {}'.format(os.path.abspath(anserini_root)))\n\n latest = max(paths, key=os.path.getctime)\n jnius_config.set_classpath(latest)\n", "path": "pyserini/setup.py"}], "after_files": [{"content": "#\n# Pyserini: Reproducible IR research with sparse and dense representations\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\"\"\"\nModule for adding Anserini jar to classpath for pyjnius usage\n\"\"\"\n\nimport glob\nimport os\n\nimport jnius_config\n\n\ndef configure_classpath(anserini_root=\".\"):\n \"\"\"\n Parameters\n ----------\n anserini_root : str\n (Optional) path to root anserini directory.\n\n \"\"\"\n paths = glob.glob(os.path.join(anserini_root, 'anserini-*-fatjar.jar'))\n if not paths:\n raise 
Exception('No matching jar file found in {}'.format(os.path.abspath(anserini_root)))\n\n latest = max(paths, key=os.path.getctime)\n jnius_config.add_classpath(latest)\n", "path": "pyserini/setup.py"}]}
| 803 | 101 |
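The pyjnius snippets quoted in the issue above already state the semantics; the following dependency-free sketch (plain Python stand-ins, not the real jnius internals) shows concretely why `set_classpath()` drops a jar the caller registered earlier while `add_classpath()` keeps it. The jar names are hypothetical:

```python
# Pure-Python stand-ins mirroring the pyjnius snippets quoted above (illustrative only).
classpath = None

def set_classpath(*path):
    global classpath
    classpath = list(path)          # replaces whatever was registered before

def add_classpath(*path):
    global classpath
    if classpath is None:
        classpath = list(path)
    else:
        classpath.extend(path)      # appends, preserving earlier registrations

add_classpath("my-extra.jar")           # a jar the caller registered first (hypothetical name)
set_classpath("anserini-fatjar.jar")    # old Pyserini behaviour
print(classpath)                        # ['anserini-fatjar.jar'] -> the caller's jar is gone

classpath = None
add_classpath("my-extra.jar")
add_classpath("anserini-fatjar.jar")    # behaviour after the patch
print(classpath)                        # ['my-extra.jar', 'anserini-fatjar.jar']
```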
gh_patches_debug_21488
|
rasdani/github-patches
|
git_diff
|
WeblateOrg__weblate-10189
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"Add missing languages" add-on not working
### Describe the issue
I have enabled the "Add missing languages" add-on on https://hosted.weblate.org/projects/catima/. However, despite waiting over 24 hours as the documentation on https://docs.weblate.org/en/latest/admin/addons.html#addon-weblate-consistency-languages states, it has not put the different components of the same project in sync.
This is most noticeable when comparing https://hosted.weblate.org/projects/catima/catima/ with https://hosted.weblate.org/projects/catima/android-debug/
### I already tried
- [X] I've read and searched [the documentation](https://docs.weblate.org/).
- [X] I've searched for similar issues in this repository.
### Steps to reproduce the behavior
1. Enable the "Add missing languages" add-on in a project with multiple components where one component has less languages than the other
2. Wait at least 24 hours as the add-on states
### Expected behavior
All components have the same languages, missing languages on components get created
### Screenshots
Android component:

Android (Debug) component:

### Exception traceback
_No response_
### How do you run Weblate?
weblate.org service
### Weblate versions
_No response_
### Weblate deploy checks
_No response_
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `weblate/addons/consistency.py`
Content:
```
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 from django.db.models import Q
6 from django.utils.translation import gettext_lazy
7
8 from weblate.addons.base import BaseAddon
9 from weblate.addons.events import EVENT_DAILY, EVENT_POST_ADD
10 from weblate.addons.tasks import language_consistency
11 from weblate.lang.models import Language
12
13
14 class LangaugeConsistencyAddon(BaseAddon):
15 events = (EVENT_DAILY, EVENT_POST_ADD)
16 name = "weblate.consistency.languages"
17 verbose = gettext_lazy("Add missing languages")
18 description = gettext_lazy(
19 "Ensures a consistent set of languages is used for all components "
20 "within a project."
21 )
22 icon = "language.svg"
23 project_scope = True
24
25 def daily(self, component):
26 language_consistency.delay(
27 component.project_id,
28 list(
29 Language.objects.filter(
30 Q(translation__component=component) | Q(component=component)
31 ).values_list("pk", flat=True)
32 ),
33 )
34
35 def post_add(self, translation):
36 language_consistency.delay(
37 translation.component.project_id,
38 [translation.language_id],
39 )
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/weblate/addons/consistency.py b/weblate/addons/consistency.py
--- a/weblate/addons/consistency.py
+++ b/weblate/addons/consistency.py
@@ -2,13 +2,11 @@
#
# SPDX-License-Identifier: GPL-3.0-or-later
-from django.db.models import Q
from django.utils.translation import gettext_lazy
from weblate.addons.base import BaseAddon
from weblate.addons.events import EVENT_DAILY, EVENT_POST_ADD
from weblate.addons.tasks import language_consistency
-from weblate.lang.models import Language
class LangaugeConsistencyAddon(BaseAddon):
@@ -25,11 +23,7 @@
def daily(self, component):
language_consistency.delay(
component.project_id,
- list(
- Language.objects.filter(
- Q(translation__component=component) | Q(component=component)
- ).values_list("pk", flat=True)
- ),
+ [language.id for language in component.project.languages],
)
def post_add(self, translation):
|
{"golden_diff": "diff --git a/weblate/addons/consistency.py b/weblate/addons/consistency.py\n--- a/weblate/addons/consistency.py\n+++ b/weblate/addons/consistency.py\n@@ -2,13 +2,11 @@\n #\n # SPDX-License-Identifier: GPL-3.0-or-later\n \n-from django.db.models import Q\n from django.utils.translation import gettext_lazy\n \n from weblate.addons.base import BaseAddon\n from weblate.addons.events import EVENT_DAILY, EVENT_POST_ADD\n from weblate.addons.tasks import language_consistency\n-from weblate.lang.models import Language\n \n \n class LangaugeConsistencyAddon(BaseAddon):\n@@ -25,11 +23,7 @@\n def daily(self, component):\n language_consistency.delay(\n component.project_id,\n- list(\n- Language.objects.filter(\n- Q(translation__component=component) | Q(component=component)\n- ).values_list(\"pk\", flat=True)\n- ),\n+ [language.id for language in component.project.languages],\n )\n \n def post_add(self, translation):\n", "issue": "\"Add missing languages\" add-on not working\n### Describe the issue\n\nI have enabled the \"Add missing languages\" add-on on https://hosted.weblate.org/projects/catima/. However, despite waiting over 24 hours as the documentation on https://docs.weblate.org/en/latest/admin/addons.html#addon-weblate-consistency-languages states, it has not put the different components of the same project in sync.\r\n\r\nThis is most noticeable when comparing https://hosted.weblate.org/projects/catima/catima/ with https://hosted.weblate.org/projects/catima/android-debug/\n\n### I already tried\n\n- [X] I've read and searched [the documentation](https://docs.weblate.org/).\n- [X] I've searched for similar issues in this repository.\n\n### Steps to reproduce the behavior\n\n1. Enable the \"Add missing languages\" add-on in a project with multiple components where one component has less languages than the other\r\n2. 
Wait at least 24 hours as the add-on states\n\n### Expected behavior\n\nAll components have the same languages, missing languages on components get created\n\n### Screenshots\n\nAndroid component:\r\n\r\n\r\nAndroid (Debug) component:\r\n\r\n\n\n### Exception traceback\n\n_No response_\n\n### How do you run Weblate?\n\nweblate.org service\n\n### Weblate versions\n\n_No response_\n\n### Weblate deploy checks\n\n_No response_\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom django.db.models import Q\nfrom django.utils.translation import gettext_lazy\n\nfrom weblate.addons.base import BaseAddon\nfrom weblate.addons.events import EVENT_DAILY, EVENT_POST_ADD\nfrom weblate.addons.tasks import language_consistency\nfrom weblate.lang.models import Language\n\n\nclass LangaugeConsistencyAddon(BaseAddon):\n events = (EVENT_DAILY, EVENT_POST_ADD)\n name = \"weblate.consistency.languages\"\n verbose = gettext_lazy(\"Add missing languages\")\n description = gettext_lazy(\n \"Ensures a consistent set of languages is used for all components \"\n \"within a project.\"\n )\n icon = \"language.svg\"\n project_scope = True\n\n def daily(self, component):\n language_consistency.delay(\n component.project_id,\n list(\n Language.objects.filter(\n Q(translation__component=component) | Q(component=component)\n ).values_list(\"pk\", flat=True)\n ),\n )\n\n def post_add(self, translation):\n language_consistency.delay(\n translation.component.project_id,\n [translation.language_id],\n )\n", "path": "weblate/addons/consistency.py"}], "after_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom django.utils.translation import gettext_lazy\n\nfrom weblate.addons.base import BaseAddon\nfrom weblate.addons.events import EVENT_DAILY, EVENT_POST_ADD\nfrom weblate.addons.tasks import language_consistency\n\n\nclass LangaugeConsistencyAddon(BaseAddon):\n events = (EVENT_DAILY, EVENT_POST_ADD)\n name = \"weblate.consistency.languages\"\n verbose = gettext_lazy(\"Add missing languages\")\n description = gettext_lazy(\n \"Ensures a consistent set of languages is used for all components \"\n \"within a project.\"\n )\n icon = \"language.svg\"\n project_scope = True\n\n def daily(self, component):\n language_consistency.delay(\n component.project_id,\n [language.id for language in component.project.languages],\n )\n\n def post_add(self, translation):\n language_consistency.delay(\n translation.component.project_id,\n [translation.language_id],\n )\n", "path": "weblate/addons/consistency.py"}]}
| 1,015 | 242 |
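The one-line change in the Weblate record above swaps a per-component language query for the project-wide language list. A toy, Django-free sketch of why that matters; the language codes are invented for illustration:

```python
# Toy illustration: syncing a component only to its own languages can never add
# a language that exists elsewhere in the project but not in this component.
project_languages = {"en", "de", "fr", "nl"}   # union over every component in the project
component_languages = {"en", "de"}             # languages of the component daily() runs on

languages_before_patch = component_languages   # what the add-on effectively passed
languages_after_patch = project_languages      # what it passes after the fix

missing = languages_after_patch - languages_before_patch
print(sorted(missing))  # ['fr', 'nl'] -> translations that were never being created
```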
gh_patches_debug_42186
|
rasdani/github-patches
|
git_diff
|
pypa__virtualenv-1870
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
virtualenv parallel run silently breaks
When running virtualenv creation in parallel, the app-data image builder is not synchronized and inconsistent states might break the virtual environment creation. Furthermore, in this case no error is raised in case of the failed commands.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/virtualenv/seed/embed/via_app_data/via_app_data.py`
Content:
```
1 """Bootstrap"""
2 from __future__ import absolute_import, unicode_literals
3
4 import logging
5 from contextlib import contextmanager
6 from subprocess import CalledProcessError
7 from threading import Lock, Thread
8
9 import six
10
11 from virtualenv.info import fs_supports_symlink
12 from virtualenv.seed.embed.base_embed import BaseEmbed
13 from virtualenv.seed.wheels import get_wheel
14 from virtualenv.util.path import Path
15
16 from .pip_install.copy import CopyPipInstall
17 from .pip_install.symlink import SymlinkPipInstall
18
19
20 class FromAppData(BaseEmbed):
21 def __init__(self, options):
22 super(FromAppData, self).__init__(options)
23 self.symlinks = options.symlink_app_data
24
25 @classmethod
26 def add_parser_arguments(cls, parser, interpreter, app_data):
27 super(FromAppData, cls).add_parser_arguments(parser, interpreter, app_data)
28 can_symlink = app_data.transient is False and fs_supports_symlink()
29 parser.add_argument(
30 "--symlink-app-data",
31 dest="symlink_app_data",
32 action="store_true" if can_symlink else "store_false",
33 help="{} symlink the python packages from the app-data folder (requires seed pip>=19.3)".format(
34 "" if can_symlink else "not supported - ",
35 ),
36 default=False,
37 )
38
39 def run(self, creator):
40 if not self.enabled:
41 return
42 with self._get_seed_wheels(creator) as name_to_whl:
43 pip_version = name_to_whl["pip"].version_tuple if "pip" in name_to_whl else None
44 installer_class = self.installer_class(pip_version)
45
46 def _install(name, wheel):
47 logging.debug("install %s from wheel %s via %s", name, wheel, installer_class.__name__)
48 key = Path(installer_class.__name__) / wheel.path.stem
49 wheel_img = self.app_data.wheel_image(creator.interpreter.version_release_str, key)
50 installer = installer_class(wheel.path, creator, wheel_img)
51 if not installer.has_image():
52 installer.build_image()
53 installer.install(creator.interpreter.version_info)
54
55 threads = list(Thread(target=_install, args=(n, w)) for n, w in name_to_whl.items())
56 for thread in threads:
57 thread.start()
58 for thread in threads:
59 thread.join()
60
61 @contextmanager
62 def _get_seed_wheels(self, creator):
63 name_to_whl, lock, fail = {}, Lock(), {}
64
65 def _get(distribution, version):
66 for_py_version = creator.interpreter.version_release_str
67 failure, result = None, None
68 # fallback to download in case the exact version is not available
69 for download in [True] if self.download else [False, True]:
70 failure = None
71 try:
72 result = get_wheel(
73 distribution=distribution,
74 version=version,
75 for_py_version=for_py_version,
76 search_dirs=self.extra_search_dir,
77 download=download,
78 app_data=self.app_data,
79 do_periodic_update=self.periodic_update,
80 )
81 if result is not None:
82 break
83 except Exception as exception: # noqa
84 logging.exception("fail")
85 failure = exception
86 if failure:
87 if isinstance(failure, CalledProcessError):
88 msg = "failed to download {}".format(distribution)
89 if version is not None:
90 msg += " version {}".format(version)
91 msg += ", pip download exit code {}".format(failure.returncode)
92 output = failure.output if six.PY2 else (failure.output + failure.stderr)
93 if output:
94 msg += "\n"
95 msg += output
96 else:
97 msg = repr(failure)
98 logging.error(msg)
99 with lock:
100 fail[distribution] = version
101 else:
102 with lock:
103 name_to_whl[distribution] = result
104
105 threads = list(
106 Thread(target=_get, args=(distribution, version))
107 for distribution, version in self.distribution_to_versions().items()
108 )
109 for thread in threads:
110 thread.start()
111 for thread in threads:
112 thread.join()
113 if fail:
114 raise RuntimeError("seed failed due to failing to download wheels {}".format(", ".join(fail.keys())))
115 yield name_to_whl
116
117 def installer_class(self, pip_version_tuple):
118 if self.symlinks and pip_version_tuple:
119 # symlink support requires pip 19.3+
120 if pip_version_tuple >= (19, 3):
121 return SymlinkPipInstall
122 return CopyPipInstall
123
124 def __unicode__(self):
125 base = super(FromAppData, self).__unicode__()
126 msg = ", via={}, app_data_dir={}".format("symlink" if self.symlinks else "copy", self.app_data)
127 return base[:-1] + msg + base[-1]
128
```
Path: `src/virtualenv/util/lock.py`
Content:
```
1 """holds locking functionality that works across processes"""
2 from __future__ import absolute_import, unicode_literals
3
4 import logging
5 import os
6 from contextlib import contextmanager
7 from threading import Lock, RLock
8
9 from filelock import FileLock, Timeout
10
11 from virtualenv.util.path import Path
12
13
14 class _CountedFileLock(FileLock):
15 def __init__(self, lock_file):
16 super(_CountedFileLock, self).__init__(lock_file)
17 self.count = 0
18 self.thread_safe = RLock()
19
20 def acquire(self, timeout=None, poll_intervall=0.05):
21 with self.thread_safe:
22 if self.count == 0:
23 super(_CountedFileLock, self).acquire(timeout=timeout, poll_intervall=poll_intervall)
24 self.count += 1
25
26 def release(self, force=False):
27 with self.thread_safe:
28 if self.count == 1:
29 super(_CountedFileLock, self).release()
30 self.count = max(self.count - 1, 0)
31
32
33 _lock_store = {}
34 _store_lock = Lock()
35
36
37 class ReentrantFileLock(object):
38 def __init__(self, folder):
39 self._lock = None
40 path = Path(folder)
41 self.path = path.resolve() if path.exists() else path
42
43 def __repr__(self):
44 return "{}({})".format(self.__class__.__name__, self.path)
45
46 def __div__(self, other):
47 return ReentrantFileLock(self.path / other)
48
49 def __truediv__(self, other):
50 return self.__div__(other)
51
52 def _create_lock(self, name=""):
53 lock_file = str(self.path / "{}.lock".format(name))
54 with _store_lock:
55 if lock_file not in _lock_store:
56 _lock_store[lock_file] = _CountedFileLock(lock_file)
57 return _lock_store[lock_file]
58
59 @staticmethod
60 def _del_lock(lock):
61 with _store_lock:
62 if lock is not None:
63 with lock.thread_safe:
64 if lock.count == 0:
65 _lock_store.pop(lock.lock_file, None)
66
67 def __del__(self):
68 self._del_lock(self._lock)
69
70 def __enter__(self):
71 self._lock = self._create_lock()
72 self._lock_file(self._lock)
73
74 def __exit__(self, exc_type, exc_val, exc_tb):
75 self._release(self._lock)
76
77 def _lock_file(self, lock, no_block=False):
78 # multiple processes might be trying to get a first lock... so we cannot check if this directory exist without
79 # a lock, but that lock might then become expensive, and it's not clear where that lock should live.
80 # Instead here we just ignore if we fail to create the directory.
81 try:
82 os.makedirs(str(self.path))
83 except OSError:
84 pass
85 try:
86 lock.acquire(0.0001)
87 except Timeout:
88 if no_block:
89 raise
90 logging.debug("lock file %s present, will block until released", lock.lock_file)
91 lock.release() # release the acquire try from above
92 lock.acquire()
93
94 @staticmethod
95 def _release(lock):
96 lock.release()
97
98 @contextmanager
99 def lock_for_key(self, name, no_block=False):
100 lock = self._create_lock(name)
101 try:
102 try:
103 self._lock_file(lock, no_block)
104 yield
105 finally:
106 self._release(lock)
107 finally:
108 self._del_lock(lock)
109
110
111 __all__ = (
112 "Timeout",
113 "ReentrantFileLock",
114 )
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/virtualenv/seed/embed/via_app_data/via_app_data.py b/src/virtualenv/seed/embed/via_app_data/via_app_data.py
--- a/src/virtualenv/seed/embed/via_app_data/via_app_data.py
+++ b/src/virtualenv/seed/embed/via_app_data/via_app_data.py
@@ -2,6 +2,8 @@
from __future__ import absolute_import, unicode_literals
import logging
+import sys
+import traceback
from contextlib import contextmanager
from subprocess import CalledProcessError
from threading import Lock, Thread
@@ -11,7 +13,9 @@
from virtualenv.info import fs_supports_symlink
from virtualenv.seed.embed.base_embed import BaseEmbed
from virtualenv.seed.wheels import get_wheel
+from virtualenv.util.lock import _CountedFileLock
from virtualenv.util.path import Path
+from virtualenv.util.six import ensure_text
from .pip_install.copy import CopyPipInstall
from .pip_install.symlink import SymlinkPipInstall
@@ -42,21 +46,32 @@
with self._get_seed_wheels(creator) as name_to_whl:
pip_version = name_to_whl["pip"].version_tuple if "pip" in name_to_whl else None
installer_class = self.installer_class(pip_version)
+ exceptions = {}
def _install(name, wheel):
- logging.debug("install %s from wheel %s via %s", name, wheel, installer_class.__name__)
- key = Path(installer_class.__name__) / wheel.path.stem
- wheel_img = self.app_data.wheel_image(creator.interpreter.version_release_str, key)
- installer = installer_class(wheel.path, creator, wheel_img)
- if not installer.has_image():
- installer.build_image()
- installer.install(creator.interpreter.version_info)
+ try:
+ logging.debug("install %s from wheel %s via %s", name, wheel, installer_class.__name__)
+ key = Path(installer_class.__name__) / wheel.path.stem
+ wheel_img = self.app_data.wheel_image(creator.interpreter.version_release_str, key)
+ installer = installer_class(wheel.path, creator, wheel_img)
+ with _CountedFileLock(ensure_text(str(wheel_img.parent / "{}.lock".format(wheel_img.name)))):
+ if not installer.has_image():
+ installer.build_image()
+ installer.install(creator.interpreter.version_info)
+ except Exception: # noqa
+ exceptions[name] = sys.exc_info()
threads = list(Thread(target=_install, args=(n, w)) for n, w in name_to_whl.items())
for thread in threads:
thread.start()
for thread in threads:
thread.join()
+ if exceptions:
+ messages = ["failed to build image {} because:".format(", ".join(exceptions.keys()))]
+ for value in exceptions.values():
+ exc_type, exc_value, exc_traceback = value
+ messages.append("".join(traceback.format_exception(exc_type, exc_value, exc_traceback)))
+ raise RuntimeError("\n".join(messages))
@contextmanager
def _get_seed_wheels(self, creator):
diff --git a/src/virtualenv/util/lock.py b/src/virtualenv/util/lock.py
--- a/src/virtualenv/util/lock.py
+++ b/src/virtualenv/util/lock.py
@@ -13,6 +13,12 @@
class _CountedFileLock(FileLock):
def __init__(self, lock_file):
+ parent = os.path.dirname(lock_file)
+ if not os.path.exists(parent):
+ try:
+ os.makedirs(parent)
+ except OSError:
+ pass
super(_CountedFileLock, self).__init__(lock_file)
self.count = 0
self.thread_safe = RLock()
|
{"golden_diff": "diff --git a/src/virtualenv/seed/embed/via_app_data/via_app_data.py b/src/virtualenv/seed/embed/via_app_data/via_app_data.py\n--- a/src/virtualenv/seed/embed/via_app_data/via_app_data.py\n+++ b/src/virtualenv/seed/embed/via_app_data/via_app_data.py\n@@ -2,6 +2,8 @@\n from __future__ import absolute_import, unicode_literals\n \n import logging\n+import sys\n+import traceback\n from contextlib import contextmanager\n from subprocess import CalledProcessError\n from threading import Lock, Thread\n@@ -11,7 +13,9 @@\n from virtualenv.info import fs_supports_symlink\n from virtualenv.seed.embed.base_embed import BaseEmbed\n from virtualenv.seed.wheels import get_wheel\n+from virtualenv.util.lock import _CountedFileLock\n from virtualenv.util.path import Path\n+from virtualenv.util.six import ensure_text\n \n from .pip_install.copy import CopyPipInstall\n from .pip_install.symlink import SymlinkPipInstall\n@@ -42,21 +46,32 @@\n with self._get_seed_wheels(creator) as name_to_whl:\n pip_version = name_to_whl[\"pip\"].version_tuple if \"pip\" in name_to_whl else None\n installer_class = self.installer_class(pip_version)\n+ exceptions = {}\n \n def _install(name, wheel):\n- logging.debug(\"install %s from wheel %s via %s\", name, wheel, installer_class.__name__)\n- key = Path(installer_class.__name__) / wheel.path.stem\n- wheel_img = self.app_data.wheel_image(creator.interpreter.version_release_str, key)\n- installer = installer_class(wheel.path, creator, wheel_img)\n- if not installer.has_image():\n- installer.build_image()\n- installer.install(creator.interpreter.version_info)\n+ try:\n+ logging.debug(\"install %s from wheel %s via %s\", name, wheel, installer_class.__name__)\n+ key = Path(installer_class.__name__) / wheel.path.stem\n+ wheel_img = self.app_data.wheel_image(creator.interpreter.version_release_str, key)\n+ installer = installer_class(wheel.path, creator, wheel_img)\n+ with _CountedFileLock(ensure_text(str(wheel_img.parent / \"{}.lock\".format(wheel_img.name)))):\n+ if not installer.has_image():\n+ installer.build_image()\n+ installer.install(creator.interpreter.version_info)\n+ except Exception: # noqa\n+ exceptions[name] = sys.exc_info()\n \n threads = list(Thread(target=_install, args=(n, w)) for n, w in name_to_whl.items())\n for thread in threads:\n thread.start()\n for thread in threads:\n thread.join()\n+ if exceptions:\n+ messages = [\"failed to build image {} because:\".format(\", \".join(exceptions.keys()))]\n+ for value in exceptions.values():\n+ exc_type, exc_value, exc_traceback = value\n+ messages.append(\"\".join(traceback.format_exception(exc_type, exc_value, exc_traceback)))\n+ raise RuntimeError(\"\\n\".join(messages))\n \n @contextmanager\n def _get_seed_wheels(self, creator):\ndiff --git a/src/virtualenv/util/lock.py b/src/virtualenv/util/lock.py\n--- a/src/virtualenv/util/lock.py\n+++ b/src/virtualenv/util/lock.py\n@@ -13,6 +13,12 @@\n \n class _CountedFileLock(FileLock):\n def __init__(self, lock_file):\n+ parent = os.path.dirname(lock_file)\n+ if not os.path.exists(parent):\n+ try:\n+ os.makedirs(parent)\n+ except OSError:\n+ pass\n super(_CountedFileLock, self).__init__(lock_file)\n self.count = 0\n self.thread_safe = RLock()\n", "issue": "virtualenv parallel run silently breaks\nWhen running virtualenv creation in parallel, the app-data image builder is not synchronized and inconsistent states might break the virtual environment creation. 
Furthermore, in this case no error is raised in case of the failed commands.\n", "before_files": [{"content": "\"\"\"Bootstrap\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport logging\nfrom contextlib import contextmanager\nfrom subprocess import CalledProcessError\nfrom threading import Lock, Thread\n\nimport six\n\nfrom virtualenv.info import fs_supports_symlink\nfrom virtualenv.seed.embed.base_embed import BaseEmbed\nfrom virtualenv.seed.wheels import get_wheel\nfrom virtualenv.util.path import Path\n\nfrom .pip_install.copy import CopyPipInstall\nfrom .pip_install.symlink import SymlinkPipInstall\n\n\nclass FromAppData(BaseEmbed):\n def __init__(self, options):\n super(FromAppData, self).__init__(options)\n self.symlinks = options.symlink_app_data\n\n @classmethod\n def add_parser_arguments(cls, parser, interpreter, app_data):\n super(FromAppData, cls).add_parser_arguments(parser, interpreter, app_data)\n can_symlink = app_data.transient is False and fs_supports_symlink()\n parser.add_argument(\n \"--symlink-app-data\",\n dest=\"symlink_app_data\",\n action=\"store_true\" if can_symlink else \"store_false\",\n help=\"{} symlink the python packages from the app-data folder (requires seed pip>=19.3)\".format(\n \"\" if can_symlink else \"not supported - \",\n ),\n default=False,\n )\n\n def run(self, creator):\n if not self.enabled:\n return\n with self._get_seed_wheels(creator) as name_to_whl:\n pip_version = name_to_whl[\"pip\"].version_tuple if \"pip\" in name_to_whl else None\n installer_class = self.installer_class(pip_version)\n\n def _install(name, wheel):\n logging.debug(\"install %s from wheel %s via %s\", name, wheel, installer_class.__name__)\n key = Path(installer_class.__name__) / wheel.path.stem\n wheel_img = self.app_data.wheel_image(creator.interpreter.version_release_str, key)\n installer = installer_class(wheel.path, creator, wheel_img)\n if not installer.has_image():\n installer.build_image()\n installer.install(creator.interpreter.version_info)\n\n threads = list(Thread(target=_install, args=(n, w)) for n, w in name_to_whl.items())\n for thread in threads:\n thread.start()\n for thread in threads:\n thread.join()\n\n @contextmanager\n def _get_seed_wheels(self, creator):\n name_to_whl, lock, fail = {}, Lock(), {}\n\n def _get(distribution, version):\n for_py_version = creator.interpreter.version_release_str\n failure, result = None, None\n # fallback to download in case the exact version is not available\n for download in [True] if self.download else [False, True]:\n failure = None\n try:\n result = get_wheel(\n distribution=distribution,\n version=version,\n for_py_version=for_py_version,\n search_dirs=self.extra_search_dir,\n download=download,\n app_data=self.app_data,\n do_periodic_update=self.periodic_update,\n )\n if result is not None:\n break\n except Exception as exception: # noqa\n logging.exception(\"fail\")\n failure = exception\n if failure:\n if isinstance(failure, CalledProcessError):\n msg = \"failed to download {}\".format(distribution)\n if version is not None:\n msg += \" version {}\".format(version)\n msg += \", pip download exit code {}\".format(failure.returncode)\n output = failure.output if six.PY2 else (failure.output + failure.stderr)\n if output:\n msg += \"\\n\"\n msg += output\n else:\n msg = repr(failure)\n logging.error(msg)\n with lock:\n fail[distribution] = version\n else:\n with lock:\n name_to_whl[distribution] = result\n\n threads = list(\n Thread(target=_get, args=(distribution, version))\n for 
distribution, version in self.distribution_to_versions().items()\n )\n for thread in threads:\n thread.start()\n for thread in threads:\n thread.join()\n if fail:\n raise RuntimeError(\"seed failed due to failing to download wheels {}\".format(\", \".join(fail.keys())))\n yield name_to_whl\n\n def installer_class(self, pip_version_tuple):\n if self.symlinks and pip_version_tuple:\n # symlink support requires pip 19.3+\n if pip_version_tuple >= (19, 3):\n return SymlinkPipInstall\n return CopyPipInstall\n\n def __unicode__(self):\n base = super(FromAppData, self).__unicode__()\n msg = \", via={}, app_data_dir={}\".format(\"symlink\" if self.symlinks else \"copy\", self.app_data)\n return base[:-1] + msg + base[-1]\n", "path": "src/virtualenv/seed/embed/via_app_data/via_app_data.py"}, {"content": "\"\"\"holds locking functionality that works across processes\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport logging\nimport os\nfrom contextlib import contextmanager\nfrom threading import Lock, RLock\n\nfrom filelock import FileLock, Timeout\n\nfrom virtualenv.util.path import Path\n\n\nclass _CountedFileLock(FileLock):\n def __init__(self, lock_file):\n super(_CountedFileLock, self).__init__(lock_file)\n self.count = 0\n self.thread_safe = RLock()\n\n def acquire(self, timeout=None, poll_intervall=0.05):\n with self.thread_safe:\n if self.count == 0:\n super(_CountedFileLock, self).acquire(timeout=timeout, poll_intervall=poll_intervall)\n self.count += 1\n\n def release(self, force=False):\n with self.thread_safe:\n if self.count == 1:\n super(_CountedFileLock, self).release()\n self.count = max(self.count - 1, 0)\n\n\n_lock_store = {}\n_store_lock = Lock()\n\n\nclass ReentrantFileLock(object):\n def __init__(self, folder):\n self._lock = None\n path = Path(folder)\n self.path = path.resolve() if path.exists() else path\n\n def __repr__(self):\n return \"{}({})\".format(self.__class__.__name__, self.path)\n\n def __div__(self, other):\n return ReentrantFileLock(self.path / other)\n\n def __truediv__(self, other):\n return self.__div__(other)\n\n def _create_lock(self, name=\"\"):\n lock_file = str(self.path / \"{}.lock\".format(name))\n with _store_lock:\n if lock_file not in _lock_store:\n _lock_store[lock_file] = _CountedFileLock(lock_file)\n return _lock_store[lock_file]\n\n @staticmethod\n def _del_lock(lock):\n with _store_lock:\n if lock is not None:\n with lock.thread_safe:\n if lock.count == 0:\n _lock_store.pop(lock.lock_file, None)\n\n def __del__(self):\n self._del_lock(self._lock)\n\n def __enter__(self):\n self._lock = self._create_lock()\n self._lock_file(self._lock)\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n self._release(self._lock)\n\n def _lock_file(self, lock, no_block=False):\n # multiple processes might be trying to get a first lock... 
so we cannot check if this directory exist without\n # a lock, but that lock might then become expensive, and it's not clear where that lock should live.\n # Instead here we just ignore if we fail to create the directory.\n try:\n os.makedirs(str(self.path))\n except OSError:\n pass\n try:\n lock.acquire(0.0001)\n except Timeout:\n if no_block:\n raise\n logging.debug(\"lock file %s present, will block until released\", lock.lock_file)\n lock.release() # release the acquire try from above\n lock.acquire()\n\n @staticmethod\n def _release(lock):\n lock.release()\n\n @contextmanager\n def lock_for_key(self, name, no_block=False):\n lock = self._create_lock(name)\n try:\n try:\n self._lock_file(lock, no_block)\n yield\n finally:\n self._release(lock)\n finally:\n self._del_lock(lock)\n\n\n__all__ = (\n \"Timeout\",\n \"ReentrantFileLock\",\n)\n", "path": "src/virtualenv/util/lock.py"}], "after_files": [{"content": "\"\"\"Bootstrap\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport logging\nimport sys\nimport traceback\nfrom contextlib import contextmanager\nfrom subprocess import CalledProcessError\nfrom threading import Lock, Thread\n\nimport six\n\nfrom virtualenv.info import fs_supports_symlink\nfrom virtualenv.seed.embed.base_embed import BaseEmbed\nfrom virtualenv.seed.wheels import get_wheel\nfrom virtualenv.util.lock import _CountedFileLock\nfrom virtualenv.util.path import Path\nfrom virtualenv.util.six import ensure_text\n\nfrom .pip_install.copy import CopyPipInstall\nfrom .pip_install.symlink import SymlinkPipInstall\n\n\nclass FromAppData(BaseEmbed):\n def __init__(self, options):\n super(FromAppData, self).__init__(options)\n self.symlinks = options.symlink_app_data\n\n @classmethod\n def add_parser_arguments(cls, parser, interpreter, app_data):\n super(FromAppData, cls).add_parser_arguments(parser, interpreter, app_data)\n can_symlink = app_data.transient is False and fs_supports_symlink()\n parser.add_argument(\n \"--symlink-app-data\",\n dest=\"symlink_app_data\",\n action=\"store_true\" if can_symlink else \"store_false\",\n help=\"{} symlink the python packages from the app-data folder (requires seed pip>=19.3)\".format(\n \"\" if can_symlink else \"not supported - \",\n ),\n default=False,\n )\n\n def run(self, creator):\n if not self.enabled:\n return\n with self._get_seed_wheels(creator) as name_to_whl:\n pip_version = name_to_whl[\"pip\"].version_tuple if \"pip\" in name_to_whl else None\n installer_class = self.installer_class(pip_version)\n exceptions = {}\n\n def _install(name, wheel):\n try:\n logging.debug(\"install %s from wheel %s via %s\", name, wheel, installer_class.__name__)\n key = Path(installer_class.__name__) / wheel.path.stem\n wheel_img = self.app_data.wheel_image(creator.interpreter.version_release_str, key)\n installer = installer_class(wheel.path, creator, wheel_img)\n with _CountedFileLock(ensure_text(str(wheel_img.parent / \"{}.lock\".format(wheel_img.name)))):\n if not installer.has_image():\n installer.build_image()\n installer.install(creator.interpreter.version_info)\n except Exception: # noqa\n exceptions[name] = sys.exc_info()\n\n threads = list(Thread(target=_install, args=(n, w)) for n, w in name_to_whl.items())\n for thread in threads:\n thread.start()\n for thread in threads:\n thread.join()\n if exceptions:\n messages = [\"failed to build image {} because:\".format(\", \".join(exceptions.keys()))]\n for value in exceptions.values():\n exc_type, exc_value, exc_traceback = value\n 
messages.append(\"\".join(traceback.format_exception(exc_type, exc_value, exc_traceback)))\n raise RuntimeError(\"\\n\".join(messages))\n\n @contextmanager\n def _get_seed_wheels(self, creator):\n name_to_whl, lock, fail = {}, Lock(), {}\n\n def _get(distribution, version):\n for_py_version = creator.interpreter.version_release_str\n failure, result = None, None\n # fallback to download in case the exact version is not available\n for download in [True] if self.download else [False, True]:\n failure = None\n try:\n result = get_wheel(\n distribution=distribution,\n version=version,\n for_py_version=for_py_version,\n search_dirs=self.extra_search_dir,\n download=download,\n app_data=self.app_data,\n do_periodic_update=self.periodic_update,\n )\n if result is not None:\n break\n except Exception as exception: # noqa\n logging.exception(\"fail\")\n failure = exception\n if failure:\n if isinstance(failure, CalledProcessError):\n msg = \"failed to download {}\".format(distribution)\n if version is not None:\n msg += \" version {}\".format(version)\n msg += \", pip download exit code {}\".format(failure.returncode)\n output = failure.output if six.PY2 else (failure.output + failure.stderr)\n if output:\n msg += \"\\n\"\n msg += output\n else:\n msg = repr(failure)\n logging.error(msg)\n with lock:\n fail[distribution] = version\n else:\n with lock:\n name_to_whl[distribution] = result\n\n threads = list(\n Thread(target=_get, args=(distribution, version))\n for distribution, version in self.distribution_to_versions().items()\n )\n for thread in threads:\n thread.start()\n for thread in threads:\n thread.join()\n if fail:\n raise RuntimeError(\"seed failed due to failing to download wheels {}\".format(\", \".join(fail.keys())))\n yield name_to_whl\n\n def installer_class(self, pip_version_tuple):\n if self.symlinks and pip_version_tuple:\n # symlink support requires pip 19.3+\n if pip_version_tuple >= (19, 3):\n return SymlinkPipInstall\n return CopyPipInstall\n\n def __unicode__(self):\n base = super(FromAppData, self).__unicode__()\n msg = \", via={}, app_data_dir={}\".format(\"symlink\" if self.symlinks else \"copy\", self.app_data)\n return base[:-1] + msg + base[-1]\n", "path": "src/virtualenv/seed/embed/via_app_data/via_app_data.py"}, {"content": "\"\"\"holds locking functionality that works across processes\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport logging\nimport os\nfrom contextlib import contextmanager\nfrom threading import Lock, RLock\n\nfrom filelock import FileLock, Timeout\n\nfrom virtualenv.util.path import Path\n\n\nclass _CountedFileLock(FileLock):\n def __init__(self, lock_file):\n parent = os.path.dirname(lock_file)\n if not os.path.exists(parent):\n try:\n os.makedirs(parent)\n except OSError:\n pass\n super(_CountedFileLock, self).__init__(lock_file)\n self.count = 0\n self.thread_safe = RLock()\n\n def acquire(self, timeout=None, poll_intervall=0.05):\n with self.thread_safe:\n if self.count == 0:\n super(_CountedFileLock, self).acquire(timeout=timeout, poll_intervall=poll_intervall)\n self.count += 1\n\n def release(self, force=False):\n with self.thread_safe:\n if self.count == 1:\n super(_CountedFileLock, self).release()\n self.count = max(self.count - 1, 0)\n\n\n_lock_store = {}\n_store_lock = Lock()\n\n\nclass ReentrantFileLock(object):\n def __init__(self, folder):\n self._lock = None\n path = Path(folder)\n self.path = path.resolve() if path.exists() else path\n\n def __repr__(self):\n return \"{}({})\".format(self.__class__.__name__, 
self.path)\n\n def __div__(self, other):\n return ReentrantFileLock(self.path / other)\n\n def __truediv__(self, other):\n return self.__div__(other)\n\n def _create_lock(self, name=\"\"):\n lock_file = str(self.path / \"{}.lock\".format(name))\n with _store_lock:\n if lock_file not in _lock_store:\n _lock_store[lock_file] = _CountedFileLock(lock_file)\n return _lock_store[lock_file]\n\n @staticmethod\n def _del_lock(lock):\n with _store_lock:\n if lock is not None:\n with lock.thread_safe:\n if lock.count == 0:\n _lock_store.pop(lock.lock_file, None)\n\n def __del__(self):\n self._del_lock(self._lock)\n\n def __enter__(self):\n self._lock = self._create_lock()\n self._lock_file(self._lock)\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n self._release(self._lock)\n\n def _lock_file(self, lock, no_block=False):\n # multiple processes might be trying to get a first lock... so we cannot check if this directory exist without\n # a lock, but that lock might then become expensive, and it's not clear where that lock should live.\n # Instead here we just ignore if we fail to create the directory.\n try:\n os.makedirs(str(self.path))\n except OSError:\n pass\n try:\n lock.acquire(0.0001)\n except Timeout:\n if no_block:\n raise\n logging.debug(\"lock file %s present, will block until released\", lock.lock_file)\n lock.release() # release the acquire try from above\n lock.acquire()\n\n @staticmethod\n def _release(lock):\n lock.release()\n\n @contextmanager\n def lock_for_key(self, name, no_block=False):\n lock = self._create_lock(name)\n try:\n try:\n self._lock_file(lock, no_block)\n yield\n finally:\n self._release(lock)\n finally:\n self._del_lock(lock)\n\n\n__all__ = (\n \"Timeout\",\n \"ReentrantFileLock\",\n)\n", "path": "src/virtualenv/util/lock.py"}]}
| 2,687 | 854 |
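The diff embedded in the row above (its `golden_diff` field) wraps each installer thread's work in a try/except, records `sys.exc_info()` per wheel, and re-raises one aggregated error after joining, so parallel failures are no longer silent. Below is a standalone sketch of that pattern only — the worker body is a stand-in, not virtualenv's real install step — and running it deliberately ends by raising the aggregated error.

```python
# Sketch of the error-aggregation pattern introduced by the patch above.
import sys
import threading
import traceback

exceptions = {}

def worker(name):
    try:
        raise RuntimeError("boom in %s" % name)  # stand-in for the real install step
    except Exception:
        exceptions[name] = sys.exc_info()

threads = [threading.Thread(target=worker, args=(n,)) for n in ("pip", "setuptools")]
for t in threads:
    t.start()
for t in threads:
    t.join()

if exceptions:
    messages = ["failed to build image {} because:".format(", ".join(exceptions))]
    for exc_type, exc_value, exc_tb in exceptions.values():
        messages.append("".join(traceback.format_exception(exc_type, exc_value, exc_tb)))
    raise RuntimeError("\n".join(messages))  # aggregated report instead of silence
```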
gh_patches_debug_3297
|
rasdani/github-patches
|
git_diff
|
liberapay__liberapay.com-1484
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Requests access to GitHub private repos?
Hi. I'm a brand-new user. I have a question I didn't see in the FAQ or when I searched issues here.
I was going to connect my GitHub account and saw this:
> Liberapay by liberapay
> wants to access your greghendershott account
>
> Organizations and teams
> Read-only access
>
> This application will be able to read your organization and team membership and private Projects.
I almost clicked OK, but noticed "**private** Projects". I stopped. I don't want to do that.
Is this as intended?
--- END ISSUE ---
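The wording on GitHub's consent screen is driven by the `scope` parameter of the authorize request, and `read:org` is the scope that adds the "organization and team membership and private Projects" sentence. The sketch below is not Liberapay's actual request-building code — the client id and helper function are placeholders — it only illustrates how the requested scope list widens or narrows the prompt.

```python
# Minimal sketch: the consent screen text follows the "scope" parameter of the
# authorize URL; dropping "read:org" narrows the request to email access only.
from urllib.parse import urlencode

AUTH_URL = "https://github.com/login/oauth/authorize"

def authorize_url(client_id, scopes):
    # GitHub expects the scopes as one space-separated value.
    return AUTH_URL + "?" + urlencode({"client_id": client_id, "scope": " ".join(scopes)})

print(authorize_url("example-client-id", ["user:email", "read:org"]))  # broad prompt
print(authorize_url("example-client-id", ["user:email"]))              # email only
```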
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `liberapay/elsewhere/github.py`
Content:
```
1 from liberapay.elsewhere._base import PlatformOAuth2
2 from liberapay.elsewhere._exceptions import CantReadMembership
3 from liberapay.elsewhere._extractors import key, drop_keys
4 from liberapay.elsewhere._paginators import header_links_paginator
5
6
7 class GitHub(PlatformOAuth2):
8
9 # Platform attributes
10 name = 'github'
11 display_name = 'GitHub'
12 fontawesome_name = name
13 account_url = 'https://github.com/{user_name}'
14 repo_url = 'https://github.com/{slug}'
15 has_teams = True
16
17 # Auth attributes
18 auth_url = 'https://github.com/login/oauth/authorize'
19 access_token_url = 'https://github.com/login/oauth/access_token'
20 oauth_email_scope = 'user:email'
21 oauth_default_scope = ['read:org']
22
23 # API attributes
24 api_format = 'json'
25 api_paginator = header_links_paginator()
26 api_url = 'https://api.github.com'
27 api_app_auth_params = 'client_id={api_key}&client_secret={api_secret}'
28 api_user_info_path = '/user/{user_id}'
29 api_user_name_info_path = '/users/{user_name}'
30 api_user_self_info_path = '/user'
31 api_team_members_path = '/orgs/{user_name}/public_members'
32 api_friends_path = '/users/{user_name}/following'
33 api_repos_path = '/users/{user_name}/repos?type=owner&sort=updated&per_page=100'
34 api_starred_path = '/users/{user_name}/starred'
35 ratelimit_headers_prefix = 'x-ratelimit-'
36
37 # User info extractors
38 x_user_id = key('id')
39 x_user_name = key('login')
40 x_display_name = key('name')
41 x_email = key('email')
42 x_gravatar_id = key('gravatar_id')
43 x_avatar_url = key('avatar_url')
44 x_is_team = key('type', clean=lambda t: t.lower() == 'organization')
45 x_description = key('bio')
46 x_extra_info_drop = drop_keys(lambda k: k.endswith('_url'))
47
48 # Repo info extractors
49 x_repo_id = key('id')
50 x_repo_name = key('name')
51 x_repo_slug = key('full_name')
52 x_repo_description = key('description')
53 x_repo_last_update = key('pushed_at')
54 x_repo_is_fork = key('fork')
55 x_repo_stars_count = key('stargazers_count')
56 x_repo_owner_id = key('owner', clean=lambda d: d['id'])
57 x_repo_extra_info_drop = drop_keys(lambda k: k.endswith('_url'))
58
59 def get_CantReadMembership_url(self, **kw):
60 return 'https://github.com/settings/connections/applications/'+self.api_key
61
62 def is_team_member(self, org_name, sess, account):
63 org_name = org_name.lower()
64
65 # Check public membership first
66 response = self.api_get(
67 '', '/orgs/'+org_name+'/public_members/'+account.user_name,
68 sess=sess, error_handler=None
69 )
70 if response.status_code == 204:
71 return True
72 elif response.status_code != 404:
73 self.api_error_handler(response, True, self.domain)
74
75 # Check private membership
76 response = self.api_get(
77 '', '/user/memberships/orgs/'+org_name, sess=sess, error_handler=None
78 )
79 if response.status_code == 403:
80 raise CantReadMembership
81 elif response.status_code >= 400:
82 self.api_error_handler(response, True, self.domain)
83 membership = self.api_parser(response)
84 if membership['state'] == 'active':
85 return True
86
87 # Try the endpoint we were using before
88 user_orgs = self.api_parser(self.api_get('', '/user/orgs', sess=sess))
89 return any(org.get('login') == org_name for org in user_orgs)
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/liberapay/elsewhere/github.py b/liberapay/elsewhere/github.py
--- a/liberapay/elsewhere/github.py
+++ b/liberapay/elsewhere/github.py
@@ -18,7 +18,6 @@
auth_url = 'https://github.com/login/oauth/authorize'
access_token_url = 'https://github.com/login/oauth/access_token'
oauth_email_scope = 'user:email'
- oauth_default_scope = ['read:org']
# API attributes
api_format = 'json'
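
With `oauth_default_scope` removed, `user:email` is the only scope this subclass still names; whatever default the `PlatformOAuth2` base class applies is not shown above, so the check below is a rough sketch with assumed attribute names rather than a definitive test.

```python
# Rough verification sketch (attribute names taken from the file above; the
# base-class default, if any, is an assumption).
from liberapay.elsewhere.github import GitHub

def test_github_no_longer_requests_org_scope():
    scopes = getattr(GitHub, "oauth_default_scope", None) or []
    assert "read:org" not in scopes
    assert GitHub.oauth_email_scope == "user:email"
```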
|
{"golden_diff": "diff --git a/liberapay/elsewhere/github.py b/liberapay/elsewhere/github.py\n--- a/liberapay/elsewhere/github.py\n+++ b/liberapay/elsewhere/github.py\n@@ -18,7 +18,6 @@\n auth_url = 'https://github.com/login/oauth/authorize'\n access_token_url = 'https://github.com/login/oauth/access_token'\n oauth_email_scope = 'user:email'\n- oauth_default_scope = ['read:org']\n \n # API attributes\n api_format = 'json'\n", "issue": "Requests access to GitHub private repos?\nHi. I'm a brand-new user. I have a question I didn't see in the FAQ or when I searched issues here.\r\n\r\nI was going to connect my GitHub account and saw this:\r\n\r\n> Liberapay by liberapay\r\n> wants to access your greghendershott account\r\n> \r\n> Organizations and teams\r\n> Read-only access\r\n>\r\n> This application will be able to read your organization and team membership and private Projects.\r\n\r\nI almost clicked OK, but noticed \"**private** Projects\". I stopped. I don't want to do that.\r\n\r\nIs this as-intended?\n", "before_files": [{"content": "from liberapay.elsewhere._base import PlatformOAuth2\nfrom liberapay.elsewhere._exceptions import CantReadMembership\nfrom liberapay.elsewhere._extractors import key, drop_keys\nfrom liberapay.elsewhere._paginators import header_links_paginator\n\n\nclass GitHub(PlatformOAuth2):\n\n # Platform attributes\n name = 'github'\n display_name = 'GitHub'\n fontawesome_name = name\n account_url = 'https://github.com/{user_name}'\n repo_url = 'https://github.com/{slug}'\n has_teams = True\n\n # Auth attributes\n auth_url = 'https://github.com/login/oauth/authorize'\n access_token_url = 'https://github.com/login/oauth/access_token'\n oauth_email_scope = 'user:email'\n oauth_default_scope = ['read:org']\n\n # API attributes\n api_format = 'json'\n api_paginator = header_links_paginator()\n api_url = 'https://api.github.com'\n api_app_auth_params = 'client_id={api_key}&client_secret={api_secret}'\n api_user_info_path = '/user/{user_id}'\n api_user_name_info_path = '/users/{user_name}'\n api_user_self_info_path = '/user'\n api_team_members_path = '/orgs/{user_name}/public_members'\n api_friends_path = '/users/{user_name}/following'\n api_repos_path = '/users/{user_name}/repos?type=owner&sort=updated&per_page=100'\n api_starred_path = '/users/{user_name}/starred'\n ratelimit_headers_prefix = 'x-ratelimit-'\n\n # User info extractors\n x_user_id = key('id')\n x_user_name = key('login')\n x_display_name = key('name')\n x_email = key('email')\n x_gravatar_id = key('gravatar_id')\n x_avatar_url = key('avatar_url')\n x_is_team = key('type', clean=lambda t: t.lower() == 'organization')\n x_description = key('bio')\n x_extra_info_drop = drop_keys(lambda k: k.endswith('_url'))\n\n # Repo info extractors\n x_repo_id = key('id')\n x_repo_name = key('name')\n x_repo_slug = key('full_name')\n x_repo_description = key('description')\n x_repo_last_update = key('pushed_at')\n x_repo_is_fork = key('fork')\n x_repo_stars_count = key('stargazers_count')\n x_repo_owner_id = key('owner', clean=lambda d: d['id'])\n x_repo_extra_info_drop = drop_keys(lambda k: k.endswith('_url'))\n\n def get_CantReadMembership_url(self, **kw):\n return 'https://github.com/settings/connections/applications/'+self.api_key\n\n def is_team_member(self, org_name, sess, account):\n org_name = org_name.lower()\n\n # Check public membership first\n response = self.api_get(\n '', '/orgs/'+org_name+'/public_members/'+account.user_name,\n sess=sess, error_handler=None\n )\n if response.status_code == 204:\n return 
True\n elif response.status_code != 404:\n self.api_error_handler(response, True, self.domain)\n\n # Check private membership\n response = self.api_get(\n '', '/user/memberships/orgs/'+org_name, sess=sess, error_handler=None\n )\n if response.status_code == 403:\n raise CantReadMembership\n elif response.status_code >= 400:\n self.api_error_handler(response, True, self.domain)\n membership = self.api_parser(response)\n if membership['state'] == 'active':\n return True\n\n # Try the endpoint we were using before\n user_orgs = self.api_parser(self.api_get('', '/user/orgs', sess=sess))\n return any(org.get('login') == org_name for org in user_orgs)\n", "path": "liberapay/elsewhere/github.py"}], "after_files": [{"content": "from liberapay.elsewhere._base import PlatformOAuth2\nfrom liberapay.elsewhere._exceptions import CantReadMembership\nfrom liberapay.elsewhere._extractors import key, drop_keys\nfrom liberapay.elsewhere._paginators import header_links_paginator\n\n\nclass GitHub(PlatformOAuth2):\n\n # Platform attributes\n name = 'github'\n display_name = 'GitHub'\n fontawesome_name = name\n account_url = 'https://github.com/{user_name}'\n repo_url = 'https://github.com/{slug}'\n has_teams = True\n\n # Auth attributes\n auth_url = 'https://github.com/login/oauth/authorize'\n access_token_url = 'https://github.com/login/oauth/access_token'\n oauth_email_scope = 'user:email'\n\n # API attributes\n api_format = 'json'\n api_paginator = header_links_paginator()\n api_url = 'https://api.github.com'\n api_app_auth_params = 'client_id={api_key}&client_secret={api_secret}'\n api_user_info_path = '/user/{user_id}'\n api_user_name_info_path = '/users/{user_name}'\n api_user_self_info_path = '/user'\n api_team_members_path = '/orgs/{user_name}/public_members'\n api_friends_path = '/users/{user_name}/following'\n api_repos_path = '/users/{user_name}/repos?type=owner&sort=updated&per_page=100'\n api_starred_path = '/users/{user_name}/starred'\n ratelimit_headers_prefix = 'x-ratelimit-'\n\n # User info extractors\n x_user_id = key('id')\n x_user_name = key('login')\n x_display_name = key('name')\n x_email = key('email')\n x_gravatar_id = key('gravatar_id')\n x_avatar_url = key('avatar_url')\n x_is_team = key('type', clean=lambda t: t.lower() == 'organization')\n x_description = key('bio')\n x_extra_info_drop = drop_keys(lambda k: k.endswith('_url'))\n\n # Repo info extractors\n x_repo_id = key('id')\n x_repo_name = key('name')\n x_repo_slug = key('full_name')\n x_repo_description = key('description')\n x_repo_last_update = key('pushed_at')\n x_repo_is_fork = key('fork')\n x_repo_stars_count = key('stargazers_count')\n x_repo_owner_id = key('owner', clean=lambda d: d['id'])\n x_repo_extra_info_drop = drop_keys(lambda k: k.endswith('_url'))\n\n def get_CantReadMembership_url(self, **kw):\n return 'https://github.com/settings/connections/applications/'+self.api_key\n\n def is_team_member(self, org_name, sess, account):\n org_name = org_name.lower()\n\n # Check public membership first\n response = self.api_get(\n '', '/orgs/'+org_name+'/public_members/'+account.user_name,\n sess=sess, error_handler=None\n )\n if response.status_code == 204:\n return True\n elif response.status_code != 404:\n self.api_error_handler(response, True, self.domain)\n\n # Check private membership\n response = self.api_get(\n '', '/user/memberships/orgs/'+org_name, sess=sess, error_handler=None\n )\n if response.status_code == 403:\n raise CantReadMembership\n elif response.status_code >= 400:\n self.api_error_handler(response, 
True, self.domain)\n membership = self.api_parser(response)\n if membership['state'] == 'active':\n return True\n\n # Try the endpoint we were using before\n user_orgs = self.api_parser(self.api_get('', '/user/orgs', sess=sess))\n return any(org.get('login') == org_name for org in user_orgs)\n", "path": "liberapay/elsewhere/github.py"}]}
| 1,425 | 123 |
gh_patches_debug_17032
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-1661
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wrong grade document edit form title
When editing a grade document that holds final grades, the title of the form wrongly shows "Upload midterm grades" (should be "Upload final grades" instead) because the parameter `final_grades` is not correctly set for the template.
This can for example be seen at the course "Operating Systems I (Summer term 2014)" in the test data.
--- END ISSUE ---
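The upload view passes `final_grades` into the template context while the edit view does not, and a Django template treats a missing context variable as falsy, so the title branch silently falls back to the midterm wording. The snippet below is a minimal stand-in for that branch; the real `grades_upload_form.html` is not shown here.

```python
# Minimal stand-in for the title branch: a missing "final_grades" key behaves
# exactly like final_grades=False, which is why the edit form showed the
# midterm wording before the fix.
def form_title(context):
    return "Upload final grades" if context.get("final_grades") else "Upload midterm grades"

assert form_title({"final_grades": True}) == "Upload final grades"
assert form_title({}) == "Upload midterm grades"   # edit view's context before the fix
```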
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/grades/views.py`
Content:
```
1 from django.conf import settings
2 from django.contrib import messages
3 from django.core.exceptions import PermissionDenied
4 from django.http import HttpResponse
5 from django.shortcuts import get_object_or_404, redirect, render
6 from django.utils.translation import gettext as _
7 from django.views.decorators.http import require_GET, require_POST
8 from django_sendfile import sendfile
9
10 from evap.evaluation.auth import (
11 grade_downloader_required,
12 grade_publisher_or_manager_required,
13 grade_publisher_required,
14 )
15 from evap.evaluation.models import Course, EmailTemplate, Evaluation, Semester
16 from evap.grades.forms import GradeDocumentForm
17 from evap.grades.models import GradeDocument
18
19
20 @grade_publisher_required
21 def index(request):
22 template_data = dict(
23 semesters=Semester.objects.filter(grade_documents_are_deleted=False),
24 disable_breadcrumb_grades=True,
25 )
26 return render(request, "grades_index.html", template_data)
27
28
29 def prefetch_data(courses):
30 courses = courses.prefetch_related("degrees", "responsibles")
31
32 course_data = []
33 for course in courses:
34 course_data.append((course, course.midterm_grade_documents.count(), course.final_grade_documents.count()))
35
36 return course_data
37
38
39 @grade_publisher_required
40 def semester_view(request, semester_id):
41 semester = get_object_or_404(Semester, id=semester_id)
42 if semester.grade_documents_are_deleted:
43 raise PermissionDenied
44
45 courses = (
46 semester.courses.filter(evaluations__wait_for_grade_upload_before_publishing=True)
47 .exclude(evaluations__state=Evaluation.State.NEW)
48 .distinct()
49 )
50 courses = prefetch_data(courses)
51
52 template_data = dict(
53 semester=semester,
54 courses=courses,
55 disable_if_archived="disabled" if semester.grade_documents_are_deleted else "",
56 disable_breadcrumb_semester=True,
57 )
58 return render(request, "grades_semester_view.html", template_data)
59
60
61 @grade_publisher_or_manager_required
62 def course_view(request, semester_id, course_id):
63 semester = get_object_or_404(Semester, id=semester_id)
64 if semester.grade_documents_are_deleted:
65 raise PermissionDenied
66 course = get_object_or_404(Course, id=course_id, semester=semester)
67
68 template_data = dict(
69 semester=semester,
70 course=course,
71 grade_documents=course.grade_documents.all(),
72 disable_if_archived="disabled" if semester.grade_documents_are_deleted else "",
73 disable_breadcrumb_course=True,
74 )
75 return render(request, "grades_course_view.html", template_data)
76
77
78 def on_grading_process_finished(course):
79 evaluations = course.evaluations.all()
80 if all(evaluation.state == Evaluation.State.REVIEWED for evaluation in evaluations):
81 for evaluation in evaluations:
82 assert evaluation.grading_process_is_finished
83 for evaluation in evaluations:
84 evaluation.publish()
85 evaluation.save()
86
87 EmailTemplate.send_participant_publish_notifications(evaluations)
88 EmailTemplate.send_contributor_publish_notifications(evaluations)
89
90
91 @grade_publisher_required
92 def upload_grades(request, semester_id, course_id):
93 semester = get_object_or_404(Semester, id=semester_id)
94 if semester.grade_documents_are_deleted:
95 raise PermissionDenied
96 course = get_object_or_404(Course, id=course_id, semester=semester)
97
98 final_grades = request.GET.get("final") == "true" # if parameter is not given, assume midterm grades
99
100 grade_document = GradeDocument(course=course)
101 if final_grades:
102 grade_document.type = GradeDocument.Type.FINAL_GRADES
103 grade_document.description_en = settings.DEFAULT_FINAL_GRADES_DESCRIPTION_EN
104 grade_document.description_de = settings.DEFAULT_FINAL_GRADES_DESCRIPTION_DE
105 else:
106 grade_document.type = GradeDocument.Type.MIDTERM_GRADES
107 grade_document.description_en = settings.DEFAULT_MIDTERM_GRADES_DESCRIPTION_EN
108 grade_document.description_de = settings.DEFAULT_MIDTERM_GRADES_DESCRIPTION_DE
109
110 form = GradeDocumentForm(request.POST or None, request.FILES or None, instance=grade_document)
111
112 if form.is_valid():
113 form.save(modifying_user=request.user)
114
115 if final_grades:
116 on_grading_process_finished(course)
117
118 messages.success(request, _("Successfully uploaded grades."))
119 return redirect("grades:course_view", semester.id, course.id)
120
121 template_data = dict(
122 semester=semester,
123 course=course,
124 form=form,
125 final_grades=final_grades,
126 show_automated_publishing_info=final_grades,
127 )
128 return render(request, "grades_upload_form.html", template_data)
129
130
131 @require_POST
132 @grade_publisher_required
133 def toggle_no_grades(request):
134 course_id = request.POST.get("course_id")
135 course = get_object_or_404(Course, id=course_id)
136 if course.semester.grade_documents_are_deleted:
137 raise PermissionDenied
138
139 course.gets_no_grade_documents = not course.gets_no_grade_documents
140 course.save()
141
142 if course.gets_no_grade_documents:
143 on_grading_process_finished(course)
144
145 return HttpResponse() # 200 OK
146
147
148 @require_GET
149 @grade_downloader_required
150 def download_grades(request, grade_document_id):
151 grade_document = get_object_or_404(GradeDocument, id=grade_document_id)
152 if grade_document.course.semester.grade_documents_are_deleted:
153 raise PermissionDenied
154
155 return sendfile(request, grade_document.file.path, attachment=True, attachment_filename=grade_document.filename())
156
157
158 @grade_publisher_required
159 def edit_grades(request, semester_id, course_id, grade_document_id):
160 semester = get_object_or_404(Semester, id=semester_id)
161 if semester.grade_documents_are_deleted:
162 raise PermissionDenied
163 course = get_object_or_404(Course, id=course_id, semester=semester)
164 grade_document = get_object_or_404(GradeDocument, id=grade_document_id, course=course)
165
166 form = GradeDocumentForm(request.POST or None, request.FILES or None, instance=grade_document)
167
168 if form.is_valid():
169 form.save(modifying_user=request.user)
170 messages.success(request, _("Successfully updated grades."))
171 return redirect("grades:course_view", semester.id, course.id)
172
173 template_data = dict(
174 semester=semester,
175 course=course,
176 form=form,
177 show_automated_publishing_info=False,
178 )
179 return render(request, "grades_upload_form.html", template_data)
180
181
182 @require_POST
183 @grade_publisher_required
184 def delete_grades(request):
185 grade_document_id = request.POST.get("grade_document_id")
186 grade_document = get_object_or_404(GradeDocument, id=grade_document_id)
187
188 grade_document.delete()
189 return HttpResponse() # 200 OK
190
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/evap/grades/views.py b/evap/grades/views.py
--- a/evap/grades/views.py
+++ b/evap/grades/views.py
@@ -165,6 +165,10 @@
form = GradeDocumentForm(request.POST or None, request.FILES or None, instance=grade_document)
+ final_grades = (
+ grade_document.type == GradeDocument.Type.FINAL_GRADES
+ ) # if parameter is not given, assume midterm grades
+
if form.is_valid():
form.save(modifying_user=request.user)
messages.success(request, _("Successfully updated grades."))
@@ -175,6 +179,7 @@
course=course,
form=form,
show_automated_publishing_info=False,
+ final_grades=final_grades,
)
return render(request, "grades_upload_form.html", template_data)
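
The patch derives `final_grades` from the stored document's type instead of from a query parameter, so the comment copied from `upload_grades` ("if parameter is not given, assume midterm grades") no longer really describes the expression. A standalone illustration of the derivation, with stand-in enum values since the `GradeDocument` model itself is not shown:

```python
# Stand-in enum: the real GradeDocument.Type is not shown in the issue, only
# that it has MIDTERM_GRADES and FINAL_GRADES members (see upload_grades above).
from enum import Enum

class Type(Enum):
    MIDTERM_GRADES = "MID"
    FINAL_GRADES = "FIN"

def is_final(doc_type):
    return doc_type == Type.FINAL_GRADES

assert is_final(Type.FINAL_GRADES)
assert not is_final(Type.MIDTERM_GRADES)
```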
|
{"golden_diff": "diff --git a/evap/grades/views.py b/evap/grades/views.py\n--- a/evap/grades/views.py\n+++ b/evap/grades/views.py\n@@ -165,6 +165,10 @@\n \n form = GradeDocumentForm(request.POST or None, request.FILES or None, instance=grade_document)\n \n+ final_grades = (\n+ grade_document.type == GradeDocument.Type.FINAL_GRADES\n+ ) # if parameter is not given, assume midterm grades\n+\n if form.is_valid():\n form.save(modifying_user=request.user)\n messages.success(request, _(\"Successfully updated grades.\"))\n@@ -175,6 +179,7 @@\n course=course,\n form=form,\n show_automated_publishing_info=False,\n+ final_grades=final_grades,\n )\n return render(request, \"grades_upload_form.html\", template_data)\n", "issue": "Wrong grade document edit form title\nWhen editing a grade document that holds final grades, the title of the form wrongly shows \"Upload midterm grades\" (should be \"Upload final grades\" instead) because the parameter `final_grades` is not correctly set for the template.\r\nThis can for example be seen at the course \"Operating Systems I (Summer term 2014)\" in the test data.\n", "before_files": [{"content": "from django.conf import settings\nfrom django.contrib import messages\nfrom django.core.exceptions import PermissionDenied\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils.translation import gettext as _\nfrom django.views.decorators.http import require_GET, require_POST\nfrom django_sendfile import sendfile\n\nfrom evap.evaluation.auth import (\n grade_downloader_required,\n grade_publisher_or_manager_required,\n grade_publisher_required,\n)\nfrom evap.evaluation.models import Course, EmailTemplate, Evaluation, Semester\nfrom evap.grades.forms import GradeDocumentForm\nfrom evap.grades.models import GradeDocument\n\n\n@grade_publisher_required\ndef index(request):\n template_data = dict(\n semesters=Semester.objects.filter(grade_documents_are_deleted=False),\n disable_breadcrumb_grades=True,\n )\n return render(request, \"grades_index.html\", template_data)\n\n\ndef prefetch_data(courses):\n courses = courses.prefetch_related(\"degrees\", \"responsibles\")\n\n course_data = []\n for course in courses:\n course_data.append((course, course.midterm_grade_documents.count(), course.final_grade_documents.count()))\n\n return course_data\n\n\n@grade_publisher_required\ndef semester_view(request, semester_id):\n semester = get_object_or_404(Semester, id=semester_id)\n if semester.grade_documents_are_deleted:\n raise PermissionDenied\n\n courses = (\n semester.courses.filter(evaluations__wait_for_grade_upload_before_publishing=True)\n .exclude(evaluations__state=Evaluation.State.NEW)\n .distinct()\n )\n courses = prefetch_data(courses)\n\n template_data = dict(\n semester=semester,\n courses=courses,\n disable_if_archived=\"disabled\" if semester.grade_documents_are_deleted else \"\",\n disable_breadcrumb_semester=True,\n )\n return render(request, \"grades_semester_view.html\", template_data)\n\n\n@grade_publisher_or_manager_required\ndef course_view(request, semester_id, course_id):\n semester = get_object_or_404(Semester, id=semester_id)\n if semester.grade_documents_are_deleted:\n raise PermissionDenied\n course = get_object_or_404(Course, id=course_id, semester=semester)\n\n template_data = dict(\n semester=semester,\n course=course,\n grade_documents=course.grade_documents.all(),\n disable_if_archived=\"disabled\" if semester.grade_documents_are_deleted else \"\",\n 
disable_breadcrumb_course=True,\n )\n return render(request, \"grades_course_view.html\", template_data)\n\n\ndef on_grading_process_finished(course):\n evaluations = course.evaluations.all()\n if all(evaluation.state == Evaluation.State.REVIEWED for evaluation in evaluations):\n for evaluation in evaluations:\n assert evaluation.grading_process_is_finished\n for evaluation in evaluations:\n evaluation.publish()\n evaluation.save()\n\n EmailTemplate.send_participant_publish_notifications(evaluations)\n EmailTemplate.send_contributor_publish_notifications(evaluations)\n\n\n@grade_publisher_required\ndef upload_grades(request, semester_id, course_id):\n semester = get_object_or_404(Semester, id=semester_id)\n if semester.grade_documents_are_deleted:\n raise PermissionDenied\n course = get_object_or_404(Course, id=course_id, semester=semester)\n\n final_grades = request.GET.get(\"final\") == \"true\" # if parameter is not given, assume midterm grades\n\n grade_document = GradeDocument(course=course)\n if final_grades:\n grade_document.type = GradeDocument.Type.FINAL_GRADES\n grade_document.description_en = settings.DEFAULT_FINAL_GRADES_DESCRIPTION_EN\n grade_document.description_de = settings.DEFAULT_FINAL_GRADES_DESCRIPTION_DE\n else:\n grade_document.type = GradeDocument.Type.MIDTERM_GRADES\n grade_document.description_en = settings.DEFAULT_MIDTERM_GRADES_DESCRIPTION_EN\n grade_document.description_de = settings.DEFAULT_MIDTERM_GRADES_DESCRIPTION_DE\n\n form = GradeDocumentForm(request.POST or None, request.FILES or None, instance=grade_document)\n\n if form.is_valid():\n form.save(modifying_user=request.user)\n\n if final_grades:\n on_grading_process_finished(course)\n\n messages.success(request, _(\"Successfully uploaded grades.\"))\n return redirect(\"grades:course_view\", semester.id, course.id)\n\n template_data = dict(\n semester=semester,\n course=course,\n form=form,\n final_grades=final_grades,\n show_automated_publishing_info=final_grades,\n )\n return render(request, \"grades_upload_form.html\", template_data)\n\n\n@require_POST\n@grade_publisher_required\ndef toggle_no_grades(request):\n course_id = request.POST.get(\"course_id\")\n course = get_object_or_404(Course, id=course_id)\n if course.semester.grade_documents_are_deleted:\n raise PermissionDenied\n\n course.gets_no_grade_documents = not course.gets_no_grade_documents\n course.save()\n\n if course.gets_no_grade_documents:\n on_grading_process_finished(course)\n\n return HttpResponse() # 200 OK\n\n\n@require_GET\n@grade_downloader_required\ndef download_grades(request, grade_document_id):\n grade_document = get_object_or_404(GradeDocument, id=grade_document_id)\n if grade_document.course.semester.grade_documents_are_deleted:\n raise PermissionDenied\n\n return sendfile(request, grade_document.file.path, attachment=True, attachment_filename=grade_document.filename())\n\n\n@grade_publisher_required\ndef edit_grades(request, semester_id, course_id, grade_document_id):\n semester = get_object_or_404(Semester, id=semester_id)\n if semester.grade_documents_are_deleted:\n raise PermissionDenied\n course = get_object_or_404(Course, id=course_id, semester=semester)\n grade_document = get_object_or_404(GradeDocument, id=grade_document_id, course=course)\n\n form = GradeDocumentForm(request.POST or None, request.FILES or None, instance=grade_document)\n\n if form.is_valid():\n form.save(modifying_user=request.user)\n messages.success(request, _(\"Successfully updated grades.\"))\n return redirect(\"grades:course_view\", semester.id, 
course.id)\n\n template_data = dict(\n semester=semester,\n course=course,\n form=form,\n show_automated_publishing_info=False,\n )\n return render(request, \"grades_upload_form.html\", template_data)\n\n\n@require_POST\n@grade_publisher_required\ndef delete_grades(request):\n grade_document_id = request.POST.get(\"grade_document_id\")\n grade_document = get_object_or_404(GradeDocument, id=grade_document_id)\n\n grade_document.delete()\n return HttpResponse() # 200 OK\n", "path": "evap/grades/views.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.contrib import messages\nfrom django.core.exceptions import PermissionDenied\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils.translation import gettext as _\nfrom django.views.decorators.http import require_GET, require_POST\nfrom django_sendfile import sendfile\n\nfrom evap.evaluation.auth import (\n grade_downloader_required,\n grade_publisher_or_manager_required,\n grade_publisher_required,\n)\nfrom evap.evaluation.models import Course, EmailTemplate, Evaluation, Semester\nfrom evap.grades.forms import GradeDocumentForm\nfrom evap.grades.models import GradeDocument\n\n\n@grade_publisher_required\ndef index(request):\n template_data = dict(\n semesters=Semester.objects.filter(grade_documents_are_deleted=False),\n disable_breadcrumb_grades=True,\n )\n return render(request, \"grades_index.html\", template_data)\n\n\ndef prefetch_data(courses):\n courses = courses.prefetch_related(\"degrees\", \"responsibles\")\n\n course_data = []\n for course in courses:\n course_data.append((course, course.midterm_grade_documents.count(), course.final_grade_documents.count()))\n\n return course_data\n\n\n@grade_publisher_required\ndef semester_view(request, semester_id):\n semester = get_object_or_404(Semester, id=semester_id)\n if semester.grade_documents_are_deleted:\n raise PermissionDenied\n\n courses = (\n semester.courses.filter(evaluations__wait_for_grade_upload_before_publishing=True)\n .exclude(evaluations__state=Evaluation.State.NEW)\n .distinct()\n )\n courses = prefetch_data(courses)\n\n template_data = dict(\n semester=semester,\n courses=courses,\n disable_if_archived=\"disabled\" if semester.grade_documents_are_deleted else \"\",\n disable_breadcrumb_semester=True,\n )\n return render(request, \"grades_semester_view.html\", template_data)\n\n\n@grade_publisher_or_manager_required\ndef course_view(request, semester_id, course_id):\n semester = get_object_or_404(Semester, id=semester_id)\n if semester.grade_documents_are_deleted:\n raise PermissionDenied\n course = get_object_or_404(Course, id=course_id, semester=semester)\n\n template_data = dict(\n semester=semester,\n course=course,\n grade_documents=course.grade_documents.all(),\n disable_if_archived=\"disabled\" if semester.grade_documents_are_deleted else \"\",\n disable_breadcrumb_course=True,\n )\n return render(request, \"grades_course_view.html\", template_data)\n\n\ndef on_grading_process_finished(course):\n evaluations = course.evaluations.all()\n if all(evaluation.state == Evaluation.State.REVIEWED for evaluation in evaluations):\n for evaluation in evaluations:\n assert evaluation.grading_process_is_finished\n for evaluation in evaluations:\n evaluation.publish()\n evaluation.save()\n\n EmailTemplate.send_participant_publish_notifications(evaluations)\n EmailTemplate.send_contributor_publish_notifications(evaluations)\n\n\n@grade_publisher_required\ndef upload_grades(request, 
semester_id, course_id):\n semester = get_object_or_404(Semester, id=semester_id)\n if semester.grade_documents_are_deleted:\n raise PermissionDenied\n course = get_object_or_404(Course, id=course_id, semester=semester)\n\n final_grades = request.GET.get(\"final\") == \"true\" # if parameter is not given, assume midterm grades\n\n grade_document = GradeDocument(course=course)\n if final_grades:\n grade_document.type = GradeDocument.Type.FINAL_GRADES\n grade_document.description_en = settings.DEFAULT_FINAL_GRADES_DESCRIPTION_EN\n grade_document.description_de = settings.DEFAULT_FINAL_GRADES_DESCRIPTION_DE\n else:\n grade_document.type = GradeDocument.Type.MIDTERM_GRADES\n grade_document.description_en = settings.DEFAULT_MIDTERM_GRADES_DESCRIPTION_EN\n grade_document.description_de = settings.DEFAULT_MIDTERM_GRADES_DESCRIPTION_DE\n\n form = GradeDocumentForm(request.POST or None, request.FILES or None, instance=grade_document)\n\n if form.is_valid():\n form.save(modifying_user=request.user)\n\n if final_grades:\n on_grading_process_finished(course)\n\n messages.success(request, _(\"Successfully uploaded grades.\"))\n return redirect(\"grades:course_view\", semester.id, course.id)\n\n template_data = dict(\n semester=semester,\n course=course,\n form=form,\n final_grades=final_grades,\n show_automated_publishing_info=final_grades,\n )\n return render(request, \"grades_upload_form.html\", template_data)\n\n\n@require_POST\n@grade_publisher_required\ndef toggle_no_grades(request):\n course_id = request.POST.get(\"course_id\")\n course = get_object_or_404(Course, id=course_id)\n if course.semester.grade_documents_are_deleted:\n raise PermissionDenied\n\n course.gets_no_grade_documents = not course.gets_no_grade_documents\n course.save()\n\n if course.gets_no_grade_documents:\n on_grading_process_finished(course)\n\n return HttpResponse() # 200 OK\n\n\n@require_GET\n@grade_downloader_required\ndef download_grades(request, grade_document_id):\n grade_document = get_object_or_404(GradeDocument, id=grade_document_id)\n if grade_document.course.semester.grade_documents_are_deleted:\n raise PermissionDenied\n\n return sendfile(request, grade_document.file.path, attachment=True, attachment_filename=grade_document.filename())\n\n\n@grade_publisher_required\ndef edit_grades(request, semester_id, course_id, grade_document_id):\n semester = get_object_or_404(Semester, id=semester_id)\n if semester.grade_documents_are_deleted:\n raise PermissionDenied\n course = get_object_or_404(Course, id=course_id, semester=semester)\n grade_document = get_object_or_404(GradeDocument, id=grade_document_id, course=course)\n\n form = GradeDocumentForm(request.POST or None, request.FILES or None, instance=grade_document)\n\n final_grades = (\n grade_document.type == GradeDocument.Type.FINAL_GRADES\n ) # if parameter is not given, assume midterm grades\n\n if form.is_valid():\n form.save(modifying_user=request.user)\n messages.success(request, _(\"Successfully updated grades.\"))\n return redirect(\"grades:course_view\", semester.id, course.id)\n\n template_data = dict(\n semester=semester,\n course=course,\n form=form,\n show_automated_publishing_info=False,\n final_grades=final_grades,\n )\n return render(request, \"grades_upload_form.html\", template_data)\n\n\n@require_POST\n@grade_publisher_required\ndef delete_grades(request):\n grade_document_id = request.POST.get(\"grade_document_id\")\n grade_document = get_object_or_404(GradeDocument, id=grade_document_id)\n\n grade_document.delete()\n return HttpResponse() # 200 
OK\n", "path": "evap/grades/views.py"}]}
| 2,263 | 200 |
gh_patches_debug_18121
|
rasdani/github-patches
|
git_diff
|
numba__numba-7733
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Compiler not found: /tmp/tmp bug
```
$ pip install numba
Collecting numba
Downloading numba-0.53.1-cp39-cp39-manylinux2014_x86_64.whl (3.4 MB)
|████████████████████████████████| 3.4 MB 1.2 MB/s
Collecting numpy>=1.15
Downloading numpy-1.21.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB)
|████████████████████████████████| 15.7 MB 1.1 MB/s
Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from numba) (52.0.0)
Collecting llvmlite<0.37,>=0.36.0rc1
Downloading llvmlite-0.36.0-cp39-cp39-manylinux2010_x86_64.whl (25.3 MB)
|████████████████████████████████| 25.3 MB 829 kB/s
Installing collected packages: numpy, llvmlite, numba
WARNING: The scripts f2py, f2py3 and f2py3.9 are installed in '/home/olaf/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed llvmlite-0.36.0 numba-0.53.1 numpy-1.21.0
$ ./t.py
$ ./t.py
$ chmod 000 /tmp/tmp
$ ./t.py
Traceback (most recent call last):
File "/home/olaf/./t.py", line 5, in <module>
cc = CC('interpolation')
File "/home/olaf/.local/lib/python3.9/site-packages/numba/pycc/cc.py", line 65, in __init__
self._toolchain = Toolchain()
File "/home/olaf/.local/lib/python3.9/site-packages/numba/pycc/platform.py", line 78, in __init__
self._raise_external_compiler_error()
File "/home/olaf/.local/lib/python3.9/site-packages/numba/pycc/platform.py", line 121, in _raise_external_compiler_error
raise RuntimeError(msg)
RuntimeError: Attempted to compile AOT function without the compiler used by `numpy.distutils` present. If using conda try:
#> conda install gcc_linux-64 gxx_linux-64
$ cat t.py
#!/usr/bin/python3
from numba.pycc import CC
cc = CC('interpolation')
# cc.compile()
$ pip list|grep numba
numba 0.53.1
```
--- END ISSUE ---
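The traceback ends in `_raise_external_compiler_error`, but the compiler itself is fine: `_check_external_compiler()` (in the file below) compiles a throwaway source with `output_dir=gettempdir()`, and distutils re-roots the object file's full source path under that directory. With the source created in `/tmp/tmpXXXXXX/`, the object file is therefore written under `/tmp/tmp/…`, which collides with the reporter's unreadable `/tmp/tmp` directory. The sketch below only reproduces that path arithmetic for POSIX paths — roughly what `CCompiler.object_filenames` does — without invoking any compiler.

```python
# Sketch of the path collision (pure path arithmetic, no compiler needed).
import os

output_dir = "/tmp"                       # what gettempdir() returns for this user
source = "/tmp/tmpXXXXXX/temp.c"          # created via mkdtemp() + open() in _gentmpfile
rerooted = os.path.join(output_dir, os.path.splitdrive(source)[1].lstrip(os.sep))
obj = os.path.splitext(rerooted)[0] + ".o"
print(obj)                                # -> /tmp/tmp/tmpXXXXXX/temp.o
```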
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numba/pycc/platform.py`
Content:
```
1 from distutils.ccompiler import CCompiler, new_compiler
2 from distutils.command.build_ext import build_ext
3 from distutils.sysconfig import customize_compiler
4 from distutils import log
5
6 import numpy.distutils.misc_util as np_misc
7
8 import functools
9 import os
10 import subprocess
11 import sys
12 from tempfile import NamedTemporaryFile, mkdtemp, gettempdir
13 from contextlib import contextmanager
14
15 _configs = {
16 # DLL suffix, Python C extension suffix
17 'win': ('.dll', '.pyd'),
18 'default': ('.so', '.so'),
19 }
20
21
22 def get_configs(arg):
23 return _configs.get(sys.platform[:3], _configs['default'])[arg]
24
25
26 find_shared_ending = functools.partial(get_configs, 0)
27 find_pyext_ending = functools.partial(get_configs, 1)
28
29 @contextmanager
30 def _gentmpfile(suffix):
31 # windows locks the tempfile so use a tempdir + file, see
32 # https://github.com/numba/numba/issues/3304
33 try:
34 tmpdir = mkdtemp()
35 ntf = open(os.path.join(tmpdir, "temp%s" % suffix), 'wt')
36 yield ntf
37 finally:
38 try:
39 ntf.close()
40 os.remove(ntf)
41 except:
42 pass
43 else:
44 os.rmdir(tmpdir)
45
46 def _check_external_compiler():
47 # see if the external compiler bound in numpy.distutil is present
48 # and working
49 compiler = new_compiler()
50 customize_compiler(compiler)
51 for suffix in ['.c', '.cxx']:
52 try:
53 with _gentmpfile(suffix) as ntf:
54 simple_c = "int main(void) { return 0; }"
55 ntf.write(simple_c)
56 ntf.flush()
57 ntf.close()
58 # *output_dir* is set to avoid the compiler putting temp files
59 # in the current directory.
60 compiler.compile([ntf.name], output_dir=gettempdir())
61 except Exception: # likely CompileError or file system issue
62 return False
63 return True
64
65 # boolean on whether the externally provided compiler is present and
66 # functioning correctly
67 _external_compiler_ok = _check_external_compiler()
68
69
70 class _DummyExtension(object):
71 libraries = []
72
73
74 class Toolchain(object):
75
76 def __init__(self):
77 if not _external_compiler_ok:
78 self._raise_external_compiler_error()
79
80 # Need to import it here since setuptools may monkeypatch it
81 from distutils.dist import Distribution
82 self._verbose = False
83 self._compiler = new_compiler()
84 customize_compiler(self._compiler)
85 self._build_ext = build_ext(Distribution())
86 self._build_ext.finalize_options()
87 self._py_lib_dirs = self._build_ext.library_dirs
88 self._py_include_dirs = self._build_ext.include_dirs
89 self._math_info = np_misc.get_info('npymath')
90
91 @property
92 def verbose(self):
93 return self._verbose
94
95 @verbose.setter
96 def verbose(self, value):
97 self._verbose = value
98 # DEBUG will let Numpy spew many messages, so stick to INFO
99 # to print commands executed by distutils
100 log.set_threshold(log.INFO if value else log.WARN)
101
102 def _raise_external_compiler_error(self):
103 basemsg = ("Attempted to compile AOT function without the "
104 "compiler used by `numpy.distutils` present.")
105 conda_msg = "If using conda try:\n\n#> conda install %s"
106 plt = sys.platform
107 if plt.startswith('linux'):
108 if sys.maxsize <= 2 ** 32:
109 compilers = ['gcc_linux-32', 'gxx_linux-32']
110 else:
111 compilers = ['gcc_linux-64', 'gxx_linux-64']
112 msg = "%s %s" % (basemsg, conda_msg % ' '.join(compilers))
113 elif plt.startswith('darwin'):
114 compilers = ['clang_osx-64', 'clangxx_osx-64']
115 msg = "%s %s" % (basemsg, conda_msg % ' '.join(compilers))
116 elif plt.startswith('win32'):
117 winmsg = "Cannot find suitable msvc."
118 msg = "%s %s" % (basemsg, winmsg)
119 else:
120 msg = "Unknown platform %s" % plt
121 raise RuntimeError(msg)
122
123 def compile_objects(self, sources, output_dir,
124 include_dirs=(), depends=(), macros=(),
125 extra_cflags=None):
126 """
127 Compile the given source files into a separate object file each,
128 all beneath the *output_dir*. A list of paths to object files
129 is returned.
130
131 *macros* has the same format as in distutils: a list of 1- or 2-tuples.
132 If a 1-tuple (name,), the given name is considered undefined by
133 the C preprocessor.
134 If a 2-tuple (name, value), the given name is expanded into the
135 given value by the C preprocessor.
136 """
137 objects = self._compiler.compile(sources,
138 output_dir=output_dir,
139 include_dirs=include_dirs,
140 depends=depends,
141 macros=macros or [],
142 extra_preargs=extra_cflags)
143 return objects
144
145 def link_shared(self, output, objects, libraries=(),
146 library_dirs=(), export_symbols=(),
147 extra_ldflags=None):
148 """
149 Create a shared library *output* linking the given *objects*
150 and *libraries* (all strings).
151 """
152 output_dir, output_filename = os.path.split(output)
153 self._compiler.link(CCompiler.SHARED_OBJECT, objects,
154 output_filename, output_dir,
155 libraries, library_dirs,
156 export_symbols=export_symbols,
157 extra_preargs=extra_ldflags)
158
159 def get_python_libraries(self):
160 """
161 Get the library arguments necessary to link with Python.
162 """
163 libs = self._build_ext.get_libraries(_DummyExtension())
164 if sys.platform == 'win32':
165 # Under Windows, need to link explicitly against the CRT,
166 # as the MSVC compiler would implicitly do.
167 # (XXX msvcrtd in pydebug mode?)
168 libs = libs + ['msvcrt']
169 return libs + self._math_info['libraries']
170
171 def get_python_library_dirs(self):
172 """
173 Get the library directories necessary to link with Python.
174 """
175 return list(self._py_lib_dirs) + self._math_info['library_dirs']
176
177 def get_python_include_dirs(self):
178 """
179 Get the include directories necessary to compile against the Python
180 and Numpy C APIs.
181 """
182 return list(self._py_include_dirs) + self._math_info['include_dirs']
183
184 def get_ext_filename(self, ext_name):
185 """
186 Given a C extension's module name, return its intended filename.
187 """
188 return self._build_ext.get_ext_filename(ext_name)
189
190
191 #
192 # Patch Numpy's exec_command() to avoid random crashes on Windows in test_pycc
193 # see https://github.com/numpy/numpy/pull/7614
194 # and https://github.com/numpy/numpy/pull/7862
195 #
196
197 def _patch_exec_command():
198 # Patch the internal worker _exec_command()
199 import numpy.distutils.exec_command as mod
200 orig_exec_command = mod._exec_command
201 mod._exec_command = _exec_command
202
203
204 def _exec_command(command, use_shell=None, use_tee=None, **env):
205 """
206 Internal workhorse for exec_command().
207 Code from https://github.com/numpy/numpy/pull/7862
208 """
209 if use_shell is None:
210 use_shell = os.name == 'posix'
211 if use_tee is None:
212 use_tee = os.name == 'posix'
213
214 executable = None
215
216 if os.name == 'posix' and use_shell:
217 # On POSIX, subprocess always uses /bin/sh, override
218 sh = os.environ.get('SHELL', '/bin/sh')
219 if _is_sequence(command):
220 command = [sh, '-c', ' '.join(command)]
221 else:
222 command = [sh, '-c', command]
223 use_shell = False
224
225 elif os.name == 'nt' and _is_sequence(command):
226 # On Windows, join the string for CreateProcess() ourselves as
227 # subprocess does it a bit differently
228 command = ' '.join(_quote_arg(arg) for arg in command)
229
230 # Inherit environment by default
231 env = env or None
232 try:
233 proc = subprocess.Popen(command, shell=use_shell, env=env,
234 stdout=subprocess.PIPE,
235 stderr=subprocess.PIPE,
236 universal_newlines=True)
237 except OSError:
238 # Return 127, as os.spawn*() and /bin/sh do
239 return '', 127
240 text, err = proc.communicate()
241 # Only append stderr if the command failed, as otherwise
242 # the output may become garbled for parsing
243 if proc.returncode:
244 if text:
245 text += "\n"
246 text += err
247 # Another historical oddity
248 if text[-1:] == '\n':
249 text = text[:-1]
250 if use_tee:
251 print(text)
252 return proc.returncode, text
253
254
255 def _quote_arg(arg):
256 """
257 Quote the argument for safe use in a shell command line.
258 """
259 # If there is a quote in the string, assume relevants parts of the
260 # string are already quoted (e.g. '-I"C:\\Program Files\\..."')
261 if '"' not in arg and ' ' in arg:
262 return '"%s"' % arg
263 return arg
264
265
266 def _is_sequence(arg):
267 if isinstance(arg, (str, bytes)):
268 return False
269 try:
270 len(arg)
271 return True
272 except Exception:
273 return False
274
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/numba/pycc/platform.py b/numba/pycc/platform.py
--- a/numba/pycc/platform.py
+++ b/numba/pycc/platform.py
@@ -9,8 +9,9 @@
import os
import subprocess
import sys
-from tempfile import NamedTemporaryFile, mkdtemp, gettempdir
+from tempfile import mkdtemp
from contextlib import contextmanager
+from pathlib import Path
_configs = {
# DLL suffix, Python C extension suffix
@@ -57,7 +58,7 @@
ntf.close()
# *output_dir* is set to avoid the compiler putting temp files
# in the current directory.
- compiler.compile([ntf.name], output_dir=gettempdir())
+ compiler.compile([ntf.name], output_dir=Path(ntf.name).anchor)
except Exception: # likely CompileError or file system issue
return False
return True
|
{"golden_diff": "diff --git a/numba/pycc/platform.py b/numba/pycc/platform.py\n--- a/numba/pycc/platform.py\n+++ b/numba/pycc/platform.py\n@@ -9,8 +9,9 @@\n import os\n import subprocess\n import sys\n-from tempfile import NamedTemporaryFile, mkdtemp, gettempdir\n+from tempfile import mkdtemp\n from contextlib import contextmanager\n+from pathlib import Path\n \n _configs = {\n # DLL suffix, Python C extension suffix\n@@ -57,7 +58,7 @@\n ntf.close()\n # *output_dir* is set to avoid the compiler putting temp files\n # in the current directory.\n- compiler.compile([ntf.name], output_dir=gettempdir())\n+ compiler.compile([ntf.name], output_dir=Path(ntf.name).anchor)\n except Exception: # likely CompileError or file system issue\n return False\n return True\n", "issue": "Compiler not found: /tmp/tmp bug\n\r\n\r\n```\r\n$ pip install numba\r\nCollecting numba\r\n Downloading numba-0.53.1-cp39-cp39-manylinux2014_x86_64.whl (3.4 MB)\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3.4 MB 1.2 MB/s \r\nCollecting numpy>=1.15\r\n Downloading numpy-1.21.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB)\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 15.7 MB 1.1 MB/s \r\nRequirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from numba) (52.0.0)\r\nCollecting llvmlite<0.37,>=0.36.0rc1\r\n Downloading llvmlite-0.36.0-cp39-cp39-manylinux2010_x86_64.whl (25.3 MB)\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 25.3 MB 829 kB/s \r\nInstalling collected packages: numpy, llvmlite, numba\r\n WARNING: The scripts f2py, f2py3 and f2py3.9 are installed in '/home/olaf/.local/bin' which is not on PATH.\r\n Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\r\nSuccessfully installed llvmlite-0.36.0 numba-0.53.1 numpy-1.21.0\r\n\r\n$ ./t.py \r\n \r\n$ ./t.py \r\n\r\n$ chmod 000 /tmp/tmp\r\n\r\n$ ./t.py \r\nTraceback (most recent call last):\r\n File \"/home/olaf/./t.py\", line 5, in <module>\r\n cc = CC('interpolation')\r\n File \"/home/olaf/.local/lib/python3.9/site-packages/numba/pycc/cc.py\", line 65, in __init__\r\n self._toolchain = Toolchain()\r\n File \"/home/olaf/.local/lib/python3.9/site-packages/numba/pycc/platform.py\", line 78, in __init__\r\n self._raise_external_compiler_error()\r\n File \"/home/olaf/.local/lib/python3.9/site-packages/numba/pycc/platform.py\", line 121, in _raise_external_compiler_error\r\n raise RuntimeError(msg)\r\nRuntimeError: Attempted to compile AOT function without the compiler used by `numpy.distutils` present. 
If using conda try:\r\n\r\n#> conda install gcc_linux-64 gxx_linux-64\r\n\r\n$ cat t.py \r\n#!/usr/bin/python3\r\n\r\nfrom numba.pycc import CC\r\n\r\ncc = CC('interpolation')\r\n\r\n# cc.compile()\r\n\r\n$ pip list|grep numba\r\nnumba 0.53.1\r\n```\n", "before_files": [{"content": "from distutils.ccompiler import CCompiler, new_compiler\nfrom distutils.command.build_ext import build_ext\nfrom distutils.sysconfig import customize_compiler\nfrom distutils import log\n\nimport numpy.distutils.misc_util as np_misc\n\nimport functools\nimport os\nimport subprocess\nimport sys\nfrom tempfile import NamedTemporaryFile, mkdtemp, gettempdir\nfrom contextlib import contextmanager\n\n_configs = {\n # DLL suffix, Python C extension suffix\n 'win': ('.dll', '.pyd'),\n 'default': ('.so', '.so'),\n}\n\n\ndef get_configs(arg):\n return _configs.get(sys.platform[:3], _configs['default'])[arg]\n\n\nfind_shared_ending = functools.partial(get_configs, 0)\nfind_pyext_ending = functools.partial(get_configs, 1)\n\n@contextmanager\ndef _gentmpfile(suffix):\n # windows locks the tempfile so use a tempdir + file, see\n # https://github.com/numba/numba/issues/3304\n try:\n tmpdir = mkdtemp()\n ntf = open(os.path.join(tmpdir, \"temp%s\" % suffix), 'wt')\n yield ntf\n finally:\n try:\n ntf.close()\n os.remove(ntf)\n except:\n pass\n else:\n os.rmdir(tmpdir)\n\ndef _check_external_compiler():\n # see if the external compiler bound in numpy.distutil is present\n # and working\n compiler = new_compiler()\n customize_compiler(compiler)\n for suffix in ['.c', '.cxx']:\n try:\n with _gentmpfile(suffix) as ntf:\n simple_c = \"int main(void) { return 0; }\"\n ntf.write(simple_c)\n ntf.flush()\n ntf.close()\n # *output_dir* is set to avoid the compiler putting temp files\n # in the current directory.\n compiler.compile([ntf.name], output_dir=gettempdir())\n except Exception: # likely CompileError or file system issue\n return False\n return True\n\n# boolean on whether the externally provided compiler is present and\n# functioning correctly\n_external_compiler_ok = _check_external_compiler()\n\n\nclass _DummyExtension(object):\n libraries = []\n\n\nclass Toolchain(object):\n\n def __init__(self):\n if not _external_compiler_ok:\n self._raise_external_compiler_error()\n\n # Need to import it here since setuptools may monkeypatch it\n from distutils.dist import Distribution\n self._verbose = False\n self._compiler = new_compiler()\n customize_compiler(self._compiler)\n self._build_ext = build_ext(Distribution())\n self._build_ext.finalize_options()\n self._py_lib_dirs = self._build_ext.library_dirs\n self._py_include_dirs = self._build_ext.include_dirs\n self._math_info = np_misc.get_info('npymath')\n\n @property\n def verbose(self):\n return self._verbose\n\n @verbose.setter\n def verbose(self, value):\n self._verbose = value\n # DEBUG will let Numpy spew many messages, so stick to INFO\n # to print commands executed by distutils\n log.set_threshold(log.INFO if value else log.WARN)\n\n def _raise_external_compiler_error(self):\n basemsg = (\"Attempted to compile AOT function without the \"\n \"compiler used by `numpy.distutils` present.\")\n conda_msg = \"If using conda try:\\n\\n#> conda install %s\"\n plt = sys.platform\n if plt.startswith('linux'):\n if sys.maxsize <= 2 ** 32:\n compilers = ['gcc_linux-32', 'gxx_linux-32']\n else:\n compilers = ['gcc_linux-64', 'gxx_linux-64']\n msg = \"%s %s\" % (basemsg, conda_msg % ' '.join(compilers))\n elif plt.startswith('darwin'):\n compilers = ['clang_osx-64', 'clangxx_osx-64']\n 
msg = \"%s %s\" % (basemsg, conda_msg % ' '.join(compilers))\n elif plt.startswith('win32'):\n winmsg = \"Cannot find suitable msvc.\"\n msg = \"%s %s\" % (basemsg, winmsg)\n else:\n msg = \"Unknown platform %s\" % plt\n raise RuntimeError(msg)\n\n def compile_objects(self, sources, output_dir,\n include_dirs=(), depends=(), macros=(),\n extra_cflags=None):\n \"\"\"\n Compile the given source files into a separate object file each,\n all beneath the *output_dir*. A list of paths to object files\n is returned.\n\n *macros* has the same format as in distutils: a list of 1- or 2-tuples.\n If a 1-tuple (name,), the given name is considered undefined by\n the C preprocessor.\n If a 2-tuple (name, value), the given name is expanded into the\n given value by the C preprocessor.\n \"\"\"\n objects = self._compiler.compile(sources,\n output_dir=output_dir,\n include_dirs=include_dirs,\n depends=depends,\n macros=macros or [],\n extra_preargs=extra_cflags)\n return objects\n\n def link_shared(self, output, objects, libraries=(),\n library_dirs=(), export_symbols=(),\n extra_ldflags=None):\n \"\"\"\n Create a shared library *output* linking the given *objects*\n and *libraries* (all strings).\n \"\"\"\n output_dir, output_filename = os.path.split(output)\n self._compiler.link(CCompiler.SHARED_OBJECT, objects,\n output_filename, output_dir,\n libraries, library_dirs,\n export_symbols=export_symbols,\n extra_preargs=extra_ldflags)\n\n def get_python_libraries(self):\n \"\"\"\n Get the library arguments necessary to link with Python.\n \"\"\"\n libs = self._build_ext.get_libraries(_DummyExtension())\n if sys.platform == 'win32':\n # Under Windows, need to link explicitly against the CRT,\n # as the MSVC compiler would implicitly do.\n # (XXX msvcrtd in pydebug mode?)\n libs = libs + ['msvcrt']\n return libs + self._math_info['libraries']\n\n def get_python_library_dirs(self):\n \"\"\"\n Get the library directories necessary to link with Python.\n \"\"\"\n return list(self._py_lib_dirs) + self._math_info['library_dirs']\n\n def get_python_include_dirs(self):\n \"\"\"\n Get the include directories necessary to compile against the Python\n and Numpy C APIs.\n \"\"\"\n return list(self._py_include_dirs) + self._math_info['include_dirs']\n\n def get_ext_filename(self, ext_name):\n \"\"\"\n Given a C extension's module name, return its intended filename.\n \"\"\"\n return self._build_ext.get_ext_filename(ext_name)\n\n\n#\n# Patch Numpy's exec_command() to avoid random crashes on Windows in test_pycc\n# see https://github.com/numpy/numpy/pull/7614\n# and https://github.com/numpy/numpy/pull/7862\n#\n\ndef _patch_exec_command():\n # Patch the internal worker _exec_command()\n import numpy.distutils.exec_command as mod\n orig_exec_command = mod._exec_command\n mod._exec_command = _exec_command\n\n\ndef _exec_command(command, use_shell=None, use_tee=None, **env):\n \"\"\"\n Internal workhorse for exec_command().\n Code from https://github.com/numpy/numpy/pull/7862\n \"\"\"\n if use_shell is None:\n use_shell = os.name == 'posix'\n if use_tee is None:\n use_tee = os.name == 'posix'\n\n executable = None\n\n if os.name == 'posix' and use_shell:\n # On POSIX, subprocess always uses /bin/sh, override\n sh = os.environ.get('SHELL', '/bin/sh')\n if _is_sequence(command):\n command = [sh, '-c', ' '.join(command)]\n else:\n command = [sh, '-c', command]\n use_shell = False\n\n elif os.name == 'nt' and _is_sequence(command):\n # On Windows, join the string for CreateProcess() ourselves as\n # subprocess does it a bit 
differently\n command = ' '.join(_quote_arg(arg) for arg in command)\n\n # Inherit environment by default\n env = env or None\n try:\n proc = subprocess.Popen(command, shell=use_shell, env=env,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n universal_newlines=True)\n except OSError:\n # Return 127, as os.spawn*() and /bin/sh do\n return '', 127\n text, err = proc.communicate()\n # Only append stderr if the command failed, as otherwise\n # the output may become garbled for parsing\n if proc.returncode:\n if text:\n text += \"\\n\"\n text += err\n # Another historical oddity\n if text[-1:] == '\\n':\n text = text[:-1]\n if use_tee:\n print(text)\n return proc.returncode, text\n\n\ndef _quote_arg(arg):\n \"\"\"\n Quote the argument for safe use in a shell command line.\n \"\"\"\n # If there is a quote in the string, assume relevants parts of the\n # string are already quoted (e.g. '-I\"C:\\\\Program Files\\\\...\"')\n if '\"' not in arg and ' ' in arg:\n return '\"%s\"' % arg\n return arg\n\n\ndef _is_sequence(arg):\n if isinstance(arg, (str, bytes)):\n return False\n try:\n len(arg)\n return True\n except Exception:\n return False\n", "path": "numba/pycc/platform.py"}], "after_files": [{"content": "from distutils.ccompiler import CCompiler, new_compiler\nfrom distutils.command.build_ext import build_ext\nfrom distutils.sysconfig import customize_compiler\nfrom distutils import log\n\nimport numpy.distutils.misc_util as np_misc\n\nimport functools\nimport os\nimport subprocess\nimport sys\nfrom tempfile import mkdtemp\nfrom contextlib import contextmanager\nfrom pathlib import Path\n\n_configs = {\n # DLL suffix, Python C extension suffix\n 'win': ('.dll', '.pyd'),\n 'default': ('.so', '.so'),\n}\n\n\ndef get_configs(arg):\n return _configs.get(sys.platform[:3], _configs['default'])[arg]\n\n\nfind_shared_ending = functools.partial(get_configs, 0)\nfind_pyext_ending = functools.partial(get_configs, 1)\n\n@contextmanager\ndef _gentmpfile(suffix):\n # windows locks the tempfile so use a tempdir + file, see\n # https://github.com/numba/numba/issues/3304\n try:\n tmpdir = mkdtemp()\n ntf = open(os.path.join(tmpdir, \"temp%s\" % suffix), 'wt')\n yield ntf\n finally:\n try:\n ntf.close()\n os.remove(ntf)\n except:\n pass\n else:\n os.rmdir(tmpdir)\n\ndef _check_external_compiler():\n # see if the external compiler bound in numpy.distutil is present\n # and working\n compiler = new_compiler()\n customize_compiler(compiler)\n for suffix in ['.c', '.cxx']:\n try:\n with _gentmpfile(suffix) as ntf:\n simple_c = \"int main(void) { return 0; }\"\n ntf.write(simple_c)\n ntf.flush()\n ntf.close()\n # *output_dir* is set to avoid the compiler putting temp files\n # in the current directory.\n compiler.compile([ntf.name], output_dir=Path(ntf.name).anchor)\n except Exception: # likely CompileError or file system issue\n return False\n return True\n\n# boolean on whether the externally provided compiler is present and\n# functioning correctly\n_external_compiler_ok = _check_external_compiler()\n\n\nclass _DummyExtension(object):\n libraries = []\n\n\nclass Toolchain(object):\n\n def __init__(self):\n if not _external_compiler_ok:\n self._raise_external_compiler_error()\n\n # Need to import it here since setuptools may monkeypatch it\n from distutils.dist import Distribution\n self._verbose = False\n self._compiler = new_compiler()\n customize_compiler(self._compiler)\n self._build_ext = build_ext(Distribution())\n self._build_ext.finalize_options()\n self._py_lib_dirs = self._build_ext.library_dirs\n 
self._py_include_dirs = self._build_ext.include_dirs\n self._math_info = np_misc.get_info('npymath')\n\n @property\n def verbose(self):\n return self._verbose\n\n @verbose.setter\n def verbose(self, value):\n self._verbose = value\n # DEBUG will let Numpy spew many messages, so stick to INFO\n # to print commands executed by distutils\n log.set_threshold(log.INFO if value else log.WARN)\n\n def _raise_external_compiler_error(self):\n basemsg = (\"Attempted to compile AOT function without the \"\n \"compiler used by `numpy.distutils` present.\")\n conda_msg = \"If using conda try:\\n\\n#> conda install %s\"\n plt = sys.platform\n if plt.startswith('linux'):\n if sys.maxsize <= 2 ** 32:\n compilers = ['gcc_linux-32', 'gxx_linux-32']\n else:\n compilers = ['gcc_linux-64', 'gxx_linux-64']\n msg = \"%s %s\" % (basemsg, conda_msg % ' '.join(compilers))\n elif plt.startswith('darwin'):\n compilers = ['clang_osx-64', 'clangxx_osx-64']\n msg = \"%s %s\" % (basemsg, conda_msg % ' '.join(compilers))\n elif plt.startswith('win32'):\n winmsg = \"Cannot find suitable msvc.\"\n msg = \"%s %s\" % (basemsg, winmsg)\n else:\n msg = \"Unknown platform %s\" % plt\n raise RuntimeError(msg)\n\n def compile_objects(self, sources, output_dir,\n include_dirs=(), depends=(), macros=(),\n extra_cflags=None):\n \"\"\"\n Compile the given source files into a separate object file each,\n all beneath the *output_dir*. A list of paths to object files\n is returned.\n\n *macros* has the same format as in distutils: a list of 1- or 2-tuples.\n If a 1-tuple (name,), the given name is considered undefined by\n the C preprocessor.\n If a 2-tuple (name, value), the given name is expanded into the\n given value by the C preprocessor.\n \"\"\"\n objects = self._compiler.compile(sources,\n output_dir=output_dir,\n include_dirs=include_dirs,\n depends=depends,\n macros=macros or [],\n extra_preargs=extra_cflags)\n return objects\n\n def link_shared(self, output, objects, libraries=(),\n library_dirs=(), export_symbols=(),\n extra_ldflags=None):\n \"\"\"\n Create a shared library *output* linking the given *objects*\n and *libraries* (all strings).\n \"\"\"\n output_dir, output_filename = os.path.split(output)\n self._compiler.link(CCompiler.SHARED_OBJECT, objects,\n output_filename, output_dir,\n libraries, library_dirs,\n export_symbols=export_symbols,\n extra_preargs=extra_ldflags)\n\n def get_python_libraries(self):\n \"\"\"\n Get the library arguments necessary to link with Python.\n \"\"\"\n libs = self._build_ext.get_libraries(_DummyExtension())\n if sys.platform == 'win32':\n # Under Windows, need to link explicitly against the CRT,\n # as the MSVC compiler would implicitly do.\n # (XXX msvcrtd in pydebug mode?)\n libs = libs + ['msvcrt']\n return libs + self._math_info['libraries']\n\n def get_python_library_dirs(self):\n \"\"\"\n Get the library directories necessary to link with Python.\n \"\"\"\n return list(self._py_lib_dirs) + self._math_info['library_dirs']\n\n def get_python_include_dirs(self):\n \"\"\"\n Get the include directories necessary to compile against the Python\n and Numpy C APIs.\n \"\"\"\n return list(self._py_include_dirs) + self._math_info['include_dirs']\n\n def get_ext_filename(self, ext_name):\n \"\"\"\n Given a C extension's module name, return its intended filename.\n \"\"\"\n return self._build_ext.get_ext_filename(ext_name)\n\n\n#\n# Patch Numpy's exec_command() to avoid random crashes on Windows in test_pycc\n# see https://github.com/numpy/numpy/pull/7614\n# and 
https://github.com/numpy/numpy/pull/7862\n#\n\ndef _patch_exec_command():\n # Patch the internal worker _exec_command()\n import numpy.distutils.exec_command as mod\n orig_exec_command = mod._exec_command\n mod._exec_command = _exec_command\n\n\ndef _exec_command(command, use_shell=None, use_tee=None, **env):\n \"\"\"\n Internal workhorse for exec_command().\n Code from https://github.com/numpy/numpy/pull/7862\n \"\"\"\n if use_shell is None:\n use_shell = os.name == 'posix'\n if use_tee is None:\n use_tee = os.name == 'posix'\n\n executable = None\n\n if os.name == 'posix' and use_shell:\n # On POSIX, subprocess always uses /bin/sh, override\n sh = os.environ.get('SHELL', '/bin/sh')\n if _is_sequence(command):\n command = [sh, '-c', ' '.join(command)]\n else:\n command = [sh, '-c', command]\n use_shell = False\n\n elif os.name == 'nt' and _is_sequence(command):\n # On Windows, join the string for CreateProcess() ourselves as\n # subprocess does it a bit differently\n command = ' '.join(_quote_arg(arg) for arg in command)\n\n # Inherit environment by default\n env = env or None\n try:\n proc = subprocess.Popen(command, shell=use_shell, env=env,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n universal_newlines=True)\n except OSError:\n # Return 127, as os.spawn*() and /bin/sh do\n return '', 127\n text, err = proc.communicate()\n # Only append stderr if the command failed, as otherwise\n # the output may become garbled for parsing\n if proc.returncode:\n if text:\n text += \"\\n\"\n text += err\n # Another historical oddity\n if text[-1:] == '\\n':\n text = text[:-1]\n if use_tee:\n print(text)\n return proc.returncode, text\n\n\ndef _quote_arg(arg):\n \"\"\"\n Quote the argument for safe use in a shell command line.\n \"\"\"\n # If there is a quote in the string, assume relevants parts of the\n # string are already quoted (e.g. '-I\"C:\\\\Program Files\\\\...\"')\n if '\"' not in arg and ' ' in arg:\n return '\"%s\"' % arg\n return arg\n\n\ndef _is_sequence(arg):\n if isinstance(arg, (str, bytes)):\n return False\n try:\n len(arg)\n return True\n except Exception:\n return False\n", "path": "numba/pycc/platform.py"}]}
| 3,796 | 204 |
gh_patches_debug_7347
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-10645
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
🎛️ Migrate header search Stimulus Controller
> ℹ️ **Part of the [Stimulus 🎛️ RFC 78](https://github.com/wagtail/rfcs/pull/78)**
### Is your proposal related to a problem?
We have a core.js implementation of JavaScript code that, when a matching search input receives changes, will trigger an async request to the relevant search results listing. Once the endpoint returns with HTML, it will be patched into the results container HTML element.
### Describe the solution you'd like
* Create a stimulus controller `w-search` that will replace the existing ad-hoc JS implementation
* The behaviour should be exactly the same as current state but using Stimulus data attributes for the behaviour & classes declaration (note: likely we will drop the `autofocus` and may not re-introduce the `slide` jQuery animation)
* Controller should be written in TypeScript
* Ensure that the existing unit tests are created to reflect this new behaviour
* We will need to document an upgrade consideration that the previous `window.headerSearch` approach will not work in a future release.
* We may want to introduce a console warning once all the Wagtail usage of `window.headerSearch` has been removed
* Nice to have - a Storybook story for this component
### Additional context
* Implementation https://github.com/wagtail/wagtail/blob/main/client/src/entrypoints/admin/core.js#L251-L306
* There is very similar (almost cut & paste) logic used in the chooser modals for searching here https://github.com/wagtail/wagtail/blob/main/client/src/includes/chooserModal.js#L109-L176 (the Stimulus controller will likely replace this, but that may be out of scope for this issue)
### Potential approach
#### Support `input` only usage (with using `window.headerSearch` config)
```JS
window.headerSearch = {
url: "{% url 'wagtailimages:listing_results' %}",
targetOutput: "#image-results"
}
```
```html
<div class="w-field__input" data-field-input="">
<svg class="icon icon-search w-field__icon" aria-hidden="true">
<use href="#icon-search"></use>
</svg>
<input
type="text"
name="q"
placeholder="Search images"
data-controller="w-search"
data-action="change->w-search#search cut->w-search#search keyup->w-search#search paste->w-search#search"
id="id_q"
/>
</div>
```
#### Support `input` only usage
```html
<div class="w-field__input" data-field-input="">
<svg class="icon icon-search w-field__icon" aria-hidden="true">
<use href="#icon-search"></use>
</svg>
<input
type="text"
name="q"
placeholder="Search images"
data-controller="w-search"
data-action="change->w-search#search cut->w-search#search keyup->w-search#search paste->w-search#search"
id="id_q"
data-w-search-results-value="#image-results"
data-w-search-url-value="/admin/images/results/"
/>
</div>
```
#### Support controlled form with search input as a target
```html
<form
class="col search-form"
action="/admin/images/"
method="get"
novalidate=""
role="search"
data-controller="w-search"
data-w-search-url-value="/admin/images/results/"
>
<div class="w-field__wrapper w-mb-0" data-field-wrapper="">
<label class="w-field__label w-sr-only" for="id_q" id="id_q-label">Search term</label>
<div class="w-field w-field--char_field w-field--text_input">
<div class="w-field__input" data-field-input="">
<svg class="icon icon-search w-field__icon" aria-hidden="true"><use href="#icon-search"></use></svg>
<input
type="text"
name="q"
placeholder="Search images"
data-w-search-target="input"
data-action="change->w-search#search cut->w-search#search keyup->w-search#search paste->w-search#search"
id="id_q"
/>
</div>
</div>
</div>
<div class="visuallyhidden"><input disabled="" type="submit" aria-hidden="true" /></div>
</form>
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/admin/forms/search.py`
Content:
```
1 from django import forms
2 from django.utils.translation import gettext as _
3 from django.utils.translation import gettext_lazy
4
5
6 class SearchForm(forms.Form):
7 def __init__(self, *args, **kwargs):
8 placeholder = kwargs.pop("placeholder", _("Search"))
9 super().__init__(*args, **kwargs)
10 self.fields["q"].widget.attrs = {"placeholder": placeholder}
11
12 q = forms.CharField(
13 label=gettext_lazy("Search term"),
14 widget=forms.TextInput(),
15 required=False,
16 )
17
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wagtail/admin/forms/search.py b/wagtail/admin/forms/search.py
--- a/wagtail/admin/forms/search.py
+++ b/wagtail/admin/forms/search.py
@@ -7,7 +7,10 @@
def __init__(self, *args, **kwargs):
placeholder = kwargs.pop("placeholder", _("Search"))
super().__init__(*args, **kwargs)
- self.fields["q"].widget.attrs = {"placeholder": placeholder}
+ self.fields["q"].widget.attrs = {
+ "placeholder": placeholder,
+ "data-w-swap-target": "input",
+ }
q = forms.CharField(
label=gettext_lazy("Search term"),
|
{"golden_diff": "diff --git a/wagtail/admin/forms/search.py b/wagtail/admin/forms/search.py\n--- a/wagtail/admin/forms/search.py\n+++ b/wagtail/admin/forms/search.py\n@@ -7,7 +7,10 @@\n def __init__(self, *args, **kwargs):\n placeholder = kwargs.pop(\"placeholder\", _(\"Search\"))\n super().__init__(*args, **kwargs)\n- self.fields[\"q\"].widget.attrs = {\"placeholder\": placeholder}\n+ self.fields[\"q\"].widget.attrs = {\n+ \"placeholder\": placeholder,\n+ \"data-w-swap-target\": \"input\",\n+ }\n \n q = forms.CharField(\n label=gettext_lazy(\"Search term\"),\n", "issue": "\ud83c\udf9b\ufe0f Migrate header search Stimulus Controller\n> \u2139\ufe0f **Part of the [Stimulus \ud83c\udf9b\ufe0f RFC 78](https://github.com/wagtail/rfcs/pull/78)**\r\n\r\n### Is your proposal related to a problem?\r\n\r\nWe have a core.js implementations of JavaScript code that, when a matching search input receives changes, will trigger an async request to the relevant search results listing. Once the endpoint returns with HTML, it will be patched into the results container HTML element.\r\n\r\n### Describe the solution you'd like\r\n\r\n* Create a stimulus controller `w-search` that will replace the existing ad-hoc JS implementation\r\n* The behaviour should be exactly the same as current state but using Stimulus data attributes for the behaviour & classes declaration (note: likely we will drop the `autofocus` and may not re-introduce the `slide` jQuery animation)\r\n* Controller should be written in TypeScript\r\n* Ensure that the existing unit tests are created to reflect this new behaviour\r\n* We will need to document an upgrade consideration that the previous `window.headerSearch` approach will not work in a future release.\r\n* We may want to introduce a console warning once all the Wagtail usage of `window.headerSearch` has been removed\r\n* Nice to have - a Storybook story for this component\r\n\r\n### Additional context\r\n\r\n* Implementation https://github.com/wagtail/wagtail/blob/main/client/src/entrypoints/admin/core.js#L251-L306\r\n* There is a very similar (almost cut & paste) of logic that is used in the chooser modals for searching here https://github.com/wagtail/wagtail/blob/main/client/src/includes/chooserModal.js#L109-L176 (the Stimulus will likely replace this but may be out of scope for this issue\r\n\r\n### Potential approach\r\n\r\n#### Support `input` only usage (with using `window.headerSearch` config)\r\n\r\n```JS\r\nwindow.headerSearch = {\r\n url: \"{% url 'wagtailimages:listing_results' %}\",\r\n targetOutput: \"#image-results\"\r\n}\r\n```\r\n\r\n```html\r\n<div class=\"w-field__input\" data-field-input=\"\">\r\n <svg class=\"icon icon-search w-field__icon\" aria-hidden=\"true\">\r\n <use href=\"#icon-search\"></use>\r\n </svg>\r\n <input\r\n type=\"text\"\r\n name=\"q\"\r\n placeholder=\"Search images\"\r\n data-controller=\"w-search\"\r\n data-action=\"change->w-search#search cut->w-search#search keyup->w-search#search paste->w-search#search\"\r\n id=\"id_q\"\r\n />\r\n</div>\r\n```\r\n\r\n#### Support `input` only usage\r\n\r\n```html\r\n<div class=\"w-field__input\" data-field-input=\"\">\r\n <svg class=\"icon icon-search w-field__icon\" aria-hidden=\"true\">\r\n <use href=\"#icon-search\"></use>\r\n </svg>\r\n <input\r\n type=\"text\"\r\n name=\"q\"\r\n placeholder=\"Search images\"\r\n data-controller=\"w-search\"\r\n data-action=\"change->w-search#search cut->w-search#search keyup->w-search#search paste->w-search#search\"\r\n id=\"id_q\"\r\n 
data-w-search-results-value=\"#image-results\"\r\n data-w-search-url-value=\"/admin/images/results/\"\r\n />\r\n</div>\r\n```\r\n\r\n#### Support controlled form with search input as a target\r\n\r\n```html\r\n<form\r\n class=\"col search-form\"\r\n action=\"/admin/images/\"\r\n method=\"get\"\r\n novalidate=\"\"\r\n role=\"search\"\r\n data-controller=\"w-search\"\r\n data-w-search-url-value=\"/admin/images/results/\"\r\n>\r\n <div class=\"w-field__wrapper w-mb-0\" data-field-wrapper=\"\">\r\n <label class=\"w-field__label w-sr-only\" for=\"id_q\" id=\"id_q-label\">Search term</label>\r\n <div class=\"w-field w-field--char_field w-field--text_input\">\r\n <div class=\"w-field__input\" data-field-input=\"\">\r\n <svg class=\"icon icon-search w-field__icon\" aria-hidden=\"true\"><use href=\"#icon-search\"></use></svg>\r\n <input\r\n type=\"text\"\r\n name=\"q\"\r\n placeholder=\"Search images\"\r\n data-w-search-target=\"input\"\r\n data-action=\"change->w-search#search cut->w-search#search keyup->w-search#search paste->w-search#search\"\r\n id=\"id_q\"\r\n />\r\n </div>\r\n </div>\r\n </div>\r\n <div class=\"visuallyhidden\"><input disabled=\"\" type=\"submit\" aria-hidden=\"true\" /></div>\r\n</form>\r\n```\n", "before_files": [{"content": "from django import forms\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import gettext_lazy\n\n\nclass SearchForm(forms.Form):\n def __init__(self, *args, **kwargs):\n placeholder = kwargs.pop(\"placeholder\", _(\"Search\"))\n super().__init__(*args, **kwargs)\n self.fields[\"q\"].widget.attrs = {\"placeholder\": placeholder}\n\n q = forms.CharField(\n label=gettext_lazy(\"Search term\"),\n widget=forms.TextInput(),\n required=False,\n )\n", "path": "wagtail/admin/forms/search.py"}], "after_files": [{"content": "from django import forms\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import gettext_lazy\n\n\nclass SearchForm(forms.Form):\n def __init__(self, *args, **kwargs):\n placeholder = kwargs.pop(\"placeholder\", _(\"Search\"))\n super().__init__(*args, **kwargs)\n self.fields[\"q\"].widget.attrs = {\n \"placeholder\": placeholder,\n \"data-w-swap-target\": \"input\",\n }\n\n q = forms.CharField(\n label=gettext_lazy(\"Search term\"),\n widget=forms.TextInput(),\n required=False,\n )\n", "path": "wagtail/admin/forms/search.py"}]}
| 1,372 | 149 |
gh_patches_debug_23092
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-6430
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tor2web warning page still using outdated pre-SI-redesign resources
## Description
In the SI redesign, we overlooked the Tor2web page, which tries to load the icon from an outdated location and does not show an icon for the flash warning message.
## Steps to Reproduce
Visit https://demo-source.securedrop.org/tor2web-warning
## Expected Behavior

## Actual Behavior

"Tor Browser" link in tor2web warning is broken
## Description
The "Tor Browser" link in the tor2web warning is broken because it does not specify a protocol, so the browser treats it as a relative link.
## Steps to Reproduce
* Visit `/tor2web-warning` in the SI
* Hover over or click on the "Tor Browser" link, it should send you to a non-existent `/www.torproject.org/projects/torbrowser.html` on the SI's domain.
## Expected Behavior
* Link takes you to Tor Project website.
## Comments
Fix should be as simple as adding "https://" in front.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/source_app/info.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import flask
3 from flask import Blueprint, render_template, send_file, redirect, url_for, flash
4 from flask_babel import gettext
5 import werkzeug
6
7 from io import BytesIO # noqa
8
9 from encryption import EncryptionManager
10 from sdconfig import SDConfig
11 from source_app.utils import get_sourcev3_url
12
13
14 def make_blueprint(config: SDConfig) -> Blueprint:
15 view = Blueprint('info', __name__)
16
17 @view.route('/tor2web-warning')
18 def tor2web_warning() -> flask.Response:
19 flash(gettext("Your connection is not anonymous right now!"), "error")
20 return flask.Response(
21 render_template("tor2web-warning.html", source_url=get_sourcev3_url()),
22 403)
23
24 @view.route('/use-tor')
25 def recommend_tor_browser() -> str:
26 return render_template("use-tor-browser.html")
27
28 @view.route('/public-key')
29 def download_public_key() -> flask.Response:
30 journalist_pubkey = EncryptionManager.get_default().get_journalist_public_key()
31 data = BytesIO(journalist_pubkey.encode('utf-8'))
32 return send_file(data,
33 mimetype="application/pgp-keys",
34 attachment_filename=config.JOURNALIST_KEY + ".asc",
35 as_attachment=True)
36
37 @view.route('/journalist-key')
38 def download_journalist_key() -> werkzeug.wrappers.Response:
39 return redirect(url_for('.download_public_key'), code=301)
40
41 @view.route('/why-public-key')
42 def why_download_public_key() -> str:
43 return render_template("why-public-key.html")
44
45 return view
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/securedrop/source_app/info.py b/securedrop/source_app/info.py
--- a/securedrop/source_app/info.py
+++ b/securedrop/source_app/info.py
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
import flask
-from flask import Blueprint, render_template, send_file, redirect, url_for, flash
+from flask import Blueprint, render_template, send_file, redirect, url_for
from flask_babel import gettext
import werkzeug
@@ -8,7 +8,7 @@
from encryption import EncryptionManager
from sdconfig import SDConfig
-from source_app.utils import get_sourcev3_url
+from source_app.utils import get_sourcev3_url, flash_msg
def make_blueprint(config: SDConfig) -> Blueprint:
@@ -16,7 +16,7 @@
@view.route('/tor2web-warning')
def tor2web_warning() -> flask.Response:
- flash(gettext("Your connection is not anonymous right now!"), "error")
+ flash_msg("error", None, gettext("Your connection is not anonymous right now!"))
return flask.Response(
render_template("tor2web-warning.html", source_url=get_sourcev3_url()),
403)
|
{"golden_diff": "diff --git a/securedrop/source_app/info.py b/securedrop/source_app/info.py\n--- a/securedrop/source_app/info.py\n+++ b/securedrop/source_app/info.py\n@@ -1,6 +1,6 @@\n # -*- coding: utf-8 -*-\n import flask\n-from flask import Blueprint, render_template, send_file, redirect, url_for, flash\n+from flask import Blueprint, render_template, send_file, redirect, url_for\n from flask_babel import gettext\n import werkzeug\n \n@@ -8,7 +8,7 @@\n \n from encryption import EncryptionManager\n from sdconfig import SDConfig\n-from source_app.utils import get_sourcev3_url\n+from source_app.utils import get_sourcev3_url, flash_msg\n \n \n def make_blueprint(config: SDConfig) -> Blueprint:\n@@ -16,7 +16,7 @@\n \n @view.route('/tor2web-warning')\n def tor2web_warning() -> flask.Response:\n- flash(gettext(\"Your connection is not anonymous right now!\"), \"error\")\n+ flash_msg(\"error\", None, gettext(\"Your connection is not anonymous right now!\"))\n return flask.Response(\n render_template(\"tor2web-warning.html\", source_url=get_sourcev3_url()),\n 403)\n", "issue": "Tor2web warning page still using outdated pre-SI-redesign resources\n## Description\r\n\r\nIn the SI redesign, we overlooked the Tor2web page which tries to render an old location for the icon and does not show an icon for the flash warning message.\r\n\r\n## Steps to Reproduce\r\n\r\nVisit https://demo-source.securedrop.org/tor2web-warning\r\n\r\n## Expected Behavior\r\n\r\n\r\n\r\n## Actual Behavior\r\n\r\n\r\n\n\"Tor Browser\" link in tor2web warning is broken\n## Description\r\n\r\nThe \"Tor Browser\" link in the tor2web warning is broken because it does not specify a protocol, so the browser treats it as a relative link.\r\n\r\n## Steps to Reproduce\r\n\r\n* Visit `/tor2web-warning` in the SI\r\n* Hover over or click on the \"Tor Browser\" link, it should send you to a non-existent `/www.torproject.org/projects/torbrowser.html` on the SI's domain.\r\n\r\n## Expected Behavior\r\n\r\n* Link takes you to Tor Project website.\r\n\r\n## Comments\r\n\r\nFix should be as simple as adding \"https://\" in front.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport flask\nfrom flask import Blueprint, render_template, send_file, redirect, url_for, flash\nfrom flask_babel import gettext\nimport werkzeug\n\nfrom io import BytesIO # noqa\n\nfrom encryption import EncryptionManager\nfrom sdconfig import SDConfig\nfrom source_app.utils import get_sourcev3_url\n\n\ndef make_blueprint(config: SDConfig) -> Blueprint:\n view = Blueprint('info', __name__)\n\n @view.route('/tor2web-warning')\n def tor2web_warning() -> flask.Response:\n flash(gettext(\"Your connection is not anonymous right now!\"), \"error\")\n return flask.Response(\n render_template(\"tor2web-warning.html\", source_url=get_sourcev3_url()),\n 403)\n\n @view.route('/use-tor')\n def recommend_tor_browser() -> str:\n return render_template(\"use-tor-browser.html\")\n\n @view.route('/public-key')\n def download_public_key() -> flask.Response:\n journalist_pubkey = EncryptionManager.get_default().get_journalist_public_key()\n data = BytesIO(journalist_pubkey.encode('utf-8'))\n return send_file(data,\n mimetype=\"application/pgp-keys\",\n attachment_filename=config.JOURNALIST_KEY + \".asc\",\n as_attachment=True)\n\n @view.route('/journalist-key')\n def download_journalist_key() -> werkzeug.wrappers.Response:\n return redirect(url_for('.download_public_key'), code=301)\n\n @view.route('/why-public-key')\n def why_download_public_key() -> str:\n return 
render_template(\"why-public-key.html\")\n\n return view\n", "path": "securedrop/source_app/info.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport flask\nfrom flask import Blueprint, render_template, send_file, redirect, url_for\nfrom flask_babel import gettext\nimport werkzeug\n\nfrom io import BytesIO # noqa\n\nfrom encryption import EncryptionManager\nfrom sdconfig import SDConfig\nfrom source_app.utils import get_sourcev3_url, flash_msg\n\n\ndef make_blueprint(config: SDConfig) -> Blueprint:\n view = Blueprint('info', __name__)\n\n @view.route('/tor2web-warning')\n def tor2web_warning() -> flask.Response:\n flash_msg(\"error\", None, gettext(\"Your connection is not anonymous right now!\"))\n return flask.Response(\n render_template(\"tor2web-warning.html\", source_url=get_sourcev3_url()),\n 403)\n\n @view.route('/use-tor')\n def recommend_tor_browser() -> str:\n return render_template(\"use-tor-browser.html\")\n\n @view.route('/public-key')\n def download_public_key() -> flask.Response:\n journalist_pubkey = EncryptionManager.get_default().get_journalist_public_key()\n data = BytesIO(journalist_pubkey.encode('utf-8'))\n return send_file(data,\n mimetype=\"application/pgp-keys\",\n attachment_filename=config.JOURNALIST_KEY + \".asc\",\n as_attachment=True)\n\n @view.route('/journalist-key')\n def download_journalist_key() -> werkzeug.wrappers.Response:\n return redirect(url_for('.download_public_key'), code=301)\n\n @view.route('/why-public-key')\n def why_download_public_key() -> str:\n return render_template(\"why-public-key.html\")\n\n return view\n", "path": "securedrop/source_app/info.py"}]}
| 1,037 | 270 |
gh_patches_debug_61790
|
rasdani/github-patches
|
git_diff
|
electricitymaps__electricitymaps-contrib-3796
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CA-PE production parser down
## Description
This is an automatic error report generated for Canada Prince Edward Island (CA-PE).
Issues:
- No recent data found for `production` parser
## Suggestions
- Try running the parser locally using the command `poetry run test_parser CA-PE production`
- <a href="https://storage.googleapis.com/electricitymap-parser-logs/CA-PE.html">Explore the runtime logs</a>
You can see an overview of all parser issues [here](https://github.com/tmrowco/electricitymap-contrib/wiki/Parser-issues).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/CA_PE.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import json
4
5 # The arrow library is used to handle datetimes consistently with other parsers
6 import arrow
7
8 # The request library is used to fetch content through HTTP
9 import requests
10
11
12 timezone = 'Canada/Atlantic'
13
14
15 def _find_pei_key(pei_list, sought_key):
16 matching_item = [item for item in pei_list
17 if 'header' in item['data']
18 and item['data']['header'].startswith(sought_key)]
19
20 if not matching_item:
21 return None
22
23 return matching_item[0]['data']['actualValue']
24
25
26 def _get_pei_info(requests_obj):
27 url = 'https://wdf.princeedwardisland.ca/workflow'
28 request = {'featureName': 'WindEnergy', 'queryName': 'WindEnergy'}
29 headers = {'Content-Type': 'application/json'}
30 response = requests_obj.post(url, data=json.dumps(request), headers=headers)
31
32 raw_data = response.json().get('data', [])
33
34 datetime_item = [item['data']['text'] for item in raw_data
35 if 'text' in item['data']]
36 if not datetime_item:
37 # unable to get a timestamp, return empty
38 return None
39 datetime_text = datetime_item[0][len('Last updated '):]
40 data_timestamp = arrow.get(datetime_text, 'MMMM D, YYYY HH:mm A').replace(tzinfo='Canada/Atlantic')
41
42 # see https://ruk.ca/content/new-api-endpoint-pei-wind for more info
43 data = {
44 'pei_load': _find_pei_key(raw_data, 'Total On-Island Load'),
45 'pei_wind_gen': _find_pei_key(raw_data, 'Total On-Island Wind Generation'),
46 'pei_fossil_gen': _find_pei_key(raw_data, 'Total On-Island Fossil Fuel Generation'),
47 'pei_wind_used': _find_pei_key(raw_data, 'Wind Power Used On Island'),
48 'pei_wind_exported': _find_pei_key(raw_data, 'Wind Power Exported Off Island'),
49 'datetime': data_timestamp.datetime
50 }
51
52 # the following keys are always required downstream, if we don't have them, no sense returning
53 if data['pei_wind_gen'] is None or data['pei_fossil_gen'] is None:
54 return None
55
56 return data
57
58
59 def fetch_production(zone_key='CA-PE', session=None, target_datetime=None, logger=None) -> dict:
60 """Requests the last known production mix (in MW) of a given country."""
61 if target_datetime:
62 raise NotImplementedError('This parser is not yet able to parse past dates')
63
64 requests_obj = session or requests.session()
65 pei_info = _get_pei_info(requests_obj)
66
67 if pei_info is None:
68 return None
69
70 data = {
71 'datetime': pei_info['datetime'],
72 'zoneKey': zone_key,
73 'production': {
74 'wind': pei_info['pei_wind_gen'],
75
76 # These are oil-fueled ("heavy fuel oil" and "diesel") generators
77 # used as peakers and back-up
78 'oil': pei_info['pei_fossil_gen'],
79
80 # specify some sources that definitely aren't present on PEI as zero,
81 # this allows the analyzer to better estimate CO2eq
82 'coal': 0,
83 'hydro': 0,
84 'nuclear': 0,
85 'geothermal': 0
86 },
87 'storage': {},
88 'source': 'princeedwardisland.ca'
89 }
90
91 return data
92
93
94 def fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=None) -> dict:
95 """Requests the last known power exchange (in MW) between two regions."""
96 if target_datetime:
97 raise NotImplementedError('This parser is not yet able to parse past dates')
98
99 sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))
100
101 if sorted_zone_keys != 'CA-NB->CA-PE':
102 raise NotImplementedError('This exchange pair is not implemented')
103
104 requests_obj = session or requests.session()
105 pei_info = _get_pei_info(requests_obj)
106
107 if pei_info is None or pei_info['pei_load'] is None:
108 return None
109
110 # PEI imports most of its electricity. Everything not generated on island
111 # is imported from New Brunswick.
112 # In case of wind, some is paper-"exported" even if there is a net import,
113 # and 'pei_wind_used'/'data5' indicates their accounting of part of the load
114 # served by non-exported wind.
115 # https://www.princeedwardisland.ca/en/feature/pei-wind-energy says:
116 # "Wind Power Exported Off-Island is that portion of wind generation that is supplying
117 # contracts elsewhere. The actual electricity from this portion of wind generation
118 # may stay within PEI but is satisfying a contractual arrangement in another jurisdiction."
119 # We are ignoring these paper exports, as they are an accounting/legal detail
120 # that doesn't actually reflect what happens on the wires.
121 # (New Brunswick being the only interconnection with PEI, "exporting" wind power to NB
122 # then "importing" a balance of NB electricity likely doesn't actually happen.)
123 imported_from_nb = (pei_info['pei_load'] - pei_info['pei_fossil_gen'] - pei_info['pei_wind_gen'])
124
125 # In expected result, "net" represents an export.
126 # We have sorted_zone_keys 'CA-NB->CA-PE', so it's export *from* NB,
127 # and import *to* PEI.
128 data = {
129 'datetime': pei_info['datetime'],
130 'sortedZoneKeys': sorted_zone_keys,
131 'netFlow': imported_from_nb,
132 'source': 'princeedwardisland.ca'
133 }
134
135 return data
136
137
138 if __name__ == '__main__':
139 """Main method, never used by the Electricity Map backend, but handy for testing."""
140
141 print('fetch_production() ->')
142 print(fetch_production())
143
144 print('fetch_exchange("CA-PE", "CA-NB") ->')
145 print(fetch_exchange("CA-PE", "CA-NB"))
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsers/CA_PE.py b/parsers/CA_PE.py
--- a/parsers/CA_PE.py
+++ b/parsers/CA_PE.py
@@ -24,7 +24,7 @@
def _get_pei_info(requests_obj):
- url = 'https://wdf.princeedwardisland.ca/workflow'
+ url = 'https://wdf.princeedwardisland.ca/api/workflow'
request = {'featureName': 'WindEnergy', 'queryName': 'WindEnergy'}
headers = {'Content-Type': 'application/json'}
response = requests_obj.post(url, data=json.dumps(request), headers=headers)
|
{"golden_diff": "diff --git a/parsers/CA_PE.py b/parsers/CA_PE.py\n--- a/parsers/CA_PE.py\n+++ b/parsers/CA_PE.py\n@@ -24,7 +24,7 @@\n \n \n def _get_pei_info(requests_obj):\n- url = 'https://wdf.princeedwardisland.ca/workflow'\n+ url = 'https://wdf.princeedwardisland.ca/api/workflow'\n request = {'featureName': 'WindEnergy', 'queryName': 'WindEnergy'}\n headers = {'Content-Type': 'application/json'}\n response = requests_obj.post(url, data=json.dumps(request), headers=headers)\n", "issue": "CA-PE production parser down\n## Description\n\nThis is an automatic error report generated for Canada Prince Edward Island (CA-PE).\n\nIssues:\n- No recent data found for `production` parser\n\n## Suggestions\n- Try running the parser locally using the command `poetry run test_parser CA-PE production`\n- <a href=\"https://storage.googleapis.com/electricitymap-parser-logs/CA-PE.html\">Explore the runtime logs</a>\n\nYou can see an overview of all parser issues [here](https://github.com/tmrowco/electricitymap-contrib/wiki/Parser-issues).\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport json\n\n# The arrow library is used to handle datetimes consistently with other parsers\nimport arrow\n\n# The request library is used to fetch content through HTTP\nimport requests\n\n\ntimezone = 'Canada/Atlantic'\n\n\ndef _find_pei_key(pei_list, sought_key):\n matching_item = [item for item in pei_list\n if 'header' in item['data']\n and item['data']['header'].startswith(sought_key)]\n\n if not matching_item:\n return None\n\n return matching_item[0]['data']['actualValue']\n\n\ndef _get_pei_info(requests_obj):\n url = 'https://wdf.princeedwardisland.ca/workflow'\n request = {'featureName': 'WindEnergy', 'queryName': 'WindEnergy'}\n headers = {'Content-Type': 'application/json'}\n response = requests_obj.post(url, data=json.dumps(request), headers=headers)\n\n raw_data = response.json().get('data', [])\n\n datetime_item = [item['data']['text'] for item in raw_data\n if 'text' in item['data']]\n if not datetime_item:\n # unable to get a timestamp, return empty\n return None\n datetime_text = datetime_item[0][len('Last updated '):]\n data_timestamp = arrow.get(datetime_text, 'MMMM D, YYYY HH:mm A').replace(tzinfo='Canada/Atlantic')\n\n # see https://ruk.ca/content/new-api-endpoint-pei-wind for more info\n data = {\n 'pei_load': _find_pei_key(raw_data, 'Total On-Island Load'),\n 'pei_wind_gen': _find_pei_key(raw_data, 'Total On-Island Wind Generation'),\n 'pei_fossil_gen': _find_pei_key(raw_data, 'Total On-Island Fossil Fuel Generation'),\n 'pei_wind_used': _find_pei_key(raw_data, 'Wind Power Used On Island'),\n 'pei_wind_exported': _find_pei_key(raw_data, 'Wind Power Exported Off Island'),\n 'datetime': data_timestamp.datetime\n }\n\n # the following keys are always required downstream, if we don't have them, no sense returning\n if data['pei_wind_gen'] is None or data['pei_fossil_gen'] is None:\n return None\n\n return data\n\n\ndef fetch_production(zone_key='CA-PE', session=None, target_datetime=None, logger=None) -> dict:\n \"\"\"Requests the last known production mix (in MW) of a given country.\"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n requests_obj = session or requests.session()\n pei_info = _get_pei_info(requests_obj)\n\n if pei_info is None:\n return None\n\n data = {\n 'datetime': pei_info['datetime'],\n 'zoneKey': zone_key,\n 'production': {\n 'wind': pei_info['pei_wind_gen'],\n\n # These are oil-fueled (\"heavy fuel oil\" 
and \"diesel\") generators\n # used as peakers and back-up\n 'oil': pei_info['pei_fossil_gen'],\n\n # specify some sources that definitely aren't present on PEI as zero,\n # this allows the analyzer to better estimate CO2eq\n 'coal': 0,\n 'hydro': 0,\n 'nuclear': 0,\n 'geothermal': 0\n },\n 'storage': {},\n 'source': 'princeedwardisland.ca'\n }\n\n return data\n\n\ndef fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=None) -> dict:\n \"\"\"Requests the last known power exchange (in MW) between two regions.\"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))\n\n if sorted_zone_keys != 'CA-NB->CA-PE':\n raise NotImplementedError('This exchange pair is not implemented')\n\n requests_obj = session or requests.session()\n pei_info = _get_pei_info(requests_obj)\n\n if pei_info is None or pei_info['pei_load'] is None:\n return None\n\n # PEI imports most of its electricity. Everything not generated on island\n # is imported from New Brunswick.\n # In case of wind, some is paper-\"exported\" even if there is a net import,\n # and 'pei_wind_used'/'data5' indicates their accounting of part of the load\n # served by non-exported wind.\n # https://www.princeedwardisland.ca/en/feature/pei-wind-energy says:\n # \"Wind Power Exported Off-Island is that portion of wind generation that is supplying\n # contracts elsewhere. The actual electricity from this portion of wind generation\n # may stay within PEI but is satisfying a contractual arrangement in another jurisdiction.\"\n # We are ignoring these paper exports, as they are an accounting/legal detail\n # that doesn't actually reflect what happens on the wires.\n # (New Brunswick being the only interconnection with PEI, \"exporting\" wind power to NB\n # then \"importing\" a balance of NB electricity likely doesn't actually happen.)\n imported_from_nb = (pei_info['pei_load'] - pei_info['pei_fossil_gen'] - pei_info['pei_wind_gen'])\n\n # In expected result, \"net\" represents an export.\n # We have sorted_zone_keys 'CA-NB->CA-PE', so it's export *from* NB,\n # and import *to* PEI.\n data = {\n 'datetime': pei_info['datetime'],\n 'sortedZoneKeys': sorted_zone_keys,\n 'netFlow': imported_from_nb,\n 'source': 'princeedwardisland.ca'\n }\n\n return data\n\n\nif __name__ == '__main__':\n \"\"\"Main method, never used by the Electricity Map backend, but handy for testing.\"\"\"\n\n print('fetch_production() ->')\n print(fetch_production())\n\n print('fetch_exchange(\"CA-PE\", \"CA-NB\") ->')\n print(fetch_exchange(\"CA-PE\", \"CA-NB\"))\n", "path": "parsers/CA_PE.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport json\n\n# The arrow library is used to handle datetimes consistently with other parsers\nimport arrow\n\n# The request library is used to fetch content through HTTP\nimport requests\n\n\ntimezone = 'Canada/Atlantic'\n\n\ndef _find_pei_key(pei_list, sought_key):\n matching_item = [item for item in pei_list\n if 'header' in item['data']\n and item['data']['header'].startswith(sought_key)]\n\n if not matching_item:\n return None\n\n return matching_item[0]['data']['actualValue']\n\n\ndef _get_pei_info(requests_obj):\n url = 'https://wdf.princeedwardisland.ca/api/workflow'\n request = {'featureName': 'WindEnergy', 'queryName': 'WindEnergy'}\n headers = {'Content-Type': 'application/json'}\n response = requests_obj.post(url, data=json.dumps(request), headers=headers)\n\n raw_data = 
response.json().get('data', [])\n\n datetime_item = [item['data']['text'] for item in raw_data\n if 'text' in item['data']]\n if not datetime_item:\n # unable to get a timestamp, return empty\n return None\n datetime_text = datetime_item[0][len('Last updated '):]\n data_timestamp = arrow.get(datetime_text, 'MMMM D, YYYY HH:mm A').replace(tzinfo='Canada/Atlantic')\n\n # see https://ruk.ca/content/new-api-endpoint-pei-wind for more info\n data = {\n 'pei_load': _find_pei_key(raw_data, 'Total On-Island Load'),\n 'pei_wind_gen': _find_pei_key(raw_data, 'Total On-Island Wind Generation'),\n 'pei_fossil_gen': _find_pei_key(raw_data, 'Total On-Island Fossil Fuel Generation'),\n 'pei_wind_used': _find_pei_key(raw_data, 'Wind Power Used On Island'),\n 'pei_wind_exported': _find_pei_key(raw_data, 'Wind Power Exported Off Island'),\n 'datetime': data_timestamp.datetime\n }\n\n # the following keys are always required downstream, if we don't have them, no sense returning\n if data['pei_wind_gen'] is None or data['pei_fossil_gen'] is None:\n return None\n\n return data\n\n\ndef fetch_production(zone_key='CA-PE', session=None, target_datetime=None, logger=None) -> dict:\n \"\"\"Requests the last known production mix (in MW) of a given country.\"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n requests_obj = session or requests.session()\n pei_info = _get_pei_info(requests_obj)\n\n if pei_info is None:\n return None\n\n data = {\n 'datetime': pei_info['datetime'],\n 'zoneKey': zone_key,\n 'production': {\n 'wind': pei_info['pei_wind_gen'],\n\n # These are oil-fueled (\"heavy fuel oil\" and \"diesel\") generators\n # used as peakers and back-up\n 'oil': pei_info['pei_fossil_gen'],\n\n # specify some sources that definitely aren't present on PEI as zero,\n # this allows the analyzer to better estimate CO2eq\n 'coal': 0,\n 'hydro': 0,\n 'nuclear': 0,\n 'geothermal': 0\n },\n 'storage': {},\n 'source': 'princeedwardisland.ca'\n }\n\n return data\n\n\ndef fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=None) -> dict:\n \"\"\"Requests the last known power exchange (in MW) between two regions.\"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))\n\n if sorted_zone_keys != 'CA-NB->CA-PE':\n raise NotImplementedError('This exchange pair is not implemented')\n\n requests_obj = session or requests.session()\n pei_info = _get_pei_info(requests_obj)\n\n if pei_info is None or pei_info['pei_load'] is None:\n return None\n\n # PEI imports most of its electricity. Everything not generated on island\n # is imported from New Brunswick.\n # In case of wind, some is paper-\"exported\" even if there is a net import,\n # and 'pei_wind_used'/'data5' indicates their accounting of part of the load\n # served by non-exported wind.\n # https://www.princeedwardisland.ca/en/feature/pei-wind-energy says:\n # \"Wind Power Exported Off-Island is that portion of wind generation that is supplying\n # contracts elsewhere. 
The actual electricity from this portion of wind generation\n # may stay within PEI but is satisfying a contractual arrangement in another jurisdiction.\"\n # We are ignoring these paper exports, as they are an accounting/legal detail\n # that doesn't actually reflect what happens on the wires.\n # (New Brunswick being the only interconnection with PEI, \"exporting\" wind power to NB\n # then \"importing\" a balance of NB electricity likely doesn't actually happen.)\n imported_from_nb = (pei_info['pei_load'] - pei_info['pei_fossil_gen'] - pei_info['pei_wind_gen'])\n\n # In expected result, \"net\" represents an export.\n # We have sorted_zone_keys 'CA-NB->CA-PE', so it's export *from* NB,\n # and import *to* PEI.\n data = {\n 'datetime': pei_info['datetime'],\n 'sortedZoneKeys': sorted_zone_keys,\n 'netFlow': imported_from_nb,\n 'source': 'princeedwardisland.ca'\n }\n\n return data\n\n\nif __name__ == '__main__':\n \"\"\"Main method, never used by the Electricity Map backend, but handy for testing.\"\"\"\n\n print('fetch_production() ->')\n print(fetch_production())\n\n print('fetch_exchange(\"CA-PE\", \"CA-NB\") ->')\n print(fetch_exchange(\"CA-PE\", \"CA-NB\"))\n", "path": "parsers/CA_PE.py"}]}
| 2,082 | 144 |
gh_patches_debug_23623
|
rasdani/github-patches
|
git_diff
|
kedro-org__kedro-1760
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Document how to read and write a multi-sheet excel
## Introduction
>A high-level, short overview of the problem(s) you are designing a solution for.
We have the implementation, but it is not easy to point to a documented example:

## Background
> Provide the reader with the context surrounding the problem(s) you are trying to solve.
This is a common feature and something Kedro can already do.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kedro/extras/datasets/pandas/excel_dataset.py`
Content:
```
1 """``ExcelDataSet`` loads/saves data from/to a Excel file using an underlying
2 filesystem (e.g.: local, S3, GCS). It uses pandas to handle the Excel file.
3 """
4 import logging
5 from copy import deepcopy
6 from io import BytesIO
7 from pathlib import PurePosixPath
8 from typing import Any, Dict, Union
9
10 import fsspec
11 import pandas as pd
12
13 from kedro.io.core import (
14 PROTOCOL_DELIMITER,
15 AbstractVersionedDataSet,
16 DataSetError,
17 Version,
18 get_filepath_str,
19 get_protocol_and_path,
20 )
21
22 logger = logging.getLogger(__name__)
23
24
25 class ExcelDataSet(
26 AbstractVersionedDataSet[
27 Union[pd.DataFrame, Dict[str, pd.DataFrame]],
28 Union[pd.DataFrame, Dict[str, pd.DataFrame]],
29 ]
30 ):
31 """``ExcelDataSet`` loads/saves data from/to a Excel file using an underlying
32 filesystem (e.g.: local, S3, GCS). It uses pandas to handle the Excel file.
33
34 Example adding a catalog entry with the ``YAML API``:
35
36 .. code-block:: yaml
37
38 >>> rockets:
39 >>> type: pandas.ExcelDataSet
40 >>> filepath: gcs://your_bucket/rockets.xlsx
41 >>> fs_args:
42 >>> project: my-project
43 >>> credentials: my_gcp_credentials
44 >>> save_args:
45 >>> sheet_name: Sheet1
46 >>> load_args:
47 >>> sheet_name: Sheet1
48 >>>
49 >>> shuttles:
50 >>> type: pandas.ExcelDataSet
51 >>> filepath: data/01_raw/shuttles.xlsx
52
53 Example using Python API:
54 ::
55
56 >>> from kedro.extras.datasets.pandas import ExcelDataSet
57 >>> import pandas as pd
58 >>>
59 >>> data = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5],
60 >>> 'col3': [5, 6]})
61 >>>
62 >>> # data_set = ExcelDataSet(filepath="gcs://bucket/test.xlsx")
63 >>> data_set = ExcelDataSet(filepath="test.xlsx")
64 >>> data_set.save(data)
65 >>> reloaded = data_set.load()
66 >>> assert data.equals(reloaded)
67
68 """
69
70 DEFAULT_LOAD_ARGS = {"engine": "openpyxl"}
71 DEFAULT_SAVE_ARGS = {"index": False}
72
73 # pylint: disable=too-many-arguments
74 def __init__(
75 self,
76 filepath: str,
77 engine: str = "openpyxl",
78 load_args: Dict[str, Any] = None,
79 save_args: Dict[str, Any] = None,
80 version: Version = None,
81 credentials: Dict[str, Any] = None,
82 fs_args: Dict[str, Any] = None,
83 ) -> None:
84 """Creates a new instance of ``ExcelDataSet`` pointing to a concrete Excel file
85 on a specific filesystem.
86
87 Args:
88 filepath: Filepath in POSIX format to a Excel file prefixed with a protocol like
89 `s3://`. If prefix is not provided, `file` protocol (local filesystem) will be used.
90 The prefix should be any protocol supported by ``fsspec``.
91 Note: `http(s)` doesn't support versioning.
92 engine: The engine used to write to excel files. The default
93 engine is 'openpyxl'.
94 load_args: Pandas options for loading Excel files.
95 Here you can find all available arguments:
96 https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html
97 All defaults are preserved, but "engine", which is set to "openpyxl".
98 Supports multi-sheet Excel files (include `sheet_name = None` in `load_args`).
99 save_args: Pandas options for saving Excel files.
100 Here you can find all available arguments:
101 https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html
102 All defaults are preserved, but "index", which is set to False.
103 If you would like to specify options for the `ExcelWriter`,
104 you can include them under the "writer" key. Here you can
105 find all available arguments:
106 https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.ExcelWriter.html
107 version: If specified, should be an instance of
108 ``kedro.io.core.Version``. If its ``load`` attribute is
109 None, the latest version will be loaded. If its ``save``
110 attribute is None, save version will be autogenerated.
111 credentials: Credentials required to get access to the underlying filesystem.
112 E.g. for ``GCSFileSystem`` it should look like `{"token": None}`.
113 fs_args: Extra arguments to pass into underlying filesystem class constructor
114 (e.g. `{"project": "my-project"}` for ``GCSFileSystem``).
115
116 Raises:
117 DataSetError: If versioning is enabled while in append mode.
118 """
119 _fs_args = deepcopy(fs_args) or {}
120 _credentials = deepcopy(credentials) or {}
121
122 protocol, path = get_protocol_and_path(filepath, version)
123 if protocol == "file":
124 _fs_args.setdefault("auto_mkdir", True)
125
126 self._protocol = protocol
127 self._storage_options = {**_credentials, **_fs_args}
128 self._fs = fsspec.filesystem(self._protocol, **self._storage_options)
129
130 super().__init__(
131 filepath=PurePosixPath(path),
132 version=version,
133 exists_function=self._fs.exists,
134 glob_function=self._fs.glob,
135 )
136
137 # Handle default load arguments
138 self._load_args = deepcopy(self.DEFAULT_LOAD_ARGS)
139 if load_args is not None:
140 self._load_args.update(load_args)
141
142 # Handle default save arguments
143 self._save_args = deepcopy(self.DEFAULT_SAVE_ARGS)
144 if save_args is not None:
145 self._save_args.update(save_args)
146 self._writer_args = self._save_args.pop("writer", {}) # type: ignore
147 self._writer_args.setdefault("engine", engine or "openpyxl") # type: ignore
148
149 if version and self._writer_args.get("mode") == "a": # type: ignore
150 raise DataSetError(
151 "'ExcelDataSet' doesn't support versioning in append mode."
152 )
153
154 if "storage_options" in self._save_args or "storage_options" in self._load_args:
155 logger.warning(
156 "Dropping 'storage_options' for %s, "
157 "please specify them under 'fs_args' or 'credentials'.",
158 self._filepath,
159 )
160 self._save_args.pop("storage_options", None)
161 self._load_args.pop("storage_options", None)
162
163 def _describe(self) -> Dict[str, Any]:
164 return dict(
165 filepath=self._filepath,
166 protocol=self._protocol,
167 load_args=self._load_args,
168 save_args=self._save_args,
169 writer_args=self._writer_args,
170 version=self._version,
171 )
172
173 def _load(self) -> Union[pd.DataFrame, Dict[str, pd.DataFrame]]:
174 load_path = str(self._get_load_path())
175 if self._protocol == "file":
176 # file:// protocol seems to misbehave on Windows
177 # (<urlopen error file not on local host>),
178 # so we don't join that back to the filepath;
179 # storage_options also don't work with local paths
180 return pd.read_excel(load_path, **self._load_args)
181
182 load_path = f"{self._protocol}{PROTOCOL_DELIMITER}{load_path}"
183 return pd.read_excel(
184 load_path, storage_options=self._storage_options, **self._load_args
185 )
186
187 def _save(self, data: Union[pd.DataFrame, Dict[str, pd.DataFrame]]) -> None:
188 output = BytesIO()
189 save_path = get_filepath_str(self._get_save_path(), self._protocol)
190
191 # pylint: disable=abstract-class-instantiated
192 with pd.ExcelWriter(output, **self._writer_args) as writer:
193 if isinstance(data, dict):
194 for sheet_name, sheet_data in data.items():
195 sheet_data.to_excel(
196 writer, sheet_name=sheet_name, **self._save_args
197 )
198 else:
199 data.to_excel(writer, **self._save_args)
200
201 with self._fs.open(save_path, mode="wb") as fs_file:
202 fs_file.write(output.getvalue())
203
204 self._invalidate_cache()
205
206 def _exists(self) -> bool:
207 try:
208 load_path = get_filepath_str(self._get_load_path(), self._protocol)
209 except DataSetError:
210 return False
211
212 return self._fs.exists(load_path)
213
214 def _release(self) -> None:
215 super()._release()
216 self._invalidate_cache()
217
218 def _invalidate_cache(self) -> None:
219 """Invalidate underlying filesystem caches."""
220 filepath = get_filepath_str(self._filepath, self._protocol)
221 self._fs.invalidate_cache(filepath)
222
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kedro/extras/datasets/pandas/excel_dataset.py b/kedro/extras/datasets/pandas/excel_dataset.py
--- a/kedro/extras/datasets/pandas/excel_dataset.py
+++ b/kedro/extras/datasets/pandas/excel_dataset.py
@@ -65,6 +65,36 @@
>>> reloaded = data_set.load()
>>> assert data.equals(reloaded)
+ Note: To save a multi-sheet excel file, no special ``save_args`` are required.
+ Instead, return a dictionary of ``Dict[str, pd.DataFrame]`` where the string
+ keys are your sheet names.
+
+ Example adding a catalog entry for multi-sheet excel file with the ``YAML API``:
+
+ .. code-block:: yaml
+
+ >>> trains:
+ >>> type: pandas.ExcelDataSet
+ >>> filepath: data/02_intermediate/company/trains.xlsx
+ >>> load_args:
+ >>> sheet_name: [Sheet1, Sheet2, Sheet3]
+
+ Example multi-sheet excel file using Python API:
+ ::
+
+ >>> from kedro.extras.datasets.pandas import ExcelDataSet
+ >>> import pandas as pd
+ >>>
+ >>> dataframe = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5],
+ >>> 'col3': [5, 6]})
+ >>> another_dataframe = pd.DataFrame({"x": [10, 20], "y": ["hello", "world"]})
+ >>> multiframe = {"Sheet1": dataframe, "Sheet2": another_dataframe}
+ >>> data_set = ExcelDataSet(filepath="test.xlsx", load_args = {"sheet_name": None})
+ >>> data_set.save(multiframe)
+ >>> reloaded = data_set.load()
+ >>> assert multiframe["Sheet1"].equals(reloaded["Sheet1"])
+ >>> assert multiframe["Sheet2"].equals(reloaded["Sheet2"])
+
"""
DEFAULT_LOAD_ARGS = {"engine": "openpyxl"}
|
{"golden_diff": "diff --git a/kedro/extras/datasets/pandas/excel_dataset.py b/kedro/extras/datasets/pandas/excel_dataset.py\n--- a/kedro/extras/datasets/pandas/excel_dataset.py\n+++ b/kedro/extras/datasets/pandas/excel_dataset.py\n@@ -65,6 +65,36 @@\n >>> reloaded = data_set.load()\n >>> assert data.equals(reloaded)\n \n+ Note: To save a multi-sheet excel file, no special ``save_args`` are required.\n+ Instead, return a dictionary of ``Dict[str, pd.DataFrame]`` where the string\n+ keys are your sheet names.\n+\n+ Example adding a catalog entry for multi-sheet excel file with the ``YAML API``:\n+\n+ .. code-block:: yaml\n+\n+ >>> trains:\n+ >>> type: pandas.ExcelDataSet\n+ >>> filepath: data/02_intermediate/company/trains.xlsx\n+ >>> load_args:\n+ >>> sheet_name: [Sheet1, Sheet2, Sheet3]\n+\n+ Example multi-sheet excel file using Python API:\n+ ::\n+\n+ >>> from kedro.extras.datasets.pandas import ExcelDataSet\n+ >>> import pandas as pd\n+ >>>\n+ >>> dataframe = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5],\n+ >>> 'col3': [5, 6]})\n+ >>> another_dataframe = pd.DataFrame({\"x\": [10, 20], \"y\": [\"hello\", \"world\"]})\n+ >>> multiframe = {\"Sheet1\": dataframe, \"Sheet2\": another_dataframe}\n+ >>> data_set = ExcelDataSet(filepath=\"test.xlsx\", load_args = {\"sheet_name\": None})\n+ >>> data_set.save(multiframe)\n+ >>> reloaded = data_set.load()\n+ >>> assert multiframe[\"Sheet1\"].equals(reloaded[\"Sheet1\"])\n+ >>> assert multiframe[\"Sheet2\"].equals(reloaded[\"Sheet2\"])\n+\n \"\"\"\n \n DEFAULT_LOAD_ARGS = {\"engine\": \"openpyxl\"}\n", "issue": "Document how to read and write a multi-sheet excel\n## Introduction\r\n\r\n>A high-level, short overview of the problem(s) you are designing a solution for.\r\n\r\nWe have the implementation, but it is not easy to point to a documented example:\r\n\r\n\r\n\r\n## Background\r\n\r\n> Provide the reader with the context surrounding the problem(s) you are trying to solve.\r\n\r\nThis is a common feature and something Kedro can already do.\r\n\n", "before_files": [{"content": "\"\"\"``ExcelDataSet`` loads/saves data from/to a Excel file using an underlying\nfilesystem (e.g.: local, S3, GCS). It uses pandas to handle the Excel file.\n\"\"\"\nimport logging\nfrom copy import deepcopy\nfrom io import BytesIO\nfrom pathlib import PurePosixPath\nfrom typing import Any, Dict, Union\n\nimport fsspec\nimport pandas as pd\n\nfrom kedro.io.core import (\n PROTOCOL_DELIMITER,\n AbstractVersionedDataSet,\n DataSetError,\n Version,\n get_filepath_str,\n get_protocol_and_path,\n)\n\nlogger = logging.getLogger(__name__)\n\n\nclass ExcelDataSet(\n AbstractVersionedDataSet[\n Union[pd.DataFrame, Dict[str, pd.DataFrame]],\n Union[pd.DataFrame, Dict[str, pd.DataFrame]],\n ]\n):\n \"\"\"``ExcelDataSet`` loads/saves data from/to a Excel file using an underlying\n filesystem (e.g.: local, S3, GCS). It uses pandas to handle the Excel file.\n\n Example adding a catalog entry with the ``YAML API``:\n\n .. 
code-block:: yaml\n\n >>> rockets:\n >>> type: pandas.ExcelDataSet\n >>> filepath: gcs://your_bucket/rockets.xlsx\n >>> fs_args:\n >>> project: my-project\n >>> credentials: my_gcp_credentials\n >>> save_args:\n >>> sheet_name: Sheet1\n >>> load_args:\n >>> sheet_name: Sheet1\n >>>\n >>> shuttles:\n >>> type: pandas.ExcelDataSet\n >>> filepath: data/01_raw/shuttles.xlsx\n\n Example using Python API:\n ::\n\n >>> from kedro.extras.datasets.pandas import ExcelDataSet\n >>> import pandas as pd\n >>>\n >>> data = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5],\n >>> 'col3': [5, 6]})\n >>>\n >>> # data_set = ExcelDataSet(filepath=\"gcs://bucket/test.xlsx\")\n >>> data_set = ExcelDataSet(filepath=\"test.xlsx\")\n >>> data_set.save(data)\n >>> reloaded = data_set.load()\n >>> assert data.equals(reloaded)\n\n \"\"\"\n\n DEFAULT_LOAD_ARGS = {\"engine\": \"openpyxl\"}\n DEFAULT_SAVE_ARGS = {\"index\": False}\n\n # pylint: disable=too-many-arguments\n def __init__(\n self,\n filepath: str,\n engine: str = \"openpyxl\",\n load_args: Dict[str, Any] = None,\n save_args: Dict[str, Any] = None,\n version: Version = None,\n credentials: Dict[str, Any] = None,\n fs_args: Dict[str, Any] = None,\n ) -> None:\n \"\"\"Creates a new instance of ``ExcelDataSet`` pointing to a concrete Excel file\n on a specific filesystem.\n\n Args:\n filepath: Filepath in POSIX format to a Excel file prefixed with a protocol like\n `s3://`. If prefix is not provided, `file` protocol (local filesystem) will be used.\n The prefix should be any protocol supported by ``fsspec``.\n Note: `http(s)` doesn't support versioning.\n engine: The engine used to write to excel files. The default\n engine is 'openpyxl'.\n load_args: Pandas options for loading Excel files.\n Here you can find all available arguments:\n https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html\n All defaults are preserved, but \"engine\", which is set to \"openpyxl\".\n Supports multi-sheet Excel files (include `sheet_name = None` in `load_args`).\n save_args: Pandas options for saving Excel files.\n Here you can find all available arguments:\n https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html\n All defaults are preserved, but \"index\", which is set to False.\n If you would like to specify options for the `ExcelWriter`,\n you can include them under the \"writer\" key. Here you can\n find all available arguments:\n https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.ExcelWriter.html\n version: If specified, should be an instance of\n ``kedro.io.core.Version``. If its ``load`` attribute is\n None, the latest version will be loaded. If its ``save``\n attribute is None, save version will be autogenerated.\n credentials: Credentials required to get access to the underlying filesystem.\n E.g. for ``GCSFileSystem`` it should look like `{\"token\": None}`.\n fs_args: Extra arguments to pass into underlying filesystem class constructor\n (e.g. 
`{\"project\": \"my-project\"}` for ``GCSFileSystem``).\n\n Raises:\n DataSetError: If versioning is enabled while in append mode.\n \"\"\"\n _fs_args = deepcopy(fs_args) or {}\n _credentials = deepcopy(credentials) or {}\n\n protocol, path = get_protocol_and_path(filepath, version)\n if protocol == \"file\":\n _fs_args.setdefault(\"auto_mkdir\", True)\n\n self._protocol = protocol\n self._storage_options = {**_credentials, **_fs_args}\n self._fs = fsspec.filesystem(self._protocol, **self._storage_options)\n\n super().__init__(\n filepath=PurePosixPath(path),\n version=version,\n exists_function=self._fs.exists,\n glob_function=self._fs.glob,\n )\n\n # Handle default load arguments\n self._load_args = deepcopy(self.DEFAULT_LOAD_ARGS)\n if load_args is not None:\n self._load_args.update(load_args)\n\n # Handle default save arguments\n self._save_args = deepcopy(self.DEFAULT_SAVE_ARGS)\n if save_args is not None:\n self._save_args.update(save_args)\n self._writer_args = self._save_args.pop(\"writer\", {}) # type: ignore\n self._writer_args.setdefault(\"engine\", engine or \"openpyxl\") # type: ignore\n\n if version and self._writer_args.get(\"mode\") == \"a\": # type: ignore\n raise DataSetError(\n \"'ExcelDataSet' doesn't support versioning in append mode.\"\n )\n\n if \"storage_options\" in self._save_args or \"storage_options\" in self._load_args:\n logger.warning(\n \"Dropping 'storage_options' for %s, \"\n \"please specify them under 'fs_args' or 'credentials'.\",\n self._filepath,\n )\n self._save_args.pop(\"storage_options\", None)\n self._load_args.pop(\"storage_options\", None)\n\n def _describe(self) -> Dict[str, Any]:\n return dict(\n filepath=self._filepath,\n protocol=self._protocol,\n load_args=self._load_args,\n save_args=self._save_args,\n writer_args=self._writer_args,\n version=self._version,\n )\n\n def _load(self) -> Union[pd.DataFrame, Dict[str, pd.DataFrame]]:\n load_path = str(self._get_load_path())\n if self._protocol == \"file\":\n # file:// protocol seems to misbehave on Windows\n # (<urlopen error file not on local host>),\n # so we don't join that back to the filepath;\n # storage_options also don't work with local paths\n return pd.read_excel(load_path, **self._load_args)\n\n load_path = f\"{self._protocol}{PROTOCOL_DELIMITER}{load_path}\"\n return pd.read_excel(\n load_path, storage_options=self._storage_options, **self._load_args\n )\n\n def _save(self, data: Union[pd.DataFrame, Dict[str, pd.DataFrame]]) -> None:\n output = BytesIO()\n save_path = get_filepath_str(self._get_save_path(), self._protocol)\n\n # pylint: disable=abstract-class-instantiated\n with pd.ExcelWriter(output, **self._writer_args) as writer:\n if isinstance(data, dict):\n for sheet_name, sheet_data in data.items():\n sheet_data.to_excel(\n writer, sheet_name=sheet_name, **self._save_args\n )\n else:\n data.to_excel(writer, **self._save_args)\n\n with self._fs.open(save_path, mode=\"wb\") as fs_file:\n fs_file.write(output.getvalue())\n\n self._invalidate_cache()\n\n def _exists(self) -> bool:\n try:\n load_path = get_filepath_str(self._get_load_path(), self._protocol)\n except DataSetError:\n return False\n\n return self._fs.exists(load_path)\n\n def _release(self) -> None:\n super()._release()\n self._invalidate_cache()\n\n def _invalidate_cache(self) -> None:\n \"\"\"Invalidate underlying filesystem caches.\"\"\"\n filepath = get_filepath_str(self._filepath, self._protocol)\n self._fs.invalidate_cache(filepath)\n", "path": "kedro/extras/datasets/pandas/excel_dataset.py"}], "after_files": 
[{"content": "\"\"\"``ExcelDataSet`` loads/saves data from/to a Excel file using an underlying\nfilesystem (e.g.: local, S3, GCS). It uses pandas to handle the Excel file.\n\"\"\"\nimport logging\nfrom copy import deepcopy\nfrom io import BytesIO\nfrom pathlib import PurePosixPath\nfrom typing import Any, Dict, Union\n\nimport fsspec\nimport pandas as pd\n\nfrom kedro.io.core import (\n PROTOCOL_DELIMITER,\n AbstractVersionedDataSet,\n DataSetError,\n Version,\n get_filepath_str,\n get_protocol_and_path,\n)\n\nlogger = logging.getLogger(__name__)\n\n\nclass ExcelDataSet(\n AbstractVersionedDataSet[\n Union[pd.DataFrame, Dict[str, pd.DataFrame]],\n Union[pd.DataFrame, Dict[str, pd.DataFrame]],\n ]\n):\n \"\"\"``ExcelDataSet`` loads/saves data from/to a Excel file using an underlying\n filesystem (e.g.: local, S3, GCS). It uses pandas to handle the Excel file.\n\n Example adding a catalog entry with the ``YAML API``:\n\n .. code-block:: yaml\n\n >>> rockets:\n >>> type: pandas.ExcelDataSet\n >>> filepath: gcs://your_bucket/rockets.xlsx\n >>> fs_args:\n >>> project: my-project\n >>> credentials: my_gcp_credentials\n >>> save_args:\n >>> sheet_name: Sheet1\n >>> load_args:\n >>> sheet_name: Sheet1\n >>>\n >>> shuttles:\n >>> type: pandas.ExcelDataSet\n >>> filepath: data/01_raw/shuttles.xlsx\n\n Example using Python API:\n ::\n\n >>> from kedro.extras.datasets.pandas import ExcelDataSet\n >>> import pandas as pd\n >>>\n >>> data = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5],\n >>> 'col3': [5, 6]})\n >>>\n >>> # data_set = ExcelDataSet(filepath=\"gcs://bucket/test.xlsx\")\n >>> data_set = ExcelDataSet(filepath=\"test.xlsx\")\n >>> data_set.save(data)\n >>> reloaded = data_set.load()\n >>> assert data.equals(reloaded)\n\n Note: To save a multi-sheet excel file, no special ``save_args`` are required.\n Instead, return a dictionary of ``Dict[str, pd.DataFrame]`` where the string\n keys are your sheet names.\n\n Example adding a catalog entry for multi-sheet excel file with the ``YAML API``:\n\n .. code-block:: yaml\n\n >>> trains:\n >>> type: pandas.ExcelDataSet\n >>> filepath: data/02_intermediate/company/trains.xlsx\n >>> load_args:\n >>> sheet_name: [Sheet1, Sheet2, Sheet3]\n\n Example multi-sheet excel file using Python API:\n ::\n\n >>> from kedro.extras.datasets.pandas import ExcelDataSet\n >>> import pandas as pd\n >>>\n >>> dataframe = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5],\n >>> 'col3': [5, 6]})\n >>> another_dataframe = pd.DataFrame({\"x\": [10, 20], \"y\": [\"hello\", \"world\"]})\n >>> multiframe = {\"Sheet1\": dataframe, \"Sheet2\": another_dataframe}\n >>> data_set = ExcelDataSet(filepath=\"test.xlsx\", load_args = {\"sheet_name\": None})\n >>> data_set.save(multiframe)\n >>> reloaded = data_set.load()\n >>> assert multiframe[\"Sheet1\"].equals(reloaded[\"Sheet1\"])\n >>> assert multiframe[\"Sheet2\"].equals(reloaded[\"Sheet2\"])\n\n \"\"\"\n\n DEFAULT_LOAD_ARGS = {\"engine\": \"openpyxl\"}\n DEFAULT_SAVE_ARGS = {\"index\": False}\n\n # pylint: disable=too-many-arguments\n def __init__(\n self,\n filepath: str,\n engine: str = \"openpyxl\",\n load_args: Dict[str, Any] = None,\n save_args: Dict[str, Any] = None,\n version: Version = None,\n credentials: Dict[str, Any] = None,\n fs_args: Dict[str, Any] = None,\n ) -> None:\n \"\"\"Creates a new instance of ``ExcelDataSet`` pointing to a concrete Excel file\n on a specific filesystem.\n\n Args:\n filepath: Filepath in POSIX format to a Excel file prefixed with a protocol like\n `s3://`. 
If prefix is not provided, `file` protocol (local filesystem) will be used.\n The prefix should be any protocol supported by ``fsspec``.\n Note: `http(s)` doesn't support versioning.\n engine: The engine used to write to excel files. The default\n engine is 'openpyxl'.\n load_args: Pandas options for loading Excel files.\n Here you can find all available arguments:\n https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html\n All defaults are preserved, but \"engine\", which is set to \"openpyxl\".\n Supports multi-sheet Excel files (include `sheet_name = None` in `load_args`).\n save_args: Pandas options for saving Excel files.\n Here you can find all available arguments:\n https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html\n All defaults are preserved, but \"index\", which is set to False.\n If you would like to specify options for the `ExcelWriter`,\n you can include them under the \"writer\" key. Here you can\n find all available arguments:\n https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.ExcelWriter.html\n version: If specified, should be an instance of\n ``kedro.io.core.Version``. If its ``load`` attribute is\n None, the latest version will be loaded. If its ``save``\n attribute is None, save version will be autogenerated.\n credentials: Credentials required to get access to the underlying filesystem.\n E.g. for ``GCSFileSystem`` it should look like `{\"token\": None}`.\n fs_args: Extra arguments to pass into underlying filesystem class constructor\n (e.g. `{\"project\": \"my-project\"}` for ``GCSFileSystem``).\n\n Raises:\n DataSetError: If versioning is enabled while in append mode.\n \"\"\"\n _fs_args = deepcopy(fs_args) or {}\n _credentials = deepcopy(credentials) or {}\n\n protocol, path = get_protocol_and_path(filepath, version)\n if protocol == \"file\":\n _fs_args.setdefault(\"auto_mkdir\", True)\n\n self._protocol = protocol\n self._storage_options = {**_credentials, **_fs_args}\n self._fs = fsspec.filesystem(self._protocol, **self._storage_options)\n\n super().__init__(\n filepath=PurePosixPath(path),\n version=version,\n exists_function=self._fs.exists,\n glob_function=self._fs.glob,\n )\n\n # Handle default load arguments\n self._load_args = deepcopy(self.DEFAULT_LOAD_ARGS)\n if load_args is not None:\n self._load_args.update(load_args)\n\n # Handle default save arguments\n self._save_args = deepcopy(self.DEFAULT_SAVE_ARGS)\n if save_args is not None:\n self._save_args.update(save_args)\n self._writer_args = self._save_args.pop(\"writer\", {}) # type: ignore\n self._writer_args.setdefault(\"engine\", engine or \"openpyxl\") # type: ignore\n\n if version and self._writer_args.get(\"mode\") == \"a\": # type: ignore\n raise DataSetError(\n \"'ExcelDataSet' doesn't support versioning in append mode.\"\n )\n\n if \"storage_options\" in self._save_args or \"storage_options\" in self._load_args:\n logger.warning(\n \"Dropping 'storage_options' for %s, \"\n \"please specify them under 'fs_args' or 'credentials'.\",\n self._filepath,\n )\n self._save_args.pop(\"storage_options\", None)\n self._load_args.pop(\"storage_options\", None)\n\n def _describe(self) -> Dict[str, Any]:\n return dict(\n filepath=self._filepath,\n protocol=self._protocol,\n load_args=self._load_args,\n save_args=self._save_args,\n writer_args=self._writer_args,\n version=self._version,\n )\n\n def _load(self) -> Union[pd.DataFrame, Dict[str, pd.DataFrame]]:\n load_path = str(self._get_load_path())\n if self._protocol == \"file\":\n # 
file:// protocol seems to misbehave on Windows\n # (<urlopen error file not on local host>),\n # so we don't join that back to the filepath;\n # storage_options also don't work with local paths\n return pd.read_excel(load_path, **self._load_args)\n\n load_path = f\"{self._protocol}{PROTOCOL_DELIMITER}{load_path}\"\n return pd.read_excel(\n load_path, storage_options=self._storage_options, **self._load_args\n )\n\n def _save(self, data: Union[pd.DataFrame, Dict[str, pd.DataFrame]]) -> None:\n output = BytesIO()\n save_path = get_filepath_str(self._get_save_path(), self._protocol)\n\n # pylint: disable=abstract-class-instantiated\n with pd.ExcelWriter(output, **self._writer_args) as writer:\n if isinstance(data, dict):\n for sheet_name, sheet_data in data.items():\n sheet_data.to_excel(\n writer, sheet_name=sheet_name, **self._save_args\n )\n else:\n data.to_excel(writer, **self._save_args)\n\n with self._fs.open(save_path, mode=\"wb\") as fs_file:\n fs_file.write(output.getvalue())\n\n self._invalidate_cache()\n\n def _exists(self) -> bool:\n try:\n load_path = get_filepath_str(self._get_load_path(), self._protocol)\n except DataSetError:\n return False\n\n return self._fs.exists(load_path)\n\n def _release(self) -> None:\n super()._release()\n self._invalidate_cache()\n\n def _invalidate_cache(self) -> None:\n \"\"\"Invalidate underlying filesystem caches.\"\"\"\n filepath = get_filepath_str(self._filepath, self._protocol)\n self._fs.invalidate_cache(filepath)\n", "path": "kedro/extras/datasets/pandas/excel_dataset.py"}]}
| 2,897 | 456 |
gh_patches_debug_955
|
rasdani/github-patches
|
git_diff
|
Textualize__rich-2108
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Rich's IPython extension doesn't work
**Describe the bug**
When trying to use `%load_ext rich` in **IPython** on Terminal, it says the following:
```
%Python 3.10.3 (main, Mar 17 2022, 04:46:20) [Clang 12.0.8 (https://android.googlesource.com/toolchain/llvm-project c935d99d7
Type 'copyright', 'credits' or 'license' for more information
IPython 8.1.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: %load_ext rich
The rich module is not an IPython extension.
```
**Platform**
<details>
<summary>Click to expand</summary>
What platform (Win/Linux/Mac) are you running on? What terminal software are you using?
I may ask you to copy and paste the output of the following commands. It may save some time if you do it now.
If you're using Rich in a terminal:
```
python -m rich.diagnose
pip freeze | grep rich
```
If you're using Rich in a Jupyter Notebook, run the following snippet in a cell
and paste the output in your bug report.
```python
from rich.diagnose import report
report()
```
</details>
```
❯ python -m rich.diagnose
pip freeze | grep rich
╭────────────────── <class 'rich.console.Console'> ──────────────────╮
│ A high level console interface. │
│ │
│ ╭────────────────────────────────────────────────────────────────╮ │
│ │ <console width=70 ColorSystem.TRUECOLOR> │ │
│ ╰────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = 'truecolor' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' │
│ encoding='utf-8'> │
│ height = 45 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions( │
│ width=70, │
│ height=45 │
│ ), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=70, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=45, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=70, height=45) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 70 │
╰────────────────────────────────────────────────────────────────────╯
╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮
│ Windows features available. │
│ │
│ ╭───────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ ╰───────────────────────────────────────────────────╯ │
│ │
│ truecolor = False │
│ vt = False │
╰───────────────────────────────────────────────────────╯
╭────── Environment Variables ───────╮
│ { │
│ 'TERM': 'xterm-256color', │
│ 'COLORTERM': 'truecolor', │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰────────────────────────────────────╯
platform="Linux"
rich @ file:///storage/emulated/0/Projects/rich
```
[](https://asciinema.org/a/Xd3qDv897tjdEll0csW5XZk0T)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rich/__init__.py`
Content:
```
1 """Rich text and beautiful formatting in the terminal."""
2
3 import os
4 from typing import Callable, IO, TYPE_CHECKING, Any, Optional, Union
5
6
7 __all__ = ["get_console", "reconfigure", "print", "inspect"]
8
9 if TYPE_CHECKING:
10 from .console import Console
11
12 # Global console used by alternative print
13 _console: Optional["Console"] = None
14
15 _IMPORT_CWD = os.path.abspath(os.getcwd())
16
17
18 def get_console() -> "Console":
19 """Get a global :class:`~rich.console.Console` instance. This function is used when Rich requires a Console,
20 and hasn't been explicitly given one.
21
22 Returns:
23 Console: A console instance.
24 """
25 global _console
26 if _console is None:
27 from .console import Console
28
29 _console = Console()
30
31 return _console
32
33
34 def reconfigure(*args: Any, **kwargs: Any) -> None:
35 """Reconfigures the global console by replacing it with another.
36
37 Args:
38 console (Console): Replacement console instance.
39 """
40 from rich.console import Console
41
42 new_console = Console(*args, **kwargs)
43 _console = get_console()
44 _console.__dict__ = new_console.__dict__
45
46
47 def print(
48 *objects: Any,
49 sep: str = " ",
50 end: str = "\n",
51 file: Optional[IO[str]] = None,
52 flush: bool = False,
53 ) -> None:
54 r"""Print object(s) supplied via positional arguments.
55 This function has an identical signature to the built-in print.
56 For more advanced features, see the :class:`~rich.console.Console` class.
57
58 Args:
59 sep (str, optional): Separator between printed objects. Defaults to " ".
60 end (str, optional): Character to write at end of output. Defaults to "\\n".
61 file (IO[str], optional): File to write to, or None for stdout. Defaults to None.
62 flush (bool, optional): Has no effect as Rich always flushes output. Defaults to False.
63
64 """
65 from .console import Console
66
67 write_console = get_console() if file is None else Console(file=file)
68 return write_console.print(*objects, sep=sep, end=end)
69
70
71 def print_json(
72 json: Optional[str] = None,
73 *,
74 data: Any = None,
75 indent: Union[None, int, str] = 2,
76 highlight: bool = True,
77 skip_keys: bool = False,
78 ensure_ascii: bool = True,
79 check_circular: bool = True,
80 allow_nan: bool = True,
81 default: Optional[Callable[[Any], Any]] = None,
82 sort_keys: bool = False,
83 ) -> None:
84 """Pretty prints JSON. Output will be valid JSON.
85
86 Args:
87 json (str): A string containing JSON.
88 data (Any): If json is not supplied, then encode this data.
89 indent (int, optional): Number of spaces to indent. Defaults to 2.
90 highlight (bool, optional): Enable highlighting of output: Defaults to True.
91 skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False.
92 ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False.
93 check_circular (bool, optional): Check for circular references. Defaults to True.
94 allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True.
95 default (Callable, optional): A callable that converts values that can not be encoded
96 in to something that can be JSON encoded. Defaults to None.
97 sort_keys (bool, optional): Sort dictionary keys. Defaults to False.
98 """
99
100 get_console().print_json(
101 json,
102 data=data,
103 indent=indent,
104 highlight=highlight,
105 skip_keys=skip_keys,
106 ensure_ascii=ensure_ascii,
107 check_circular=check_circular,
108 allow_nan=allow_nan,
109 default=default,
110 sort_keys=sort_keys,
111 )
112
113
114 def inspect(
115 obj: Any,
116 *,
117 console: Optional["Console"] = None,
118 title: Optional[str] = None,
119 help: bool = False,
120 methods: bool = False,
121 docs: bool = True,
122 private: bool = False,
123 dunder: bool = False,
124 sort: bool = True,
125 all: bool = False,
126 value: bool = True,
127 ) -> None:
128 """Inspect any Python object.
129
130 * inspect(<OBJECT>) to see summarized info.
131 * inspect(<OBJECT>, methods=True) to see methods.
132 * inspect(<OBJECT>, help=True) to see full (non-abbreviated) help.
133 * inspect(<OBJECT>, private=True) to see private attributes (single underscore).
134 * inspect(<OBJECT>, dunder=True) to see attributes beginning with double underscore.
135 * inspect(<OBJECT>, all=True) to see all attributes.
136
137 Args:
138 obj (Any): An object to inspect.
139 title (str, optional): Title to display over inspect result, or None use type. Defaults to None.
140 help (bool, optional): Show full help text rather than just first paragraph. Defaults to False.
141 methods (bool, optional): Enable inspection of callables. Defaults to False.
142 docs (bool, optional): Also render doc strings. Defaults to True.
143 private (bool, optional): Show private attributes (beginning with underscore). Defaults to False.
144 dunder (bool, optional): Show attributes starting with double underscore. Defaults to False.
145 sort (bool, optional): Sort attributes alphabetically. Defaults to True.
146 all (bool, optional): Show all attributes. Defaults to False.
147 value (bool, optional): Pretty print value. Defaults to True.
148 """
149 _console = console or get_console()
150 from rich._inspect import Inspect
151
152 # Special case for inspect(inspect)
153 is_inspect = obj is inspect
154
155 _inspect = Inspect(
156 obj,
157 title=title,
158 help=is_inspect or help,
159 methods=is_inspect or methods,
160 docs=is_inspect or docs,
161 private=private,
162 dunder=dunder,
163 sort=sort,
164 all=all,
165 value=value,
166 )
167 _console.print(_inspect)
168
169
170 if __name__ == "__main__": # pragma: no cover
171 print("Hello, **World**")
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rich/__init__.py b/rich/__init__.py
--- a/rich/__init__.py
+++ b/rich/__init__.py
@@ -3,6 +3,7 @@
import os
from typing import Callable, IO, TYPE_CHECKING, Any, Optional, Union
+from ._extension import load_ipython_extension
__all__ = ["get_console", "reconfigure", "print", "inspect"]
|
{"golden_diff": "diff --git a/rich/__init__.py b/rich/__init__.py\n--- a/rich/__init__.py\n+++ b/rich/__init__.py\n@@ -3,6 +3,7 @@\n import os\n from typing import Callable, IO, TYPE_CHECKING, Any, Optional, Union\n \n+from ._extension import load_ipython_extension\n \n __all__ = [\"get_console\", \"reconfigure\", \"print\", \"inspect\"]\n", "issue": "[BUG] Rich's IPython extension doesn't work\n**Describe the bug**\r\n\r\nWhen trying to use `%load_ext rich` in **IPython** on Terminal it says following:\r\n```\r\n%Python 3.10.3 (main, Mar 17 2022, 04:46:20) [Clang 12.0.8 (https://android.googlesource.com/toolchain/llvm-project c935d99d7\r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 8.1.1 -- An enhanced Interactive Python. Type '?' for help.\r\n\r\nIn [1]: %load_ext rich\r\nThe rich module is not an IPython extension.\r\n```\r\n\r\n**Platform**\r\n<details>\r\n<summary>Click to expand</summary>\r\n\r\nWhat platform (Win/Linux/Mac) are you running on? What terminal software are you using?\r\n\r\nI may ask you to copy and paste the output of the following commands. It may save some time if you do it now.\r\n\r\nIf you're using Rich in a terminal:\r\n\r\n```\r\npython -m rich.diagnose\r\npip freeze | grep rich\r\n```\r\n\r\nIf you're using Rich in a Jupyter Notebook, run the following snippet in a cell\r\nand paste the output in your bug report.\r\n\r\n```python\r\nfrom rich.diagnose import report\r\nreport()\r\n```\r\n\r\n</details>\r\n\r\n```\r\n\u276f python -m rich.diagnose\r\npip freeze | grep rich\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 <class 'rich.console.Console'> \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 A high level console interface. 
\u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 <console width=70 ColorSystem.TRUECOLOR> \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 color_system = 'truecolor' \u2502\r\n\u2502 encoding = 'utf-8' \u2502\r\n\u2502 file = <_io.TextIOWrapper name='<stdout>' mode='w' \u2502\r\n\u2502 encoding='utf-8'> \u2502\r\n\u2502 height = 45 \u2502\r\n\u2502 is_alt_screen = False \u2502\r\n\u2502 is_dumb_terminal = False \u2502\r\n\u2502 is_interactive = True \u2502\r\n\u2502 is_jupyter = False \u2502\r\n\u2502 is_terminal = True \u2502\r\n\u2502 legacy_windows = False \u2502\r\n\u2502 no_color = False \u2502\r\n\u2502 options = ConsoleOptions( \u2502\r\n\u2502 size=ConsoleDimensions( \u2502\r\n\u2502 width=70, \u2502\r\n\u2502 height=45 \u2502\r\n\u2502 ), \u2502\r\n\u2502 legacy_windows=False, \u2502\r\n\u2502 min_width=1, \u2502\r\n\u2502 max_width=70, \u2502\r\n\u2502 is_terminal=True, \u2502\r\n\u2502 encoding='utf-8', \u2502\r\n\u2502 max_height=45, \u2502\r\n\u2502 justify=None, \u2502\r\n\u2502 overflow=None, \u2502\r\n\u2502 no_wrap=False, \u2502\r\n\u2502 highlight=None, \u2502\r\n\u2502 markup=None, \u2502\r\n\u2502 height=None \u2502\r\n\u2502 ) \u2502\r\n\u2502 quiet = False \u2502\r\n\u2502 record = False \u2502\r\n\u2502 safe_box = True \u2502\r\n\u2502 size = ConsoleDimensions(width=70, height=45) \u2502\r\n\u2502 soft_wrap = False \u2502\r\n\u2502 stderr = False \u2502\r\n\u2502 style = None \u2502\r\n\u2502 tab_size = 8 \u2502\r\n\u2502 width = 70 \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500 <class 'rich._windows.WindowsConsoleFeatures'> \u2500\u2500\u2500\u2500\u256e\r\n\u2502 Windows features available. 
\u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 truecolor = False \u2502\r\n\u2502 vt = False \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 { \u2502\r\n\u2502 'TERM': 'xterm-256color', \u2502\r\n\u2502 'COLORTERM': 'truecolor', \u2502\r\n\u2502 'CLICOLOR': None, \u2502\r\n\u2502 'NO_COLOR': None, \u2502\r\n\u2502 'TERM_PROGRAM': None, \u2502\r\n\u2502 'COLUMNS': None, \u2502\r\n\u2502 'LINES': None, \u2502\r\n\u2502 'JPY_PARENT_PID': None, \u2502\r\n\u2502 'VSCODE_VERBOSE_LOGGING': None \u2502\r\n\u2502 } \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nplatform=\"Linux\"\r\nrich @ file:///storage/emulated/0/Projects/rich\r\n```\r\n\r\n[](https://asciinema.org/a/Xd3qDv897tjdEll0csW5XZk0T)\n", "before_files": [{"content": "\"\"\"Rich text and beautiful formatting in the terminal.\"\"\"\n\nimport os\nfrom typing import Callable, IO, TYPE_CHECKING, Any, Optional, Union\n\n\n__all__ = [\"get_console\", \"reconfigure\", \"print\", \"inspect\"]\n\nif TYPE_CHECKING:\n from .console import Console\n\n# Global console used by alternative print\n_console: Optional[\"Console\"] = None\n\n_IMPORT_CWD = os.path.abspath(os.getcwd())\n\n\ndef get_console() -> \"Console\":\n \"\"\"Get a global :class:`~rich.console.Console` instance. 
This function is used when Rich requires a Console,\n and hasn't been explicitly given one.\n\n Returns:\n Console: A console instance.\n \"\"\"\n global _console\n if _console is None:\n from .console import Console\n\n _console = Console()\n\n return _console\n\n\ndef reconfigure(*args: Any, **kwargs: Any) -> None:\n \"\"\"Reconfigures the global console by replacing it with another.\n\n Args:\n console (Console): Replacement console instance.\n \"\"\"\n from rich.console import Console\n\n new_console = Console(*args, **kwargs)\n _console = get_console()\n _console.__dict__ = new_console.__dict__\n\n\ndef print(\n *objects: Any,\n sep: str = \" \",\n end: str = \"\\n\",\n file: Optional[IO[str]] = None,\n flush: bool = False,\n) -> None:\n r\"\"\"Print object(s) supplied via positional arguments.\n This function has an identical signature to the built-in print.\n For more advanced features, see the :class:`~rich.console.Console` class.\n\n Args:\n sep (str, optional): Separator between printed objects. Defaults to \" \".\n end (str, optional): Character to write at end of output. Defaults to \"\\\\n\".\n file (IO[str], optional): File to write to, or None for stdout. Defaults to None.\n flush (bool, optional): Has no effect as Rich always flushes output. Defaults to False.\n\n \"\"\"\n from .console import Console\n\n write_console = get_console() if file is None else Console(file=file)\n return write_console.print(*objects, sep=sep, end=end)\n\n\ndef print_json(\n json: Optional[str] = None,\n *,\n data: Any = None,\n indent: Union[None, int, str] = 2,\n highlight: bool = True,\n skip_keys: bool = False,\n ensure_ascii: bool = True,\n check_circular: bool = True,\n allow_nan: bool = True,\n default: Optional[Callable[[Any], Any]] = None,\n sort_keys: bool = False,\n) -> None:\n \"\"\"Pretty prints JSON. Output will be valid JSON.\n\n Args:\n json (str): A string containing JSON.\n data (Any): If json is not supplied, then encode this data.\n indent (int, optional): Number of spaces to indent. Defaults to 2.\n highlight (bool, optional): Enable highlighting of output: Defaults to True.\n skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False.\n ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False.\n check_circular (bool, optional): Check for circular references. Defaults to True.\n allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True.\n default (Callable, optional): A callable that converts values that can not be encoded\n in to something that can be JSON encoded. Defaults to None.\n sort_keys (bool, optional): Sort dictionary keys. 
Defaults to False.\n \"\"\"\n\n get_console().print_json(\n json,\n data=data,\n indent=indent,\n highlight=highlight,\n skip_keys=skip_keys,\n ensure_ascii=ensure_ascii,\n check_circular=check_circular,\n allow_nan=allow_nan,\n default=default,\n sort_keys=sort_keys,\n )\n\n\ndef inspect(\n obj: Any,\n *,\n console: Optional[\"Console\"] = None,\n title: Optional[str] = None,\n help: bool = False,\n methods: bool = False,\n docs: bool = True,\n private: bool = False,\n dunder: bool = False,\n sort: bool = True,\n all: bool = False,\n value: bool = True,\n) -> None:\n \"\"\"Inspect any Python object.\n\n * inspect(<OBJECT>) to see summarized info.\n * inspect(<OBJECT>, methods=True) to see methods.\n * inspect(<OBJECT>, help=True) to see full (non-abbreviated) help.\n * inspect(<OBJECT>, private=True) to see private attributes (single underscore).\n * inspect(<OBJECT>, dunder=True) to see attributes beginning with double underscore.\n * inspect(<OBJECT>, all=True) to see all attributes.\n\n Args:\n obj (Any): An object to inspect.\n title (str, optional): Title to display over inspect result, or None use type. Defaults to None.\n help (bool, optional): Show full help text rather than just first paragraph. Defaults to False.\n methods (bool, optional): Enable inspection of callables. Defaults to False.\n docs (bool, optional): Also render doc strings. Defaults to True.\n private (bool, optional): Show private attributes (beginning with underscore). Defaults to False.\n dunder (bool, optional): Show attributes starting with double underscore. Defaults to False.\n sort (bool, optional): Sort attributes alphabetically. Defaults to True.\n all (bool, optional): Show all attributes. Defaults to False.\n value (bool, optional): Pretty print value. Defaults to True.\n \"\"\"\n _console = console or get_console()\n from rich._inspect import Inspect\n\n # Special case for inspect(inspect)\n is_inspect = obj is inspect\n\n _inspect = Inspect(\n obj,\n title=title,\n help=is_inspect or help,\n methods=is_inspect or methods,\n docs=is_inspect or docs,\n private=private,\n dunder=dunder,\n sort=sort,\n all=all,\n value=value,\n )\n _console.print(_inspect)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n print(\"Hello, **World**\")\n", "path": "rich/__init__.py"}], "after_files": [{"content": "\"\"\"Rich text and beautiful formatting in the terminal.\"\"\"\n\nimport os\nfrom typing import Callable, IO, TYPE_CHECKING, Any, Optional, Union\n\nfrom ._extension import load_ipython_extension\n\n__all__ = [\"get_console\", \"reconfigure\", \"print\", \"inspect\"]\n\nif TYPE_CHECKING:\n from .console import Console\n\n# Global console used by alternative print\n_console: Optional[\"Console\"] = None\n\n_IMPORT_CWD = os.path.abspath(os.getcwd())\n\n\ndef get_console() -> \"Console\":\n \"\"\"Get a global :class:`~rich.console.Console` instance. 
This function is used when Rich requires a Console,\n and hasn't been explicitly given one.\n\n Returns:\n Console: A console instance.\n \"\"\"\n global _console\n if _console is None:\n from .console import Console\n\n _console = Console()\n\n return _console\n\n\ndef reconfigure(*args: Any, **kwargs: Any) -> None:\n \"\"\"Reconfigures the global console by replacing it with another.\n\n Args:\n console (Console): Replacement console instance.\n \"\"\"\n from rich.console import Console\n\n new_console = Console(*args, **kwargs)\n _console = get_console()\n _console.__dict__ = new_console.__dict__\n\n\ndef print(\n *objects: Any,\n sep: str = \" \",\n end: str = \"\\n\",\n file: Optional[IO[str]] = None,\n flush: bool = False,\n) -> None:\n r\"\"\"Print object(s) supplied via positional arguments.\n This function has an identical signature to the built-in print.\n For more advanced features, see the :class:`~rich.console.Console` class.\n\n Args:\n sep (str, optional): Separator between printed objects. Defaults to \" \".\n end (str, optional): Character to write at end of output. Defaults to \"\\\\n\".\n file (IO[str], optional): File to write to, or None for stdout. Defaults to None.\n flush (bool, optional): Has no effect as Rich always flushes output. Defaults to False.\n\n \"\"\"\n from .console import Console\n\n write_console = get_console() if file is None else Console(file=file)\n return write_console.print(*objects, sep=sep, end=end)\n\n\ndef print_json(\n json: Optional[str] = None,\n *,\n data: Any = None,\n indent: Union[None, int, str] = 2,\n highlight: bool = True,\n skip_keys: bool = False,\n ensure_ascii: bool = True,\n check_circular: bool = True,\n allow_nan: bool = True,\n default: Optional[Callable[[Any], Any]] = None,\n sort_keys: bool = False,\n) -> None:\n \"\"\"Pretty prints JSON. Output will be valid JSON.\n\n Args:\n json (str): A string containing JSON.\n data (Any): If json is not supplied, then encode this data.\n indent (int, optional): Number of spaces to indent. Defaults to 2.\n highlight (bool, optional): Enable highlighting of output: Defaults to True.\n skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False.\n ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False.\n check_circular (bool, optional): Check for circular references. Defaults to True.\n allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True.\n default (Callable, optional): A callable that converts values that can not be encoded\n in to something that can be JSON encoded. Defaults to None.\n sort_keys (bool, optional): Sort dictionary keys. 
Defaults to False.\n \"\"\"\n\n get_console().print_json(\n json,\n data=data,\n indent=indent,\n highlight=highlight,\n skip_keys=skip_keys,\n ensure_ascii=ensure_ascii,\n check_circular=check_circular,\n allow_nan=allow_nan,\n default=default,\n sort_keys=sort_keys,\n )\n\n\ndef inspect(\n obj: Any,\n *,\n console: Optional[\"Console\"] = None,\n title: Optional[str] = None,\n help: bool = False,\n methods: bool = False,\n docs: bool = True,\n private: bool = False,\n dunder: bool = False,\n sort: bool = True,\n all: bool = False,\n value: bool = True,\n) -> None:\n \"\"\"Inspect any Python object.\n\n * inspect(<OBJECT>) to see summarized info.\n * inspect(<OBJECT>, methods=True) to see methods.\n * inspect(<OBJECT>, help=True) to see full (non-abbreviated) help.\n * inspect(<OBJECT>, private=True) to see private attributes (single underscore).\n * inspect(<OBJECT>, dunder=True) to see attributes beginning with double underscore.\n * inspect(<OBJECT>, all=True) to see all attributes.\n\n Args:\n obj (Any): An object to inspect.\n title (str, optional): Title to display over inspect result, or None use type. Defaults to None.\n help (bool, optional): Show full help text rather than just first paragraph. Defaults to False.\n methods (bool, optional): Enable inspection of callables. Defaults to False.\n docs (bool, optional): Also render doc strings. Defaults to True.\n private (bool, optional): Show private attributes (beginning with underscore). Defaults to False.\n dunder (bool, optional): Show attributes starting with double underscore. Defaults to False.\n sort (bool, optional): Sort attributes alphabetically. Defaults to True.\n all (bool, optional): Show all attributes. Defaults to False.\n value (bool, optional): Pretty print value. Defaults to True.\n \"\"\"\n _console = console or get_console()\n from rich._inspect import Inspect\n\n # Special case for inspect(inspect)\n is_inspect = obj is inspect\n\n _inspect = Inspect(\n obj,\n title=title,\n help=is_inspect or help,\n methods=is_inspect or methods,\n docs=is_inspect or docs,\n private=private,\n dunder=dunder,\n sort=sort,\n all=all,\n value=value,\n )\n _console.print(_inspect)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n print(\"Hello, **World**\")\n", "path": "rich/__init__.py"}]}
| 3,156 | 94 |
gh_patches_debug_10336 | rasdani/github-patches | git_diff | replicate__cog-555 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`choices` turn input to enum type
With 'choices' of strings, the input is no long string but changed to enum type.
Refer to https://github.com/wty-ustc/HairCLIP/pull/16/files#diff-73c1982d8a085dc10fda2ac7b6f202ae3ff9530ee6a15991c5339051eb10a49aR61, where `editing_type` should be string eg. "both" but the value shows `editing_type.both` and type `<enum 'editing_type'>`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/cog/predictor.py`
Content:
```
1 from abc import ABC, abstractmethod
2 from collections.abc import Iterator
3 import enum
4 import importlib
5 import inspect
6 import os.path
7 from pathlib import Path
8 from pydantic import create_model, BaseModel
9 from pydantic.fields import FieldInfo
10 from typing import List
11
12 # Added in Python 3.8. Can be from typing if we drop support for <3.8.
13 from typing_extensions import get_origin, get_args
14 import yaml
15
16 from .errors import ConfigDoesNotExist, PredictorNotSet
17 from .types import Input, Path as CogPath, File as CogFile
18
19
20 ALLOWED_INPUT_TYPES = [str, int, float, bool, CogFile, CogPath]
21
22
23 class BasePredictor(ABC):
24 def setup(self):
25 """
26 An optional method to prepare the model so multiple predictions run efficiently.
27 """
28
29 @abstractmethod
30 def predict(self, **kwargs):
31 """
32 Run a single prediction on the model
33 """
34
35
36 def run_prediction(predictor, inputs, cleanup_functions):
37 """
38 Run the predictor on the inputs, and append resulting paths
39 to cleanup functions for removal.
40 """
41 result = predictor.predict(**inputs)
42 if isinstance(result, Path):
43 cleanup_functions.append(result.unlink)
44 return result
45
46
47 def load_predictor():
48 """
49 Reads cog.yaml and constructs an instance of the user-defined Predictor class.
50 """
51
52 # Assumes the working directory is /src
53 config_path = os.path.abspath("cog.yaml")
54 try:
55 with open(config_path) as fh:
56 config = yaml.safe_load(fh)
57 except FileNotFoundError:
58 raise ConfigDoesNotExist(
59 f"Could not find {config_path}",
60 )
61
62 if "predict" not in config:
63 raise PredictorNotSet(
64 "Can't run predictions: 'predict' option not found in cog.yaml"
65 )
66
67 predict_string = config["predict"]
68 module_path, class_name = predict_string.split(":", 1)
69 module_name = os.path.basename(module_path).split(".py", 1)[0]
70 spec = importlib.util.spec_from_file_location(module_name, module_path)
71 module = importlib.util.module_from_spec(spec)
72 spec.loader.exec_module(module)
73 predictor_class = getattr(module, class_name)
74 return predictor_class()
75
76
77 # Base class for inputs, constructed dynamically in get_input_type().
78 # (This can't be a docstring or it gets passed through to the schema.)
79 class BaseInput(BaseModel):
80 def cleanup(self):
81 """
82 Cleanup any temporary files created by the input.
83 """
84 for _, value in self:
85 # Note this is pathlib.Path, which cog.Path is a subclass of. A pathlib.Path object shouldn't make its way here,
86 # but both have an unlink() method, so may as well be safe.
87 if isinstance(value, Path):
88 # This could be missing_ok=True when we drop support for Python 3.7
89 if value.exists():
90 value.unlink()
91
92
93 def get_input_type(predictor: BasePredictor):
94 """
95 Creates a Pydantic Input model from the arguments of a Predictor's predict() method.
96
97 class Predictor(BasePredictor):
98 def predict(self, text: str):
99 ...
100
101 programmatically creates a model like this:
102
103 class Input(BaseModel):
104 text: str
105 """
106
107 signature = inspect.signature(predictor.predict)
108 create_model_kwargs = {}
109
110 order = 0
111
112 for name, parameter in signature.parameters.items():
113 InputType = parameter.annotation
114
115 if InputType is inspect.Signature.empty:
116 raise TypeError(
117 f"No input type provided for parameter `{name}`. Supported input types are: {readable_types_list(ALLOWED_INPUT_TYPES)}."
118 )
119 elif InputType not in ALLOWED_INPUT_TYPES:
120 raise TypeError(
121 f"Unsupported input type {human_readable_type_name(InputType)} for parameter `{name}`. Supported input types are: {readable_types_list(ALLOWED_INPUT_TYPES)}."
122 )
123
124 # if no default is specified, create an empty, required input
125 if parameter.default is inspect.Signature.empty:
126 default = Input()
127 else:
128 default = parameter.default
129 # If user hasn't used `Input`, then wrap it in that
130 if not isinstance(default, FieldInfo):
131 default = Input(default=default)
132
133 # Fields aren't ordered, so use this pattern to ensure defined order
134 # https://github.com/go-openapi/spec/pull/116
135 default.extra["x-order"] = order
136 order += 1
137
138 # Choices!
139 if default.extra.get("choices"):
140 choices = default.extra["choices"]
141 # It will be passed automatically as 'enum' in the schema, so remove it as an extra field.
142 del default.extra["choices"]
143 if InputType == str:
144
145 class StringEnum(str, enum.Enum):
146 pass
147
148 InputType = StringEnum(name, {value: value for value in choices})
149 elif InputType == int:
150 InputType = enum.IntEnum(name, {str(value): value for value in choices})
151 else:
152 raise TypeError(
153 f"The input {name} uses the option choices. Choices can only be used with str or int types."
154 )
155
156 create_model_kwargs[name] = (InputType, default)
157
158 return create_model("Input", **create_model_kwargs, __base__=BaseInput)
159
160
161 def get_output_type(predictor: BasePredictor):
162 """
163 Creates a Pydantic Output model from the return type annotation of a Predictor's predict() method.
164 """
165
166 signature = inspect.signature(predictor.predict)
167 if signature.return_annotation is inspect.Signature.empty:
168 raise TypeError(
169 """You must set an output type. If your model can return multiple output types, you can explicitly set `Any` as the output type.
170
171 For example:
172
173 from typing import Any
174
175 def predict(
176 self,
177 image: Path = Input(description="Input image"),
178 ) -> Any:
179 ...
180 """
181 )
182 else:
183 OutputType = signature.return_annotation
184
185 # The type that goes in the response is a list of the yielded type
186 if get_origin(OutputType) is Iterator:
187 OutputType = List[get_args(OutputType)[0]]
188
189 if not hasattr(OutputType, "__name__") or OutputType.__name__ != "Output":
190 # Wrap the type in a model called "Output" so it is a consistent name in the OpenAPI schema
191 class Output(BaseModel):
192 __root__: OutputType
193
194 OutputType = Output
195
196 return OutputType
197
198
199 def human_readable_type_name(t):
200 """
201 Generates a useful-for-humans label for a type. For builtin types, it's just the class name (eg "str" or "int"). For other types, it includes the module (eg "pathlib.Path" or "cog.File").
202
203 The special case for Cog modules is because the type lives in `cog.types` internally, but just `cog` when included as a dependency.
204 """
205 module = t.__module__
206 if module == "builtins":
207 return t.__qualname__
208 elif module.split(".")[0] == "cog":
209 module = "cog"
210 return module + "." + t.__qualname__
211
212
213 def readable_types_list(type_list):
214 return ", ".join(human_readable_type_name(t) for t in type_list)
215
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/cog/predictor.py b/python/cog/predictor.py
--- a/python/cog/predictor.py
+++ b/python/cog/predictor.py
@@ -77,6 +77,11 @@
# Base class for inputs, constructed dynamically in get_input_type().
# (This can't be a docstring or it gets passed through to the schema.)
class BaseInput(BaseModel):
+ class Config:
+ # When using `choices`, the type is converted into an enum to validate
+ # But, after validation, we want to pass the actual value to predict(), not the enum object
+ use_enum_values = True
+
def cleanup(self):
"""
Cleanup any temporary files created by the input.
|
{"golden_diff": "diff --git a/python/cog/predictor.py b/python/cog/predictor.py\n--- a/python/cog/predictor.py\n+++ b/python/cog/predictor.py\n@@ -77,6 +77,11 @@\n # Base class for inputs, constructed dynamically in get_input_type().\n # (This can't be a docstring or it gets passed through to the schema.)\n class BaseInput(BaseModel):\n+ class Config:\n+ # When using `choices`, the type is converted into an enum to validate\n+ # But, after validation, we want to pass the actual value to predict(), not the enum object\n+ use_enum_values = True\n+\n def cleanup(self):\n \"\"\"\n Cleanup any temporary files created by the input.\n", "issue": "`choices` turn input to enum type\nWith 'choices' of strings, the input is no long string but changed to enum type. \r\nRefer to https://github.com/wty-ustc/HairCLIP/pull/16/files#diff-73c1982d8a085dc10fda2ac7b6f202ae3ff9530ee6a15991c5339051eb10a49aR61, where `editing_type` should be string eg. \"both\" but the value shows `editing_type.both` and type `<enum 'editing_type'>`\r\n\r\n\n", "before_files": [{"content": "from abc import ABC, abstractmethod\nfrom collections.abc import Iterator\nimport enum\nimport importlib\nimport inspect\nimport os.path\nfrom pathlib import Path\nfrom pydantic import create_model, BaseModel\nfrom pydantic.fields import FieldInfo\nfrom typing import List\n\n# Added in Python 3.8. Can be from typing if we drop support for <3.8.\nfrom typing_extensions import get_origin, get_args\nimport yaml\n\nfrom .errors import ConfigDoesNotExist, PredictorNotSet\nfrom .types import Input, Path as CogPath, File as CogFile\n\n\nALLOWED_INPUT_TYPES = [str, int, float, bool, CogFile, CogPath]\n\n\nclass BasePredictor(ABC):\n def setup(self):\n \"\"\"\n An optional method to prepare the model so multiple predictions run efficiently.\n \"\"\"\n\n @abstractmethod\n def predict(self, **kwargs):\n \"\"\"\n Run a single prediction on the model\n \"\"\"\n\n\ndef run_prediction(predictor, inputs, cleanup_functions):\n \"\"\"\n Run the predictor on the inputs, and append resulting paths\n to cleanup functions for removal.\n \"\"\"\n result = predictor.predict(**inputs)\n if isinstance(result, Path):\n cleanup_functions.append(result.unlink)\n return result\n\n\ndef load_predictor():\n \"\"\"\n Reads cog.yaml and constructs an instance of the user-defined Predictor class.\n \"\"\"\n\n # Assumes the working directory is /src\n config_path = os.path.abspath(\"cog.yaml\")\n try:\n with open(config_path) as fh:\n config = yaml.safe_load(fh)\n except FileNotFoundError:\n raise ConfigDoesNotExist(\n f\"Could not find {config_path}\",\n )\n\n if \"predict\" not in config:\n raise PredictorNotSet(\n \"Can't run predictions: 'predict' option not found in cog.yaml\"\n )\n\n predict_string = config[\"predict\"]\n module_path, class_name = predict_string.split(\":\", 1)\n module_name = os.path.basename(module_path).split(\".py\", 1)[0]\n spec = importlib.util.spec_from_file_location(module_name, module_path)\n module = importlib.util.module_from_spec(spec)\n spec.loader.exec_module(module)\n predictor_class = getattr(module, class_name)\n return predictor_class()\n\n\n# Base class for inputs, constructed dynamically in get_input_type().\n# (This can't be a docstring or it gets passed through to the schema.)\nclass BaseInput(BaseModel):\n def cleanup(self):\n \"\"\"\n Cleanup any temporary files created by the input.\n \"\"\"\n for _, value in self:\n # Note this is pathlib.Path, which cog.Path is a subclass of. 
A pathlib.Path object shouldn't make its way here,\n # but both have an unlink() method, so may as well be safe.\n if isinstance(value, Path):\n # This could be missing_ok=True when we drop support for Python 3.7\n if value.exists():\n value.unlink()\n\n\ndef get_input_type(predictor: BasePredictor):\n \"\"\"\n Creates a Pydantic Input model from the arguments of a Predictor's predict() method.\n\n class Predictor(BasePredictor):\n def predict(self, text: str):\n ...\n\n programmatically creates a model like this:\n\n class Input(BaseModel):\n text: str\n \"\"\"\n\n signature = inspect.signature(predictor.predict)\n create_model_kwargs = {}\n\n order = 0\n\n for name, parameter in signature.parameters.items():\n InputType = parameter.annotation\n\n if InputType is inspect.Signature.empty:\n raise TypeError(\n f\"No input type provided for parameter `{name}`. Supported input types are: {readable_types_list(ALLOWED_INPUT_TYPES)}.\"\n )\n elif InputType not in ALLOWED_INPUT_TYPES:\n raise TypeError(\n f\"Unsupported input type {human_readable_type_name(InputType)} for parameter `{name}`. Supported input types are: {readable_types_list(ALLOWED_INPUT_TYPES)}.\"\n )\n\n # if no default is specified, create an empty, required input\n if parameter.default is inspect.Signature.empty:\n default = Input()\n else:\n default = parameter.default\n # If user hasn't used `Input`, then wrap it in that\n if not isinstance(default, FieldInfo):\n default = Input(default=default)\n\n # Fields aren't ordered, so use this pattern to ensure defined order\n # https://github.com/go-openapi/spec/pull/116\n default.extra[\"x-order\"] = order\n order += 1\n\n # Choices!\n if default.extra.get(\"choices\"):\n choices = default.extra[\"choices\"]\n # It will be passed automatically as 'enum' in the schema, so remove it as an extra field.\n del default.extra[\"choices\"]\n if InputType == str:\n\n class StringEnum(str, enum.Enum):\n pass\n\n InputType = StringEnum(name, {value: value for value in choices})\n elif InputType == int:\n InputType = enum.IntEnum(name, {str(value): value for value in choices})\n else:\n raise TypeError(\n f\"The input {name} uses the option choices. Choices can only be used with str or int types.\"\n )\n\n create_model_kwargs[name] = (InputType, default)\n\n return create_model(\"Input\", **create_model_kwargs, __base__=BaseInput)\n\n\ndef get_output_type(predictor: BasePredictor):\n \"\"\"\n Creates a Pydantic Output model from the return type annotation of a Predictor's predict() method.\n \"\"\"\n\n signature = inspect.signature(predictor.predict)\n if signature.return_annotation is inspect.Signature.empty:\n raise TypeError(\n \"\"\"You must set an output type. If your model can return multiple output types, you can explicitly set `Any` as the output type.\n\nFor example:\n\n from typing import Any\n\n def predict(\n self,\n image: Path = Input(description=\"Input image\"),\n ) -> Any:\n ...\n\"\"\"\n )\n else:\n OutputType = signature.return_annotation\n\n # The type that goes in the response is a list of the yielded type\n if get_origin(OutputType) is Iterator:\n OutputType = List[get_args(OutputType)[0]]\n\n if not hasattr(OutputType, \"__name__\") or OutputType.__name__ != \"Output\":\n # Wrap the type in a model called \"Output\" so it is a consistent name in the OpenAPI schema\n class Output(BaseModel):\n __root__: OutputType\n\n OutputType = Output\n\n return OutputType\n\n\ndef human_readable_type_name(t):\n \"\"\"\n Generates a useful-for-humans label for a type. 
For builtin types, it's just the class name (eg \"str\" or \"int\"). For other types, it includes the module (eg \"pathlib.Path\" or \"cog.File\").\n\n The special case for Cog modules is because the type lives in `cog.types` internally, but just `cog` when included as a dependency.\n \"\"\"\n module = t.__module__\n if module == \"builtins\":\n return t.__qualname__\n elif module.split(\".\")[0] == \"cog\":\n module = \"cog\"\n return module + \".\" + t.__qualname__\n\n\ndef readable_types_list(type_list):\n return \", \".join(human_readable_type_name(t) for t in type_list)\n", "path": "python/cog/predictor.py"}], "after_files": [{"content": "from abc import ABC, abstractmethod\nfrom collections.abc import Iterator\nimport enum\nimport importlib\nimport inspect\nimport os.path\nfrom pathlib import Path\nfrom pydantic import create_model, BaseModel\nfrom pydantic.fields import FieldInfo\nfrom typing import List\n\n# Added in Python 3.8. Can be from typing if we drop support for <3.8.\nfrom typing_extensions import get_origin, get_args\nimport yaml\n\nfrom .errors import ConfigDoesNotExist, PredictorNotSet\nfrom .types import Input, Path as CogPath, File as CogFile\n\n\nALLOWED_INPUT_TYPES = [str, int, float, bool, CogFile, CogPath]\n\n\nclass BasePredictor(ABC):\n def setup(self):\n \"\"\"\n An optional method to prepare the model so multiple predictions run efficiently.\n \"\"\"\n\n @abstractmethod\n def predict(self, **kwargs):\n \"\"\"\n Run a single prediction on the model\n \"\"\"\n\n\ndef run_prediction(predictor, inputs, cleanup_functions):\n \"\"\"\n Run the predictor on the inputs, and append resulting paths\n to cleanup functions for removal.\n \"\"\"\n result = predictor.predict(**inputs)\n if isinstance(result, Path):\n cleanup_functions.append(result.unlink)\n return result\n\n\ndef load_predictor():\n \"\"\"\n Reads cog.yaml and constructs an instance of the user-defined Predictor class.\n \"\"\"\n\n # Assumes the working directory is /src\n config_path = os.path.abspath(\"cog.yaml\")\n try:\n with open(config_path) as fh:\n config = yaml.safe_load(fh)\n except FileNotFoundError:\n raise ConfigDoesNotExist(\n f\"Could not find {config_path}\",\n )\n\n if \"predict\" not in config:\n raise PredictorNotSet(\n \"Can't run predictions: 'predict' option not found in cog.yaml\"\n )\n\n predict_string = config[\"predict\"]\n module_path, class_name = predict_string.split(\":\", 1)\n module_name = os.path.basename(module_path).split(\".py\", 1)[0]\n spec = importlib.util.spec_from_file_location(module_name, module_path)\n module = importlib.util.module_from_spec(spec)\n spec.loader.exec_module(module)\n predictor_class = getattr(module, class_name)\n return predictor_class()\n\n\n# Base class for inputs, constructed dynamically in get_input_type().\n# (This can't be a docstring or it gets passed through to the schema.)\nclass BaseInput(BaseModel):\n class Config:\n # When using `choices`, the type is converted into an enum to validate\n # But, after validation, we want to pass the actual value to predict(), not the enum object\n use_enum_values = True\n\n def cleanup(self):\n \"\"\"\n Cleanup any temporary files created by the input.\n \"\"\"\n for _, value in self:\n # Note this is pathlib.Path, which cog.Path is a subclass of. 
A pathlib.Path object shouldn't make its way here,\n # but both have an unlink() method, so may as well be safe.\n if isinstance(value, Path):\n # This could be missing_ok=True when we drop support for Python 3.7\n if value.exists():\n value.unlink()\n\n\ndef get_input_type(predictor: BasePredictor):\n \"\"\"\n Creates a Pydantic Input model from the arguments of a Predictor's predict() method.\n\n class Predictor(BasePredictor):\n def predict(self, text: str):\n ...\n\n programmatically creates a model like this:\n\n class Input(BaseModel):\n text: str\n \"\"\"\n\n signature = inspect.signature(predictor.predict)\n create_model_kwargs = {}\n\n order = 0\n\n for name, parameter in signature.parameters.items():\n InputType = parameter.annotation\n\n if InputType is inspect.Signature.empty:\n raise TypeError(\n f\"No input type provided for parameter `{name}`. Supported input types are: {readable_types_list(ALLOWED_INPUT_TYPES)}.\"\n )\n elif InputType not in ALLOWED_INPUT_TYPES:\n raise TypeError(\n f\"Unsupported input type {human_readable_type_name(InputType)} for parameter `{name}`. Supported input types are: {readable_types_list(ALLOWED_INPUT_TYPES)}.\"\n )\n\n # if no default is specified, create an empty, required input\n if parameter.default is inspect.Signature.empty:\n default = Input()\n else:\n default = parameter.default\n # If user hasn't used `Input`, then wrap it in that\n if not isinstance(default, FieldInfo):\n default = Input(default=default)\n\n # Fields aren't ordered, so use this pattern to ensure defined order\n # https://github.com/go-openapi/spec/pull/116\n default.extra[\"x-order\"] = order\n order += 1\n\n # Choices!\n if default.extra.get(\"choices\"):\n choices = default.extra[\"choices\"]\n # It will be passed automatically as 'enum' in the schema, so remove it as an extra field.\n del default.extra[\"choices\"]\n if InputType == str:\n\n class StringEnum(str, enum.Enum):\n pass\n\n InputType = StringEnum(name, {value: value for value in choices})\n elif InputType == int:\n InputType = enum.IntEnum(name, {str(value): value for value in choices})\n else:\n raise TypeError(\n f\"The input {name} uses the option choices. Choices can only be used with str or int types.\"\n )\n\n create_model_kwargs[name] = (InputType, default)\n\n return create_model(\"Input\", **create_model_kwargs, __base__=BaseInput)\n\n\ndef get_output_type(predictor: BasePredictor):\n \"\"\"\n Creates a Pydantic Output model from the return type annotation of a Predictor's predict() method.\n \"\"\"\n\n signature = inspect.signature(predictor.predict)\n if signature.return_annotation is inspect.Signature.empty:\n raise TypeError(\n \"\"\"You must set an output type. If your model can return multiple output types, you can explicitly set `Any` as the output type.\n\nFor example:\n\n from typing import Any\n\n def predict(\n self,\n image: Path = Input(description=\"Input image\"),\n ) -> Any:\n ...\n\"\"\"\n )\n else:\n OutputType = signature.return_annotation\n\n # The type that goes in the response is a list of the yielded type\n if get_origin(OutputType) is Iterator:\n OutputType = List[get_args(OutputType)[0]]\n\n if not hasattr(OutputType, \"__name__\") or OutputType.__name__ != \"Output\":\n # Wrap the type in a model called \"Output\" so it is a consistent name in the OpenAPI schema\n class Output(BaseModel):\n __root__: OutputType\n\n OutputType = Output\n\n return OutputType\n\n\ndef human_readable_type_name(t):\n \"\"\"\n Generates a useful-for-humans label for a type. 
For builtin types, it's just the class name (eg \"str\" or \"int\"). For other types, it includes the module (eg \"pathlib.Path\" or \"cog.File\").\n\n The special case for Cog modules is because the type lives in `cog.types` internally, but just `cog` when included as a dependency.\n \"\"\"\n module = t.__module__\n if module == \"builtins\":\n return t.__qualname__\n elif module.split(\".\")[0] == \"cog\":\n module = \"cog\"\n return module + \".\" + t.__qualname__\n\n\ndef readable_types_list(type_list):\n return \", \".join(human_readable_type_name(t) for t in type_list)\n", "path": "python/cog/predictor.py"}]}
| 2,537 | 162 |
gh_patches_debug_10569 | rasdani/github-patches | git_diff | gratipay__gratipay.com-2047 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Username change fails silently
When you change your username, you're querying [`username.json.spt`](https://github.com/gittip/www.gittip.com/blob/0464c57465aed49a95a2c546f0d9987ad5b9b3fa/www/%25username/username.json.spt). If the desired username is invalid, we respond back with a user-friendly message (though the UI is ugly). Unfortunately, this behavior is currently broken because it returns HTML instead of the expected JSON.
[IRC](https://botbot.me/freenode/gittip/msg/9518377/), from working on #1849
Thanks to @thomasboyt for [reporting the problem](https://botbot.me/freenode/gittip/msg/9517625/) :heart:
The Aspen ticket for this is: gittip/aspen-python#279
Username change fails silently
When you change your username, you're querying [`username.json.spt`](https://github.com/gittip/www.gittip.com/blob/0464c57465aed49a95a2c546f0d9987ad5b9b3fa/www/%25username/username.json.spt). If the desired username is invalid, we respond back with a user-friendly message (though the UI is ugly). Unfortunately, this behavior is currently broken because it returns HTML instead of the expected JSON.
[IRC](https://botbot.me/freenode/gittip/msg/9518377/), from working on #1849
Thanks to @thomasboyt for [reporting the problem](https://botbot.me/freenode/gittip/msg/9517625/) :heart:
The Aspen ticket for this is: gittip/aspen-python#279
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gittip/exceptions.py`
Content:
```
1 """
2 This module contains exceptions shared across application code.
3 """
4
5 from __future__ import print_function, unicode_literals
6
7
8
9 class UnknownPlatform(Exception): pass
10
11 class ProblemChangingUsername(Exception):
12 def __str__(self):
13 return self.msg.format(self.args[0])
14
15 class UsernameIsEmpty(ProblemChangingUsername):
16 msg = "You need to provide a username!"
17
18 class UsernameTooLong(ProblemChangingUsername):
19 msg = "The username '{}' is too long."
20
21 # Not passing the potentially unicode characters back because of:
22 # https://github.com/gittip/aspen-python/issues/177
23 class UsernameContainsInvalidCharacters(ProblemChangingUsername):
24 msg = "That username contains invalid characters."
25
26 class UsernameIsRestricted(ProblemChangingUsername):
27 msg = "The username '{}' is restricted."
28
29 class UsernameAlreadyTaken(ProblemChangingUsername):
30 msg = "The username '{}' is already taken."
31
32 class TooGreedy(Exception): pass
33 class NoSelfTipping(Exception): pass
34 class BadAmount(Exception): pass
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gittip/exceptions.py b/gittip/exceptions.py
--- a/gittip/exceptions.py
+++ b/gittip/exceptions.py
@@ -18,10 +18,8 @@
class UsernameTooLong(ProblemChangingUsername):
msg = "The username '{}' is too long."
-# Not passing the potentially unicode characters back because of:
-# https://github.com/gittip/aspen-python/issues/177
class UsernameContainsInvalidCharacters(ProblemChangingUsername):
- msg = "That username contains invalid characters."
+ msg = "The username '{}' contains invalid characters."
class UsernameIsRestricted(ProblemChangingUsername):
msg = "The username '{}' is restricted."
|
{"golden_diff": "diff --git a/gittip/exceptions.py b/gittip/exceptions.py\n--- a/gittip/exceptions.py\n+++ b/gittip/exceptions.py\n@@ -18,10 +18,8 @@\n class UsernameTooLong(ProblemChangingUsername):\n msg = \"The username '{}' is too long.\"\n \n-# Not passing the potentially unicode characters back because of:\n-# https://github.com/gittip/aspen-python/issues/177\n class UsernameContainsInvalidCharacters(ProblemChangingUsername):\n- msg = \"That username contains invalid characters.\"\n+ msg = \"The username '{}' contains invalid characters.\"\n \n class UsernameIsRestricted(ProblemChangingUsername):\n msg = \"The username '{}' is restricted.\"\n", "issue": "Username change fails silently\nWhen you change your username, you're querying [`username.json.spt`](https://github.com/gittip/www.gittip.com/blob/0464c57465aed49a95a2c546f0d9987ad5b9b3fa/www/%25username/username.json.spt). If the desired username is invalid, we respond back with a user-friendly message (though the UI is ugly). Unfortunately, this behavior is currently broken because it returns HTML instead of the expected JSON.\n\n[IRC](https://botbot.me/freenode/gittip/msg/9518377/), from working on #1849\n\nThanks to @thomasboyt for [reporting the problem](https://botbot.me/freenode/gittip/msg/9517625/) :heart:\n\nThe Aspen ticket for this is: gittip/aspen-python#279\n\nUsername change fails silently\nWhen you change your username, you're querying [`username.json.spt`](https://github.com/gittip/www.gittip.com/blob/0464c57465aed49a95a2c546f0d9987ad5b9b3fa/www/%25username/username.json.spt). If the desired username is invalid, we respond back with a user-friendly message (though the UI is ugly). Unfortunately, this behavior is currently broken because it returns HTML instead of the expected JSON.\n\n[IRC](https://botbot.me/freenode/gittip/msg/9518377/), from working on #1849\n\nThanks to @thomasboyt for [reporting the problem](https://botbot.me/freenode/gittip/msg/9517625/) :heart:\n\nThe Aspen ticket for this is: gittip/aspen-python#279\n\n", "before_files": [{"content": "\"\"\"\nThis module contains exceptions shared across application code.\n\"\"\"\n\nfrom __future__ import print_function, unicode_literals\n\n\n\nclass UnknownPlatform(Exception): pass\n\nclass ProblemChangingUsername(Exception):\n def __str__(self):\n return self.msg.format(self.args[0])\n\nclass UsernameIsEmpty(ProblemChangingUsername):\n msg = \"You need to provide a username!\"\n\nclass UsernameTooLong(ProblemChangingUsername):\n msg = \"The username '{}' is too long.\"\n\n# Not passing the potentially unicode characters back because of:\n# https://github.com/gittip/aspen-python/issues/177\nclass UsernameContainsInvalidCharacters(ProblemChangingUsername):\n msg = \"That username contains invalid characters.\"\n\nclass UsernameIsRestricted(ProblemChangingUsername):\n msg = \"The username '{}' is restricted.\"\n\nclass UsernameAlreadyTaken(ProblemChangingUsername):\n msg = \"The username '{}' is already taken.\"\n\nclass TooGreedy(Exception): pass\nclass NoSelfTipping(Exception): pass\nclass BadAmount(Exception): pass\n", "path": "gittip/exceptions.py"}], "after_files": [{"content": "\"\"\"\nThis module contains exceptions shared across application code.\n\"\"\"\n\nfrom __future__ import print_function, unicode_literals\n\n\n\nclass UnknownPlatform(Exception): pass\n\nclass ProblemChangingUsername(Exception):\n def __str__(self):\n return self.msg.format(self.args[0])\n\nclass UsernameIsEmpty(ProblemChangingUsername):\n msg = \"You need to provide a 
username!\"\n\nclass UsernameTooLong(ProblemChangingUsername):\n msg = \"The username '{}' is too long.\"\n\nclass UsernameContainsInvalidCharacters(ProblemChangingUsername):\n msg = \"The username '{}' contains invalid characters.\"\n\nclass UsernameIsRestricted(ProblemChangingUsername):\n msg = \"The username '{}' is restricted.\"\n\nclass UsernameAlreadyTaken(ProblemChangingUsername):\n msg = \"The username '{}' is already taken.\"\n\nclass TooGreedy(Exception): pass\nclass NoSelfTipping(Exception): pass\nclass BadAmount(Exception): pass\n", "path": "gittip/exceptions.py"}]}
| 954 | 153 |
gh_patches_debug_9668
|
rasdani/github-patches
|
git_diff
|
secdev__scapy-1928
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TTL setting in IGMP.igmpize method
#### Brief description
IGMP.igmpize method in Scapy 2.3.3 was setting IP.ttl value to 1. Currently ttl is not modified by this function.
I consider this a bug as in docstring one can read:
> 4. ttl = 1 (RFC 2236, section 2)
The new behaviour has been introduced in revision 8329256dcaefb0aa7e1c9ec95e15abdd7607362a
If you agree it is actually a bug, then I can provide fix :)
#### Environment
- Scapy version: 2.4.2
- Python 2.7
#### How to reproduce
**Scapy 2.4.2**
```
>>> p = Ether() / IP(ttl=64) / IGMP()
>>> p.ttl
64
>>> p[IGMP].igmpize()
True
>>> p.ttl
64
```
**Scapy 2.3.3**
```
>>> p = Ether() / IP(ttl=64) / IGMP()
>>> p.ttl
64
>>> p[IGMP].igmpize(p[IP], p[Ether])
True
>>> p.ttl
1
```
#### Actual result
```
>>> p[IGMP].igmpize()
True
>>> p.ttl
64
```
#### Expected result
```
>>> p[IGMP].igmpize()
True
>>> p.ttl
1
```
#### Related resources
Full docstring of igmpize:
```
"""Called to explicitly fixup the packet according to the IGMP RFC
The rules are:
General:
1. the Max Response time is meaningful only in Membership Queries and should be zero
IP:
1. Send General Group Query to 224.0.0.1 (all systems)
2. Send Leave Group to 224.0.0.2 (all routers)
3a.Otherwise send the packet to the group address
3b.Send reports/joins to the group address
4. ttl = 1 (RFC 2236, section 2)
5. send the packet with the router alert IP option (RFC 2236, section 2)
Ether:
1. Recalculate destination
Returns:
True The tuple ether/ip/self passed all check and represents
a proper IGMP packet.
False One of more validation checks failed and no fields
were adjusted.
The function will examine the IGMP message to assure proper format.
Corrections will be attempted if possible. The IP header is then properly
adjusted to ensure correct formatting and assignment. The Ethernet header
is then adjusted to the proper IGMP packet format.
"""
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scapy/contrib/igmp.py`
Content:
```
1 #! /usr/bin/env python
2
3 # This file is part of Scapy
4 # Scapy is free software: you can redistribute it and/or modify
5 # it under the terms of the GNU General Public License as published by
6 # the Free Software Foundation, either version 2 of the License, or
7 # any later version.
8 #
9 # Scapy is distributed in the hope that it will be useful,
10 # but WITHOUT ANY WARRANTY; without even the implied warranty of
11 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 # GNU General Public License for more details.
13 #
14 # You should have received a copy of the GNU General Public License
15 # along with Scapy. If not, see <http://www.gnu.org/licenses/>.
16
17 # flake8: noqa: E501
18
19 # scapy.contrib.description = Internet Group Management Protocol v1/v2 (IGMP/IGMPv2)
20 # scapy.contrib.status = loads
21
22 from __future__ import print_function
23 from scapy.compat import chb, orb
24 from scapy.error import warning
25 from scapy.fields import ByteEnumField, ByteField, IPField, XShortField
26 from scapy.layers.inet import IP, IPOption_Router_Alert
27 from scapy.layers.l2 import Ether, getmacbyip
28 from scapy.packet import bind_layers, Packet
29 from scapy.utils import atol, checksum
30
31
32 def isValidMCAddr(ip):
33 """convert dotted quad string to long and check the first octet"""
34 FirstOct = atol(ip) >> 24 & 0xFF
35 return (FirstOct >= 224) and (FirstOct <= 239)
36
37
38 class IGMP(Packet):
39 """IGMP Message Class for v1 and v2.
40
41 This class is derived from class Packet. You need call "igmpize()"
42 so the packet is transformed according the RFC when sent.
43 a=Ether(src="00:01:02:03:04:05")
44 b=IP(src="1.2.3.4")
45 c=IGMP(type=0x12, gaddr="224.2.3.4")
46 x = a/b/c
47 x[IGMP].igmpize()
48 sendp(a/b/c, iface="en0")
49
50 Parameters:
51 type IGMP type field, 0x11, 0x12, 0x16 or 0x17
52 mrcode Maximum Response time (zero for v1)
53 gaddr Multicast Group Address 224.x.x.x/4
54
55 See RFC2236, Section 2. Introduction for definitions of proper
56 IGMPv2 message format http://www.faqs.org/rfcs/rfc2236.html
57
58 """
59 name = "IGMP"
60
61 igmptypes = {0x11: "Group Membership Query",
62 0x12: "Version 1 - Membership Report",
63 0x16: "Version 2 - Membership Report",
64 0x17: "Leave Group"}
65
66 fields_desc = [ByteEnumField("type", 0x11, igmptypes),
67 ByteField("mrcode", 20),
68 XShortField("chksum", None),
69 IPField("gaddr", "0.0.0.0")]
70
71 def post_build(self, p, pay):
72 """Called implicitly before a packet is sent to compute and place IGMP checksum.
73
74 Parameters:
75 self The instantiation of an IGMP class
76 p The IGMP message in hex in network byte order
77 pay Additional payload for the IGMP message
78 """
79 p += pay
80 if self.chksum is None:
81 ck = checksum(p)
82 p = p[:2] + chb(ck >> 8) + chb(ck & 0xff) + p[4:]
83 return p
84
85 @classmethod
86 def dispatch_hook(cls, _pkt=None, *args, **kargs):
87 if _pkt and len(_pkt) >= 4:
88 from scapy.contrib.igmpv3 import IGMPv3
89 if orb(_pkt[0]) in [0x22, 0x30, 0x31, 0x32]:
90 return IGMPv3
91 if orb(_pkt[0]) == 0x11 and len(_pkt) >= 12:
92 return IGMPv3
93 return IGMP
94
95 def igmpize(self):
96 """Called to explicitly fixup the packet according to the IGMP RFC
97
98 The rules are:
99 General:
100 1. the Max Response time is meaningful only in Membership Queries and should be zero
101 IP:
102 1. Send General Group Query to 224.0.0.1 (all systems)
103 2. Send Leave Group to 224.0.0.2 (all routers)
104 3a.Otherwise send the packet to the group address
105 3b.Send reports/joins to the group address
106 4. ttl = 1 (RFC 2236, section 2)
107 5. send the packet with the router alert IP option (RFC 2236, section 2)
108 Ether:
109 1. Recalculate destination
110
111 Returns:
112 True The tuple ether/ip/self passed all check and represents
113 a proper IGMP packet.
114 False One of more validation checks failed and no fields
115 were adjusted.
116
117 The function will examine the IGMP message to assure proper format.
118 Corrections will be attempted if possible. The IP header is then properly
119 adjusted to ensure correct formatting and assignment. The Ethernet header
120 is then adjusted to the proper IGMP packet format.
121 """
122 gaddr = self.gaddr if hasattr(self, "gaddr") and self.gaddr else "0.0.0.0" # noqa: E501
123 underlayer = self.underlayer
124 if self.type not in [0x11, 0x30]: # General Rule 1 # noqa: E501
125 self.mrcode = 0
126 if isinstance(underlayer, IP):
127 if (self.type == 0x11):
128 if (gaddr == "0.0.0.0"):
129 underlayer.dst = "224.0.0.1" # IP rule 1 # noqa: E501
130 elif isValidMCAddr(gaddr):
131 underlayer.dst = gaddr # IP rule 3a # noqa: E501
132 else:
133 warning("Invalid IGMP Group Address detected !")
134 return False
135 elif ((self.type == 0x17) and isValidMCAddr(gaddr)):
136 underlayer.dst = "224.0.0.2" # IP rule 2 # noqa: E501
137 elif ((self.type == 0x12) or (self.type == 0x16)) and (isValidMCAddr(gaddr)): # noqa: E501
138 underlayer.dst = gaddr # IP rule 3b # noqa: E501
139 else:
140 warning("Invalid IGMP Type detected !")
141 return False
142 if not any(isinstance(x, IPOption_Router_Alert) for x in underlayer.options): # noqa: E501
143 underlayer.options.append(IPOption_Router_Alert())
144 _root = self.firstlayer()
145 if _root.haslayer(Ether):
146 # Force recalculate Ether dst
147 _root[Ether].dst = getmacbyip(underlayer.dst) # Ether rule 1 # noqa: E501
148 from scapy.contrib.igmpv3 import IGMPv3
149 if isinstance(self, IGMPv3):
150 self.encode_maxrespcode()
151 return True
152
153 def mysummary(self):
154 """Display a summary of the IGMP object."""
155 if isinstance(self.underlayer, IP):
156 return self.underlayer.sprintf("IGMP: %IP.src% > %IP.dst% %IGMP.type% %IGMP.gaddr%") # noqa: E501
157 else:
158 return self.sprintf("IGMP %IGMP.type% %IGMP.gaddr%")
159
160
161 bind_layers(IP, IGMP, frag=0,
162 proto=2,
163 ttl=1)
164
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scapy/contrib/igmp.py b/scapy/contrib/igmp.py
--- a/scapy/contrib/igmp.py
+++ b/scapy/contrib/igmp.py
@@ -141,6 +141,7 @@
return False
if not any(isinstance(x, IPOption_Router_Alert) for x in underlayer.options): # noqa: E501
underlayer.options.append(IPOption_Router_Alert())
+ underlayer.ttl = 1 # IP rule 4
_root = self.firstlayer()
if _root.haslayer(Ether):
# Force recalculate Ether dst
|
{"golden_diff": "diff --git a/scapy/contrib/igmp.py b/scapy/contrib/igmp.py\n--- a/scapy/contrib/igmp.py\n+++ b/scapy/contrib/igmp.py\n@@ -141,6 +141,7 @@\n return False\n if not any(isinstance(x, IPOption_Router_Alert) for x in underlayer.options): # noqa: E501\n underlayer.options.append(IPOption_Router_Alert())\n+ underlayer.ttl = 1 # IP rule 4\n _root = self.firstlayer()\n if _root.haslayer(Ether):\n # Force recalculate Ether dst\n", "issue": "TTL setting in IGMP.igmpize method\n#### Brief description\r\n\r\nIGMP.igmpize method in Scapy 2.3.3 was setting IP.ttl value to 1. Currently ttl is not modified by this function. \r\n\r\nI consider this a bug as in docstring one can read: \r\n> 4. ttl = 1 (RFC 2236, section 2)\r\n\r\nThe new behaviour has been introduced in revision 8329256dcaefb0aa7e1c9ec95e15abdd7607362a\r\n\r\nIf you agree it is actually a bug, then I can provide fix :) \r\n\r\n#### Environment\r\n\r\n- Scapy version: 2.4.2\r\n- Python 2.7\r\n\r\n#### How to reproduce\r\n\r\n**Scapy 2.4.2**\r\n\r\n```\r\n>>> p = Ether() / IP(ttl=64) / IGMP()\r\n>>> p.ttl\r\n64\r\n>>> p[IGMP].igmpize()\r\nTrue\r\n>>> p.ttl\r\n64\r\n```\r\n\r\n**Scapy 2.3.3**\r\n\r\n```\r\n>>> p = Ether() / IP(ttl=64) / IGMP()\r\n>>> p.ttl\r\n64\r\n>>> p[IGMP].igmpize(p[IP], p[Ether])\r\nTrue\r\n>>> p.ttl\r\n1\r\n```\r\n\r\n\r\n#### Actual result\r\n\r\n```\r\n>>> p[IGMP].igmpize()\r\nTrue\r\n>>> p.ttl\r\n64\r\n```\r\n\r\n#### Expected result\r\n\r\n```\r\n>>> p[IGMP].igmpize()\r\nTrue\r\n>>> p.ttl\r\n1\r\n```\r\n\r\n#### Related resources\r\n\r\nFull docstring of igmpize:\r\n\r\n```\r\n \"\"\"Called to explicitly fixup the packet according to the IGMP RFC\r\n\r\n The rules are:\r\n General:\r\n 1. the Max Response time is meaningful only in Membership Queries and should be zero\r\n IP:\r\n 1. Send General Group Query to 224.0.0.1 (all systems)\r\n 2. Send Leave Group to 224.0.0.2 (all routers)\r\n 3a.Otherwise send the packet to the group address\r\n 3b.Send reports/joins to the group address\r\n 4. ttl = 1 (RFC 2236, section 2)\r\n 5. send the packet with the router alert IP option (RFC 2236, section 2)\r\n Ether:\r\n 1. Recalculate destination\r\n\r\n Returns:\r\n True The tuple ether/ip/self passed all check and represents\r\n a proper IGMP packet.\r\n False One of more validation checks failed and no fields\r\n were adjusted.\r\n\r\n The function will examine the IGMP message to assure proper format.\r\n Corrections will be attempted if possible. The IP header is then properly\r\n adjusted to ensure correct formatting and assignment. The Ethernet header\r\n is then adjusted to the proper IGMP packet format.\r\n \"\"\"\r\n```\n", "before_files": [{"content": "#! /usr/bin/env python\n\n# This file is part of Scapy\n# Scapy is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 2 of the License, or\n# any later version.\n#\n# Scapy is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Scapy. 
If not, see <http://www.gnu.org/licenses/>.\n\n# flake8: noqa: E501\n\n# scapy.contrib.description = Internet Group Management Protocol v1/v2 (IGMP/IGMPv2)\n# scapy.contrib.status = loads\n\nfrom __future__ import print_function\nfrom scapy.compat import chb, orb\nfrom scapy.error import warning\nfrom scapy.fields import ByteEnumField, ByteField, IPField, XShortField\nfrom scapy.layers.inet import IP, IPOption_Router_Alert\nfrom scapy.layers.l2 import Ether, getmacbyip\nfrom scapy.packet import bind_layers, Packet\nfrom scapy.utils import atol, checksum\n\n\ndef isValidMCAddr(ip):\n \"\"\"convert dotted quad string to long and check the first octet\"\"\"\n FirstOct = atol(ip) >> 24 & 0xFF\n return (FirstOct >= 224) and (FirstOct <= 239)\n\n\nclass IGMP(Packet):\n \"\"\"IGMP Message Class for v1 and v2.\n\nThis class is derived from class Packet. You need call \"igmpize()\"\nso the packet is transformed according the RFC when sent.\na=Ether(src=\"00:01:02:03:04:05\")\nb=IP(src=\"1.2.3.4\")\nc=IGMP(type=0x12, gaddr=\"224.2.3.4\")\nx = a/b/c\nx[IGMP].igmpize()\nsendp(a/b/c, iface=\"en0\")\n\n Parameters:\n type IGMP type field, 0x11, 0x12, 0x16 or 0x17\n mrcode Maximum Response time (zero for v1)\n gaddr Multicast Group Address 224.x.x.x/4\n\nSee RFC2236, Section 2. Introduction for definitions of proper\nIGMPv2 message format http://www.faqs.org/rfcs/rfc2236.html\n\n \"\"\"\n name = \"IGMP\"\n\n igmptypes = {0x11: \"Group Membership Query\",\n 0x12: \"Version 1 - Membership Report\",\n 0x16: \"Version 2 - Membership Report\",\n 0x17: \"Leave Group\"}\n\n fields_desc = [ByteEnumField(\"type\", 0x11, igmptypes),\n ByteField(\"mrcode\", 20),\n XShortField(\"chksum\", None),\n IPField(\"gaddr\", \"0.0.0.0\")]\n\n def post_build(self, p, pay):\n \"\"\"Called implicitly before a packet is sent to compute and place IGMP checksum.\n\n Parameters:\n self The instantiation of an IGMP class\n p The IGMP message in hex in network byte order\n pay Additional payload for the IGMP message\n \"\"\"\n p += pay\n if self.chksum is None:\n ck = checksum(p)\n p = p[:2] + chb(ck >> 8) + chb(ck & 0xff) + p[4:]\n return p\n\n @classmethod\n def dispatch_hook(cls, _pkt=None, *args, **kargs):\n if _pkt and len(_pkt) >= 4:\n from scapy.contrib.igmpv3 import IGMPv3\n if orb(_pkt[0]) in [0x22, 0x30, 0x31, 0x32]:\n return IGMPv3\n if orb(_pkt[0]) == 0x11 and len(_pkt) >= 12:\n return IGMPv3\n return IGMP\n\n def igmpize(self):\n \"\"\"Called to explicitly fixup the packet according to the IGMP RFC\n\n The rules are:\n General:\n 1. the Max Response time is meaningful only in Membership Queries and should be zero\n IP:\n 1. Send General Group Query to 224.0.0.1 (all systems)\n 2. Send Leave Group to 224.0.0.2 (all routers)\n 3a.Otherwise send the packet to the group address\n 3b.Send reports/joins to the group address\n 4. ttl = 1 (RFC 2236, section 2)\n 5. send the packet with the router alert IP option (RFC 2236, section 2)\n Ether:\n 1. Recalculate destination\n\n Returns:\n True The tuple ether/ip/self passed all check and represents\n a proper IGMP packet.\n False One of more validation checks failed and no fields\n were adjusted.\n\n The function will examine the IGMP message to assure proper format.\n Corrections will be attempted if possible. The IP header is then properly\n adjusted to ensure correct formatting and assignment. 
The Ethernet header\n is then adjusted to the proper IGMP packet format.\n \"\"\"\n gaddr = self.gaddr if hasattr(self, \"gaddr\") and self.gaddr else \"0.0.0.0\" # noqa: E501\n underlayer = self.underlayer\n if self.type not in [0x11, 0x30]: # General Rule 1 # noqa: E501\n self.mrcode = 0\n if isinstance(underlayer, IP):\n if (self.type == 0x11):\n if (gaddr == \"0.0.0.0\"):\n underlayer.dst = \"224.0.0.1\" # IP rule 1 # noqa: E501\n elif isValidMCAddr(gaddr):\n underlayer.dst = gaddr # IP rule 3a # noqa: E501\n else:\n warning(\"Invalid IGMP Group Address detected !\")\n return False\n elif ((self.type == 0x17) and isValidMCAddr(gaddr)):\n underlayer.dst = \"224.0.0.2\" # IP rule 2 # noqa: E501\n elif ((self.type == 0x12) or (self.type == 0x16)) and (isValidMCAddr(gaddr)): # noqa: E501\n underlayer.dst = gaddr # IP rule 3b # noqa: E501\n else:\n warning(\"Invalid IGMP Type detected !\")\n return False\n if not any(isinstance(x, IPOption_Router_Alert) for x in underlayer.options): # noqa: E501\n underlayer.options.append(IPOption_Router_Alert())\n _root = self.firstlayer()\n if _root.haslayer(Ether):\n # Force recalculate Ether dst\n _root[Ether].dst = getmacbyip(underlayer.dst) # Ether rule 1 # noqa: E501\n from scapy.contrib.igmpv3 import IGMPv3\n if isinstance(self, IGMPv3):\n self.encode_maxrespcode()\n return True\n\n def mysummary(self):\n \"\"\"Display a summary of the IGMP object.\"\"\"\n if isinstance(self.underlayer, IP):\n return self.underlayer.sprintf(\"IGMP: %IP.src% > %IP.dst% %IGMP.type% %IGMP.gaddr%\") # noqa: E501\n else:\n return self.sprintf(\"IGMP %IGMP.type% %IGMP.gaddr%\")\n\n\nbind_layers(IP, IGMP, frag=0,\n proto=2,\n ttl=1)\n", "path": "scapy/contrib/igmp.py"}], "after_files": [{"content": "#! /usr/bin/env python\n\n# This file is part of Scapy\n# Scapy is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 2 of the License, or\n# any later version.\n#\n# Scapy is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Scapy. If not, see <http://www.gnu.org/licenses/>.\n\n# flake8: noqa: E501\n\n# scapy.contrib.description = Internet Group Management Protocol v1/v2 (IGMP/IGMPv2)\n# scapy.contrib.status = loads\n\nfrom __future__ import print_function\nfrom scapy.compat import chb, orb\nfrom scapy.error import warning\nfrom scapy.fields import ByteEnumField, ByteField, IPField, XShortField\nfrom scapy.layers.inet import IP, IPOption_Router_Alert\nfrom scapy.layers.l2 import Ether, getmacbyip\nfrom scapy.packet import bind_layers, Packet\nfrom scapy.utils import atol, checksum\n\n\ndef isValidMCAddr(ip):\n \"\"\"convert dotted quad string to long and check the first octet\"\"\"\n FirstOct = atol(ip) >> 24 & 0xFF\n return (FirstOct >= 224) and (FirstOct <= 239)\n\n\nclass IGMP(Packet):\n \"\"\"IGMP Message Class for v1 and v2.\n\nThis class is derived from class Packet. 
You need call \"igmpize()\"\nso the packet is transformed according the RFC when sent.\na=Ether(src=\"00:01:02:03:04:05\")\nb=IP(src=\"1.2.3.4\")\nc=IGMP(type=0x12, gaddr=\"224.2.3.4\")\nx = a/b/c\nx[IGMP].igmpize()\nsendp(a/b/c, iface=\"en0\")\n\n Parameters:\n type IGMP type field, 0x11, 0x12, 0x16 or 0x17\n mrcode Maximum Response time (zero for v1)\n gaddr Multicast Group Address 224.x.x.x/4\n\nSee RFC2236, Section 2. Introduction for definitions of proper\nIGMPv2 message format http://www.faqs.org/rfcs/rfc2236.html\n\n \"\"\"\n name = \"IGMP\"\n\n igmptypes = {0x11: \"Group Membership Query\",\n 0x12: \"Version 1 - Membership Report\",\n 0x16: \"Version 2 - Membership Report\",\n 0x17: \"Leave Group\"}\n\n fields_desc = [ByteEnumField(\"type\", 0x11, igmptypes),\n ByteField(\"mrcode\", 20),\n XShortField(\"chksum\", None),\n IPField(\"gaddr\", \"0.0.0.0\")]\n\n def post_build(self, p, pay):\n \"\"\"Called implicitly before a packet is sent to compute and place IGMP checksum.\n\n Parameters:\n self The instantiation of an IGMP class\n p The IGMP message in hex in network byte order\n pay Additional payload for the IGMP message\n \"\"\"\n p += pay\n if self.chksum is None:\n ck = checksum(p)\n p = p[:2] + chb(ck >> 8) + chb(ck & 0xff) + p[4:]\n return p\n\n @classmethod\n def dispatch_hook(cls, _pkt=None, *args, **kargs):\n if _pkt and len(_pkt) >= 4:\n from scapy.contrib.igmpv3 import IGMPv3\n if orb(_pkt[0]) in [0x22, 0x30, 0x31, 0x32]:\n return IGMPv3\n if orb(_pkt[0]) == 0x11 and len(_pkt) >= 12:\n return IGMPv3\n return IGMP\n\n def igmpize(self):\n \"\"\"Called to explicitly fixup the packet according to the IGMP RFC\n\n The rules are:\n General:\n 1. the Max Response time is meaningful only in Membership Queries and should be zero\n IP:\n 1. Send General Group Query to 224.0.0.1 (all systems)\n 2. Send Leave Group to 224.0.0.2 (all routers)\n 3a.Otherwise send the packet to the group address\n 3b.Send reports/joins to the group address\n 4. ttl = 1 (RFC 2236, section 2)\n 5. send the packet with the router alert IP option (RFC 2236, section 2)\n Ether:\n 1. Recalculate destination\n\n Returns:\n True The tuple ether/ip/self passed all check and represents\n a proper IGMP packet.\n False One of more validation checks failed and no fields\n were adjusted.\n\n The function will examine the IGMP message to assure proper format.\n Corrections will be attempted if possible. The IP header is then properly\n adjusted to ensure correct formatting and assignment. 
The Ethernet header\n is then adjusted to the proper IGMP packet format.\n \"\"\"\n gaddr = self.gaddr if hasattr(self, \"gaddr\") and self.gaddr else \"0.0.0.0\" # noqa: E501\n underlayer = self.underlayer\n if self.type not in [0x11, 0x30]: # General Rule 1 # noqa: E501\n self.mrcode = 0\n if isinstance(underlayer, IP):\n if (self.type == 0x11):\n if (gaddr == \"0.0.0.0\"):\n underlayer.dst = \"224.0.0.1\" # IP rule 1 # noqa: E501\n elif isValidMCAddr(gaddr):\n underlayer.dst = gaddr # IP rule 3a # noqa: E501\n else:\n warning(\"Invalid IGMP Group Address detected !\")\n return False\n elif ((self.type == 0x17) and isValidMCAddr(gaddr)):\n underlayer.dst = \"224.0.0.2\" # IP rule 2 # noqa: E501\n elif ((self.type == 0x12) or (self.type == 0x16)) and (isValidMCAddr(gaddr)): # noqa: E501\n underlayer.dst = gaddr # IP rule 3b # noqa: E501\n else:\n warning(\"Invalid IGMP Type detected !\")\n return False\n if not any(isinstance(x, IPOption_Router_Alert) for x in underlayer.options): # noqa: E501\n underlayer.options.append(IPOption_Router_Alert())\n underlayer.ttl = 1 # IP rule 4\n _root = self.firstlayer()\n if _root.haslayer(Ether):\n # Force recalculate Ether dst\n _root[Ether].dst = getmacbyip(underlayer.dst) # Ether rule 1 # noqa: E501\n from scapy.contrib.igmpv3 import IGMPv3\n if isinstance(self, IGMPv3):\n self.encode_maxrespcode()\n return True\n\n def mysummary(self):\n \"\"\"Display a summary of the IGMP object.\"\"\"\n if isinstance(self.underlayer, IP):\n return self.underlayer.sprintf(\"IGMP: %IP.src% > %IP.dst% %IGMP.type% %IGMP.gaddr%\") # noqa: E501\n else:\n return self.sprintf(\"IGMP %IGMP.type% %IGMP.gaddr%\")\n\n\nbind_layers(IP, IGMP, frag=0,\n proto=2,\n ttl=1)\n", "path": "scapy/contrib/igmp.py"}]}
| 3,133 | 146 |
gh_patches_debug_23361
|
rasdani/github-patches
|
git_diff
|
NVIDIA__NVFlare-106
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing config_validator option in provisioning tool
The provisioning tool needs a config_validator option so that the generated fed_server.json can include that information.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nvflare/lighter/impl/static_file.py`
Content:
```
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import json
16 import os
17
18 from nvflare.lighter.spec import Builder
19 from nvflare.lighter.utils import sh_replace
20
21
22 class StaticFileBuilder(Builder):
23 def __init__(self, enable_byoc=False, config_folder="", docker_image=""):
24 self.enable_byoc = enable_byoc
25 self.config_folder = config_folder
26 self.docker_image = docker_image
27
28 def _write(self, file_full_path, content, mode, exe=False):
29 mode = mode + "w"
30 with open(file_full_path, mode) as f:
31 f.write(content)
32 if exe:
33 os.chmod(file_full_path, 0o755)
34
35 def _build_server(self, server, ctx):
36 config = json.loads(self.template["fed_server"])
37 dest_dir = self.get_kit_dir(server, ctx)
38 server_0 = config["servers"][0]
39 server_0["name"] = self.study_name
40 admin_port = server.props.get("admin_port", 8003)
41 ctx["admin_port"] = admin_port
42 fed_learn_port = server.props.get("fed_learn_port", 8002)
43 ctx["fed_learn_port"] = fed_learn_port
44 ctx["server_name"] = server.name
45 server_0["service"]["target"] = f"{server.name}:{fed_learn_port}"
46 server_0["admin_host"] = server.name
47 server_0["admin_port"] = admin_port
48 config["enable_byoc"] = server.enable_byoc
49 self._write(os.path.join(dest_dir, "fed_server.json"), json.dumps(config), "t")
50 replacement_dict = {
51 "admin_port": admin_port,
52 "fed_learn_port": fed_learn_port,
53 "config_folder": self.config_folder,
54 "docker_image": self.docker_image,
55 }
56 if self.docker_image:
57 self._write(
58 os.path.join(dest_dir, "docker.sh"),
59 sh_replace(self.template["docker_svr_sh"], replacement_dict),
60 "t",
61 exe=True,
62 )
63 self._write(
64 os.path.join(dest_dir, "start.sh"),
65 self.template["start_svr_sh"],
66 "t",
67 exe=True,
68 )
69 self._write(
70 os.path.join(dest_dir, "sub_start.sh"),
71 sh_replace(self.template["sub_start_svr_sh"], replacement_dict),
72 "t",
73 exe=True,
74 )
75 self._write(
76 os.path.join(dest_dir, "log.config"),
77 self.template["log_config"],
78 "t",
79 )
80 self._write(
81 os.path.join(dest_dir, "readme.txt"),
82 self.template["readme_fs"],
83 "t",
84 )
85 self._write(
86 os.path.join(dest_dir, "stop_fl.sh"),
87 self.template["stop_fl_sh"],
88 "t",
89 exe=True,
90 )
91
92 def _build_client(self, client, ctx):
93 config = json.loads(self.template["fed_client"])
94 dest_dir = self.get_kit_dir(client, ctx)
95 fed_learn_port = ctx.get("fed_learn_port")
96 server_name = ctx.get("server_name")
97 config["servers"][0]["service"]["target"] = f"{server_name}:{fed_learn_port}"
98 config["servers"][0]["name"] = self.study_name
99 config["enable_byoc"] = client.enable_byoc
100 replacement_dict = {
101 "client_name": f"{client.subject}",
102 "config_folder": self.config_folder,
103 "docker_image": self.docker_image,
104 }
105
106 self._write(os.path.join(dest_dir, "fed_client.json"), json.dumps(config), "t")
107 if self.docker_image:
108 self._write(
109 os.path.join(dest_dir, "docker.sh"),
110 sh_replace(self.template["docker_cln_sh"], replacement_dict),
111 "t",
112 exe=True,
113 )
114 self._write(
115 os.path.join(dest_dir, "start.sh"),
116 self.template["start_cln_sh"],
117 "t",
118 exe=True,
119 )
120 self._write(
121 os.path.join(dest_dir, "sub_start.sh"),
122 sh_replace(self.template["sub_start_cln_sh"], replacement_dict),
123 "t",
124 exe=True,
125 )
126 self._write(
127 os.path.join(dest_dir, "log.config"),
128 self.template["log_config"],
129 "t",
130 )
131 self._write(
132 os.path.join(dest_dir, "readme.txt"),
133 self.template["readme_fc"],
134 "t",
135 )
136 self._write(
137 os.path.join(dest_dir, "stop_fl.sh"),
138 self.template["stop_fl_sh"],
139 "t",
140 exe=True,
141 )
142
143 def _build_admin(self, admin, ctx):
144 dest_dir = self.get_kit_dir(admin, ctx)
145 admin_port = ctx.get("admin_port")
146 server_name = ctx.get("server_name")
147
148 replacement_dict = {
149 "cn": f"{server_name}",
150 "admin_port": f"{admin_port}",
151 "docker_image": self.docker_image,
152 }
153 if self.docker_image:
154 self._write(
155 os.path.join(dest_dir, "docker.sh"),
156 sh_replace(self.template["docker_adm_sh"], replacement_dict),
157 "t",
158 exe=True,
159 )
160 self._write(
161 os.path.join(dest_dir, "fl_admin.sh"),
162 sh_replace(self.template["fl_admin_sh"], replacement_dict),
163 "t",
164 exe=True,
165 )
166 self._write(
167 os.path.join(dest_dir, "readme.txt"),
168 self.template["readme_am"],
169 "t",
170 )
171
172 def build(self, study, ctx):
173 self.template = ctx.get("template")
174 server = study.get_participants_by_type("server")
175 self.study_name = study.name
176 self._build_server(server, ctx)
177
178 for client in study.get_participants_by_type("client", first_only=False):
179 self._build_client(client, ctx)
180
181 for admin in study.get_participants_by_type("admin", first_only=False):
182 self._build_admin(admin, ctx)
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nvflare/lighter/impl/static_file.py b/nvflare/lighter/impl/static_file.py
--- a/nvflare/lighter/impl/static_file.py
+++ b/nvflare/lighter/impl/static_file.py
@@ -20,10 +20,11 @@
class StaticFileBuilder(Builder):
- def __init__(self, enable_byoc=False, config_folder="", docker_image=""):
+ def __init__(self, enable_byoc=False, config_folder="", app_validator="", docker_image=""):
self.enable_byoc = enable_byoc
self.config_folder = config_folder
self.docker_image = docker_image
+ self.app_validator = app_validator
def _write(self, file_full_path, content, mode, exe=False):
mode = mode + "w"
@@ -46,6 +47,8 @@
server_0["admin_host"] = server.name
server_0["admin_port"] = admin_port
config["enable_byoc"] = server.enable_byoc
+ if self.app_validator:
+ config["app_validator"] = {"path": self.app_validator}
self._write(os.path.join(dest_dir, "fed_server.json"), json.dumps(config), "t")
replacement_dict = {
"admin_port": admin_port,
|
{"golden_diff": "diff --git a/nvflare/lighter/impl/static_file.py b/nvflare/lighter/impl/static_file.py\n--- a/nvflare/lighter/impl/static_file.py\n+++ b/nvflare/lighter/impl/static_file.py\n@@ -20,10 +20,11 @@\n \n \n class StaticFileBuilder(Builder):\n- def __init__(self, enable_byoc=False, config_folder=\"\", docker_image=\"\"):\n+ def __init__(self, enable_byoc=False, config_folder=\"\", app_validator=\"\", docker_image=\"\"):\n self.enable_byoc = enable_byoc\n self.config_folder = config_folder\n self.docker_image = docker_image\n+ self.app_validator = app_validator\n \n def _write(self, file_full_path, content, mode, exe=False):\n mode = mode + \"w\"\n@@ -46,6 +47,8 @@\n server_0[\"admin_host\"] = server.name\n server_0[\"admin_port\"] = admin_port\n config[\"enable_byoc\"] = server.enable_byoc\n+ if self.app_validator:\n+ config[\"app_validator\"] = {\"path\": self.app_validator}\n self._write(os.path.join(dest_dir, \"fed_server.json\"), json.dumps(config), \"t\")\n replacement_dict = {\n \"admin_port\": admin_port,\n", "issue": "Missing config_validator option in provisioning tool\nProvisioning tool needs to have config_validator option so the generated fed_server.json can have that information.\n", "before_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport json\nimport os\n\nfrom nvflare.lighter.spec import Builder\nfrom nvflare.lighter.utils import sh_replace\n\n\nclass StaticFileBuilder(Builder):\n def __init__(self, enable_byoc=False, config_folder=\"\", docker_image=\"\"):\n self.enable_byoc = enable_byoc\n self.config_folder = config_folder\n self.docker_image = docker_image\n\n def _write(self, file_full_path, content, mode, exe=False):\n mode = mode + \"w\"\n with open(file_full_path, mode) as f:\n f.write(content)\n if exe:\n os.chmod(file_full_path, 0o755)\n\n def _build_server(self, server, ctx):\n config = json.loads(self.template[\"fed_server\"])\n dest_dir = self.get_kit_dir(server, ctx)\n server_0 = config[\"servers\"][0]\n server_0[\"name\"] = self.study_name\n admin_port = server.props.get(\"admin_port\", 8003)\n ctx[\"admin_port\"] = admin_port\n fed_learn_port = server.props.get(\"fed_learn_port\", 8002)\n ctx[\"fed_learn_port\"] = fed_learn_port\n ctx[\"server_name\"] = server.name\n server_0[\"service\"][\"target\"] = f\"{server.name}:{fed_learn_port}\"\n server_0[\"admin_host\"] = server.name\n server_0[\"admin_port\"] = admin_port\n config[\"enable_byoc\"] = server.enable_byoc\n self._write(os.path.join(dest_dir, \"fed_server.json\"), json.dumps(config), \"t\")\n replacement_dict = {\n \"admin_port\": admin_port,\n \"fed_learn_port\": fed_learn_port,\n \"config_folder\": self.config_folder,\n \"docker_image\": self.docker_image,\n }\n if self.docker_image:\n self._write(\n os.path.join(dest_dir, \"docker.sh\"),\n sh_replace(self.template[\"docker_svr_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"start.sh\"),\n 
self.template[\"start_svr_sh\"],\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"sub_start.sh\"),\n sh_replace(self.template[\"sub_start_svr_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"log.config\"),\n self.template[\"log_config\"],\n \"t\",\n )\n self._write(\n os.path.join(dest_dir, \"readme.txt\"),\n self.template[\"readme_fs\"],\n \"t\",\n )\n self._write(\n os.path.join(dest_dir, \"stop_fl.sh\"),\n self.template[\"stop_fl_sh\"],\n \"t\",\n exe=True,\n )\n\n def _build_client(self, client, ctx):\n config = json.loads(self.template[\"fed_client\"])\n dest_dir = self.get_kit_dir(client, ctx)\n fed_learn_port = ctx.get(\"fed_learn_port\")\n server_name = ctx.get(\"server_name\")\n config[\"servers\"][0][\"service\"][\"target\"] = f\"{server_name}:{fed_learn_port}\"\n config[\"servers\"][0][\"name\"] = self.study_name\n config[\"enable_byoc\"] = client.enable_byoc\n replacement_dict = {\n \"client_name\": f\"{client.subject}\",\n \"config_folder\": self.config_folder,\n \"docker_image\": self.docker_image,\n }\n\n self._write(os.path.join(dest_dir, \"fed_client.json\"), json.dumps(config), \"t\")\n if self.docker_image:\n self._write(\n os.path.join(dest_dir, \"docker.sh\"),\n sh_replace(self.template[\"docker_cln_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"start.sh\"),\n self.template[\"start_cln_sh\"],\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"sub_start.sh\"),\n sh_replace(self.template[\"sub_start_cln_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"log.config\"),\n self.template[\"log_config\"],\n \"t\",\n )\n self._write(\n os.path.join(dest_dir, \"readme.txt\"),\n self.template[\"readme_fc\"],\n \"t\",\n )\n self._write(\n os.path.join(dest_dir, \"stop_fl.sh\"),\n self.template[\"stop_fl_sh\"],\n \"t\",\n exe=True,\n )\n\n def _build_admin(self, admin, ctx):\n dest_dir = self.get_kit_dir(admin, ctx)\n admin_port = ctx.get(\"admin_port\")\n server_name = ctx.get(\"server_name\")\n\n replacement_dict = {\n \"cn\": f\"{server_name}\",\n \"admin_port\": f\"{admin_port}\",\n \"docker_image\": self.docker_image,\n }\n if self.docker_image:\n self._write(\n os.path.join(dest_dir, \"docker.sh\"),\n sh_replace(self.template[\"docker_adm_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"fl_admin.sh\"),\n sh_replace(self.template[\"fl_admin_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"readme.txt\"),\n self.template[\"readme_am\"],\n \"t\",\n )\n\n def build(self, study, ctx):\n self.template = ctx.get(\"template\")\n server = study.get_participants_by_type(\"server\")\n self.study_name = study.name\n self._build_server(server, ctx)\n\n for client in study.get_participants_by_type(\"client\", first_only=False):\n self._build_client(client, ctx)\n\n for admin in study.get_participants_by_type(\"admin\", first_only=False):\n self._build_admin(admin, ctx)\n", "path": "nvflare/lighter/impl/static_file.py"}], "after_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport json\nimport os\n\nfrom nvflare.lighter.spec import Builder\nfrom nvflare.lighter.utils import sh_replace\n\n\nclass StaticFileBuilder(Builder):\n def __init__(self, enable_byoc=False, config_folder=\"\", app_validator=\"\", docker_image=\"\"):\n self.enable_byoc = enable_byoc\n self.config_folder = config_folder\n self.docker_image = docker_image\n self.app_validator = app_validator\n\n def _write(self, file_full_path, content, mode, exe=False):\n mode = mode + \"w\"\n with open(file_full_path, mode) as f:\n f.write(content)\n if exe:\n os.chmod(file_full_path, 0o755)\n\n def _build_server(self, server, ctx):\n config = json.loads(self.template[\"fed_server\"])\n dest_dir = self.get_kit_dir(server, ctx)\n server_0 = config[\"servers\"][0]\n server_0[\"name\"] = self.study_name\n admin_port = server.props.get(\"admin_port\", 8003)\n ctx[\"admin_port\"] = admin_port\n fed_learn_port = server.props.get(\"fed_learn_port\", 8002)\n ctx[\"fed_learn_port\"] = fed_learn_port\n ctx[\"server_name\"] = server.name\n server_0[\"service\"][\"target\"] = f\"{server.name}:{fed_learn_port}\"\n server_0[\"admin_host\"] = server.name\n server_0[\"admin_port\"] = admin_port\n config[\"enable_byoc\"] = server.enable_byoc\n if self.app_validator:\n config[\"app_validator\"] = {\"path\": self.app_validator}\n self._write(os.path.join(dest_dir, \"fed_server.json\"), json.dumps(config), \"t\")\n replacement_dict = {\n \"admin_port\": admin_port,\n \"fed_learn_port\": fed_learn_port,\n \"config_folder\": self.config_folder,\n \"docker_image\": self.docker_image,\n }\n if self.docker_image:\n self._write(\n os.path.join(dest_dir, \"docker.sh\"),\n sh_replace(self.template[\"docker_svr_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"start.sh\"),\n self.template[\"start_svr_sh\"],\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"sub_start.sh\"),\n sh_replace(self.template[\"sub_start_svr_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"log.config\"),\n self.template[\"log_config\"],\n \"t\",\n )\n self._write(\n os.path.join(dest_dir, \"readme.txt\"),\n self.template[\"readme_fs\"],\n \"t\",\n )\n self._write(\n os.path.join(dest_dir, \"stop_fl.sh\"),\n self.template[\"stop_fl_sh\"],\n \"t\",\n exe=True,\n )\n\n def _build_client(self, client, ctx):\n config = json.loads(self.template[\"fed_client\"])\n dest_dir = self.get_kit_dir(client, ctx)\n fed_learn_port = ctx.get(\"fed_learn_port\")\n server_name = ctx.get(\"server_name\")\n config[\"servers\"][0][\"service\"][\"target\"] = f\"{server_name}:{fed_learn_port}\"\n config[\"servers\"][0][\"name\"] = self.study_name\n config[\"enable_byoc\"] = client.enable_byoc\n replacement_dict = {\n \"client_name\": f\"{client.subject}\",\n \"config_folder\": self.config_folder,\n \"docker_image\": self.docker_image,\n }\n\n self._write(os.path.join(dest_dir, \"fed_client.json\"), 
json.dumps(config), \"t\")\n if self.docker_image:\n self._write(\n os.path.join(dest_dir, \"docker.sh\"),\n sh_replace(self.template[\"docker_cln_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"start.sh\"),\n self.template[\"start_cln_sh\"],\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"sub_start.sh\"),\n sh_replace(self.template[\"sub_start_cln_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"log.config\"),\n self.template[\"log_config\"],\n \"t\",\n )\n self._write(\n os.path.join(dest_dir, \"readme.txt\"),\n self.template[\"readme_fc\"],\n \"t\",\n )\n self._write(\n os.path.join(dest_dir, \"stop_fl.sh\"),\n self.template[\"stop_fl_sh\"],\n \"t\",\n exe=True,\n )\n\n def _build_admin(self, admin, ctx):\n dest_dir = self.get_kit_dir(admin, ctx)\n admin_port = ctx.get(\"admin_port\")\n server_name = ctx.get(\"server_name\")\n\n replacement_dict = {\n \"cn\": f\"{server_name}\",\n \"admin_port\": f\"{admin_port}\",\n \"docker_image\": self.docker_image,\n }\n if self.docker_image:\n self._write(\n os.path.join(dest_dir, \"docker.sh\"),\n sh_replace(self.template[\"docker_adm_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"fl_admin.sh\"),\n sh_replace(self.template[\"fl_admin_sh\"], replacement_dict),\n \"t\",\n exe=True,\n )\n self._write(\n os.path.join(dest_dir, \"readme.txt\"),\n self.template[\"readme_am\"],\n \"t\",\n )\n\n def build(self, study, ctx):\n self.template = ctx.get(\"template\")\n server = study.get_participants_by_type(\"server\")\n self.study_name = study.name\n self._build_server(server, ctx)\n\n for client in study.get_participants_by_type(\"client\", first_only=False):\n self._build_client(client, ctx)\n\n for admin in study.get_participants_by_type(\"admin\", first_only=False):\n self._build_admin(admin, ctx)\n", "path": "nvflare/lighter/impl/static_file.py"}]}
| 2,168 | 283 |
gh_patches_debug_60356
|
rasdani/github-patches
|
git_diff
|
blaze__blaze-1037
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cytoolz is required to import blaze, but it's not listed in requirements_strict.txt
In a fresh virtualenv, `pip install blaze && python -c "import blaze"` fails with:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ssanderson/.virtualenvs/blaze/local/lib/python2.7/site-packages/blaze/__init__.py", line 18, in <module>
from .utils import ignoring
File "/home/ssanderson/.virtualenvs/blaze/local/lib/python2.7/site-packages/blaze/utils.py", line 7, in <module>
from cytoolz import nth
ImportError: No module named cytoolz
```
Is there a reason cytoolz isn't in the strict requirements if it's necessary to even import the top-level module?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `blaze/utils.py`
Content:
```
1 from __future__ import absolute_import, division, print_function
2
3 import os
4 import datetime
5 from functools import wraps
6
7 from cytoolz import nth
8 from itertools import islice
9 from collections import Iterator
10 from multiprocessing.pool import ThreadPool
11
12 # these are used throughout blaze, don't remove them
13 from odo.utils import tmpfile, filetext, filetexts, raises, keywords, ignoring
14
15 import psutil
16 import numpy as np
17
18 # Imports that replace older utils.
19 from .compatibility import map, zip
20
21 from .dispatch import dispatch
22
23 thread_pool = ThreadPool(psutil.NUM_CPUS)
24
25
26 def nth_list(n, seq):
27 """
28
29 >>> tuple(nth_list([0, 1, 4], 'Hello'))
30 ('H', 'e', 'o')
31 >>> tuple(nth_list([4, 1, 0], 'Hello'))
32 ('o', 'e', 'H')
33 >>> tuple(nth_list([0, 0, 0], 'Hello'))
34 ('H', 'H', 'H')
35 """
36 seq = iter(seq)
37
38 result = []
39 old = 0
40 item = next(seq)
41 for index in sorted(n):
42 for i in range(index - old):
43 item = next(seq)
44 result.append(item)
45 old = index
46
47 order = [x[1] for x in sorted(zip(n, range(len(n))))]
48 return (result[i] for i in order)
49
50
51 def get(ind, coll, lazy=False):
52 """
53
54 >>> get(0, 'Hello')
55 'H'
56
57 >>> get([1, 0], 'Hello')
58 ('e', 'H')
59
60 >>> get(slice(1, 4), 'Hello')
61 ('e', 'l', 'l')
62
63 >>> get(slice(1, 4), 'Hello', lazy=True)
64 <itertools.islice object at ...>
65 """
66 if isinstance(ind, list):
67 result = nth_list(ind, coll)
68 elif isinstance(ind, slice):
69 result = islice(coll, ind.start, ind.stop, ind.step)
70 else:
71 if isinstance(coll, Iterator):
72 result = nth(ind, coll)
73 else:
74 result = coll[ind]
75 if not lazy and isinstance(result, Iterator):
76 result = tuple(result)
77 return result
78
79
80 def ndget(ind, data):
81 """
82 Get from N-Dimensional getable
83
84 Can index with elements, lists, or slices. Mimic's numpy fancy indexing on
85 generic indexibles.
86
87 >>> data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
88 >>> ndget(0, data)
89 [[1, 2], [3, 4]]
90 >>> ndget((0, 1), data)
91 [3, 4]
92 >>> ndget((0, 0, 0), data)
93 1
94 >>> ndget((slice(0, 2), [0, 1], 0), data)
95 ((1, 3), (5, 7))
96 """
97 if isinstance(ind, tuple) and len(ind) == 1:
98 ind = ind[0]
99 if not isinstance(ind, tuple):
100 return get(ind, data)
101 result = get(ind[0], data)
102 if isinstance(ind[0], (list, slice)):
103 return type(result)(ndget(ind[1:], row) for row in result)
104 else:
105 return ndget(ind[1:], result)
106
107
108 def normalize_to_date(dt):
109 if isinstance(dt, datetime.datetime) and not dt.time():
110 return dt.date()
111 else:
112 return dt
113
114
115 def assert_allclose(lhs, rhs):
116 for tb in map(zip, lhs, rhs):
117 for left, right in tb:
118 if isinstance(left, (np.floating, float)):
119 # account for nans
120 assert np.all(np.isclose(left, right, equal_nan=True))
121 continue
122 if isinstance(left, datetime.datetime):
123 left = normalize_to_date(left)
124 if isinstance(right, datetime.datetime):
125 right = normalize_to_date(right)
126 assert left == right
127
128
129 def example(filename, datapath=os.path.join('examples', 'data')):
130 import blaze
131 return os.path.join(os.path.dirname(blaze.__file__), datapath, filename)
132
133
134 def available_memory():
135 return psutil.virtual_memory().available
136
137
138 def listpack(x):
139 """
140 >>> listpack(1)
141 [1]
142 >>> listpack((1, 2))
143 [1, 2]
144 >>> listpack([1, 2])
145 [1, 2]
146 """
147 if isinstance(x, tuple):
148 return list(x)
149 elif isinstance(x, list):
150 return x
151 else:
152 return [x]
153
154
155 @dispatch(datetime.datetime)
156 def json_dumps(dt):
157 s = dt.isoformat()
158 if not dt.tzname():
159 s = s + 'Z'
160 return s
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/blaze/utils.py b/blaze/utils.py
--- a/blaze/utils.py
+++ b/blaze/utils.py
@@ -4,7 +4,11 @@
import datetime
from functools import wraps
-from cytoolz import nth
+try:
+ from cytoolz import nth
+except ImportError:
+ from toolz import nth
+
from itertools import islice
from collections import Iterator
from multiprocessing.pool import ThreadPool
|
{"golden_diff": "diff --git a/blaze/utils.py b/blaze/utils.py\n--- a/blaze/utils.py\n+++ b/blaze/utils.py\n@@ -4,7 +4,11 @@\n import datetime\n from functools import wraps\n \n-from cytoolz import nth\n+try:\n+ from cytoolz import nth\n+except ImportError:\n+ from toolz import nth\n+\n from itertools import islice\n from collections import Iterator\n from multiprocessing.pool import ThreadPool\n", "issue": "cytoolz is required to import blaze, but it's not listed in requirements_strict.txt\nIn a fresh virtualenv, `pip install blaze && python -c \"import blaze\"` fails with:\n\n```\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/home/ssanderson/.virtualenvs/blaze/local/lib/python2.7/site-packages/blaze/__init__.py\", line 18, in <module>\n from .utils import ignoring\n File \"/home/ssanderson/.virtualenvs/blaze/local/lib/python2.7/site-packages/blaze/utils.py\", line 7, in <module>\n from cytoolz import nth\nImportError: No module named cytoolz\n```\n\nIs there a reason cytoolz isn't in the strict requirements if it's necessary to even import the top-level module?\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport os\nimport datetime\nfrom functools import wraps\n\nfrom cytoolz import nth\nfrom itertools import islice\nfrom collections import Iterator\nfrom multiprocessing.pool import ThreadPool\n\n# these are used throughout blaze, don't remove them\nfrom odo.utils import tmpfile, filetext, filetexts, raises, keywords, ignoring\n\nimport psutil\nimport numpy as np\n\n# Imports that replace older utils.\nfrom .compatibility import map, zip\n\nfrom .dispatch import dispatch\n\nthread_pool = ThreadPool(psutil.NUM_CPUS)\n\n\ndef nth_list(n, seq):\n \"\"\"\n\n >>> tuple(nth_list([0, 1, 4], 'Hello'))\n ('H', 'e', 'o')\n >>> tuple(nth_list([4, 1, 0], 'Hello'))\n ('o', 'e', 'H')\n >>> tuple(nth_list([0, 0, 0], 'Hello'))\n ('H', 'H', 'H')\n \"\"\"\n seq = iter(seq)\n\n result = []\n old = 0\n item = next(seq)\n for index in sorted(n):\n for i in range(index - old):\n item = next(seq)\n result.append(item)\n old = index\n\n order = [x[1] for x in sorted(zip(n, range(len(n))))]\n return (result[i] for i in order)\n\n\ndef get(ind, coll, lazy=False):\n \"\"\"\n\n >>> get(0, 'Hello')\n 'H'\n\n >>> get([1, 0], 'Hello')\n ('e', 'H')\n\n >>> get(slice(1, 4), 'Hello')\n ('e', 'l', 'l')\n\n >>> get(slice(1, 4), 'Hello', lazy=True)\n <itertools.islice object at ...>\n \"\"\"\n if isinstance(ind, list):\n result = nth_list(ind, coll)\n elif isinstance(ind, slice):\n result = islice(coll, ind.start, ind.stop, ind.step)\n else:\n if isinstance(coll, Iterator):\n result = nth(ind, coll)\n else:\n result = coll[ind]\n if not lazy and isinstance(result, Iterator):\n result = tuple(result)\n return result\n\n\ndef ndget(ind, data):\n \"\"\"\n Get from N-Dimensional getable\n\n Can index with elements, lists, or slices. 
Mimic's numpy fancy indexing on\n generic indexibles.\n\n >>> data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]\n >>> ndget(0, data)\n [[1, 2], [3, 4]]\n >>> ndget((0, 1), data)\n [3, 4]\n >>> ndget((0, 0, 0), data)\n 1\n >>> ndget((slice(0, 2), [0, 1], 0), data)\n ((1, 3), (5, 7))\n \"\"\"\n if isinstance(ind, tuple) and len(ind) == 1:\n ind = ind[0]\n if not isinstance(ind, tuple):\n return get(ind, data)\n result = get(ind[0], data)\n if isinstance(ind[0], (list, slice)):\n return type(result)(ndget(ind[1:], row) for row in result)\n else:\n return ndget(ind[1:], result)\n\n\ndef normalize_to_date(dt):\n if isinstance(dt, datetime.datetime) and not dt.time():\n return dt.date()\n else:\n return dt\n\n\ndef assert_allclose(lhs, rhs):\n for tb in map(zip, lhs, rhs):\n for left, right in tb:\n if isinstance(left, (np.floating, float)):\n # account for nans\n assert np.all(np.isclose(left, right, equal_nan=True))\n continue\n if isinstance(left, datetime.datetime):\n left = normalize_to_date(left)\n if isinstance(right, datetime.datetime):\n right = normalize_to_date(right)\n assert left == right\n\n\ndef example(filename, datapath=os.path.join('examples', 'data')):\n import blaze\n return os.path.join(os.path.dirname(blaze.__file__), datapath, filename)\n\n\ndef available_memory():\n return psutil.virtual_memory().available\n\n\ndef listpack(x):\n \"\"\"\n >>> listpack(1)\n [1]\n >>> listpack((1, 2))\n [1, 2]\n >>> listpack([1, 2])\n [1, 2]\n \"\"\"\n if isinstance(x, tuple):\n return list(x)\n elif isinstance(x, list):\n return x\n else:\n return [x]\n\n\n@dispatch(datetime.datetime)\ndef json_dumps(dt):\n s = dt.isoformat()\n if not dt.tzname():\n s = s + 'Z'\n return s\n", "path": "blaze/utils.py"}], "after_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport os\nimport datetime\nfrom functools import wraps\n\ntry:\n from cytoolz import nth\nexcept ImportError:\n from toolz import nth\n\nfrom itertools import islice\nfrom collections import Iterator\nfrom multiprocessing.pool import ThreadPool\n\n# these are used throughout blaze, don't remove them\nfrom odo.utils import tmpfile, filetext, filetexts, raises, keywords, ignoring\n\nimport psutil\nimport numpy as np\n\n# Imports that replace older utils.\nfrom .compatibility import map, zip\n\nfrom .dispatch import dispatch\n\nthread_pool = ThreadPool(psutil.NUM_CPUS)\n\n\ndef nth_list(n, seq):\n \"\"\"\n\n >>> tuple(nth_list([0, 1, 4], 'Hello'))\n ('H', 'e', 'o')\n >>> tuple(nth_list([4, 1, 0], 'Hello'))\n ('o', 'e', 'H')\n >>> tuple(nth_list([0, 0, 0], 'Hello'))\n ('H', 'H', 'H')\n \"\"\"\n seq = iter(seq)\n\n result = []\n old = 0\n item = next(seq)\n for index in sorted(n):\n for i in range(index - old):\n item = next(seq)\n result.append(item)\n old = index\n\n order = [x[1] for x in sorted(zip(n, range(len(n))))]\n return (result[i] for i in order)\n\n\ndef get(ind, coll, lazy=False):\n \"\"\"\n\n >>> get(0, 'Hello')\n 'H'\n\n >>> get([1, 0], 'Hello')\n ('e', 'H')\n\n >>> get(slice(1, 4), 'Hello')\n ('e', 'l', 'l')\n\n >>> get(slice(1, 4), 'Hello', lazy=True)\n <itertools.islice object at ...>\n \"\"\"\n if isinstance(ind, list):\n result = nth_list(ind, coll)\n elif isinstance(ind, slice):\n result = islice(coll, ind.start, ind.stop, ind.step)\n else:\n if isinstance(coll, Iterator):\n result = nth(ind, coll)\n else:\n result = coll[ind]\n if not lazy and isinstance(result, Iterator):\n result = tuple(result)\n return result\n\n\ndef ndget(ind, data):\n \"\"\"\n Get from N-Dimensional 
getable\n\n Can index with elements, lists, or slices. Mimic's numpy fancy indexing on\n generic indexibles.\n\n >>> data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]\n >>> ndget(0, data)\n [[1, 2], [3, 4]]\n >>> ndget((0, 1), data)\n [3, 4]\n >>> ndget((0, 0, 0), data)\n 1\n >>> ndget((slice(0, 2), [0, 1], 0), data)\n ((1, 3), (5, 7))\n \"\"\"\n if isinstance(ind, tuple) and len(ind) == 1:\n ind = ind[0]\n if not isinstance(ind, tuple):\n return get(ind, data)\n result = get(ind[0], data)\n if isinstance(ind[0], (list, slice)):\n return type(result)(ndget(ind[1:], row) for row in result)\n else:\n return ndget(ind[1:], result)\n\n\ndef normalize_to_date(dt):\n if isinstance(dt, datetime.datetime) and not dt.time():\n return dt.date()\n else:\n return dt\n\n\ndef assert_allclose(lhs, rhs):\n for tb in map(zip, lhs, rhs):\n for left, right in tb:\n if isinstance(left, (np.floating, float)):\n # account for nans\n assert np.all(np.isclose(left, right, equal_nan=True))\n continue\n if isinstance(left, datetime.datetime):\n left = normalize_to_date(left)\n if isinstance(right, datetime.datetime):\n right = normalize_to_date(right)\n assert left == right\n\n\ndef example(filename, datapath=os.path.join('examples', 'data')):\n import blaze\n return os.path.join(os.path.dirname(blaze.__file__), datapath, filename)\n\n\ndef available_memory():\n return psutil.virtual_memory().available\n\n\ndef listpack(x):\n \"\"\"\n >>> listpack(1)\n [1]\n >>> listpack((1, 2))\n [1, 2]\n >>> listpack([1, 2])\n [1, 2]\n \"\"\"\n if isinstance(x, tuple):\n return list(x)\n elif isinstance(x, list):\n return x\n else:\n return [x]\n\n\n@dispatch(datetime.datetime)\ndef json_dumps(dt):\n s = dt.isoformat()\n if not dt.tzname():\n s = s + 'Z'\n return s\n", "path": "blaze/utils.py"}]}
| 1,901 | 96 |
gh_patches_debug_47848
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-404
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Rate stars don't work
You should be able to click to give a star rating to a book on the book page, but it doesn't do anything.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/activitypub/note.py`
Content:
```
1 ''' note serializer and children thereof '''
2 from dataclasses import dataclass, field
3 from typing import Dict, List
4
5 from .base_activity import ActivityObject, Link
6 from .image import Image
7
8 @dataclass(init=False)
9 class Tombstone(ActivityObject):
10 ''' the placeholder for a deleted status '''
11 published: str
12 deleted: str
13 type: str = 'Tombstone'
14
15
16 @dataclass(init=False)
17 class Note(ActivityObject):
18 ''' Note activity '''
19 published: str
20 attributedTo: str
21 content: str
22 to: List[str] = field(default_factory=lambda: [])
23 cc: List[str] = field(default_factory=lambda: [])
24 replies: Dict = field(default_factory=lambda: {})
25 inReplyTo: str = ''
26 summary: str = ''
27 tag: List[Link] = field(default_factory=lambda: [])
28 attachment: List[Image] = field(default_factory=lambda: [])
29 sensitive: bool = False
30 type: str = 'Note'
31
32
33 @dataclass(init=False)
34 class Article(Note):
35 ''' what's an article except a note with more fields '''
36 name: str
37 type: str = 'Article'
38
39
40 @dataclass(init=False)
41 class GeneratedNote(Note):
42 ''' just a re-typed note '''
43 type: str = 'GeneratedNote'
44
45
46 @dataclass(init=False)
47 class Comment(Note):
48 ''' like a note but with a book '''
49 inReplyToBook: str
50 type: str = 'Comment'
51
52
53 @dataclass(init=False)
54 class Review(Comment):
55 ''' a full book review '''
56 name: str
57 rating: int = None
58 type: str = 'Review'
59
60
61 @dataclass(init=False)
62 class Quotation(Comment):
63 ''' a quote and commentary on a book '''
64 quote: str
65 type: str = 'Quotation'
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bookwyrm/activitypub/note.py b/bookwyrm/activitypub/note.py
--- a/bookwyrm/activitypub/note.py
+++ b/bookwyrm/activitypub/note.py
@@ -53,7 +53,7 @@
@dataclass(init=False)
class Review(Comment):
''' a full book review '''
- name: str
+ name: str = None
rating: int = None
type: str = 'Review'
|
{"golden_diff": "diff --git a/bookwyrm/activitypub/note.py b/bookwyrm/activitypub/note.py\n--- a/bookwyrm/activitypub/note.py\n+++ b/bookwyrm/activitypub/note.py\n@@ -53,7 +53,7 @@\n @dataclass(init=False)\n class Review(Comment):\n ''' a full book review '''\n- name: str\n+ name: str = None\n rating: int = None\n type: str = 'Review'\n", "issue": "Rate stars don't work\nYou should be able to click to give a star rating to a book on the book page, it doesn't do anything.\n", "before_files": [{"content": "''' note serializer and children thereof '''\nfrom dataclasses import dataclass, field\nfrom typing import Dict, List\n\nfrom .base_activity import ActivityObject, Link\nfrom .image import Image\n\n@dataclass(init=False)\nclass Tombstone(ActivityObject):\n ''' the placeholder for a deleted status '''\n published: str\n deleted: str\n type: str = 'Tombstone'\n\n\n@dataclass(init=False)\nclass Note(ActivityObject):\n ''' Note activity '''\n published: str\n attributedTo: str\n content: str\n to: List[str] = field(default_factory=lambda: [])\n cc: List[str] = field(default_factory=lambda: [])\n replies: Dict = field(default_factory=lambda: {})\n inReplyTo: str = ''\n summary: str = ''\n tag: List[Link] = field(default_factory=lambda: [])\n attachment: List[Image] = field(default_factory=lambda: [])\n sensitive: bool = False\n type: str = 'Note'\n\n\n@dataclass(init=False)\nclass Article(Note):\n ''' what's an article except a note with more fields '''\n name: str\n type: str = 'Article'\n\n\n@dataclass(init=False)\nclass GeneratedNote(Note):\n ''' just a re-typed note '''\n type: str = 'GeneratedNote'\n\n\n@dataclass(init=False)\nclass Comment(Note):\n ''' like a note but with a book '''\n inReplyToBook: str\n type: str = 'Comment'\n\n\n@dataclass(init=False)\nclass Review(Comment):\n ''' a full book review '''\n name: str\n rating: int = None\n type: str = 'Review'\n\n\n@dataclass(init=False)\nclass Quotation(Comment):\n ''' a quote and commentary on a book '''\n quote: str\n type: str = 'Quotation'\n", "path": "bookwyrm/activitypub/note.py"}], "after_files": [{"content": "''' note serializer and children thereof '''\nfrom dataclasses import dataclass, field\nfrom typing import Dict, List\n\nfrom .base_activity import ActivityObject, Link\nfrom .image import Image\n\n@dataclass(init=False)\nclass Tombstone(ActivityObject):\n ''' the placeholder for a deleted status '''\n published: str\n deleted: str\n type: str = 'Tombstone'\n\n\n@dataclass(init=False)\nclass Note(ActivityObject):\n ''' Note activity '''\n published: str\n attributedTo: str\n content: str\n to: List[str] = field(default_factory=lambda: [])\n cc: List[str] = field(default_factory=lambda: [])\n replies: Dict = field(default_factory=lambda: {})\n inReplyTo: str = ''\n summary: str = ''\n tag: List[Link] = field(default_factory=lambda: [])\n attachment: List[Image] = field(default_factory=lambda: [])\n sensitive: bool = False\n type: str = 'Note'\n\n\n@dataclass(init=False)\nclass Article(Note):\n ''' what's an article except a note with more fields '''\n name: str\n type: str = 'Article'\n\n\n@dataclass(init=False)\nclass GeneratedNote(Note):\n ''' just a re-typed note '''\n type: str = 'GeneratedNote'\n\n\n@dataclass(init=False)\nclass Comment(Note):\n ''' like a note but with a book '''\n inReplyToBook: str\n type: str = 'Comment'\n\n\n@dataclass(init=False)\nclass Review(Comment):\n ''' a full book review '''\n name: str = None\n rating: int = None\n type: str = 'Review'\n\n\n@dataclass(init=False)\nclass 
Quotation(Comment):\n ''' a quote and commentary on a book '''\n quote: str\n type: str = 'Quotation'\n", "path": "bookwyrm/activitypub/note.py"}]}
| 811 | 103 |
gh_patches_debug_18295
|
rasdani/github-patches
|
git_diff
|
avocado-framework__avocado-5562
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Handle "could not import module" errors gracefully
**Describe the bug**
Avocado does not handle "could not import module" errors very gracefully, with error messages that are quite cryptic.
**Steps to reproduce**
Write a valid `avocado-instrumented` test, but with an invalid import. Example:
```python
from avocado import Test
import foo
class PassTest(Test):
"""
Example test that passes.
:avocado: tags=fast
"""
def test(self):
"""
A test simply doesn't have to fail in order to pass
"""
```
And run it:
```
$ avocado run examples/tests/passtest.py
JOB ID : 3fee9803715e414a16c3dcf1ddb9ff2f6dc6c0bd
JOB LOG : /home/cleber/avocado/job-results/job-2022-08-11T10.24-3fee980/job.log
(1/1) examples/tests/passtest.py:PassTest.test: STARTED
(1/1) examples/tests/passtest.py:PassTest.test: ERROR: Test.__init__() got an unexpected keyword argument 'run.results_dir' (0.01 s)
RESULTS : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML : /home/cleber/avocado/job-results/job-2022-08-11T10.24-3fee980/results.html
JOB TIME : 1.47 s
```
**Expected behavior**
Instead of "unexpected argument..." a more clear error message such as: "failed to import the file containing the test" or something similar.
**Current behavior**
From original reporter @jnsnow:
```
(08/27) tests/protocol.py:Connect.testBadUNIX: ERROR:
Test.__init__() got an unexpected keyword argument 'run.results_dir'
(0.01 s)
```
**System information (please complete the following information):**
- OS: ```LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 36 (Thirty Six)
Release: 36
Codename: ThirtySix```
- Avocado version: 5a0c5b2348da450397287a0954e4c335c0d590a9
- Avocado installation method: git
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `avocado/core/utils/loader.py`
Content:
```
1 import importlib
2 import inspect
3 import os
4 import sys
5
6 from avocado.core import test
7 from avocado.utils import stacktrace
8
9
10 class TestError(test.Test):
11 """
12 Generic test error.
13 """
14
15 def __init__(self, *args, **kwargs):
16 exception = kwargs.pop("exception")
17 test.Test.__init__(self, *args, **kwargs)
18 self.exception = exception
19
20 def test(self):
21 self.error(self.exception)
22
23
24 def load_test(test_factory):
25 """
26 Load test from the test factory.
27
28 :param test_factory: a pair of test class and parameters.
29 :type test_factory: tuple
30 :return: an instance of :class:`avocado.core.test.Test`.
31 """
32 test_class, test_parameters = test_factory
33 if "modulePath" in test_parameters:
34 test_path = test_parameters.pop("modulePath")
35 else:
36 test_path = None
37 if isinstance(test_class, str):
38 module_name = os.path.basename(test_path).split(".")[0]
39 test_module_dir = os.path.abspath(os.path.dirname(test_path))
40 # Tests with local dir imports need this
41 try:
42 sys.path.insert(0, test_module_dir)
43 test_module = importlib.import_module(module_name)
44 except: # pylint: disable=W0702
45 # On load_module exception we fake the test class and pass
46 # the exc_info as parameter to be logged.
47 test_parameters["methodName"] = "test"
48 exception = stacktrace.prepare_exc_info(sys.exc_info())
49 test_parameters["exception"] = exception
50 return TestError(**test_parameters)
51 finally:
52 if test_module_dir in sys.path:
53 sys.path.remove(test_module_dir)
54 for _, obj in inspect.getmembers(test_module):
55 if (
56 inspect.isclass(obj)
57 and obj.__name__ == test_class
58 and inspect.getmodule(obj) == test_module
59 ):
60 if issubclass(obj, test.Test):
61 test_class = obj
62 break
63 if "run.results_dir" in test_parameters:
64 test_parameters["base_logdir"] = test_parameters.pop("run.results_dir")
65 test_instance = test_class(**test_parameters)
66
67 return test_instance
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/avocado/core/utils/loader.py b/avocado/core/utils/loader.py
--- a/avocado/core/utils/loader.py
+++ b/avocado/core/utils/loader.py
@@ -30,6 +30,8 @@
:return: an instance of :class:`avocado.core.test.Test`.
"""
test_class, test_parameters = test_factory
+ if "run.results_dir" in test_parameters:
+ test_parameters["base_logdir"] = test_parameters.pop("run.results_dir")
if "modulePath" in test_parameters:
test_path = test_parameters.pop("modulePath")
else:
@@ -60,8 +62,6 @@
if issubclass(obj, test.Test):
test_class = obj
break
- if "run.results_dir" in test_parameters:
- test_parameters["base_logdir"] = test_parameters.pop("run.results_dir")
test_instance = test_class(**test_parameters)
return test_instance
|
{"golden_diff": "diff --git a/avocado/core/utils/loader.py b/avocado/core/utils/loader.py\n--- a/avocado/core/utils/loader.py\n+++ b/avocado/core/utils/loader.py\n@@ -30,6 +30,8 @@\n :return: an instance of :class:`avocado.core.test.Test`.\n \"\"\"\n test_class, test_parameters = test_factory\n+ if \"run.results_dir\" in test_parameters:\n+ test_parameters[\"base_logdir\"] = test_parameters.pop(\"run.results_dir\")\n if \"modulePath\" in test_parameters:\n test_path = test_parameters.pop(\"modulePath\")\n else:\n@@ -60,8 +62,6 @@\n if issubclass(obj, test.Test):\n test_class = obj\n break\n- if \"run.results_dir\" in test_parameters:\n- test_parameters[\"base_logdir\"] = test_parameters.pop(\"run.results_dir\")\n test_instance = test_class(**test_parameters)\n \n return test_instance\n", "issue": "Handle \"could not import module\" errors gracefully\n**Describe the bug**\r\nAvocado does not handle \"could not import module\" errors very gracefully, with error messages that are quite cryptic.\r\n\r\n**Steps to reproduce**\r\nWrite a valid `avocado-instrumented` test, but with an invalid import. Example:\r\n\r\n```python\r\nfrom avocado import Test\r\n\r\nimport foo\r\n\r\n\r\nclass PassTest(Test):\r\n\r\n \"\"\"\r\n Example test that passes.\r\n\r\n :avocado: tags=fast\r\n \"\"\"\r\n\r\n def test(self):\r\n \"\"\"\r\n A test simply doesn't have to fail in order to pass\r\n \"\"\"\r\n```\r\n\r\nAnd run it:\r\n\r\n```\r\n$ avocado run examples/tests/passtest.py \r\nJOB ID : 3fee9803715e414a16c3dcf1ddb9ff2f6dc6c0bd\r\nJOB LOG : /home/cleber/avocado/job-results/job-2022-08-11T10.24-3fee980/job.log\r\n (1/1) examples/tests/passtest.py:PassTest.test: STARTED\r\n (1/1) examples/tests/passtest.py:PassTest.test: ERROR: Test.__init__() got an unexpected keyword argument 'run.results_dir' (0.01 s)\r\nRESULTS : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0\r\nJOB HTML : /home/cleber/avocado/job-results/job-2022-08-11T10.24-3fee980/results.html\r\nJOB TIME : 1.47 s\r\n```\r\n\r\n**Expected behavior**\r\nInstead of \"unexpected argument...\" a more clear error message such as: \"failed to import the file containing the test\" or something similar. 
\r\n\r\n**Current behavior**\r\n\r\nFrom original reporter @jnsnow:\r\n\r\n```\r\n(08/27) tests/protocol.py:Connect.testBadUNIX: ERROR:\r\n Test.__init__() got an unexpected keyword argument 'run.results_dir'\r\n (0.01 s)\r\n```\r\n\r\n**System information (please complete the following information):**\r\n - OS: ```LSB Version:\t:core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch\r\nDistributor ID:\tFedora\r\nDescription:\tFedora release 36 (Thirty Six)\r\nRelease:\t36\r\nCodename:\tThirtySix```\r\n - Avocado version: 5a0c5b2348da450397287a0954e4c335c0d590a9\r\n - Avocado installation method: git\r\n\n", "before_files": [{"content": "import importlib\nimport inspect\nimport os\nimport sys\n\nfrom avocado.core import test\nfrom avocado.utils import stacktrace\n\n\nclass TestError(test.Test):\n \"\"\"\n Generic test error.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n exception = kwargs.pop(\"exception\")\n test.Test.__init__(self, *args, **kwargs)\n self.exception = exception\n\n def test(self):\n self.error(self.exception)\n\n\ndef load_test(test_factory):\n \"\"\"\n Load test from the test factory.\n\n :param test_factory: a pair of test class and parameters.\n :type test_factory: tuple\n :return: an instance of :class:`avocado.core.test.Test`.\n \"\"\"\n test_class, test_parameters = test_factory\n if \"modulePath\" in test_parameters:\n test_path = test_parameters.pop(\"modulePath\")\n else:\n test_path = None\n if isinstance(test_class, str):\n module_name = os.path.basename(test_path).split(\".\")[0]\n test_module_dir = os.path.abspath(os.path.dirname(test_path))\n # Tests with local dir imports need this\n try:\n sys.path.insert(0, test_module_dir)\n test_module = importlib.import_module(module_name)\n except: # pylint: disable=W0702\n # On load_module exception we fake the test class and pass\n # the exc_info as parameter to be logged.\n test_parameters[\"methodName\"] = \"test\"\n exception = stacktrace.prepare_exc_info(sys.exc_info())\n test_parameters[\"exception\"] = exception\n return TestError(**test_parameters)\n finally:\n if test_module_dir in sys.path:\n sys.path.remove(test_module_dir)\n for _, obj in inspect.getmembers(test_module):\n if (\n inspect.isclass(obj)\n and obj.__name__ == test_class\n and inspect.getmodule(obj) == test_module\n ):\n if issubclass(obj, test.Test):\n test_class = obj\n break\n if \"run.results_dir\" in test_parameters:\n test_parameters[\"base_logdir\"] = test_parameters.pop(\"run.results_dir\")\n test_instance = test_class(**test_parameters)\n\n return test_instance\n", "path": "avocado/core/utils/loader.py"}], "after_files": [{"content": "import importlib\nimport inspect\nimport os\nimport sys\n\nfrom avocado.core import test\nfrom avocado.utils import stacktrace\n\n\nclass TestError(test.Test):\n \"\"\"\n Generic test error.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n exception = kwargs.pop(\"exception\")\n test.Test.__init__(self, *args, **kwargs)\n self.exception = exception\n\n def test(self):\n self.error(self.exception)\n\n\ndef load_test(test_factory):\n \"\"\"\n Load test from the test factory.\n\n :param test_factory: a pair of test class and parameters.\n :type test_factory: tuple\n :return: an instance of :class:`avocado.core.test.Test`.\n \"\"\"\n test_class, test_parameters = test_factory\n if \"run.results_dir\" in test_parameters:\n test_parameters[\"base_logdir\"] = 
test_parameters.pop(\"run.results_dir\")\n if \"modulePath\" in test_parameters:\n test_path = test_parameters.pop(\"modulePath\")\n else:\n test_path = None\n if isinstance(test_class, str):\n module_name = os.path.basename(test_path).split(\".\")[0]\n test_module_dir = os.path.abspath(os.path.dirname(test_path))\n # Tests with local dir imports need this\n try:\n sys.path.insert(0, test_module_dir)\n test_module = importlib.import_module(module_name)\n except: # pylint: disable=W0702\n # On load_module exception we fake the test class and pass\n # the exc_info as parameter to be logged.\n test_parameters[\"methodName\"] = \"test\"\n exception = stacktrace.prepare_exc_info(sys.exc_info())\n test_parameters[\"exception\"] = exception\n return TestError(**test_parameters)\n finally:\n if test_module_dir in sys.path:\n sys.path.remove(test_module_dir)\n for _, obj in inspect.getmembers(test_module):\n if (\n inspect.isclass(obj)\n and obj.__name__ == test_class\n and inspect.getmodule(obj) == test_module\n ):\n if issubclass(obj, test.Test):\n test_class = obj\n break\n test_instance = test_class(**test_parameters)\n\n return test_instance\n", "path": "avocado/core/utils/loader.py"}]}
| 1,522 | 210 |
gh_patches_debug_56084
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-5611
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/inference/benchmark_ops/benchmark_rmsnorm.py`
Content:
```
1 import torch
2
3 from colossalai.kernel.kernel_loader import InferenceOpsLoader
4 from colossalai.kernel.triton import rms_layernorm
5
6 try:
7 import triton # noqa
8 except ImportError:
9 print("please install triton from https://github.com/openai/triton")
10
11 inference_ops = InferenceOpsLoader().load()
12
13 # Triton benchmark plot attributions
14 configs = [
15 triton.testing.Benchmark(
16 x_names=["SEQUENCE_TOTAL"],
17 x_vals=[i for i in range(128, 1025, 128)],
18 line_arg="provider",
19 line_vals=[
20 "vllm_rms_layernorm",
21 "triton_rms_layernorm",
22 "cuda_rms_layernorm",
23 "vllm_rms_layernorm_with_residual",
24 "triton_rms_layernorm_with_residual",
25 "cuda_rms_layernorm_with_residual",
26 ],
27 line_names=[
28 "vllm_rms_layernorm",
29 "triton_rms_layernorm",
30 "cuda_rms_layernorm",
31 "vllm_rms_layernorm_with_residual",
32 "triton_rms_layernorm_with_residual",
33 "cuda_rms_layernorm_with_residual",
34 ],
35 styles=[("red", "-"), ("blue", "-"), ("yellow", "-"), ("red", "--"), ("blue", "--"), ("yellow", "--")],
36 ylabel="ms",
37 plot_name=f"RMSNorm benchmarking results",
38 args={"HIDDEN_SIZE": 1024},
39 )
40 ]
41
42
43 @triton.testing.perf_report(configs)
44 def benchmark_rms_layernorm(
45 provider: str,
46 SEQUENCE_TOTAL: int,
47 HIDDEN_SIZE: int,
48 ):
49 try:
50 from vllm.model_executor.layers.layernorm import RMSNorm
51 except ImportError:
52 raise ImportError("Please install vllm from https://github.com/vllm-project/vllm")
53
54 warmup = 10
55 rep = 1000
56
57 dtype = torch.float16
58 eps = 1e-5
59 x_shape = (SEQUENCE_TOTAL, HIDDEN_SIZE)
60 w_shape = (x_shape[-1],)
61 residual = torch.rand(x_shape, dtype=dtype, device="cuda")
62 weight = torch.ones(w_shape, dtype=dtype, device="cuda")
63 vllm_norm = RMSNorm(hidden_size=HIDDEN_SIZE, eps=eps).to(dtype=dtype, device="cuda")
64 x = -2.3 + 0.5 * torch.randn(x_shape, dtype=dtype, device="cuda")
65 if provider == "vllm_rms_layernorm":
66 fn = lambda: vllm_norm(x)
67 elif provider == "triton_rms_layernorm":
68 fn = lambda: rms_layernorm(x, weight, eps=eps)
69 elif provider == "cuda_rms_layernorm":
70 out = torch.empty_like(x)
71 fn = lambda: inference_ops.rms_layernorm(out, x, weight, eps)
72 elif provider == "vllm_rms_layernorm_with_residual":
73 fn = lambda: vllm_norm(x, residual=residual)
74 elif provider == "triton_rms_layernorm_with_residual":
75 fn = lambda: rms_layernorm(x, weight, eps=eps, residual=residual)
76 elif provider == "cuda_rms_layernorm_with_residual":
77 fn = lambda: inference_ops.fused_add_rms_layernorm(x, residual, weight, eps)
78 else:
79 raise ValueError("Undefined provider.")
80
81 ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)
82
83 return ms
84
85
86 if __name__ == "__main__":
87 benchmark_rms_layernorm.run(save_path=".", print_data=True)
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/inference/benchmark_ops/benchmark_rmsnorm.py b/examples/inference/benchmark_ops/benchmark_rmsnorm.py
--- a/examples/inference/benchmark_ops/benchmark_rmsnorm.py
+++ b/examples/inference/benchmark_ops/benchmark_rmsnorm.py
@@ -35,7 +35,7 @@
styles=[("red", "-"), ("blue", "-"), ("yellow", "-"), ("red", "--"), ("blue", "--"), ("yellow", "--")],
ylabel="ms",
plot_name=f"RMSNorm benchmarking results",
- args={"HIDDEN_SIZE": 1024},
+ args={"HIDDEN_SIZE": 5120},
)
]
|
{"golden_diff": "diff --git a/examples/inference/benchmark_ops/benchmark_rmsnorm.py b/examples/inference/benchmark_ops/benchmark_rmsnorm.py\n--- a/examples/inference/benchmark_ops/benchmark_rmsnorm.py\n+++ b/examples/inference/benchmark_ops/benchmark_rmsnorm.py\n@@ -35,7 +35,7 @@\n styles=[(\"red\", \"-\"), (\"blue\", \"-\"), (\"yellow\", \"-\"), (\"red\", \"--\"), (\"blue\", \"--\"), (\"yellow\", \"--\")],\n ylabel=\"ms\",\n plot_name=f\"RMSNorm benchmarking results\",\n- args={\"HIDDEN_SIZE\": 1024},\n+ args={\"HIDDEN_SIZE\": 5120},\n )\n ]\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import torch\n\nfrom colossalai.kernel.kernel_loader import InferenceOpsLoader\nfrom colossalai.kernel.triton import rms_layernorm\n\ntry:\n import triton # noqa\nexcept ImportError:\n print(\"please install triton from https://github.com/openai/triton\")\n\ninference_ops = InferenceOpsLoader().load()\n\n# Triton benchmark plot attributions\nconfigs = [\n triton.testing.Benchmark(\n x_names=[\"SEQUENCE_TOTAL\"],\n x_vals=[i for i in range(128, 1025, 128)],\n line_arg=\"provider\",\n line_vals=[\n \"vllm_rms_layernorm\",\n \"triton_rms_layernorm\",\n \"cuda_rms_layernorm\",\n \"vllm_rms_layernorm_with_residual\",\n \"triton_rms_layernorm_with_residual\",\n \"cuda_rms_layernorm_with_residual\",\n ],\n line_names=[\n \"vllm_rms_layernorm\",\n \"triton_rms_layernorm\",\n \"cuda_rms_layernorm\",\n \"vllm_rms_layernorm_with_residual\",\n \"triton_rms_layernorm_with_residual\",\n \"cuda_rms_layernorm_with_residual\",\n ],\n styles=[(\"red\", \"-\"), (\"blue\", \"-\"), (\"yellow\", \"-\"), (\"red\", \"--\"), (\"blue\", \"--\"), (\"yellow\", \"--\")],\n ylabel=\"ms\",\n plot_name=f\"RMSNorm benchmarking results\",\n args={\"HIDDEN_SIZE\": 1024},\n )\n]\n\n\[email protected]_report(configs)\ndef benchmark_rms_layernorm(\n provider: str,\n SEQUENCE_TOTAL: int,\n HIDDEN_SIZE: int,\n):\n try:\n from vllm.model_executor.layers.layernorm import RMSNorm\n except ImportError:\n raise ImportError(\"Please install vllm from https://github.com/vllm-project/vllm\")\n\n warmup = 10\n rep = 1000\n\n dtype = torch.float16\n eps = 1e-5\n x_shape = (SEQUENCE_TOTAL, HIDDEN_SIZE)\n w_shape = (x_shape[-1],)\n residual = torch.rand(x_shape, dtype=dtype, device=\"cuda\")\n weight = torch.ones(w_shape, dtype=dtype, device=\"cuda\")\n vllm_norm = RMSNorm(hidden_size=HIDDEN_SIZE, eps=eps).to(dtype=dtype, device=\"cuda\")\n x = -2.3 + 0.5 * torch.randn(x_shape, dtype=dtype, device=\"cuda\")\n if provider == \"vllm_rms_layernorm\":\n fn = lambda: vllm_norm(x)\n elif provider == \"triton_rms_layernorm\":\n fn = lambda: rms_layernorm(x, weight, eps=eps)\n elif provider == \"cuda_rms_layernorm\":\n out = torch.empty_like(x)\n fn = lambda: inference_ops.rms_layernorm(out, x, weight, eps)\n elif provider == \"vllm_rms_layernorm_with_residual\":\n fn = lambda: vllm_norm(x, residual=residual)\n elif provider == \"triton_rms_layernorm_with_residual\":\n fn = lambda: rms_layernorm(x, weight, eps=eps, residual=residual)\n elif provider == \"cuda_rms_layernorm_with_residual\":\n fn = lambda: inference_ops.fused_add_rms_layernorm(x, residual, weight, eps)\n else:\n raise ValueError(\"Undefined provider.\")\n\n ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)\n\n return ms\n\n\nif __name__ == \"__main__\":\n benchmark_rms_layernorm.run(save_path=\".\", print_data=True)\n", "path": "examples/inference/benchmark_ops/benchmark_rmsnorm.py"}], "after_files": 
[{"content": "import torch\n\nfrom colossalai.kernel.kernel_loader import InferenceOpsLoader\nfrom colossalai.kernel.triton import rms_layernorm\n\ntry:\n import triton # noqa\nexcept ImportError:\n print(\"please install triton from https://github.com/openai/triton\")\n\ninference_ops = InferenceOpsLoader().load()\n\n# Triton benchmark plot attributions\nconfigs = [\n triton.testing.Benchmark(\n x_names=[\"SEQUENCE_TOTAL\"],\n x_vals=[i for i in range(128, 1025, 128)],\n line_arg=\"provider\",\n line_vals=[\n \"vllm_rms_layernorm\",\n \"triton_rms_layernorm\",\n \"cuda_rms_layernorm\",\n \"vllm_rms_layernorm_with_residual\",\n \"triton_rms_layernorm_with_residual\",\n \"cuda_rms_layernorm_with_residual\",\n ],\n line_names=[\n \"vllm_rms_layernorm\",\n \"triton_rms_layernorm\",\n \"cuda_rms_layernorm\",\n \"vllm_rms_layernorm_with_residual\",\n \"triton_rms_layernorm_with_residual\",\n \"cuda_rms_layernorm_with_residual\",\n ],\n styles=[(\"red\", \"-\"), (\"blue\", \"-\"), (\"yellow\", \"-\"), (\"red\", \"--\"), (\"blue\", \"--\"), (\"yellow\", \"--\")],\n ylabel=\"ms\",\n plot_name=f\"RMSNorm benchmarking results\",\n args={\"HIDDEN_SIZE\": 5120},\n )\n]\n\n\[email protected]_report(configs)\ndef benchmark_rms_layernorm(\n provider: str,\n SEQUENCE_TOTAL: int,\n HIDDEN_SIZE: int,\n):\n try:\n from vllm.model_executor.layers.layernorm import RMSNorm\n except ImportError:\n raise ImportError(\"Please install vllm from https://github.com/vllm-project/vllm\")\n\n warmup = 10\n rep = 1000\n\n dtype = torch.float16\n eps = 1e-5\n x_shape = (SEQUENCE_TOTAL, HIDDEN_SIZE)\n w_shape = (x_shape[-1],)\n residual = torch.rand(x_shape, dtype=dtype, device=\"cuda\")\n weight = torch.ones(w_shape, dtype=dtype, device=\"cuda\")\n vllm_norm = RMSNorm(hidden_size=HIDDEN_SIZE, eps=eps).to(dtype=dtype, device=\"cuda\")\n x = -2.3 + 0.5 * torch.randn(x_shape, dtype=dtype, device=\"cuda\")\n if provider == \"vllm_rms_layernorm\":\n fn = lambda: vllm_norm(x)\n elif provider == \"triton_rms_layernorm\":\n fn = lambda: rms_layernorm(x, weight, eps=eps)\n elif provider == \"cuda_rms_layernorm\":\n out = torch.empty_like(x)\n fn = lambda: inference_ops.rms_layernorm(out, x, weight, eps)\n elif provider == \"vllm_rms_layernorm_with_residual\":\n fn = lambda: vllm_norm(x, residual=residual)\n elif provider == \"triton_rms_layernorm_with_residual\":\n fn = lambda: rms_layernorm(x, weight, eps=eps, residual=residual)\n elif provider == \"cuda_rms_layernorm_with_residual\":\n fn = lambda: inference_ops.fused_add_rms_layernorm(x, residual, weight, eps)\n else:\n raise ValueError(\"Undefined provider.\")\n\n ms = triton.testing.do_bench(fn, warmup=warmup, rep=rep)\n\n return ms\n\n\nif __name__ == \"__main__\":\n benchmark_rms_layernorm.run(save_path=\".\", print_data=True)\n", "path": "examples/inference/benchmark_ops/benchmark_rmsnorm.py"}]}
| 1,314 | 154 |
gh_patches_debug_267
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-3010
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
rsa.rsa_recover_prime_factors() should return p > q
The documentation for `rsa_recover_prime_factors()` warns that it returns `p` and `q` such that `p < q`. However, things like OpenSSL and BoringSSL seem to require that `p > q`. Given this, would it be feasible to change the order around in cryptography so that it lines up with OpenSSL?
See also: http://crypto.stackexchange.com/questions/18084/in-rsa-why-does-p-have-to-be-bigger-than-q-where-n-p-times-q. @briansmith can provide more commentary if needed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/hazmat/primitives/asymmetric/rsa.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import abc
8 from fractions import gcd
9
10 import six
11
12 from cryptography import utils
13 from cryptography.exceptions import UnsupportedAlgorithm, _Reasons
14 from cryptography.hazmat.backends.interfaces import RSABackend
15
16
17 @six.add_metaclass(abc.ABCMeta)
18 class RSAPrivateKey(object):
19 @abc.abstractmethod
20 def signer(self, padding, algorithm):
21 """
22 Returns an AsymmetricSignatureContext used for signing data.
23 """
24
25 @abc.abstractmethod
26 def decrypt(self, ciphertext, padding):
27 """
28 Decrypts the provided ciphertext.
29 """
30
31 @abc.abstractproperty
32 def key_size(self):
33 """
34 The bit length of the public modulus.
35 """
36
37 @abc.abstractmethod
38 def public_key(self):
39 """
40 The RSAPublicKey associated with this private key.
41 """
42
43 @abc.abstractmethod
44 def sign(self, data, padding, algorithm):
45 """
46 Signs the data.
47 """
48
49
50 @six.add_metaclass(abc.ABCMeta)
51 class RSAPrivateKeyWithSerialization(RSAPrivateKey):
52 @abc.abstractmethod
53 def private_numbers(self):
54 """
55 Returns an RSAPrivateNumbers.
56 """
57
58 @abc.abstractmethod
59 def private_bytes(self, encoding, format, encryption_algorithm):
60 """
61 Returns the key serialized as bytes.
62 """
63
64
65 @six.add_metaclass(abc.ABCMeta)
66 class RSAPublicKey(object):
67 @abc.abstractmethod
68 def verifier(self, signature, padding, algorithm):
69 """
70 Returns an AsymmetricVerificationContext used for verifying signatures.
71 """
72
73 @abc.abstractmethod
74 def encrypt(self, plaintext, padding):
75 """
76 Encrypts the given plaintext.
77 """
78
79 @abc.abstractproperty
80 def key_size(self):
81 """
82 The bit length of the public modulus.
83 """
84
85 @abc.abstractmethod
86 def public_numbers(self):
87 """
88 Returns an RSAPublicNumbers
89 """
90
91 @abc.abstractmethod
92 def public_bytes(self, encoding, format):
93 """
94 Returns the key serialized as bytes.
95 """
96
97 @abc.abstractmethod
98 def verify(self, signature, data, padding, algorithm):
99 """
100 Verifies the signature of the data.
101 """
102
103
104 RSAPublicKeyWithSerialization = RSAPublicKey
105
106
107 def generate_private_key(public_exponent, key_size, backend):
108 if not isinstance(backend, RSABackend):
109 raise UnsupportedAlgorithm(
110 "Backend object does not implement RSABackend.",
111 _Reasons.BACKEND_MISSING_INTERFACE
112 )
113
114 _verify_rsa_parameters(public_exponent, key_size)
115 return backend.generate_rsa_private_key(public_exponent, key_size)
116
117
118 def _verify_rsa_parameters(public_exponent, key_size):
119 if public_exponent < 3:
120 raise ValueError("public_exponent must be >= 3.")
121
122 if public_exponent & 1 == 0:
123 raise ValueError("public_exponent must be odd.")
124
125 if key_size < 512:
126 raise ValueError("key_size must be at least 512-bits.")
127
128
129 def _check_private_key_components(p, q, private_exponent, dmp1, dmq1, iqmp,
130 public_exponent, modulus):
131 if modulus < 3:
132 raise ValueError("modulus must be >= 3.")
133
134 if p >= modulus:
135 raise ValueError("p must be < modulus.")
136
137 if q >= modulus:
138 raise ValueError("q must be < modulus.")
139
140 if dmp1 >= modulus:
141 raise ValueError("dmp1 must be < modulus.")
142
143 if dmq1 >= modulus:
144 raise ValueError("dmq1 must be < modulus.")
145
146 if iqmp >= modulus:
147 raise ValueError("iqmp must be < modulus.")
148
149 if private_exponent >= modulus:
150 raise ValueError("private_exponent must be < modulus.")
151
152 if public_exponent < 3 or public_exponent >= modulus:
153 raise ValueError("public_exponent must be >= 3 and < modulus.")
154
155 if public_exponent & 1 == 0:
156 raise ValueError("public_exponent must be odd.")
157
158 if dmp1 & 1 == 0:
159 raise ValueError("dmp1 must be odd.")
160
161 if dmq1 & 1 == 0:
162 raise ValueError("dmq1 must be odd.")
163
164 if p * q != modulus:
165 raise ValueError("p*q must equal modulus.")
166
167
168 def _check_public_key_components(e, n):
169 if n < 3:
170 raise ValueError("n must be >= 3.")
171
172 if e < 3 or e >= n:
173 raise ValueError("e must be >= 3 and < n.")
174
175 if e & 1 == 0:
176 raise ValueError("e must be odd.")
177
178
179 def _modinv(e, m):
180 """
181 Modular Multiplicative Inverse. Returns x such that: (x*e) mod m == 1
182 """
183 x1, y1, x2, y2 = 1, 0, 0, 1
184 a, b = e, m
185 while b > 0:
186 q, r = divmod(a, b)
187 xn, yn = x1 - q * x2, y1 - q * y2
188 a, b, x1, y1, x2, y2 = b, r, x2, y2, xn, yn
189 return x1 % m
190
191
192 def rsa_crt_iqmp(p, q):
193 """
194 Compute the CRT (q ** -1) % p value from RSA primes p and q.
195 """
196 return _modinv(q, p)
197
198
199 def rsa_crt_dmp1(private_exponent, p):
200 """
201 Compute the CRT private_exponent % (p - 1) value from the RSA
202 private_exponent (d) and p.
203 """
204 return private_exponent % (p - 1)
205
206
207 def rsa_crt_dmq1(private_exponent, q):
208 """
209 Compute the CRT private_exponent % (q - 1) value from the RSA
210 private_exponent (d) and q.
211 """
212 return private_exponent % (q - 1)
213
214
215 # Controls the number of iterations rsa_recover_prime_factors will perform
216 # to obtain the prime factors. Each iteration increments by 2 so the actual
217 # maximum attempts is half this number.
218 _MAX_RECOVERY_ATTEMPTS = 1000
219
220
221 def rsa_recover_prime_factors(n, e, d):
222 """
223 Compute factors p and q from the private exponent d. We assume that n has
224 no more than two factors. This function is adapted from code in PyCrypto.
225 """
226 # See 8.2.2(i) in Handbook of Applied Cryptography.
227 ktot = d * e - 1
228 # The quantity d*e-1 is a multiple of phi(n), even,
229 # and can be represented as t*2^s.
230 t = ktot
231 while t % 2 == 0:
232 t = t // 2
233 # Cycle through all multiplicative inverses in Zn.
234 # The algorithm is non-deterministic, but there is a 50% chance
235 # any candidate a leads to successful factoring.
236 # See "Digitalized Signatures and Public Key Functions as Intractable
237 # as Factorization", M. Rabin, 1979
238 spotted = False
239 a = 2
240 while not spotted and a < _MAX_RECOVERY_ATTEMPTS:
241 k = t
242 # Cycle through all values a^{t*2^i}=a^k
243 while k < ktot:
244 cand = pow(a, k, n)
245 # Check if a^k is a non-trivial root of unity (mod n)
246 if cand != 1 and cand != (n - 1) and pow(cand, 2, n) == 1:
247 # We have found a number such that (cand-1)(cand+1)=0 (mod n).
248 # Either of the terms divides n.
249 p = gcd(cand + 1, n)
250 spotted = True
251 break
252 k *= 2
253 # This value was not any good... let's try another!
254 a += 2
255 if not spotted:
256 raise ValueError("Unable to compute factors p and q from exponent d.")
257 # Found !
258 q, r = divmod(n, p)
259 assert r == 0
260
261 return (p, q)
262
263
264 class RSAPrivateNumbers(object):
265 def __init__(self, p, q, d, dmp1, dmq1, iqmp,
266 public_numbers):
267 if (
268 not isinstance(p, six.integer_types) or
269 not isinstance(q, six.integer_types) or
270 not isinstance(d, six.integer_types) or
271 not isinstance(dmp1, six.integer_types) or
272 not isinstance(dmq1, six.integer_types) or
273 not isinstance(iqmp, six.integer_types)
274 ):
275 raise TypeError(
276 "RSAPrivateNumbers p, q, d, dmp1, dmq1, iqmp arguments must"
277 " all be an integers."
278 )
279
280 if not isinstance(public_numbers, RSAPublicNumbers):
281 raise TypeError(
282 "RSAPrivateNumbers public_numbers must be an RSAPublicNumbers"
283 " instance."
284 )
285
286 self._p = p
287 self._q = q
288 self._d = d
289 self._dmp1 = dmp1
290 self._dmq1 = dmq1
291 self._iqmp = iqmp
292 self._public_numbers = public_numbers
293
294 p = utils.read_only_property("_p")
295 q = utils.read_only_property("_q")
296 d = utils.read_only_property("_d")
297 dmp1 = utils.read_only_property("_dmp1")
298 dmq1 = utils.read_only_property("_dmq1")
299 iqmp = utils.read_only_property("_iqmp")
300 public_numbers = utils.read_only_property("_public_numbers")
301
302 def private_key(self, backend):
303 return backend.load_rsa_private_numbers(self)
304
305 def __eq__(self, other):
306 if not isinstance(other, RSAPrivateNumbers):
307 return NotImplemented
308
309 return (
310 self.p == other.p and
311 self.q == other.q and
312 self.d == other.d and
313 self.dmp1 == other.dmp1 and
314 self.dmq1 == other.dmq1 and
315 self.iqmp == other.iqmp and
316 self.public_numbers == other.public_numbers
317 )
318
319 def __ne__(self, other):
320 return not self == other
321
322 def __hash__(self):
323 return hash((
324 self.p,
325 self.q,
326 self.d,
327 self.dmp1,
328 self.dmq1,
329 self.iqmp,
330 self.public_numbers,
331 ))
332
333
334 class RSAPublicNumbers(object):
335 def __init__(self, e, n):
336 if (
337 not isinstance(e, six.integer_types) or
338 not isinstance(n, six.integer_types)
339 ):
340 raise TypeError("RSAPublicNumbers arguments must be integers.")
341
342 self._e = e
343 self._n = n
344
345 e = utils.read_only_property("_e")
346 n = utils.read_only_property("_n")
347
348 def public_key(self, backend):
349 return backend.load_rsa_public_numbers(self)
350
351 def __repr__(self):
352 return "<RSAPublicNumbers(e={0.e}, n={0.n})>".format(self)
353
354 def __eq__(self, other):
355 if not isinstance(other, RSAPublicNumbers):
356 return NotImplemented
357
358 return self.e == other.e and self.n == other.n
359
360 def __ne__(self, other):
361 return not self == other
362
363 def __hash__(self):
364 return hash((self.e, self.n))
365
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/cryptography/hazmat/primitives/asymmetric/rsa.py b/src/cryptography/hazmat/primitives/asymmetric/rsa.py
--- a/src/cryptography/hazmat/primitives/asymmetric/rsa.py
+++ b/src/cryptography/hazmat/primitives/asymmetric/rsa.py
@@ -257,7 +257,7 @@
# Found !
q, r = divmod(n, p)
assert r == 0
-
+ p, q = sorted((p, q), reverse=True)
return (p, q)
|
{"golden_diff": "diff --git a/src/cryptography/hazmat/primitives/asymmetric/rsa.py b/src/cryptography/hazmat/primitives/asymmetric/rsa.py\n--- a/src/cryptography/hazmat/primitives/asymmetric/rsa.py\n+++ b/src/cryptography/hazmat/primitives/asymmetric/rsa.py\n@@ -257,7 +257,7 @@\n # Found !\n q, r = divmod(n, p)\n assert r == 0\n-\n+ p, q = sorted((p, q), reverse=True)\n return (p, q)\n", "issue": "rsa.rsa_recover_prime_factors() should return p > q\nThe documentation for `rsa_recover_prime_factors()` warns that it returns `p` and `q` such that `p < q`. However, things like OpenSSL and BoringSSL seem to require that `p > q`. Given this, would it be feasible to change the order around in cryptography so that it lines up with OpenSSL?\n\nSee also: http://crypto.stackexchange.com/questions/18084/in-rsa-why-does-p-have-to-be-bigger-than-q-where-n-p-times-q. @briansmith can provide more commentary if needed.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport abc\nfrom fractions import gcd\n\nimport six\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.backends.interfaces import RSABackend\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass RSAPrivateKey(object):\n @abc.abstractmethod\n def signer(self, padding, algorithm):\n \"\"\"\n Returns an AsymmetricSignatureContext used for signing data.\n \"\"\"\n\n @abc.abstractmethod\n def decrypt(self, ciphertext, padding):\n \"\"\"\n Decrypts the provided ciphertext.\n \"\"\"\n\n @abc.abstractproperty\n def key_size(self):\n \"\"\"\n The bit length of the public modulus.\n \"\"\"\n\n @abc.abstractmethod\n def public_key(self):\n \"\"\"\n The RSAPublicKey associated with this private key.\n \"\"\"\n\n @abc.abstractmethod\n def sign(self, data, padding, algorithm):\n \"\"\"\n Signs the data.\n \"\"\"\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass RSAPrivateKeyWithSerialization(RSAPrivateKey):\n @abc.abstractmethod\n def private_numbers(self):\n \"\"\"\n Returns an RSAPrivateNumbers.\n \"\"\"\n\n @abc.abstractmethod\n def private_bytes(self, encoding, format, encryption_algorithm):\n \"\"\"\n Returns the key serialized as bytes.\n \"\"\"\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass RSAPublicKey(object):\n @abc.abstractmethod\n def verifier(self, signature, padding, algorithm):\n \"\"\"\n Returns an AsymmetricVerificationContext used for verifying signatures.\n \"\"\"\n\n @abc.abstractmethod\n def encrypt(self, plaintext, padding):\n \"\"\"\n Encrypts the given plaintext.\n \"\"\"\n\n @abc.abstractproperty\n def key_size(self):\n \"\"\"\n The bit length of the public modulus.\n \"\"\"\n\n @abc.abstractmethod\n def public_numbers(self):\n \"\"\"\n Returns an RSAPublicNumbers\n \"\"\"\n\n @abc.abstractmethod\n def public_bytes(self, encoding, format):\n \"\"\"\n Returns the key serialized as bytes.\n \"\"\"\n\n @abc.abstractmethod\n def verify(self, signature, data, padding, algorithm):\n \"\"\"\n Verifies the signature of the data.\n \"\"\"\n\n\nRSAPublicKeyWithSerialization = RSAPublicKey\n\n\ndef generate_private_key(public_exponent, key_size, backend):\n if not isinstance(backend, RSABackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement RSABackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n )\n\n 
_verify_rsa_parameters(public_exponent, key_size)\n return backend.generate_rsa_private_key(public_exponent, key_size)\n\n\ndef _verify_rsa_parameters(public_exponent, key_size):\n if public_exponent < 3:\n raise ValueError(\"public_exponent must be >= 3.\")\n\n if public_exponent & 1 == 0:\n raise ValueError(\"public_exponent must be odd.\")\n\n if key_size < 512:\n raise ValueError(\"key_size must be at least 512-bits.\")\n\n\ndef _check_private_key_components(p, q, private_exponent, dmp1, dmq1, iqmp,\n public_exponent, modulus):\n if modulus < 3:\n raise ValueError(\"modulus must be >= 3.\")\n\n if p >= modulus:\n raise ValueError(\"p must be < modulus.\")\n\n if q >= modulus:\n raise ValueError(\"q must be < modulus.\")\n\n if dmp1 >= modulus:\n raise ValueError(\"dmp1 must be < modulus.\")\n\n if dmq1 >= modulus:\n raise ValueError(\"dmq1 must be < modulus.\")\n\n if iqmp >= modulus:\n raise ValueError(\"iqmp must be < modulus.\")\n\n if private_exponent >= modulus:\n raise ValueError(\"private_exponent must be < modulus.\")\n\n if public_exponent < 3 or public_exponent >= modulus:\n raise ValueError(\"public_exponent must be >= 3 and < modulus.\")\n\n if public_exponent & 1 == 0:\n raise ValueError(\"public_exponent must be odd.\")\n\n if dmp1 & 1 == 0:\n raise ValueError(\"dmp1 must be odd.\")\n\n if dmq1 & 1 == 0:\n raise ValueError(\"dmq1 must be odd.\")\n\n if p * q != modulus:\n raise ValueError(\"p*q must equal modulus.\")\n\n\ndef _check_public_key_components(e, n):\n if n < 3:\n raise ValueError(\"n must be >= 3.\")\n\n if e < 3 or e >= n:\n raise ValueError(\"e must be >= 3 and < n.\")\n\n if e & 1 == 0:\n raise ValueError(\"e must be odd.\")\n\n\ndef _modinv(e, m):\n \"\"\"\n Modular Multiplicative Inverse. Returns x such that: (x*e) mod m == 1\n \"\"\"\n x1, y1, x2, y2 = 1, 0, 0, 1\n a, b = e, m\n while b > 0:\n q, r = divmod(a, b)\n xn, yn = x1 - q * x2, y1 - q * y2\n a, b, x1, y1, x2, y2 = b, r, x2, y2, xn, yn\n return x1 % m\n\n\ndef rsa_crt_iqmp(p, q):\n \"\"\"\n Compute the CRT (q ** -1) % p value from RSA primes p and q.\n \"\"\"\n return _modinv(q, p)\n\n\ndef rsa_crt_dmp1(private_exponent, p):\n \"\"\"\n Compute the CRT private_exponent % (p - 1) value from the RSA\n private_exponent (d) and p.\n \"\"\"\n return private_exponent % (p - 1)\n\n\ndef rsa_crt_dmq1(private_exponent, q):\n \"\"\"\n Compute the CRT private_exponent % (q - 1) value from the RSA\n private_exponent (d) and q.\n \"\"\"\n return private_exponent % (q - 1)\n\n\n# Controls the number of iterations rsa_recover_prime_factors will perform\n# to obtain the prime factors. Each iteration increments by 2 so the actual\n# maximum attempts is half this number.\n_MAX_RECOVERY_ATTEMPTS = 1000\n\n\ndef rsa_recover_prime_factors(n, e, d):\n \"\"\"\n Compute factors p and q from the private exponent d. We assume that n has\n no more than two factors. This function is adapted from code in PyCrypto.\n \"\"\"\n # See 8.2.2(i) in Handbook of Applied Cryptography.\n ktot = d * e - 1\n # The quantity d*e-1 is a multiple of phi(n), even,\n # and can be represented as t*2^s.\n t = ktot\n while t % 2 == 0:\n t = t // 2\n # Cycle through all multiplicative inverses in Zn.\n # The algorithm is non-deterministic, but there is a 50% chance\n # any candidate a leads to successful factoring.\n # See \"Digitalized Signatures and Public Key Functions as Intractable\n # as Factorization\", M. 
Rabin, 1979\n spotted = False\n a = 2\n while not spotted and a < _MAX_RECOVERY_ATTEMPTS:\n k = t\n # Cycle through all values a^{t*2^i}=a^k\n while k < ktot:\n cand = pow(a, k, n)\n # Check if a^k is a non-trivial root of unity (mod n)\n if cand != 1 and cand != (n - 1) and pow(cand, 2, n) == 1:\n # We have found a number such that (cand-1)(cand+1)=0 (mod n).\n # Either of the terms divides n.\n p = gcd(cand + 1, n)\n spotted = True\n break\n k *= 2\n # This value was not any good... let's try another!\n a += 2\n if not spotted:\n raise ValueError(\"Unable to compute factors p and q from exponent d.\")\n # Found !\n q, r = divmod(n, p)\n assert r == 0\n\n return (p, q)\n\n\nclass RSAPrivateNumbers(object):\n def __init__(self, p, q, d, dmp1, dmq1, iqmp,\n public_numbers):\n if (\n not isinstance(p, six.integer_types) or\n not isinstance(q, six.integer_types) or\n not isinstance(d, six.integer_types) or\n not isinstance(dmp1, six.integer_types) or\n not isinstance(dmq1, six.integer_types) or\n not isinstance(iqmp, six.integer_types)\n ):\n raise TypeError(\n \"RSAPrivateNumbers p, q, d, dmp1, dmq1, iqmp arguments must\"\n \" all be an integers.\"\n )\n\n if not isinstance(public_numbers, RSAPublicNumbers):\n raise TypeError(\n \"RSAPrivateNumbers public_numbers must be an RSAPublicNumbers\"\n \" instance.\"\n )\n\n self._p = p\n self._q = q\n self._d = d\n self._dmp1 = dmp1\n self._dmq1 = dmq1\n self._iqmp = iqmp\n self._public_numbers = public_numbers\n\n p = utils.read_only_property(\"_p\")\n q = utils.read_only_property(\"_q\")\n d = utils.read_only_property(\"_d\")\n dmp1 = utils.read_only_property(\"_dmp1\")\n dmq1 = utils.read_only_property(\"_dmq1\")\n iqmp = utils.read_only_property(\"_iqmp\")\n public_numbers = utils.read_only_property(\"_public_numbers\")\n\n def private_key(self, backend):\n return backend.load_rsa_private_numbers(self)\n\n def __eq__(self, other):\n if not isinstance(other, RSAPrivateNumbers):\n return NotImplemented\n\n return (\n self.p == other.p and\n self.q == other.q and\n self.d == other.d and\n self.dmp1 == other.dmp1 and\n self.dmq1 == other.dmq1 and\n self.iqmp == other.iqmp and\n self.public_numbers == other.public_numbers\n )\n\n def __ne__(self, other):\n return not self == other\n\n def __hash__(self):\n return hash((\n self.p,\n self.q,\n self.d,\n self.dmp1,\n self.dmq1,\n self.iqmp,\n self.public_numbers,\n ))\n\n\nclass RSAPublicNumbers(object):\n def __init__(self, e, n):\n if (\n not isinstance(e, six.integer_types) or\n not isinstance(n, six.integer_types)\n ):\n raise TypeError(\"RSAPublicNumbers arguments must be integers.\")\n\n self._e = e\n self._n = n\n\n e = utils.read_only_property(\"_e\")\n n = utils.read_only_property(\"_n\")\n\n def public_key(self, backend):\n return backend.load_rsa_public_numbers(self)\n\n def __repr__(self):\n return \"<RSAPublicNumbers(e={0.e}, n={0.n})>\".format(self)\n\n def __eq__(self, other):\n if not isinstance(other, RSAPublicNumbers):\n return NotImplemented\n\n return self.e == other.e and self.n == other.n\n\n def __ne__(self, other):\n return not self == other\n\n def __hash__(self):\n return hash((self.e, self.n))\n", "path": "src/cryptography/hazmat/primitives/asymmetric/rsa.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport abc\nfrom fractions import gcd\n\nimport six\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.backends.interfaces import RSABackend\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass RSAPrivateKey(object):\n @abc.abstractmethod\n def signer(self, padding, algorithm):\n \"\"\"\n Returns an AsymmetricSignatureContext used for signing data.\n \"\"\"\n\n @abc.abstractmethod\n def decrypt(self, ciphertext, padding):\n \"\"\"\n Decrypts the provided ciphertext.\n \"\"\"\n\n @abc.abstractproperty\n def key_size(self):\n \"\"\"\n The bit length of the public modulus.\n \"\"\"\n\n @abc.abstractmethod\n def public_key(self):\n \"\"\"\n The RSAPublicKey associated with this private key.\n \"\"\"\n\n @abc.abstractmethod\n def sign(self, data, padding, algorithm):\n \"\"\"\n Signs the data.\n \"\"\"\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass RSAPrivateKeyWithSerialization(RSAPrivateKey):\n @abc.abstractmethod\n def private_numbers(self):\n \"\"\"\n Returns an RSAPrivateNumbers.\n \"\"\"\n\n @abc.abstractmethod\n def private_bytes(self, encoding, format, encryption_algorithm):\n \"\"\"\n Returns the key serialized as bytes.\n \"\"\"\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass RSAPublicKey(object):\n @abc.abstractmethod\n def verifier(self, signature, padding, algorithm):\n \"\"\"\n Returns an AsymmetricVerificationContext used for verifying signatures.\n \"\"\"\n\n @abc.abstractmethod\n def encrypt(self, plaintext, padding):\n \"\"\"\n Encrypts the given plaintext.\n \"\"\"\n\n @abc.abstractproperty\n def key_size(self):\n \"\"\"\n The bit length of the public modulus.\n \"\"\"\n\n @abc.abstractmethod\n def public_numbers(self):\n \"\"\"\n Returns an RSAPublicNumbers\n \"\"\"\n\n @abc.abstractmethod\n def public_bytes(self, encoding, format):\n \"\"\"\n Returns the key serialized as bytes.\n \"\"\"\n\n @abc.abstractmethod\n def verify(self, signature, data, padding, algorithm):\n \"\"\"\n Verifies the signature of the data.\n \"\"\"\n\n\nRSAPublicKeyWithSerialization = RSAPublicKey\n\n\ndef generate_private_key(public_exponent, key_size, backend):\n if not isinstance(backend, RSABackend):\n raise UnsupportedAlgorithm(\n \"Backend object does not implement RSABackend.\",\n _Reasons.BACKEND_MISSING_INTERFACE\n )\n\n _verify_rsa_parameters(public_exponent, key_size)\n return backend.generate_rsa_private_key(public_exponent, key_size)\n\n\ndef _verify_rsa_parameters(public_exponent, key_size):\n if public_exponent < 3:\n raise ValueError(\"public_exponent must be >= 3.\")\n\n if public_exponent & 1 == 0:\n raise ValueError(\"public_exponent must be odd.\")\n\n if key_size < 512:\n raise ValueError(\"key_size must be at least 512-bits.\")\n\n\ndef _check_private_key_components(p, q, private_exponent, dmp1, dmq1, iqmp,\n public_exponent, modulus):\n if modulus < 3:\n raise ValueError(\"modulus must be >= 3.\")\n\n if p >= modulus:\n raise ValueError(\"p must be < modulus.\")\n\n if q >= modulus:\n raise ValueError(\"q must be < modulus.\")\n\n if dmp1 >= modulus:\n raise ValueError(\"dmp1 must be < modulus.\")\n\n if dmq1 >= modulus:\n raise ValueError(\"dmq1 must be < modulus.\")\n\n if iqmp >= modulus:\n raise ValueError(\"iqmp must be < modulus.\")\n\n if private_exponent >= modulus:\n raise ValueError(\"private_exponent must be < modulus.\")\n\n if 
public_exponent < 3 or public_exponent >= modulus:\n raise ValueError(\"public_exponent must be >= 3 and < modulus.\")\n\n if public_exponent & 1 == 0:\n raise ValueError(\"public_exponent must be odd.\")\n\n if dmp1 & 1 == 0:\n raise ValueError(\"dmp1 must be odd.\")\n\n if dmq1 & 1 == 0:\n raise ValueError(\"dmq1 must be odd.\")\n\n if p * q != modulus:\n raise ValueError(\"p*q must equal modulus.\")\n\n\ndef _check_public_key_components(e, n):\n if n < 3:\n raise ValueError(\"n must be >= 3.\")\n\n if e < 3 or e >= n:\n raise ValueError(\"e must be >= 3 and < n.\")\n\n if e & 1 == 0:\n raise ValueError(\"e must be odd.\")\n\n\ndef _modinv(e, m):\n \"\"\"\n Modular Multiplicative Inverse. Returns x such that: (x*e) mod m == 1\n \"\"\"\n x1, y1, x2, y2 = 1, 0, 0, 1\n a, b = e, m\n while b > 0:\n q, r = divmod(a, b)\n xn, yn = x1 - q * x2, y1 - q * y2\n a, b, x1, y1, x2, y2 = b, r, x2, y2, xn, yn\n return x1 % m\n\n\ndef rsa_crt_iqmp(p, q):\n \"\"\"\n Compute the CRT (q ** -1) % p value from RSA primes p and q.\n \"\"\"\n return _modinv(q, p)\n\n\ndef rsa_crt_dmp1(private_exponent, p):\n \"\"\"\n Compute the CRT private_exponent % (p - 1) value from the RSA\n private_exponent (d) and p.\n \"\"\"\n return private_exponent % (p - 1)\n\n\ndef rsa_crt_dmq1(private_exponent, q):\n \"\"\"\n Compute the CRT private_exponent % (q - 1) value from the RSA\n private_exponent (d) and q.\n \"\"\"\n return private_exponent % (q - 1)\n\n\n# Controls the number of iterations rsa_recover_prime_factors will perform\n# to obtain the prime factors. Each iteration increments by 2 so the actual\n# maximum attempts is half this number.\n_MAX_RECOVERY_ATTEMPTS = 1000\n\n\ndef rsa_recover_prime_factors(n, e, d):\n \"\"\"\n Compute factors p and q from the private exponent d. We assume that n has\n no more than two factors. This function is adapted from code in PyCrypto.\n \"\"\"\n # See 8.2.2(i) in Handbook of Applied Cryptography.\n ktot = d * e - 1\n # The quantity d*e-1 is a multiple of phi(n), even,\n # and can be represented as t*2^s.\n t = ktot\n while t % 2 == 0:\n t = t // 2\n # Cycle through all multiplicative inverses in Zn.\n # The algorithm is non-deterministic, but there is a 50% chance\n # any candidate a leads to successful factoring.\n # See \"Digitalized Signatures and Public Key Functions as Intractable\n # as Factorization\", M. Rabin, 1979\n spotted = False\n a = 2\n while not spotted and a < _MAX_RECOVERY_ATTEMPTS:\n k = t\n # Cycle through all values a^{t*2^i}=a^k\n while k < ktot:\n cand = pow(a, k, n)\n # Check if a^k is a non-trivial root of unity (mod n)\n if cand != 1 and cand != (n - 1) and pow(cand, 2, n) == 1:\n # We have found a number such that (cand-1)(cand+1)=0 (mod n).\n # Either of the terms divides n.\n p = gcd(cand + 1, n)\n spotted = True\n break\n k *= 2\n # This value was not any good... 
let's try another!\n a += 2\n if not spotted:\n raise ValueError(\"Unable to compute factors p and q from exponent d.\")\n # Found !\n q, r = divmod(n, p)\n assert r == 0\n p, q = sorted((p, q), reverse=True)\n return (p, q)\n\n\nclass RSAPrivateNumbers(object):\n def __init__(self, p, q, d, dmp1, dmq1, iqmp,\n public_numbers):\n if (\n not isinstance(p, six.integer_types) or\n not isinstance(q, six.integer_types) or\n not isinstance(d, six.integer_types) or\n not isinstance(dmp1, six.integer_types) or\n not isinstance(dmq1, six.integer_types) or\n not isinstance(iqmp, six.integer_types)\n ):\n raise TypeError(\n \"RSAPrivateNumbers p, q, d, dmp1, dmq1, iqmp arguments must\"\n \" all be an integers.\"\n )\n\n if not isinstance(public_numbers, RSAPublicNumbers):\n raise TypeError(\n \"RSAPrivateNumbers public_numbers must be an RSAPublicNumbers\"\n \" instance.\"\n )\n\n self._p = p\n self._q = q\n self._d = d\n self._dmp1 = dmp1\n self._dmq1 = dmq1\n self._iqmp = iqmp\n self._public_numbers = public_numbers\n\n p = utils.read_only_property(\"_p\")\n q = utils.read_only_property(\"_q\")\n d = utils.read_only_property(\"_d\")\n dmp1 = utils.read_only_property(\"_dmp1\")\n dmq1 = utils.read_only_property(\"_dmq1\")\n iqmp = utils.read_only_property(\"_iqmp\")\n public_numbers = utils.read_only_property(\"_public_numbers\")\n\n def private_key(self, backend):\n return backend.load_rsa_private_numbers(self)\n\n def __eq__(self, other):\n if not isinstance(other, RSAPrivateNumbers):\n return NotImplemented\n\n return (\n self.p == other.p and\n self.q == other.q and\n self.d == other.d and\n self.dmp1 == other.dmp1 and\n self.dmq1 == other.dmq1 and\n self.iqmp == other.iqmp and\n self.public_numbers == other.public_numbers\n )\n\n def __ne__(self, other):\n return not self == other\n\n def __hash__(self):\n return hash((\n self.p,\n self.q,\n self.d,\n self.dmp1,\n self.dmq1,\n self.iqmp,\n self.public_numbers,\n ))\n\n\nclass RSAPublicNumbers(object):\n def __init__(self, e, n):\n if (\n not isinstance(e, six.integer_types) or\n not isinstance(n, six.integer_types)\n ):\n raise TypeError(\"RSAPublicNumbers arguments must be integers.\")\n\n self._e = e\n self._n = n\n\n e = utils.read_only_property(\"_e\")\n n = utils.read_only_property(\"_n\")\n\n def public_key(self, backend):\n return backend.load_rsa_public_numbers(self)\n\n def __repr__(self):\n return \"<RSAPublicNumbers(e={0.e}, n={0.n})>\".format(self)\n\n def __eq__(self, other):\n if not isinstance(other, RSAPublicNumbers):\n return NotImplemented\n\n return self.e == other.e and self.n == other.n\n\n def __ne__(self, other):\n return not self == other\n\n def __hash__(self):\n return hash((self.e, self.n))\n", "path": "src/cryptography/hazmat/primitives/asymmetric/rsa.py"}]}
| 4,040 | 123 |
gh_patches_debug_33315
|
rasdani/github-patches
|
git_diff
|
cookiecutter__cookiecutter-666
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Idea: have a way to specify context via command line
Something like repeat arguments:
```
cookiecutter mytemplate -Cname=my-project -Cgithub-user=ionelmc
```
Or maybe the whole json?
```
cookiecutter mytemplate --context='{"name": "my-project", "github-user": "ionelmc"}'
```
Or variable arguments?
```
cookiecutter mytemplate --context-name=my-project --context-github-user=ionelmc
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cookiecutter/cli.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 cookiecutter.cli
6 -----------------
7
8 Main `cookiecutter` CLI.
9 """
10
11 import os
12 import sys
13 import logging
14 import json
15
16 import click
17
18 from cookiecutter import __version__
19 from cookiecutter.config import USER_CONFIG_PATH
20 from cookiecutter.main import cookiecutter
21 from cookiecutter.exceptions import (
22 OutputDirExistsException,
23 InvalidModeException,
24 FailedHookException,
25 UndefinedVariableInTemplate,
26 UnknownExtension,
27 RepositoryNotFound
28 )
29
30 logger = logging.getLogger(__name__)
31
32
33 def version_msg():
34 """Returns the Cookiecutter version, location and Python powering it."""
35 python_version = sys.version[:3]
36 location = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
37 message = u'Cookiecutter %(version)s from {} (Python {})'
38 return message.format(location, python_version)
39
40
41 @click.command(context_settings=dict(help_option_names=[u'-h', u'--help']))
42 @click.version_option(__version__, u'-V', u'--version', message=version_msg())
43 @click.argument(u'template')
44 @click.option(
45 u'--no-input', is_flag=True,
46 help=u'Do not prompt for parameters and only use cookiecutter.json '
47 u'file content',
48 )
49 @click.option(
50 u'-c', u'--checkout',
51 help=u'branch, tag or commit to checkout after git clone',
52 )
53 @click.option(
54 '-v', '--verbose',
55 is_flag=True, help='Print debug information', default=False
56 )
57 @click.option(
58 u'--replay', is_flag=True,
59 help=u'Do not prompt for parameters and only use information entered '
60 u'previously',
61 )
62 @click.option(
63 u'-f', u'--overwrite-if-exists', is_flag=True,
64 help=u'Overwrite the contents of the output directory if it already exists'
65 )
66 @click.option(
67 u'-o', u'--output-dir', default='.', type=click.Path(),
68 help=u'Where to output the generated project dir into'
69 )
70 @click.option(
71 u'--config-file', type=click.Path(), default=USER_CONFIG_PATH,
72 help=u'User configuration file'
73 )
74 @click.option(
75 u'--default-config', is_flag=True,
76 help=u'Do not load a config file. Use the defaults instead'
77 )
78 def main(template, no_input, checkout, verbose, replay, overwrite_if_exists,
79 output_dir, config_file, default_config):
80 """Create a project from a Cookiecutter project template (TEMPLATE)."""
81 if verbose:
82 logging.basicConfig(
83 format=u'%(levelname)s %(filename)s: %(message)s',
84 level=logging.DEBUG
85 )
86 else:
87 # Log info and above to console
88 logging.basicConfig(
89 format=u'%(levelname)s: %(message)s',
90 level=logging.INFO
91 )
92
93 try:
94 # If you _need_ to support a local template in a directory
95 # called 'help', use a qualified path to the directory.
96 if template == u'help':
97 click.echo(click.get_current_context().get_help())
98 sys.exit(0)
99
100 user_config = None if default_config else config_file
101
102 cookiecutter(
103 template, checkout, no_input,
104 replay=replay,
105 overwrite_if_exists=overwrite_if_exists,
106 output_dir=output_dir,
107 config_file=user_config
108 )
109 except (OutputDirExistsException,
110 InvalidModeException,
111 FailedHookException,
112 UnknownExtension,
113 RepositoryNotFound) as e:
114 click.echo(e)
115 sys.exit(1)
116 except UndefinedVariableInTemplate as undefined_err:
117 click.echo('{}'.format(undefined_err.message))
118 click.echo('Error message: {}'.format(undefined_err.error.message))
119
120 context_str = json.dumps(
121 undefined_err.context,
122 indent=4,
123 sort_keys=True
124 )
125 click.echo('Context: {}'.format(context_str))
126 sys.exit(1)
127
128
129 if __name__ == "__main__":
130 main()
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cookiecutter/cli.py b/cookiecutter/cli.py
--- a/cookiecutter/cli.py
+++ b/cookiecutter/cli.py
@@ -38,9 +38,23 @@
return message.format(location, python_version)
+def validate_extra_context(ctx, param, value):
+ for s in value:
+ if '=' not in s:
+ raise click.BadParameter(
+ 'EXTRA_CONTEXT should contain items of the form key=value; '
+ "'{}' doesn't match that form".format(s)
+ )
+
+ # Convert tuple -- e.g.: (u'program_name=foobar', u'startsecs=66')
+ # to dict -- e.g.: {'program_name': 'foobar', 'startsecs': '66'}
+ return dict(s.split('=', 1) for s in value) or None
+
+
@click.command(context_settings=dict(help_option_names=[u'-h', u'--help']))
@click.version_option(__version__, u'-V', u'--version', message=version_msg())
@click.argument(u'template')
+@click.argument(u'extra_context', nargs=-1, callback=validate_extra_context)
@click.option(
u'--no-input', is_flag=True,
help=u'Do not prompt for parameters and only use cookiecutter.json '
@@ -75,8 +89,8 @@
u'--default-config', is_flag=True,
help=u'Do not load a config file. Use the defaults instead'
)
-def main(template, no_input, checkout, verbose, replay, overwrite_if_exists,
- output_dir, config_file, default_config):
+def main(template, extra_context, no_input, checkout, verbose, replay,
+ overwrite_if_exists, output_dir, config_file, default_config):
"""Create a project from a Cookiecutter project template (TEMPLATE)."""
if verbose:
logging.basicConfig(
@@ -101,6 +115,7 @@
cookiecutter(
template, checkout, no_input,
+ extra_context=extra_context,
replay=replay,
overwrite_if_exists=overwrite_if_exists,
output_dir=output_dir,
|
{"golden_diff": "diff --git a/cookiecutter/cli.py b/cookiecutter/cli.py\n--- a/cookiecutter/cli.py\n+++ b/cookiecutter/cli.py\n@@ -38,9 +38,23 @@\n return message.format(location, python_version)\n \n \n+def validate_extra_context(ctx, param, value):\n+ for s in value:\n+ if '=' not in s:\n+ raise click.BadParameter(\n+ 'EXTRA_CONTEXT should contain items of the form key=value; '\n+ \"'{}' doesn't match that form\".format(s)\n+ )\n+\n+ # Convert tuple -- e.g.: (u'program_name=foobar', u'startsecs=66')\n+ # to dict -- e.g.: {'program_name': 'foobar', 'startsecs': '66'}\n+ return dict(s.split('=', 1) for s in value) or None\n+\n+\n @click.command(context_settings=dict(help_option_names=[u'-h', u'--help']))\n @click.version_option(__version__, u'-V', u'--version', message=version_msg())\n @click.argument(u'template')\[email protected](u'extra_context', nargs=-1, callback=validate_extra_context)\n @click.option(\n u'--no-input', is_flag=True,\n help=u'Do not prompt for parameters and only use cookiecutter.json '\n@@ -75,8 +89,8 @@\n u'--default-config', is_flag=True,\n help=u'Do not load a config file. Use the defaults instead'\n )\n-def main(template, no_input, checkout, verbose, replay, overwrite_if_exists,\n- output_dir, config_file, default_config):\n+def main(template, extra_context, no_input, checkout, verbose, replay,\n+ overwrite_if_exists, output_dir, config_file, default_config):\n \"\"\"Create a project from a Cookiecutter project template (TEMPLATE).\"\"\"\n if verbose:\n logging.basicConfig(\n@@ -101,6 +115,7 @@\n \n cookiecutter(\n template, checkout, no_input,\n+ extra_context=extra_context,\n replay=replay,\n overwrite_if_exists=overwrite_if_exists,\n output_dir=output_dir,\n", "issue": "Idea: have a way to specify context via command line\nSomething like repeat arguments:\n\n```\ncookiecutter mytemplate -Cname=my-project -Cgithub-user=ionelmc\n```\n\nOr maybe the whole json?\n\n```\ncookiecutter mytemplate --context='{\"name\": \"my-project\", \"github-user\": \"ionelmc\"}'\n```\n\nOr variable arguments?\n\n```\ncookiecutter mytemplate --context-name=my-project --context-github-user=ionelmc\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.cli\n-----------------\n\nMain `cookiecutter` CLI.\n\"\"\"\n\nimport os\nimport sys\nimport logging\nimport json\n\nimport click\n\nfrom cookiecutter import __version__\nfrom cookiecutter.config import USER_CONFIG_PATH\nfrom cookiecutter.main import cookiecutter\nfrom cookiecutter.exceptions import (\n OutputDirExistsException,\n InvalidModeException,\n FailedHookException,\n UndefinedVariableInTemplate,\n UnknownExtension,\n RepositoryNotFound\n)\n\nlogger = logging.getLogger(__name__)\n\n\ndef version_msg():\n \"\"\"Returns the Cookiecutter version, location and Python powering it.\"\"\"\n python_version = sys.version[:3]\n location = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n message = u'Cookiecutter %(version)s from {} (Python {})'\n return message.format(location, python_version)\n\n\[email protected](context_settings=dict(help_option_names=[u'-h', u'--help']))\[email protected]_option(__version__, u'-V', u'--version', message=version_msg())\[email protected](u'template')\[email protected](\n u'--no-input', is_flag=True,\n help=u'Do not prompt for parameters and only use cookiecutter.json '\n u'file content',\n)\[email protected](\n u'-c', u'--checkout',\n help=u'branch, tag or commit to checkout after git clone',\n)\[email protected](\n '-v', 
'--verbose',\n is_flag=True, help='Print debug information', default=False\n)\[email protected](\n u'--replay', is_flag=True,\n help=u'Do not prompt for parameters and only use information entered '\n u'previously',\n)\[email protected](\n u'-f', u'--overwrite-if-exists', is_flag=True,\n help=u'Overwrite the contents of the output directory if it already exists'\n)\[email protected](\n u'-o', u'--output-dir', default='.', type=click.Path(),\n help=u'Where to output the generated project dir into'\n)\[email protected](\n u'--config-file', type=click.Path(), default=USER_CONFIG_PATH,\n help=u'User configuration file'\n)\[email protected](\n u'--default-config', is_flag=True,\n help=u'Do not load a config file. Use the defaults instead'\n)\ndef main(template, no_input, checkout, verbose, replay, overwrite_if_exists,\n output_dir, config_file, default_config):\n \"\"\"Create a project from a Cookiecutter project template (TEMPLATE).\"\"\"\n if verbose:\n logging.basicConfig(\n format=u'%(levelname)s %(filename)s: %(message)s',\n level=logging.DEBUG\n )\n else:\n # Log info and above to console\n logging.basicConfig(\n format=u'%(levelname)s: %(message)s',\n level=logging.INFO\n )\n\n try:\n # If you _need_ to support a local template in a directory\n # called 'help', use a qualified path to the directory.\n if template == u'help':\n click.echo(click.get_current_context().get_help())\n sys.exit(0)\n\n user_config = None if default_config else config_file\n\n cookiecutter(\n template, checkout, no_input,\n replay=replay,\n overwrite_if_exists=overwrite_if_exists,\n output_dir=output_dir,\n config_file=user_config\n )\n except (OutputDirExistsException,\n InvalidModeException,\n FailedHookException,\n UnknownExtension,\n RepositoryNotFound) as e:\n click.echo(e)\n sys.exit(1)\n except UndefinedVariableInTemplate as undefined_err:\n click.echo('{}'.format(undefined_err.message))\n click.echo('Error message: {}'.format(undefined_err.error.message))\n\n context_str = json.dumps(\n undefined_err.context,\n indent=4,\n sort_keys=True\n )\n click.echo('Context: {}'.format(context_str))\n sys.exit(1)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "cookiecutter/cli.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.cli\n-----------------\n\nMain `cookiecutter` CLI.\n\"\"\"\n\nimport os\nimport sys\nimport logging\nimport json\n\nimport click\n\nfrom cookiecutter import __version__\nfrom cookiecutter.config import USER_CONFIG_PATH\nfrom cookiecutter.main import cookiecutter\nfrom cookiecutter.exceptions import (\n OutputDirExistsException,\n InvalidModeException,\n FailedHookException,\n UndefinedVariableInTemplate,\n UnknownExtension,\n RepositoryNotFound\n)\n\nlogger = logging.getLogger(__name__)\n\n\ndef version_msg():\n \"\"\"Returns the Cookiecutter version, location and Python powering it.\"\"\"\n python_version = sys.version[:3]\n location = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n message = u'Cookiecutter %(version)s from {} (Python {})'\n return message.format(location, python_version)\n\n\ndef validate_extra_context(ctx, param, value):\n for s in value:\n if '=' not in s:\n raise click.BadParameter(\n 'EXTRA_CONTEXT should contain items of the form key=value; '\n \"'{}' doesn't match that form\".format(s)\n )\n\n # Convert tuple -- e.g.: (u'program_name=foobar', u'startsecs=66')\n # to dict -- e.g.: {'program_name': 'foobar', 'startsecs': '66'}\n return dict(s.split('=', 1) for s in value) or None\n\n\[email 
protected](context_settings=dict(help_option_names=[u'-h', u'--help']))\[email protected]_option(__version__, u'-V', u'--version', message=version_msg())\[email protected](u'template')\[email protected](u'extra_context', nargs=-1, callback=validate_extra_context)\[email protected](\n u'--no-input', is_flag=True,\n help=u'Do not prompt for parameters and only use cookiecutter.json '\n u'file content',\n)\[email protected](\n u'-c', u'--checkout',\n help=u'branch, tag or commit to checkout after git clone',\n)\[email protected](\n '-v', '--verbose',\n is_flag=True, help='Print debug information', default=False\n)\[email protected](\n u'--replay', is_flag=True,\n help=u'Do not prompt for parameters and only use information entered '\n u'previously',\n)\[email protected](\n u'-f', u'--overwrite-if-exists', is_flag=True,\n help=u'Overwrite the contents of the output directory if it already exists'\n)\[email protected](\n u'-o', u'--output-dir', default='.', type=click.Path(),\n help=u'Where to output the generated project dir into'\n)\[email protected](\n u'--config-file', type=click.Path(), default=USER_CONFIG_PATH,\n help=u'User configuration file'\n)\[email protected](\n u'--default-config', is_flag=True,\n help=u'Do not load a config file. Use the defaults instead'\n)\ndef main(template, extra_context, no_input, checkout, verbose, replay,\n overwrite_if_exists, output_dir, config_file, default_config):\n \"\"\"Create a project from a Cookiecutter project template (TEMPLATE).\"\"\"\n if verbose:\n logging.basicConfig(\n format=u'%(levelname)s %(filename)s: %(message)s',\n level=logging.DEBUG\n )\n else:\n # Log info and above to console\n logging.basicConfig(\n format=u'%(levelname)s: %(message)s',\n level=logging.INFO\n )\n\n try:\n # If you _need_ to support a local template in a directory\n # called 'help', use a qualified path to the directory.\n if template == u'help':\n click.echo(click.get_current_context().get_help())\n sys.exit(0)\n\n user_config = None if default_config else config_file\n\n cookiecutter(\n template, checkout, no_input,\n extra_context=extra_context,\n replay=replay,\n overwrite_if_exists=overwrite_if_exists,\n output_dir=output_dir,\n config_file=user_config\n )\n except (OutputDirExistsException,\n InvalidModeException,\n FailedHookException,\n UnknownExtension,\n RepositoryNotFound) as e:\n click.echo(e)\n sys.exit(1)\n except UndefinedVariableInTemplate as undefined_err:\n click.echo('{}'.format(undefined_err.message))\n click.echo('Error message: {}'.format(undefined_err.error.message))\n\n context_str = json.dumps(\n undefined_err.context,\n indent=4,\n sort_keys=True\n )\n click.echo('Context: {}'.format(context_str))\n sys.exit(1)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "cookiecutter/cli.py"}]}
| 1,515 | 475 |
gh_patches_debug_11410
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-4764
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
unable to plot file from repo subfolder
DVC version: `1.8.4`
```
#!/bin/bash
set -ex
rm -rf wspace
mkdir wspace
pushd wspace
mkdir repo
pushd repo
git init --quiet
dvc init --quiet
mkdir subfolder
pushd subfolder
echo '[{"val":1},{"val":3}]' >> plot.json
dvc plots show plot.json
```
fails with `unexpected error - 'plot.json'`
Need to fix for both Git repo and DVC repo.
Probably related to: #4665 #4559
Also, as @sudoandros mentioned under #4665 this error does not happen when dealing with `metrics`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/repo/plots/__init__.py`
Content:
```
1 import logging
2
3 from funcy import cached_property, first, project
4
5 from dvc.exceptions import DvcException, NoPlotsError
6 from dvc.repo.collect import collect
7 from dvc.schema import PLOT_PROPS
8 from dvc.tree.repo import RepoTree
9 from dvc.utils import relpath
10
11 logger = logging.getLogger(__name__)
12
13
14 class NotAPlotError(DvcException):
15 def __init__(self, out):
16 super().__init__(
17 f"'{out}' is not a known plot. Use `dvc plots modify` to turn it "
18 "into one."
19 )
20
21
22 class PropsNotFoundError(DvcException):
23 pass
24
25
26 class Plots:
27 def __init__(self, repo):
28 self.repo = repo
29
30 def collect(self, targets=None, revs=None):
31 """Collects all props and data for plots.
32
33 Returns a structure like:
34 {rev: {plots.csv: {
35 props: {x: ..., "header": ..., ...},
36 data: "...data as a string...",
37 }}}
38 Data parsing is postponed, since it's affected by props.
39 """
40 targets = [targets] if isinstance(targets, str) else targets or []
41 data = {}
42 for rev in self.repo.brancher(revs=revs):
43 # .brancher() adds unwanted workspace
44 if revs is not None and rev not in revs:
45 continue
46 rev = rev or "workspace"
47
48 tree = RepoTree(self.repo)
49 plots = _collect_plots(self.repo, targets, rev)
50 for path_info, props in plots.items():
51 datafile = relpath(path_info, self.repo.root_dir)
52 if rev not in data:
53 data[rev] = {}
54 data[rev].update({datafile: {"props": props}})
55
56 # Load data from git or dvc cache
57 try:
58 with tree.open(path_info) as fd:
59 data[rev][datafile]["data"] = fd.read()
60 except FileNotFoundError:
61 # This might happen simply because cache is absent
62 pass
63
64 return data
65
66 @staticmethod
67 def render(data, revs=None, props=None, templates=None):
68 """Renders plots"""
69 props = props or {}
70
71 # Merge data by plot file and apply overriding props
72 plots = _prepare_plots(data, revs, props)
73
74 return {
75 datafile: _render(datafile, desc["data"], desc["props"], templates)
76 for datafile, desc in plots.items()
77 }
78
79 def show(self, targets=None, revs=None, props=None, templates=None):
80 from .data import NoMetricInHistoryError
81
82 data = self.collect(targets, revs)
83
84 # If any mentioned plot doesn't have any data then that's an error
85 targets = [targets] if isinstance(targets, str) else targets or []
86 for target in targets:
87 if not any("data" in d[target] for d in data.values()):
88 raise NoMetricInHistoryError(target)
89
90 # No data at all is a special error with a special message
91 if not data:
92 raise NoPlotsError()
93
94 if templates is None:
95 templates = self.templates
96 return self.render(data, revs, props, templates)
97
98 def diff(self, *args, **kwargs):
99 from .diff import diff
100
101 return diff(self.repo, *args, **kwargs)
102
103 @staticmethod
104 def _unset(out, props):
105 missing = list(set(props) - set(out.plot.keys()))
106 if missing:
107 raise PropsNotFoundError(
108 f"display properties {missing} not found in plot '{out}'"
109 )
110
111 for prop in props:
112 out.plot.pop(prop)
113
114 def modify(self, path, props=None, unset=None):
115 from dvc.dvcfile import Dvcfile
116
117 props = props or {}
118 template = props.get("template")
119 if template:
120 self.templates.get_template(template)
121
122 (out,) = self.repo.find_outs_by_path(path)
123 if not out.plot and unset is not None:
124 raise NotAPlotError(out)
125
126 # This out will become a plot unless it is one already
127 if not isinstance(out.plot, dict):
128 out.plot = {}
129
130 if unset:
131 self._unset(out, unset)
132
133 out.plot.update(props)
134
135 # Empty dict will move it to non-plots
136 if not out.plot:
137 out.plot = True
138
139 out.verify_metric()
140
141 dvcfile = Dvcfile(self.repo, out.stage.path)
142 dvcfile.dump(out.stage, update_lock=False)
143
144 @cached_property
145 def templates(self):
146 from .template import PlotTemplates
147
148 return PlotTemplates(self.repo.dvc_dir)
149
150
151 def _is_plot(out):
152 return bool(out.plot)
153
154
155 def _collect_plots(repo, targets=None, rev=None):
156 plots, path_infos = collect(
157 repo, output_filter=_is_plot, targets=targets, rev=rev
158 )
159 result = {plot.path_info: _plot_props(plot) for plot in plots}
160 result.update({path_info: {} for path_info in path_infos})
161 return result
162
163
164 def _plot_props(out):
165 if not out.plot:
166 raise NotAPlotError(out)
167 if isinstance(out.plot, list):
168 raise DvcException("Multiple plots per data file not supported.")
169 if isinstance(out.plot, bool):
170 return {}
171
172 return project(out.plot, PLOT_PROPS)
173
174
175 def _prepare_plots(data, revs, props):
176 """Groups data by plot file.
177
178 Also resolves props conflicts between revs and applies global props.
179 """
180 # we go in order revs are supplied on props conflict first ones win.
181 revs = iter(data) if revs is None else revs
182
183 plots, props_revs = {}, {}
184 for rev in revs:
185 # Asked for revision without data
186 if rev not in data:
187 continue
188
189 for datafile, desc in data[rev].items():
190 # We silently skip on an absent data file,
191 # see also try/except/pass in .collect()
192 if "data" not in desc:
193 continue
194
195 # props from command line overwrite plot props from out definition
196 full_props = {**desc["props"], **props}
197
198 if datafile in plots:
199 saved = plots[datafile]
200 if saved["props"] != full_props:
201 logger.warning(
202 f"Inconsistent plot props for '{datafile}' in "
203 f"'{props_revs[datafile]}' and '{rev}'. "
204 f"Going to use ones from '{props_revs[datafile]}'"
205 )
206
207 saved["data"][rev] = desc["data"]
208 else:
209 plots[datafile] = {
210 "props": full_props,
211 "data": {rev: desc["data"]},
212 }
213 # Save rev we got props from
214 props_revs[datafile] = rev
215
216 return plots
217
218
219 def _render(datafile, datas, props, templates):
220 from .data import PlotData, plot_data
221
222 # Copy it to not modify a passed value
223 props = props.copy()
224
225 # Add x and y to fields if set
226 fields = props.get("fields")
227 if fields is not None:
228 fields = {*fields, props.get("x"), props.get("y")} - {None}
229
230 template = templates.load(props.get("template") or "default")
231
232 # If x is not set add index field
233 if not props.get("x") and template.has_anchor("x"):
234 props["append_index"] = True
235 props["x"] = PlotData.INDEX_FIELD
236
237 # Parse all data, preprocess it and collect as a list of dicts
238 data = []
239 for rev, datablob in datas.items():
240 rev_data = plot_data(datafile, rev, datablob).to_datapoints(
241 fields=fields,
242 path=props.get("path"),
243 header=props.get("header", True),
244 append_index=props.get("append_index", False),
245 )
246 data.extend(rev_data)
247
248 # If y is not set then use last field not used yet
249 if not props.get("y") and template.has_anchor("y"):
250 fields = list(first(data))
251 skip = (PlotData.REVISION_FIELD, props.get("x"))
252 props["y"] = first(f for f in reversed(fields) if f not in skip)
253
254 return template.render(data, props=props)
255
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dvc/repo/plots/__init__.py b/dvc/repo/plots/__init__.py
--- a/dvc/repo/plots/__init__.py
+++ b/dvc/repo/plots/__init__.py
@@ -84,7 +84,8 @@
# If any mentioned plot doesn't have any data then that's an error
targets = [targets] if isinstance(targets, str) else targets or []
for target in targets:
- if not any("data" in d[target] for d in data.values()):
+ rpath = relpath(target, self.repo.root_dir)
+ if not any("data" in d[rpath] for d in data.values()):
raise NoMetricInHistoryError(target)
# No data at all is a special error with a special message
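The hunk above works because `Plots.collect()` keys its results by repo-root-relative paths (`datafile = relpath(path_info, self.repo.root_dir)`), while `show()` previously indexed that dict with the raw command-line target. A minimal sketch of the mismatch, using hypothetical paths purely for illustration:

```python
from os.path import relpath

# Hypothetical layout mirroring the reproduction script in the issue.
root_dir = "/home/user/wspace/repo"
target = "/home/user/wspace/repo/subfolder/plot.json"

# collect() stores each plot under its root-relative path ...
data = {"workspace": {"subfolder/plot.json": {"props": {}, "data": '[{"val":1},{"val":3}]'}}}

# ... so indexing with the raw target (or the bare "plot.json" the user typed)
# is what surfaces as "unexpected error - 'plot.json'" (a KeyError).
# Relativizing the target first, as the patch does, makes the keys line up.
rpath = relpath(target, root_dir)  # 'subfolder/plot.json'
assert any("data" in d[rpath] for d in data.values())
```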
|
{"golden_diff": "diff --git a/dvc/repo/plots/__init__.py b/dvc/repo/plots/__init__.py\n--- a/dvc/repo/plots/__init__.py\n+++ b/dvc/repo/plots/__init__.py\n@@ -84,7 +84,8 @@\n # If any mentioned plot doesn't have any data then that's an error\n targets = [targets] if isinstance(targets, str) else targets or []\n for target in targets:\n- if not any(\"data\" in d[target] for d in data.values()):\n+ rpath = relpath(target, self.repo.root_dir)\n+ if not any(\"data\" in d[rpath] for d in data.values()):\n raise NoMetricInHistoryError(target)\n \n # No data at all is a special error with a special message\n", "issue": "unable to plot file from repo subfolder\nDVC version: `1.8.4`\r\n\r\n```\r\n#!/bin/bash\r\n\r\nset -ex\r\n\r\nrm -rf wspace\r\nmkdir wspace\r\npushd wspace\r\n\r\nmkdir repo\r\npushd repo\r\n\r\ngit init --quiet\r\ndvc init --quiet\r\n\r\nmkdir subfolder\r\npushd subfolder\r\n\r\necho '[{\"val\":1},{\"val\":3}]' >> plot.json\r\n\r\ndvc plots show plot.json\r\n```\r\n\r\nfails with `unexpected error - 'plot.json'`\r\n\r\nNeed to fix for both Git repo and DVC repo.\r\n\r\nProbably related to: #4665 #4559\r\n\r\nAlso, as @sudoandros mentioned under #4665 this error does not happen when dealing with `metrics`.\n", "before_files": [{"content": "import logging\n\nfrom funcy import cached_property, first, project\n\nfrom dvc.exceptions import DvcException, NoPlotsError\nfrom dvc.repo.collect import collect\nfrom dvc.schema import PLOT_PROPS\nfrom dvc.tree.repo import RepoTree\nfrom dvc.utils import relpath\n\nlogger = logging.getLogger(__name__)\n\n\nclass NotAPlotError(DvcException):\n def __init__(self, out):\n super().__init__(\n f\"'{out}' is not a known plot. Use `dvc plots modify` to turn it \"\n \"into one.\"\n )\n\n\nclass PropsNotFoundError(DvcException):\n pass\n\n\nclass Plots:\n def __init__(self, repo):\n self.repo = repo\n\n def collect(self, targets=None, revs=None):\n \"\"\"Collects all props and data for plots.\n\n Returns a structure like:\n {rev: {plots.csv: {\n props: {x: ..., \"header\": ..., ...},\n data: \"...data as a string...\",\n }}}\n Data parsing is postponed, since it's affected by props.\n \"\"\"\n targets = [targets] if isinstance(targets, str) else targets or []\n data = {}\n for rev in self.repo.brancher(revs=revs):\n # .brancher() adds unwanted workspace\n if revs is not None and rev not in revs:\n continue\n rev = rev or \"workspace\"\n\n tree = RepoTree(self.repo)\n plots = _collect_plots(self.repo, targets, rev)\n for path_info, props in plots.items():\n datafile = relpath(path_info, self.repo.root_dir)\n if rev not in data:\n data[rev] = {}\n data[rev].update({datafile: {\"props\": props}})\n\n # Load data from git or dvc cache\n try:\n with tree.open(path_info) as fd:\n data[rev][datafile][\"data\"] = fd.read()\n except FileNotFoundError:\n # This might happen simply because cache is absent\n pass\n\n return data\n\n @staticmethod\n def render(data, revs=None, props=None, templates=None):\n \"\"\"Renders plots\"\"\"\n props = props or {}\n\n # Merge data by plot file and apply overriding props\n plots = _prepare_plots(data, revs, props)\n\n return {\n datafile: _render(datafile, desc[\"data\"], desc[\"props\"], templates)\n for datafile, desc in plots.items()\n }\n\n def show(self, targets=None, revs=None, props=None, templates=None):\n from .data import NoMetricInHistoryError\n\n data = self.collect(targets, revs)\n\n # If any mentioned plot doesn't have any data then that's an error\n targets = [targets] if isinstance(targets, str) else targets 
or []\n for target in targets:\n if not any(\"data\" in d[target] for d in data.values()):\n raise NoMetricInHistoryError(target)\n\n # No data at all is a special error with a special message\n if not data:\n raise NoPlotsError()\n\n if templates is None:\n templates = self.templates\n return self.render(data, revs, props, templates)\n\n def diff(self, *args, **kwargs):\n from .diff import diff\n\n return diff(self.repo, *args, **kwargs)\n\n @staticmethod\n def _unset(out, props):\n missing = list(set(props) - set(out.plot.keys()))\n if missing:\n raise PropsNotFoundError(\n f\"display properties {missing} not found in plot '{out}'\"\n )\n\n for prop in props:\n out.plot.pop(prop)\n\n def modify(self, path, props=None, unset=None):\n from dvc.dvcfile import Dvcfile\n\n props = props or {}\n template = props.get(\"template\")\n if template:\n self.templates.get_template(template)\n\n (out,) = self.repo.find_outs_by_path(path)\n if not out.plot and unset is not None:\n raise NotAPlotError(out)\n\n # This out will become a plot unless it is one already\n if not isinstance(out.plot, dict):\n out.plot = {}\n\n if unset:\n self._unset(out, unset)\n\n out.plot.update(props)\n\n # Empty dict will move it to non-plots\n if not out.plot:\n out.plot = True\n\n out.verify_metric()\n\n dvcfile = Dvcfile(self.repo, out.stage.path)\n dvcfile.dump(out.stage, update_lock=False)\n\n @cached_property\n def templates(self):\n from .template import PlotTemplates\n\n return PlotTemplates(self.repo.dvc_dir)\n\n\ndef _is_plot(out):\n return bool(out.plot)\n\n\ndef _collect_plots(repo, targets=None, rev=None):\n plots, path_infos = collect(\n repo, output_filter=_is_plot, targets=targets, rev=rev\n )\n result = {plot.path_info: _plot_props(plot) for plot in plots}\n result.update({path_info: {} for path_info in path_infos})\n return result\n\n\ndef _plot_props(out):\n if not out.plot:\n raise NotAPlotError(out)\n if isinstance(out.plot, list):\n raise DvcException(\"Multiple plots per data file not supported.\")\n if isinstance(out.plot, bool):\n return {}\n\n return project(out.plot, PLOT_PROPS)\n\n\ndef _prepare_plots(data, revs, props):\n \"\"\"Groups data by plot file.\n\n Also resolves props conflicts between revs and applies global props.\n \"\"\"\n # we go in order revs are supplied on props conflict first ones win.\n revs = iter(data) if revs is None else revs\n\n plots, props_revs = {}, {}\n for rev in revs:\n # Asked for revision without data\n if rev not in data:\n continue\n\n for datafile, desc in data[rev].items():\n # We silently skip on an absent data file,\n # see also try/except/pass in .collect()\n if \"data\" not in desc:\n continue\n\n # props from command line overwrite plot props from out definition\n full_props = {**desc[\"props\"], **props}\n\n if datafile in plots:\n saved = plots[datafile]\n if saved[\"props\"] != full_props:\n logger.warning(\n f\"Inconsistent plot props for '{datafile}' in \"\n f\"'{props_revs[datafile]}' and '{rev}'. 
\"\n f\"Going to use ones from '{props_revs[datafile]}'\"\n )\n\n saved[\"data\"][rev] = desc[\"data\"]\n else:\n plots[datafile] = {\n \"props\": full_props,\n \"data\": {rev: desc[\"data\"]},\n }\n # Save rev we got props from\n props_revs[datafile] = rev\n\n return plots\n\n\ndef _render(datafile, datas, props, templates):\n from .data import PlotData, plot_data\n\n # Copy it to not modify a passed value\n props = props.copy()\n\n # Add x and y to fields if set\n fields = props.get(\"fields\")\n if fields is not None:\n fields = {*fields, props.get(\"x\"), props.get(\"y\")} - {None}\n\n template = templates.load(props.get(\"template\") or \"default\")\n\n # If x is not set add index field\n if not props.get(\"x\") and template.has_anchor(\"x\"):\n props[\"append_index\"] = True\n props[\"x\"] = PlotData.INDEX_FIELD\n\n # Parse all data, preprocess it and collect as a list of dicts\n data = []\n for rev, datablob in datas.items():\n rev_data = plot_data(datafile, rev, datablob).to_datapoints(\n fields=fields,\n path=props.get(\"path\"),\n header=props.get(\"header\", True),\n append_index=props.get(\"append_index\", False),\n )\n data.extend(rev_data)\n\n # If y is not set then use last field not used yet\n if not props.get(\"y\") and template.has_anchor(\"y\"):\n fields = list(first(data))\n skip = (PlotData.REVISION_FIELD, props.get(\"x\"))\n props[\"y\"] = first(f for f in reversed(fields) if f not in skip)\n\n return template.render(data, props=props)\n", "path": "dvc/repo/plots/__init__.py"}], "after_files": [{"content": "import logging\n\nfrom funcy import cached_property, first, project\n\nfrom dvc.exceptions import DvcException, NoPlotsError\nfrom dvc.repo.collect import collect\nfrom dvc.schema import PLOT_PROPS\nfrom dvc.tree.repo import RepoTree\nfrom dvc.utils import relpath\n\nlogger = logging.getLogger(__name__)\n\n\nclass NotAPlotError(DvcException):\n def __init__(self, out):\n super().__init__(\n f\"'{out}' is not a known plot. 
Use `dvc plots modify` to turn it \"\n \"into one.\"\n )\n\n\nclass PropsNotFoundError(DvcException):\n pass\n\n\nclass Plots:\n def __init__(self, repo):\n self.repo = repo\n\n def collect(self, targets=None, revs=None):\n \"\"\"Collects all props and data for plots.\n\n Returns a structure like:\n {rev: {plots.csv: {\n props: {x: ..., \"header\": ..., ...},\n data: \"...data as a string...\",\n }}}\n Data parsing is postponed, since it's affected by props.\n \"\"\"\n targets = [targets] if isinstance(targets, str) else targets or []\n data = {}\n for rev in self.repo.brancher(revs=revs):\n # .brancher() adds unwanted workspace\n if revs is not None and rev not in revs:\n continue\n rev = rev or \"workspace\"\n\n tree = RepoTree(self.repo)\n plots = _collect_plots(self.repo, targets, rev)\n for path_info, props in plots.items():\n datafile = relpath(path_info, self.repo.root_dir)\n if rev not in data:\n data[rev] = {}\n data[rev].update({datafile: {\"props\": props}})\n\n # Load data from git or dvc cache\n try:\n with tree.open(path_info) as fd:\n data[rev][datafile][\"data\"] = fd.read()\n except FileNotFoundError:\n # This might happen simply because cache is absent\n pass\n\n return data\n\n @staticmethod\n def render(data, revs=None, props=None, templates=None):\n \"\"\"Renders plots\"\"\"\n props = props or {}\n\n # Merge data by plot file and apply overriding props\n plots = _prepare_plots(data, revs, props)\n\n return {\n datafile: _render(datafile, desc[\"data\"], desc[\"props\"], templates)\n for datafile, desc in plots.items()\n }\n\n def show(self, targets=None, revs=None, props=None, templates=None):\n from .data import NoMetricInHistoryError\n\n data = self.collect(targets, revs)\n\n # If any mentioned plot doesn't have any data then that's an error\n targets = [targets] if isinstance(targets, str) else targets or []\n for target in targets:\n rpath = relpath(target, self.repo.root_dir)\n if not any(\"data\" in d[rpath] for d in data.values()):\n raise NoMetricInHistoryError(target)\n\n # No data at all is a special error with a special message\n if not data:\n raise NoPlotsError()\n\n if templates is None:\n templates = self.templates\n return self.render(data, revs, props, templates)\n\n def diff(self, *args, **kwargs):\n from .diff import diff\n\n return diff(self.repo, *args, **kwargs)\n\n @staticmethod\n def _unset(out, props):\n missing = list(set(props) - set(out.plot.keys()))\n if missing:\n raise PropsNotFoundError(\n f\"display properties {missing} not found in plot '{out}'\"\n )\n\n for prop in props:\n out.plot.pop(prop)\n\n def modify(self, path, props=None, unset=None):\n from dvc.dvcfile import Dvcfile\n\n props = props or {}\n template = props.get(\"template\")\n if template:\n self.templates.get_template(template)\n\n (out,) = self.repo.find_outs_by_path(path)\n if not out.plot and unset is not None:\n raise NotAPlotError(out)\n\n # This out will become a plot unless it is one already\n if not isinstance(out.plot, dict):\n out.plot = {}\n\n if unset:\n self._unset(out, unset)\n\n out.plot.update(props)\n\n # Empty dict will move it to non-plots\n if not out.plot:\n out.plot = True\n\n out.verify_metric()\n\n dvcfile = Dvcfile(self.repo, out.stage.path)\n dvcfile.dump(out.stage, update_lock=False)\n\n @cached_property\n def templates(self):\n from .template import PlotTemplates\n\n return PlotTemplates(self.repo.dvc_dir)\n\n\ndef _is_plot(out):\n return bool(out.plot)\n\n\ndef _collect_plots(repo, targets=None, rev=None):\n plots, path_infos = collect(\n 
repo, output_filter=_is_plot, targets=targets, rev=rev\n )\n result = {plot.path_info: _plot_props(plot) for plot in plots}\n result.update({path_info: {} for path_info in path_infos})\n return result\n\n\ndef _plot_props(out):\n if not out.plot:\n raise NotAPlotError(out)\n if isinstance(out.plot, list):\n raise DvcException(\"Multiple plots per data file not supported.\")\n if isinstance(out.plot, bool):\n return {}\n\n return project(out.plot, PLOT_PROPS)\n\n\ndef _prepare_plots(data, revs, props):\n \"\"\"Groups data by plot file.\n\n Also resolves props conflicts between revs and applies global props.\n \"\"\"\n # we go in order revs are supplied on props conflict first ones win.\n revs = iter(data) if revs is None else revs\n\n plots, props_revs = {}, {}\n for rev in revs:\n # Asked for revision without data\n if rev not in data:\n continue\n\n for datafile, desc in data[rev].items():\n # We silently skip on an absent data file,\n # see also try/except/pass in .collect()\n if \"data\" not in desc:\n continue\n\n # props from command line overwrite plot props from out definition\n full_props = {**desc[\"props\"], **props}\n\n if datafile in plots:\n saved = plots[datafile]\n if saved[\"props\"] != full_props:\n logger.warning(\n f\"Inconsistent plot props for '{datafile}' in \"\n f\"'{props_revs[datafile]}' and '{rev}'. \"\n f\"Going to use ones from '{props_revs[datafile]}'\"\n )\n\n saved[\"data\"][rev] = desc[\"data\"]\n else:\n plots[datafile] = {\n \"props\": full_props,\n \"data\": {rev: desc[\"data\"]},\n }\n # Save rev we got props from\n props_revs[datafile] = rev\n\n return plots\n\n\ndef _render(datafile, datas, props, templates):\n from .data import PlotData, plot_data\n\n # Copy it to not modify a passed value\n props = props.copy()\n\n # Add x and y to fields if set\n fields = props.get(\"fields\")\n if fields is not None:\n fields = {*fields, props.get(\"x\"), props.get(\"y\")} - {None}\n\n template = templates.load(props.get(\"template\") or \"default\")\n\n # If x is not set add index field\n if not props.get(\"x\") and template.has_anchor(\"x\"):\n props[\"append_index\"] = True\n props[\"x\"] = PlotData.INDEX_FIELD\n\n # Parse all data, preprocess it and collect as a list of dicts\n data = []\n for rev, datablob in datas.items():\n rev_data = plot_data(datafile, rev, datablob).to_datapoints(\n fields=fields,\n path=props.get(\"path\"),\n header=props.get(\"header\", True),\n append_index=props.get(\"append_index\", False),\n )\n data.extend(rev_data)\n\n # If y is not set then use last field not used yet\n if not props.get(\"y\") and template.has_anchor(\"y\"):\n fields = list(first(data))\n skip = (PlotData.REVISION_FIELD, props.get(\"x\"))\n props[\"y\"] = first(f for f in reversed(fields) if f not in skip)\n\n return template.render(data, props=props)\n", "path": "dvc/repo/plots/__init__.py"}]}
| 2,921 | 179 |
gh_patches_debug_21799
|
rasdani/github-patches
|
git_diff
|
spack__spack-9072
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MVAPICH2 depends on libibverbs
I was testing out Spack by attempting an installation of mvapich2 on my Fedora laptop. Of course, I don't have any libibverbs installed on my laptop, and even with all of the variants disabled, I was unable to build mvapich2:
```
configure: error: 'libibverbs not found. Did you specify --with-ib-libpath=?'
```
I could use the system package manager to do this, but how would people feel about a Spack package for libibverbs? It looks like you can download it here:
https://www.openfabrics.org/downloads/libibverbs/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/mvapich2/package.py`
Content:
```
1 ##############################################################################
2 # Copyright (c) 2013-2018, Lawrence Livermore National Security, LLC.
3 # Produced at the Lawrence Livermore National Laboratory.
4 #
5 # This file is part of Spack.
6 # Created by Todd Gamblin, [email protected], All rights reserved.
7 # LLNL-CODE-647188
8 #
9 # For details, see https://github.com/spack/spack
10 # Please also see the NOTICE and LICENSE files for our notice and the LGPL.
11 #
12 # This program is free software; you can redistribute it and/or modify
13 # it under the terms of the GNU Lesser General Public License (as
14 # published by the Free Software Foundation) version 2.1, February 1999.
15 #
16 # This program is distributed in the hope that it will be useful, but
17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
19 # conditions of the GNU Lesser General Public License for more details.
20 #
21 # You should have received a copy of the GNU Lesser General Public
22 # License along with this program; if not, write to the Free Software
23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
24 ##############################################################################
25 import sys
26
27 from spack import *
28 from spack.error import SpackError
29
30
31 def _process_manager_validator(values):
32 if len(values) > 1 and 'slurm' in values:
33 raise SpackError(
34 'slurm cannot be activated along with other process managers'
35 )
36
37
38 class Mvapich2(AutotoolsPackage):
39 """MVAPICH2 is an MPI implementation for Infiniband networks."""
40 homepage = "http://mvapich.cse.ohio-state.edu/"
41 url = "http://mvapich.cse.ohio-state.edu/download/mvapich/mv2/mvapich2-2.2.tar.gz"
42 list_url = "http://mvapich.cse.ohio-state.edu/downloads/"
43
44 version('2.3rc2', '6fcf22fe2a16023b462ef57614daa357')
45 version('2.3rc1', '386d79ae36b2136d203826465ad8b6cc')
46 version('2.3a', '87c3fbf8a755b53806fa9ecb21453445')
47
48 # Prefer the latest stable release
49 version('2.3', sha256='01d5fb592454ddd9ecc17e91c8983b6aea0e7559aa38f410b111c8ef385b50dd', preferred=True)
50 version('2.2', '939b65ebe5b89a5bc822cdab0f31f96e')
51 version('2.1', '0095ceecb19bbb7fb262131cb9c2cdd6')
52 version('2.0', '9fbb68a4111a8b6338e476dc657388b4')
53
54 provides('mpi')
55 provides('mpi@:3.0')
56
57 variant('debug', default=False,
58 description='Enable debug info and error messages at run-time')
59
60 variant('cuda', default=False,
61 description='Enable CUDA extension')
62
63 variant('regcache', default=True,
64 description='Enable memory registration cache')
65
66 # Accepted values are:
67 # single - No threads (MPI_THREAD_SINGLE)
68 # funneled - Only the main thread calls MPI (MPI_THREAD_FUNNELED)
69 # serialized - User serializes calls to MPI (MPI_THREAD_SERIALIZED)
70 # multiple - Fully multi-threaded (MPI_THREAD_MULTIPLE)
71 # runtime - Alias to "multiple"
72 variant(
73 'threads',
74 default='multiple',
75 values=('single', 'funneled', 'serialized', 'multiple'),
76 multi=False,
77 description='Control the level of thread support'
78 )
79
80 # 32 is needed when job size exceeds 32768 cores
81 variant(
82 'ch3_rank_bits',
83 default='32',
84 values=('16', '32'),
85 multi=False,
86 description='Number of bits allocated to the rank field (16 or 32)'
87 )
88
89 variant(
90 'process_managers',
91 description='List of the process managers to activate',
92 values=('slurm', 'hydra', 'gforker', 'remshell'),
93 multi=True,
94 validator=_process_manager_validator
95 )
96
97 variant(
98 'fabrics',
99 description='The fabric enabled for this build',
100 default='psm',
101 values=(
102 'psm', 'sock', 'nemesisib', 'nemesis', 'mrail', 'nemesisibtcp',
103 'nemesistcpib'
104 )
105 )
106
107 variant(
108 'alloca',
109 default=False,
110 description='Use alloca to allocate temporary memory if available'
111 )
112
113 variant(
114 'file_systems',
115 description='List of the ROMIO file systems to activate',
116 values=('lustre', 'gpfs', 'nfs', 'ufs'),
117 multi=True
118 )
119
120 depends_on('bison', type='build')
121 depends_on('libpciaccess', when=(sys.platform != 'darwin'))
122 depends_on('cuda', when='+cuda')
123 depends_on('psm', when='fabrics=psm')
124
125 filter_compiler_wrappers(
126 'mpicc', 'mpicxx', 'mpif77', 'mpif90', 'mpifort', relative_root='bin'
127 )
128
129 @property
130 def libs(self):
131 query_parameters = self.spec.last_query.extra_parameters
132 libraries = ['libmpi']
133
134 if 'cxx' in query_parameters:
135 libraries = ['libmpicxx'] + libraries
136
137 return find_libraries(
138 libraries, root=self.prefix, shared=True, recursive=True
139 )
140
141 @property
142 def process_manager_options(self):
143 spec = self.spec
144
145 other_pms = []
146 for x in ('hydra', 'gforker', 'remshell'):
147 if 'process_managers={0}'.format(x) in spec:
148 other_pms.append(x)
149
150 opts = []
151 if len(other_pms) > 0:
152 opts = ['--with-pm=%s' % ':'.join(other_pms)]
153
154 # See: http://slurm.schedmd.com/mpi_guide.html#mvapich2
155 if 'process_managers=slurm' in spec:
156 opts = [
157 '--with-pmi=pmi2',
158 '--with-pm=slurm'
159 ]
160
161 return opts
162
163 @property
164 def network_options(self):
165 opts = []
166 # From here on I can suppose that only one variant has been selected
167 if 'fabrics=psm' in self.spec:
168 opts = [
169 "--with-device=ch3:psm",
170 "--with-psm={0}".format(self.spec['psm'].prefix)
171 ]
172 elif 'fabrics=sock' in self.spec:
173 opts = ["--with-device=ch3:sock"]
174 elif 'fabrics=nemesistcpib' in self.spec:
175 opts = ["--with-device=ch3:nemesis:tcp,ib"]
176 elif 'fabrics=nemesisibtcp' in self.spec:
177 opts = ["--with-device=ch3:nemesis:ib,tcp"]
178 elif 'fabrics=nemesisib' in self.spec:
179 opts = ["--with-device=ch3:nemesis:ib"]
180 elif 'fabrics=nemesis' in self.spec:
181 opts = ["--with-device=ch3:nemesis"]
182 elif 'fabrics=mrail' in self.spec:
183 opts = ["--with-device=ch3:mrail", "--with-rdma=gen2"]
184 return opts
185
186 @property
187 def file_system_options(self):
188 spec = self.spec
189
190 fs = []
191 for x in ('lustre', 'gpfs', 'nfs', 'ufs'):
192 if 'file_systems={0}'.format(x) in spec:
193 fs.append(x)
194
195 opts = []
196 if len(fs) > 0:
197 opts.append('--with-file-system=%s' % '+'.join(fs))
198
199 return opts
200
201 def setup_environment(self, spack_env, run_env):
202 spec = self.spec
203 # mvapich2 configure fails when F90 and F90FLAGS are set
204 spack_env.unset('F90')
205 spack_env.unset('F90FLAGS')
206 if 'process_managers=slurm' in spec:
207 run_env.set('SLURM_MPI_TYPE', 'pmi2')
208
209 def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
210 spack_env.set('MPICC', join_path(self.prefix.bin, 'mpicc'))
211 spack_env.set('MPICXX', join_path(self.prefix.bin, 'mpicxx'))
212 spack_env.set('MPIF77', join_path(self.prefix.bin, 'mpif77'))
213 spack_env.set('MPIF90', join_path(self.prefix.bin, 'mpif90'))
214
215 spack_env.set('MPICH_CC', spack_cc)
216 spack_env.set('MPICH_CXX', spack_cxx)
217 spack_env.set('MPICH_F77', spack_f77)
218 spack_env.set('MPICH_F90', spack_fc)
219 spack_env.set('MPICH_FC', spack_fc)
220
221 def setup_dependent_package(self, module, dependent_spec):
222 self.spec.mpicc = join_path(self.prefix.bin, 'mpicc')
223 self.spec.mpicxx = join_path(self.prefix.bin, 'mpicxx')
224 self.spec.mpifc = join_path(self.prefix.bin, 'mpif90')
225 self.spec.mpif77 = join_path(self.prefix.bin, 'mpif77')
226 self.spec.mpicxx_shared_libs = [
227 join_path(self.prefix.lib, 'libmpicxx.{0}'.format(dso_suffix)),
228 join_path(self.prefix.lib, 'libmpi.{0}'.format(dso_suffix))
229 ]
230
231 @run_before('configure')
232 def die_without_fortran(self):
233 # Until we can pass variants such as +fortran through virtual
234 # dependencies depends_on('mpi'), require Fortran compiler to
235 # avoid delayed build errors in dependents.
236 if (self.compiler.f77 is None) or (self.compiler.fc is None):
237 raise InstallError(
238 'Mvapich2 requires both C and Fortran compilers!'
239 )
240
241 def configure_args(self):
242 spec = self.spec
243 args = [
244 '--enable-shared',
245 '--enable-romio',
246 '--disable-silent-rules',
247 '--disable-new-dtags',
248 '--enable-fortran=all',
249 "--enable-threads={0}".format(spec.variants['threads'].value),
250 "--with-ch3-rank-bits={0}".format(
251 spec.variants['ch3_rank_bits'].value),
252 ]
253
254 args.extend(self.enable_or_disable('alloca'))
255
256 if '+debug' in self.spec:
257 args.extend([
258 '--disable-fast',
259 '--enable-error-checking=runtime',
260 '--enable-error-messages=all',
261 # Permits debugging with TotalView
262 '--enable-g=dbg',
263 '--enable-debuginfo'
264 ])
265 else:
266 args.append('--enable-fast=all')
267
268 if '+cuda' in self.spec:
269 args.extend([
270 '--enable-cuda',
271 '--with-cuda={0}'.format(spec['cuda'].prefix)
272 ])
273 else:
274 args.append('--disable-cuda')
275
276 if '+regcache' in self.spec:
277 args.append('--enable-registration-cache')
278 else:
279 args.append('--disable-registration-cache')
280
281 args.extend(self.process_manager_options)
282 args.extend(self.network_options)
283 args.extend(self.file_system_options)
284 return args
285
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/var/spack/repos/builtin/packages/mvapich2/package.py b/var/spack/repos/builtin/packages/mvapich2/package.py
--- a/var/spack/repos/builtin/packages/mvapich2/package.py
+++ b/var/spack/repos/builtin/packages/mvapich2/package.py
@@ -121,6 +121,10 @@
depends_on('libpciaccess', when=(sys.platform != 'darwin'))
depends_on('cuda', when='+cuda')
depends_on('psm', when='fabrics=psm')
+ depends_on('rdma-core', when='fabrics=mrail')
+ depends_on('rdma-core', when='fabrics=nemesisib')
+ depends_on('rdma-core', when='fabrics=nemesistcpib')
+ depends_on('rdma-core', when='fabrics=nemesisibtcp')
filter_compiler_wrappers(
'mpicc', 'mpicxx', 'mpif77', 'mpif90', 'mpifort', relative_root='bin'
@@ -180,7 +184,8 @@
elif 'fabrics=nemesis' in self.spec:
opts = ["--with-device=ch3:nemesis"]
elif 'fabrics=mrail' in self.spec:
- opts = ["--with-device=ch3:mrail", "--with-rdma=gen2"]
+ opts = ["--with-device=ch3:mrail", "--with-rdma=gen2",
+ "--disable-mcast"]
return opts
@property
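Besides passing `--disable-mcast` for the `mrail` device, the fix declares `rdma-core` (which provides libibverbs) as a dependency only for the InfiniBand-capable fabrics, so a laptop build with `fabrics=sock` or `fabrics=nemesis` still needs no system libibverbs. A small sketch of that gating, with the fabric names taken from the variant definition and the added `depends_on(..., when=...)` lines:

```python
# Fabrics that the patch ties to a Spack-built rdma-core via conditional depends_on().
IB_FABRICS = {"mrail", "nemesisib", "nemesistcpib", "nemesisibtcp"}

def needs_rdma_core(fabric: str) -> bool:
    """Return True if the chosen fabric pulls in rdma-core after the patch."""
    return fabric in IB_FABRICS

assert needs_rdma_core("mrail")
assert not needs_rdma_core("sock") and not needs_rdma_core("psm")
```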
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/mvapich2/package.py b/var/spack/repos/builtin/packages/mvapich2/package.py\n--- a/var/spack/repos/builtin/packages/mvapich2/package.py\n+++ b/var/spack/repos/builtin/packages/mvapich2/package.py\n@@ -121,6 +121,10 @@\n depends_on('libpciaccess', when=(sys.platform != 'darwin'))\n depends_on('cuda', when='+cuda')\n depends_on('psm', when='fabrics=psm')\n+ depends_on('rdma-core', when='fabrics=mrail')\n+ depends_on('rdma-core', when='fabrics=nemesisib')\n+ depends_on('rdma-core', when='fabrics=nemesistcpib')\n+ depends_on('rdma-core', when='fabrics=nemesisibtcp')\n \n filter_compiler_wrappers(\n 'mpicc', 'mpicxx', 'mpif77', 'mpif90', 'mpifort', relative_root='bin'\n@@ -180,7 +184,8 @@\n elif 'fabrics=nemesis' in self.spec:\n opts = [\"--with-device=ch3:nemesis\"]\n elif 'fabrics=mrail' in self.spec:\n- opts = [\"--with-device=ch3:mrail\", \"--with-rdma=gen2\"]\n+ opts = [\"--with-device=ch3:mrail\", \"--with-rdma=gen2\",\n+ \"--disable-mcast\"]\n return opts\n \n @property\n", "issue": "MVAPICH2 depends on libibverbs\nI was testing out Spack by attempting an installation of mvapich2 on my Fedora laptop. Of course, I don't have any libibverbs installed on my laptop, and even with all of the variants disabled, I was unable to build mvapich2:\r\n```\r\nconfigure: error: 'libibverbs not found. Did you specify --with-ib-libpath=?'\r\n```\r\nI could use the system package manager to do this, but how would people feel about a Spack package for libibverbs? It looks like you can download it here:\r\nhttps://www.openfabrics.org/downloads/libibverbs/\n", "before_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2018, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/spack/spack\n# Please also see the NOTICE and LICENSE files for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nimport sys\n\nfrom spack import *\nfrom spack.error import SpackError\n\n\ndef _process_manager_validator(values):\n if len(values) > 1 and 'slurm' in values:\n raise SpackError(\n 'slurm cannot be activated along with other process managers'\n )\n\n\nclass Mvapich2(AutotoolsPackage):\n \"\"\"MVAPICH2 is an MPI implementation for Infiniband networks.\"\"\"\n homepage = \"http://mvapich.cse.ohio-state.edu/\"\n url = \"http://mvapich.cse.ohio-state.edu/download/mvapich/mv2/mvapich2-2.2.tar.gz\"\n list_url = \"http://mvapich.cse.ohio-state.edu/downloads/\"\n\n version('2.3rc2', '6fcf22fe2a16023b462ef57614daa357')\n version('2.3rc1', '386d79ae36b2136d203826465ad8b6cc')\n version('2.3a', '87c3fbf8a755b53806fa9ecb21453445')\n\n # Prefer the latest stable release\n version('2.3', sha256='01d5fb592454ddd9ecc17e91c8983b6aea0e7559aa38f410b111c8ef385b50dd', preferred=True)\n version('2.2', '939b65ebe5b89a5bc822cdab0f31f96e')\n version('2.1', '0095ceecb19bbb7fb262131cb9c2cdd6')\n version('2.0', '9fbb68a4111a8b6338e476dc657388b4')\n\n provides('mpi')\n provides('mpi@:3.0')\n\n variant('debug', default=False,\n description='Enable debug info and error messages at run-time')\n\n variant('cuda', default=False,\n description='Enable CUDA extension')\n\n variant('regcache', default=True,\n description='Enable memory registration cache')\n\n # Accepted values are:\n # single - No threads (MPI_THREAD_SINGLE)\n # funneled - Only the main thread calls MPI (MPI_THREAD_FUNNELED)\n # serialized - User serializes calls to MPI (MPI_THREAD_SERIALIZED)\n # multiple - Fully multi-threaded (MPI_THREAD_MULTIPLE)\n # runtime - Alias to \"multiple\"\n variant(\n 'threads',\n default='multiple',\n values=('single', 'funneled', 'serialized', 'multiple'),\n multi=False,\n description='Control the level of thread support'\n )\n\n # 32 is needed when job size exceeds 32768 cores\n variant(\n 'ch3_rank_bits',\n default='32',\n values=('16', '32'),\n multi=False,\n description='Number of bits allocated to the rank field (16 or 32)'\n )\n\n variant(\n 'process_managers',\n description='List of the process managers to activate',\n values=('slurm', 'hydra', 'gforker', 'remshell'),\n multi=True,\n validator=_process_manager_validator\n )\n\n variant(\n 'fabrics',\n description='The fabric enabled for this build',\n default='psm',\n values=(\n 'psm', 'sock', 'nemesisib', 'nemesis', 'mrail', 'nemesisibtcp',\n 'nemesistcpib'\n )\n )\n\n variant(\n 'alloca',\n default=False,\n description='Use alloca to allocate temporary memory if available'\n )\n\n variant(\n 'file_systems',\n description='List of the ROMIO file systems to activate',\n values=('lustre', 'gpfs', 'nfs', 'ufs'),\n multi=True\n )\n\n depends_on('bison', type='build')\n depends_on('libpciaccess', when=(sys.platform != 'darwin'))\n depends_on('cuda', when='+cuda')\n depends_on('psm', when='fabrics=psm')\n\n filter_compiler_wrappers(\n 'mpicc', 'mpicxx', 'mpif77', 'mpif90', 'mpifort', relative_root='bin'\n )\n\n @property\n def libs(self):\n query_parameters = self.spec.last_query.extra_parameters\n libraries = ['libmpi']\n\n if 'cxx' in query_parameters:\n libraries = ['libmpicxx'] + 
libraries\n\n return find_libraries(\n libraries, root=self.prefix, shared=True, recursive=True\n )\n\n @property\n def process_manager_options(self):\n spec = self.spec\n\n other_pms = []\n for x in ('hydra', 'gforker', 'remshell'):\n if 'process_managers={0}'.format(x) in spec:\n other_pms.append(x)\n\n opts = []\n if len(other_pms) > 0:\n opts = ['--with-pm=%s' % ':'.join(other_pms)]\n\n # See: http://slurm.schedmd.com/mpi_guide.html#mvapich2\n if 'process_managers=slurm' in spec:\n opts = [\n '--with-pmi=pmi2',\n '--with-pm=slurm'\n ]\n\n return opts\n\n @property\n def network_options(self):\n opts = []\n # From here on I can suppose that only one variant has been selected\n if 'fabrics=psm' in self.spec:\n opts = [\n \"--with-device=ch3:psm\",\n \"--with-psm={0}\".format(self.spec['psm'].prefix)\n ]\n elif 'fabrics=sock' in self.spec:\n opts = [\"--with-device=ch3:sock\"]\n elif 'fabrics=nemesistcpib' in self.spec:\n opts = [\"--with-device=ch3:nemesis:tcp,ib\"]\n elif 'fabrics=nemesisibtcp' in self.spec:\n opts = [\"--with-device=ch3:nemesis:ib,tcp\"]\n elif 'fabrics=nemesisib' in self.spec:\n opts = [\"--with-device=ch3:nemesis:ib\"]\n elif 'fabrics=nemesis' in self.spec:\n opts = [\"--with-device=ch3:nemesis\"]\n elif 'fabrics=mrail' in self.spec:\n opts = [\"--with-device=ch3:mrail\", \"--with-rdma=gen2\"]\n return opts\n\n @property\n def file_system_options(self):\n spec = self.spec\n\n fs = []\n for x in ('lustre', 'gpfs', 'nfs', 'ufs'):\n if 'file_systems={0}'.format(x) in spec:\n fs.append(x)\n\n opts = []\n if len(fs) > 0:\n opts.append('--with-file-system=%s' % '+'.join(fs))\n\n return opts\n\n def setup_environment(self, spack_env, run_env):\n spec = self.spec\n # mvapich2 configure fails when F90 and F90FLAGS are set\n spack_env.unset('F90')\n spack_env.unset('F90FLAGS')\n if 'process_managers=slurm' in spec:\n run_env.set('SLURM_MPI_TYPE', 'pmi2')\n\n def setup_dependent_environment(self, spack_env, run_env, dependent_spec):\n spack_env.set('MPICC', join_path(self.prefix.bin, 'mpicc'))\n spack_env.set('MPICXX', join_path(self.prefix.bin, 'mpicxx'))\n spack_env.set('MPIF77', join_path(self.prefix.bin, 'mpif77'))\n spack_env.set('MPIF90', join_path(self.prefix.bin, 'mpif90'))\n\n spack_env.set('MPICH_CC', spack_cc)\n spack_env.set('MPICH_CXX', spack_cxx)\n spack_env.set('MPICH_F77', spack_f77)\n spack_env.set('MPICH_F90', spack_fc)\n spack_env.set('MPICH_FC', spack_fc)\n\n def setup_dependent_package(self, module, dependent_spec):\n self.spec.mpicc = join_path(self.prefix.bin, 'mpicc')\n self.spec.mpicxx = join_path(self.prefix.bin, 'mpicxx')\n self.spec.mpifc = join_path(self.prefix.bin, 'mpif90')\n self.spec.mpif77 = join_path(self.prefix.bin, 'mpif77')\n self.spec.mpicxx_shared_libs = [\n join_path(self.prefix.lib, 'libmpicxx.{0}'.format(dso_suffix)),\n join_path(self.prefix.lib, 'libmpi.{0}'.format(dso_suffix))\n ]\n\n @run_before('configure')\n def die_without_fortran(self):\n # Until we can pass variants such as +fortran through virtual\n # dependencies depends_on('mpi'), require Fortran compiler to\n # avoid delayed build errors in dependents.\n if (self.compiler.f77 is None) or (self.compiler.fc is None):\n raise InstallError(\n 'Mvapich2 requires both C and Fortran compilers!'\n )\n\n def configure_args(self):\n spec = self.spec\n args = [\n '--enable-shared',\n '--enable-romio',\n '--disable-silent-rules',\n '--disable-new-dtags',\n '--enable-fortran=all',\n \"--enable-threads={0}\".format(spec.variants['threads'].value),\n \"--with-ch3-rank-bits={0}\".format(\n 
spec.variants['ch3_rank_bits'].value),\n ]\n\n args.extend(self.enable_or_disable('alloca'))\n\n if '+debug' in self.spec:\n args.extend([\n '--disable-fast',\n '--enable-error-checking=runtime',\n '--enable-error-messages=all',\n # Permits debugging with TotalView\n '--enable-g=dbg',\n '--enable-debuginfo'\n ])\n else:\n args.append('--enable-fast=all')\n\n if '+cuda' in self.spec:\n args.extend([\n '--enable-cuda',\n '--with-cuda={0}'.format(spec['cuda'].prefix)\n ])\n else:\n args.append('--disable-cuda')\n\n if '+regcache' in self.spec:\n args.append('--enable-registration-cache')\n else:\n args.append('--disable-registration-cache')\n\n args.extend(self.process_manager_options)\n args.extend(self.network_options)\n args.extend(self.file_system_options)\n return args\n", "path": "var/spack/repos/builtin/packages/mvapich2/package.py"}], "after_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2018, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/spack/spack\n# Please also see the NOTICE and LICENSE files for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nimport sys\n\nfrom spack import *\nfrom spack.error import SpackError\n\n\ndef _process_manager_validator(values):\n if len(values) > 1 and 'slurm' in values:\n raise SpackError(\n 'slurm cannot be activated along with other process managers'\n )\n\n\nclass Mvapich2(AutotoolsPackage):\n \"\"\"MVAPICH2 is an MPI implementation for Infiniband networks.\"\"\"\n homepage = \"http://mvapich.cse.ohio-state.edu/\"\n url = \"http://mvapich.cse.ohio-state.edu/download/mvapich/mv2/mvapich2-2.2.tar.gz\"\n list_url = \"http://mvapich.cse.ohio-state.edu/downloads/\"\n\n version('2.3rc2', '6fcf22fe2a16023b462ef57614daa357')\n version('2.3rc1', '386d79ae36b2136d203826465ad8b6cc')\n version('2.3a', '87c3fbf8a755b53806fa9ecb21453445')\n\n # Prefer the latest stable release\n version('2.3', sha256='01d5fb592454ddd9ecc17e91c8983b6aea0e7559aa38f410b111c8ef385b50dd', preferred=True)\n version('2.2', '939b65ebe5b89a5bc822cdab0f31f96e')\n version('2.1', '0095ceecb19bbb7fb262131cb9c2cdd6')\n version('2.0', '9fbb68a4111a8b6338e476dc657388b4')\n\n provides('mpi')\n provides('mpi@:3.0')\n\n variant('debug', default=False,\n description='Enable debug info and error messages at run-time')\n\n variant('cuda', default=False,\n description='Enable CUDA extension')\n\n variant('regcache', default=True,\n description='Enable memory registration cache')\n\n # Accepted values are:\n # single - No threads (MPI_THREAD_SINGLE)\n 
# funneled - Only the main thread calls MPI (MPI_THREAD_FUNNELED)\n # serialized - User serializes calls to MPI (MPI_THREAD_SERIALIZED)\n # multiple - Fully multi-threaded (MPI_THREAD_MULTIPLE)\n # runtime - Alias to \"multiple\"\n variant(\n 'threads',\n default='multiple',\n values=('single', 'funneled', 'serialized', 'multiple'),\n multi=False,\n description='Control the level of thread support'\n )\n\n # 32 is needed when job size exceeds 32768 cores\n variant(\n 'ch3_rank_bits',\n default='32',\n values=('16', '32'),\n multi=False,\n description='Number of bits allocated to the rank field (16 or 32)'\n )\n\n variant(\n 'process_managers',\n description='List of the process managers to activate',\n values=('slurm', 'hydra', 'gforker', 'remshell'),\n multi=True,\n validator=_process_manager_validator\n )\n\n variant(\n 'fabrics',\n description='The fabric enabled for this build',\n default='psm',\n values=(\n 'psm', 'sock', 'nemesisib', 'nemesis', 'mrail', 'nemesisibtcp',\n 'nemesistcpib'\n )\n )\n\n variant(\n 'alloca',\n default=False,\n description='Use alloca to allocate temporary memory if available'\n )\n\n variant(\n 'file_systems',\n description='List of the ROMIO file systems to activate',\n values=('lustre', 'gpfs', 'nfs', 'ufs'),\n multi=True\n )\n\n depends_on('bison', type='build')\n depends_on('libpciaccess', when=(sys.platform != 'darwin'))\n depends_on('cuda', when='+cuda')\n depends_on('psm', when='fabrics=psm')\n depends_on('rdma-core', when='fabrics=mrail')\n depends_on('rdma-core', when='fabrics=nemesisib')\n depends_on('rdma-core', when='fabrics=nemesistcpib')\n depends_on('rdma-core', when='fabrics=nemesisibtcp')\n\n filter_compiler_wrappers(\n 'mpicc', 'mpicxx', 'mpif77', 'mpif90', 'mpifort', relative_root='bin'\n )\n\n @property\n def libs(self):\n query_parameters = self.spec.last_query.extra_parameters\n libraries = ['libmpi']\n\n if 'cxx' in query_parameters:\n libraries = ['libmpicxx'] + libraries\n\n return find_libraries(\n libraries, root=self.prefix, shared=True, recursive=True\n )\n\n @property\n def process_manager_options(self):\n spec = self.spec\n\n other_pms = []\n for x in ('hydra', 'gforker', 'remshell'):\n if 'process_managers={0}'.format(x) in spec:\n other_pms.append(x)\n\n opts = []\n if len(other_pms) > 0:\n opts = ['--with-pm=%s' % ':'.join(other_pms)]\n\n # See: http://slurm.schedmd.com/mpi_guide.html#mvapich2\n if 'process_managers=slurm' in spec:\n opts = [\n '--with-pmi=pmi2',\n '--with-pm=slurm'\n ]\n\n return opts\n\n @property\n def network_options(self):\n opts = []\n # From here on I can suppose that only one variant has been selected\n if 'fabrics=psm' in self.spec:\n opts = [\n \"--with-device=ch3:psm\",\n \"--with-psm={0}\".format(self.spec['psm'].prefix)\n ]\n elif 'fabrics=sock' in self.spec:\n opts = [\"--with-device=ch3:sock\"]\n elif 'fabrics=nemesistcpib' in self.spec:\n opts = [\"--with-device=ch3:nemesis:tcp,ib\"]\n elif 'fabrics=nemesisibtcp' in self.spec:\n opts = [\"--with-device=ch3:nemesis:ib,tcp\"]\n elif 'fabrics=nemesisib' in self.spec:\n opts = [\"--with-device=ch3:nemesis:ib\"]\n elif 'fabrics=nemesis' in self.spec:\n opts = [\"--with-device=ch3:nemesis\"]\n elif 'fabrics=mrail' in self.spec:\n opts = [\"--with-device=ch3:mrail\", \"--with-rdma=gen2\",\n \"--disable-mcast\"]\n return opts\n\n @property\n def file_system_options(self):\n spec = self.spec\n\n fs = []\n for x in ('lustre', 'gpfs', 'nfs', 'ufs'):\n if 'file_systems={0}'.format(x) in spec:\n fs.append(x)\n\n opts = []\n if len(fs) > 0:\n 
opts.append('--with-file-system=%s' % '+'.join(fs))\n\n return opts\n\n def setup_environment(self, spack_env, run_env):\n spec = self.spec\n # mvapich2 configure fails when F90 and F90FLAGS are set\n spack_env.unset('F90')\n spack_env.unset('F90FLAGS')\n if 'process_managers=slurm' in spec:\n run_env.set('SLURM_MPI_TYPE', 'pmi2')\n\n def setup_dependent_environment(self, spack_env, run_env, dependent_spec):\n spack_env.set('MPICC', join_path(self.prefix.bin, 'mpicc'))\n spack_env.set('MPICXX', join_path(self.prefix.bin, 'mpicxx'))\n spack_env.set('MPIF77', join_path(self.prefix.bin, 'mpif77'))\n spack_env.set('MPIF90', join_path(self.prefix.bin, 'mpif90'))\n\n spack_env.set('MPICH_CC', spack_cc)\n spack_env.set('MPICH_CXX', spack_cxx)\n spack_env.set('MPICH_F77', spack_f77)\n spack_env.set('MPICH_F90', spack_fc)\n spack_env.set('MPICH_FC', spack_fc)\n\n def setup_dependent_package(self, module, dependent_spec):\n self.spec.mpicc = join_path(self.prefix.bin, 'mpicc')\n self.spec.mpicxx = join_path(self.prefix.bin, 'mpicxx')\n self.spec.mpifc = join_path(self.prefix.bin, 'mpif90')\n self.spec.mpif77 = join_path(self.prefix.bin, 'mpif77')\n self.spec.mpicxx_shared_libs = [\n join_path(self.prefix.lib, 'libmpicxx.{0}'.format(dso_suffix)),\n join_path(self.prefix.lib, 'libmpi.{0}'.format(dso_suffix))\n ]\n\n @run_before('configure')\n def die_without_fortran(self):\n # Until we can pass variants such as +fortran through virtual\n # dependencies depends_on('mpi'), require Fortran compiler to\n # avoid delayed build errors in dependents.\n if (self.compiler.f77 is None) or (self.compiler.fc is None):\n raise InstallError(\n 'Mvapich2 requires both C and Fortran compilers!'\n )\n\n def configure_args(self):\n spec = self.spec\n args = [\n '--enable-shared',\n '--enable-romio',\n '--disable-silent-rules',\n '--disable-new-dtags',\n '--enable-fortran=all',\n \"--enable-threads={0}\".format(spec.variants['threads'].value),\n \"--with-ch3-rank-bits={0}\".format(\n spec.variants['ch3_rank_bits'].value),\n ]\n\n args.extend(self.enable_or_disable('alloca'))\n\n if '+debug' in self.spec:\n args.extend([\n '--disable-fast',\n '--enable-error-checking=runtime',\n '--enable-error-messages=all',\n # Permits debugging with TotalView\n '--enable-g=dbg',\n '--enable-debuginfo'\n ])\n else:\n args.append('--enable-fast=all')\n\n if '+cuda' in self.spec:\n args.extend([\n '--enable-cuda',\n '--with-cuda={0}'.format(spec['cuda'].prefix)\n ])\n else:\n args.append('--disable-cuda')\n\n if '+regcache' in self.spec:\n args.append('--enable-registration-cache')\n else:\n args.append('--disable-registration-cache')\n\n args.extend(self.process_manager_options)\n args.extend(self.network_options)\n args.extend(self.file_system_options)\n return args\n", "path": "var/spack/repos/builtin/packages/mvapich2/package.py"}]}
| 3,873 | 351 |
gh_patches_debug_24258
|
rasdani/github-patches
|
git_diff
|
PyGithub__PyGithub-638
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Team add_membership should take parameter for role
The github api allows the use of this call to specify the role of the new member, https://developer.github.com/v3/orgs/teams/#add-or-update-team-membership.
Team.add_membership should allow the same.
Team add_membership method needs a test
Team add_membership should take parameter for role
The github api allows the use of this call to specify the role of the new member, https://developer.github.com/v3/orgs/teams/#add-or-update-team-membership.
Team.add_membership should allow the same.
--- END ISSUE ---
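The repository's actual patch is not reproduced in this excerpt, but one plausible shape of the change, modeled on the `set_repo_permission` method shown in the file below and on the role values ("member" or "maintainer") documented by the GitHub API, might look roughly like this (assuming the imports already present in `github/Team.py`):

```python
def add_membership(self, member, role=github.GithubObject.NotSet):
    """
    :calls: `PUT /teams/:id/memberships/:user <http://developer.github.com/v3/orgs/teams>`_
    :param member: :class:`github.NamedUser.NamedUser`
    :param role: string, either "member" or "maintainer"
    :rtype: None
    """
    assert isinstance(member, github.NamedUser.NamedUser), member
    assert role is github.GithubObject.NotSet or role in ("member", "maintainer"), role
    put_parameters = {}
    if role is not github.GithubObject.NotSet:
        put_parameters["role"] = role
    headers, data = self._requester.requestJsonAndCheck(
        "PUT",
        self.url + "/memberships/" + member._identity,
        input=put_parameters
    )
```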
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `github/Team.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # ########################## Copyrights and license ############################
4 # #
5 # Copyright 2012 Vincent Jacques <[email protected]> #
6 # Copyright 2012 Zearin <[email protected]> #
7 # Copyright 2013 AKFish <[email protected]> #
8 # Copyright 2013 Vincent Jacques <[email protected]> #
9 # Copyright 2013 martinqt <[email protected]> #
10 # #
11 # This file is part of PyGithub. #
12 # http://pygithub.github.io/PyGithub/v1/index.html #
13 # #
14 # PyGithub is free software: you can redistribute it and/or modify it under #
15 # the terms of the GNU Lesser General Public License as published by the Free #
16 # Software Foundation, either version 3 of the License, or (at your option) #
17 # any later version. #
18 # #
19 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
20 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
21 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
22 # details. #
23 # #
24 # You should have received a copy of the GNU Lesser General Public License #
25 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
26 # #
27 # ##############################################################################
28
29 import github.GithubObject
30 import github.PaginatedList
31
32 import github.Repository
33 import github.NamedUser
34
35
36 class Team(github.GithubObject.CompletableGithubObject):
37 """
38 This class represents Teams. The reference can be found here http://developer.github.com/v3/orgs/teams/
39 """
40
41 def __repr__(self):
42 return self.get__repr__({"id": self._id.value, "name": self._name.value})
43
44 @property
45 def id(self):
46 """
47 :type: integer
48 """
49 self._completeIfNotSet(self._id)
50 return self._id.value
51
52 @property
53 def members_count(self):
54 """
55 :type: integer
56 """
57 self._completeIfNotSet(self._members_count)
58 return self._members_count.value
59
60 @property
61 def members_url(self):
62 """
63 :type: string
64 """
65 self._completeIfNotSet(self._members_url)
66 return self._members_url.value
67
68 @property
69 def name(self):
70 """
71 :type: string
72 """
73 self._completeIfNotSet(self._name)
74 return self._name.value
75
76 @property
77 def permission(self):
78 """
79 :type: string
80 """
81 self._completeIfNotSet(self._permission)
82 return self._permission.value
83
84 @property
85 def repos_count(self):
86 """
87 :type: integer
88 """
89 self._completeIfNotSet(self._repos_count)
90 return self._repos_count.value
91
92 @property
93 def repositories_url(self):
94 """
95 :type: string
96 """
97 self._completeIfNotSet(self._repositories_url)
98 return self._repositories_url.value
99
100 @property
101 def slug(self):
102 """
103 :type: string
104 """
105 self._completeIfNotSet(self._slug)
106 return self._slug.value
107
108 @property
109 def url(self):
110 """
111 :type: string
112 """
113 self._completeIfNotSet(self._url)
114 return self._url.value
115
116 def add_to_members(self, member):
117 """
118 :calls: `PUT /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_
119 :param member: :class:`github.NamedUser.NamedUser`
120 :rtype: None
121 """
122 assert isinstance(member, github.NamedUser.NamedUser), member
123 headers, data = self._requester.requestJsonAndCheck(
124 "PUT",
125 self.url + "/members/" + member._identity
126 )
127
128 def add_membership(self, member):
129 """
130 :calls: `PUT /teams/:id/memberships/:user <http://developer.github.com/v3/orgs/teams>`_
131 :param member: :class:`github.Nameduser.NamedUser`
132 :rtype: None
133 """
134 assert isinstance(member, github.NamedUser.NamedUser), member
135 headers, data = self._requester.requestJsonAndCheck(
136 "PUT",
137 self.url + "/memberships/" + member._identity
138 )
139
140 def add_to_repos(self, repo):
141 """
142 :calls: `PUT /teams/:id/repos/:org/:repo <http://developer.github.com/v3/orgs/teams>`_
143 :param repo: :class:`github.Repository.Repository`
144 :rtype: None
145 """
146 assert isinstance(repo, github.Repository.Repository), repo
147 headers, data = self._requester.requestJsonAndCheck(
148 "PUT",
149 self.url + "/repos/" + repo._identity
150 )
151
152 def set_repo_permission(self, repo, permission):
153 """
154 :calls: `PUT /teams/:id/repos/:org/:repo <http://developer.github.com/v3/orgs/teams>`_
155 :param repo: :class:`github.Repository.Repository`
156 :param permission: string
157 :rtype: None
158 """
159 assert isinstance(repo, github.Repository.Repository), repo
160 put_parameters = {
161 "permission": permission,
162 }
163 headers, data = self._requester.requestJsonAndCheck(
164 "PUT",
165 self.url + "/repos/" + repo._identity,
166 input=put_parameters
167 )
168
169 def delete(self):
170 """
171 :calls: `DELETE /teams/:id <http://developer.github.com/v3/orgs/teams>`_
172 :rtype: None
173 """
174 headers, data = self._requester.requestJsonAndCheck(
175 "DELETE",
176 self.url
177 )
178
179 def edit(self, name, permission=github.GithubObject.NotSet):
180 """
181 :calls: `PATCH /teams/:id <http://developer.github.com/v3/orgs/teams>`_
182 :param name: string
183 :param permission: string
184 :rtype: None
185 """
186 assert isinstance(name, (str, unicode)), name
187 assert permission is github.GithubObject.NotSet or isinstance(permission, (str, unicode)), permission
188 post_parameters = {
189 "name": name,
190 }
191 if permission is not github.GithubObject.NotSet:
192 post_parameters["permission"] = permission
193 headers, data = self._requester.requestJsonAndCheck(
194 "PATCH",
195 self.url,
196 input=post_parameters
197 )
198 self._useAttributes(data)
199
200 def get_members(self):
201 """
202 :calls: `GET /teams/:id/members <http://developer.github.com/v3/orgs/teams>`_
203 :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.NamedUser.NamedUser`
204 """
205 return github.PaginatedList.PaginatedList(
206 github.NamedUser.NamedUser,
207 self._requester,
208 self.url + "/members",
209 None
210 )
211
212 def get_repos(self):
213 """
214 :calls: `GET /teams/:id/repos <http://developer.github.com/v3/orgs/teams>`_
215 :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.Repository.Repository`
216 """
217 return github.PaginatedList.PaginatedList(
218 github.Repository.Repository,
219 self._requester,
220 self.url + "/repos",
221 None
222 )
223
224 def has_in_members(self, member):
225 """
226 :calls: `GET /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_
227 :param member: :class:`github.NamedUser.NamedUser`
228 :rtype: bool
229 """
230 assert isinstance(member, github.NamedUser.NamedUser), member
231 status, headers, data = self._requester.requestJson(
232 "GET",
233 self.url + "/members/" + member._identity
234 )
235 return status == 204
236
237 def has_in_repos(self, repo):
238 """
239 :calls: `GET /teams/:id/repos/:owner/:repo <http://developer.github.com/v3/orgs/teams>`_
240 :param repo: :class:`github.Repository.Repository`
241 :rtype: bool
242 """
243 assert isinstance(repo, github.Repository.Repository), repo
244 status, headers, data = self._requester.requestJson(
245 "GET",
246 self.url + "/repos/" + repo._identity
247 )
248 return status == 204
249
250 def remove_from_members(self, member):
251 """
252 :calls: `DELETE /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_
253 :param member: :class:`github.NamedUser.NamedUser`
254 :rtype: None
255 """
256 assert isinstance(member, github.NamedUser.NamedUser), member
257 headers, data = self._requester.requestJsonAndCheck(
258 "DELETE",
259 self.url + "/members/" + member._identity
260 )
261
262 def remove_from_repos(self, repo):
263 """
264 :calls: `DELETE /teams/:id/repos/:owner/:repo <http://developer.github.com/v3/orgs/teams>`_
265 :param repo: :class:`github.Repository.Repository`
266 :rtype: None
267 """
268 assert isinstance(repo, github.Repository.Repository), repo
269 headers, data = self._requester.requestJsonAndCheck(
270 "DELETE",
271 self.url + "/repos/" + repo._identity
272 )
273
274 @property
275 def _identity(self):
276 return self.id
277
278 def _initAttributes(self):
279 self._id = github.GithubObject.NotSet
280 self._members_count = github.GithubObject.NotSet
281 self._members_url = github.GithubObject.NotSet
282 self._name = github.GithubObject.NotSet
283 self._permission = github.GithubObject.NotSet
284 self._repos_count = github.GithubObject.NotSet
285 self._repositories_url = github.GithubObject.NotSet
286 self._slug = github.GithubObject.NotSet
287 self._url = github.GithubObject.NotSet
288
289 def _useAttributes(self, attributes):
290 if "id" in attributes: # pragma no branch
291 self._id = self._makeIntAttribute(attributes["id"])
292 if "members_count" in attributes: # pragma no branch
293 self._members_count = self._makeIntAttribute(attributes["members_count"])
294 if "members_url" in attributes: # pragma no branch
295 self._members_url = self._makeStringAttribute(attributes["members_url"])
296 if "name" in attributes: # pragma no branch
297 self._name = self._makeStringAttribute(attributes["name"])
298 if "permission" in attributes: # pragma no branch
299 self._permission = self._makeStringAttribute(attributes["permission"])
300 if "repos_count" in attributes: # pragma no branch
301 self._repos_count = self._makeIntAttribute(attributes["repos_count"])
302 if "repositories_url" in attributes: # pragma no branch
303 self._repositories_url = self._makeStringAttribute(attributes["repositories_url"])
304 if "slug" in attributes: # pragma no branch
305 self._slug = self._makeStringAttribute(attributes["slug"])
306 if "url" in attributes: # pragma no branch
307 self._url = self._makeStringAttribute(attributes["url"])
308
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/github/Team.py b/github/Team.py
--- a/github/Team.py
+++ b/github/Team.py
@@ -125,16 +125,29 @@
self.url + "/members/" + member._identity
)
- def add_membership(self, member):
+ def add_membership(self, member, role=github.GithubObject.NotSet):
"""
:calls: `PUT /teams/:id/memberships/:user <http://developer.github.com/v3/orgs/teams>`_
:param member: :class:`github.Nameduser.NamedUser`
+ :param role: string
:rtype: None
"""
assert isinstance(member, github.NamedUser.NamedUser), member
+ assert role is github.GithubObject.NotSet or isinstance(
+ role, (str, unicode)), role
+ if role is not github.GithubObject.NotSet:
+ assert role in ['member', 'maintainer']
+ put_parameters = {
+ "role": role,
+ }
+ else:
+ put_parameters = {
+ "role": "member",
+ }
headers, data = self._requester.requestJsonAndCheck(
"PUT",
- self.url + "/memberships/" + member._identity
+ self.url + "/memberships/" + member._identity,
+ input=put_parameters
)
def add_to_repos(self, repo):
|
{"golden_diff": "diff --git a/github/Team.py b/github/Team.py\n--- a/github/Team.py\n+++ b/github/Team.py\n@@ -125,16 +125,29 @@\n self.url + \"/members/\" + member._identity\n )\n \n- def add_membership(self, member):\n+ def add_membership(self, member, role=github.GithubObject.NotSet):\n \"\"\"\n :calls: `PUT /teams/:id/memberships/:user <http://developer.github.com/v3/orgs/teams>`_\n :param member: :class:`github.Nameduser.NamedUser`\n+ :param role: string\n :rtype: None\n \"\"\"\n assert isinstance(member, github.NamedUser.NamedUser), member\n+ assert role is github.GithubObject.NotSet or isinstance(\n+ role, (str, unicode)), role\n+ if role is not github.GithubObject.NotSet:\n+ assert role in ['member', 'maintainer']\n+ put_parameters = {\n+ \"role\": role,\n+ }\n+ else:\n+ put_parameters = {\n+ \"role\": \"member\",\n+ }\n headers, data = self._requester.requestJsonAndCheck(\n \"PUT\",\n- self.url + \"/memberships/\" + member._identity\n+ self.url + \"/memberships/\" + member._identity,\n+ input=put_parameters\n )\n \n def add_to_repos(self, repo):\n", "issue": "Team add_membership should take parameter for role\nThe github api allows the use of this call to specify the role of the new member, https://developer.github.com/v3/orgs/teams/#add-or-update-team-membership. \r\n\r\nTeam.add_membership should allow the same.\nTeam add_membership method needs a test\n\nTeam add_membership should take parameter for role\nThe github api allows the use of this call to specify the role of the new member, https://developer.github.com/v3/orgs/teams/#add-or-update-team-membership. \r\n\r\nTeam.add_membership should allow the same.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# ########################## Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2013 martinqt <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.github.io/PyGithub/v1/index.html #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #\n# #\n# ##############################################################################\n\nimport github.GithubObject\nimport github.PaginatedList\n\nimport github.Repository\nimport github.NamedUser\n\n\nclass Team(github.GithubObject.CompletableGithubObject):\n \"\"\"\n This class represents Teams. 
The reference can be found here http://developer.github.com/v3/orgs/teams/\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__({\"id\": self._id.value, \"name\": self._name.value})\n\n @property\n def id(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._id)\n return self._id.value\n\n @property\n def members_count(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._members_count)\n return self._members_count.value\n\n @property\n def members_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._members_url)\n return self._members_url.value\n\n @property\n def name(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._name)\n return self._name.value\n\n @property\n def permission(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._permission)\n return self._permission.value\n\n @property\n def repos_count(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._repos_count)\n return self._repos_count.value\n\n @property\n def repositories_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._repositories_url)\n return self._repositories_url.value\n\n @property\n def slug(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._slug)\n return self._slug.value\n\n @property\n def url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._url)\n return self._url.value\n\n def add_to_members(self, member):\n \"\"\"\n :calls: `PUT /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_\n :param member: :class:`github.NamedUser.NamedUser`\n :rtype: None\n \"\"\"\n assert isinstance(member, github.NamedUser.NamedUser), member\n headers, data = self._requester.requestJsonAndCheck(\n \"PUT\",\n self.url + \"/members/\" + member._identity\n )\n\n def add_membership(self, member):\n \"\"\"\n :calls: `PUT /teams/:id/memberships/:user <http://developer.github.com/v3/orgs/teams>`_\n :param member: :class:`github.Nameduser.NamedUser`\n :rtype: None\n \"\"\"\n assert isinstance(member, github.NamedUser.NamedUser), member\n headers, data = self._requester.requestJsonAndCheck(\n \"PUT\",\n self.url + \"/memberships/\" + member._identity\n )\n\n def add_to_repos(self, repo):\n \"\"\"\n :calls: `PUT /teams/:id/repos/:org/:repo <http://developer.github.com/v3/orgs/teams>`_\n :param repo: :class:`github.Repository.Repository`\n :rtype: None\n \"\"\"\n assert isinstance(repo, github.Repository.Repository), repo\n headers, data = self._requester.requestJsonAndCheck(\n \"PUT\",\n self.url + \"/repos/\" + repo._identity\n )\n\n def set_repo_permission(self, repo, permission):\n \"\"\"\n :calls: `PUT /teams/:id/repos/:org/:repo <http://developer.github.com/v3/orgs/teams>`_\n :param repo: :class:`github.Repository.Repository`\n :param permission: string\n :rtype: None\n \"\"\"\n assert isinstance(repo, github.Repository.Repository), repo\n put_parameters = {\n \"permission\": permission,\n }\n headers, data = self._requester.requestJsonAndCheck(\n \"PUT\",\n self.url + \"/repos/\" + repo._identity,\n input=put_parameters\n )\n\n def delete(self):\n \"\"\"\n :calls: `DELETE /teams/:id <http://developer.github.com/v3/orgs/teams>`_\n :rtype: None\n \"\"\"\n headers, data = self._requester.requestJsonAndCheck(\n \"DELETE\",\n self.url\n )\n\n def edit(self, name, permission=github.GithubObject.NotSet):\n \"\"\"\n :calls: `PATCH /teams/:id <http://developer.github.com/v3/orgs/teams>`_\n :param name: string\n :param permission: 
string\n :rtype: None\n \"\"\"\n assert isinstance(name, (str, unicode)), name\n assert permission is github.GithubObject.NotSet or isinstance(permission, (str, unicode)), permission\n post_parameters = {\n \"name\": name,\n }\n if permission is not github.GithubObject.NotSet:\n post_parameters[\"permission\"] = permission\n headers, data = self._requester.requestJsonAndCheck(\n \"PATCH\",\n self.url,\n input=post_parameters\n )\n self._useAttributes(data)\n\n def get_members(self):\n \"\"\"\n :calls: `GET /teams/:id/members <http://developer.github.com/v3/orgs/teams>`_\n :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.NamedUser.NamedUser`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.NamedUser.NamedUser,\n self._requester,\n self.url + \"/members\",\n None\n )\n\n def get_repos(self):\n \"\"\"\n :calls: `GET /teams/:id/repos <http://developer.github.com/v3/orgs/teams>`_\n :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.Repository.Repository`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.Repository.Repository,\n self._requester,\n self.url + \"/repos\",\n None\n )\n\n def has_in_members(self, member):\n \"\"\"\n :calls: `GET /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_\n :param member: :class:`github.NamedUser.NamedUser`\n :rtype: bool\n \"\"\"\n assert isinstance(member, github.NamedUser.NamedUser), member\n status, headers, data = self._requester.requestJson(\n \"GET\",\n self.url + \"/members/\" + member._identity\n )\n return status == 204\n\n def has_in_repos(self, repo):\n \"\"\"\n :calls: `GET /teams/:id/repos/:owner/:repo <http://developer.github.com/v3/orgs/teams>`_\n :param repo: :class:`github.Repository.Repository`\n :rtype: bool\n \"\"\"\n assert isinstance(repo, github.Repository.Repository), repo\n status, headers, data = self._requester.requestJson(\n \"GET\",\n self.url + \"/repos/\" + repo._identity\n )\n return status == 204\n\n def remove_from_members(self, member):\n \"\"\"\n :calls: `DELETE /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_\n :param member: :class:`github.NamedUser.NamedUser`\n :rtype: None\n \"\"\"\n assert isinstance(member, github.NamedUser.NamedUser), member\n headers, data = self._requester.requestJsonAndCheck(\n \"DELETE\",\n self.url + \"/members/\" + member._identity\n )\n\n def remove_from_repos(self, repo):\n \"\"\"\n :calls: `DELETE /teams/:id/repos/:owner/:repo <http://developer.github.com/v3/orgs/teams>`_\n :param repo: :class:`github.Repository.Repository`\n :rtype: None\n \"\"\"\n assert isinstance(repo, github.Repository.Repository), repo\n headers, data = self._requester.requestJsonAndCheck(\n \"DELETE\",\n self.url + \"/repos/\" + repo._identity\n )\n\n @property\n def _identity(self):\n return self.id\n\n def _initAttributes(self):\n self._id = github.GithubObject.NotSet\n self._members_count = github.GithubObject.NotSet\n self._members_url = github.GithubObject.NotSet\n self._name = github.GithubObject.NotSet\n self._permission = github.GithubObject.NotSet\n self._repos_count = github.GithubObject.NotSet\n self._repositories_url = github.GithubObject.NotSet\n self._slug = github.GithubObject.NotSet\n self._url = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"id\" in attributes: # pragma no branch\n self._id = self._makeIntAttribute(attributes[\"id\"])\n if \"members_count\" in attributes: # pragma no branch\n self._members_count = 
self._makeIntAttribute(attributes[\"members_count\"])\n if \"members_url\" in attributes: # pragma no branch\n self._members_url = self._makeStringAttribute(attributes[\"members_url\"])\n if \"name\" in attributes: # pragma no branch\n self._name = self._makeStringAttribute(attributes[\"name\"])\n if \"permission\" in attributes: # pragma no branch\n self._permission = self._makeStringAttribute(attributes[\"permission\"])\n if \"repos_count\" in attributes: # pragma no branch\n self._repos_count = self._makeIntAttribute(attributes[\"repos_count\"])\n if \"repositories_url\" in attributes: # pragma no branch\n self._repositories_url = self._makeStringAttribute(attributes[\"repositories_url\"])\n if \"slug\" in attributes: # pragma no branch\n self._slug = self._makeStringAttribute(attributes[\"slug\"])\n if \"url\" in attributes: # pragma no branch\n self._url = self._makeStringAttribute(attributes[\"url\"])\n", "path": "github/Team.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# ########################## Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2013 martinqt <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.github.io/PyGithub/v1/index.html #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #\n# #\n# ##############################################################################\n\nimport github.GithubObject\nimport github.PaginatedList\n\nimport github.Repository\nimport github.NamedUser\n\n\nclass Team(github.GithubObject.CompletableGithubObject):\n \"\"\"\n This class represents Teams. 
The reference can be found here http://developer.github.com/v3/orgs/teams/\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__({\"id\": self._id.value, \"name\": self._name.value})\n\n @property\n def id(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._id)\n return self._id.value\n\n @property\n def members_count(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._members_count)\n return self._members_count.value\n\n @property\n def members_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._members_url)\n return self._members_url.value\n\n @property\n def name(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._name)\n return self._name.value\n\n @property\n def permission(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._permission)\n return self._permission.value\n\n @property\n def repos_count(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._repos_count)\n return self._repos_count.value\n\n @property\n def repositories_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._repositories_url)\n return self._repositories_url.value\n\n @property\n def slug(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._slug)\n return self._slug.value\n\n @property\n def url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._url)\n return self._url.value\n\n def add_to_members(self, member):\n \"\"\"\n :calls: `PUT /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_\n :param member: :class:`github.NamedUser.NamedUser`\n :rtype: None\n \"\"\"\n assert isinstance(member, github.NamedUser.NamedUser), member\n headers, data = self._requester.requestJsonAndCheck(\n \"PUT\",\n self.url + \"/members/\" + member._identity\n )\n\n def add_membership(self, member, role=github.GithubObject.NotSet):\n \"\"\"\n :calls: `PUT /teams/:id/memberships/:user <http://developer.github.com/v3/orgs/teams>`_\n :param member: :class:`github.Nameduser.NamedUser`\n :param role: string\n :rtype: None\n \"\"\"\n assert isinstance(member, github.NamedUser.NamedUser), member\n assert role is github.GithubObject.NotSet or isinstance(\n role, (str, unicode)), role\n if role is not github.GithubObject.NotSet:\n assert role in ['member', 'maintainer']\n put_parameters = {\n \"role\": role,\n }\n else:\n put_parameters = {\n \"role\": \"member\",\n }\n headers, data = self._requester.requestJsonAndCheck(\n \"PUT\",\n self.url + \"/memberships/\" + member._identity,\n input=put_parameters\n )\n\n def add_to_repos(self, repo):\n \"\"\"\n :calls: `PUT /teams/:id/repos/:org/:repo <http://developer.github.com/v3/orgs/teams>`_\n :param repo: :class:`github.Repository.Repository`\n :rtype: None\n \"\"\"\n assert isinstance(repo, github.Repository.Repository), repo\n headers, data = self._requester.requestJsonAndCheck(\n \"PUT\",\n self.url + \"/repos/\" + repo._identity\n )\n\n def set_repo_permission(self, repo, permission):\n \"\"\"\n :calls: `PUT /teams/:id/repos/:org/:repo <http://developer.github.com/v3/orgs/teams>`_\n :param repo: :class:`github.Repository.Repository`\n :param permission: string\n :rtype: None\n \"\"\"\n assert isinstance(repo, github.Repository.Repository), repo\n put_parameters = {\n \"permission\": permission,\n }\n headers, data = self._requester.requestJsonAndCheck(\n \"PUT\",\n self.url + \"/repos/\" + repo._identity,\n input=put_parameters\n )\n\n def delete(self):\n \"\"\"\n :calls: `DELETE 
/teams/:id <http://developer.github.com/v3/orgs/teams>`_\n :rtype: None\n \"\"\"\n headers, data = self._requester.requestJsonAndCheck(\n \"DELETE\",\n self.url\n )\n\n def edit(self, name, permission=github.GithubObject.NotSet):\n \"\"\"\n :calls: `PATCH /teams/:id <http://developer.github.com/v3/orgs/teams>`_\n :param name: string\n :param permission: string\n :rtype: None\n \"\"\"\n assert isinstance(name, (str, unicode)), name\n assert permission is github.GithubObject.NotSet or isinstance(permission, (str, unicode)), permission\n post_parameters = {\n \"name\": name,\n }\n if permission is not github.GithubObject.NotSet:\n post_parameters[\"permission\"] = permission\n headers, data = self._requester.requestJsonAndCheck(\n \"PATCH\",\n self.url,\n input=post_parameters\n )\n self._useAttributes(data)\n\n def get_members(self):\n \"\"\"\n :calls: `GET /teams/:id/members <http://developer.github.com/v3/orgs/teams>`_\n :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.NamedUser.NamedUser`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.NamedUser.NamedUser,\n self._requester,\n self.url + \"/members\",\n None\n )\n\n def get_repos(self):\n \"\"\"\n :calls: `GET /teams/:id/repos <http://developer.github.com/v3/orgs/teams>`_\n :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.Repository.Repository`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.Repository.Repository,\n self._requester,\n self.url + \"/repos\",\n None\n )\n\n def has_in_members(self, member):\n \"\"\"\n :calls: `GET /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_\n :param member: :class:`github.NamedUser.NamedUser`\n :rtype: bool\n \"\"\"\n assert isinstance(member, github.NamedUser.NamedUser), member\n status, headers, data = self._requester.requestJson(\n \"GET\",\n self.url + \"/members/\" + member._identity\n )\n return status == 204\n\n def has_in_repos(self, repo):\n \"\"\"\n :calls: `GET /teams/:id/repos/:owner/:repo <http://developer.github.com/v3/orgs/teams>`_\n :param repo: :class:`github.Repository.Repository`\n :rtype: bool\n \"\"\"\n assert isinstance(repo, github.Repository.Repository), repo\n status, headers, data = self._requester.requestJson(\n \"GET\",\n self.url + \"/repos/\" + repo._identity\n )\n return status == 204\n\n def remove_from_members(self, member):\n \"\"\"\n :calls: `DELETE /teams/:id/members/:user <http://developer.github.com/v3/orgs/teams>`_\n :param member: :class:`github.NamedUser.NamedUser`\n :rtype: None\n \"\"\"\n assert isinstance(member, github.NamedUser.NamedUser), member\n headers, data = self._requester.requestJsonAndCheck(\n \"DELETE\",\n self.url + \"/members/\" + member._identity\n )\n\n def remove_from_repos(self, repo):\n \"\"\"\n :calls: `DELETE /teams/:id/repos/:owner/:repo <http://developer.github.com/v3/orgs/teams>`_\n :param repo: :class:`github.Repository.Repository`\n :rtype: None\n \"\"\"\n assert isinstance(repo, github.Repository.Repository), repo\n headers, data = self._requester.requestJsonAndCheck(\n \"DELETE\",\n self.url + \"/repos/\" + repo._identity\n )\n\n @property\n def _identity(self):\n return self.id\n\n def _initAttributes(self):\n self._id = github.GithubObject.NotSet\n self._members_count = github.GithubObject.NotSet\n self._members_url = github.GithubObject.NotSet\n self._name = github.GithubObject.NotSet\n self._permission = github.GithubObject.NotSet\n self._repos_count = github.GithubObject.NotSet\n self._repositories_url = github.GithubObject.NotSet\n 
self._slug = github.GithubObject.NotSet\n self._url = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"id\" in attributes: # pragma no branch\n self._id = self._makeIntAttribute(attributes[\"id\"])\n if \"members_count\" in attributes: # pragma no branch\n self._members_count = self._makeIntAttribute(attributes[\"members_count\"])\n if \"members_url\" in attributes: # pragma no branch\n self._members_url = self._makeStringAttribute(attributes[\"members_url\"])\n if \"name\" in attributes: # pragma no branch\n self._name = self._makeStringAttribute(attributes[\"name\"])\n if \"permission\" in attributes: # pragma no branch\n self._permission = self._makeStringAttribute(attributes[\"permission\"])\n if \"repos_count\" in attributes: # pragma no branch\n self._repos_count = self._makeIntAttribute(attributes[\"repos_count\"])\n if \"repositories_url\" in attributes: # pragma no branch\n self._repositories_url = self._makeStringAttribute(attributes[\"repositories_url\"])\n if \"slug\" in attributes: # pragma no branch\n self._slug = self._makeStringAttribute(attributes[\"slug\"])\n if \"url\" in attributes: # pragma no branch\n self._url = self._makeStringAttribute(attributes[\"url\"])\n", "path": "github/Team.py"}]}
| 3,701 | 313 |
gh_patches_debug_366
|
rasdani/github-patches
|
git_diff
|
python-telegram-bot__python-telegram-bot-1485
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use UTC dates
https://github.com/python-telegram-bot/python-telegram-bot/blob/439790375ed8ed493c43e464aa8e2b60a77939db/telegram/utils/helpers.py#L78-L90
Should probably be using `tz=timezone.utc`. Python's `datetime` isn't the best, and `fromtimestamp` by default sets no `tz` information, which uses the local time, which in turn is generally a bad idea.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `telegram/utils/helpers.py`
Content:
```
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2018
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains helper functions."""
20 from collections import defaultdict
21
22 try:
23 import ujson as json
24 except ImportError:
25 import json
26 from html import escape
27
28 import re
29 import signal
30 from datetime import datetime
31
32 # From https://stackoverflow.com/questions/2549939/get-signal-names-from-numbers-in-python
33 _signames = {v: k
34 for k, v in reversed(sorted(vars(signal).items()))
35 if k.startswith('SIG') and not k.startswith('SIG_')}
36
37
38 def get_signal_name(signum):
39 """Returns the signal name of the given signal number."""
40 return _signames[signum]
41
42
43 # Not using future.backports.datetime here as datetime value might be an input from the user,
44 # making every isinstace() call more delicate. So we just use our own compat layer.
45 if hasattr(datetime, 'timestamp'):
46 # Python 3.3+
47 def _timestamp(dt_obj):
48 return dt_obj.timestamp()
49 else:
50 # Python < 3.3 (incl 2.7)
51 from time import mktime
52
53 def _timestamp(dt_obj):
54 return mktime(dt_obj.timetuple())
55
56
57 def escape_markdown(text):
58 """Helper function to escape telegram markup symbols."""
59 escape_chars = '\*_`\['
60 return re.sub(r'([%s])' % escape_chars, r'\\\1', text)
61
62
63 def to_timestamp(dt_obj):
64 """
65 Args:
66 dt_obj (:class:`datetime.datetime`):
67
68 Returns:
69 int:
70
71 """
72 if not dt_obj:
73 return None
74
75 return int(_timestamp(dt_obj))
76
77
78 def from_timestamp(unixtime):
79 """
80 Args:
81 unixtime (int):
82
83 Returns:
84 datetime.datetime:
85
86 """
87 if not unixtime:
88 return None
89
90 return datetime.fromtimestamp(unixtime)
91
92
93 def mention_html(user_id, name):
94 """
95 Args:
96 user_id (:obj:`int`) The user's id which you want to mention.
97 name (:obj:`str`) The name the mention is showing.
98
99 Returns:
100 :obj:`str`: The inline mention for the user as html.
101 """
102 if isinstance(user_id, int):
103 return u'<a href="tg://user?id={}">{}</a>'.format(user_id, escape(name))
104
105
106 def mention_markdown(user_id, name):
107 """
108 Args:
109 user_id (:obj:`int`) The user's id which you want to mention.
110 name (:obj:`str`) The name the mention is showing.
111
112 Returns:
113 :obj:`str`: The inline mention for the user as markdown.
114 """
115 if isinstance(user_id, int):
116 return u'[{}](tg://user?id={})'.format(escape_markdown(name), user_id)
117
118
119 def effective_message_type(entity):
120 """
121 Extracts the type of message as a string identifier from a :class:`telegram.Message` or a
122 :class:`telegram.Update`.
123
124 Args:
125 entity (:obj:`Update` | :obj:`Message`) The ``update`` or ``message`` to extract from
126
127 Returns:
128 str: One of ``Message.MESSAGE_TYPES``
129
130 """
131
132 # Importing on file-level yields cyclic Import Errors
133 from telegram import Message
134 from telegram import Update
135
136 if isinstance(entity, Message):
137 message = entity
138 elif isinstance(entity, Update):
139 message = entity.effective_message
140 else:
141 raise TypeError("entity is not Message or Update (got: {})".format(type(entity)))
142
143 for i in Message.MESSAGE_TYPES:
144 if getattr(message, i, None):
145 return i
146
147 return None
148
149
150 def enocde_conversations_to_json(conversations):
151 """Helper method to encode a conversations dict (that uses tuples as keys) to a
152 JSON-serializable way. Use :attr:`_decode_conversations_from_json` to decode.
153
154 Args:
155 conversations (:obj:`dict`): The conversations dict to transofrm to JSON.
156
157 Returns:
158 :obj:`str`: The JSON-serialized conversations dict
159 """
160 tmp = {}
161 for handler, states in conversations.items():
162 tmp[handler] = {}
163 for key, state in states.items():
164 tmp[handler][json.dumps(key)] = state
165 return json.dumps(tmp)
166
167
168 def decode_conversations_from_json(json_string):
169 """Helper method to decode a conversations dict (that uses tuples as keys) from a
170 JSON-string created with :attr:`_encode_conversations_to_json`.
171
172 Args:
173 json_string (:obj:`str`): The conversations dict as JSON string.
174
175 Returns:
176 :obj:`dict`: The conversations dict after decoding
177 """
178 tmp = json.loads(json_string)
179 conversations = {}
180 for handler, states in tmp.items():
181 conversations[handler] = {}
182 for key, state in states.items():
183 conversations[handler][tuple(json.loads(key))] = state
184 return conversations
185
186
187 def decode_user_chat_data_from_json(data):
188 """Helper method to decode chat or user data (that uses ints as keys) from a
189 JSON-string.
190
191 Args:
192 data (:obj:`str`): The user/chat_data dict as JSON string.
193
194 Returns:
195 :obj:`dict`: The user/chat_data defaultdict after decoding
196 """
197
198 tmp = defaultdict(dict)
199 decoded_data = json.loads(data)
200 for user, data in decoded_data.items():
201 user = int(user)
202 tmp[user] = {}
203 for key, value in data.items():
204 try:
205 key = int(key)
206 except ValueError:
207 pass
208 tmp[user][key] = value
209 return tmp
210
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/telegram/utils/helpers.py b/telegram/utils/helpers.py
--- a/telegram/utils/helpers.py
+++ b/telegram/utils/helpers.py
@@ -87,7 +87,7 @@
if not unixtime:
return None
- return datetime.fromtimestamp(unixtime)
+ return datetime.utcfromtimestamp(unixtime)
def mention_html(user_id, name):
|
{"golden_diff": "diff --git a/telegram/utils/helpers.py b/telegram/utils/helpers.py\n--- a/telegram/utils/helpers.py\n+++ b/telegram/utils/helpers.py\n@@ -87,7 +87,7 @@\n if not unixtime:\n return None\n \n- return datetime.fromtimestamp(unixtime)\n+ return datetime.utcfromtimestamp(unixtime)\n \n \n def mention_html(user_id, name):\n", "issue": "Use UTC dates\nhttps://github.com/python-telegram-bot/python-telegram-bot/blob/439790375ed8ed493c43e464aa8e2b60a77939db/telegram/utils/helpers.py#L78-L90\r\n\r\nShould probably be using `tz=timezone.utc`. Python's `datetime` isn't the best, and `fromtimestamp` by default sets no `tz` information, which uses the local time, which in turn is generally a bad idea.\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2018\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains helper functions.\"\"\"\nfrom collections import defaultdict\n\ntry:\n import ujson as json\nexcept ImportError:\n import json\nfrom html import escape\n\nimport re\nimport signal\nfrom datetime import datetime\n\n# From https://stackoverflow.com/questions/2549939/get-signal-names-from-numbers-in-python\n_signames = {v: k\n for k, v in reversed(sorted(vars(signal).items()))\n if k.startswith('SIG') and not k.startswith('SIG_')}\n\n\ndef get_signal_name(signum):\n \"\"\"Returns the signal name of the given signal number.\"\"\"\n return _signames[signum]\n\n\n# Not using future.backports.datetime here as datetime value might be an input from the user,\n# making every isinstace() call more delicate. 
So we just use our own compat layer.\nif hasattr(datetime, 'timestamp'):\n # Python 3.3+\n def _timestamp(dt_obj):\n return dt_obj.timestamp()\nelse:\n # Python < 3.3 (incl 2.7)\n from time import mktime\n\n def _timestamp(dt_obj):\n return mktime(dt_obj.timetuple())\n\n\ndef escape_markdown(text):\n \"\"\"Helper function to escape telegram markup symbols.\"\"\"\n escape_chars = '\\*_`\\['\n return re.sub(r'([%s])' % escape_chars, r'\\\\\\1', text)\n\n\ndef to_timestamp(dt_obj):\n \"\"\"\n Args:\n dt_obj (:class:`datetime.datetime`):\n\n Returns:\n int:\n\n \"\"\"\n if not dt_obj:\n return None\n\n return int(_timestamp(dt_obj))\n\n\ndef from_timestamp(unixtime):\n \"\"\"\n Args:\n unixtime (int):\n\n Returns:\n datetime.datetime:\n\n \"\"\"\n if not unixtime:\n return None\n\n return datetime.fromtimestamp(unixtime)\n\n\ndef mention_html(user_id, name):\n \"\"\"\n Args:\n user_id (:obj:`int`) The user's id which you want to mention.\n name (:obj:`str`) The name the mention is showing.\n\n Returns:\n :obj:`str`: The inline mention for the user as html.\n \"\"\"\n if isinstance(user_id, int):\n return u'<a href=\"tg://user?id={}\">{}</a>'.format(user_id, escape(name))\n\n\ndef mention_markdown(user_id, name):\n \"\"\"\n Args:\n user_id (:obj:`int`) The user's id which you want to mention.\n name (:obj:`str`) The name the mention is showing.\n\n Returns:\n :obj:`str`: The inline mention for the user as markdown.\n \"\"\"\n if isinstance(user_id, int):\n return u'[{}](tg://user?id={})'.format(escape_markdown(name), user_id)\n\n\ndef effective_message_type(entity):\n \"\"\"\n Extracts the type of message as a string identifier from a :class:`telegram.Message` or a\n :class:`telegram.Update`.\n\n Args:\n entity (:obj:`Update` | :obj:`Message`) The ``update`` or ``message`` to extract from\n\n Returns:\n str: One of ``Message.MESSAGE_TYPES``\n\n \"\"\"\n\n # Importing on file-level yields cyclic Import Errors\n from telegram import Message\n from telegram import Update\n\n if isinstance(entity, Message):\n message = entity\n elif isinstance(entity, Update):\n message = entity.effective_message\n else:\n raise TypeError(\"entity is not Message or Update (got: {})\".format(type(entity)))\n\n for i in Message.MESSAGE_TYPES:\n if getattr(message, i, None):\n return i\n\n return None\n\n\ndef enocde_conversations_to_json(conversations):\n \"\"\"Helper method to encode a conversations dict (that uses tuples as keys) to a\n JSON-serializable way. 
Use :attr:`_decode_conversations_from_json` to decode.\n\n Args:\n conversations (:obj:`dict`): The conversations dict to transofrm to JSON.\n\n Returns:\n :obj:`str`: The JSON-serialized conversations dict\n \"\"\"\n tmp = {}\n for handler, states in conversations.items():\n tmp[handler] = {}\n for key, state in states.items():\n tmp[handler][json.dumps(key)] = state\n return json.dumps(tmp)\n\n\ndef decode_conversations_from_json(json_string):\n \"\"\"Helper method to decode a conversations dict (that uses tuples as keys) from a\n JSON-string created with :attr:`_encode_conversations_to_json`.\n\n Args:\n json_string (:obj:`str`): The conversations dict as JSON string.\n\n Returns:\n :obj:`dict`: The conversations dict after decoding\n \"\"\"\n tmp = json.loads(json_string)\n conversations = {}\n for handler, states in tmp.items():\n conversations[handler] = {}\n for key, state in states.items():\n conversations[handler][tuple(json.loads(key))] = state\n return conversations\n\n\ndef decode_user_chat_data_from_json(data):\n \"\"\"Helper method to decode chat or user data (that uses ints as keys) from a\n JSON-string.\n\n Args:\n data (:obj:`str`): The user/chat_data dict as JSON string.\n\n Returns:\n :obj:`dict`: The user/chat_data defaultdict after decoding\n \"\"\"\n\n tmp = defaultdict(dict)\n decoded_data = json.loads(data)\n for user, data in decoded_data.items():\n user = int(user)\n tmp[user] = {}\n for key, value in data.items():\n try:\n key = int(key)\n except ValueError:\n pass\n tmp[user][key] = value\n return tmp\n", "path": "telegram/utils/helpers.py"}], "after_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2018\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains helper functions.\"\"\"\nfrom collections import defaultdict\n\ntry:\n import ujson as json\nexcept ImportError:\n import json\nfrom html import escape\n\nimport re\nimport signal\nfrom datetime import datetime\n\n# From https://stackoverflow.com/questions/2549939/get-signal-names-from-numbers-in-python\n_signames = {v: k\n for k, v in reversed(sorted(vars(signal).items()))\n if k.startswith('SIG') and not k.startswith('SIG_')}\n\n\ndef get_signal_name(signum):\n \"\"\"Returns the signal name of the given signal number.\"\"\"\n return _signames[signum]\n\n\n# Not using future.backports.datetime here as datetime value might be an input from the user,\n# making every isinstace() call more delicate. 
So we just use our own compat layer.\nif hasattr(datetime, 'timestamp'):\n # Python 3.3+\n def _timestamp(dt_obj):\n return dt_obj.timestamp()\nelse:\n # Python < 3.3 (incl 2.7)\n from time import mktime\n\n def _timestamp(dt_obj):\n return mktime(dt_obj.timetuple())\n\n\ndef escape_markdown(text):\n \"\"\"Helper function to escape telegram markup symbols.\"\"\"\n escape_chars = '\\*_`\\['\n return re.sub(r'([%s])' % escape_chars, r'\\\\\\1', text)\n\n\ndef to_timestamp(dt_obj):\n \"\"\"\n Args:\n dt_obj (:class:`datetime.datetime`):\n\n Returns:\n int:\n\n \"\"\"\n if not dt_obj:\n return None\n\n return int(_timestamp(dt_obj))\n\n\ndef from_timestamp(unixtime):\n \"\"\"\n Args:\n unixtime (int):\n\n Returns:\n datetime.datetime:\n\n \"\"\"\n if not unixtime:\n return None\n\n return datetime.utcfromtimestamp(unixtime)\n\n\ndef mention_html(user_id, name):\n \"\"\"\n Args:\n user_id (:obj:`int`) The user's id which you want to mention.\n name (:obj:`str`) The name the mention is showing.\n\n Returns:\n :obj:`str`: The inline mention for the user as html.\n \"\"\"\n if isinstance(user_id, int):\n return u'<a href=\"tg://user?id={}\">{}</a>'.format(user_id, escape(name))\n\n\ndef mention_markdown(user_id, name):\n \"\"\"\n Args:\n user_id (:obj:`int`) The user's id which you want to mention.\n name (:obj:`str`) The name the mention is showing.\n\n Returns:\n :obj:`str`: The inline mention for the user as markdown.\n \"\"\"\n if isinstance(user_id, int):\n return u'[{}](tg://user?id={})'.format(escape_markdown(name), user_id)\n\n\ndef effective_message_type(entity):\n \"\"\"\n Extracts the type of message as a string identifier from a :class:`telegram.Message` or a\n :class:`telegram.Update`.\n\n Args:\n entity (:obj:`Update` | :obj:`Message`) The ``update`` or ``message`` to extract from\n\n Returns:\n str: One of ``Message.MESSAGE_TYPES``\n\n \"\"\"\n\n # Importing on file-level yields cyclic Import Errors\n from telegram import Message\n from telegram import Update\n\n if isinstance(entity, Message):\n message = entity\n elif isinstance(entity, Update):\n message = entity.effective_message\n else:\n raise TypeError(\"entity is not Message or Update (got: {})\".format(type(entity)))\n\n for i in Message.MESSAGE_TYPES:\n if getattr(message, i, None):\n return i\n\n return None\n\n\ndef enocde_conversations_to_json(conversations):\n \"\"\"Helper method to encode a conversations dict (that uses tuples as keys) to a\n JSON-serializable way. 
Use :attr:`_decode_conversations_from_json` to decode.\n\n Args:\n conversations (:obj:`dict`): The conversations dict to transofrm to JSON.\n\n Returns:\n :obj:`str`: The JSON-serialized conversations dict\n \"\"\"\n tmp = {}\n for handler, states in conversations.items():\n tmp[handler] = {}\n for key, state in states.items():\n tmp[handler][json.dumps(key)] = state\n return json.dumps(tmp)\n\n\ndef decode_conversations_from_json(json_string):\n \"\"\"Helper method to decode a conversations dict (that uses tuples as keys) from a\n JSON-string created with :attr:`_encode_conversations_to_json`.\n\n Args:\n json_string (:obj:`str`): The conversations dict as JSON string.\n\n Returns:\n :obj:`dict`: The conversations dict after decoding\n \"\"\"\n tmp = json.loads(json_string)\n conversations = {}\n for handler, states in tmp.items():\n conversations[handler] = {}\n for key, state in states.items():\n conversations[handler][tuple(json.loads(key))] = state\n return conversations\n\n\ndef decode_user_chat_data_from_json(data):\n \"\"\"Helper method to decode chat or user data (that uses ints as keys) from a\n JSON-string.\n\n Args:\n data (:obj:`str`): The user/chat_data dict as JSON string.\n\n Returns:\n :obj:`dict`: The user/chat_data defaultdict after decoding\n \"\"\"\n\n tmp = defaultdict(dict)\n decoded_data = json.loads(data)\n for user, data in decoded_data.items():\n user = int(user)\n tmp[user] = {}\n for key, value in data.items():\n try:\n key = int(key)\n except ValueError:\n pass\n tmp[user][key] = value\n return tmp\n", "path": "telegram/utils/helpers.py"}]}
| 2,308 | 84 |
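The behavioural difference behind the one-line patch above can be seen directly: `fromtimestamp` interprets the epoch value in the host's local zone, while `utcfromtimestamp` returns the naive UTC wall time the Telegram API actually uses. The timestamp below is an arbitrary illustrative value, not data from the record.

```python
from datetime import datetime

ts = 1546300800  # 2019-01-01 00:00:00 UTC, an arbitrary example value

local = datetime.fromtimestamp(ts)     # pre-patch: depends on the host's TZ
utc = datetime.utcfromtimestamp(ts)    # post-patch: UTC wall-clock time

print(local, utc)
# On a UTC machine both print 2019-01-01 00:00:00; on a UTC+05:00 machine
# `local` becomes 2019-01-01 05:00:00 while `utc` is unchanged.
```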
gh_patches_debug_4402
|
rasdani/github-patches
|
git_diff
|
aws__aws-cli-1769
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
JSON Cache for AssumeRoleProvider not truncating files
When we open a file for writing, if we're reusing the same file (same cache key) we don't truncate the file before writing. If the second JSON response is smaller it will result in extra data at the end of the JSON document.
This will trigger a json parsing error, which raises a KeyError, which causes the cred provider to retrieve a new set of temporary credentials because it thinks the file is not in the cache.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awscli/customizations/assumerole.py`
Content:
```
1 import os
2 import json
3 import logging
4
5 from botocore.exceptions import ProfileNotFound
6
7 LOG = logging.getLogger(__name__)
8
9
10 def register_assume_role_provider(event_handlers):
11 event_handlers.register('session-initialized',
12 inject_assume_role_provider_cache,
13 unique_id='inject_assume_role_cred_provider_cache')
14
15
16 def inject_assume_role_provider_cache(session, **kwargs):
17 try:
18 cred_chain = session.get_component('credential_provider')
19 except ProfileNotFound:
20 # If a user has provided a profile that does not exist,
21 # trying to retrieve components/config on the session
22 # will raise ProfileNotFound. Sometimes this is invalid:
23 #
24 # "ec2 describe-instances --profile unknown"
25 #
26 # and sometimes this is perfectly valid:
27 #
28 # "configure set region us-west-2 --profile brand-new-profile"
29 #
30 # Because we can't know (and don't want to know) whether
31 # the customer is trying to do something valid, we just
32 # immediately return. If it's invalid something else
33 # up the stack will raise ProfileNotFound, otherwise
34 # the configure (and other) commands will work as expected.
35 LOG.debug("ProfileNotFound caught when trying to inject "
36 "assume-role cred provider cache. Not configuring "
37 "JSONFileCache for assume-role.")
38 return
39 provider = cred_chain.get_provider('assume-role')
40 provider.cache = JSONFileCache()
41
42
43 class JSONFileCache(object):
44 """JSON file cache.
45
46 This provides a dict like interface that stores JSON serializable
47 objects.
48
49 The objects are serialized to JSON and stored in a file. These
50 values can be retrieved at a later time.
51
52 """
53
54 CACHE_DIR = os.path.expanduser(os.path.join('~', '.aws', 'cli', 'cache'))
55
56 def __init__(self, working_dir=CACHE_DIR):
57 self._working_dir = working_dir
58
59 def __contains__(self, cache_key):
60 actual_key = self._convert_cache_key(cache_key)
61 return os.path.isfile(actual_key)
62
63 def __getitem__(self, cache_key):
64 """Retrieve value from a cache key."""
65 actual_key = self._convert_cache_key(cache_key)
66 try:
67 with open(actual_key) as f:
68 return json.load(f)
69 except (OSError, ValueError, IOError):
70 raise KeyError(cache_key)
71
72 def __setitem__(self, cache_key, value):
73 full_key = self._convert_cache_key(cache_key)
74 try:
75 file_content = json.dumps(value)
76 except (TypeError, ValueError):
77 raise ValueError("Value cannot be cached, must be "
78 "JSON serializable: %s" % value)
79 if not os.path.isdir(self._working_dir):
80 os.makedirs(self._working_dir)
81 with os.fdopen(os.open(full_key,
82 os.O_WRONLY | os.O_CREAT, 0o600), 'w') as f:
83 f.write(file_content)
84
85 def _convert_cache_key(self, cache_key):
86 full_path = os.path.join(self._working_dir, cache_key + '.json')
87 return full_path
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/awscli/customizations/assumerole.py b/awscli/customizations/assumerole.py
--- a/awscli/customizations/assumerole.py
+++ b/awscli/customizations/assumerole.py
@@ -80,6 +80,7 @@
os.makedirs(self._working_dir)
with os.fdopen(os.open(full_key,
os.O_WRONLY | os.O_CREAT, 0o600), 'w') as f:
+ f.truncate()
f.write(file_content)
def _convert_cache_key(self, cache_key):
|
{"golden_diff": "diff --git a/awscli/customizations/assumerole.py b/awscli/customizations/assumerole.py\n--- a/awscli/customizations/assumerole.py\n+++ b/awscli/customizations/assumerole.py\n@@ -80,6 +80,7 @@\n os.makedirs(self._working_dir)\n with os.fdopen(os.open(full_key,\n os.O_WRONLY | os.O_CREAT, 0o600), 'w') as f:\n+ f.truncate()\n f.write(file_content)\n \n def _convert_cache_key(self, cache_key):\n", "issue": "JSON Cache for AssumeRoleProvider not truncating files\nWhen we open a file for writing, if we're reusing the same file (same cache key) we don't truncate the file before writing. If the second JSON response is smaller it will result in extra data at the end of the JSON document.\n\nThis will trigger a json parsing error, which raises a KeyError, which causes the cred provider to retrieve a new set of temporary credentials because it thinks the file is not in the cache.\n\n", "before_files": [{"content": "import os\nimport json\nimport logging\n\nfrom botocore.exceptions import ProfileNotFound\n\nLOG = logging.getLogger(__name__)\n\n\ndef register_assume_role_provider(event_handlers):\n event_handlers.register('session-initialized',\n inject_assume_role_provider_cache,\n unique_id='inject_assume_role_cred_provider_cache')\n\n\ndef inject_assume_role_provider_cache(session, **kwargs):\n try:\n cred_chain = session.get_component('credential_provider')\n except ProfileNotFound:\n # If a user has provided a profile that does not exist,\n # trying to retrieve components/config on the session\n # will raise ProfileNotFound. Sometimes this is invalid:\n #\n # \"ec2 describe-instances --profile unknown\"\n #\n # and sometimes this is perfectly valid:\n #\n # \"configure set region us-west-2 --profile brand-new-profile\"\n #\n # Because we can't know (and don't want to know) whether\n # the customer is trying to do something valid, we just\n # immediately return. If it's invalid something else\n # up the stack will raise ProfileNotFound, otherwise\n # the configure (and other) commands will work as expected.\n LOG.debug(\"ProfileNotFound caught when trying to inject \"\n \"assume-role cred provider cache. Not configuring \"\n \"JSONFileCache for assume-role.\")\n return\n provider = cred_chain.get_provider('assume-role')\n provider.cache = JSONFileCache()\n\n\nclass JSONFileCache(object):\n \"\"\"JSON file cache.\n\n This provides a dict like interface that stores JSON serializable\n objects.\n\n The objects are serialized to JSON and stored in a file. 
These\n values can be retrieved at a later time.\n\n \"\"\"\n\n CACHE_DIR = os.path.expanduser(os.path.join('~', '.aws', 'cli', 'cache'))\n\n def __init__(self, working_dir=CACHE_DIR):\n self._working_dir = working_dir\n\n def __contains__(self, cache_key):\n actual_key = self._convert_cache_key(cache_key)\n return os.path.isfile(actual_key)\n\n def __getitem__(self, cache_key):\n \"\"\"Retrieve value from a cache key.\"\"\"\n actual_key = self._convert_cache_key(cache_key)\n try:\n with open(actual_key) as f:\n return json.load(f)\n except (OSError, ValueError, IOError):\n raise KeyError(cache_key)\n\n def __setitem__(self, cache_key, value):\n full_key = self._convert_cache_key(cache_key)\n try:\n file_content = json.dumps(value)\n except (TypeError, ValueError):\n raise ValueError(\"Value cannot be cached, must be \"\n \"JSON serializable: %s\" % value)\n if not os.path.isdir(self._working_dir):\n os.makedirs(self._working_dir)\n with os.fdopen(os.open(full_key,\n os.O_WRONLY | os.O_CREAT, 0o600), 'w') as f:\n f.write(file_content)\n\n def _convert_cache_key(self, cache_key):\n full_path = os.path.join(self._working_dir, cache_key + '.json')\n return full_path\n", "path": "awscli/customizations/assumerole.py"}], "after_files": [{"content": "import os\nimport json\nimport logging\n\nfrom botocore.exceptions import ProfileNotFound\n\nLOG = logging.getLogger(__name__)\n\n\ndef register_assume_role_provider(event_handlers):\n event_handlers.register('session-initialized',\n inject_assume_role_provider_cache,\n unique_id='inject_assume_role_cred_provider_cache')\n\n\ndef inject_assume_role_provider_cache(session, **kwargs):\n try:\n cred_chain = session.get_component('credential_provider')\n except ProfileNotFound:\n # If a user has provided a profile that does not exist,\n # trying to retrieve components/config on the session\n # will raise ProfileNotFound. Sometimes this is invalid:\n #\n # \"ec2 describe-instances --profile unknown\"\n #\n # and sometimes this is perfectly valid:\n #\n # \"configure set region us-west-2 --profile brand-new-profile\"\n #\n # Because we can't know (and don't want to know) whether\n # the customer is trying to do something valid, we just\n # immediately return. If it's invalid something else\n # up the stack will raise ProfileNotFound, otherwise\n # the configure (and other) commands will work as expected.\n LOG.debug(\"ProfileNotFound caught when trying to inject \"\n \"assume-role cred provider cache. Not configuring \"\n \"JSONFileCache for assume-role.\")\n return\n provider = cred_chain.get_provider('assume-role')\n provider.cache = JSONFileCache()\n\n\nclass JSONFileCache(object):\n \"\"\"JSON file cache.\n\n This provides a dict like interface that stores JSON serializable\n objects.\n\n The objects are serialized to JSON and stored in a file. 
These\n values can be retrieved at a later time.\n\n \"\"\"\n\n CACHE_DIR = os.path.expanduser(os.path.join('~', '.aws', 'cli', 'cache'))\n\n def __init__(self, working_dir=CACHE_DIR):\n self._working_dir = working_dir\n\n def __contains__(self, cache_key):\n actual_key = self._convert_cache_key(cache_key)\n return os.path.isfile(actual_key)\n\n def __getitem__(self, cache_key):\n \"\"\"Retrieve value from a cache key.\"\"\"\n actual_key = self._convert_cache_key(cache_key)\n try:\n with open(actual_key) as f:\n return json.load(f)\n except (OSError, ValueError, IOError):\n raise KeyError(cache_key)\n\n def __setitem__(self, cache_key, value):\n full_key = self._convert_cache_key(cache_key)\n try:\n file_content = json.dumps(value)\n except (TypeError, ValueError):\n raise ValueError(\"Value cannot be cached, must be \"\n \"JSON serializable: %s\" % value)\n if not os.path.isdir(self._working_dir):\n os.makedirs(self._working_dir)\n with os.fdopen(os.open(full_key,\n os.O_WRONLY | os.O_CREAT, 0o600), 'w') as f:\n f.truncate()\n f.write(file_content)\n\n def _convert_cache_key(self, cache_key):\n full_path = os.path.join(self._working_dir, cache_key + '.json')\n return full_path\n", "path": "awscli/customizations/assumerole.py"}]}
| 1,201 | 124 |
gh_patches_debug_11538
|
rasdani/github-patches
|
git_diff
|
Pylons__pyramid-2279
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pyobject truncates code at comment
See https://github.com/sphinx-doc/sphinx/issues/2253
Example rendered docs:
http://docs.pylonsproject.org/projects/pyramid/en/latest/quick_tour.html#handling-web-requests-and-responses
rst syntax:
https://github.com/Pylons/pyramid/blame/master/docs/quick_tour.rst#L119-L120
Source code:
https://github.com/Pylons/pyramid/blob/master/docs/quick_tour/requests/app.py#L7
When the bug is fixed and released, we will need to:
- revert the source code sample to use `#` style comments
- bump up the Sphinx version
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 ##############################################################################
2 #
3 # Copyright (c) 2008-2013 Agendaless Consulting and Contributors.
4 # All Rights Reserved.
5 #
6 # This software is subject to the provisions of the BSD-like license at
7 # http://www.repoze.org/LICENSE.txt. A copy of the license should accompany
8 # this distribution. THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL
9 # EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,
10 # THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND
11 # FITNESS FOR A PARTICULAR PURPOSE
12 #
13 ##############################################################################
14
15 import os
16 import sys
17
18 from setuptools import setup, find_packages
19
20 py_version = sys.version_info[:2]
21
22 PY3 = py_version[0] == 3
23
24 if PY3:
25 if py_version < (3, 2):
26 raise RuntimeError('On Python 3, Pyramid requires Python 3.2 or better')
27 else:
28 if py_version < (2, 6):
29 raise RuntimeError('On Python 2, Pyramid requires Python 2.6 or better')
30
31 here = os.path.abspath(os.path.dirname(__file__))
32 try:
33 with open(os.path.join(here, 'README.rst')) as f:
34 README = f.read()
35 with open(os.path.join(here, 'CHANGES.txt')) as f:
36 CHANGES = f.read()
37 except IOError:
38 README = CHANGES = ''
39
40 install_requires=[
41 'setuptools',
42 'WebOb >= 1.3.1', # request.domain and CookieProfile
43 'repoze.lru >= 0.4', # py3 compat
44 'zope.interface >= 3.8.0', # has zope.interface.registry
45 'zope.deprecation >= 3.5.0', # py3 compat
46 'venusian >= 1.0a3', # ``ignore``
47 'translationstring >= 0.4', # py3 compat
48 'PasteDeploy >= 1.5.0', # py3 compat
49 ]
50
51 tests_require = [
52 'WebTest >= 1.3.1', # py3 compat
53 ]
54
55 if not PY3:
56 tests_require.append('zope.component>=3.11.0')
57
58 docs_extras = [
59 'Sphinx >= 1.3.4',
60 'docutils',
61 'repoze.sphinx.autointerface',
62 'pylons_sphinx_latesturl',
63 'pylons-sphinx-themes',
64 'sphinxcontrib-programoutput',
65 ]
66
67 testing_extras = tests_require + [
68 'nose',
69 'coverage',
70 'virtualenv', # for scaffolding tests
71 ]
72
73 setup(name='pyramid',
74 version='1.5.8',
75 description='The Pyramid Web Framework, a Pylons project',
76 long_description=README + '\n\n' + CHANGES,
77 classifiers=[
78 "Intended Audience :: Developers",
79 "Programming Language :: Python",
80 "Programming Language :: Python :: 2.6",
81 "Programming Language :: Python :: 2.7",
82 "Programming Language :: Python :: 3",
83 "Programming Language :: Python :: 3.2",
84 "Programming Language :: Python :: 3.3",
85 "Programming Language :: Python :: 3.4",
86 "Programming Language :: Python :: 3.5",
87 "Programming Language :: Python :: Implementation :: CPython",
88 "Programming Language :: Python :: Implementation :: PyPy",
89 "Framework :: Pyramid",
90 "Topic :: Internet :: WWW/HTTP",
91 "Topic :: Internet :: WWW/HTTP :: WSGI",
92 "License :: Repoze Public License",
93 ],
94 keywords='web wsgi pylons pyramid',
95 author="Chris McDonough, Agendaless Consulting",
96 author_email="[email protected]",
97 url="http://docs.pylonsproject.org/en/latest/docs/pyramid.html",
98 license="BSD-derived (http://www.repoze.org/LICENSE.txt)",
99 packages=find_packages(),
100 include_package_data=True,
101 zip_safe=False,
102 install_requires = install_requires,
103 extras_require = {
104 'testing':testing_extras,
105 'docs':docs_extras,
106 },
107 tests_require = tests_require,
108 test_suite="pyramid.tests",
109 entry_points = """\
110 [pyramid.scaffold]
111 starter=pyramid.scaffolds:StarterProjectTemplate
112 zodb=pyramid.scaffolds:ZODBProjectTemplate
113 alchemy=pyramid.scaffolds:AlchemyProjectTemplate
114 [console_scripts]
115 pcreate = pyramid.scripts.pcreate:main
116 pserve = pyramid.scripts.pserve:main
117 pshell = pyramid.scripts.pshell:main
118 proutes = pyramid.scripts.proutes:main
119 pviews = pyramid.scripts.pviews:main
120 ptweens = pyramid.scripts.ptweens:main
121 prequest = pyramid.scripts.prequest:main
122 pdistreport = pyramid.scripts.pdistreport:main
123 [paste.server_runner]
124 wsgiref = pyramid.scripts.pserve:wsgiref_server_runner
125 cherrypy = pyramid.scripts.pserve:cherrypy_server_runner
126 """
127 )
128
129
```
Path: `docs/quick_tour/requests/app.py`
Content:
```
1 from wsgiref.simple_server import make_server
2 from pyramid.config import Configurator
3 from pyramid.response import Response
4
5
6 def hello_world(request):
7 """ Some parameters from a request such as /?name=lisa """
8 url = request.url
9 name = request.params.get('name', 'No Name Provided')
10
11 body = 'URL %s with name: %s' % (url, name)
12 return Response(
13 content_type="text/plain",
14 body=body
15 )
16
17
18 if __name__ == '__main__':
19 config = Configurator()
20 config.add_route('hello', '/')
21 config.add_view(hello_world, route_name='hello')
22 app = config.make_wsgi_app()
23 server = make_server('0.0.0.0', 6543, app)
24 server.serve_forever()
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/quick_tour/requests/app.py b/docs/quick_tour/requests/app.py
--- a/docs/quick_tour/requests/app.py
+++ b/docs/quick_tour/requests/app.py
@@ -4,7 +4,7 @@
def hello_world(request):
- """ Some parameters from a request such as /?name=lisa """
+ # Some parameters from a request such as /?name=lisa
url = request.url
name = request.params.get('name', 'No Name Provided')
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -56,7 +56,7 @@
tests_require.append('zope.component>=3.11.0')
docs_extras = [
- 'Sphinx >= 1.3.4',
+ 'Sphinx >= 1.3.5',
'docutils',
'repoze.sphinx.autointerface',
'pylons_sphinx_latesturl',
|
{"golden_diff": "diff --git a/docs/quick_tour/requests/app.py b/docs/quick_tour/requests/app.py\n--- a/docs/quick_tour/requests/app.py\n+++ b/docs/quick_tour/requests/app.py\n@@ -4,7 +4,7 @@\n \n \n def hello_world(request):\n- \"\"\" Some parameters from a request such as /?name=lisa \"\"\"\n+ # Some parameters from a request such as /?name=lisa\n url = request.url\n name = request.params.get('name', 'No Name Provided')\n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -56,7 +56,7 @@\n tests_require.append('zope.component>=3.11.0')\n \n docs_extras = [\n- 'Sphinx >= 1.3.4',\n+ 'Sphinx >= 1.3.5',\n 'docutils',\n 'repoze.sphinx.autointerface',\n 'pylons_sphinx_latesturl',\n", "issue": "pyobject truncates code at comment\nSee https://github.com/sphinx-doc/sphinx/issues/2253\n\nExample rendered docs:\nhttp://docs.pylonsproject.org/projects/pyramid/en/latest/quick_tour.html#handling-web-requests-and-responses\n\nrst syntax:\nhttps://github.com/Pylons/pyramid/blame/master/docs/quick_tour.rst#L119-L120\n\nSource code:\nhttps://github.com/Pylons/pyramid/blob/master/docs/quick_tour/requests/app.py#L7\n\nWhen the bug is fixed and released, we will need to:\n- revert the source code sample to use `#` style comments\n- bump up the Sphinx version\n\n", "before_files": [{"content": "##############################################################################\n#\n# Copyright (c) 2008-2013 Agendaless Consulting and Contributors.\n# All Rights Reserved.\n#\n# This software is subject to the provisions of the BSD-like license at\n# http://www.repoze.org/LICENSE.txt. A copy of the license should accompany\n# this distribution. THIS SOFTWARE IS PROVIDED \"AS IS\" AND ANY AND ALL\n# EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,\n# THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND\n# FITNESS FOR A PARTICULAR PURPOSE\n#\n##############################################################################\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\npy_version = sys.version_info[:2]\n\nPY3 = py_version[0] == 3\n\nif PY3:\n if py_version < (3, 2):\n raise RuntimeError('On Python 3, Pyramid requires Python 3.2 or better')\nelse:\n if py_version < (2, 6):\n raise RuntimeError('On Python 2, Pyramid requires Python 2.6 or better')\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n with open(os.path.join(here, 'README.rst')) as f:\n README = f.read()\n with open(os.path.join(here, 'CHANGES.txt')) as f:\n CHANGES = f.read()\nexcept IOError:\n README = CHANGES = ''\n\ninstall_requires=[\n 'setuptools',\n 'WebOb >= 1.3.1', # request.domain and CookieProfile\n 'repoze.lru >= 0.4', # py3 compat\n 'zope.interface >= 3.8.0', # has zope.interface.registry\n 'zope.deprecation >= 3.5.0', # py3 compat\n 'venusian >= 1.0a3', # ``ignore``\n 'translationstring >= 0.4', # py3 compat\n 'PasteDeploy >= 1.5.0', # py3 compat\n ]\n\ntests_require = [\n 'WebTest >= 1.3.1', # py3 compat\n ]\n\nif not PY3:\n tests_require.append('zope.component>=3.11.0')\n\ndocs_extras = [\n 'Sphinx >= 1.3.4',\n 'docutils',\n 'repoze.sphinx.autointerface',\n 'pylons_sphinx_latesturl',\n 'pylons-sphinx-themes',\n 'sphinxcontrib-programoutput',\n ]\n\ntesting_extras = tests_require + [\n 'nose',\n 'coverage',\n 'virtualenv', # for scaffolding tests\n ]\n\nsetup(name='pyramid',\n version='1.5.8',\n description='The Pyramid Web Framework, a Pylons project',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Intended Audience :: 
Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.2\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI\",\n \"License :: Repoze Public License\",\n ],\n keywords='web wsgi pylons pyramid',\n author=\"Chris McDonough, Agendaless Consulting\",\n author_email=\"[email protected]\",\n url=\"http://docs.pylonsproject.org/en/latest/docs/pyramid.html\",\n license=\"BSD-derived (http://www.repoze.org/LICENSE.txt)\",\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires = install_requires,\n extras_require = {\n 'testing':testing_extras,\n 'docs':docs_extras,\n },\n tests_require = tests_require,\n test_suite=\"pyramid.tests\",\n entry_points = \"\"\"\\\n [pyramid.scaffold]\n starter=pyramid.scaffolds:StarterProjectTemplate\n zodb=pyramid.scaffolds:ZODBProjectTemplate\n alchemy=pyramid.scaffolds:AlchemyProjectTemplate\n [console_scripts]\n pcreate = pyramid.scripts.pcreate:main\n pserve = pyramid.scripts.pserve:main\n pshell = pyramid.scripts.pshell:main\n proutes = pyramid.scripts.proutes:main\n pviews = pyramid.scripts.pviews:main\n ptweens = pyramid.scripts.ptweens:main\n prequest = pyramid.scripts.prequest:main\n pdistreport = pyramid.scripts.pdistreport:main\n [paste.server_runner]\n wsgiref = pyramid.scripts.pserve:wsgiref_server_runner\n cherrypy = pyramid.scripts.pserve:cherrypy_server_runner\n \"\"\"\n )\n\n", "path": "setup.py"}, {"content": "from wsgiref.simple_server import make_server\nfrom pyramid.config import Configurator\nfrom pyramid.response import Response\n\n\ndef hello_world(request):\n \"\"\" Some parameters from a request such as /?name=lisa \"\"\"\n url = request.url\n name = request.params.get('name', 'No Name Provided')\n\n body = 'URL %s with name: %s' % (url, name)\n return Response(\n content_type=\"text/plain\",\n body=body\n )\n\n\nif __name__ == '__main__':\n config = Configurator()\n config.add_route('hello', '/')\n config.add_view(hello_world, route_name='hello')\n app = config.make_wsgi_app()\n server = make_server('0.0.0.0', 6543, app)\n server.serve_forever()\n", "path": "docs/quick_tour/requests/app.py"}], "after_files": [{"content": "##############################################################################\n#\n# Copyright (c) 2008-2013 Agendaless Consulting and Contributors.\n# All Rights Reserved.\n#\n# This software is subject to the provisions of the BSD-like license at\n# http://www.repoze.org/LICENSE.txt. A copy of the license should accompany\n# this distribution. 
THIS SOFTWARE IS PROVIDED \"AS IS\" AND ANY AND ALL\n# EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO,\n# THE IMPLIED WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND\n# FITNESS FOR A PARTICULAR PURPOSE\n#\n##############################################################################\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\npy_version = sys.version_info[:2]\n\nPY3 = py_version[0] == 3\n\nif PY3:\n if py_version < (3, 2):\n raise RuntimeError('On Python 3, Pyramid requires Python 3.2 or better')\nelse:\n if py_version < (2, 6):\n raise RuntimeError('On Python 2, Pyramid requires Python 2.6 or better')\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n with open(os.path.join(here, 'README.rst')) as f:\n README = f.read()\n with open(os.path.join(here, 'CHANGES.txt')) as f:\n CHANGES = f.read()\nexcept IOError:\n README = CHANGES = ''\n\ninstall_requires=[\n 'setuptools',\n 'WebOb >= 1.3.1', # request.domain and CookieProfile\n 'repoze.lru >= 0.4', # py3 compat\n 'zope.interface >= 3.8.0', # has zope.interface.registry\n 'zope.deprecation >= 3.5.0', # py3 compat\n 'venusian >= 1.0a3', # ``ignore``\n 'translationstring >= 0.4', # py3 compat\n 'PasteDeploy >= 1.5.0', # py3 compat\n ]\n\ntests_require = [\n 'WebTest >= 1.3.1', # py3 compat\n ]\n\nif not PY3:\n tests_require.append('zope.component>=3.11.0')\n\ndocs_extras = [\n 'Sphinx >= 1.3.5',\n 'docutils',\n 'repoze.sphinx.autointerface',\n 'pylons_sphinx_latesturl',\n 'pylons-sphinx-themes',\n 'sphinxcontrib-programoutput',\n ]\n\ntesting_extras = tests_require + [\n 'nose',\n 'coverage',\n 'virtualenv', # for scaffolding tests\n ]\n\nsetup(name='pyramid',\n version='1.5.8',\n description='The Pyramid Web Framework, a Pylons project',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.2\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI\",\n \"License :: Repoze Public License\",\n ],\n keywords='web wsgi pylons pyramid',\n author=\"Chris McDonough, Agendaless Consulting\",\n author_email=\"[email protected]\",\n url=\"http://docs.pylonsproject.org/en/latest/docs/pyramid.html\",\n license=\"BSD-derived (http://www.repoze.org/LICENSE.txt)\",\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires = install_requires,\n extras_require = {\n 'testing':testing_extras,\n 'docs':docs_extras,\n },\n tests_require = tests_require,\n test_suite=\"pyramid.tests\",\n entry_points = \"\"\"\\\n [pyramid.scaffold]\n starter=pyramid.scaffolds:StarterProjectTemplate\n zodb=pyramid.scaffolds:ZODBProjectTemplate\n alchemy=pyramid.scaffolds:AlchemyProjectTemplate\n [console_scripts]\n pcreate = pyramid.scripts.pcreate:main\n pserve = pyramid.scripts.pserve:main\n pshell = pyramid.scripts.pshell:main\n proutes = pyramid.scripts.proutes:main\n pviews = pyramid.scripts.pviews:main\n ptweens = pyramid.scripts.ptweens:main\n prequest = 
pyramid.scripts.prequest:main\n pdistreport = pyramid.scripts.pdistreport:main\n [paste.server_runner]\n wsgiref = pyramid.scripts.pserve:wsgiref_server_runner\n cherrypy = pyramid.scripts.pserve:cherrypy_server_runner\n \"\"\"\n )\n\n", "path": "setup.py"}, {"content": "from wsgiref.simple_server import make_server\nfrom pyramid.config import Configurator\nfrom pyramid.response import Response\n\n\ndef hello_world(request):\n # Some parameters from a request such as /?name=lisa\n url = request.url\n name = request.params.get('name', 'No Name Provided')\n\n body = 'URL %s with name: %s' % (url, name)\n return Response(\n content_type=\"text/plain\",\n body=body\n )\n\n\nif __name__ == '__main__':\n config = Configurator()\n config.add_route('hello', '/')\n config.add_view(hello_world, route_name='hello')\n app = config.make_wsgi_app()\n server = make_server('0.0.0.0', 6543, app)\n server.serve_forever()\n", "path": "docs/quick_tour/requests/app.py"}]}
| 2,048 | 223 |
gh_patches_debug_29757
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-1646
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ChallengeList page should use pagination
This takes a while to load with many challenges and the Automated Evaluation boolean no longer works.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/challenges/views.py`
Content:
```
1 from django.contrib import messages
2 from django.contrib.messages.views import SuccessMessageMixin
3 from django.core.paginator import EmptyPage, Paginator
4 from django.db.models import Q
5 from django.utils.html import format_html
6 from django.views.generic import (
7 CreateView,
8 DeleteView,
9 ListView,
10 TemplateView,
11 UpdateView,
12 )
13
14 from grandchallenge.challenges.filters import ChallengeFilter
15 from grandchallenge.challenges.forms import (
16 ChallengeCreateForm,
17 ChallengeUpdateForm,
18 ExternalChallengeUpdateForm,
19 )
20 from grandchallenge.challenges.models import (
21 Challenge,
22 ExternalChallenge,
23 )
24 from grandchallenge.core.permissions.mixins import (
25 UserIsChallengeAdminMixin,
26 UserIsNotAnonMixin,
27 UserIsStaffMixin,
28 )
29 from grandchallenge.core.templatetags.random_encode import random_encode
30 from grandchallenge.subdomains.mixins import ChallengeSubdomainObjectMixin
31 from grandchallenge.subdomains.utils import reverse
32
33
34 class ChallengeCreate(UserIsNotAnonMixin, SuccessMessageMixin, CreateView):
35 model = Challenge
36 form_class = ChallengeCreateForm
37 success_message = "Challenge successfully created"
38
39 def form_valid(self, form):
40 form.instance.creator = self.request.user
41 return super().form_valid(form)
42
43
44 class ChallengeList(TemplateView):
45 paginate_by = 40
46 template_name = "challenges/challenge_list.html"
47
48 @property
49 def _current_page(self):
50 return int(self.request.GET.get("page", 1))
51
52 @property
53 def _filters_applied(self):
54 return any(k for k in self.request.GET if k.lower() != "page")
55
56 def _get_page(self):
57 self.int_filter = ChallengeFilter(
58 self.request.GET,
59 Challenge.objects.filter(hidden=False)
60 .prefetch_related("phase_set", "publications")
61 .order_by("-created"),
62 )
63 self.ext_filter = ChallengeFilter(
64 self.request.GET,
65 ExternalChallenge.objects.filter(hidden=False)
66 .prefetch_related("publications")
67 .order_by("-created"),
68 )
69
70 int_paginator = Paginator(self.int_filter.qs, self.paginate_by // 2)
71 ext_paginator = Paginator(self.ext_filter.qs, self.paginate_by // 2)
72
73 num_pages = max(int_paginator.num_pages, ext_paginator.num_pages)
74 num_results = int_paginator.count + ext_paginator.count
75
76 try:
77 int_page = int_paginator.page(self._current_page)
78 except EmptyPage:
79 int_page = []
80
81 try:
82 ext_page = ext_paginator.page(self._current_page)
83 except EmptyPage:
84 ext_page = []
85
86 return [*int_page, *ext_page], num_pages, num_results
87
88 def get_context_data(self, *, object_list=None, **kwargs):
89 context = super().get_context_data(**kwargs)
90
91 page_obj, num_pages, num_results = self._get_page()
92
93 context.update(
94 {
95 "int_filter": self.int_filter,
96 "filters_applied": self._filters_applied,
97 "page_obj": page_obj,
98 "num_pages": num_pages,
99 "num_results": num_results,
100 "current_page": self._current_page,
101 "next_page": self._current_page + 1,
102 "previous_page": self._current_page - 1,
103 "jumbotron_title": "Challenges",
104 "jumbotron_description": format_html(
105 (
106 "Here is an overview of all challenges that have been "
107 "organised within the area of medical image analysis "
108 "that we are aware of. Please <a href='{}'>contact "
109 "us</a> if you want to advertise your challenge or "
110 "know of any study that would fit in this overview."
111 ),
112 random_encode("mailto:[email protected]"),
113 ),
114 }
115 )
116
117 return context
118
119
120 class UsersChallengeList(UserIsNotAnonMixin, ListView):
121 model = Challenge
122 template_name = "challenges/challenge_users_list.html"
123
124 def get_queryset(self):
125 queryset = super().get_queryset()
126 if not self.request.user.is_superuser:
127 queryset = queryset.filter(
128 Q(participants_group__in=self.request.user.groups.all())
129 | Q(admins_group__in=self.request.user.groups.all())
130 )
131 return queryset
132
133
134 class ChallengeUpdate(
135 UserIsChallengeAdminMixin,
136 SuccessMessageMixin,
137 ChallengeSubdomainObjectMixin,
138 UpdateView,
139 ):
140 model = Challenge
141 slug_field = "short_name__iexact"
142 slug_url_kwarg = "challenge_short_name"
143 form_class = ChallengeUpdateForm
144 success_message = "Challenge successfully updated"
145 template_name_suffix = "_update"
146
147
148 class ExternalChallengeCreate(
149 UserIsStaffMixin, SuccessMessageMixin, CreateView
150 ):
151 model = ExternalChallenge
152 form_class = ExternalChallengeUpdateForm
153 success_message = (
154 "Your challenge has been successfully submitted. "
155 "An admin will review your challenge before it is published."
156 )
157
158 def form_valid(self, form):
159 form.instance.creator = self.request.user
160 return super().form_valid(form)
161
162 def get_success_url(self):
163 return reverse("challenges:list")
164
165
166 class ExternalChallengeUpdate(
167 UserIsStaffMixin, SuccessMessageMixin, UpdateView
168 ):
169 model = ExternalChallenge
170 slug_field = "short_name__iexact"
171 slug_url_kwarg = "short_name"
172 form_class = ExternalChallengeUpdateForm
173 template_name_suffix = "_update"
174 success_message = "Challenge updated"
175
176 def get_success_url(self):
177 return reverse("challenges:list")
178
179
180 class ExternalChallengeList(UserIsStaffMixin, ListView):
181 model = ExternalChallenge
182
183
184 class ExternalChallengeDelete(UserIsStaffMixin, DeleteView):
185 model = ExternalChallenge
186 slug_field = "short_name__iexact"
187 slug_url_kwarg = "short_name"
188 success_message = "External challenge was successfully deleted"
189
190 def get_success_url(self):
191 return reverse("challenges:external-list")
192
193 def delete(self, request, *args, **kwargs):
194 messages.success(self.request, self.success_message)
195 return super().delete(request, *args, **kwargs)
196
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/grandchallenge/challenges/views.py b/app/grandchallenge/challenges/views.py
--- a/app/grandchallenge/challenges/views.py
+++ b/app/grandchallenge/challenges/views.py
@@ -27,6 +27,7 @@
UserIsStaffMixin,
)
from grandchallenge.core.templatetags.random_encode import random_encode
+from grandchallenge.datatables.views import Column, PaginatedTableListView
from grandchallenge.subdomains.mixins import ChallengeSubdomainObjectMixin
from grandchallenge.subdomains.utils import reverse
@@ -117,12 +118,33 @@
return context
-class UsersChallengeList(UserIsNotAnonMixin, ListView):
+class UsersChallengeList(UserIsNotAnonMixin, PaginatedTableListView):
model = Challenge
template_name = "challenges/challenge_users_list.html"
+ row_template = "challenges/challenge_users_row.html"
+ search_fields = [
+ "title",
+ "short_name",
+ "description",
+ ]
+ columns = [
+ Column(title="Name", sort_field="short_name"),
+ Column(title="Created", sort_field="created"),
+ Column(title="Admins", sort_field="created"),
+ Column(title="Description", sort_field="description"),
+ Column(title="Automated Evaluation", sort_field="use_evaluation"),
+ ]
+ default_sort_column = 1
def get_queryset(self):
- queryset = super().get_queryset()
+ queryset = (
+ super()
+ .get_queryset()
+ .prefetch_related(
+ "admins_group__user_set__user_profile",
+ "admins_group__user_set__verification",
+ )
+ )
if not self.request.user.is_superuser:
queryset = queryset.filter(
Q(participants_group__in=self.request.user.groups.all())
|
{"golden_diff": "diff --git a/app/grandchallenge/challenges/views.py b/app/grandchallenge/challenges/views.py\n--- a/app/grandchallenge/challenges/views.py\n+++ b/app/grandchallenge/challenges/views.py\n@@ -27,6 +27,7 @@\n UserIsStaffMixin,\n )\n from grandchallenge.core.templatetags.random_encode import random_encode\n+from grandchallenge.datatables.views import Column, PaginatedTableListView\n from grandchallenge.subdomains.mixins import ChallengeSubdomainObjectMixin\n from grandchallenge.subdomains.utils import reverse\n \n@@ -117,12 +118,33 @@\n return context\n \n \n-class UsersChallengeList(UserIsNotAnonMixin, ListView):\n+class UsersChallengeList(UserIsNotAnonMixin, PaginatedTableListView):\n model = Challenge\n template_name = \"challenges/challenge_users_list.html\"\n+ row_template = \"challenges/challenge_users_row.html\"\n+ search_fields = [\n+ \"title\",\n+ \"short_name\",\n+ \"description\",\n+ ]\n+ columns = [\n+ Column(title=\"Name\", sort_field=\"short_name\"),\n+ Column(title=\"Created\", sort_field=\"created\"),\n+ Column(title=\"Admins\", sort_field=\"created\"),\n+ Column(title=\"Description\", sort_field=\"description\"),\n+ Column(title=\"Automated Evaluation\", sort_field=\"use_evaluation\"),\n+ ]\n+ default_sort_column = 1\n \n def get_queryset(self):\n- queryset = super().get_queryset()\n+ queryset = (\n+ super()\n+ .get_queryset()\n+ .prefetch_related(\n+ \"admins_group__user_set__user_profile\",\n+ \"admins_group__user_set__verification\",\n+ )\n+ )\n if not self.request.user.is_superuser:\n queryset = queryset.filter(\n Q(participants_group__in=self.request.user.groups.all())\n", "issue": "ChallengeList page should use pagination\nThis takes a while to load with many challenges and the Automated Evaluation boolean no longer works.\r\n\n", "before_files": [{"content": "from django.contrib import messages\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.core.paginator import EmptyPage, Paginator\nfrom django.db.models import Q\nfrom django.utils.html import format_html\nfrom django.views.generic import (\n CreateView,\n DeleteView,\n ListView,\n TemplateView,\n UpdateView,\n)\n\nfrom grandchallenge.challenges.filters import ChallengeFilter\nfrom grandchallenge.challenges.forms import (\n ChallengeCreateForm,\n ChallengeUpdateForm,\n ExternalChallengeUpdateForm,\n)\nfrom grandchallenge.challenges.models import (\n Challenge,\n ExternalChallenge,\n)\nfrom grandchallenge.core.permissions.mixins import (\n UserIsChallengeAdminMixin,\n UserIsNotAnonMixin,\n UserIsStaffMixin,\n)\nfrom grandchallenge.core.templatetags.random_encode import random_encode\nfrom grandchallenge.subdomains.mixins import ChallengeSubdomainObjectMixin\nfrom grandchallenge.subdomains.utils import reverse\n\n\nclass ChallengeCreate(UserIsNotAnonMixin, SuccessMessageMixin, CreateView):\n model = Challenge\n form_class = ChallengeCreateForm\n success_message = \"Challenge successfully created\"\n\n def form_valid(self, form):\n form.instance.creator = self.request.user\n return super().form_valid(form)\n\n\nclass ChallengeList(TemplateView):\n paginate_by = 40\n template_name = \"challenges/challenge_list.html\"\n\n @property\n def _current_page(self):\n return int(self.request.GET.get(\"page\", 1))\n\n @property\n def _filters_applied(self):\n return any(k for k in self.request.GET if k.lower() != \"page\")\n\n def _get_page(self):\n self.int_filter = ChallengeFilter(\n self.request.GET,\n Challenge.objects.filter(hidden=False)\n .prefetch_related(\"phase_set\", 
\"publications\")\n .order_by(\"-created\"),\n )\n self.ext_filter = ChallengeFilter(\n self.request.GET,\n ExternalChallenge.objects.filter(hidden=False)\n .prefetch_related(\"publications\")\n .order_by(\"-created\"),\n )\n\n int_paginator = Paginator(self.int_filter.qs, self.paginate_by // 2)\n ext_paginator = Paginator(self.ext_filter.qs, self.paginate_by // 2)\n\n num_pages = max(int_paginator.num_pages, ext_paginator.num_pages)\n num_results = int_paginator.count + ext_paginator.count\n\n try:\n int_page = int_paginator.page(self._current_page)\n except EmptyPage:\n int_page = []\n\n try:\n ext_page = ext_paginator.page(self._current_page)\n except EmptyPage:\n ext_page = []\n\n return [*int_page, *ext_page], num_pages, num_results\n\n def get_context_data(self, *, object_list=None, **kwargs):\n context = super().get_context_data(**kwargs)\n\n page_obj, num_pages, num_results = self._get_page()\n\n context.update(\n {\n \"int_filter\": self.int_filter,\n \"filters_applied\": self._filters_applied,\n \"page_obj\": page_obj,\n \"num_pages\": num_pages,\n \"num_results\": num_results,\n \"current_page\": self._current_page,\n \"next_page\": self._current_page + 1,\n \"previous_page\": self._current_page - 1,\n \"jumbotron_title\": \"Challenges\",\n \"jumbotron_description\": format_html(\n (\n \"Here is an overview of all challenges that have been \"\n \"organised within the area of medical image analysis \"\n \"that we are aware of. Please <a href='{}'>contact \"\n \"us</a> if you want to advertise your challenge or \"\n \"know of any study that would fit in this overview.\"\n ),\n random_encode(\"mailto:[email protected]\"),\n ),\n }\n )\n\n return context\n\n\nclass UsersChallengeList(UserIsNotAnonMixin, ListView):\n model = Challenge\n template_name = \"challenges/challenge_users_list.html\"\n\n def get_queryset(self):\n queryset = super().get_queryset()\n if not self.request.user.is_superuser:\n queryset = queryset.filter(\n Q(participants_group__in=self.request.user.groups.all())\n | Q(admins_group__in=self.request.user.groups.all())\n )\n return queryset\n\n\nclass ChallengeUpdate(\n UserIsChallengeAdminMixin,\n SuccessMessageMixin,\n ChallengeSubdomainObjectMixin,\n UpdateView,\n):\n model = Challenge\n slug_field = \"short_name__iexact\"\n slug_url_kwarg = \"challenge_short_name\"\n form_class = ChallengeUpdateForm\n success_message = \"Challenge successfully updated\"\n template_name_suffix = \"_update\"\n\n\nclass ExternalChallengeCreate(\n UserIsStaffMixin, SuccessMessageMixin, CreateView\n):\n model = ExternalChallenge\n form_class = ExternalChallengeUpdateForm\n success_message = (\n \"Your challenge has been successfully submitted. 
\"\n \"An admin will review your challenge before it is published.\"\n )\n\n def form_valid(self, form):\n form.instance.creator = self.request.user\n return super().form_valid(form)\n\n def get_success_url(self):\n return reverse(\"challenges:list\")\n\n\nclass ExternalChallengeUpdate(\n UserIsStaffMixin, SuccessMessageMixin, UpdateView\n):\n model = ExternalChallenge\n slug_field = \"short_name__iexact\"\n slug_url_kwarg = \"short_name\"\n form_class = ExternalChallengeUpdateForm\n template_name_suffix = \"_update\"\n success_message = \"Challenge updated\"\n\n def get_success_url(self):\n return reverse(\"challenges:list\")\n\n\nclass ExternalChallengeList(UserIsStaffMixin, ListView):\n model = ExternalChallenge\n\n\nclass ExternalChallengeDelete(UserIsStaffMixin, DeleteView):\n model = ExternalChallenge\n slug_field = \"short_name__iexact\"\n slug_url_kwarg = \"short_name\"\n success_message = \"External challenge was successfully deleted\"\n\n def get_success_url(self):\n return reverse(\"challenges:external-list\")\n\n def delete(self, request, *args, **kwargs):\n messages.success(self.request, self.success_message)\n return super().delete(request, *args, **kwargs)\n", "path": "app/grandchallenge/challenges/views.py"}], "after_files": [{"content": "from django.contrib import messages\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.core.paginator import EmptyPage, Paginator\nfrom django.db.models import Q\nfrom django.utils.html import format_html\nfrom django.views.generic import (\n CreateView,\n DeleteView,\n ListView,\n TemplateView,\n UpdateView,\n)\n\nfrom grandchallenge.challenges.filters import ChallengeFilter\nfrom grandchallenge.challenges.forms import (\n ChallengeCreateForm,\n ChallengeUpdateForm,\n ExternalChallengeUpdateForm,\n)\nfrom grandchallenge.challenges.models import (\n Challenge,\n ExternalChallenge,\n)\nfrom grandchallenge.core.permissions.mixins import (\n UserIsChallengeAdminMixin,\n UserIsNotAnonMixin,\n UserIsStaffMixin,\n)\nfrom grandchallenge.core.templatetags.random_encode import random_encode\nfrom grandchallenge.datatables.views import Column, PaginatedTableListView\nfrom grandchallenge.subdomains.mixins import ChallengeSubdomainObjectMixin\nfrom grandchallenge.subdomains.utils import reverse\n\n\nclass ChallengeCreate(UserIsNotAnonMixin, SuccessMessageMixin, CreateView):\n model = Challenge\n form_class = ChallengeCreateForm\n success_message = \"Challenge successfully created\"\n\n def form_valid(self, form):\n form.instance.creator = self.request.user\n return super().form_valid(form)\n\n\nclass ChallengeList(TemplateView):\n paginate_by = 40\n template_name = \"challenges/challenge_list.html\"\n\n @property\n def _current_page(self):\n return int(self.request.GET.get(\"page\", 1))\n\n @property\n def _filters_applied(self):\n return any(k for k in self.request.GET if k.lower() != \"page\")\n\n def _get_page(self):\n self.int_filter = ChallengeFilter(\n self.request.GET,\n Challenge.objects.filter(hidden=False)\n .prefetch_related(\"phase_set\", \"publications\")\n .order_by(\"-created\"),\n )\n self.ext_filter = ChallengeFilter(\n self.request.GET,\n ExternalChallenge.objects.filter(hidden=False)\n .prefetch_related(\"publications\")\n .order_by(\"-created\"),\n )\n\n int_paginator = Paginator(self.int_filter.qs, self.paginate_by // 2)\n ext_paginator = Paginator(self.ext_filter.qs, self.paginate_by // 2)\n\n num_pages = max(int_paginator.num_pages, ext_paginator.num_pages)\n num_results = int_paginator.count + 
ext_paginator.count\n\n try:\n int_page = int_paginator.page(self._current_page)\n except EmptyPage:\n int_page = []\n\n try:\n ext_page = ext_paginator.page(self._current_page)\n except EmptyPage:\n ext_page = []\n\n return [*int_page, *ext_page], num_pages, num_results\n\n def get_context_data(self, *, object_list=None, **kwargs):\n context = super().get_context_data(**kwargs)\n\n page_obj, num_pages, num_results = self._get_page()\n\n context.update(\n {\n \"int_filter\": self.int_filter,\n \"filters_applied\": self._filters_applied,\n \"page_obj\": page_obj,\n \"num_pages\": num_pages,\n \"num_results\": num_results,\n \"current_page\": self._current_page,\n \"next_page\": self._current_page + 1,\n \"previous_page\": self._current_page - 1,\n \"jumbotron_title\": \"Challenges\",\n \"jumbotron_description\": format_html(\n (\n \"Here is an overview of all challenges that have been \"\n \"organised within the area of medical image analysis \"\n \"that we are aware of. Please <a href='{}'>contact \"\n \"us</a> if you want to advertise your challenge or \"\n \"know of any study that would fit in this overview.\"\n ),\n random_encode(\"mailto:[email protected]\"),\n ),\n }\n )\n\n return context\n\n\nclass UsersChallengeList(UserIsNotAnonMixin, PaginatedTableListView):\n model = Challenge\n template_name = \"challenges/challenge_users_list.html\"\n row_template = \"challenges/challenge_users_row.html\"\n search_fields = [\n \"title\",\n \"short_name\",\n \"description\",\n ]\n columns = [\n Column(title=\"Name\", sort_field=\"short_name\"),\n Column(title=\"Created\", sort_field=\"created\"),\n Column(title=\"Admins\", sort_field=\"created\"),\n Column(title=\"Description\", sort_field=\"description\"),\n Column(title=\"Automated Evaluation\", sort_field=\"use_evaluation\"),\n ]\n default_sort_column = 1\n\n def get_queryset(self):\n queryset = (\n super()\n .get_queryset()\n .prefetch_related(\n \"admins_group__user_set__user_profile\",\n \"admins_group__user_set__verification\",\n )\n )\n if not self.request.user.is_superuser:\n queryset = queryset.filter(\n Q(participants_group__in=self.request.user.groups.all())\n | Q(admins_group__in=self.request.user.groups.all())\n )\n return queryset\n\n\nclass ChallengeUpdate(\n UserIsChallengeAdminMixin,\n SuccessMessageMixin,\n ChallengeSubdomainObjectMixin,\n UpdateView,\n):\n model = Challenge\n slug_field = \"short_name__iexact\"\n slug_url_kwarg = \"challenge_short_name\"\n form_class = ChallengeUpdateForm\n success_message = \"Challenge successfully updated\"\n template_name_suffix = \"_update\"\n\n\nclass ExternalChallengeCreate(\n UserIsStaffMixin, SuccessMessageMixin, CreateView\n):\n model = ExternalChallenge\n form_class = ExternalChallengeUpdateForm\n success_message = (\n \"Your challenge has been successfully submitted. 
\"\n \"An admin will review your challenge before it is published.\"\n )\n\n def form_valid(self, form):\n form.instance.creator = self.request.user\n return super().form_valid(form)\n\n def get_success_url(self):\n return reverse(\"challenges:list\")\n\n\nclass ExternalChallengeUpdate(\n UserIsStaffMixin, SuccessMessageMixin, UpdateView\n):\n model = ExternalChallenge\n slug_field = \"short_name__iexact\"\n slug_url_kwarg = \"short_name\"\n form_class = ExternalChallengeUpdateForm\n template_name_suffix = \"_update\"\n success_message = \"Challenge updated\"\n\n def get_success_url(self):\n return reverse(\"challenges:list\")\n\n\nclass ExternalChallengeList(UserIsStaffMixin, ListView):\n model = ExternalChallenge\n\n\nclass ExternalChallengeDelete(UserIsStaffMixin, DeleteView):\n model = ExternalChallenge\n slug_field = \"short_name__iexact\"\n slug_url_kwarg = \"short_name\"\n success_message = \"External challenge was successfully deleted\"\n\n def get_success_url(self):\n return reverse(\"challenges:external-list\")\n\n def delete(self, request, *args, **kwargs):\n messages.success(self.request, self.success_message)\n return super().delete(request, *args, **kwargs)\n", "path": "app/grandchallenge/challenges/views.py"}]}
| 2,089 | 399 |
gh_patches_debug_22909
|
rasdani/github-patches
|
git_diff
|
locustio__locust-1657
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SetConsoleMode throws an error when locust is run from Jenkins Powershell
<!--
If you have a general question about how to use Locust, please check Stack Overflow first https://stackoverflow.com/questions/tagged/locust
You can also ask new questions on SO, https://stackoverflow.com/questions/ask just remember to tag your question with "locust". Do not immediately post your issue here after posting to SO, wait for an answer there instead.
Use this form only for reporting actual bugs in locust. Be mindful that the developers of locust are unpaid volunteers, so make sure you have tried everything you can think of before filing a bug :)
-->
### Describe the bug
<!-- A clear and concise description of what the bug is -->
An error message is printed to the console right after the test is started in Jenkins, and the job fails with exit code 2 even though the actual test passes. We have run Locust performance tests for over a year on Jenkins without a similar problem. Tests started to fail recently after we updated the Python and Locust versions to the latest. The Locust test runs without a problem if it is started from a Windows desktop PowerShell (or cmd).
### Expected behavior
<!-- Tell us what you think should happen -->
Locust tests should run on CI the same way they run on a local machine.
### Actual behavior
<!-- Tell us what happens instead. Include screenshots if this is an issue with the GUI. -->
Error "pywintypes.error: (6, 'SetConsoleMode', 'The handle is invalid.')" is printed out to console right after the test is started in Jenkins and job fails on exit code 2 even actual test passes.
### Steps to reproduce
<!-- Please provide a minimal reproducible code example (https://stackoverflow.com/help/minimal-reproducible-example) -->
```
from locust import User, task, between
class MyUser(User):
@task
def my_task(self):
print("executing my_task")
wait_time = between(0.5, 10)
```
Start command
`locust -f .\locust_test.py --headless --run-time 3`
Console from Jenkins:
```
[Locust_test] $ powershell.exe -NonInteractive -ExecutionPolicy Bypass -File C:\Users\locust\AppData\Local\Temp\jenkins11138277147510956709.ps1
[2020-12-10 19:13:09,846] robot-vm/INFO/locust.main: Run time limit set to 3 seconds
[2020-12-10 19:13:09,847] robot-vm/INFO/locust.main: Starting Locust 1.4.1
[2020-12-10 19:13:09,847] robot-vm/INFO/locust.runners: Spawning 1 users at the rate 1 users/s (0 users already running)...
[2020-12-10 19:13:09,847] robot-vm/INFO/locust.runners: All users spawned: MyUser: 1 (1 total running)
Traceback (most recent call last):
File "src/gevent/greenlet.py", line 854, in gevent._gevent_cgreenlet.Greenlet.run
File "c:\python39\lib\site-packages\locust\input_events.py", line 89, in input_listener_func
poller = get_poller()
File "c:\python39\lib\site-packages\locust\input_events.py", line 81, in get_poller
return WindowsKeyPoller()
File "c:\python39\lib\site-packages\locust\input_events.py", line 47, in __init__
self.read_handle.SetConsoleMode(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT | ENABLE_PROCESSED_INPUT)
pywintypes.error: (6, 'SetConsoleMode', 'The handle is invalid.')
2020-12-10T17:13:09Z <Greenlet at 0x19066ffdd00: input_listener_func> failed with error
Name # reqs # fails | Avg Min Max Median | req/s failures/s
--------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------
Aggregated 0 0(0.00%) | 0 0 0 0 | 0.00 0.00
[2020-12-10 19:13:09,855] robot-vm/CRITICAL/locust.main: Unhandled exception in greenlet: <Greenlet at 0x19066ffdd00: input_listener_func>
Traceback (most recent call last):
File "src/gevent/greenlet.py", line 854, in gevent._gevent_cgreenlet.Greenlet.run
File "c:\python39\lib\site-packages\locust\input_events.py", line 89, in input_listener_func
poller = get_poller()
File "c:\python39\lib\site-packages\locust\input_events.py", line 81, in get_poller
return WindowsKeyPoller()
File "c:\python39\lib\site-packages\locust\input_events.py", line 47, in __init__
self.read_handle.SetConsoleMode(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT | ENABLE_PROCESSED_INPUT)
pywintypes.error: (6, 'SetConsoleMode', 'The handle is invalid.')
Name # reqs # fails | Avg Min Max Median | req/s failures/s
--------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------
Aggregated 0 0(0.00%) | 0 0 0 0 | 0.00 0.00
[2020-12-10 19:13:12,486] robot-vm/INFO/locust.main: Time limit reached. Stopping Locust.
[2020-12-10 19:13:12,486] robot-vm/INFO/locust.runners: Stopping 1 users
[2020-12-10 19:13:12,487] robot-vm/INFO/locust.runners: 1 Users have been stopped, 0 still running
[2020-12-10 19:13:12,487] robot-vm/INFO/locust.main: Running teardowns...
[2020-12-10 19:13:12,487] robot-vm/INFO/locust.main: Shutting down (exit code 2), bye.
[2020-12-10 19:13:12,487] robot-vm/INFO/locust.main: Cleaning up runner...
Name # reqs # fails | Avg Min Max Median | req/s failures/s
--------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------
Aggregated 0 0(0.00%) | 0 0 0 0 | 0.00 0.00
Response time percentiles (approximated)
Type Name 50% 66% 75% 80% 90% 95% 98% 99% 99.9% 99.99% 100% # reqs
--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
executing my_task
Build step 'PowerShell' marked build as failure
Archiving artifacts
Finished: FAILURE
```
Console from local PC's Powershell
```
PS C:\workspace\tmp> locust -f .\locust_test.py --headless --run-time 3
[2020-12-10 18:55:07,816] WL313558/INFO/locust.main: Run time limit set to 3 seconds
[2020-12-10 18:55:07,816] WL313558/INFO/locust.main: Starting Locust 1.4.1
[2020-12-10 18:55:07,816] WL313558/INFO/locust.runners: Spawning 1 users at the rate 1 users/s (0 users already running)...
[2020-12-10 18:55:07,816] WL313558/INFO/locust.runners: All users spawned: MyUser: 1 (1 total running)
Name # reqs # fails | Avg Min Max Median | req/s failures/s
--------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------
Aggregated 0 0(0.00%) | 0 0 0 0 | 0.00 0.00
executing my_task
Name # reqs # fails | Avg Min Max Median | req/s failures/s
--------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------
Aggregated 0 0(0.00%) | 0 0 0 0 | 0.00 0.00
executing my_task
[2020-12-10 18:55:10,512] WL313558/INFO/locust.main: Time limit reached. Stopping Locust.
[2020-12-10 18:55:10,513] WL313558/INFO/locust.runners: Stopping 1 users
[2020-12-10 18:55:10,513] WL313558/INFO/locust.runners: 1 Users have been stopped, 0 still running
[2020-12-10 18:55:10,513] WL313558/INFO/locust.main: Running teardowns...
[2020-12-10 18:55:10,513] WL313558/INFO/locust.main: Shutting down (exit code 0), bye.
[2020-12-10 18:55:10,513] WL313558/INFO/locust.main: Cleaning up runner...
Name # reqs # fails | Avg Min Max Median | req/s failures/s
--------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------
Aggregated 0 0(0.00%) | 0 0 0 0 | 0.00 0.00
Response time percentiles (approximated)
Type Name 50% 66% 75% 80% 90% 95% 98% 99% 99.9% 99.99% 100% # reqs
--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
```
### Environment
- OS: Windows 10
- Python version: 3.9.1
- Locust version: 1.4.1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locust/input_events.py`
Content:
```
1 import gevent
2 import logging
3 import os
4
5 if os.name == "nt":
6 from win32api import STD_INPUT_HANDLE
7 from win32console import (
8 GetStdHandle,
9 KEY_EVENT,
10 ENABLE_ECHO_INPUT,
11 ENABLE_LINE_INPUT,
12 ENABLE_PROCESSED_INPUT,
13 )
14 else:
15 import sys
16 import select
17 import termios
18 import tty
19
20
21 class InitError(Exception):
22 pass
23
24
25 class UnixKeyPoller:
26 def __init__(self):
27 if sys.stdin.isatty():
28 self.stdin = sys.stdin.fileno()
29 self.tattr = termios.tcgetattr(self.stdin)
30 tty.setcbreak(self.stdin, termios.TCSANOW)
31 else:
32 raise InitError("Terminal was not a tty. Keyboard input disabled")
33
34 def cleanup(self):
35 termios.tcsetattr(self.stdin, termios.TCSANOW, self.tattr)
36
37 def poll(_self):
38 dr, dw, de = select.select([sys.stdin], [], [], 0)
39 if not dr == []:
40 return sys.stdin.read(1)
41 return None
42
43
44 class WindowsKeyPoller:
45 def __init__(self):
46 self.read_handle = GetStdHandle(STD_INPUT_HANDLE)
47 self.read_handle.SetConsoleMode(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT | ENABLE_PROCESSED_INPUT)
48 self.cur_event_length = 0
49 self.cur_keys_length = 0
50 self.captured_chars = []
51
52 def cleanup(self):
53 pass
54
55 def poll(self):
56 if self.captured_chars:
57 return self.captured_chars.pop(0)
58
59 events_peek = self.read_handle.PeekConsoleInput(10000)
60
61 if not events_peek:
62 return None
63
64 if not len(events_peek) == self.cur_event_length:
65 for cur_event in events_peek[self.cur_event_length :]:
66 if cur_event.EventType == KEY_EVENT:
67 if ord(cur_event.Char) and cur_event.KeyDown:
68 cur_char = str(cur_event.Char)
69 self.captured_chars.append(cur_char)
70
71 self.cur_event_length = len(events_peek)
72
73 if self.captured_chars:
74 return self.captured_chars.pop(0)
75 else:
76 return None
77
78
79 def get_poller():
80 if os.name == "nt":
81 return WindowsKeyPoller()
82 else:
83 return UnixKeyPoller()
84
85
86 def input_listener(key_to_func_map):
87 def input_listener_func():
88 try:
89 poller = get_poller()
90 except InitError as e:
91 logging.info(e)
92 return
93
94 try:
95 while True:
96 input = poller.poll()
97 if input:
98 for key in key_to_func_map:
99 if input == key:
100 key_to_func_map[key]()
101 else:
102 gevent.sleep(0.2)
103 except Exception as e:
104 logging.warning(f"Exception in keyboard input poller: {e}")
105 finally:
106 poller.cleanup()
107
108 return input_listener_func
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locust/input_events.py b/locust/input_events.py
--- a/locust/input_events.py
+++ b/locust/input_events.py
@@ -1,6 +1,7 @@
import gevent
import logging
import os
+import sys
if os.name == "nt":
from win32api import STD_INPUT_HANDLE
@@ -12,7 +13,6 @@
ENABLE_PROCESSED_INPUT,
)
else:
- import sys
import select
import termios
import tty
@@ -43,11 +43,14 @@
class WindowsKeyPoller:
def __init__(self):
- self.read_handle = GetStdHandle(STD_INPUT_HANDLE)
- self.read_handle.SetConsoleMode(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT | ENABLE_PROCESSED_INPUT)
- self.cur_event_length = 0
- self.cur_keys_length = 0
- self.captured_chars = []
+ if sys.stdin.isatty():
+ self.read_handle = GetStdHandle(STD_INPUT_HANDLE)
+ self.read_handle.SetConsoleMode(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT | ENABLE_PROCESSED_INPUT)
+ self.cur_event_length = 0
+ self.cur_keys_length = 0
+ self.captured_chars = []
+ else:
+ raise InitError("Terminal was not a tty. Keyboard input disabled")
def cleanup(self):
pass
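The guard in this patch hinges on `sys.stdin.isatty()`: under Jenkins the process is handed a pipe rather than a console, the call returns `False`, and the poller now fails fast with `InitError` (which `input_listener` only logs) instead of reaching `SetConsoleMode` on an invalid handle. A quick way to see the distinction, using an in-memory stream as a stand-in for a CI pipe:

```python
import io
import sys

print(sys.stdin.isatty())   # True in an interactive terminal, False under most CI runners

fake_stdin = io.StringIO()  # stand-in for redirected/piped stdin
print(fake_stdin.isatty())  # False -> the patched pollers raise InitError instead of
                            # touching the Windows console handle
```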
|
{"golden_diff": "diff --git a/locust/input_events.py b/locust/input_events.py\n--- a/locust/input_events.py\n+++ b/locust/input_events.py\n@@ -1,6 +1,7 @@\n import gevent\n import logging\n import os\n+import sys\n \n if os.name == \"nt\":\n from win32api import STD_INPUT_HANDLE\n@@ -12,7 +13,6 @@\n ENABLE_PROCESSED_INPUT,\n )\n else:\n- import sys\n import select\n import termios\n import tty\n@@ -43,11 +43,14 @@\n \n class WindowsKeyPoller:\n def __init__(self):\n- self.read_handle = GetStdHandle(STD_INPUT_HANDLE)\n- self.read_handle.SetConsoleMode(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT | ENABLE_PROCESSED_INPUT)\n- self.cur_event_length = 0\n- self.cur_keys_length = 0\n- self.captured_chars = []\n+ if sys.stdin.isatty():\n+ self.read_handle = GetStdHandle(STD_INPUT_HANDLE)\n+ self.read_handle.SetConsoleMode(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT | ENABLE_PROCESSED_INPUT)\n+ self.cur_event_length = 0\n+ self.cur_keys_length = 0\n+ self.captured_chars = []\n+ else:\n+ raise InitError(\"Terminal was not a tty. Keyboard input disabled\")\n \n def cleanup(self):\n pass\n", "issue": "SetConsoleMode throws an error when locust is run from Jenkins Powershell\n<!-- \r\nIf you have a general question about how to use Locust, please check Stack Overflow first https://stackoverflow.com/questions/tagged/locust\r\n\r\nYou can also ask new questions on SO, https://stackoverflow.com/questions/ask just remember to tag your question with \"locust\". Do not immediately post your issue here after posting to SO, wait for an answer there instead.\r\n\r\nUse this form only for reporting actual bugs in locust. Be mindful that the developers of locust are unpaid volunteers, so make sure you have tried everything you can think of before filing a bug :) \r\n-->\r\n\r\n### Describe the bug\r\n<!-- A clear and concise description of what the bug is -->\r\nError message is printed out to console right after the test is started in Jenkins and job fails on exit code 2 even actual test passes. We have run locust performance tests over a year on Jenkins without similar problem. Tests started to fail recently after we updated Python and Locustio versions to latest. Locust test run without a problem if it is started on Windows desktop Powershell (or cmd).\r\n\r\n### Expected behavior\r\n<!-- Tell us what you think should happen -->\r\nLocust tests should run on CI as they run on local machine.\r\n\r\n### Actual behavior\r\n<!-- Tell us what happens instead. Include screenshots if this an issue with the GUI. 
-->\r\nError \"pywintypes.error: (6, 'SetConsoleMode', 'The handle is invalid.')\" is printed out to console right after the test is started in Jenkins and job fails on exit code 2 even actual test passes.\r\n\r\n### Steps to reproduce\r\n<!-- Please provide a minimal reproducible code example (https://stackoverflow.com/help/minimal-reproducible-example) --> \r\n```\r\nfrom locust import User, task, between\r\n\r\n\r\nclass MyUser(User):\r\n @task\r\n def my_task(self):\r\n print(\"executing my_task\")\r\n\r\n wait_time = between(0.5, 10)\r\n```\r\nStart command\r\n`locust -f .\\locust_test.py --headless --run-time 3`\r\n\r\nConsole from Jenkins:\r\n```\r\n[Locust_test] $ powershell.exe -NonInteractive -ExecutionPolicy Bypass -File C:\\Users\\locust\\AppData\\Local\\Temp\\jenkins11138277147510956709.ps1\r\n[2020-12-10 19:13:09,846] robot-vm/INFO/locust.main: Run time limit set to 3 seconds\r\n[2020-12-10 19:13:09,847] robot-vm/INFO/locust.main: Starting Locust 1.4.1\r\n[2020-12-10 19:13:09,847] robot-vm/INFO/locust.runners: Spawning 1 users at the rate 1 users/s (0 users already running)...\r\n[2020-12-10 19:13:09,847] robot-vm/INFO/locust.runners: All users spawned: MyUser: 1 (1 total running)\r\nTraceback (most recent call last):\r\n File \"src/gevent/greenlet.py\", line 854, in gevent._gevent_cgreenlet.Greenlet.run\r\n File \"c:\\python39\\lib\\site-packages\\locust\\input_events.py\", line 89, in input_listener_func\r\n poller = get_poller()\r\n File \"c:\\python39\\lib\\site-packages\\locust\\input_events.py\", line 81, in get_poller\r\n return WindowsKeyPoller()\r\n File \"c:\\python39\\lib\\site-packages\\locust\\input_events.py\", line 47, in __init__\r\n self.read_handle.SetConsoleMode(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT | ENABLE_PROCESSED_INPUT)\r\npywintypes.error: (6, 'SetConsoleMode', 'The handle is invalid.')\r\n2020-12-10T17:13:09Z <Greenlet at 0x19066ffdd00: input_listener_func> failed with error\r\n\r\n Name # reqs # fails | Avg Min Max Median | req/s failures/s\r\n--------------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------------------------------------------------------------------------------\r\n Aggregated 0 0(0.00%) | 0 0 0 0 | 0.00 0.00\r\n\r\n[2020-12-10 19:13:09,855] robot-vm/CRITICAL/locust.main: Unhandled exception in greenlet: <Greenlet at 0x19066ffdd00: input_listener_func>\r\nTraceback (most recent call last):\r\n File \"src/gevent/greenlet.py\", line 854, in gevent._gevent_cgreenlet.Greenlet.run\r\n File \"c:\\python39\\lib\\site-packages\\locust\\input_events.py\", line 89, in input_listener_func\r\n poller = get_poller()\r\n File \"c:\\python39\\lib\\site-packages\\locust\\input_events.py\", line 81, in get_poller\r\n return WindowsKeyPoller()\r\n File \"c:\\python39\\lib\\site-packages\\locust\\input_events.py\", line 47, in __init__\r\n self.read_handle.SetConsoleMode(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT | ENABLE_PROCESSED_INPUT)\r\npywintypes.error: (6, 'SetConsoleMode', 'The handle is invalid.')\r\n Name # reqs # fails | Avg Min Max Median | req/s failures/s\r\n--------------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------------------------------------------------------------------------------\r\n Aggregated 0 0(0.00%) | 0 0 0 0 | 0.00 
0.00\r\n\r\n[2020-12-10 19:13:12,486] robot-vm/INFO/locust.main: Time limit reached. Stopping Locust.\r\n[2020-12-10 19:13:12,486] robot-vm/INFO/locust.runners: Stopping 1 users\r\n[2020-12-10 19:13:12,487] robot-vm/INFO/locust.runners: 1 Users have been stopped, 0 still running\r\n[2020-12-10 19:13:12,487] robot-vm/INFO/locust.main: Running teardowns...\r\n[2020-12-10 19:13:12,487] robot-vm/INFO/locust.main: Shutting down (exit code 2), bye.\r\n[2020-12-10 19:13:12,487] robot-vm/INFO/locust.main: Cleaning up runner...\r\n Name # reqs # fails | Avg Min Max Median | req/s failures/s\r\n--------------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------------------------------------------------------------------------------\r\n Aggregated 0 0(0.00%) | 0 0 0 0 | 0.00 0.00\r\n\r\nResponse time percentiles (approximated)\r\n Type Name 50% 66% 75% 80% 90% 95% 98% 99% 99.9% 99.99% 100% # reqs\r\n--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|\r\n--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|\r\n\r\nexecuting my_task\r\nBuild step 'PowerShell' marked build as failure\r\nArchiving artifacts\r\nFinished: FAILURE\r\n\r\nREST API\r\nJenkins 2.249.3\r\n```\r\nConsole from local PC's Powershell\r\n```\r\nPS C:\\workspace\\tmp> locust -f .\\locust_test.py --headless --run-time 3\r\n[2020-12-10 18:55:07,816] WL313558/INFO/locust.main: Run time limit set to 3 seconds\r\n[2020-12-10 18:55:07,816] WL313558/INFO/locust.main: Starting Locust 1.4.1\r\n[2020-12-10 18:55:07,816] WL313558/INFO/locust.runners: Spawning 1 users at the rate 1 users/s (0 users already running)...\r\n[2020-12-10 18:55:07,816] WL313558/INFO/locust.runners: All users spawned: MyUser: 1 (1 total running)\r\n Name # reqs # fails | Avg Min Max Median | req/s failures/s\r\n--------------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------------------------------------------------------------------------------\r\n Aggregated 0 0(0.00%) | 0 0 0 0 | 0.00 0.00\r\n\r\nexecuting my_task\r\n Name # reqs # fails | Avg Min Max Median | req/s failures/s\r\n--------------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------------------------------------------------------------------------------\r\n Aggregated 0 0(0.00%) | 0 0 0 0 | 0.00 0.00\r\n\r\nexecuting my_task\r\n[2020-12-10 18:55:10,512] WL313558/INFO/locust.main: Time limit reached. 
Stopping Locust.\r\n[2020-12-10 18:55:10,513] WL313558/INFO/locust.runners: Stopping 1 users\r\n[2020-12-10 18:55:10,513] WL313558/INFO/locust.runners: 1 Users have been stopped, 0 still running\r\n[2020-12-10 18:55:10,513] WL313558/INFO/locust.main: Running teardowns...\r\n[2020-12-10 18:55:10,513] WL313558/INFO/locust.main: Shutting down (exit code 0), bye.\r\n[2020-12-10 18:55:10,513] WL313558/INFO/locust.main: Cleaning up runner...\r\n Name # reqs # fails | Avg Min Max Median | req/s failures/s\r\n--------------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------------------------------------------------------------------------------\r\n Aggregated 0 0(0.00%) | 0 0 0 0 | 0.00 0.00\r\n\r\nResponse time percentiles (approximated)\r\n Type Name 50% 66% 75% 80% 90% 95% 98% 99% 99.9% 99.99% 100% # reqs\r\n--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|\r\n--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|\r\n```\r\n\r\n\r\n### Environment\r\n\r\n- OS: Windows 10\r\n- Python version: 3.9.1\r\n- Locust version: 1.4.1\n", "before_files": [{"content": "import gevent\nimport logging\nimport os\n\nif os.name == \"nt\":\n from win32api import STD_INPUT_HANDLE\n from win32console import (\n GetStdHandle,\n KEY_EVENT,\n ENABLE_ECHO_INPUT,\n ENABLE_LINE_INPUT,\n ENABLE_PROCESSED_INPUT,\n )\nelse:\n import sys\n import select\n import termios\n import tty\n\n\nclass InitError(Exception):\n pass\n\n\nclass UnixKeyPoller:\n def __init__(self):\n if sys.stdin.isatty():\n self.stdin = sys.stdin.fileno()\n self.tattr = termios.tcgetattr(self.stdin)\n tty.setcbreak(self.stdin, termios.TCSANOW)\n else:\n raise InitError(\"Terminal was not a tty. 
Keyboard input disabled\")\n\n def cleanup(self):\n termios.tcsetattr(self.stdin, termios.TCSANOW, self.tattr)\n\n def poll(_self):\n dr, dw, de = select.select([sys.stdin], [], [], 0)\n if not dr == []:\n return sys.stdin.read(1)\n return None\n\n\nclass WindowsKeyPoller:\n def __init__(self):\n self.read_handle = GetStdHandle(STD_INPUT_HANDLE)\n self.read_handle.SetConsoleMode(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT | ENABLE_PROCESSED_INPUT)\n self.cur_event_length = 0\n self.cur_keys_length = 0\n self.captured_chars = []\n\n def cleanup(self):\n pass\n\n def poll(self):\n if self.captured_chars:\n return self.captured_chars.pop(0)\n\n events_peek = self.read_handle.PeekConsoleInput(10000)\n\n if not events_peek:\n return None\n\n if not len(events_peek) == self.cur_event_length:\n for cur_event in events_peek[self.cur_event_length :]:\n if cur_event.EventType == KEY_EVENT:\n if ord(cur_event.Char) and cur_event.KeyDown:\n cur_char = str(cur_event.Char)\n self.captured_chars.append(cur_char)\n\n self.cur_event_length = len(events_peek)\n\n if self.captured_chars:\n return self.captured_chars.pop(0)\n else:\n return None\n\n\ndef get_poller():\n if os.name == \"nt\":\n return WindowsKeyPoller()\n else:\n return UnixKeyPoller()\n\n\ndef input_listener(key_to_func_map):\n def input_listener_func():\n try:\n poller = get_poller()\n except InitError as e:\n logging.info(e)\n return\n\n try:\n while True:\n input = poller.poll()\n if input:\n for key in key_to_func_map:\n if input == key:\n key_to_func_map[key]()\n else:\n gevent.sleep(0.2)\n except Exception as e:\n logging.warning(f\"Exception in keyboard input poller: {e}\")\n finally:\n poller.cleanup()\n\n return input_listener_func\n", "path": "locust/input_events.py"}], "after_files": [{"content": "import gevent\nimport logging\nimport os\nimport sys\n\nif os.name == \"nt\":\n from win32api import STD_INPUT_HANDLE\n from win32console import (\n GetStdHandle,\n KEY_EVENT,\n ENABLE_ECHO_INPUT,\n ENABLE_LINE_INPUT,\n ENABLE_PROCESSED_INPUT,\n )\nelse:\n import select\n import termios\n import tty\n\n\nclass InitError(Exception):\n pass\n\n\nclass UnixKeyPoller:\n def __init__(self):\n if sys.stdin.isatty():\n self.stdin = sys.stdin.fileno()\n self.tattr = termios.tcgetattr(self.stdin)\n tty.setcbreak(self.stdin, termios.TCSANOW)\n else:\n raise InitError(\"Terminal was not a tty. Keyboard input disabled\")\n\n def cleanup(self):\n termios.tcsetattr(self.stdin, termios.TCSANOW, self.tattr)\n\n def poll(_self):\n dr, dw, de = select.select([sys.stdin], [], [], 0)\n if not dr == []:\n return sys.stdin.read(1)\n return None\n\n\nclass WindowsKeyPoller:\n def __init__(self):\n if sys.stdin.isatty():\n self.read_handle = GetStdHandle(STD_INPUT_HANDLE)\n self.read_handle.SetConsoleMode(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT | ENABLE_PROCESSED_INPUT)\n self.cur_event_length = 0\n self.cur_keys_length = 0\n self.captured_chars = []\n else:\n raise InitError(\"Terminal was not a tty. 
Keyboard input disabled\")\n\n def cleanup(self):\n pass\n\n def poll(self):\n if self.captured_chars:\n return self.captured_chars.pop(0)\n\n events_peek = self.read_handle.PeekConsoleInput(10000)\n\n if not events_peek:\n return None\n\n if not len(events_peek) == self.cur_event_length:\n for cur_event in events_peek[self.cur_event_length :]:\n if cur_event.EventType == KEY_EVENT:\n if ord(cur_event.Char) and cur_event.KeyDown:\n cur_char = str(cur_event.Char)\n self.captured_chars.append(cur_char)\n\n self.cur_event_length = len(events_peek)\n\n if self.captured_chars:\n return self.captured_chars.pop(0)\n else:\n return None\n\n\ndef get_poller():\n if os.name == \"nt\":\n return WindowsKeyPoller()\n else:\n return UnixKeyPoller()\n\n\ndef input_listener(key_to_func_map):\n def input_listener_func():\n try:\n poller = get_poller()\n except InitError as e:\n logging.info(e)\n return\n\n try:\n while True:\n input = poller.poll()\n if input:\n for key in key_to_func_map:\n if input == key:\n key_to_func_map[key]()\n else:\n gevent.sleep(0.2)\n except Exception as e:\n logging.warning(f\"Exception in keyboard input poller: {e}\")\n finally:\n poller.cleanup()\n\n return input_listener_func\n", "path": "locust/input_events.py"}]}
| 3,865 | 314 |
gh_patches_debug_49851
|
rasdani/github-patches
|
git_diff
|
netbox-community__netbox-15890
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OpenIDC SSO through apache stopped working after update to 3.7.6
### Deployment Type
Self-hosted
### NetBox Version
v3.7.6
### Python Version
3.9
### Steps to Reproduce
This is a longstanding NetBox instance. It runs under gunicorn, proxied through apache which is configured to use mod_auth_openid for authentication.
NetBox's configuration includes:
REMOTE_AUTH_ENABLED = True
REMOTE_AUTH_BACKEND = 'netbox.authentication.RemoteUserBackend'
REMOTE_AUTH_HEADER = 'HTTP_OIDC_CLAIM_PREFERRED_USERNAME'
REMOTE_AUTH_AUTO_CREATE_USER = True
This was working fine until the update to 3.7.6 following our usual procedure:
Pull and checkout v3.7.6.
Run upgrade.sh
Restart NetBox gunicorn service, netbox-rq and apache
Since the upgrade, NetBox has presented a login box instead of logging in as the REMOTE_AUTH_HEADER user. Using tcpdump, I can see the "OIDC_CLAIM_preferred_username" header is being sent to gunicorn. Other instances using the same OpenIDC configuration are working.
### Expected Behavior
REMOTE_AUTH login using OpenIDC credentials.
### Observed Behavior
The web frontend prompts for username and password.
--- END ISSUE ---
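When a remote-auth header 'disappears' somewhere between Apache and Django, the quickest check is to inspect what actually arrives in `request.META` behind the WSGI server. The snippet below is a throwaway diagnostic view (its name and URL wiring are illustrative, not part of NetBox):

```python
from django.http import JsonResponse

def dump_oidc_headers(request):
    """Return every OIDC_* header Django received; mount temporarily at e.g. /header-debug/."""
    oidc_meta = {
        key: value
        for key, value in request.META.items()
        if key.startswith("HTTP_OIDC_")
    }
    # Empty output while tcpdump shows the header on the wire means the header is
    # being dropped before Django sees it (i.e. in the proxy or WSGI layer).
    return JsonResponse(oidc_meta)
```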
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `contrib/gunicorn.py`
Content:
```
1 # The IP address (typically localhost) and port that the NetBox WSGI process should listen on
2 bind = '127.0.0.1:8001'
3
4 # Number of gunicorn workers to spawn. This should typically be 2n+1, where
5 # n is the number of CPU cores present.
6 workers = 5
7
8 # Number of threads per worker process
9 threads = 3
10
11 # Timeout (in seconds) for a request to complete
12 timeout = 120
13
14 # The maximum number of requests a worker can handle before being respawned
15 max_requests = 5000
16 max_requests_jitter = 500
17
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/contrib/gunicorn.py b/contrib/gunicorn.py
--- a/contrib/gunicorn.py
+++ b/contrib/gunicorn.py
@@ -14,3 +14,7 @@
# The maximum number of requests a worker can handle before being respawned
max_requests = 5000
max_requests_jitter = 500
+
+# Uncomment this line to accept HTTP headers containing underscores, e.g. for remote
+# authentication support. See https://docs.gunicorn.org/en/stable/settings.html#header-map
+# header-map = 'dangerous'
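If that comment is later acted on, note that gunicorn documents the setting as `header_map` (the hyphenated `--header-map` form is the command-line flag), so the uncommented line in a Python-syntax config would look like the sketch below; treat the exact spelling and the security caveat as things to verify against the gunicorn docs for the installed version:

```python
# Excerpt of contrib/gunicorn.py with the option enabled (illustrative).
bind = '127.0.0.1:8001'
workers = 5
threads = 3

# Accept header names containing underscores (e.g. OIDC_CLAIM_preferred_username)
# instead of dropping them. 'dangerous' should only be used when the proxy in front
# (Apache here) is the sole client and strips untrusted copies of these headers.
header_map = 'dangerous'
```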
|
{"golden_diff": "diff --git a/contrib/gunicorn.py b/contrib/gunicorn.py\n--- a/contrib/gunicorn.py\n+++ b/contrib/gunicorn.py\n@@ -14,3 +14,7 @@\n # The maximum number of requests a worker can handle before being respawned\n max_requests = 5000\n max_requests_jitter = 500\n+\n+# Uncomment this line to accept HTTP headers containing underscores, e.g. for remote\n+# authentication support. See https://docs.gunicorn.org/en/stable/settings.html#header-map\n+# header-map = 'dangerous'\n", "issue": "OpenIDC SSO through apache stopped working after update to 3.7.6\n### Deployment Type\n\nSelf-hosted\n\n### NetBox Version\n\nv3.7.6\n\n### Python Version\n\n3.9\n\n### Steps to Reproduce\n\nThis is a longstanding NetBox instance. It runs under gunicorn, proxied through apache which is configured to use mod_auth_openid for authentication. \r\n\r\nNetBox's configuration includes:\r\nREMOTE_AUTH_ENABLED = True\r\nREMOTE_AUTH_BACKEND = 'netbox.authentication.RemoteUserBackend'\r\nREMOTE_AUTH_HEADER = 'HTTP_OIDC_CLAIM_PREFERRED_USERNAME'\r\nREMOTE_AUTH_AUTO_CREATE_USER = True\r\n\r\nThis was working fine until the update to 3.7.6 following our usual procedure:\r\n\r\nPull and checkout v3.7.6.\r\n\r\nRun upgrade.sh\r\n\r\nRestart NetBox gunicorn service, netbox-rq and apache\r\n\r\nSince the upgrade, NetBox has presented a login box instead of logging in as the REMOTE_AUTH_HEADER user. Using tcpdump, I can see the \"OIDC_CLAIM_preferred_username\" header is being sent to gunicorn. Other instances using the same OpenIDC configuration are working.\r\n\n\n### Expected Behavior\n\nREMOTE_AUTH login using OpenIDC credentials.\n\n### Observed Behavior\n\nThe web frontend prompts for username and password.\n", "before_files": [{"content": "# The IP address (typically localhost) and port that the NetBox WSGI process should listen on\nbind = '127.0.0.1:8001'\n\n# Number of gunicorn workers to spawn. This should typically be 2n+1, where\n# n is the number of CPU cores present.\nworkers = 5\n\n# Number of threads per worker process\nthreads = 3\n\n# Timeout (in seconds) for a request to complete\ntimeout = 120\n\n# The maximum number of requests a worker can handle before being respawned\nmax_requests = 5000\nmax_requests_jitter = 500\n", "path": "contrib/gunicorn.py"}], "after_files": [{"content": "# The IP address (typically localhost) and port that the NetBox WSGI process should listen on\nbind = '127.0.0.1:8001'\n\n# Number of gunicorn workers to spawn. This should typically be 2n+1, where\n# n is the number of CPU cores present.\nworkers = 5\n\n# Number of threads per worker process\nthreads = 3\n\n# Timeout (in seconds) for a request to complete\ntimeout = 120\n\n# The maximum number of requests a worker can handle before being respawned\nmax_requests = 5000\nmax_requests_jitter = 500\n\n# Uncomment this line to accept HTTP headers containing underscores, e.g. for remote\n# authentication support. See https://docs.gunicorn.org/en/stable/settings.html#header-map\n# header-map = 'dangerous'\n", "path": "contrib/gunicorn.py"}]}
| 695 | 124 |
gh_patches_debug_30377
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-2845
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Using 'Have-I-been-pwned' policy breaks flows in Authentik 2022.4.1
**Describe the bug**
Using a 'Have-I-been-pwned' policy on a password prompt within a flow breaks the flow.
**To Reproduce**
Steps to reproduce the behavior:
1. Use Authentik 2022.3.3
2. Use all the default settings/flows, so a clean install
3. Add a have-i-been-pwned policy to the default-password-change flow on the default-password-change-prompt stage.
4. This stage binding has the following settings:
- _Evaluate on plan: True_
- _Re-evaluate policies: False_
- _Invalid response action: RETRY returns the error message and a similar challenge to the executor._
- _Policy engine mode: ALL, all policies must match to include this stage access._
5. Go to the Flow Overview and Execute flow with current user, see that the have-i-been pwned policy works correctly.
6. Use Authentik 2022.4.1
7. Repeat steps 2 - 5 described above
8. See that you will receive an error message 'Password not set in context'.
**Expected behavior**
The password should be checked, and the flow should not crash with the error 'Password not set in context'.
**Version and Deployment (please complete the following information):**
- authentik version: 2022.4.1
- Deployment: tested both Docker & K8S
**Additional context**
I repeated these steps multiple times and I keep getting the same issue. Therefore I think it is safe to assume that this is a bug introduced in the update from version 2022.3.3 to version 2022.4.1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/policies/hibp/models.py`
Content:
```
1 """authentik HIBP Models"""
2 from hashlib import sha1
3
4 from django.db import models
5 from django.utils.translation import gettext as _
6 from rest_framework.serializers import BaseSerializer
7 from structlog.stdlib import get_logger
8
9 from authentik.lib.utils.http import get_http_session
10 from authentik.policies.models import Policy, PolicyResult
11 from authentik.policies.types import PolicyRequest
12
13 LOGGER = get_logger()
14
15
16 class HaveIBeenPwendPolicy(Policy):
17 """Check if password is on HaveIBeenPwned's list by uploading the first
18 5 characters of the SHA1 Hash."""
19
20 password_field = models.TextField(
21 default="password",
22 help_text=_("Field key to check, field keys defined in Prompt stages are available."),
23 )
24
25 allowed_count = models.IntegerField(default=0)
26
27 @property
28 def serializer(self) -> BaseSerializer:
29 from authentik.policies.hibp.api import HaveIBeenPwendPolicySerializer
30
31 return HaveIBeenPwendPolicySerializer
32
33 @property
34 def component(self) -> str:
35 return "ak-policy-hibp-form"
36
37 def passes(self, request: PolicyRequest) -> PolicyResult:
38 """Check if password is in HIBP DB. Hashes given Password with SHA1, uses the first 5
39 characters of Password in request and checks if full hash is in response. Returns 0
40 if Password is not in result otherwise the count of how many times it was used."""
41 if self.password_field not in request.context:
42 LOGGER.warning(
43 "Password field not set in Policy Request",
44 field=self.password_field,
45 fields=request.context.keys(),
46 )
47 return PolicyResult(False, _("Password not set in context"))
48 password = str(request.context[self.password_field])
49
50 pw_hash = sha1(password.encode("utf-8")).hexdigest() # nosec
51 url = f"https://api.pwnedpasswords.com/range/{pw_hash[:5]}"
52 result = get_http_session().get(url).text
53 final_count = 0
54 for line in result.split("\r\n"):
55 full_hash, count = line.split(":")
56 if pw_hash[5:] == full_hash.lower():
57 final_count = int(count)
58 LOGGER.debug("got hibp result", count=final_count, hash=pw_hash[:5])
59 if final_count > self.allowed_count:
60 message = _("Password exists on %(count)d online lists." % {"count": final_count})
61 return PolicyResult(False, message)
62 return PolicyResult(True)
63
64 class Meta:
65
66 verbose_name = _("Have I Been Pwned Policy")
67 verbose_name_plural = _("Have I Been Pwned Policies")
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/authentik/policies/hibp/models.py b/authentik/policies/hibp/models.py
--- a/authentik/policies/hibp/models.py
+++ b/authentik/policies/hibp/models.py
@@ -9,6 +9,7 @@
from authentik.lib.utils.http import get_http_session
from authentik.policies.models import Policy, PolicyResult
from authentik.policies.types import PolicyRequest
+from authentik.stages.prompt.stage import PLAN_CONTEXT_PROMPT
LOGGER = get_logger()
@@ -38,14 +39,17 @@
"""Check if password is in HIBP DB. Hashes given Password with SHA1, uses the first 5
characters of Password in request and checks if full hash is in response. Returns 0
if Password is not in result otherwise the count of how many times it was used."""
- if self.password_field not in request.context:
+ password = request.context.get(PLAN_CONTEXT_PROMPT, {}).get(
+ self.password_field, request.context.get(self.password_field)
+ )
+ if not password:
LOGGER.warning(
"Password field not set in Policy Request",
field=self.password_field,
fields=request.context.keys(),
)
return PolicyResult(False, _("Password not set in context"))
- password = str(request.context[self.password_field])
+ password = str(password)
pw_hash = sha1(password.encode("utf-8")).hexdigest() # nosec
url = f"https://api.pwnedpasswords.com/range/{pw_hash[:5]}"
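For reference, the check this policy performs is the standard Pwned Passwords k-anonymity lookup: hash the candidate password with SHA-1, send only the first five hex characters, and compare the returned suffixes locally. A standalone sketch of the same round trip (using `requests` directly rather than authentik's pooled session helper):

```python
from hashlib import sha1

import requests

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus."""
    digest = sha1(password.encode("utf-8")).hexdigest().upper()  # nosec - required by the API
    prefix, suffix = digest[:5], digest[5:]
    response = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    response.raise_for_status()
    for line in response.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(pwned_count("password"))  # a very large count; an unseen password returns 0
```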
|
{"golden_diff": "diff --git a/authentik/policies/hibp/models.py b/authentik/policies/hibp/models.py\n--- a/authentik/policies/hibp/models.py\n+++ b/authentik/policies/hibp/models.py\n@@ -9,6 +9,7 @@\n from authentik.lib.utils.http import get_http_session\n from authentik.policies.models import Policy, PolicyResult\n from authentik.policies.types import PolicyRequest\n+from authentik.stages.prompt.stage import PLAN_CONTEXT_PROMPT\n \n LOGGER = get_logger()\n \n@@ -38,14 +39,17 @@\n \"\"\"Check if password is in HIBP DB. Hashes given Password with SHA1, uses the first 5\n characters of Password in request and checks if full hash is in response. Returns 0\n if Password is not in result otherwise the count of how many times it was used.\"\"\"\n- if self.password_field not in request.context:\n+ password = request.context.get(PLAN_CONTEXT_PROMPT, {}).get(\n+ self.password_field, request.context.get(self.password_field)\n+ )\n+ if not password:\n LOGGER.warning(\n \"Password field not set in Policy Request\",\n field=self.password_field,\n fields=request.context.keys(),\n )\n return PolicyResult(False, _(\"Password not set in context\"))\n- password = str(request.context[self.password_field])\n+ password = str(password)\n \n pw_hash = sha1(password.encode(\"utf-8\")).hexdigest() # nosec\n url = f\"https://api.pwnedpasswords.com/range/{pw_hash[:5]}\"\n", "issue": "Using 'Have-I-been-pwned' policy breaks flows in Authentik 2022.4.1\n**Describe the bug**\r\nUsing a 'Have-I-been-pwned' policy on a password prompt within a flow breaks the flow.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Use Authentik 2022.3.3\r\n2. Use all the default settings/flows, so a clean install\r\n3. Add a have-i-been-pwned policy to the default-password-change flow on the default-password-change-prompt stage.\r\n4. This stage binding has the following settings:\r\n- _Evaluate on plan: True_\r\n- _Re-evaluate policies: False_\r\n- _Invalid respones action: RETRY returns the error message and a similar challenge to the executor._\r\n- _Policy engine mode: ALL, all policies must match to include this stage access._\r\n5. Go to the Flow Overview and Execute flow with current user, see that the have-i-been pwned policy works correctly.\r\n6. Use Authentik 2022.4.1\r\n7. Repeat steps 2 - 5 described above\r\n8. See that you will receive an error message 'Password not set in context'.\r\n\r\n**Expected behavior**\r\nThe password should be checked, and the flow should not crash with the error 'Password not set in context'.\r\n\r\n**Version and Deployment (please complete the following information):**\r\n - authentik version: 2022.4.1\r\n - Deployment: tested both Docker & K8S\r\n\r\n**Additional context**\r\nI repeated these steps multiple times and I keep getting the same issue. 
Therefore I think it is safe to assume that this is a bug introduced in the update from version 2022.3.3 to version 2022.4.1\r\n\n", "before_files": [{"content": "\"\"\"authentik HIBP Models\"\"\"\nfrom hashlib import sha1\n\nfrom django.db import models\nfrom django.utils.translation import gettext as _\nfrom rest_framework.serializers import BaseSerializer\nfrom structlog.stdlib import get_logger\n\nfrom authentik.lib.utils.http import get_http_session\nfrom authentik.policies.models import Policy, PolicyResult\nfrom authentik.policies.types import PolicyRequest\n\nLOGGER = get_logger()\n\n\nclass HaveIBeenPwendPolicy(Policy):\n \"\"\"Check if password is on HaveIBeenPwned's list by uploading the first\n 5 characters of the SHA1 Hash.\"\"\"\n\n password_field = models.TextField(\n default=\"password\",\n help_text=_(\"Field key to check, field keys defined in Prompt stages are available.\"),\n )\n\n allowed_count = models.IntegerField(default=0)\n\n @property\n def serializer(self) -> BaseSerializer:\n from authentik.policies.hibp.api import HaveIBeenPwendPolicySerializer\n\n return HaveIBeenPwendPolicySerializer\n\n @property\n def component(self) -> str:\n return \"ak-policy-hibp-form\"\n\n def passes(self, request: PolicyRequest) -> PolicyResult:\n \"\"\"Check if password is in HIBP DB. Hashes given Password with SHA1, uses the first 5\n characters of Password in request and checks if full hash is in response. Returns 0\n if Password is not in result otherwise the count of how many times it was used.\"\"\"\n if self.password_field not in request.context:\n LOGGER.warning(\n \"Password field not set in Policy Request\",\n field=self.password_field,\n fields=request.context.keys(),\n )\n return PolicyResult(False, _(\"Password not set in context\"))\n password = str(request.context[self.password_field])\n\n pw_hash = sha1(password.encode(\"utf-8\")).hexdigest() # nosec\n url = f\"https://api.pwnedpasswords.com/range/{pw_hash[:5]}\"\n result = get_http_session().get(url).text\n final_count = 0\n for line in result.split(\"\\r\\n\"):\n full_hash, count = line.split(\":\")\n if pw_hash[5:] == full_hash.lower():\n final_count = int(count)\n LOGGER.debug(\"got hibp result\", count=final_count, hash=pw_hash[:5])\n if final_count > self.allowed_count:\n message = _(\"Password exists on %(count)d online lists.\" % {\"count\": final_count})\n return PolicyResult(False, message)\n return PolicyResult(True)\n\n class Meta:\n\n verbose_name = _(\"Have I Been Pwned Policy\")\n verbose_name_plural = _(\"Have I Been Pwned Policies\")\n", "path": "authentik/policies/hibp/models.py"}], "after_files": [{"content": "\"\"\"authentik HIBP Models\"\"\"\nfrom hashlib import sha1\n\nfrom django.db import models\nfrom django.utils.translation import gettext as _\nfrom rest_framework.serializers import BaseSerializer\nfrom structlog.stdlib import get_logger\n\nfrom authentik.lib.utils.http import get_http_session\nfrom authentik.policies.models import Policy, PolicyResult\nfrom authentik.policies.types import PolicyRequest\nfrom authentik.stages.prompt.stage import PLAN_CONTEXT_PROMPT\n\nLOGGER = get_logger()\n\n\nclass HaveIBeenPwendPolicy(Policy):\n \"\"\"Check if password is on HaveIBeenPwned's list by uploading the first\n 5 characters of the SHA1 Hash.\"\"\"\n\n password_field = models.TextField(\n default=\"password\",\n help_text=_(\"Field key to check, field keys defined in Prompt stages are available.\"),\n )\n\n allowed_count = models.IntegerField(default=0)\n\n @property\n def serializer(self) -> 
BaseSerializer:\n from authentik.policies.hibp.api import HaveIBeenPwendPolicySerializer\n\n return HaveIBeenPwendPolicySerializer\n\n @property\n def component(self) -> str:\n return \"ak-policy-hibp-form\"\n\n def passes(self, request: PolicyRequest) -> PolicyResult:\n \"\"\"Check if password is in HIBP DB. Hashes given Password with SHA1, uses the first 5\n characters of Password in request and checks if full hash is in response. Returns 0\n if Password is not in result otherwise the count of how many times it was used.\"\"\"\n password = request.context.get(PLAN_CONTEXT_PROMPT, {}).get(\n self.password_field, request.context.get(self.password_field)\n )\n if not password:\n LOGGER.warning(\n \"Password field not set in Policy Request\",\n field=self.password_field,\n fields=request.context.keys(),\n )\n return PolicyResult(False, _(\"Password not set in context\"))\n password = str(password)\n\n pw_hash = sha1(password.encode(\"utf-8\")).hexdigest() # nosec\n url = f\"https://api.pwnedpasswords.com/range/{pw_hash[:5]}\"\n result = get_http_session().get(url).text\n final_count = 0\n for line in result.split(\"\\r\\n\"):\n full_hash, count = line.split(\":\")\n if pw_hash[5:] == full_hash.lower():\n final_count = int(count)\n LOGGER.debug(\"got hibp result\", count=final_count, hash=pw_hash[:5])\n if final_count > self.allowed_count:\n message = _(\"Password exists on %(count)d online lists.\" % {\"count\": final_count})\n return PolicyResult(False, message)\n return PolicyResult(True)\n\n class Meta:\n\n verbose_name = _(\"Have I Been Pwned Policy\")\n verbose_name_plural = _(\"Have I Been Pwned Policies\")\n", "path": "authentik/policies/hibp/models.py"}]}
| 1,365 | 346 |
gh_patches_debug_19098
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-5373
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Task cleanup must not delete content nor artifacts
Deleting content or artifacts outside of orphan cleanup is breaking the rules.
And no, we cannot get away with that.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/tasking/_util.py`
Content:
```
1 import asyncio
2 import importlib
3 import logging
4 import os
5 import resource
6 import signal
7 import sys
8 import threading
9 import time
10 from gettext import gettext as _
11
12 from django.conf import settings
13 from django.db import connection, transaction
14 from django.db.models import Q
15 from django.utils import timezone
16 from django_guid import set_guid
17 from django_guid.utils import generate_guid
18 from pulpcore.app.models import Task, TaskSchedule
19 from pulpcore.app.role_util import get_users_with_perms
20 from pulpcore.app.util import (
21 set_current_user,
22 set_domain,
23 configure_analytics,
24 configure_cleanup,
25 )
26 from pulpcore.constants import TASK_FINAL_STATES, TASK_STATES, VAR_TMP_PULP
27 from pulpcore.exceptions import AdvisoryLockError
28 from pulpcore.tasking.tasks import dispatch, execute_task
29
30 _logger = logging.getLogger(__name__)
31
32
33 class PGAdvisoryLock:
34 """
35 A context manager that will hold a postgres advisory lock non-blocking.
36
37 The locks can be chosen from a lock group to avoid collisions. They will never collide with the
38 locks used for tasks.
39 """
40
41 def __init__(self, lock, lock_group=0):
42 self.lock_group = lock_group
43 self.lock = lock
44
45 def __enter__(self):
46 with connection.cursor() as cursor:
47 cursor.execute("SELECT pg_try_advisory_lock(%s, %s)", [self.lock_group, self.lock])
48 acquired = cursor.fetchone()[0]
49 if not acquired:
50 raise AdvisoryLockError("Could not acquire lock.")
51 return self
52
53 def __exit__(self, exc_type, exc_value, traceback):
54 with connection.cursor() as cursor:
55 cursor.execute("SELECT pg_advisory_unlock(%s, %s)", [self.lock_group, self.lock])
56 released = cursor.fetchone()[0]
57 if not released:
58 raise RuntimeError("Lock not held.")
59
60
61 def startup_hook():
62 configure_analytics()
63 configure_cleanup()
64
65
66 def delete_incomplete_resources(task):
67 """
68 Delete all incomplete created-resources on a canceled task.
69
70 Args:
71 task (Task): A task.
72 """
73 if task.state != TASK_STATES.CANCELING:
74 raise RuntimeError(_("Task must be canceling."))
75 for model in (r.content_object for r in task.created_resources.all()):
76 try:
77 if model.complete:
78 continue
79 except AttributeError:
80 continue
81 try:
82 with transaction.atomic():
83 model.delete()
84 except Exception as error:
85 _logger.error(_("Delete created resource, failed: {}").format(str(error)))
86
87
88 def write_memory_usage(path):
89 _logger.info("Writing task memory data to {}".format(path))
90
91 with open(path, "w") as file:
92 file.write("# Seconds\tMemory in MB\n")
93 seconds = 0
94 while True:
95 current_mb_in_use = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024
96 file.write(f"{seconds}\t{current_mb_in_use:.2f}\n")
97 file.flush()
98 time.sleep(5)
99 seconds += 5
100
101
102 def child_signal_handler(sig, frame):
103 _logger.debug("Signal %s recieved by %s.", sig, os.getpid())
104 # Reset signal handlers to default
105 # If you kill the process a second time it's not graceful anymore.
106 signal.signal(signal.SIGINT, signal.SIG_DFL)
107 signal.signal(signal.SIGTERM, signal.SIG_DFL)
108 signal.signal(signal.SIGHUP, signal.SIG_DFL)
109 signal.signal(signal.SIGUSR1, signal.SIG_DFL)
110
111 if sig == signal.SIGUSR1:
112 sys.exit()
113
114
115 def perform_task(task_pk, task_working_dir_rel_path):
116 """Setup the environment to handle a task and execute it.
117 This must be called as a subprocess, while the parent holds the advisory lock of the task."""
118 signal.signal(signal.SIGINT, child_signal_handler)
119 signal.signal(signal.SIGTERM, child_signal_handler)
120 signal.signal(signal.SIGHUP, child_signal_handler)
121 signal.signal(signal.SIGUSR1, child_signal_handler)
122 if settings.TASK_DIAGNOSTICS:
123 diagnostics_dir = VAR_TMP_PULP / str(task_pk)
124 diagnostics_dir.mkdir(parents=True, exist_ok=True)
125 mem_diagnostics_path = diagnostics_dir / "memory.datum"
126 # It would be better to have this recording happen in the parent process instead of here
127 # https://github.com/pulp/pulpcore/issues/2337
128 mem_diagnostics_thread = threading.Thread(
129 target=write_memory_usage, args=(mem_diagnostics_path,), daemon=True
130 )
131 mem_diagnostics_thread.start()
132 # All processes need to create their own postgres connection
133 connection.connection = None
134 task = Task.objects.select_related("pulp_domain").get(pk=task_pk)
135 user = get_users_with_perms(task, with_group_users=False).first()
136 # Isolate from the parent asyncio.
137 asyncio.set_event_loop(asyncio.new_event_loop())
138 # Set current contexts
139 set_guid(task.logging_cid)
140 set_current_user(user)
141 set_domain(task.pulp_domain)
142 os.chdir(task_working_dir_rel_path)
143
144 # set up profiling
145 if settings.TASK_DIAGNOSTICS and importlib.util.find_spec("pyinstrument") is not None:
146 from pyinstrument import Profiler
147
148 with Profiler() as profiler:
149 execute_task(task)
150
151 profile_file = diagnostics_dir / "pyinstrument.html"
152 _logger.info("Writing task profile data to {}".format(profile_file))
153 with open(profile_file, "w+") as f:
154 f.write(profiler.output_html())
155 else:
156 execute_task(task)
157
158
159 def dispatch_scheduled_tasks():
160 # Warning, dispatch_scheduled_tasks is not race condition free!
161 now = timezone.now()
162 # Dispatch all tasks old enough and not still running
163 for task_schedule in TaskSchedule.objects.filter(next_dispatch__lte=now).filter(
164 Q(last_task=None) | Q(last_task__state__in=TASK_FINAL_STATES)
165 ):
166 try:
167 if task_schedule.dispatch_interval is None:
168 # This was a timed one shot task schedule
169 task_schedule.next_dispatch = None
170 else:
171 # This is a recurring task schedule
172 while task_schedule.next_dispatch < now:
173 # Do not schedule in the past
174 task_schedule.next_dispatch += task_schedule.dispatch_interval
175 set_guid(generate_guid())
176 with transaction.atomic():
177 task_schedule.last_task = dispatch(
178 task_schedule.task_name,
179 )
180 task_schedule.save(update_fields=["next_dispatch", "last_task"])
181
182 _logger.info(
183 "Dispatched scheduled task {task_name} as task id {task_id}".format(
184 task_name=task_schedule.task_name, task_id=task_schedule.last_task.pk
185 )
186 )
187 except Exception as e:
188 _logger.warning(
189 "Dispatching scheduled task {task_name} failed. {error}".format(
190 task_name=task_schedule.task_name, error=str(e)
191 )
192 )
193
```
--- END FILES ---
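One piece of the module above that is easy to misread is `PGAdvisoryLock`: it is deliberately non-blocking, so callers are expected to treat a failed acquisition as "another worker already holds this" and move on rather than wait. A minimal usage sketch (the lock id, lock group and function name are arbitrary examples):

```python
from pulpcore.exceptions import AdvisoryLockError
from pulpcore.tasking._util import PGAdvisoryLock

def run_exclusive_maintenance():
    try:
        with PGAdvisoryLock(42, lock_group=1):  # arbitrary example lock id / group
            ...  # only one process at a time reaches this block
    except AdvisoryLockError:
        # Another worker holds the lock; skip this round instead of blocking on it.
        return
```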
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pulpcore/tasking/_util.py b/pulpcore/tasking/_util.py
--- a/pulpcore/tasking/_util.py
+++ b/pulpcore/tasking/_util.py
@@ -15,7 +15,7 @@
from django.utils import timezone
from django_guid import set_guid
from django_guid.utils import generate_guid
-from pulpcore.app.models import Task, TaskSchedule
+from pulpcore.app.models import Artifact, Content, Task, TaskSchedule
from pulpcore.app.role_util import get_users_with_perms
from pulpcore.app.util import (
set_current_user,
@@ -73,6 +73,8 @@
if task.state != TASK_STATES.CANCELING:
raise RuntimeError(_("Task must be canceling."))
for model in (r.content_object for r in task.created_resources.all()):
+ if isinstance(model, (Artifact, Content)):
+ continue
try:
if model.complete:
continue
|
{"golden_diff": "diff --git a/pulpcore/tasking/_util.py b/pulpcore/tasking/_util.py\n--- a/pulpcore/tasking/_util.py\n+++ b/pulpcore/tasking/_util.py\n@@ -15,7 +15,7 @@\n from django.utils import timezone\n from django_guid import set_guid\n from django_guid.utils import generate_guid\n-from pulpcore.app.models import Task, TaskSchedule\n+from pulpcore.app.models import Artifact, Content, Task, TaskSchedule\n from pulpcore.app.role_util import get_users_with_perms\n from pulpcore.app.util import (\n set_current_user,\n@@ -73,6 +73,8 @@\n if task.state != TASK_STATES.CANCELING:\n raise RuntimeError(_(\"Task must be canceling.\"))\n for model in (r.content_object for r in task.created_resources.all()):\n+ if isinstance(model, (Artifact, Content)):\n+ continue\n try:\n if model.complete:\n continue\n", "issue": "Task cleanup must not delete content nor artifacts\nDeleting content or artifacts outside of orphan cleanup is breaking the rules.\r\nAnd no, we cannot get away with that.\r\n\n", "before_files": [{"content": "import asyncio\nimport importlib\nimport logging\nimport os\nimport resource\nimport signal\nimport sys\nimport threading\nimport time\nfrom gettext import gettext as _\n\nfrom django.conf import settings\nfrom django.db import connection, transaction\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom django_guid import set_guid\nfrom django_guid.utils import generate_guid\nfrom pulpcore.app.models import Task, TaskSchedule\nfrom pulpcore.app.role_util import get_users_with_perms\nfrom pulpcore.app.util import (\n set_current_user,\n set_domain,\n configure_analytics,\n configure_cleanup,\n)\nfrom pulpcore.constants import TASK_FINAL_STATES, TASK_STATES, VAR_TMP_PULP\nfrom pulpcore.exceptions import AdvisoryLockError\nfrom pulpcore.tasking.tasks import dispatch, execute_task\n\n_logger = logging.getLogger(__name__)\n\n\nclass PGAdvisoryLock:\n \"\"\"\n A context manager that will hold a postgres advisory lock non-blocking.\n\n The locks can be chosen from a lock group to avoid collisions. 
They will never collide with the\n locks used for tasks.\n \"\"\"\n\n def __init__(self, lock, lock_group=0):\n self.lock_group = lock_group\n self.lock = lock\n\n def __enter__(self):\n with connection.cursor() as cursor:\n cursor.execute(\"SELECT pg_try_advisory_lock(%s, %s)\", [self.lock_group, self.lock])\n acquired = cursor.fetchone()[0]\n if not acquired:\n raise AdvisoryLockError(\"Could not acquire lock.\")\n return self\n\n def __exit__(self, exc_type, exc_value, traceback):\n with connection.cursor() as cursor:\n cursor.execute(\"SELECT pg_advisory_unlock(%s, %s)\", [self.lock_group, self.lock])\n released = cursor.fetchone()[0]\n if not released:\n raise RuntimeError(\"Lock not held.\")\n\n\ndef startup_hook():\n configure_analytics()\n configure_cleanup()\n\n\ndef delete_incomplete_resources(task):\n \"\"\"\n Delete all incomplete created-resources on a canceled task.\n\n Args:\n task (Task): A task.\n \"\"\"\n if task.state != TASK_STATES.CANCELING:\n raise RuntimeError(_(\"Task must be canceling.\"))\n for model in (r.content_object for r in task.created_resources.all()):\n try:\n if model.complete:\n continue\n except AttributeError:\n continue\n try:\n with transaction.atomic():\n model.delete()\n except Exception as error:\n _logger.error(_(\"Delete created resource, failed: {}\").format(str(error)))\n\n\ndef write_memory_usage(path):\n _logger.info(\"Writing task memory data to {}\".format(path))\n\n with open(path, \"w\") as file:\n file.write(\"# Seconds\\tMemory in MB\\n\")\n seconds = 0\n while True:\n current_mb_in_use = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024\n file.write(f\"{seconds}\\t{current_mb_in_use:.2f}\\n\")\n file.flush()\n time.sleep(5)\n seconds += 5\n\n\ndef child_signal_handler(sig, frame):\n _logger.debug(\"Signal %s recieved by %s.\", sig, os.getpid())\n # Reset signal handlers to default\n # If you kill the process a second time it's not graceful anymore.\n signal.signal(signal.SIGINT, signal.SIG_DFL)\n signal.signal(signal.SIGTERM, signal.SIG_DFL)\n signal.signal(signal.SIGHUP, signal.SIG_DFL)\n signal.signal(signal.SIGUSR1, signal.SIG_DFL)\n\n if sig == signal.SIGUSR1:\n sys.exit()\n\n\ndef perform_task(task_pk, task_working_dir_rel_path):\n \"\"\"Setup the environment to handle a task and execute it.\n This must be called as a subprocess, while the parent holds the advisory lock of the task.\"\"\"\n signal.signal(signal.SIGINT, child_signal_handler)\n signal.signal(signal.SIGTERM, child_signal_handler)\n signal.signal(signal.SIGHUP, child_signal_handler)\n signal.signal(signal.SIGUSR1, child_signal_handler)\n if settings.TASK_DIAGNOSTICS:\n diagnostics_dir = VAR_TMP_PULP / str(task_pk)\n diagnostics_dir.mkdir(parents=True, exist_ok=True)\n mem_diagnostics_path = diagnostics_dir / \"memory.datum\"\n # It would be better to have this recording happen in the parent process instead of here\n # https://github.com/pulp/pulpcore/issues/2337\n mem_diagnostics_thread = threading.Thread(\n target=write_memory_usage, args=(mem_diagnostics_path,), daemon=True\n )\n mem_diagnostics_thread.start()\n # All processes need to create their own postgres connection\n connection.connection = None\n task = Task.objects.select_related(\"pulp_domain\").get(pk=task_pk)\n user = get_users_with_perms(task, with_group_users=False).first()\n # Isolate from the parent asyncio.\n asyncio.set_event_loop(asyncio.new_event_loop())\n # Set current contexts\n set_guid(task.logging_cid)\n set_current_user(user)\n set_domain(task.pulp_domain)\n 
os.chdir(task_working_dir_rel_path)\n\n # set up profiling\n if settings.TASK_DIAGNOSTICS and importlib.util.find_spec(\"pyinstrument\") is not None:\n from pyinstrument import Profiler\n\n with Profiler() as profiler:\n execute_task(task)\n\n profile_file = diagnostics_dir / \"pyinstrument.html\"\n _logger.info(\"Writing task profile data to {}\".format(profile_file))\n with open(profile_file, \"w+\") as f:\n f.write(profiler.output_html())\n else:\n execute_task(task)\n\n\ndef dispatch_scheduled_tasks():\n # Warning, dispatch_scheduled_tasks is not race condition free!\n now = timezone.now()\n # Dispatch all tasks old enough and not still running\n for task_schedule in TaskSchedule.objects.filter(next_dispatch__lte=now).filter(\n Q(last_task=None) | Q(last_task__state__in=TASK_FINAL_STATES)\n ):\n try:\n if task_schedule.dispatch_interval is None:\n # This was a timed one shot task schedule\n task_schedule.next_dispatch = None\n else:\n # This is a recurring task schedule\n while task_schedule.next_dispatch < now:\n # Do not schedule in the past\n task_schedule.next_dispatch += task_schedule.dispatch_interval\n set_guid(generate_guid())\n with transaction.atomic():\n task_schedule.last_task = dispatch(\n task_schedule.task_name,\n )\n task_schedule.save(update_fields=[\"next_dispatch\", \"last_task\"])\n\n _logger.info(\n \"Dispatched scheduled task {task_name} as task id {task_id}\".format(\n task_name=task_schedule.task_name, task_id=task_schedule.last_task.pk\n )\n )\n except Exception as e:\n _logger.warning(\n \"Dispatching scheduled task {task_name} failed. {error}\".format(\n task_name=task_schedule.task_name, error=str(e)\n )\n )\n", "path": "pulpcore/tasking/_util.py"}], "after_files": [{"content": "import asyncio\nimport importlib\nimport logging\nimport os\nimport resource\nimport signal\nimport sys\nimport threading\nimport time\nfrom gettext import gettext as _\n\nfrom django.conf import settings\nfrom django.db import connection, transaction\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom django_guid import set_guid\nfrom django_guid.utils import generate_guid\nfrom pulpcore.app.models import Artifact, Content, Task, TaskSchedule\nfrom pulpcore.app.role_util import get_users_with_perms\nfrom pulpcore.app.util import (\n set_current_user,\n set_domain,\n configure_analytics,\n configure_cleanup,\n)\nfrom pulpcore.constants import TASK_FINAL_STATES, TASK_STATES, VAR_TMP_PULP\nfrom pulpcore.exceptions import AdvisoryLockError\nfrom pulpcore.tasking.tasks import dispatch, execute_task\n\n_logger = logging.getLogger(__name__)\n\n\nclass PGAdvisoryLock:\n \"\"\"\n A context manager that will hold a postgres advisory lock non-blocking.\n\n The locks can be chosen from a lock group to avoid collisions. 
They will never collide with the\n locks used for tasks.\n \"\"\"\n\n def __init__(self, lock, lock_group=0):\n self.lock_group = lock_group\n self.lock = lock\n\n def __enter__(self):\n with connection.cursor() as cursor:\n cursor.execute(\"SELECT pg_try_advisory_lock(%s, %s)\", [self.lock_group, self.lock])\n acquired = cursor.fetchone()[0]\n if not acquired:\n raise AdvisoryLockError(\"Could not acquire lock.\")\n return self\n\n def __exit__(self, exc_type, exc_value, traceback):\n with connection.cursor() as cursor:\n cursor.execute(\"SELECT pg_advisory_unlock(%s, %s)\", [self.lock_group, self.lock])\n released = cursor.fetchone()[0]\n if not released:\n raise RuntimeError(\"Lock not held.\")\n\n\ndef startup_hook():\n configure_analytics()\n configure_cleanup()\n\n\ndef delete_incomplete_resources(task):\n \"\"\"\n Delete all incomplete created-resources on a canceled task.\n\n Args:\n task (Task): A task.\n \"\"\"\n if task.state != TASK_STATES.CANCELING:\n raise RuntimeError(_(\"Task must be canceling.\"))\n for model in (r.content_object for r in task.created_resources.all()):\n if isinstance(model, (Artifact, Content)):\n continue\n try:\n if model.complete:\n continue\n except AttributeError:\n continue\n try:\n with transaction.atomic():\n model.delete()\n except Exception as error:\n _logger.error(_(\"Delete created resource, failed: {}\").format(str(error)))\n\n\ndef write_memory_usage(path):\n _logger.info(\"Writing task memory data to {}\".format(path))\n\n with open(path, \"w\") as file:\n file.write(\"# Seconds\\tMemory in MB\\n\")\n seconds = 0\n while True:\n current_mb_in_use = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024\n file.write(f\"{seconds}\\t{current_mb_in_use:.2f}\\n\")\n file.flush()\n time.sleep(5)\n seconds += 5\n\n\ndef child_signal_handler(sig, frame):\n _logger.debug(\"Signal %s recieved by %s.\", sig, os.getpid())\n # Reset signal handlers to default\n # If you kill the process a second time it's not graceful anymore.\n signal.signal(signal.SIGINT, signal.SIG_DFL)\n signal.signal(signal.SIGTERM, signal.SIG_DFL)\n signal.signal(signal.SIGHUP, signal.SIG_DFL)\n signal.signal(signal.SIGUSR1, signal.SIG_DFL)\n\n if sig == signal.SIGUSR1:\n sys.exit()\n\n\ndef perform_task(task_pk, task_working_dir_rel_path):\n \"\"\"Setup the environment to handle a task and execute it.\n This must be called as a subprocess, while the parent holds the advisory lock of the task.\"\"\"\n signal.signal(signal.SIGINT, child_signal_handler)\n signal.signal(signal.SIGTERM, child_signal_handler)\n signal.signal(signal.SIGHUP, child_signal_handler)\n signal.signal(signal.SIGUSR1, child_signal_handler)\n if settings.TASK_DIAGNOSTICS:\n diagnostics_dir = VAR_TMP_PULP / str(task_pk)\n diagnostics_dir.mkdir(parents=True, exist_ok=True)\n mem_diagnostics_path = diagnostics_dir / \"memory.datum\"\n # It would be better to have this recording happen in the parent process instead of here\n # https://github.com/pulp/pulpcore/issues/2337\n mem_diagnostics_thread = threading.Thread(\n target=write_memory_usage, args=(mem_diagnostics_path,), daemon=True\n )\n mem_diagnostics_thread.start()\n # All processes need to create their own postgres connection\n connection.connection = None\n task = Task.objects.select_related(\"pulp_domain\").get(pk=task_pk)\n user = get_users_with_perms(task, with_group_users=False).first()\n # Isolate from the parent asyncio.\n asyncio.set_event_loop(asyncio.new_event_loop())\n # Set current contexts\n set_guid(task.logging_cid)\n 
set_current_user(user)\n set_domain(task.pulp_domain)\n os.chdir(task_working_dir_rel_path)\n\n # set up profiling\n if settings.TASK_DIAGNOSTICS and importlib.util.find_spec(\"pyinstrument\") is not None:\n from pyinstrument import Profiler\n\n with Profiler() as profiler:\n execute_task(task)\n\n profile_file = diagnostics_dir / \"pyinstrument.html\"\n _logger.info(\"Writing task profile data to {}\".format(profile_file))\n with open(profile_file, \"w+\") as f:\n f.write(profiler.output_html())\n else:\n execute_task(task)\n\n\ndef dispatch_scheduled_tasks():\n # Warning, dispatch_scheduled_tasks is not race condition free!\n now = timezone.now()\n # Dispatch all tasks old enough and not still running\n for task_schedule in TaskSchedule.objects.filter(next_dispatch__lte=now).filter(\n Q(last_task=None) | Q(last_task__state__in=TASK_FINAL_STATES)\n ):\n try:\n if task_schedule.dispatch_interval is None:\n # This was a timed one shot task schedule\n task_schedule.next_dispatch = None\n else:\n # This is a recurring task schedule\n while task_schedule.next_dispatch < now:\n # Do not schedule in the past\n task_schedule.next_dispatch += task_schedule.dispatch_interval\n set_guid(generate_guid())\n with transaction.atomic():\n task_schedule.last_task = dispatch(\n task_schedule.task_name,\n )\n task_schedule.save(update_fields=[\"next_dispatch\", \"last_task\"])\n\n _logger.info(\n \"Dispatched scheduled task {task_name} as task id {task_id}\".format(\n task_name=task_schedule.task_name, task_id=task_schedule.last_task.pk\n )\n )\n except Exception as e:\n _logger.warning(\n \"Dispatching scheduled task {task_name} failed. {error}\".format(\n task_name=task_schedule.task_name, error=str(e)\n )\n )\n", "path": "pulpcore/tasking/_util.py"}]}
| 2,253 | 204 |
gh_patches_debug_14722
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-2226
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
construct_yaml_map leading to TypeError: unhashable type: 'dict_node'
*cfn-lint version: 0.58.1*
*Unhandled `TypeError: unhashable type: 'dict_node'` thrown from construct_yaml_map.*
Please provide as much information as possible:
Utilising some slightly edited form of the example eks-nodegroup cfn, [see here for the failing cfn](https://github.com/SaltKhan/_cfn-breaking-cfn-lint/blob/main/eks-nodegroup.yaml) or [here, for the stacktrace](https://github.com/SaltKhan/_cfn-breaking-cfn-lint/blob/main/README.md). The problem occurs on line 259 `VPCZoneIdentifier: !Join [ ',', [ !ImportValue Fn::Sub: "${NetworkStackName}-SubnetsPrivate01", !ImportValue Fn::Sub: "${NetworkStackName}-SubnetsPrivate02" ] ]`. (Was trying to provide a list by string concat as that's what's listed as the example input when subnets is a parameter as in the example cfn, but I've changed that for the example I provide here.) I changed the line to an expanded `!Join` ~
```
VPCZoneIdentifier: !Join
- ','
- - Fn::ImportValue: !Sub "${NetworkStackName}-SubnetsPrivate01"
- Fn::ImportValue: !Sub "${NetworkStackName}-SubnetsPrivate02"
```
And it now correctly yields a normal error, rather than hitting the exception.
I found this similar previous issue ~ https://github.com/aws-cloudformation/cfn-lint/issues/348
--- END ISSUE ---
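The traceback boils down to a hashing problem rather than anything CloudFormation-specific: once the short-form `!ImportValue Fn::Sub: "..."` is parsed, the key handed to `construct_yaml_map` is itself a mapping (a `dict_node`), and mappings are unhashable. A minimal sketch of that failure mode, using a plain `dict` as a stand-in for `dict_node` (plain Python 3, cfn-lint not required):
```python
# Sketch only: a dict -- and therefore cfn-lint's dict_node subclass -- is
# unhashable, so using one as a mapping key raises the reported TypeError.
mapping = {}
key = {"Fn::Sub": "${NetworkStackName}-SubnetsPrivate01"}  # stand-in for the parsed !ImportValue key

try:
    mapping[key] = "some value"
except TypeError as exc:
    print(exc)  # unhashable type: 'dict'
```
This is also why the expanded `Fn::ImportValue:` form shown in the issue avoids the crash: with the long form, the mapping key is the plain string `Fn::ImportValue` again.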
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/decode/cfn_yaml.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import fileinput
6 import logging
7 import sys
8 import six
9 from yaml.composer import Composer
10 from yaml.reader import Reader
11 from yaml.scanner import Scanner
12 from yaml.resolver import Resolver
13 from yaml import ScalarNode
14 from yaml import SequenceNode
15 from yaml import MappingNode
16 from yaml.constructor import SafeConstructor
17 from yaml.constructor import ConstructorError
18 import cfnlint
19 from cfnlint.decode.node import str_node, dict_node, list_node, sub_node
20
21 try:
22 from yaml.cyaml import CParser as Parser # pylint: disable=ungrouped-imports
23 cyaml = True
24 except ImportError:
25 from yaml.parser import Parser # pylint: disable=ungrouped-imports
26 cyaml = False
27
28 UNCONVERTED_SUFFIXES = ['Ref', 'Condition']
29 FN_PREFIX = 'Fn::'
30
31 LOGGER = logging.getLogger(__name__)
32
33
34 class CfnParseError(ConstructorError):
35 """
36 Error thrown when the template contains Cfn Error
37 """
38
39 def __init__(self, filename, errors):
40
41 if isinstance(errors, cfnlint.rules.Match):
42 errors = [errors]
43
44 # Call the base class constructor with the parameters it needs
45 super(CfnParseError, self).__init__(errors[0].message)
46
47 # Now for your custom code...
48 self.filename = filename
49 self.matches = errors
50
51 def build_match(filename, message, line_number, column_number, key):
52 return cfnlint.rules.Match(
53 line_number + 1, column_number + 1, line_number + 1,
54 column_number + 1 + len(key), filename, cfnlint.rules.ParseError(), message=message)
55
56 class NodeConstructor(SafeConstructor):
57 """
58 Node Constructors for loading different types in Yaml
59 """
60
61 def __init__(self, filename):
62 # Call the base class constructor
63 super(NodeConstructor, self).__init__()
64
65 self.filename = filename
66
67 # To support lazy loading, the original constructors first yield
68 # an empty object, then fill them in when iterated. Due to
69 # laziness we omit this behaviour (and will only do "deep
70 # construction") by first exhausting iterators, then yielding
71 # copies.
72 def construct_yaml_map(self, node):
73
74 # Check for duplicate keys on the current level, this is not desirable
75 # because a dict does not support this. It overwrites it with the last
76 # occurance, which can give unexpected results
77 mapping = {}
78 self.flatten_mapping(node)
79 for key_node, value_node in node.value:
80 key = self.construct_object(key_node, False)
81 value = self.construct_object(value_node, False)
82
83 for key_dup in mapping:
84 if key_dup == key:
85 raise CfnParseError(
86 self.filename,
87 [
88 build_match(
89 filename=self.filename,
90 message='Duplicate resource found "{}" (line {})'.format(
91 key, key_dup.start_mark.line + 1),
92 line_number=key_dup.start_mark.line,
93 column_number=key_dup.start_mark.column,
94 key=key
95 ),
96 build_match(
97 filename=self.filename,
98 message='Duplicate resource found "{}" (line {})'.format(
99 key, key_node.start_mark.line + 1),
100 line_number=key_node.start_mark.line,
101 column_number=key_node.start_mark.column,
102 key=key
103 ),
104 ]
105 )
106 mapping[key] = value
107
108 obj, = SafeConstructor.construct_yaml_map(self, node)
109
110 if len(mapping) == 1:
111 if 'Fn::Sub' in mapping:
112 return sub_node(obj, node.start_mark, node.end_mark)
113
114 return dict_node(obj, node.start_mark, node.end_mark)
115
116 def construct_yaml_str(self, node):
117 obj = SafeConstructor.construct_yaml_str(self, node)
118 assert isinstance(obj, (six.string_types))
119 return str_node(obj, node.start_mark, node.end_mark)
120
121 def construct_yaml_seq(self, node):
122 obj, = SafeConstructor.construct_yaml_seq(self, node)
123 assert isinstance(obj, list)
124 return list_node(obj, node.start_mark, node.end_mark)
125
126 def construct_yaml_null_error(self, node):
127 """Throw a null error"""
128 raise CfnParseError(
129 self.filename,
130 [
131 build_match(
132 filename=self.filename,
133 message='Null value at line {0} column {1}'.format(
134 node.start_mark.line + 1, node.start_mark.column + 1),
135 line_number=node.start_mark.line,
136 column_number=node.start_mark.column,
137 key=' ',
138 )
139 ]
140 )
141
142
143 NodeConstructor.add_constructor(
144 u'tag:yaml.org,2002:map',
145 NodeConstructor.construct_yaml_map)
146
147 NodeConstructor.add_constructor(
148 u'tag:yaml.org,2002:str',
149 NodeConstructor.construct_yaml_str)
150
151 NodeConstructor.add_constructor(
152 u'tag:yaml.org,2002:seq',
153 NodeConstructor.construct_yaml_seq)
154
155 NodeConstructor.add_constructor(
156 u'tag:yaml.org,2002:null',
157 NodeConstructor.construct_yaml_null_error)
158
159
160 class MarkedLoader(Reader, Scanner, Parser, Composer, NodeConstructor, Resolver):
161 """
162 Class for marked loading YAML
163 """
164 # pylint: disable=non-parent-init-called,super-init-not-called
165
166 def __init__(self, stream, filename):
167 Reader.__init__(self, stream)
168 Scanner.__init__(self)
169 if cyaml:
170 Parser.__init__(self, stream)
171 else:
172 Parser.__init__(self)
173 Composer.__init__(self)
174 SafeConstructor.__init__(self)
175 Resolver.__init__(self)
176 NodeConstructor.__init__(self, filename)
177
178
179 def multi_constructor(loader, tag_suffix, node):
180 """
181 Deal with !Ref style function format
182 """
183
184 if tag_suffix not in UNCONVERTED_SUFFIXES:
185 tag_suffix = '{}{}'.format(FN_PREFIX, tag_suffix)
186
187 constructor = None
188 if tag_suffix == 'Fn::GetAtt':
189 constructor = construct_getatt
190 elif isinstance(node, ScalarNode):
191 constructor = loader.construct_scalar
192 elif isinstance(node, SequenceNode):
193 constructor = loader.construct_sequence
194 elif isinstance(node, MappingNode):
195 constructor = loader.construct_mapping
196 else:
197 raise 'Bad tag: !{}'.format(tag_suffix)
198
199 if tag_suffix == 'Fn::Sub':
200 return sub_node({tag_suffix: constructor(node)}, node.start_mark, node.end_mark)
201
202 return dict_node({tag_suffix: constructor(node)}, node.start_mark, node.end_mark)
203
204
205 def construct_getatt(node):
206 """
207 Reconstruct !GetAtt into a list
208 """
209
210 if isinstance(node.value, (six.string_types)):
211 return list_node(node.value.split('.', 1), node.start_mark, node.end_mark)
212 if isinstance(node.value, list):
213 return list_node([s.value for s in node.value], node.start_mark, node.end_mark)
214
215 raise ValueError('Unexpected node type: {}'.format(type(node.value)))
216
217
218 def loads(yaml_string, fname=None):
219 """
220 Load the given YAML string
221 """
222 loader = MarkedLoader(yaml_string, fname)
223 loader.add_multi_constructor('!', multi_constructor)
224 template = loader.get_single_data()
225 # Convert an empty file to an empty dict
226 if template is None:
227 template = {}
228
229 return template
230
231
232 def load(filename):
233 """
234 Load the given YAML file
235 """
236
237 content = ''
238
239 if not sys.stdin.isatty():
240 for line in fileinput.input(files=filename):
241 content = content + line
242 else:
243 with open(filename) as fp:
244 content = fp.read()
245
246 return loads(content, filename)
247
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/cfnlint/decode/cfn_yaml.py b/src/cfnlint/decode/cfn_yaml.py
--- a/src/cfnlint/decode/cfn_yaml.py
+++ b/src/cfnlint/decode/cfn_yaml.py
@@ -103,7 +103,21 @@
),
]
)
- mapping[key] = value
+ try:
+ mapping[key] = value
+ except:
+ raise CfnParseError(
+ self.filename,
+ [
+ build_match(
+ filename=self.filename,
+ message=f'Unhashable type "{key}" (line {key.start_mark.line + 1})',
+ line_number=key.start_mark.line,
+ column_number=key.start_mark.column,
+ key=key
+ ),
+ ]
+ )
obj, = SafeConstructor.construct_yaml_map(self, node)
|
{"golden_diff": "diff --git a/src/cfnlint/decode/cfn_yaml.py b/src/cfnlint/decode/cfn_yaml.py\n--- a/src/cfnlint/decode/cfn_yaml.py\n+++ b/src/cfnlint/decode/cfn_yaml.py\n@@ -103,7 +103,21 @@\n ),\n ]\n )\n- mapping[key] = value\n+ try:\n+ mapping[key] = value\n+ except:\n+ raise CfnParseError(\n+ self.filename,\n+ [\n+ build_match(\n+ filename=self.filename,\n+ message=f'Unhashable type \"{key}\" (line {key.start_mark.line + 1})',\n+ line_number=key.start_mark.line,\n+ column_number=key.start_mark.column,\n+ key=key\n+ ),\n+ ]\n+ )\n \n obj, = SafeConstructor.construct_yaml_map(self, node)\n", "issue": "construct_yaml_map leading to TypeError: unhashable type: 'dict_node'\n*cfn-lint version: 0.58.1*\r\n\r\n*Unhandled `TypeError: unhashable type: 'dict_node'` thrown from construct_yaml_map.*\r\n\r\nPlease provide as much information as possible:\r\nUtilising some slightly edited form of the example eks-nodegroup cfn, [see here for the failing cfn](https://github.com/SaltKhan/_cfn-breaking-cfn-lint/blob/main/eks-nodegroup.yaml) or [here, for the stacktrace](https://github.com/SaltKhan/_cfn-breaking-cfn-lint/blob/main/README.md). The problem occurrs on line 259 `VPCZoneIdentifier: !Join [ ',', [ !ImportValue Fn::Sub: \"${NetworkStackName}-SubnetsPrivate01\", !ImportValue Fn::Sub: \"${NetworkStackName}-SubnetsPrivate02\" ] ]`. (Was trying to provide a list by string concat as that's what's listed as the example input when subnets is a parameter as in the example cfn, but I've changed that for the example I provide here.) I changed the line to an expanded `!Join` ~\r\n```\r\n VPCZoneIdentifier: !Join\r\n - ','\r\n - - Fn::ImportValue: !Sub \"${NetworkStackName}-SubnetsPrivate01\"\r\n - Fn::ImportValue: !Sub \"${NetworkStackName}-SubnetsPrivate02\"\r\n```\r\nAnd it now correctly yields a normal error, rather than hitting the exception.\r\nI found this similar previous issue ~ https://github.com/aws-cloudformation/cfn-lint/issues/348\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport fileinput\nimport logging\nimport sys\nimport six\nfrom yaml.composer import Composer\nfrom yaml.reader import Reader\nfrom yaml.scanner import Scanner\nfrom yaml.resolver import Resolver\nfrom yaml import ScalarNode\nfrom yaml import SequenceNode\nfrom yaml import MappingNode\nfrom yaml.constructor import SafeConstructor\nfrom yaml.constructor import ConstructorError\nimport cfnlint\nfrom cfnlint.decode.node import str_node, dict_node, list_node, sub_node\n\ntry:\n from yaml.cyaml import CParser as Parser # pylint: disable=ungrouped-imports\n cyaml = True\nexcept ImportError:\n from yaml.parser import Parser # pylint: disable=ungrouped-imports\n cyaml = False\n\nUNCONVERTED_SUFFIXES = ['Ref', 'Condition']\nFN_PREFIX = 'Fn::'\n\nLOGGER = logging.getLogger(__name__)\n\n\nclass CfnParseError(ConstructorError):\n \"\"\"\n Error thrown when the template contains Cfn Error\n \"\"\"\n\n def __init__(self, filename, errors):\n\n if isinstance(errors, cfnlint.rules.Match):\n errors = [errors]\n\n # Call the base class constructor with the parameters it needs\n super(CfnParseError, self).__init__(errors[0].message)\n\n # Now for your custom code...\n self.filename = filename\n self.matches = errors\n\ndef build_match(filename, message, line_number, column_number, key):\n return cfnlint.rules.Match(\n line_number + 1, column_number + 1, line_number + 1,\n column_number + 1 + len(key), filename, cfnlint.rules.ParseError(), message=message)\n\nclass NodeConstructor(SafeConstructor):\n \"\"\"\n Node Constructors for loading different types in Yaml\n \"\"\"\n\n def __init__(self, filename):\n # Call the base class constructor\n super(NodeConstructor, self).__init__()\n\n self.filename = filename\n\n # To support lazy loading, the original constructors first yield\n # an empty object, then fill them in when iterated. Due to\n # laziness we omit this behaviour (and will only do \"deep\n # construction\") by first exhausting iterators, then yielding\n # copies.\n def construct_yaml_map(self, node):\n\n # Check for duplicate keys on the current level, this is not desirable\n # because a dict does not support this. 
It overwrites it with the last\n # occurance, which can give unexpected results\n mapping = {}\n self.flatten_mapping(node)\n for key_node, value_node in node.value:\n key = self.construct_object(key_node, False)\n value = self.construct_object(value_node, False)\n\n for key_dup in mapping:\n if key_dup == key:\n raise CfnParseError(\n self.filename,\n [\n build_match(\n filename=self.filename,\n message='Duplicate resource found \"{}\" (line {})'.format(\n key, key_dup.start_mark.line + 1),\n line_number=key_dup.start_mark.line,\n column_number=key_dup.start_mark.column,\n key=key\n ),\n build_match(\n filename=self.filename,\n message='Duplicate resource found \"{}\" (line {})'.format(\n key, key_node.start_mark.line + 1),\n line_number=key_node.start_mark.line,\n column_number=key_node.start_mark.column,\n key=key\n ),\n ]\n )\n mapping[key] = value\n\n obj, = SafeConstructor.construct_yaml_map(self, node)\n\n if len(mapping) == 1:\n if 'Fn::Sub' in mapping:\n return sub_node(obj, node.start_mark, node.end_mark)\n\n return dict_node(obj, node.start_mark, node.end_mark)\n\n def construct_yaml_str(self, node):\n obj = SafeConstructor.construct_yaml_str(self, node)\n assert isinstance(obj, (six.string_types))\n return str_node(obj, node.start_mark, node.end_mark)\n\n def construct_yaml_seq(self, node):\n obj, = SafeConstructor.construct_yaml_seq(self, node)\n assert isinstance(obj, list)\n return list_node(obj, node.start_mark, node.end_mark)\n\n def construct_yaml_null_error(self, node):\n \"\"\"Throw a null error\"\"\"\n raise CfnParseError(\n self.filename,\n [\n build_match(\n filename=self.filename,\n message='Null value at line {0} column {1}'.format(\n node.start_mark.line + 1, node.start_mark.column + 1),\n line_number=node.start_mark.line,\n column_number=node.start_mark.column,\n key=' ',\n )\n ]\n )\n\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:map',\n NodeConstructor.construct_yaml_map)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:str',\n NodeConstructor.construct_yaml_str)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:seq',\n NodeConstructor.construct_yaml_seq)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:null',\n NodeConstructor.construct_yaml_null_error)\n\n\nclass MarkedLoader(Reader, Scanner, Parser, Composer, NodeConstructor, Resolver):\n \"\"\"\n Class for marked loading YAML\n \"\"\"\n # pylint: disable=non-parent-init-called,super-init-not-called\n\n def __init__(self, stream, filename):\n Reader.__init__(self, stream)\n Scanner.__init__(self)\n if cyaml:\n Parser.__init__(self, stream)\n else:\n Parser.__init__(self)\n Composer.__init__(self)\n SafeConstructor.__init__(self)\n Resolver.__init__(self)\n NodeConstructor.__init__(self, filename)\n\n\ndef multi_constructor(loader, tag_suffix, node):\n \"\"\"\n Deal with !Ref style function format\n \"\"\"\n\n if tag_suffix not in UNCONVERTED_SUFFIXES:\n tag_suffix = '{}{}'.format(FN_PREFIX, tag_suffix)\n\n constructor = None\n if tag_suffix == 'Fn::GetAtt':\n constructor = construct_getatt\n elif isinstance(node, ScalarNode):\n constructor = loader.construct_scalar\n elif isinstance(node, SequenceNode):\n constructor = loader.construct_sequence\n elif isinstance(node, MappingNode):\n constructor = loader.construct_mapping\n else:\n raise 'Bad tag: !{}'.format(tag_suffix)\n\n if tag_suffix == 'Fn::Sub':\n return sub_node({tag_suffix: constructor(node)}, node.start_mark, node.end_mark)\n\n return dict_node({tag_suffix: constructor(node)}, node.start_mark, 
node.end_mark)\n\n\ndef construct_getatt(node):\n \"\"\"\n Reconstruct !GetAtt into a list\n \"\"\"\n\n if isinstance(node.value, (six.string_types)):\n return list_node(node.value.split('.', 1), node.start_mark, node.end_mark)\n if isinstance(node.value, list):\n return list_node([s.value for s in node.value], node.start_mark, node.end_mark)\n\n raise ValueError('Unexpected node type: {}'.format(type(node.value)))\n\n\ndef loads(yaml_string, fname=None):\n \"\"\"\n Load the given YAML string\n \"\"\"\n loader = MarkedLoader(yaml_string, fname)\n loader.add_multi_constructor('!', multi_constructor)\n template = loader.get_single_data()\n # Convert an empty file to an empty dict\n if template is None:\n template = {}\n\n return template\n\n\ndef load(filename):\n \"\"\"\n Load the given YAML file\n \"\"\"\n\n content = ''\n\n if not sys.stdin.isatty():\n for line in fileinput.input(files=filename):\n content = content + line\n else:\n with open(filename) as fp:\n content = fp.read()\n\n return loads(content, filename)\n", "path": "src/cfnlint/decode/cfn_yaml.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport fileinput\nimport logging\nimport sys\nimport six\nfrom yaml.composer import Composer\nfrom yaml.reader import Reader\nfrom yaml.scanner import Scanner\nfrom yaml.resolver import Resolver\nfrom yaml import ScalarNode\nfrom yaml import SequenceNode\nfrom yaml import MappingNode\nfrom yaml.constructor import SafeConstructor\nfrom yaml.constructor import ConstructorError\nimport cfnlint\nfrom cfnlint.decode.node import str_node, dict_node, list_node, sub_node\n\ntry:\n from yaml.cyaml import CParser as Parser # pylint: disable=ungrouped-imports\n cyaml = True\nexcept ImportError:\n from yaml.parser import Parser # pylint: disable=ungrouped-imports\n cyaml = False\n\nUNCONVERTED_SUFFIXES = ['Ref', 'Condition']\nFN_PREFIX = 'Fn::'\n\nLOGGER = logging.getLogger(__name__)\n\n\nclass CfnParseError(ConstructorError):\n \"\"\"\n Error thrown when the template contains Cfn Error\n \"\"\"\n\n def __init__(self, filename, errors):\n\n if isinstance(errors, cfnlint.rules.Match):\n errors = [errors]\n\n # Call the base class constructor with the parameters it needs\n super(CfnParseError, self).__init__(errors[0].message)\n\n # Now for your custom code...\n self.filename = filename\n self.matches = errors\n\ndef build_match(filename, message, line_number, column_number, key):\n return cfnlint.rules.Match(\n line_number + 1, column_number + 1, line_number + 1,\n column_number + 1 + len(key), filename, cfnlint.rules.ParseError(), message=message)\n\nclass NodeConstructor(SafeConstructor):\n \"\"\"\n Node Constructors for loading different types in Yaml\n \"\"\"\n\n def __init__(self, filename):\n # Call the base class constructor\n super(NodeConstructor, self).__init__()\n\n self.filename = filename\n\n # To support lazy loading, the original constructors first yield\n # an empty object, then fill them in when iterated. Due to\n # laziness we omit this behaviour (and will only do \"deep\n # construction\") by first exhausting iterators, then yielding\n # copies.\n def construct_yaml_map(self, node):\n\n # Check for duplicate keys on the current level, this is not desirable\n # because a dict does not support this. 
It overwrites it with the last\n # occurance, which can give unexpected results\n mapping = {}\n self.flatten_mapping(node)\n for key_node, value_node in node.value:\n key = self.construct_object(key_node, False)\n value = self.construct_object(value_node, False)\n\n for key_dup in mapping:\n if key_dup == key:\n raise CfnParseError(\n self.filename,\n [\n build_match(\n filename=self.filename,\n message='Duplicate resource found \"{}\" (line {})'.format(\n key, key_dup.start_mark.line + 1),\n line_number=key_dup.start_mark.line,\n column_number=key_dup.start_mark.column,\n key=key\n ),\n build_match(\n filename=self.filename,\n message='Duplicate resource found \"{}\" (line {})'.format(\n key, key_node.start_mark.line + 1),\n line_number=key_node.start_mark.line,\n column_number=key_node.start_mark.column,\n key=key\n ),\n ]\n )\n try:\n mapping[key] = value\n except:\n raise CfnParseError(\n self.filename,\n [\n build_match(\n filename=self.filename,\n message=f'Unhashable type \"{key}\" (line {key.start_mark.line + 1})',\n line_number=key.start_mark.line,\n column_number=key.start_mark.column,\n key=key\n ),\n ]\n )\n\n obj, = SafeConstructor.construct_yaml_map(self, node)\n\n if len(mapping) == 1:\n if 'Fn::Sub' in mapping:\n return sub_node(obj, node.start_mark, node.end_mark)\n\n return dict_node(obj, node.start_mark, node.end_mark)\n\n def construct_yaml_str(self, node):\n obj = SafeConstructor.construct_yaml_str(self, node)\n assert isinstance(obj, (six.string_types))\n return str_node(obj, node.start_mark, node.end_mark)\n\n def construct_yaml_seq(self, node):\n obj, = SafeConstructor.construct_yaml_seq(self, node)\n assert isinstance(obj, list)\n return list_node(obj, node.start_mark, node.end_mark)\n\n def construct_yaml_null_error(self, node):\n \"\"\"Throw a null error\"\"\"\n raise CfnParseError(\n self.filename,\n [\n build_match(\n filename=self.filename,\n message='Null value at line {0} column {1}'.format(\n node.start_mark.line + 1, node.start_mark.column + 1),\n line_number=node.start_mark.line,\n column_number=node.start_mark.column,\n key=' ',\n )\n ]\n )\n\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:map',\n NodeConstructor.construct_yaml_map)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:str',\n NodeConstructor.construct_yaml_str)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:seq',\n NodeConstructor.construct_yaml_seq)\n\nNodeConstructor.add_constructor(\n u'tag:yaml.org,2002:null',\n NodeConstructor.construct_yaml_null_error)\n\n\nclass MarkedLoader(Reader, Scanner, Parser, Composer, NodeConstructor, Resolver):\n \"\"\"\n Class for marked loading YAML\n \"\"\"\n # pylint: disable=non-parent-init-called,super-init-not-called\n\n def __init__(self, stream, filename):\n Reader.__init__(self, stream)\n Scanner.__init__(self)\n if cyaml:\n Parser.__init__(self, stream)\n else:\n Parser.__init__(self)\n Composer.__init__(self)\n SafeConstructor.__init__(self)\n Resolver.__init__(self)\n NodeConstructor.__init__(self, filename)\n\n\ndef multi_constructor(loader, tag_suffix, node):\n \"\"\"\n Deal with !Ref style function format\n \"\"\"\n\n if tag_suffix not in UNCONVERTED_SUFFIXES:\n tag_suffix = '{}{}'.format(FN_PREFIX, tag_suffix)\n\n constructor = None\n if tag_suffix == 'Fn::GetAtt':\n constructor = construct_getatt\n elif isinstance(node, ScalarNode):\n constructor = loader.construct_scalar\n elif isinstance(node, SequenceNode):\n constructor = loader.construct_sequence\n elif isinstance(node, MappingNode):\n constructor = 
loader.construct_mapping\n else:\n raise 'Bad tag: !{}'.format(tag_suffix)\n\n if tag_suffix == 'Fn::Sub':\n return sub_node({tag_suffix: constructor(node)}, node.start_mark, node.end_mark)\n\n return dict_node({tag_suffix: constructor(node)}, node.start_mark, node.end_mark)\n\n\ndef construct_getatt(node):\n \"\"\"\n Reconstruct !GetAtt into a list\n \"\"\"\n\n if isinstance(node.value, (six.string_types)):\n return list_node(node.value.split('.', 1), node.start_mark, node.end_mark)\n if isinstance(node.value, list):\n return list_node([s.value for s in node.value], node.start_mark, node.end_mark)\n\n raise ValueError('Unexpected node type: {}'.format(type(node.value)))\n\n\ndef loads(yaml_string, fname=None):\n \"\"\"\n Load the given YAML string\n \"\"\"\n loader = MarkedLoader(yaml_string, fname)\n loader.add_multi_constructor('!', multi_constructor)\n template = loader.get_single_data()\n # Convert an empty file to an empty dict\n if template is None:\n template = {}\n\n return template\n\n\ndef load(filename):\n \"\"\"\n Load the given YAML file\n \"\"\"\n\n content = ''\n\n if not sys.stdin.isatty():\n for line in fileinput.input(files=filename):\n content = content + line\n else:\n with open(filename) as fp:\n content = fp.read()\n\n return loads(content, filename)\n", "path": "src/cfnlint/decode/cfn_yaml.py"}]}
| 2,937 | 194 |
gh_patches_debug_49167
|
rasdani/github-patches
|
git_diff
|
scoutapp__scout_apm_python-672
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support ElasticSearch 7.14
The python package `elasticsearch-py` introduced the `terms_enum` parameter from ElasticSearch 7.14. This is currently not being instrumented and breaking tests.
--- END ISSUE ---
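For reference, this is roughly the client call the missing instrumentation concerns — a hedged sketch, assuming elasticsearch-py 7.14 and a reachable 7.14+ node; the host, index, and field names are placeholders, not taken from the issue:
```python
# Hedged sketch: the elasticsearch-py 7.14 client method that is not yet wrapped.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
resp = es.terms_enum(index="products", body={"field": "tags", "string": "ki"})
print(resp.get("terms", []))
```
Because the call accepts an `index` argument, registering it with `takes_index_argument=True` (as the other index-taking methods below are) would name the span `Elasticsearch/{Index}/TermsEnum`.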
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/instruments/elasticsearch.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import logging
5 from collections import namedtuple
6
7 import wrapt
8
9 from scout_apm.compat import get_pos_args, unwrap_decorators
10 from scout_apm.core.tracked_request import TrackedRequest
11
12 try:
13 from elasticsearch import Elasticsearch, Transport
14 except ImportError: # pragma: no cover
15 Elasticsearch = None
16 Transport = None
17
18 logger = logging.getLogger(__name__)
19
20
21 def ensure_installed():
22 logger.debug("Instrumenting elasticsearch.")
23
24 if Elasticsearch is None:
25 logger.debug(
26 "Couldn't import elasticsearch.Elasticsearch - probably not installed."
27 )
28 else:
29 ensure_client_instrumented()
30 ensure_transport_instrumented()
31
32
33 ClientMethod = namedtuple("ClientMethod", ["name", "takes_index_argument"])
34
35 CLIENT_METHODS = [
36 ClientMethod("bulk", True),
37 ClientMethod("clear_scroll", False),
38 ClientMethod("close", False),
39 ClientMethod("close_point_in_time", False),
40 ClientMethod("count", True),
41 ClientMethod("create", True),
42 ClientMethod("delete", True),
43 ClientMethod("delete_by_query", True),
44 ClientMethod("delete_by_query_rethrottle", False),
45 ClientMethod("delete_script", False),
46 ClientMethod("exists", True),
47 ClientMethod("exists_source", True),
48 ClientMethod("explain", True),
49 ClientMethod("field_caps", True),
50 ClientMethod("get", True),
51 ClientMethod("get_script", False),
52 ClientMethod("get_script_context", False),
53 ClientMethod("get_script_languages", False),
54 ClientMethod("get_source", True),
55 ClientMethod("index", True),
56 ClientMethod("info", False),
57 ClientMethod("mget", True),
58 ClientMethod("msearch", True),
59 ClientMethod("msearch_template", True),
60 ClientMethod("mtermvectors", True),
61 ClientMethod("open_point_in_time", True),
62 ClientMethod("ping", False),
63 ClientMethod("put_script", False),
64 ClientMethod("rank_eval", True),
65 ClientMethod("reindex", False),
66 ClientMethod("reindex_rethrottle", False),
67 ClientMethod("render_search_template", False),
68 ClientMethod("scripts_painless_execute", False),
69 ClientMethod("scroll", False),
70 ClientMethod("search", True),
71 ClientMethod("search_shards", True),
72 ClientMethod("search_template", True),
73 ClientMethod("termvectors", True),
74 ClientMethod("update", True),
75 ClientMethod("update_by_query", True),
76 ClientMethod("update_by_query_rethrottle", False),
77 ]
78
79
80 have_patched_client = False
81
82
83 def ensure_client_instrumented():
84 global have_patched_client
85
86 if not have_patched_client:
87 for name, takes_index_argument in CLIENT_METHODS:
88 try:
89 method = getattr(Elasticsearch, name)
90 if takes_index_argument:
91 wrapped = wrap_client_index_method(method)
92 else:
93 wrapped = wrap_client_method(method)
94 setattr(Elasticsearch, name, wrapped)
95 except Exception as exc:
96 logger.warning(
97 "Failed to instrument elasticsearch.Elasticsearch.%s: %r",
98 name,
99 exc,
100 exc_info=exc,
101 )
102
103 have_patched_client = True
104
105
106 @wrapt.decorator
107 def wrap_client_index_method(wrapped, instance, args, kwargs):
108 # elasticsearch-py 7.5.1 changed the order of arguments for client methods,
109 # so to be safe we need to inspect the wrapped method's positional
110 # arguments to see if we should pull it from there
111 if "index" in kwargs:
112 index = kwargs["index"]
113 else:
114 unwrapped = unwrap_decorators(wrapped)
115 pos_args = get_pos_args(unwrapped)
116 try:
117 index_index = pos_args.index("index")
118 except ValueError: # pragma: no cover
119 # This guards against the method not accepting an 'index' argument
120 # but they all do - for now
121 index = ""
122 else:
123 try:
124 index = args[index_index - 1] # subtract 'self'
125 except IndexError:
126 index = ""
127
128 if isinstance(index, (list, tuple)):
129 index = ",".join(index)
130 if index == "":
131 index = "Unknown"
132 index = index.title()
133
134 camel_name = "".join(c.title() for c in wrapped.__name__.split("_"))
135 operation = "Elasticsearch/{}/{}".format(index, camel_name)
136 tracked_request = TrackedRequest.instance()
137 with tracked_request.span(operation=operation, ignore_children=True):
138 return wrapped(*args, **kwargs)
139
140
141 @wrapt.decorator
142 def wrap_client_method(wrapped, instance, args, kwargs):
143 camel_name = "".join(c.title() for c in wrapped.__name__.split("_"))
144 operation = "Elasticsearch/{}".format(camel_name)
145 tracked_request = TrackedRequest.instance()
146 with tracked_request.span(operation=operation, ignore_children=True):
147 return wrapped(*args, **kwargs)
148
149
150 have_patched_transport = False
151
152
153 def ensure_transport_instrumented():
154 global have_patched_transport
155
156 if not have_patched_transport:
157 try:
158 Transport.perform_request = wrapped_perform_request(
159 Transport.perform_request
160 )
161 except Exception as exc:
162 logger.warning(
163 "Failed to instrument elasticsearch.Transport.perform_request: %r",
164 exc,
165 exc_info=exc,
166 )
167
168 have_patched_transport = True
169
170
171 def _sanitize_name(name):
172 try:
173 op = name.split("/")[-1]
174 op = op[1:] # chop leading '_' from op
175 known_names = (
176 "bench",
177 "bulk",
178 "count",
179 "exists",
180 "explain",
181 "field_stats",
182 "health",
183 "mget",
184 "mlt",
185 "mpercolate",
186 "msearch",
187 "mtermvectors",
188 "percolate",
189 "query",
190 "scroll",
191 "search_shards",
192 "source",
193 "suggest",
194 "template",
195 "termvectors",
196 "update",
197 "search",
198 )
199 if op in known_names:
200 return op.title()
201 return "Unknown"
202 except Exception:
203 return "Unknown"
204
205
206 @wrapt.decorator
207 def wrapped_perform_request(wrapped, instance, args, kwargs):
208 try:
209 op = _sanitize_name(args[1])
210 except IndexError:
211 op = "Unknown"
212
213 tracked_request = TrackedRequest.instance()
214 with tracked_request.span(
215 operation="Elasticsearch/{}".format(op),
216 ignore_children=True,
217 ):
218 return wrapped(*args, **kwargs)
219
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/scout_apm/instruments/elasticsearch.py b/src/scout_apm/instruments/elasticsearch.py
--- a/src/scout_apm/instruments/elasticsearch.py
+++ b/src/scout_apm/instruments/elasticsearch.py
@@ -71,6 +71,7 @@
ClientMethod("search_shards", True),
ClientMethod("search_template", True),
ClientMethod("termvectors", True),
+ ClientMethod("terms_enum", True),
ClientMethod("update", True),
ClientMethod("update_by_query", True),
ClientMethod("update_by_query_rethrottle", False),
|
{"golden_diff": "diff --git a/src/scout_apm/instruments/elasticsearch.py b/src/scout_apm/instruments/elasticsearch.py\n--- a/src/scout_apm/instruments/elasticsearch.py\n+++ b/src/scout_apm/instruments/elasticsearch.py\n@@ -71,6 +71,7 @@\n ClientMethod(\"search_shards\", True),\n ClientMethod(\"search_template\", True),\n ClientMethod(\"termvectors\", True),\n+ ClientMethod(\"terms_enum\", True),\n ClientMethod(\"update\", True),\n ClientMethod(\"update_by_query\", True),\n ClientMethod(\"update_by_query_rethrottle\", False),\n", "issue": "Support ElasticSearch 7.14\nThe python package `elasticsearch-py` introduced the `terms_enum` parameter from ElasticSearch 7.14. This is currently not being instrumented and breaking tests.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nfrom collections import namedtuple\n\nimport wrapt\n\nfrom scout_apm.compat import get_pos_args, unwrap_decorators\nfrom scout_apm.core.tracked_request import TrackedRequest\n\ntry:\n from elasticsearch import Elasticsearch, Transport\nexcept ImportError: # pragma: no cover\n Elasticsearch = None\n Transport = None\n\nlogger = logging.getLogger(__name__)\n\n\ndef ensure_installed():\n logger.debug(\"Instrumenting elasticsearch.\")\n\n if Elasticsearch is None:\n logger.debug(\n \"Couldn't import elasticsearch.Elasticsearch - probably not installed.\"\n )\n else:\n ensure_client_instrumented()\n ensure_transport_instrumented()\n\n\nClientMethod = namedtuple(\"ClientMethod\", [\"name\", \"takes_index_argument\"])\n\nCLIENT_METHODS = [\n ClientMethod(\"bulk\", True),\n ClientMethod(\"clear_scroll\", False),\n ClientMethod(\"close\", False),\n ClientMethod(\"close_point_in_time\", False),\n ClientMethod(\"count\", True),\n ClientMethod(\"create\", True),\n ClientMethod(\"delete\", True),\n ClientMethod(\"delete_by_query\", True),\n ClientMethod(\"delete_by_query_rethrottle\", False),\n ClientMethod(\"delete_script\", False),\n ClientMethod(\"exists\", True),\n ClientMethod(\"exists_source\", True),\n ClientMethod(\"explain\", True),\n ClientMethod(\"field_caps\", True),\n ClientMethod(\"get\", True),\n ClientMethod(\"get_script\", False),\n ClientMethod(\"get_script_context\", False),\n ClientMethod(\"get_script_languages\", False),\n ClientMethod(\"get_source\", True),\n ClientMethod(\"index\", True),\n ClientMethod(\"info\", False),\n ClientMethod(\"mget\", True),\n ClientMethod(\"msearch\", True),\n ClientMethod(\"msearch_template\", True),\n ClientMethod(\"mtermvectors\", True),\n ClientMethod(\"open_point_in_time\", True),\n ClientMethod(\"ping\", False),\n ClientMethod(\"put_script\", False),\n ClientMethod(\"rank_eval\", True),\n ClientMethod(\"reindex\", False),\n ClientMethod(\"reindex_rethrottle\", False),\n ClientMethod(\"render_search_template\", False),\n ClientMethod(\"scripts_painless_execute\", False),\n ClientMethod(\"scroll\", False),\n ClientMethod(\"search\", True),\n ClientMethod(\"search_shards\", True),\n ClientMethod(\"search_template\", True),\n ClientMethod(\"termvectors\", True),\n ClientMethod(\"update\", True),\n ClientMethod(\"update_by_query\", True),\n ClientMethod(\"update_by_query_rethrottle\", False),\n]\n\n\nhave_patched_client = False\n\n\ndef ensure_client_instrumented():\n global have_patched_client\n\n if not have_patched_client:\n for name, takes_index_argument in CLIENT_METHODS:\n try:\n method = getattr(Elasticsearch, name)\n if takes_index_argument:\n wrapped = 
wrap_client_index_method(method)\n else:\n wrapped = wrap_client_method(method)\n setattr(Elasticsearch, name, wrapped)\n except Exception as exc:\n logger.warning(\n \"Failed to instrument elasticsearch.Elasticsearch.%s: %r\",\n name,\n exc,\n exc_info=exc,\n )\n\n have_patched_client = True\n\n\[email protected]\ndef wrap_client_index_method(wrapped, instance, args, kwargs):\n # elasticsearch-py 7.5.1 changed the order of arguments for client methods,\n # so to be safe we need to inspect the wrapped method's positional\n # arguments to see if we should pull it from there\n if \"index\" in kwargs:\n index = kwargs[\"index\"]\n else:\n unwrapped = unwrap_decorators(wrapped)\n pos_args = get_pos_args(unwrapped)\n try:\n index_index = pos_args.index(\"index\")\n except ValueError: # pragma: no cover\n # This guards against the method not accepting an 'index' argument\n # but they all do - for now\n index = \"\"\n else:\n try:\n index = args[index_index - 1] # subtract 'self'\n except IndexError:\n index = \"\"\n\n if isinstance(index, (list, tuple)):\n index = \",\".join(index)\n if index == \"\":\n index = \"Unknown\"\n index = index.title()\n\n camel_name = \"\".join(c.title() for c in wrapped.__name__.split(\"_\"))\n operation = \"Elasticsearch/{}/{}\".format(index, camel_name)\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(operation=operation, ignore_children=True):\n return wrapped(*args, **kwargs)\n\n\[email protected]\ndef wrap_client_method(wrapped, instance, args, kwargs):\n camel_name = \"\".join(c.title() for c in wrapped.__name__.split(\"_\"))\n operation = \"Elasticsearch/{}\".format(camel_name)\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(operation=operation, ignore_children=True):\n return wrapped(*args, **kwargs)\n\n\nhave_patched_transport = False\n\n\ndef ensure_transport_instrumented():\n global have_patched_transport\n\n if not have_patched_transport:\n try:\n Transport.perform_request = wrapped_perform_request(\n Transport.perform_request\n )\n except Exception as exc:\n logger.warning(\n \"Failed to instrument elasticsearch.Transport.perform_request: %r\",\n exc,\n exc_info=exc,\n )\n\n have_patched_transport = True\n\n\ndef _sanitize_name(name):\n try:\n op = name.split(\"/\")[-1]\n op = op[1:] # chop leading '_' from op\n known_names = (\n \"bench\",\n \"bulk\",\n \"count\",\n \"exists\",\n \"explain\",\n \"field_stats\",\n \"health\",\n \"mget\",\n \"mlt\",\n \"mpercolate\",\n \"msearch\",\n \"mtermvectors\",\n \"percolate\",\n \"query\",\n \"scroll\",\n \"search_shards\",\n \"source\",\n \"suggest\",\n \"template\",\n \"termvectors\",\n \"update\",\n \"search\",\n )\n if op in known_names:\n return op.title()\n return \"Unknown\"\n except Exception:\n return \"Unknown\"\n\n\[email protected]\ndef wrapped_perform_request(wrapped, instance, args, kwargs):\n try:\n op = _sanitize_name(args[1])\n except IndexError:\n op = \"Unknown\"\n\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(\n operation=\"Elasticsearch/{}\".format(op),\n ignore_children=True,\n ):\n return wrapped(*args, **kwargs)\n", "path": "src/scout_apm/instruments/elasticsearch.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nfrom collections import namedtuple\n\nimport wrapt\n\nfrom scout_apm.compat import get_pos_args, unwrap_decorators\nfrom scout_apm.core.tracked_request import TrackedRequest\n\ntry:\n from 
elasticsearch import Elasticsearch, Transport\nexcept ImportError: # pragma: no cover\n Elasticsearch = None\n Transport = None\n\nlogger = logging.getLogger(__name__)\n\n\ndef ensure_installed():\n logger.debug(\"Instrumenting elasticsearch.\")\n\n if Elasticsearch is None:\n logger.debug(\n \"Couldn't import elasticsearch.Elasticsearch - probably not installed.\"\n )\n else:\n ensure_client_instrumented()\n ensure_transport_instrumented()\n\n\nClientMethod = namedtuple(\"ClientMethod\", [\"name\", \"takes_index_argument\"])\n\nCLIENT_METHODS = [\n ClientMethod(\"bulk\", True),\n ClientMethod(\"clear_scroll\", False),\n ClientMethod(\"close\", False),\n ClientMethod(\"close_point_in_time\", False),\n ClientMethod(\"count\", True),\n ClientMethod(\"create\", True),\n ClientMethod(\"delete\", True),\n ClientMethod(\"delete_by_query\", True),\n ClientMethod(\"delete_by_query_rethrottle\", False),\n ClientMethod(\"delete_script\", False),\n ClientMethod(\"exists\", True),\n ClientMethod(\"exists_source\", True),\n ClientMethod(\"explain\", True),\n ClientMethod(\"field_caps\", True),\n ClientMethod(\"get\", True),\n ClientMethod(\"get_script\", False),\n ClientMethod(\"get_script_context\", False),\n ClientMethod(\"get_script_languages\", False),\n ClientMethod(\"get_source\", True),\n ClientMethod(\"index\", True),\n ClientMethod(\"info\", False),\n ClientMethod(\"mget\", True),\n ClientMethod(\"msearch\", True),\n ClientMethod(\"msearch_template\", True),\n ClientMethod(\"mtermvectors\", True),\n ClientMethod(\"open_point_in_time\", True),\n ClientMethod(\"ping\", False),\n ClientMethod(\"put_script\", False),\n ClientMethod(\"rank_eval\", True),\n ClientMethod(\"reindex\", False),\n ClientMethod(\"reindex_rethrottle\", False),\n ClientMethod(\"render_search_template\", False),\n ClientMethod(\"scripts_painless_execute\", False),\n ClientMethod(\"scroll\", False),\n ClientMethod(\"search\", True),\n ClientMethod(\"search_shards\", True),\n ClientMethod(\"search_template\", True),\n ClientMethod(\"termvectors\", True),\n ClientMethod(\"terms_enum\", True),\n ClientMethod(\"update\", True),\n ClientMethod(\"update_by_query\", True),\n ClientMethod(\"update_by_query_rethrottle\", False),\n]\n\n\nhave_patched_client = False\n\n\ndef ensure_client_instrumented():\n global have_patched_client\n\n if not have_patched_client:\n for name, takes_index_argument in CLIENT_METHODS:\n try:\n method = getattr(Elasticsearch, name)\n if takes_index_argument:\n wrapped = wrap_client_index_method(method)\n else:\n wrapped = wrap_client_method(method)\n setattr(Elasticsearch, name, wrapped)\n except Exception as exc:\n logger.warning(\n \"Failed to instrument elasticsearch.Elasticsearch.%s: %r\",\n name,\n exc,\n exc_info=exc,\n )\n\n have_patched_client = True\n\n\[email protected]\ndef wrap_client_index_method(wrapped, instance, args, kwargs):\n # elasticsearch-py 7.5.1 changed the order of arguments for client methods,\n # so to be safe we need to inspect the wrapped method's positional\n # arguments to see if we should pull it from there\n if \"index\" in kwargs:\n index = kwargs[\"index\"]\n else:\n unwrapped = unwrap_decorators(wrapped)\n pos_args = get_pos_args(unwrapped)\n try:\n index_index = pos_args.index(\"index\")\n except ValueError: # pragma: no cover\n # This guards against the method not accepting an 'index' argument\n # but they all do - for now\n index = \"\"\n else:\n try:\n index = args[index_index - 1] # subtract 'self'\n except IndexError:\n index = \"\"\n\n if isinstance(index, (list, 
tuple)):\n index = \",\".join(index)\n if index == \"\":\n index = \"Unknown\"\n index = index.title()\n\n camel_name = \"\".join(c.title() for c in wrapped.__name__.split(\"_\"))\n operation = \"Elasticsearch/{}/{}\".format(index, camel_name)\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(operation=operation, ignore_children=True):\n return wrapped(*args, **kwargs)\n\n\[email protected]\ndef wrap_client_method(wrapped, instance, args, kwargs):\n camel_name = \"\".join(c.title() for c in wrapped.__name__.split(\"_\"))\n operation = \"Elasticsearch/{}\".format(camel_name)\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(operation=operation, ignore_children=True):\n return wrapped(*args, **kwargs)\n\n\nhave_patched_transport = False\n\n\ndef ensure_transport_instrumented():\n global have_patched_transport\n\n if not have_patched_transport:\n try:\n Transport.perform_request = wrapped_perform_request(\n Transport.perform_request\n )\n except Exception as exc:\n logger.warning(\n \"Failed to instrument elasticsearch.Transport.perform_request: %r\",\n exc,\n exc_info=exc,\n )\n\n have_patched_transport = True\n\n\ndef _sanitize_name(name):\n try:\n op = name.split(\"/\")[-1]\n op = op[1:] # chop leading '_' from op\n known_names = (\n \"bench\",\n \"bulk\",\n \"count\",\n \"exists\",\n \"explain\",\n \"field_stats\",\n \"health\",\n \"mget\",\n \"mlt\",\n \"mpercolate\",\n \"msearch\",\n \"mtermvectors\",\n \"percolate\",\n \"query\",\n \"scroll\",\n \"search_shards\",\n \"source\",\n \"suggest\",\n \"template\",\n \"termvectors\",\n \"update\",\n \"search\",\n )\n if op in known_names:\n return op.title()\n return \"Unknown\"\n except Exception:\n return \"Unknown\"\n\n\[email protected]\ndef wrapped_perform_request(wrapped, instance, args, kwargs):\n try:\n op = _sanitize_name(args[1])\n except IndexError:\n op = \"Unknown\"\n\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(\n operation=\"Elasticsearch/{}\".format(op),\n ignore_children=True,\n ):\n return wrapped(*args, **kwargs)\n", "path": "src/scout_apm/instruments/elasticsearch.py"}]}
| 2,311 | 134 |
gh_patches_debug_5128
|
rasdani/github-patches
|
git_diff
|
geopandas__geopandas-202
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove mapquest from geocoder
The mapquest API is now only available to enterprise customers and has been removed from geopy.
See [geopy docs](https://geopy.readthedocs.org/en/1.10.0/#id4)
--- END ISSUE ---
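The breakage shows up before any geocoding happens: `_query()` builds its `coders` dict by touching `geopy.geocoders.MapQuest`, so on geopy releases that dropped the class (1.10.0 per the issue) the attribute lookup alone fails. A small check, assuming only that geopy is installed:
```python
# Hedged sketch, not from the issue: on geopy versions that removed MapQuest,
# the attribute no longer exists, so building the coders dict in
# geopandas.tools.geocoding._query would raise AttributeError.
import geopy.geocoders

if hasattr(geopy.geocoders, "MapQuest"):
    print("MapQuest geocoder still present in this geopy version")
else:
    print("MapQuest geocoder removed - the 'mapquest' provider cannot be offered")
```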
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geopandas/tools/geocoding.py`
Content:
```
1 from collections import defaultdict
2 import time
3
4 from fiona.crs import from_epsg
5 import numpy as np
6 import pandas as pd
7 from shapely.geometry import Point
8 from six import iteritems
9
10 import geopandas as gpd
11
12
13 def _throttle_time(provider):
14 """ Amount of time to wait between requests to a geocoding API.
15
16 Currently implemented for Nominatim, as their terms of service
17 require a maximum of 1 request per second.
18 https://wiki.openstreetmap.org/wiki/Nominatim_usage_policy
19 """
20 if provider == 'nominatim':
21 return 1
22 else:
23 return 0
24
25
26 def geocode(strings, provider='googlev3', **kwargs):
27 """
28 Geocode a set of strings and get a GeoDataFrame of the resulting points.
29
30 Parameters
31 ----------
32 strings : list or Series of addresses to geocode
33 provider : geopy geocoder to use, default 'googlev3'
34 Some providers require additional arguments such as access keys
35 See each geocoder's specific parameters in geopy.geocoders
36 * googlev3, default
37 * bing
38 * google
39 * yahoo
40 * mapquest
41 * openmapquest
42
43 Ensure proper use of the results by consulting the Terms of Service for
44 your provider.
45
46 Geocoding requires geopy. Install it using 'pip install geopy'. See also
47 https://github.com/geopy/geopy
48
49 Example
50 -------
51 >>> df = geocode(['boston, ma', '1600 pennsylvania ave. washington, dc'])
52
53 address \
54 0 Boston, MA, USA
55 1 1600 Pennsylvania Avenue Northwest, President'...
56
57 geometry
58 0 POINT (-71.0597732 42.3584308)
59 1 POINT (-77.0365305 38.8977332)
60
61 """
62 return _query(strings, True, provider, **kwargs)
63
64
65 def reverse_geocode(points, provider='googlev3', **kwargs):
66 """
67 Reverse geocode a set of points and get a GeoDataFrame of the resulting
68 addresses.
69
70 The points
71
72 Parameters
73 ----------
74 points : list or Series of Shapely Point objects.
75 x coordinate is longitude
76 y coordinate is latitude
77 provider : geopy geocoder to use, default 'googlev3'
78 These are the same options as the geocode() function
79 Some providers require additional arguments such as access keys
80 See each geocoder's specific parameters in geopy.geocoders
81 * googlev3, default
82 * bing
83 * google
84 * yahoo
85 * mapquest
86 * openmapquest
87
88 Ensure proper use of the results by consulting the Terms of Service for
89 your provider.
90
91 Reverse geocoding requires geopy. Install it using 'pip install geopy'.
92 See also https://github.com/geopy/geopy
93
94 Example
95 -------
96 >>> df = reverse_geocode([Point(-71.0594869, 42.3584697),
97 Point(-77.0365305, 38.8977332)])
98
99 address \
100 0 29 Court Square, Boston, MA 02108, USA
101 1 1600 Pennsylvania Avenue Northwest, President'...
102
103 geometry
104 0 POINT (-71.0594869 42.3584697)
105 1 POINT (-77.0365305 38.8977332)
106
107 """
108 return _query(points, False, provider, **kwargs)
109
110
111 def _query(data, forward, provider, **kwargs):
112 import geopy
113 from geopy.geocoders.base import GeocoderQueryError
114
115 if not isinstance(data, pd.Series):
116 data = pd.Series(data)
117
118 # workaround changed name in 0.96
119 try:
120 Yahoo = geopy.geocoders.YahooPlaceFinder
121 except AttributeError:
122 Yahoo = geopy.geocoders.Yahoo
123
124 coders = {'googlev3': geopy.geocoders.GoogleV3,
125 'bing': geopy.geocoders.Bing,
126 'yahoo': Yahoo,
127 'mapquest': geopy.geocoders.MapQuest,
128 'openmapquest': geopy.geocoders.OpenMapQuest,
129 'nominatim': geopy.geocoders.Nominatim}
130
131 if provider not in coders:
132 raise ValueError('Unknown geocoding provider: {0}'.format(provider))
133
134 coder = coders[provider](**kwargs)
135 results = {}
136 for i, s in iteritems(data):
137 try:
138 if forward:
139 results[i] = coder.geocode(s)
140 else:
141 results[i] = coder.reverse((s.y, s.x), exactly_one=True)
142 except (GeocoderQueryError, ValueError):
143 results[i] = (None, None)
144 time.sleep(_throttle_time(provider))
145
146 df = _prepare_geocode_result(results)
147 return df
148
149
150 def _prepare_geocode_result(results):
151 """
152 Helper function for the geocode function
153
154 Takes a dict where keys are index entries, values are tuples containing:
155 (address, (lat, lon))
156
157 """
158 # Prepare the data for the DataFrame as a dict of lists
159 d = defaultdict(list)
160 index = []
161
162 for i, s in iteritems(results):
163 address, loc = s
164
165 # loc is lat, lon and we want lon, lat
166 if loc is None:
167 p = Point()
168 else:
169 p = Point(loc[1], loc[0])
170
171 if address is None:
172 address = np.nan
173
174 d['geometry'].append(p)
175 d['address'].append(address)
176 index.append(i)
177
178 df = gpd.GeoDataFrame(d, index=index)
179 df.crs = from_epsg(4326)
180
181 return df
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/geopandas/tools/geocoding.py b/geopandas/tools/geocoding.py
--- a/geopandas/tools/geocoding.py
+++ b/geopandas/tools/geocoding.py
@@ -124,7 +124,6 @@
coders = {'googlev3': geopy.geocoders.GoogleV3,
'bing': geopy.geocoders.Bing,
'yahoo': Yahoo,
- 'mapquest': geopy.geocoders.MapQuest,
'openmapquest': geopy.geocoders.OpenMapQuest,
'nominatim': geopy.geocoders.Nominatim}
|
{"golden_diff": "diff --git a/geopandas/tools/geocoding.py b/geopandas/tools/geocoding.py\n--- a/geopandas/tools/geocoding.py\n+++ b/geopandas/tools/geocoding.py\n@@ -124,7 +124,6 @@\n coders = {'googlev3': geopy.geocoders.GoogleV3,\n 'bing': geopy.geocoders.Bing,\n 'yahoo': Yahoo,\n- 'mapquest': geopy.geocoders.MapQuest,\n 'openmapquest': geopy.geocoders.OpenMapQuest,\n 'nominatim': geopy.geocoders.Nominatim}\n", "issue": "Remove mapquest from geocoder\nThe mapquest API is now only available to enterprise customers and has been removed from geopy.\n\nSee [geopy docs](https://geopy.readthedocs.org/en/1.10.0/#id4)\n\n", "before_files": [{"content": "from collections import defaultdict\nimport time\n\nfrom fiona.crs import from_epsg\nimport numpy as np\nimport pandas as pd\nfrom shapely.geometry import Point\nfrom six import iteritems\n\nimport geopandas as gpd\n\n\ndef _throttle_time(provider):\n \"\"\" Amount of time to wait between requests to a geocoding API.\n\n Currently implemented for Nominatim, as their terms of service\n require a maximum of 1 request per second.\n https://wiki.openstreetmap.org/wiki/Nominatim_usage_policy\n \"\"\"\n if provider == 'nominatim':\n return 1\n else:\n return 0\n\n\ndef geocode(strings, provider='googlev3', **kwargs):\n \"\"\"\n Geocode a set of strings and get a GeoDataFrame of the resulting points.\n\n Parameters\n ----------\n strings : list or Series of addresses to geocode\n provider : geopy geocoder to use, default 'googlev3'\n Some providers require additional arguments such as access keys\n See each geocoder's specific parameters in geopy.geocoders\n * googlev3, default\n * bing\n * google\n * yahoo\n * mapquest\n * openmapquest\n\n Ensure proper use of the results by consulting the Terms of Service for\n your provider.\n\n Geocoding requires geopy. Install it using 'pip install geopy'. See also\n https://github.com/geopy/geopy\n\n Example\n -------\n >>> df = geocode(['boston, ma', '1600 pennsylvania ave. washington, dc'])\n\n address \\\n 0 Boston, MA, USA\n 1 1600 Pennsylvania Avenue Northwest, President'...\n\n geometry\n 0 POINT (-71.0597732 42.3584308)\n 1 POINT (-77.0365305 38.8977332)\n\n \"\"\"\n return _query(strings, True, provider, **kwargs)\n\n\ndef reverse_geocode(points, provider='googlev3', **kwargs):\n \"\"\"\n Reverse geocode a set of points and get a GeoDataFrame of the resulting\n addresses.\n\n The points\n\n Parameters\n ----------\n points : list or Series of Shapely Point objects.\n x coordinate is longitude\n y coordinate is latitude\n provider : geopy geocoder to use, default 'googlev3'\n These are the same options as the geocode() function\n Some providers require additional arguments such as access keys\n See each geocoder's specific parameters in geopy.geocoders\n * googlev3, default\n * bing\n * google\n * yahoo\n * mapquest\n * openmapquest\n\n Ensure proper use of the results by consulting the Terms of Service for\n your provider.\n\n Reverse geocoding requires geopy. 
Install it using 'pip install geopy'.\n See also https://github.com/geopy/geopy\n\n Example\n -------\n >>> df = reverse_geocode([Point(-71.0594869, 42.3584697),\n Point(-77.0365305, 38.8977332)])\n\n address \\\n 0 29 Court Square, Boston, MA 02108, USA\n 1 1600 Pennsylvania Avenue Northwest, President'...\n\n geometry\n 0 POINT (-71.0594869 42.3584697)\n 1 POINT (-77.0365305 38.8977332)\n\n \"\"\"\n return _query(points, False, provider, **kwargs)\n\n\ndef _query(data, forward, provider, **kwargs):\n import geopy\n from geopy.geocoders.base import GeocoderQueryError\n\n if not isinstance(data, pd.Series):\n data = pd.Series(data)\n\n # workaround changed name in 0.96\n try:\n Yahoo = geopy.geocoders.YahooPlaceFinder\n except AttributeError:\n Yahoo = geopy.geocoders.Yahoo\n\n coders = {'googlev3': geopy.geocoders.GoogleV3,\n 'bing': geopy.geocoders.Bing,\n 'yahoo': Yahoo,\n 'mapquest': geopy.geocoders.MapQuest,\n 'openmapquest': geopy.geocoders.OpenMapQuest,\n 'nominatim': geopy.geocoders.Nominatim}\n\n if provider not in coders:\n raise ValueError('Unknown geocoding provider: {0}'.format(provider))\n\n coder = coders[provider](**kwargs)\n results = {}\n for i, s in iteritems(data):\n try:\n if forward:\n results[i] = coder.geocode(s)\n else:\n results[i] = coder.reverse((s.y, s.x), exactly_one=True)\n except (GeocoderQueryError, ValueError):\n results[i] = (None, None)\n time.sleep(_throttle_time(provider))\n\n df = _prepare_geocode_result(results)\n return df\n\n\ndef _prepare_geocode_result(results):\n \"\"\"\n Helper function for the geocode function\n\n Takes a dict where keys are index entries, values are tuples containing:\n (address, (lat, lon))\n\n \"\"\"\n # Prepare the data for the DataFrame as a dict of lists\n d = defaultdict(list)\n index = []\n\n for i, s in iteritems(results):\n address, loc = s\n\n # loc is lat, lon and we want lon, lat\n if loc is None:\n p = Point()\n else:\n p = Point(loc[1], loc[0])\n\n if address is None:\n address = np.nan\n\n d['geometry'].append(p)\n d['address'].append(address)\n index.append(i)\n\n df = gpd.GeoDataFrame(d, index=index)\n df.crs = from_epsg(4326)\n\n return df\n", "path": "geopandas/tools/geocoding.py"}], "after_files": [{"content": "from collections import defaultdict\nimport time\n\nfrom fiona.crs import from_epsg\nimport numpy as np\nimport pandas as pd\nfrom shapely.geometry import Point\nfrom six import iteritems\n\nimport geopandas as gpd\n\n\ndef _throttle_time(provider):\n \"\"\" Amount of time to wait between requests to a geocoding API.\n\n Currently implemented for Nominatim, as their terms of service\n require a maximum of 1 request per second.\n https://wiki.openstreetmap.org/wiki/Nominatim_usage_policy\n \"\"\"\n if provider == 'nominatim':\n return 1\n else:\n return 0\n\n\ndef geocode(strings, provider='googlev3', **kwargs):\n \"\"\"\n Geocode a set of strings and get a GeoDataFrame of the resulting points.\n\n Parameters\n ----------\n strings : list or Series of addresses to geocode\n provider : geopy geocoder to use, default 'googlev3'\n Some providers require additional arguments such as access keys\n See each geocoder's specific parameters in geopy.geocoders\n * googlev3, default\n * bing\n * google\n * yahoo\n * mapquest\n * openmapquest\n\n Ensure proper use of the results by consulting the Terms of Service for\n your provider.\n\n Geocoding requires geopy. Install it using 'pip install geopy'. 
See also\n https://github.com/geopy/geopy\n\n Example\n -------\n >>> df = geocode(['boston, ma', '1600 pennsylvania ave. washington, dc'])\n\n address \\\n 0 Boston, MA, USA\n 1 1600 Pennsylvania Avenue Northwest, President'...\n\n geometry\n 0 POINT (-71.0597732 42.3584308)\n 1 POINT (-77.0365305 38.8977332)\n\n \"\"\"\n return _query(strings, True, provider, **kwargs)\n\n\ndef reverse_geocode(points, provider='googlev3', **kwargs):\n \"\"\"\n Reverse geocode a set of points and get a GeoDataFrame of the resulting\n addresses.\n\n The points\n\n Parameters\n ----------\n points : list or Series of Shapely Point objects.\n x coordinate is longitude\n y coordinate is latitude\n provider : geopy geocoder to use, default 'googlev3'\n These are the same options as the geocode() function\n Some providers require additional arguments such as access keys\n See each geocoder's specific parameters in geopy.geocoders\n * googlev3, default\n * bing\n * google\n * yahoo\n * mapquest\n * openmapquest\n\n Ensure proper use of the results by consulting the Terms of Service for\n your provider.\n\n Reverse geocoding requires geopy. Install it using 'pip install geopy'.\n See also https://github.com/geopy/geopy\n\n Example\n -------\n >>> df = reverse_geocode([Point(-71.0594869, 42.3584697),\n Point(-77.0365305, 38.8977332)])\n\n address \\\n 0 29 Court Square, Boston, MA 02108, USA\n 1 1600 Pennsylvania Avenue Northwest, President'...\n\n geometry\n 0 POINT (-71.0594869 42.3584697)\n 1 POINT (-77.0365305 38.8977332)\n\n \"\"\"\n return _query(points, False, provider, **kwargs)\n\n\ndef _query(data, forward, provider, **kwargs):\n import geopy\n from geopy.geocoders.base import GeocoderQueryError\n\n if not isinstance(data, pd.Series):\n data = pd.Series(data)\n\n # workaround changed name in 0.96\n try:\n Yahoo = geopy.geocoders.YahooPlaceFinder\n except AttributeError:\n Yahoo = geopy.geocoders.Yahoo\n\n coders = {'googlev3': geopy.geocoders.GoogleV3,\n 'bing': geopy.geocoders.Bing,\n 'yahoo': Yahoo,\n 'openmapquest': geopy.geocoders.OpenMapQuest,\n 'nominatim': geopy.geocoders.Nominatim}\n\n if provider not in coders:\n raise ValueError('Unknown geocoding provider: {0}'.format(provider))\n\n coder = coders[provider](**kwargs)\n results = {}\n for i, s in iteritems(data):\n try:\n if forward:\n results[i] = coder.geocode(s)\n else:\n results[i] = coder.reverse((s.y, s.x), exactly_one=True)\n except (GeocoderQueryError, ValueError):\n results[i] = (None, None)\n time.sleep(_throttle_time(provider))\n\n df = _prepare_geocode_result(results)\n return df\n\n\ndef _prepare_geocode_result(results):\n \"\"\"\n Helper function for the geocode function\n\n Takes a dict where keys are index entries, values are tuples containing:\n (address, (lat, lon))\n\n \"\"\"\n # Prepare the data for the DataFrame as a dict of lists\n d = defaultdict(list)\n index = []\n\n for i, s in iteritems(results):\n address, loc = s\n\n # loc is lat, lon and we want lon, lat\n if loc is None:\n p = Point()\n else:\n p = Point(loc[1], loc[0])\n\n if address is None:\n address = np.nan\n\n d['geometry'].append(p)\n d['address'].append(address)\n index.append(i)\n\n df = gpd.GeoDataFrame(d, index=index)\n df.crs = from_epsg(4326)\n\n return df\n", "path": "geopandas/tools/geocoding.py"}]}
| 2,119 | 140 |
gh_patches_debug_11057
|
rasdani/github-patches
|
git_diff
|
cornellius-gp__gpytorch-1407
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Code in variational_strategy.py
# 🐛 Bug
Dear Gpytorch Developers
Here is a possible issue with variational_strategy.py
I am wondering whether it is a special consideration or something missing. According to the formula in Line 121, it seems an L is missing in front of self.prior_distribution.mean in Line 115. I understand variable inducing_values has already included L.inv() according to whitening of the inducing posterior, but this is not the case for the prior. It is fortunate that self.prior_distribution.mean is always 0, so no errors.
Thank you for the great package
J.
## To reproduce
** Code snippet to reproduce **
```python
# Your code goes here
# Please make sure it does not require any external dependencies (other than PyTorch!)
# (We much prefer small snippets rather than links to existing libraries!)
```
** Stack trace/error message **
```
// Paste the bad output here!
```
## Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
## System information
**Please complete the following information:**
- <!-- GPyTorch Version (run `print(gpytorch.__version__)` -->
- <!-- PyTorch Version (run `print(torch.__version__)` -->
- <!-- Computer OS -->
## Additional context
Add any other context about the problem here.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gpytorch/variational/variational_strategy.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import warnings
4
5 import torch
6
7 from .. import settings
8 from ..distributions import MultivariateNormal
9 from ..lazy import DiagLazyTensor, MatmulLazyTensor, RootLazyTensor, SumLazyTensor, TriangularLazyTensor, delazify
10 from ..settings import trace_mode
11 from ..utils.cholesky import psd_safe_cholesky
12 from ..utils.errors import CachingError
13 from ..utils.memoize import cached, clear_cache_hook, pop_from_cache_ignore_args
14 from ..utils.warnings import OldVersionWarning
15 from ._variational_strategy import _VariationalStrategy
16
17
18 def _ensure_updated_strategy_flag_set(
19 state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
20 ):
21 device = state_dict[list(state_dict.keys())[0]].device
22 if prefix + "updated_strategy" not in state_dict:
23 state_dict[prefix + "updated_strategy"] = torch.tensor(False, device=device)
24 warnings.warn(
25 "You have loaded a variational GP model (using `VariationalStrategy`) from a previous version of "
26 "GPyTorch. We have updated the parameters of your model to work with the new version of "
27 "`VariationalStrategy` that uses whitened parameters.\nYour model will work as expected, but we "
28 "recommend that you re-save your model.",
29 OldVersionWarning,
30 )
31
32
33 class VariationalStrategy(_VariationalStrategy):
34 r"""
35 The standard variational strategy, as defined by `Hensman et al. (2015)`_.
36 This strategy takes a set of :math:`m \ll n` inducing points :math:`\mathbf Z`
37 and applies an approximate distribution :math:`q( \mathbf u)` over their function values.
38     (Here, we use the common notation :math:`\mathbf u = f(\mathbf Z)`.)
39     The approximate function distribution for any arbitrary input :math:`\mathbf X` is given by:
40
41 .. math::
42
43 q( f(\mathbf X) ) = \int p( f(\mathbf X) \mid \mathbf u) q(\mathbf u) \: d\mathbf u
44
45 This variational strategy uses "whitening" to accelerate the optimization of the variational
46 parameters. See `Matthews (2017)`_ for more info.
47
48 :param ~gpytorch.models.ApproximateGP model: Model this strategy is applied to.
49 Typically passed in when the VariationalStrategy is created in the
50 __init__ method of the user defined model.
51 :param torch.Tensor inducing_points: Tensor containing a set of inducing
52 points to use for variational inference.
53 :param ~gpytorch.variational.VariationalDistribution variational_distribution: A
54 VariationalDistribution object that represents the form of the variational distribution :math:`q(\mathbf u)`
55 :param learn_inducing_locations: (Default True): Whether or not
56 the inducing point locations :math:`\mathbf Z` should be learned (i.e. are they
57 parameters of the model).
58 :type learn_inducing_locations: `bool`, optional
59
60 .. _Hensman et al. (2015):
61 http://proceedings.mlr.press/v38/hensman15.pdf
62 .. _Matthews (2017):
63 https://www.repository.cam.ac.uk/handle/1810/278022
64 """
65
66 def __init__(self, model, inducing_points, variational_distribution, learn_inducing_locations=True):
67 super().__init__(model, inducing_points, variational_distribution, learn_inducing_locations)
68 self.register_buffer("updated_strategy", torch.tensor(True))
69 self._register_load_state_dict_pre_hook(_ensure_updated_strategy_flag_set)
70
71 @cached(name="cholesky_factor", ignore_args=True)
72 def _cholesky_factor(self, induc_induc_covar):
73 L = psd_safe_cholesky(delazify(induc_induc_covar).double(), jitter=settings.cholesky_jitter.value())
74 return TriangularLazyTensor(L)
75
76 @property
77 @cached(name="prior_distribution_memo")
78 def prior_distribution(self):
79 zeros = torch.zeros(
80 self._variational_distribution.shape(),
81 dtype=self._variational_distribution.dtype,
82 device=self._variational_distribution.device,
83 )
84 ones = torch.ones_like(zeros)
85 res = MultivariateNormal(zeros, DiagLazyTensor(ones))
86 return res
87
88 def forward(self, x, inducing_points, inducing_values, variational_inducing_covar=None, **kwargs):
89 # Compute full prior distribution
90 full_inputs = torch.cat([inducing_points, x], dim=-2)
91 full_output = self.model.forward(full_inputs, **kwargs)
92 full_covar = full_output.lazy_covariance_matrix
93
94 # Covariance terms
95 num_induc = inducing_points.size(-2)
96 test_mean = full_output.mean[..., num_induc:]
97 induc_induc_covar = full_covar[..., :num_induc, :num_induc].add_jitter()
98 induc_data_covar = full_covar[..., :num_induc, num_induc:].evaluate()
99 data_data_covar = full_covar[..., num_induc:, num_induc:]
100
101 # Compute interpolation terms
102 # K_ZZ^{-1/2} K_ZX
103 # K_ZZ^{-1/2} \mu_Z
104 L = self._cholesky_factor(induc_induc_covar)
105 if L.shape != induc_induc_covar.shape:
106             # Aggressive caching can cause nasty shape incompatibilities when evaluating with different batch shapes
107             # TODO: Use a hook for this
108 try:
109 pop_from_cache_ignore_args(self, "cholesky_factor")
110 except CachingError:
111 pass
112 L = self._cholesky_factor(induc_induc_covar)
113 interp_term = L.inv_matmul(induc_data_covar.double()).to(full_inputs.dtype)
114
115 # Compute the mean of q(f)
116 # k_XZ K_ZZ^{-1/2} (m - K_ZZ^{-1/2} \mu_Z) + \mu_X
117 predictive_mean = (
118 torch.matmul(
119 interp_term.transpose(-1, -2), (inducing_values - self.prior_distribution.mean).unsqueeze(-1)
120 ).squeeze(-1)
121 + test_mean
122 )
123
124 # Compute the covariance of q(f)
125 # K_XX + k_XZ K_ZZ^{-1/2} (S - I) K_ZZ^{-1/2} k_ZX
126 middle_term = self.prior_distribution.lazy_covariance_matrix.mul(-1)
127 if variational_inducing_covar is not None:
128 middle_term = SumLazyTensor(variational_inducing_covar, middle_term)
129
130 if trace_mode.on():
131 predictive_covar = (
132 data_data_covar.add_jitter(1e-4).evaluate()
133 + interp_term.transpose(-1, -2) @ middle_term.evaluate() @ interp_term
134 )
135 else:
136 predictive_covar = SumLazyTensor(
137 data_data_covar.add_jitter(1e-4),
138 MatmulLazyTensor(interp_term.transpose(-1, -2), middle_term @ interp_term),
139 )
140
141 # Return the distribution
142 return MultivariateNormal(predictive_mean, predictive_covar)
143
144 def __call__(self, x, prior=False, **kwargs):
145 if not self.updated_strategy.item() and not prior:
146 with torch.no_grad():
147 # Get unwhitened p(u)
148 prior_function_dist = self(self.inducing_points, prior=True)
149 prior_mean = prior_function_dist.loc
150 L = self._cholesky_factor(prior_function_dist.lazy_covariance_matrix.add_jitter())
151
152 # Temporarily turn off noise that's added to the mean
153 orig_mean_init_std = self._variational_distribution.mean_init_std
154 self._variational_distribution.mean_init_std = 0.0
155
156 # Change the variational parameters to be whitened
157 variational_dist = self.variational_distribution
158 mean_diff = (variational_dist.loc - prior_mean).unsqueeze(-1).double()
159 whitened_mean = L.inv_matmul(mean_diff).squeeze(-1).to(variational_dist.loc.dtype)
160 covar_root = variational_dist.lazy_covariance_matrix.root_decomposition().root.evaluate().double()
161 whitened_covar = RootLazyTensor(L.inv_matmul(covar_root).to(variational_dist.loc.dtype))
162 whitened_variational_distribution = variational_dist.__class__(whitened_mean, whitened_covar)
163 self._variational_distribution.initialize_variational_distribution(whitened_variational_distribution)
164
165 # Reset the random noise parameter of the model
166 self._variational_distribution.mean_init_std = orig_mean_init_std
167
168 # Reset the cache
169 clear_cache_hook(self)
170
171 # Mark that we have updated the variational strategy
172 self.updated_strategy.fill_(True)
173
174 return super().__call__(x, prior=prior, **kwargs)
175
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gpytorch/variational/variational_strategy.py b/gpytorch/variational/variational_strategy.py
--- a/gpytorch/variational/variational_strategy.py
+++ b/gpytorch/variational/variational_strategy.py
@@ -114,12 +114,7 @@
# Compute the mean of q(f)
# k_XZ K_ZZ^{-1/2} (m - K_ZZ^{-1/2} \mu_Z) + \mu_X
- predictive_mean = (
- torch.matmul(
- interp_term.transpose(-1, -2), (inducing_values - self.prior_distribution.mean).unsqueeze(-1)
- ).squeeze(-1)
- + test_mean
- )
+ predictive_mean = (interp_term.transpose(-1, -2) @ inducing_values.unsqueeze(-1)).squeeze(-1) + test_mean
# Compute the covariance of q(f)
# K_XX + k_XZ K_ZZ^{-1/2} (S - I) K_ZZ^{-1/2} k_ZX
|
{"golden_diff": "diff --git a/gpytorch/variational/variational_strategy.py b/gpytorch/variational/variational_strategy.py\n--- a/gpytorch/variational/variational_strategy.py\n+++ b/gpytorch/variational/variational_strategy.py\n@@ -114,12 +114,7 @@\n \n # Compute the mean of q(f)\n # k_XZ K_ZZ^{-1/2} (m - K_ZZ^{-1/2} \\mu_Z) + \\mu_X\n- predictive_mean = (\n- torch.matmul(\n- interp_term.transpose(-1, -2), (inducing_values - self.prior_distribution.mean).unsqueeze(-1)\n- ).squeeze(-1)\n- + test_mean\n- )\n+ predictive_mean = (interp_term.transpose(-1, -2) @ inducing_values.unsqueeze(-1)).squeeze(-1) + test_mean\n \n # Compute the covariance of q(f)\n # K_XX + k_XZ K_ZZ^{-1/2} (S - I) K_ZZ^{-1/2} k_ZX\n", "issue": "Code in variational_strategy.py\n# \ud83d\udc1b Bug\r\n\r\nDear Gpytorch Developers\r\n\r\nHere is a possible issue with variational_strategy.py\r\n\r\nI am wondering whether it is a special consideration or something missing. According to the formula in Line 121, it seems an L is missing in front of self.prior_distribution.mean in Line 115. I understand variable inducing_values has already included L.inv() according to whitening of the inducing posterior, but this is not the case for the prior. It is fortunate that self.prior_distribution.mean is always 0, so no errors.\r\n\r\nThank you for the great package\r\n\r\nJ.\r\n\r\n## To reproduce\r\n\r\n** Code snippet to reproduce **\r\n```python\r\n# Your code goes here\r\n# Please make sure it does not require any external dependencies (other than PyTorch!)\r\n# (We much prefer small snippets rather than links to existing libraries!)\r\n```\r\n\r\n** Stack trace/error message **\r\n```\r\n// Paste the bad output here!\r\n```\r\n\r\n## Expected Behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## System information\r\n\r\n**Please complete the following information:**\r\n- <!-- GPyTorch Version (run `print(gpytorch.__version__)` -->\r\n- <!-- PyTorch Version (run `print(torch.__version__)` -->\r\n- <!-- Computer OS -->\r\n\r\n## Additional context\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport warnings\n\nimport torch\n\nfrom .. import settings\nfrom ..distributions import MultivariateNormal\nfrom ..lazy import DiagLazyTensor, MatmulLazyTensor, RootLazyTensor, SumLazyTensor, TriangularLazyTensor, delazify\nfrom ..settings import trace_mode\nfrom ..utils.cholesky import psd_safe_cholesky\nfrom ..utils.errors import CachingError\nfrom ..utils.memoize import cached, clear_cache_hook, pop_from_cache_ignore_args\nfrom ..utils.warnings import OldVersionWarning\nfrom ._variational_strategy import _VariationalStrategy\n\n\ndef _ensure_updated_strategy_flag_set(\n state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs\n):\n device = state_dict[list(state_dict.keys())[0]].device\n if prefix + \"updated_strategy\" not in state_dict:\n state_dict[prefix + \"updated_strategy\"] = torch.tensor(False, device=device)\n warnings.warn(\n \"You have loaded a variational GP model (using `VariationalStrategy`) from a previous version of \"\n \"GPyTorch. 
We have updated the parameters of your model to work with the new version of \"\n \"`VariationalStrategy` that uses whitened parameters.\\nYour model will work as expected, but we \"\n \"recommend that you re-save your model.\",\n OldVersionWarning,\n )\n\n\nclass VariationalStrategy(_VariationalStrategy):\n r\"\"\"\n The standard variational strategy, as defined by `Hensman et al. (2015)`_.\n This strategy takes a set of :math:`m \\ll n` inducing points :math:`\\mathbf Z`\n and applies an approximate distribution :math:`q( \\mathbf u)` over their function values.\n (Here, we use the common notation :math:`\\mathbf u = f(\\mathbf Z)`.\n The approximate function distribution for any abitrary input :math:`\\mathbf X` is given by:\n\n .. math::\n\n q( f(\\mathbf X) ) = \\int p( f(\\mathbf X) \\mid \\mathbf u) q(\\mathbf u) \\: d\\mathbf u\n\n This variational strategy uses \"whitening\" to accelerate the optimization of the variational\n parameters. See `Matthews (2017)`_ for more info.\n\n :param ~gpytorch.models.ApproximateGP model: Model this strategy is applied to.\n Typically passed in when the VariationalStrategy is created in the\n __init__ method of the user defined model.\n :param torch.Tensor inducing_points: Tensor containing a set of inducing\n points to use for variational inference.\n :param ~gpytorch.variational.VariationalDistribution variational_distribution: A\n VariationalDistribution object that represents the form of the variational distribution :math:`q(\\mathbf u)`\n :param learn_inducing_locations: (Default True): Whether or not\n the inducing point locations :math:`\\mathbf Z` should be learned (i.e. are they\n parameters of the model).\n :type learn_inducing_locations: `bool`, optional\n\n .. _Hensman et al. (2015):\n http://proceedings.mlr.press/v38/hensman15.pdf\n .. 
_Matthews (2017):\n https://www.repository.cam.ac.uk/handle/1810/278022\n \"\"\"\n\n def __init__(self, model, inducing_points, variational_distribution, learn_inducing_locations=True):\n super().__init__(model, inducing_points, variational_distribution, learn_inducing_locations)\n self.register_buffer(\"updated_strategy\", torch.tensor(True))\n self._register_load_state_dict_pre_hook(_ensure_updated_strategy_flag_set)\n\n @cached(name=\"cholesky_factor\", ignore_args=True)\n def _cholesky_factor(self, induc_induc_covar):\n L = psd_safe_cholesky(delazify(induc_induc_covar).double(), jitter=settings.cholesky_jitter.value())\n return TriangularLazyTensor(L)\n\n @property\n @cached(name=\"prior_distribution_memo\")\n def prior_distribution(self):\n zeros = torch.zeros(\n self._variational_distribution.shape(),\n dtype=self._variational_distribution.dtype,\n device=self._variational_distribution.device,\n )\n ones = torch.ones_like(zeros)\n res = MultivariateNormal(zeros, DiagLazyTensor(ones))\n return res\n\n def forward(self, x, inducing_points, inducing_values, variational_inducing_covar=None, **kwargs):\n # Compute full prior distribution\n full_inputs = torch.cat([inducing_points, x], dim=-2)\n full_output = self.model.forward(full_inputs, **kwargs)\n full_covar = full_output.lazy_covariance_matrix\n\n # Covariance terms\n num_induc = inducing_points.size(-2)\n test_mean = full_output.mean[..., num_induc:]\n induc_induc_covar = full_covar[..., :num_induc, :num_induc].add_jitter()\n induc_data_covar = full_covar[..., :num_induc, num_induc:].evaluate()\n data_data_covar = full_covar[..., num_induc:, num_induc:]\n\n # Compute interpolation terms\n # K_ZZ^{-1/2} K_ZX\n # K_ZZ^{-1/2} \\mu_Z\n L = self._cholesky_factor(induc_induc_covar)\n if L.shape != induc_induc_covar.shape:\n # Aggressive caching can cause nasty shape incompatibilies when evaluating with different batch shapes\n # TODO: Use a hook fo this\n try:\n pop_from_cache_ignore_args(self, \"cholesky_factor\")\n except CachingError:\n pass\n L = self._cholesky_factor(induc_induc_covar)\n interp_term = L.inv_matmul(induc_data_covar.double()).to(full_inputs.dtype)\n\n # Compute the mean of q(f)\n # k_XZ K_ZZ^{-1/2} (m - K_ZZ^{-1/2} \\mu_Z) + \\mu_X\n predictive_mean = (\n torch.matmul(\n interp_term.transpose(-1, -2), (inducing_values - self.prior_distribution.mean).unsqueeze(-1)\n ).squeeze(-1)\n + test_mean\n )\n\n # Compute the covariance of q(f)\n # K_XX + k_XZ K_ZZ^{-1/2} (S - I) K_ZZ^{-1/2} k_ZX\n middle_term = self.prior_distribution.lazy_covariance_matrix.mul(-1)\n if variational_inducing_covar is not None:\n middle_term = SumLazyTensor(variational_inducing_covar, middle_term)\n\n if trace_mode.on():\n predictive_covar = (\n data_data_covar.add_jitter(1e-4).evaluate()\n + interp_term.transpose(-1, -2) @ middle_term.evaluate() @ interp_term\n )\n else:\n predictive_covar = SumLazyTensor(\n data_data_covar.add_jitter(1e-4),\n MatmulLazyTensor(interp_term.transpose(-1, -2), middle_term @ interp_term),\n )\n\n # Return the distribution\n return MultivariateNormal(predictive_mean, predictive_covar)\n\n def __call__(self, x, prior=False, **kwargs):\n if not self.updated_strategy.item() and not prior:\n with torch.no_grad():\n # Get unwhitened p(u)\n prior_function_dist = self(self.inducing_points, prior=True)\n prior_mean = prior_function_dist.loc\n L = self._cholesky_factor(prior_function_dist.lazy_covariance_matrix.add_jitter())\n\n # Temporarily turn off noise that's added to the mean\n orig_mean_init_std = 
self._variational_distribution.mean_init_std\n self._variational_distribution.mean_init_std = 0.0\n\n # Change the variational parameters to be whitened\n variational_dist = self.variational_distribution\n mean_diff = (variational_dist.loc - prior_mean).unsqueeze(-1).double()\n whitened_mean = L.inv_matmul(mean_diff).squeeze(-1).to(variational_dist.loc.dtype)\n covar_root = variational_dist.lazy_covariance_matrix.root_decomposition().root.evaluate().double()\n whitened_covar = RootLazyTensor(L.inv_matmul(covar_root).to(variational_dist.loc.dtype))\n whitened_variational_distribution = variational_dist.__class__(whitened_mean, whitened_covar)\n self._variational_distribution.initialize_variational_distribution(whitened_variational_distribution)\n\n # Reset the random noise parameter of the model\n self._variational_distribution.mean_init_std = orig_mean_init_std\n\n # Reset the cache\n clear_cache_hook(self)\n\n # Mark that we have updated the variational strategy\n self.updated_strategy.fill_(True)\n\n return super().__call__(x, prior=prior, **kwargs)\n", "path": "gpytorch/variational/variational_strategy.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport warnings\n\nimport torch\n\nfrom .. import settings\nfrom ..distributions import MultivariateNormal\nfrom ..lazy import DiagLazyTensor, MatmulLazyTensor, RootLazyTensor, SumLazyTensor, TriangularLazyTensor, delazify\nfrom ..settings import trace_mode\nfrom ..utils.cholesky import psd_safe_cholesky\nfrom ..utils.errors import CachingError\nfrom ..utils.memoize import cached, clear_cache_hook, pop_from_cache_ignore_args\nfrom ..utils.warnings import OldVersionWarning\nfrom ._variational_strategy import _VariationalStrategy\n\n\ndef _ensure_updated_strategy_flag_set(\n state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs\n):\n device = state_dict[list(state_dict.keys())[0]].device\n if prefix + \"updated_strategy\" not in state_dict:\n state_dict[prefix + \"updated_strategy\"] = torch.tensor(False, device=device)\n warnings.warn(\n \"You have loaded a variational GP model (using `VariationalStrategy`) from a previous version of \"\n \"GPyTorch. We have updated the parameters of your model to work with the new version of \"\n \"`VariationalStrategy` that uses whitened parameters.\\nYour model will work as expected, but we \"\n \"recommend that you re-save your model.\",\n OldVersionWarning,\n )\n\n\nclass VariationalStrategy(_VariationalStrategy):\n r\"\"\"\n The standard variational strategy, as defined by `Hensman et al. (2015)`_.\n This strategy takes a set of :math:`m \\ll n` inducing points :math:`\\mathbf Z`\n and applies an approximate distribution :math:`q( \\mathbf u)` over their function values.\n (Here, we use the common notation :math:`\\mathbf u = f(\\mathbf Z)`.\n The approximate function distribution for any abitrary input :math:`\\mathbf X` is given by:\n\n .. math::\n\n q( f(\\mathbf X) ) = \\int p( f(\\mathbf X) \\mid \\mathbf u) q(\\mathbf u) \\: d\\mathbf u\n\n This variational strategy uses \"whitening\" to accelerate the optimization of the variational\n parameters. 
See `Matthews (2017)`_ for more info.\n\n :param ~gpytorch.models.ApproximateGP model: Model this strategy is applied to.\n Typically passed in when the VariationalStrategy is created in the\n __init__ method of the user defined model.\n :param torch.Tensor inducing_points: Tensor containing a set of inducing\n points to use for variational inference.\n :param ~gpytorch.variational.VariationalDistribution variational_distribution: A\n VariationalDistribution object that represents the form of the variational distribution :math:`q(\\mathbf u)`\n :param learn_inducing_locations: (Default True): Whether or not\n the inducing point locations :math:`\\mathbf Z` should be learned (i.e. are they\n parameters of the model).\n :type learn_inducing_locations: `bool`, optional\n\n .. _Hensman et al. (2015):\n http://proceedings.mlr.press/v38/hensman15.pdf\n .. _Matthews (2017):\n https://www.repository.cam.ac.uk/handle/1810/278022\n \"\"\"\n\n def __init__(self, model, inducing_points, variational_distribution, learn_inducing_locations=True):\n super().__init__(model, inducing_points, variational_distribution, learn_inducing_locations)\n self.register_buffer(\"updated_strategy\", torch.tensor(True))\n self._register_load_state_dict_pre_hook(_ensure_updated_strategy_flag_set)\n\n @cached(name=\"cholesky_factor\", ignore_args=True)\n def _cholesky_factor(self, induc_induc_covar):\n L = psd_safe_cholesky(delazify(induc_induc_covar).double(), jitter=settings.cholesky_jitter.value())\n return TriangularLazyTensor(L)\n\n @property\n @cached(name=\"prior_distribution_memo\")\n def prior_distribution(self):\n zeros = torch.zeros(\n self._variational_distribution.shape(),\n dtype=self._variational_distribution.dtype,\n device=self._variational_distribution.device,\n )\n ones = torch.ones_like(zeros)\n res = MultivariateNormal(zeros, DiagLazyTensor(ones))\n return res\n\n def forward(self, x, inducing_points, inducing_values, variational_inducing_covar=None, **kwargs):\n # Compute full prior distribution\n full_inputs = torch.cat([inducing_points, x], dim=-2)\n full_output = self.model.forward(full_inputs, **kwargs)\n full_covar = full_output.lazy_covariance_matrix\n\n # Covariance terms\n num_induc = inducing_points.size(-2)\n test_mean = full_output.mean[..., num_induc:]\n induc_induc_covar = full_covar[..., :num_induc, :num_induc].add_jitter()\n induc_data_covar = full_covar[..., :num_induc, num_induc:].evaluate()\n data_data_covar = full_covar[..., num_induc:, num_induc:]\n\n # Compute interpolation terms\n # K_ZZ^{-1/2} K_ZX\n # K_ZZ^{-1/2} \\mu_Z\n L = self._cholesky_factor(induc_induc_covar)\n if L.shape != induc_induc_covar.shape:\n # Aggressive caching can cause nasty shape incompatibilies when evaluating with different batch shapes\n # TODO: Use a hook fo this\n try:\n pop_from_cache_ignore_args(self, \"cholesky_factor\")\n except CachingError:\n pass\n L = self._cholesky_factor(induc_induc_covar)\n interp_term = L.inv_matmul(induc_data_covar.double()).to(full_inputs.dtype)\n\n # Compute the mean of q(f)\n # k_XZ K_ZZ^{-1/2} (m - K_ZZ^{-1/2} \\mu_Z) + \\mu_X\n predictive_mean = (interp_term.transpose(-1, -2) @ inducing_values.unsqueeze(-1)).squeeze(-1) + test_mean\n\n # Compute the covariance of q(f)\n # K_XX + k_XZ K_ZZ^{-1/2} (S - I) K_ZZ^{-1/2} k_ZX\n middle_term = self.prior_distribution.lazy_covariance_matrix.mul(-1)\n if variational_inducing_covar is not None:\n middle_term = SumLazyTensor(variational_inducing_covar, middle_term)\n\n if trace_mode.on():\n predictive_covar = (\n 
data_data_covar.add_jitter(1e-4).evaluate()\n + interp_term.transpose(-1, -2) @ middle_term.evaluate() @ interp_term\n )\n else:\n predictive_covar = SumLazyTensor(\n data_data_covar.add_jitter(1e-4),\n MatmulLazyTensor(interp_term.transpose(-1, -2), middle_term @ interp_term),\n )\n\n # Return the distribution\n return MultivariateNormal(predictive_mean, predictive_covar)\n\n def __call__(self, x, prior=False, **kwargs):\n if not self.updated_strategy.item() and not prior:\n with torch.no_grad():\n # Get unwhitened p(u)\n prior_function_dist = self(self.inducing_points, prior=True)\n prior_mean = prior_function_dist.loc\n L = self._cholesky_factor(prior_function_dist.lazy_covariance_matrix.add_jitter())\n\n # Temporarily turn off noise that's added to the mean\n orig_mean_init_std = self._variational_distribution.mean_init_std\n self._variational_distribution.mean_init_std = 0.0\n\n # Change the variational parameters to be whitened\n variational_dist = self.variational_distribution\n mean_diff = (variational_dist.loc - prior_mean).unsqueeze(-1).double()\n whitened_mean = L.inv_matmul(mean_diff).squeeze(-1).to(variational_dist.loc.dtype)\n covar_root = variational_dist.lazy_covariance_matrix.root_decomposition().root.evaluate().double()\n whitened_covar = RootLazyTensor(L.inv_matmul(covar_root).to(variational_dist.loc.dtype))\n whitened_variational_distribution = variational_dist.__class__(whitened_mean, whitened_covar)\n self._variational_distribution.initialize_variational_distribution(whitened_variational_distribution)\n\n # Reset the random noise parameter of the model\n self._variational_distribution.mean_init_std = orig_mean_init_std\n\n # Reset the cache\n clear_cache_hook(self)\n\n # Mark that we have updated the variational strategy\n self.updated_strategy.fill_(True)\n\n return super().__call__(x, prior=prior, **kwargs)\n", "path": "gpytorch/variational/variational_strategy.py"}]}
| 2,955 | 240 |
gh_patches_debug_25309
|
rasdani/github-patches
|
git_diff
|
Parsl__parsl-2479
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Monitoring visualization show negative CPU usage with retries
**Describe the bug**
Currently, the logged CPU time in monitoring db is cumulative. This works for most of the cases, except when a task is failed and retried. As a result, multiple figures in the visualization will show a sawtooth.
**To Reproduce**
Steps to reproduce the behavior, e.g.:
1. Setup Parsl master with Python 3.6 on cluster
2. Run all the tests and open the visualization
3. See error
**Expected behavior**
Each retried task should be on a new row.
**Possible solution**
Add a new column (e.g., `retries`) to the table as a new primary key.
Fix visualization based on that
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsl/monitoring/visualization/plots/default/workflow_resource_plots.py`
Content:
```
1 import math
2 import numpy as np
3 import pandas as pd
4 import plotly.graph_objs as go
5 from plotly.offline import plot
6
7
8 def resource_distribution_plot(df_resources, df_task, type='psutil_process_time_user', label='CPU Time Distribution', option='avg', columns=20,):
9 # E.g., psutil_process_time_user or psutil_process_memory_percent
10
11 min_range = min(df_resources[type].astype('float'))
12 max_range = max(df_resources[type].astype('float'))
13 time_step = (max_range - min_range) / columns
14
15 if min_range == max_range:
16 x_axis = [min_range]
17 else:
18 x_axis = []
19 for i in np.arange(min_range, max_range + time_step, time_step):
20 x_axis.append(i)
21
22 apps_dict = dict()
23 for i in range(len(df_task)):
24 row = df_task.iloc[i]
25 apps_dict[row['task_id']] = []
26
27 def y_axis_setup():
28 items = [0] * len(x_axis)
29
30 for app, tasks in apps_dict.items():
31 if option == 'avg':
32 task = df_resources[df_resources['task_id'] ==
33 app][type].astype('float').mean()
34 elif option == 'max':
35 task = df_resources[df_resources['task_id'] == app][type].astype('float').max()
36
37 for i in range(len(x_axis) - 1):
38 a = task >= x_axis[i]
39 b = task < x_axis[i + 1]
40 if a and b:
41 items[i] += 1
42 break
43 if task >= x_axis[-1]:
44 items[-1] += 1
45 return items
46
47 if "memory" not in type:
48 xaxis = dict(autorange=True,
49 title='CPU user time (seconds)')
50 else:
51 xaxis = dict(autorange=True,
52 title='Memory usage (bytes)')
53 fig = go.Figure(
54 data=[go.Bar(x=x_axis,
55 y=y_axis_setup(),
56 name='tasks')],
57 layout=go.Layout(xaxis=xaxis,
58 yaxis=dict(title='Tasks'),
59 title=label + '(' + option + ')'))
60
61 return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
62
63
64 def resource_time_series(tasks, type='psutil_process_time_user', label='CPU user time'):
65 tasks['epoch_time'] = (pd.to_datetime(
66 tasks['timestamp']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
67 step = int(tasks['resource_monitoring_interval'][0])
68 start = tasks['epoch_time'].min()
69 end = tasks['epoch_time'].max()
70 tasks['relative_time'] = tasks['epoch_time'] - start
71 if end != start:
72 bins = pd.cut(tasks['relative_time'],
73 range(0, end - start + 1, step),
74 include_lowest=True)
75 df = tasks.groupby(bins, as_index=False)[type].mean()
76 df['time'] = step * df.index
77 fig = go.Figure(
78 data=[go.Scatter(x=df['time'],
79 y=df[type],
80 )],
81 layout=go.Layout(xaxis=dict(autorange=True,
82 title='Time (seconds)'),
83 yaxis=dict(title=label),
84 title=label))
85 else:
86 fig = go.Figure(
87 data=[go.Scatter(x=[0],
88 y=[tasks[type].mean()],
89 )],
90 layout=go.Layout(xaxis=dict(autorange=True,
91 title='Time (seconds)'),
92 yaxis=dict(title=label),
93 title=label))
94 return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
95
96
97 def worker_efficiency(task, node):
98 try:
99 node['epoch_time'] = (pd.to_datetime(
100 node['timestamp']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
101 task['epoch_time_start'] = (pd.to_datetime(
102 task['task_try_time_launched']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
103 task['epoch_time_running'] = (pd.to_datetime(
104 task['task_try_time_running']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
105 task['epoch_time_returned'] = (pd.to_datetime(
106 task['task_try_time_returned']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
107 start = int(min(task['epoch_time_start'].min(), node['epoch_time'].min()))
108 end = int(task['epoch_time_returned'].max())
109
110 worker_plot = [0] * (end - start + 1)
111 total_workers = node['worker_count'].sum()
112
113 for i, row in task.iterrows():
114 if math.isnan(row['epoch_time_running']):
115 # skip tasks with no running start time
116 continue
117 if math.isnan(row['epoch_time_returned']):
118 # there is no end time for this, so we should assume the "end" time
119 etr = end
120 else:
121 etr = int(row['epoch_time_returned'])
122 for j in range(int(row['epoch_time_running']), etr + 1):
123 worker_plot[j - start] += 1
124 fig = go.Figure(
125 data=[go.Scatter(x=list(range(0, end - start + 1)),
126 y=worker_plot,
127 name='Total busy workers',
128 ),
129 go.Scatter(x=list(range(0, end - start + 1)),
130 y=[total_workers] * (end - start + 1),
131 name='Total online workers',
132 )
133 ],
134 layout=go.Layout(xaxis=dict(autorange=True,
135 title='Time (seconds)'),
136 yaxis=dict(title='Number of workers'),
137 title="Worker efficiency"))
138 return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
139 except Exception as e:
140 print(e)
141 return "The worker efficiency plot cannot be generated due to missing data."
142
143
144 def resource_efficiency(resource, node, label='CPU'):
145 try:
146 resource['epoch_time'] = (pd.to_datetime(
147 resource['timestamp']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
148 node['epoch_time'] = (pd.to_datetime(
149 node['timestamp']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
150 resource = resource.sort_values(by='epoch_time')
151 start = min(resource['epoch_time'].min(), node['epoch_time'].min())
152 end = resource['epoch_time'].max()
153 resource['relative_time'] = resource['epoch_time'] - start
154 node['relative_time'] = node['epoch_time'] - start
155
156 task_plot = [0] * (end - start + 1)
157 if label == 'CPU':
158 total = node['cpu_count'].sum()
159 elif label == 'mem':
160 total = node['total_memory'].sum() / 1024 / 1024 / 1024
161
162 resource['total_cpu_time'] = resource['psutil_process_time_user'] + resource['psutil_process_time_system']
163 for task_id in resource['task_id'].unique():
164 tmp = resource[resource['task_id'] == task_id]
165 tmp['last_timestamp'] = tmp['relative_time'].shift(1)
166 if label == 'CPU':
167 tmp['last_cputime'] = tmp['total_cpu_time'].shift(1)
168 for index, row in tmp.iterrows():
169 if np.isnan(row['last_timestamp']):
170 continue
171 for i in range(int(row['last_timestamp']), int(row['relative_time'])):
172 if label == 'CPU':
173 diff = (row['total_cpu_time'] - row['last_cputime']) / (row['relative_time'] - row['last_timestamp'])
174 elif label == 'mem':
175 diff = row['psutil_process_memory_resident'] / 1024 / 1024 / 1024
176 task_plot[i] += diff
177
178 if label == 'CPU':
179 name1 = 'Used CPU cores'
180 name2 = 'Total CPU cores'
181 yaxis = 'Number of CPU cores'
182 title = 'CPU usage'
183 elif label == 'mem':
184 name1 = 'Used memory'
185 name2 = 'Total memory'
186 yaxis = 'Memory (GB)'
187 title = 'Memory usage'
188
189 fig = go.Figure(
190 data=[go.Scatter(x=list(range(0, end - start + 1)),
191 y=task_plot,
192 name=name1,
193 ),
194 go.Scatter(x=list(range(0, end - start + 1)),
195 y=[total] * (end - start + 1),
196 name=name2,
197 )
198 ],
199 layout=go.Layout(xaxis=dict(autorange=True,
200 title='Time (seconds)'),
201 yaxis=dict(title=yaxis),
202 title=title))
203 return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
204 except Exception as e:
205 print(e)
206 return "The resource efficiency plot cannot be generated because of exception {}.".format(e)
207
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py b/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py
--- a/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py
+++ b/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py
@@ -152,6 +152,7 @@
end = resource['epoch_time'].max()
resource['relative_time'] = resource['epoch_time'] - start
node['relative_time'] = node['epoch_time'] - start
+ resource['key'] = resource['task_id'].astype(str) + "-" + resource['try_id'].astype(str)
task_plot = [0] * (end - start + 1)
if label == 'CPU':
@@ -160,8 +161,8 @@
total = node['total_memory'].sum() / 1024 / 1024 / 1024
resource['total_cpu_time'] = resource['psutil_process_time_user'] + resource['psutil_process_time_system']
- for task_id in resource['task_id'].unique():
- tmp = resource[resource['task_id'] == task_id]
+ for key in resource['key'].unique():
+ tmp = resource[resource['key'] == key]
tmp['last_timestamp'] = tmp['relative_time'].shift(1)
if label == 'CPU':
tmp['last_cputime'] = tmp['total_cpu_time'].shift(1)
|
{"golden_diff": "diff --git a/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py b/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py\n--- a/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py\n+++ b/parsl/monitoring/visualization/plots/default/workflow_resource_plots.py\n@@ -152,6 +152,7 @@\n end = resource['epoch_time'].max()\n resource['relative_time'] = resource['epoch_time'] - start\n node['relative_time'] = node['epoch_time'] - start\n+ resource['key'] = resource['task_id'].astype(str) + \"-\" + resource['try_id'].astype(str)\n \n task_plot = [0] * (end - start + 1)\n if label == 'CPU':\n@@ -160,8 +161,8 @@\n total = node['total_memory'].sum() / 1024 / 1024 / 1024\n \n resource['total_cpu_time'] = resource['psutil_process_time_user'] + resource['psutil_process_time_system']\n- for task_id in resource['task_id'].unique():\n- tmp = resource[resource['task_id'] == task_id]\n+ for key in resource['key'].unique():\n+ tmp = resource[resource['key'] == key]\n tmp['last_timestamp'] = tmp['relative_time'].shift(1)\n if label == 'CPU':\n tmp['last_cputime'] = tmp['total_cpu_time'].shift(1)\n", "issue": "Monitoring visualization show negative CPU usage with retries\n**Describe the bug**\r\nCurrently, the logged CPU time in monitoring db is cumulative. This works for most of the cases, except when a task is failed and retried. As a result, multiple figures in the visualization will show a sawtooth.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior, for e.g:\r\n1. Setup Parsl master with Python 3.6 on cluster\r\n2. Run all the tests and open the visualization\r\n3. See error\r\n\r\n**Expected behavior**\r\nEach retried task should be on a new row.\r\n\r\n**Possible solution**\r\nAdd a new column (e.g., `retries`) to the table as a new primary key.\r\nFix visualization based on that\r\n\n", "before_files": [{"content": "import math\nimport numpy as np\nimport pandas as pd\nimport plotly.graph_objs as go\nfrom plotly.offline import plot\n\n\ndef resource_distribution_plot(df_resources, df_task, type='psutil_process_time_user', label='CPU Time Distribution', option='avg', columns=20,):\n # E.g., psutil_process_time_user or psutil_process_memory_percent\n\n min_range = min(df_resources[type].astype('float'))\n max_range = max(df_resources[type].astype('float'))\n time_step = (max_range - min_range) / columns\n\n if min_range == max_range:\n x_axis = [min_range]\n else:\n x_axis = []\n for i in np.arange(min_range, max_range + time_step, time_step):\n x_axis.append(i)\n\n apps_dict = dict()\n for i in range(len(df_task)):\n row = df_task.iloc[i]\n apps_dict[row['task_id']] = []\n\n def y_axis_setup():\n items = [0] * len(x_axis)\n\n for app, tasks in apps_dict.items():\n if option == 'avg':\n task = df_resources[df_resources['task_id'] ==\n app][type].astype('float').mean()\n elif option == 'max':\n task = df_resources[df_resources['task_id'] == app][type].astype('float').max()\n\n for i in range(len(x_axis) - 1):\n a = task >= x_axis[i]\n b = task < x_axis[i + 1]\n if a and b:\n items[i] += 1\n break\n if task >= x_axis[-1]:\n items[-1] += 1\n return items\n\n if \"memory\" not in type:\n xaxis = dict(autorange=True,\n title='CPU user time (seconds)')\n else:\n xaxis = dict(autorange=True,\n title='Memory usage (bytes)')\n fig = go.Figure(\n data=[go.Bar(x=x_axis,\n y=y_axis_setup(),\n name='tasks')],\n layout=go.Layout(xaxis=xaxis,\n yaxis=dict(title='Tasks'),\n title=label + '(' + option + ')'))\n\n return plot(fig, 
show_link=False, output_type=\"div\", include_plotlyjs=False)\n\n\ndef resource_time_series(tasks, type='psutil_process_time_user', label='CPU user time'):\n tasks['epoch_time'] = (pd.to_datetime(\n tasks['timestamp']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n step = int(tasks['resource_monitoring_interval'][0])\n start = tasks['epoch_time'].min()\n end = tasks['epoch_time'].max()\n tasks['relative_time'] = tasks['epoch_time'] - start\n if end != start:\n bins = pd.cut(tasks['relative_time'],\n range(0, end - start + 1, step),\n include_lowest=True)\n df = tasks.groupby(bins, as_index=False)[type].mean()\n df['time'] = step * df.index\n fig = go.Figure(\n data=[go.Scatter(x=df['time'],\n y=df[type],\n )],\n layout=go.Layout(xaxis=dict(autorange=True,\n title='Time (seconds)'),\n yaxis=dict(title=label),\n title=label))\n else:\n fig = go.Figure(\n data=[go.Scatter(x=[0],\n y=[tasks[type].mean()],\n )],\n layout=go.Layout(xaxis=dict(autorange=True,\n title='Time (seconds)'),\n yaxis=dict(title=label),\n title=label))\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n\n\ndef worker_efficiency(task, node):\n try:\n node['epoch_time'] = (pd.to_datetime(\n node['timestamp']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n task['epoch_time_start'] = (pd.to_datetime(\n task['task_try_time_launched']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n task['epoch_time_running'] = (pd.to_datetime(\n task['task_try_time_running']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n task['epoch_time_returned'] = (pd.to_datetime(\n task['task_try_time_returned']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n start = int(min(task['epoch_time_start'].min(), node['epoch_time'].min()))\n end = int(task['epoch_time_returned'].max())\n\n worker_plot = [0] * (end - start + 1)\n total_workers = node['worker_count'].sum()\n\n for i, row in task.iterrows():\n if math.isnan(row['epoch_time_running']):\n # skip tasks with no running start time\n continue\n if math.isnan(row['epoch_time_returned']):\n # there is no end time for this, so we should assume the \"end\" time\n etr = end\n else:\n etr = int(row['epoch_time_returned'])\n for j in range(int(row['epoch_time_running']), etr + 1):\n worker_plot[j - start] += 1\n fig = go.Figure(\n data=[go.Scatter(x=list(range(0, end - start + 1)),\n y=worker_plot,\n name='Total busy workers',\n ),\n go.Scatter(x=list(range(0, end - start + 1)),\n y=[total_workers] * (end - start + 1),\n name='Total online workers',\n )\n ],\n layout=go.Layout(xaxis=dict(autorange=True,\n title='Time (seconds)'),\n yaxis=dict(title='Number of workers'),\n title=\"Worker efficiency\"))\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n except Exception as e:\n print(e)\n return \"The worker efficiency plot cannot be generated due to missing data.\"\n\n\ndef resource_efficiency(resource, node, label='CPU'):\n try:\n resource['epoch_time'] = (pd.to_datetime(\n resource['timestamp']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n node['epoch_time'] = (pd.to_datetime(\n node['timestamp']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n resource = resource.sort_values(by='epoch_time')\n start = min(resource['epoch_time'].min(), node['epoch_time'].min())\n end = resource['epoch_time'].max()\n resource['relative_time'] = resource['epoch_time'] - start\n node['relative_time'] = node['epoch_time'] - start\n\n task_plot = [0] * (end - start + 1)\n if label == 'CPU':\n total = 
node['cpu_count'].sum()\n elif label == 'mem':\n total = node['total_memory'].sum() / 1024 / 1024 / 1024\n\n resource['total_cpu_time'] = resource['psutil_process_time_user'] + resource['psutil_process_time_system']\n for task_id in resource['task_id'].unique():\n tmp = resource[resource['task_id'] == task_id]\n tmp['last_timestamp'] = tmp['relative_time'].shift(1)\n if label == 'CPU':\n tmp['last_cputime'] = tmp['total_cpu_time'].shift(1)\n for index, row in tmp.iterrows():\n if np.isnan(row['last_timestamp']):\n continue\n for i in range(int(row['last_timestamp']), int(row['relative_time'])):\n if label == 'CPU':\n diff = (row['total_cpu_time'] - row['last_cputime']) / (row['relative_time'] - row['last_timestamp'])\n elif label == 'mem':\n diff = row['psutil_process_memory_resident'] / 1024 / 1024 / 1024\n task_plot[i] += diff\n\n if label == 'CPU':\n name1 = 'Used CPU cores'\n name2 = 'Total CPU cores'\n yaxis = 'Number of CPU cores'\n title = 'CPU usage'\n elif label == 'mem':\n name1 = 'Used memory'\n name2 = 'Total memory'\n yaxis = 'Memory (GB)'\n title = 'Memory usage'\n\n fig = go.Figure(\n data=[go.Scatter(x=list(range(0, end - start + 1)),\n y=task_plot,\n name=name1,\n ),\n go.Scatter(x=list(range(0, end - start + 1)),\n y=[total] * (end - start + 1),\n name=name2,\n )\n ],\n layout=go.Layout(xaxis=dict(autorange=True,\n title='Time (seconds)'),\n yaxis=dict(title=yaxis),\n title=title))\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n except Exception as e:\n print(e)\n return \"The resource efficiency plot cannot be generated because of exception {}.\".format(e)\n", "path": "parsl/monitoring/visualization/plots/default/workflow_resource_plots.py"}], "after_files": [{"content": "import math\nimport numpy as np\nimport pandas as pd\nimport plotly.graph_objs as go\nfrom plotly.offline import plot\n\n\ndef resource_distribution_plot(df_resources, df_task, type='psutil_process_time_user', label='CPU Time Distribution', option='avg', columns=20,):\n # E.g., psutil_process_time_user or psutil_process_memory_percent\n\n min_range = min(df_resources[type].astype('float'))\n max_range = max(df_resources[type].astype('float'))\n time_step = (max_range - min_range) / columns\n\n if min_range == max_range:\n x_axis = [min_range]\n else:\n x_axis = []\n for i in np.arange(min_range, max_range + time_step, time_step):\n x_axis.append(i)\n\n apps_dict = dict()\n for i in range(len(df_task)):\n row = df_task.iloc[i]\n apps_dict[row['task_id']] = []\n\n def y_axis_setup():\n items = [0] * len(x_axis)\n\n for app, tasks in apps_dict.items():\n if option == 'avg':\n task = df_resources[df_resources['task_id'] ==\n app][type].astype('float').mean()\n elif option == 'max':\n task = df_resources[df_resources['task_id'] == app][type].astype('float').max()\n\n for i in range(len(x_axis) - 1):\n a = task >= x_axis[i]\n b = task < x_axis[i + 1]\n if a and b:\n items[i] += 1\n break\n if task >= x_axis[-1]:\n items[-1] += 1\n return items\n\n if \"memory\" not in type:\n xaxis = dict(autorange=True,\n title='CPU user time (seconds)')\n else:\n xaxis = dict(autorange=True,\n title='Memory usage (bytes)')\n fig = go.Figure(\n data=[go.Bar(x=x_axis,\n y=y_axis_setup(),\n name='tasks')],\n layout=go.Layout(xaxis=xaxis,\n yaxis=dict(title='Tasks'),\n title=label + '(' + option + ')'))\n\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n\n\ndef resource_time_series(tasks, type='psutil_process_time_user', label='CPU user time'):\n 
tasks['epoch_time'] = (pd.to_datetime(\n tasks['timestamp']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n step = int(tasks['resource_monitoring_interval'][0])\n start = tasks['epoch_time'].min()\n end = tasks['epoch_time'].max()\n tasks['relative_time'] = tasks['epoch_time'] - start\n if end != start:\n bins = pd.cut(tasks['relative_time'],\n range(0, end - start + 1, step),\n include_lowest=True)\n df = tasks.groupby(bins, as_index=False)[type].mean()\n df['time'] = step * df.index\n fig = go.Figure(\n data=[go.Scatter(x=df['time'],\n y=df[type],\n )],\n layout=go.Layout(xaxis=dict(autorange=True,\n title='Time (seconds)'),\n yaxis=dict(title=label),\n title=label))\n else:\n fig = go.Figure(\n data=[go.Scatter(x=[0],\n y=[tasks[type].mean()],\n )],\n layout=go.Layout(xaxis=dict(autorange=True,\n title='Time (seconds)'),\n yaxis=dict(title=label),\n title=label))\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n\n\ndef worker_efficiency(task, node):\n try:\n node['epoch_time'] = (pd.to_datetime(\n node['timestamp']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n task['epoch_time_start'] = (pd.to_datetime(\n task['task_try_time_launched']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n task['epoch_time_running'] = (pd.to_datetime(\n task['task_try_time_running']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n task['epoch_time_returned'] = (pd.to_datetime(\n task['task_try_time_returned']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n start = int(min(task['epoch_time_start'].min(), node['epoch_time'].min()))\n end = int(task['epoch_time_returned'].max())\n\n worker_plot = [0] * (end - start + 1)\n total_workers = node['worker_count'].sum()\n\n for i, row in task.iterrows():\n if math.isnan(row['epoch_time_running']):\n # skip tasks with no running start time\n continue\n if math.isnan(row['epoch_time_returned']):\n # there is no end time for this, so we should assume the \"end\" time\n etr = end\n else:\n etr = int(row['epoch_time_returned'])\n for j in range(int(row['epoch_time_running']), etr + 1):\n worker_plot[j - start] += 1\n fig = go.Figure(\n data=[go.Scatter(x=list(range(0, end - start + 1)),\n y=worker_plot,\n name='Total busy workers',\n ),\n go.Scatter(x=list(range(0, end - start + 1)),\n y=[total_workers] * (end - start + 1),\n name='Total online workers',\n )\n ],\n layout=go.Layout(xaxis=dict(autorange=True,\n title='Time (seconds)'),\n yaxis=dict(title='Number of workers'),\n title=\"Worker efficiency\"))\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n except Exception as e:\n print(e)\n return \"The worker efficiency plot cannot be generated due to missing data.\"\n\n\ndef resource_efficiency(resource, node, label='CPU'):\n try:\n resource['epoch_time'] = (pd.to_datetime(\n resource['timestamp']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n node['epoch_time'] = (pd.to_datetime(\n node['timestamp']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n resource = resource.sort_values(by='epoch_time')\n start = min(resource['epoch_time'].min(), node['epoch_time'].min())\n end = resource['epoch_time'].max()\n resource['relative_time'] = resource['epoch_time'] - start\n node['relative_time'] = node['epoch_time'] - start\n resource['key'] = resource['task_id'].astype(str) + \"-\" + resource['try_id'].astype(str)\n\n task_plot = [0] * (end - start + 1)\n if label == 'CPU':\n total = node['cpu_count'].sum()\n elif label == 'mem':\n total = 
node['total_memory'].sum() / 1024 / 1024 / 1024\n\n resource['total_cpu_time'] = resource['psutil_process_time_user'] + resource['psutil_process_time_system']\n for key in resource['key'].unique():\n tmp = resource[resource['key'] == key]\n tmp['last_timestamp'] = tmp['relative_time'].shift(1)\n if label == 'CPU':\n tmp['last_cputime'] = tmp['total_cpu_time'].shift(1)\n for index, row in tmp.iterrows():\n if np.isnan(row['last_timestamp']):\n continue\n for i in range(int(row['last_timestamp']), int(row['relative_time'])):\n if label == 'CPU':\n diff = (row['total_cpu_time'] - row['last_cputime']) / (row['relative_time'] - row['last_timestamp'])\n elif label == 'mem':\n diff = row['psutil_process_memory_resident'] / 1024 / 1024 / 1024\n task_plot[i] += diff\n\n if label == 'CPU':\n name1 = 'Used CPU cores'\n name2 = 'Total CPU cores'\n yaxis = 'Number of CPU cores'\n title = 'CPU usage'\n elif label == 'mem':\n name1 = 'Used memory'\n name2 = 'Total memory'\n yaxis = 'Memory (GB)'\n title = 'Memory usage'\n\n fig = go.Figure(\n data=[go.Scatter(x=list(range(0, end - start + 1)),\n y=task_plot,\n name=name1,\n ),\n go.Scatter(x=list(range(0, end - start + 1)),\n y=[total] * (end - start + 1),\n name=name2,\n )\n ],\n layout=go.Layout(xaxis=dict(autorange=True,\n title='Time (seconds)'),\n yaxis=dict(title=yaxis),\n title=title))\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n except Exception as e:\n print(e)\n return \"The resource efficiency plot cannot be generated because of exception {}.\".format(e)\n", "path": "parsl/monitoring/visualization/plots/default/workflow_resource_plots.py"}]}
| 2,973 | 342 |
gh_patches_debug_30742
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-265
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
More filters for table list API
**Problem**
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
The table list API should allow filtering. For example, we might want to get the list of all tables in a schema to see if the table the user is trying to create already exists in that schema.
**Proposed solution**
<!-- A clear and concise description of your proposed solution or feature. -->
The table list endpoint should support filtering by:
- schema
- before/after: created, last updated
- whether the import was verified
**Additional context**
<!-- Add any other context or screenshots about the feature request here.-->
We should use `django-filter` since it integrates with DRF and makes setting up filters easy.
--- END ISSUE ---
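For orientation, here is a minimal, hypothetical sketch of how `django-filter` exposes this kind of filtering on a DRF list endpoint. It is not the repository's code: the class and filter names are illustrative, and because `Table.name` and `Schema.name` are Python properties rather than database columns, the actual fix (see the golden diff further down) uses `django-property-filter` rather than plain `django-filter`.

```python
# Illustrative only -- assumes a stock django-filter FilterSet wired into DRF
# via DjangoFilterBackend; the names below are not taken from the repository.
import django_filters

from mathesar.models import Table


class TableFilterSketch(django_filters.FilterSet):
    # filter by the parent schema's primary key, e.g. ?schema=3
    schema = django_filters.NumberFilter(field_name="schema")
    # half-open time windows, e.g. ?created_after=2021-06-01&created_before=2021-07-01
    created = django_filters.DateTimeFromToRangeFilter(field_name="created_at")
    updated = django_filters.DateTimeFromToRangeFilter(field_name="updated_at")
    # e.g. ?import_verified=true
    import_verified = django_filters.BooleanFilter(field_name="import_verified")

    class Meta:
        model = Table
        fields = []  # every filter is declared explicitly above
```

With DRF this would typically be enabled by adding `django_filters.rest_framework.DjangoFilterBackend` to the view's `filter_backends` and pointing `filterset_class` at a class like the one above.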
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/models.py`
Content:
```
1 from django.contrib.auth.models import User
2 from django.core.cache import cache
3 from django.db import models
4 from django.utils.functional import cached_property
5
6 from mathesar.database.base import create_mathesar_engine
7 from mathesar.utils import models as model_utils
8 from db import tables, records, schemas
9
10 NAME_CACHE_INTERVAL = 60 * 5
11
12
13 class BaseModel(models.Model):
14 created_at = models.DateTimeField(auto_now_add=True)
15 updated_at = models.DateTimeField(auto_now=True)
16
17 class Meta:
18 abstract = True
19
20
21 class DatabaseObject(BaseModel):
22 oid = models.IntegerField()
23
24 class Meta:
25 abstract = True
26
27 def __str__(self):
28 return f"{self.__class__.__name__}: {self.oid}"
29
30
31 class Schema(DatabaseObject):
32 database = models.CharField(max_length=128)
33
34 @cached_property
35 def _sa_engine(self):
36 # We're caching this since the engine is used frequently.
37 return create_mathesar_engine(self.database)
38
39 @cached_property
40 def name(self):
41 cache_key = f"{self.database}_schema_name_{self.oid}"
42 try:
43 schema_name = cache.get(cache_key)
44 if schema_name is None:
45 schema_name = schemas.get_schema_name_from_oid(
46 self.oid, self._sa_engine
47 )
48 cache.set(cache_key, schema_name, NAME_CACHE_INTERVAL)
49 return schema_name
50 # We catch this error, since it lets us decouple the cadence of
51 # overall DB reflection from the cadence of cache expiration for
52 # schema names. Also, it makes it obvious when the DB layer has
53 # been altered, as opposed to other reasons for a 404 when
54 # requesting a schema.
55 except TypeError:
56 return 'MISSING'
57
58
59 class Table(DatabaseObject):
60 schema = models.ForeignKey('Schema', on_delete=models.CASCADE,
61 related_name='tables')
62 import_verified = models.BooleanField(blank=True, null=True)
63
64 @cached_property
65 def _sa_table(self):
66 try:
67 table = tables.reflect_table_from_oid(
68 self.oid, self.schema._sa_engine,
69 )
70 # We catch this error, since it lets us decouple the cadence of
71 # overall DB reflection from the cadence of cache expiration for
72 # table names. Also, it makes it obvious when the DB layer has
73 # been altered, as opposed to other reasons for a 404 when
74 # requesting a table.
75 except TypeError:
76 table = tables.create_empty_table("MISSING")
77 return table
78
79 @cached_property
80 def name(self):
81 return self._sa_table.name
82
83 @property
84 def sa_columns(self):
85 return self._sa_table.columns
86
87 @property
88 def sa_column_names(self):
89 return self.sa_columns.keys()
90
91 @property
92 def sa_num_records(self):
93 return tables.get_count(self._sa_table, self.schema._sa_engine)
94
95 @property
96 def sa_all_records(self):
97 return records.get_records(self._sa_table, self.schema._sa_engine)
98
99 def get_record(self, id_value):
100 return records.get_record(self._sa_table, self.schema._sa_engine, id_value)
101
102 def get_records(self, limit=None, offset=None):
103 return records.get_records(self._sa_table, self.schema._sa_engine, limit, offset)
104
105 def create_record_or_records(self, record_data):
106 return records.create_record_or_records(self._sa_table, self.schema._sa_engine, record_data)
107
108 def update_record(self, id_value, record_data):
109 return records.update_record(self._sa_table, self.schema._sa_engine, id_value, record_data)
110
111 def delete_record(self, id_value):
112 return records.delete_record(self._sa_table, self.schema._sa_engine, id_value)
113
114
115 class DataFile(BaseModel):
116 file = models.FileField(
117 upload_to=model_utils.user_directory_path,
118 )
119 user = models.ForeignKey(User, blank=True, null=True, on_delete=models.CASCADE)
120 table_imported_to = models.ForeignKey(Table, related_name="data_files", blank=True,
121 null=True, on_delete=models.SET_NULL)
122 delimiter = models.CharField(max_length=1, default=',', blank=True)
123 escapechar = models.CharField(max_length=1, blank=True)
124 quotechar = models.CharField(max_length=1, default='"', blank=True)
125
```
Path: `mathesar/filters.py`
Content:
```
1 from django_property_filter import (
2 PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter,
3 )
4
5 from mathesar.models import Schema, Table
6
7
8 class CharInFilter(PropertyBaseInFilter, PropertyCharFilter):
9 pass
10
11
12 class SchemaFilter(PropertyFilterSet):
13 database = CharInFilter(field_name='database', lookup_expr='in')
14 name = CharInFilter(field_name='name', lookup_expr='in')
15
16 class Meta:
17 model = Schema
18 fields = ['database', 'name']
19
20
21 class TableFilter(PropertyFilterSet):
22 name = CharInFilter(field_name='name', lookup_expr='in')
23
24 class Meta:
25 model = Table
26 fields = ['name']
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mathesar/filters.py b/mathesar/filters.py
--- a/mathesar/filters.py
+++ b/mathesar/filters.py
@@ -1,5 +1,6 @@
from django_property_filter import (
PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter,
+ PropertyDateTimeFromToRangeFilter, PropertyBooleanFilter
)
from mathesar.models import Schema, Table
@@ -20,7 +21,13 @@
class TableFilter(PropertyFilterSet):
name = CharInFilter(field_name='name', lookup_expr='in')
+ schema = CharInFilter(field_name='schema__name', lookup_expr='in')
+ created = PropertyDateTimeFromToRangeFilter(field_name='created_at')
+ updated = PropertyDateTimeFromToRangeFilter(field_name='updated_at')
+ import_verified = PropertyBooleanFilter(field_name='import_verified')
+ not_imported = PropertyBooleanFilter(lookup_expr="isnull",
+ field_name='import_verified')
class Meta:
model = Table
- fields = ['name']
+ fields = ['name', 'schema', 'created_at', 'updated_at', 'import_verified']
diff --git a/mathesar/models.py b/mathesar/models.py
--- a/mathesar/models.py
+++ b/mathesar/models.py
@@ -28,13 +28,21 @@
return f"{self.__class__.__name__}: {self.oid}"
+# TODO: Replace with a proper form of caching
+# See: https://github.com/centerofci/mathesar/issues/280
+_engine = None
+
+
class Schema(DatabaseObject):
database = models.CharField(max_length=128)
- @cached_property
+ @property
def _sa_engine(self):
+ global _engine
# We're caching this since the engine is used frequently.
- return create_mathesar_engine(self.database)
+ if _engine is None:
+ _engine = create_mathesar_engine(self.database)
+ return _engine
@cached_property
def name(self):
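As a quick sanity check of the filters added above, the snippet below shows how a client might exercise them through the query string. This is a hypothetical illustration: the host and route are assumptions, and the `*_after`/`*_before` suffixes come from django-filter's from/to range filters, which `django-property-filter` mirrors.

```python
# Hypothetical query-string examples for the new table-list filters; the base
# URL is an assumption, not taken from the repository.
from requests.models import PreparedRequest


def table_list_url(params):
    req = PreparedRequest()
    req.prepare_url("http://localhost:8000/api/v0/tables/", params)
    return req.url


# tables in either of two schemas (CharInFilter accepts comma-separated values)
print(table_list_url({"schema": "public,library"}))
# tables created inside a time window
print(table_list_url({"created_after": "2021-06-01T00:00:00",
                      "created_before": "2021-07-01T00:00:00"}))
# only verified imports, or tables that were never imported at all
print(table_list_url({"import_verified": "true"}))
print(table_list_url({"not_imported": "true"}))
```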
|
{"golden_diff": "diff --git a/mathesar/filters.py b/mathesar/filters.py\n--- a/mathesar/filters.py\n+++ b/mathesar/filters.py\n@@ -1,5 +1,6 @@\n from django_property_filter import (\n PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter,\n+ PropertyDateTimeFromToRangeFilter, PropertyBooleanFilter\n )\n \n from mathesar.models import Schema, Table\n@@ -20,7 +21,13 @@\n \n class TableFilter(PropertyFilterSet):\n name = CharInFilter(field_name='name', lookup_expr='in')\n+ schema = CharInFilter(field_name='schema__name', lookup_expr='in')\n+ created = PropertyDateTimeFromToRangeFilter(field_name='created_at')\n+ updated = PropertyDateTimeFromToRangeFilter(field_name='updated_at')\n+ import_verified = PropertyBooleanFilter(field_name='import_verified')\n+ not_imported = PropertyBooleanFilter(lookup_expr=\"isnull\",\n+ field_name='import_verified')\n \n class Meta:\n model = Table\n- fields = ['name']\n+ fields = ['name', 'schema', 'created_at', 'updated_at', 'import_verified']\ndiff --git a/mathesar/models.py b/mathesar/models.py\n--- a/mathesar/models.py\n+++ b/mathesar/models.py\n@@ -28,13 +28,21 @@\n return f\"{self.__class__.__name__}: {self.oid}\"\n \n \n+# TODO: Replace with a proper form of caching\n+# See: https://github.com/centerofci/mathesar/issues/280\n+_engine = None\n+\n+\n class Schema(DatabaseObject):\n database = models.CharField(max_length=128)\n \n- @cached_property\n+ @property\n def _sa_engine(self):\n+ global _engine\n # We're caching this since the engine is used frequently.\n- return create_mathesar_engine(self.database)\n+ if _engine is None:\n+ _engine = create_mathesar_engine(self.database)\n+ return _engine\n \n @cached_property\n def name(self):\n", "issue": "More filters for table list API\n**Problem**\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nThe table list API should allow filtering. For example, we might want to get the list of all tables in a schema to see if the table the user is trying to create already exists in that schema.\r\n\r\n**Proposed solution**\r\n<!-- A clear and concise description of your proposed solution or feature. 
-->\r\nThe table list endpoint should support filtering by:\r\n- schema\r\n- before/after: created, last updated\r\n- whether the import was verified\r\n\r\n**Additional context**\r\n<!-- Add any other context or screenshots about the feature request here.-->\r\nWe should use `django-filter` since it integrates with DRF and makes setting up filters easy.\n", "before_files": [{"content": "from django.contrib.auth.models import User\nfrom django.core.cache import cache\nfrom django.db import models\nfrom django.utils.functional import cached_property\n\nfrom mathesar.database.base import create_mathesar_engine\nfrom mathesar.utils import models as model_utils\nfrom db import tables, records, schemas\n\nNAME_CACHE_INTERVAL = 60 * 5\n\n\nclass BaseModel(models.Model):\n created_at = models.DateTimeField(auto_now_add=True)\n updated_at = models.DateTimeField(auto_now=True)\n\n class Meta:\n abstract = True\n\n\nclass DatabaseObject(BaseModel):\n oid = models.IntegerField()\n\n class Meta:\n abstract = True\n\n def __str__(self):\n return f\"{self.__class__.__name__}: {self.oid}\"\n\n\nclass Schema(DatabaseObject):\n database = models.CharField(max_length=128)\n\n @cached_property\n def _sa_engine(self):\n # We're caching this since the engine is used frequently.\n return create_mathesar_engine(self.database)\n\n @cached_property\n def name(self):\n cache_key = f\"{self.database}_schema_name_{self.oid}\"\n try:\n schema_name = cache.get(cache_key)\n if schema_name is None:\n schema_name = schemas.get_schema_name_from_oid(\n self.oid, self._sa_engine\n )\n cache.set(cache_key, schema_name, NAME_CACHE_INTERVAL)\n return schema_name\n # We catch this error, since it lets us decouple the cadence of\n # overall DB reflection from the cadence of cache expiration for\n # schema names. Also, it makes it obvious when the DB layer has\n # been altered, as opposed to other reasons for a 404 when\n # requesting a schema.\n except TypeError:\n return 'MISSING'\n\n\nclass Table(DatabaseObject):\n schema = models.ForeignKey('Schema', on_delete=models.CASCADE,\n related_name='tables')\n import_verified = models.BooleanField(blank=True, null=True)\n\n @cached_property\n def _sa_table(self):\n try:\n table = tables.reflect_table_from_oid(\n self.oid, self.schema._sa_engine,\n )\n # We catch this error, since it lets us decouple the cadence of\n # overall DB reflection from the cadence of cache expiration for\n # table names. 
Also, it makes it obvious when the DB layer has\n # been altered, as opposed to other reasons for a 404 when\n # requesting a table.\n except TypeError:\n table = tables.create_empty_table(\"MISSING\")\n return table\n\n @cached_property\n def name(self):\n return self._sa_table.name\n\n @property\n def sa_columns(self):\n return self._sa_table.columns\n\n @property\n def sa_column_names(self):\n return self.sa_columns.keys()\n\n @property\n def sa_num_records(self):\n return tables.get_count(self._sa_table, self.schema._sa_engine)\n\n @property\n def sa_all_records(self):\n return records.get_records(self._sa_table, self.schema._sa_engine)\n\n def get_record(self, id_value):\n return records.get_record(self._sa_table, self.schema._sa_engine, id_value)\n\n def get_records(self, limit=None, offset=None):\n return records.get_records(self._sa_table, self.schema._sa_engine, limit, offset)\n\n def create_record_or_records(self, record_data):\n return records.create_record_or_records(self._sa_table, self.schema._sa_engine, record_data)\n\n def update_record(self, id_value, record_data):\n return records.update_record(self._sa_table, self.schema._sa_engine, id_value, record_data)\n\n def delete_record(self, id_value):\n return records.delete_record(self._sa_table, self.schema._sa_engine, id_value)\n\n\nclass DataFile(BaseModel):\n file = models.FileField(\n upload_to=model_utils.user_directory_path,\n )\n user = models.ForeignKey(User, blank=True, null=True, on_delete=models.CASCADE)\n table_imported_to = models.ForeignKey(Table, related_name=\"data_files\", blank=True,\n null=True, on_delete=models.SET_NULL)\n delimiter = models.CharField(max_length=1, default=',', blank=True)\n escapechar = models.CharField(max_length=1, blank=True)\n quotechar = models.CharField(max_length=1, default='\"', blank=True)\n", "path": "mathesar/models.py"}, {"content": "from django_property_filter import (\n PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter,\n)\n\nfrom mathesar.models import Schema, Table\n\n\nclass CharInFilter(PropertyBaseInFilter, PropertyCharFilter):\n pass\n\n\nclass SchemaFilter(PropertyFilterSet):\n database = CharInFilter(field_name='database', lookup_expr='in')\n name = CharInFilter(field_name='name', lookup_expr='in')\n\n class Meta:\n model = Schema\n fields = ['database', 'name']\n\n\nclass TableFilter(PropertyFilterSet):\n name = CharInFilter(field_name='name', lookup_expr='in')\n\n class Meta:\n model = Table\n fields = ['name']\n", "path": "mathesar/filters.py"}], "after_files": [{"content": "from django.contrib.auth.models import User\nfrom django.core.cache import cache\nfrom django.db import models\nfrom django.utils.functional import cached_property\n\nfrom mathesar.database.base import create_mathesar_engine\nfrom mathesar.utils import models as model_utils\nfrom db import tables, records, schemas\n\nNAME_CACHE_INTERVAL = 60 * 5\n\n\nclass BaseModel(models.Model):\n created_at = models.DateTimeField(auto_now_add=True)\n updated_at = models.DateTimeField(auto_now=True)\n\n class Meta:\n abstract = True\n\n\nclass DatabaseObject(BaseModel):\n oid = models.IntegerField()\n\n class Meta:\n abstract = True\n\n def __str__(self):\n return f\"{self.__class__.__name__}: {self.oid}\"\n\n\n# TODO: Replace with a proper form of caching\n# See: https://github.com/centerofci/mathesar/issues/280\n_engine = None\n\n\nclass Schema(DatabaseObject):\n database = models.CharField(max_length=128)\n\n @property\n def _sa_engine(self):\n global _engine\n # We're caching this since the 
engine is used frequently.\n if _engine is None:\n _engine = create_mathesar_engine(self.database)\n return _engine\n\n @cached_property\n def name(self):\n cache_key = f\"{self.database}_schema_name_{self.oid}\"\n try:\n schema_name = cache.get(cache_key)\n if schema_name is None:\n schema_name = schemas.get_schema_name_from_oid(\n self.oid, self._sa_engine\n )\n cache.set(cache_key, schema_name, NAME_CACHE_INTERVAL)\n return schema_name\n # We catch this error, since it lets us decouple the cadence of\n # overall DB reflection from the cadence of cache expiration for\n # schema names. Also, it makes it obvious when the DB layer has\n # been altered, as opposed to other reasons for a 404 when\n # requesting a schema.\n except TypeError:\n return 'MISSING'\n\n\nclass Table(DatabaseObject):\n schema = models.ForeignKey('Schema', on_delete=models.CASCADE,\n related_name='tables')\n import_verified = models.BooleanField(blank=True, null=True)\n\n @cached_property\n def _sa_table(self):\n try:\n table = tables.reflect_table_from_oid(\n self.oid, self.schema._sa_engine,\n )\n # We catch this error, since it lets us decouple the cadence of\n # overall DB reflection from the cadence of cache expiration for\n # table names. Also, it makes it obvious when the DB layer has\n # been altered, as opposed to other reasons for a 404 when\n # requesting a table.\n except TypeError:\n table = tables.create_empty_table(\"MISSING\")\n return table\n\n @cached_property\n def name(self):\n return self._sa_table.name\n\n @property\n def sa_columns(self):\n return self._sa_table.columns\n\n @property\n def sa_column_names(self):\n return self.sa_columns.keys()\n\n @property\n def sa_num_records(self):\n return tables.get_count(self._sa_table, self.schema._sa_engine)\n\n @property\n def sa_all_records(self):\n return records.get_records(self._sa_table, self.schema._sa_engine)\n\n def get_record(self, id_value):\n return records.get_record(self._sa_table, self.schema._sa_engine, id_value)\n\n def get_records(self, limit=None, offset=None):\n return records.get_records(self._sa_table, self.schema._sa_engine, limit, offset)\n\n def create_record_or_records(self, record_data):\n return records.create_record_or_records(self._sa_table, self.schema._sa_engine, record_data)\n\n def update_record(self, id_value, record_data):\n return records.update_record(self._sa_table, self.schema._sa_engine, id_value, record_data)\n\n def delete_record(self, id_value):\n return records.delete_record(self._sa_table, self.schema._sa_engine, id_value)\n\n\nclass DataFile(BaseModel):\n file = models.FileField(\n upload_to=model_utils.user_directory_path,\n )\n user = models.ForeignKey(User, blank=True, null=True, on_delete=models.CASCADE)\n table_imported_to = models.ForeignKey(Table, related_name=\"data_files\", blank=True,\n null=True, on_delete=models.SET_NULL)\n delimiter = models.CharField(max_length=1, default=',', blank=True)\n escapechar = models.CharField(max_length=1, blank=True)\n quotechar = models.CharField(max_length=1, default='\"', blank=True)\n", "path": "mathesar/models.py"}, {"content": "from django_property_filter import (\n PropertyFilterSet, PropertyBaseInFilter, PropertyCharFilter,\n PropertyDateTimeFromToRangeFilter, PropertyBooleanFilter\n)\n\nfrom mathesar.models import Schema, Table\n\n\nclass CharInFilter(PropertyBaseInFilter, PropertyCharFilter):\n pass\n\n\nclass SchemaFilter(PropertyFilterSet):\n database = CharInFilter(field_name='database', lookup_expr='in')\n name = CharInFilter(field_name='name', 
lookup_expr='in')\n\n class Meta:\n model = Schema\n fields = ['database', 'name']\n\n\nclass TableFilter(PropertyFilterSet):\n name = CharInFilter(field_name='name', lookup_expr='in')\n schema = CharInFilter(field_name='schema__name', lookup_expr='in')\n created = PropertyDateTimeFromToRangeFilter(field_name='created_at')\n updated = PropertyDateTimeFromToRangeFilter(field_name='updated_at')\n import_verified = PropertyBooleanFilter(field_name='import_verified')\n not_imported = PropertyBooleanFilter(lookup_expr=\"isnull\",\n field_name='import_verified')\n\n class Meta:\n model = Table\n fields = ['name', 'schema', 'created_at', 'updated_at', 'import_verified']\n", "path": "mathesar/filters.py"}]}
| 1,838 | 448 |
gh_patches_debug_19797
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-4220
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`ddtrace.opentracer` incorrectly raises `SpanContextCorruptedException` on `extract` of missing span context
The documentation for `SpanContextCorruptedException` [says](https://opentracing-python.readthedocs.io/en/1.3.0/api.html#opentracing.SpanContextCorruptedException):
> SpanContextCorruptedException should be used when the underlying span context state is seemingly present but not well-formed.
`ddtrace.opentracer`'s `extract` is throwing an error whenever it fails to recover a span, whether or not it was malformed or simply missing. This completely breaks the normal pattern of "I received an HTTP request, so I'll throw the headers at `extract` and pass the result to `child_of` for my new span, expecting to get `None` and therefore make a new root span if I was called without tracing info".
### Which version of dd-trace-py are you using?
Python 3.7
ddtrace 0.46.0
### How can we reproduce your problem?
```py
In [1]: from opentracing import Format
In [2]: from ddtrace.opentracer import Tracer
In [3]: tracer = Tracer()
In [4]: tracer.extract(Format.HTTP_HEADERS, {})
---------------------------------------------------------------------------
SpanContextCorruptedException Traceback (most recent call last)
<ipython-input-4-f497fe0c23a2> in <module>
----> 1 tracer.extract(Format.HTTP_HEADERS, {})
~/projects/granular/analysis/analysis-api/.venv/lib/python3.7/site-packages/ddtrace/opentracer/tracer.py in extract(self, format, carrier)
326 # we have to manually activate the returned context from a distributed
327 # trace
--> 328 ot_span_ctx = propagator.extract(carrier)
329 dd_span_ctx = ot_span_ctx._dd_context
330 self._dd_tracer.context_provider.activate(dd_span_ctx)
~/projects/granular/analysis/analysis-api/.venv/lib/python3.7/site-packages/ddtrace/opentracer/propagation/http.py in extract(self, carrier)
70 # if this occurs.
71 if not ddspan_ctx.trace_id:
---> 72 raise SpanContextCorruptedException("failed to extract span context")
73
74 baggage = {}
SpanContextCorruptedException: failed to extract span context
```
### What is the result that you expected?
I expect to get a clean `None` with no error if no DataDog span context material was present. See Jaeger:
```py
In [1]: from opentracing import Format
In [2]: import jaeger_client
In [3]: tracer = jaeger_client.Config({"service_name": "foo"}).initialize_tracer()
In [4]: tracer.extract(Format.HTTP_HEADERS, {})
In [5]: print(tracer.extract(Format.HTTP_HEADERS, {}))
None
```
--- END ISSUE ---
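For context, this is the server-side pattern the report alludes to, sketched against the generic OpenTracing API; the function and operation names are illustrative and come from neither library. Per the spec, `SpanContextCorruptedException` is reserved for headers that are present but malformed, while a missing context should simply yield no parent.

```python
# Minimal sketch of "extract headers, then child_of" -- assumes a generic
# OpenTracing-compatible tracer; `headers` is the incoming request's header dict.
from opentracing import Format, SpanContextCorruptedException


def start_request_span(tracer, operation_name, headers):
    try:
        parent_ctx = tracer.extract(Format.HTTP_HEADERS, headers)
    except SpanContextCorruptedException:
        # Headers were present but malformed; fall back to a fresh trace.
        parent_ctx = None
    # With no usable trace headers, child_of=None just starts a new root span.
    return tracer.start_span(operation_name, child_of=parent_ctx)
```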
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/opentracer/propagation/http.py`
Content:
```
1 from typing import Dict
2
3 from opentracing import InvalidCarrierException
4 from opentracing import SpanContextCorruptedException
5
6 from ddtrace.propagation.http import HTTPPropagator as DDHTTPPropagator
7
8 from ...internal.logger import get_logger
9 from ..span_context import SpanContext
10 from .propagator import Propagator
11
12
13 log = get_logger(__name__)
14
15 HTTP_BAGGAGE_PREFIX = "ot-baggage-"
16 HTTP_BAGGAGE_PREFIX_LEN = len(HTTP_BAGGAGE_PREFIX)
17
18
19 class HTTPPropagator(Propagator):
20 """OpenTracing compatible HTTP_HEADER and TEXT_MAP format propagator.
21
22 `HTTPPropagator` provides compatibility by using existing OpenTracing
23 compatible methods from the ddtracer along with new logic supporting the
24 outstanding OpenTracing-defined functionality.
25 """
26
27 @staticmethod
28 def inject(span_context, carrier):
29 # type: (SpanContext, Dict[str, str]) -> None
30 """Inject a span context into a carrier.
31
32 *span_context* is injected into the carrier by first using an
33 :class:`ddtrace.propagation.http.HTTPPropagator` to inject the ddtracer
34 specific fields.
35
36 Then the baggage is injected into *carrier*.
37
38 :param span_context: span context to inject.
39
40 :param carrier: carrier to inject into.
41 """
42 if not isinstance(carrier, dict):
43 raise InvalidCarrierException("propagator expects carrier to be a dict")
44
45 DDHTTPPropagator.inject(span_context._dd_context, carrier)
46
47 # Add the baggage
48 if span_context.baggage is not None:
49 for key in span_context.baggage:
50 carrier[HTTP_BAGGAGE_PREFIX + key] = span_context.baggage[key]
51
52 @staticmethod
53 def extract(carrier):
54 # type: (Dict[str, str]) -> SpanContext
55 """Extract a span context from a carrier.
56
57 :class:`ddtrace.propagation.http.HTTPPropagator` is used to extract
58 ddtracer supported fields into a `ddtrace.Context` context which is
59 combined with new logic to extract the baggage which is returned in an
60 OpenTracing compatible span context.
61
62 :param carrier: carrier to extract from.
63
64 :return: extracted span context.
65 """
66 if not isinstance(carrier, dict):
67 raise InvalidCarrierException("propagator expects carrier to be a dict")
68
69 ddspan_ctx = DDHTTPPropagator.extract(carrier)
70
71 # if the dd propagator fails then it will return a new empty span
72 # context (with trace_id=None), we however want to raise an exception
73 # if this occurs.
74 if not ddspan_ctx.trace_id:
75 raise SpanContextCorruptedException("failed to extract span context")
76
77 baggage = {}
78 for key in carrier:
79 if key.startswith(HTTP_BAGGAGE_PREFIX):
80 baggage[key[HTTP_BAGGAGE_PREFIX_LEN:]] = carrier[key]
81
82 return SpanContext(ddcontext=ddspan_ctx, baggage=baggage)
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ddtrace/opentracer/propagation/http.py b/ddtrace/opentracer/propagation/http.py
--- a/ddtrace/opentracer/propagation/http.py
+++ b/ddtrace/opentracer/propagation/http.py
@@ -1,7 +1,6 @@
from typing import Dict
from opentracing import InvalidCarrierException
-from opentracing import SpanContextCorruptedException
from ddtrace.propagation.http import HTTPPropagator as DDHTTPPropagator
@@ -67,13 +66,6 @@
raise InvalidCarrierException("propagator expects carrier to be a dict")
ddspan_ctx = DDHTTPPropagator.extract(carrier)
-
- # if the dd propagator fails then it will return a new empty span
- # context (with trace_id=None), we however want to raise an exception
- # if this occurs.
- if not ddspan_ctx.trace_id:
- raise SpanContextCorruptedException("failed to extract span context")
-
baggage = {}
for key in carrier:
if key.startswith(HTTP_BAGGAGE_PREFIX):
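A brief, hedged illustration of the intended behaviour once this patch is applied: extracting from a carrier with no Datadog headers returns an "empty" span context instead of raising, so the usual extract-then-`child_of` flow degrades gracefully to a new root span. The operation name below is arbitrary.

```python
# Sketch of post-patch behaviour; assumes a ddtrace build with this change applied.
from opentracing import Format

from ddtrace.opentracer import Tracer

tracer = Tracer()
ctx = tracer.extract(Format.HTTP_HEADERS, {})  # no longer raises SpanContextCorruptedException
span = tracer.start_span("handle_request", child_of=ctx)  # effectively a new root span
span.finish()
```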
|
{"golden_diff": "diff --git a/ddtrace/opentracer/propagation/http.py b/ddtrace/opentracer/propagation/http.py\n--- a/ddtrace/opentracer/propagation/http.py\n+++ b/ddtrace/opentracer/propagation/http.py\n@@ -1,7 +1,6 @@\n from typing import Dict\n \n from opentracing import InvalidCarrierException\n-from opentracing import SpanContextCorruptedException\n \n from ddtrace.propagation.http import HTTPPropagator as DDHTTPPropagator\n \n@@ -67,13 +66,6 @@\n raise InvalidCarrierException(\"propagator expects carrier to be a dict\")\n \n ddspan_ctx = DDHTTPPropagator.extract(carrier)\n-\n- # if the dd propagator fails then it will return a new empty span\n- # context (with trace_id=None), we however want to raise an exception\n- # if this occurs.\n- if not ddspan_ctx.trace_id:\n- raise SpanContextCorruptedException(\"failed to extract span context\")\n-\n baggage = {}\n for key in carrier:\n if key.startswith(HTTP_BAGGAGE_PREFIX):\n", "issue": "`ddtrace.opentracer` incorrectly raises `SpanContextCorruptedException` on `extract` of missing span context\nThe documentation for `SpanContextCorruptedException` [says](https://opentracing-python.readthedocs.io/en/1.3.0/api.html#opentracing.SpanContextCorruptedException):\r\n\r\n> SpanContextCorruptedException should be used when the underlying span context state is seemingly present but not well-formed.\r\n\r\n`ddtrace.opentracer`'s `extract` is throwing an error whenever it fails to recover a span, whether or not it was malformed or simply missing. This completely breaks the normal pattern of \"I received an HTTP request, so I'll throw the headers at `extract` and pass the result to `child_of` for my new span, expecting to get `None` and therefore make a new root span if I was called without tracing info\".\r\n\r\n### Which version of dd-trace-py are you using?\r\n\r\nPython 3.7\r\nddtrace 0.46.0\r\n\r\n### How can we reproduce your problem?\r\n\r\n```py\r\nIn [1]: from opentracing import Format\r\n\r\nIn [2]: from ddtrace.opentracer import Tracer\r\n\r\nIn [3]: tracer = Tracer()\r\n\r\nIn [4]: tracer.extract(Format.HTTP_HEADERS, {})\r\n---------------------------------------------------------------------------\r\nSpanContextCorruptedException Traceback (most recent call last)\r\n<ipython-input-4-f497fe0c23a2> in <module>\r\n----> 1 tracer.extract(Format.HTTP_HEADERS, {})\r\n\r\n~/projects/granular/analysis/analysis-api/.venv/lib/python3.7/site-packages/ddtrace/opentracer/tracer.py in extract(self, format, carrier)\r\n 326 # we have to manually activate the returned context from a distributed\r\n 327 # trace\r\n--> 328 ot_span_ctx = propagator.extract(carrier)\r\n 329 dd_span_ctx = ot_span_ctx._dd_context\r\n 330 self._dd_tracer.context_provider.activate(dd_span_ctx)\r\n\r\n~/projects/granular/analysis/analysis-api/.venv/lib/python3.7/site-packages/ddtrace/opentracer/propagation/http.py in extract(self, carrier)\r\n 70 # if this occurs.\r\n 71 if not ddspan_ctx.trace_id:\r\n---> 72 raise SpanContextCorruptedException(\"failed to extract span context\")\r\n 73 \r\n 74 baggage = {}\r\n\r\nSpanContextCorruptedException: failed to extract span context\r\n```\r\n\r\n### What is the result that you expected?\r\n\r\nI expect to get a clean `None` with no error if no DataDog span context material was present. 
See Jaeger:\r\n\r\n```py\r\nIn [1]: from opentracing import Format\r\n\r\nIn [2]: import jaeger_client\r\n\r\nIn [3]: tracer = jaeger_client.Config({\"service_name\": \"foo\"}).initialize_tracer()\r\n\r\nIn [4]: tracer.extract(Format.HTTP_HEADERS, {})\r\n\r\nIn [5]: print(tracer.extract(Format.HTTP_HEADERS, {}))\r\nNone\r\n```\r\n\n", "before_files": [{"content": "from typing import Dict\n\nfrom opentracing import InvalidCarrierException\nfrom opentracing import SpanContextCorruptedException\n\nfrom ddtrace.propagation.http import HTTPPropagator as DDHTTPPropagator\n\nfrom ...internal.logger import get_logger\nfrom ..span_context import SpanContext\nfrom .propagator import Propagator\n\n\nlog = get_logger(__name__)\n\nHTTP_BAGGAGE_PREFIX = \"ot-baggage-\"\nHTTP_BAGGAGE_PREFIX_LEN = len(HTTP_BAGGAGE_PREFIX)\n\n\nclass HTTPPropagator(Propagator):\n \"\"\"OpenTracing compatible HTTP_HEADER and TEXT_MAP format propagator.\n\n `HTTPPropagator` provides compatibility by using existing OpenTracing\n compatible methods from the ddtracer along with new logic supporting the\n outstanding OpenTracing-defined functionality.\n \"\"\"\n\n @staticmethod\n def inject(span_context, carrier):\n # type: (SpanContext, Dict[str, str]) -> None\n \"\"\"Inject a span context into a carrier.\n\n *span_context* is injected into the carrier by first using an\n :class:`ddtrace.propagation.http.HTTPPropagator` to inject the ddtracer\n specific fields.\n\n Then the baggage is injected into *carrier*.\n\n :param span_context: span context to inject.\n\n :param carrier: carrier to inject into.\n \"\"\"\n if not isinstance(carrier, dict):\n raise InvalidCarrierException(\"propagator expects carrier to be a dict\")\n\n DDHTTPPropagator.inject(span_context._dd_context, carrier)\n\n # Add the baggage\n if span_context.baggage is not None:\n for key in span_context.baggage:\n carrier[HTTP_BAGGAGE_PREFIX + key] = span_context.baggage[key]\n\n @staticmethod\n def extract(carrier):\n # type: (Dict[str, str]) -> SpanContext\n \"\"\"Extract a span context from a carrier.\n\n :class:`ddtrace.propagation.http.HTTPPropagator` is used to extract\n ddtracer supported fields into a `ddtrace.Context` context which is\n combined with new logic to extract the baggage which is returned in an\n OpenTracing compatible span context.\n\n :param carrier: carrier to extract from.\n\n :return: extracted span context.\n \"\"\"\n if not isinstance(carrier, dict):\n raise InvalidCarrierException(\"propagator expects carrier to be a dict\")\n\n ddspan_ctx = DDHTTPPropagator.extract(carrier)\n\n # if the dd propagator fails then it will return a new empty span\n # context (with trace_id=None), we however want to raise an exception\n # if this occurs.\n if not ddspan_ctx.trace_id:\n raise SpanContextCorruptedException(\"failed to extract span context\")\n\n baggage = {}\n for key in carrier:\n if key.startswith(HTTP_BAGGAGE_PREFIX):\n baggage[key[HTTP_BAGGAGE_PREFIX_LEN:]] = carrier[key]\n\n return SpanContext(ddcontext=ddspan_ctx, baggage=baggage)\n", "path": "ddtrace/opentracer/propagation/http.py"}], "after_files": [{"content": "from typing import Dict\n\nfrom opentracing import InvalidCarrierException\n\nfrom ddtrace.propagation.http import HTTPPropagator as DDHTTPPropagator\n\nfrom ...internal.logger import get_logger\nfrom ..span_context import SpanContext\nfrom .propagator import Propagator\n\n\nlog = get_logger(__name__)\n\nHTTP_BAGGAGE_PREFIX = \"ot-baggage-\"\nHTTP_BAGGAGE_PREFIX_LEN = len(HTTP_BAGGAGE_PREFIX)\n\n\nclass 
HTTPPropagator(Propagator):\n \"\"\"OpenTracing compatible HTTP_HEADER and TEXT_MAP format propagator.\n\n `HTTPPropagator` provides compatibility by using existing OpenTracing\n compatible methods from the ddtracer along with new logic supporting the\n outstanding OpenTracing-defined functionality.\n \"\"\"\n\n @staticmethod\n def inject(span_context, carrier):\n # type: (SpanContext, Dict[str, str]) -> None\n \"\"\"Inject a span context into a carrier.\n\n *span_context* is injected into the carrier by first using an\n :class:`ddtrace.propagation.http.HTTPPropagator` to inject the ddtracer\n specific fields.\n\n Then the baggage is injected into *carrier*.\n\n :param span_context: span context to inject.\n\n :param carrier: carrier to inject into.\n \"\"\"\n if not isinstance(carrier, dict):\n raise InvalidCarrierException(\"propagator expects carrier to be a dict\")\n\n DDHTTPPropagator.inject(span_context._dd_context, carrier)\n\n # Add the baggage\n if span_context.baggage is not None:\n for key in span_context.baggage:\n carrier[HTTP_BAGGAGE_PREFIX + key] = span_context.baggage[key]\n\n @staticmethod\n def extract(carrier):\n # type: (Dict[str, str]) -> SpanContext\n \"\"\"Extract a span context from a carrier.\n\n :class:`ddtrace.propagation.http.HTTPPropagator` is used to extract\n ddtracer supported fields into a `ddtrace.Context` context which is\n combined with new logic to extract the baggage which is returned in an\n OpenTracing compatible span context.\n\n :param carrier: carrier to extract from.\n\n :return: extracted span context.\n \"\"\"\n if not isinstance(carrier, dict):\n raise InvalidCarrierException(\"propagator expects carrier to be a dict\")\n\n ddspan_ctx = DDHTTPPropagator.extract(carrier)\n baggage = {}\n for key in carrier:\n if key.startswith(HTTP_BAGGAGE_PREFIX):\n baggage[key[HTTP_BAGGAGE_PREFIX_LEN:]] = carrier[key]\n\n return SpanContext(ddcontext=ddspan_ctx, baggage=baggage)\n", "path": "ddtrace/opentracer/propagation/http.py"}]}
| 1,742 | 240 |