| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens | num_tokens_diff |
|---|---|---|---|---|---|---|---|---|
| stringlengths 18-22 | stringclasses 1 value | stringclasses 1 value | stringlengths 13-58 | stringlengths 1.1k-25.4k | stringlengths 145-5.13k | stringlengths 582-39.1k | int64 271-4.1k | int64 47-1.02k |
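
The rows below follow this schema: each record pairs a repair `prompt` (issue text plus the relevant file contents) with a reference `golden_diff` and a JSON `verification_info` blob. As a quick orientation, here is a minimal sketch of how the dataset might be loaded and a row inspected with the Hugging Face `datasets` library; the repository id `rasdani/github-patches` is taken from the `source` column, while its availability on the Hub under that exact name and the split name are assumptions.

```python
# Hypothetical loading sketch: assumes the dataset is published on the
# Hugging Face Hub as "rasdani/github-patches" with a "train" split.
from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"])         # e.g. "gh_patches_debug_9951"
print(row["task_type"])          # "git_diff" for the rows shown here
print(row["num_tokens"], row["num_tokens_diff"])
print(row["prompt"][:300])       # issue text followed by the candidate files
print(row["golden_diff"][:300])  # reference patch in `git diff` format
```
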
gh_patches_debug_9951
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__torchmetrics-2574
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BootStrapper.reset() does not reset properly
## 🐛 Bug
Calling `BootStrapper.reset()` does not reset `self.metrics` properly.
### To Reproduce
```py
import torch
from torchmetrics.wrappers import BootStrapper
from torchmetrics.classification import MulticlassAccuracy
metric = BootStrapper(MulticlassAccuracy(num_classes=10))
for i in range(10):
output = torch.randn((2000, 10))
target = torch.randint(10, (2000,))
# output = 0.5 * (target + output)
metric.update(output, target)
print(metric.compute())
# {'mean': tensor(0.0990), 'std': tensor(0.0029)} <-- ok
print(metric.metrics[0].update_count)
# 10 <-- ok
metric.reset()
print(metric.compute())
# {'mean': tensor(0.0990), 'std': tensor(0.0029)} <-- ERROR, should be undefined after reset
print(metric.metrics[0].update_count)
# 10 <-- ERROR, should be 0 after reset
```
### Environment
- TorchMetrics version 1.4.0.post0
- Python version 3.11.9
- torch version 2.2.1+cu118
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/torchmetrics/wrappers/bootstrapping.py`
Content:
```
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from copy import deepcopy
15 from typing import Any, Dict, Optional, Sequence, Union
16
17 import torch
18 from lightning_utilities import apply_to_collection
19 from torch import Tensor
20 from torch.nn import ModuleList
21
22 from torchmetrics.metric import Metric
23 from torchmetrics.utilities.imports import _MATPLOTLIB_AVAILABLE
24 from torchmetrics.utilities.plot import _AX_TYPE, _PLOT_OUT_TYPE
25 from torchmetrics.wrappers.abstract import WrapperMetric
26
27 if not _MATPLOTLIB_AVAILABLE:
28 __doctest_skip__ = ["BootStrapper.plot"]
29
30
31 def _bootstrap_sampler(
32 size: int,
33 sampling_strategy: str = "poisson",
34 ) -> Tensor:
35 """Resample a tensor along its first dimension with replacement.
36
37 Args:
38 size: number of samples
39 sampling_strategy: the strategy to use for sampling, either ``'poisson'`` or ``'multinomial'``
40
41 Returns:
42 resampled tensor
43
44 """
45 if sampling_strategy == "poisson":
46 p = torch.distributions.Poisson(1)
47 n = p.sample((size,))
48 return torch.arange(size).repeat_interleave(n.long(), dim=0)
49 if sampling_strategy == "multinomial":
50 return torch.multinomial(torch.ones(size), num_samples=size, replacement=True)
51 raise ValueError("Unknown sampling strategy")
52
53
54 class BootStrapper(WrapperMetric):
55 r"""Using `Turn a Metric into a Bootstrapped`_.
56
57 That can automate the process of getting confidence intervals for metric values. This wrapper
58 class basically keeps multiple copies of the same base metric in memory and whenever ``update`` or
59 ``forward`` is called, all input tensors are resampled (with replacement) along the first dimension.
60
61 Args:
62 base_metric: base metric class to wrap
63 num_bootstraps: number of copies to make of the base metric for bootstrapping
64 mean: if ``True`` return the mean of the bootstraps
65 std: if ``True`` return the standard deviation of the bootstraps
66 quantile: if given, returns the quantile of the bootstraps. Can only be used with pytorch version 1.6 or higher
67 raw: if ``True``, return all bootstrapped values
68 sampling_strategy:
69 Determines how to produce bootstrapped samplings. Either ``'poisson'`` or ``multinomial``.
70 If ``'possion'`` is chosen, the number of times each sample will be included in the bootstrap
71 will be given by :math:`n\sim Poisson(\lambda=1)`, which approximates the true bootstrap distribution
72 when the number of samples is large. If ``'multinomial'`` is chosen, we will apply true bootstrapping
73 at the batch level to approximate bootstrapping over the hole dataset.
74 kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
75
76 Example::
77 >>> from pprint import pprint
78 >>> from torchmetrics.wrappers import BootStrapper
79 >>> from torchmetrics.classification import MulticlassAccuracy
80 >>> _ = torch.manual_seed(123)
81 >>> base_metric = MulticlassAccuracy(num_classes=5, average='micro')
82 >>> bootstrap = BootStrapper(base_metric, num_bootstraps=20)
83 >>> bootstrap.update(torch.randint(5, (20,)), torch.randint(5, (20,)))
84 >>> output = bootstrap.compute()
85 >>> pprint(output)
86 {'mean': tensor(0.2205), 'std': tensor(0.0859)}
87
88 """
89
90 full_state_update: Optional[bool] = True
91
92 def __init__(
93 self,
94 base_metric: Metric,
95 num_bootstraps: int = 10,
96 mean: bool = True,
97 std: bool = True,
98 quantile: Optional[Union[float, Tensor]] = None,
99 raw: bool = False,
100 sampling_strategy: str = "poisson",
101 **kwargs: Any,
102 ) -> None:
103 super().__init__(**kwargs)
104 if not isinstance(base_metric, Metric):
105 raise ValueError(
106 f"Expected base metric to be an instance of torchmetrics.Metric but received {base_metric}"
107 )
108
109 self.metrics = ModuleList([deepcopy(base_metric) for _ in range(num_bootstraps)])
110 self.num_bootstraps = num_bootstraps
111
112 self.mean = mean
113 self.std = std
114 self.quantile = quantile
115 self.raw = raw
116
117 allowed_sampling = ("poisson", "multinomial")
118 if sampling_strategy not in allowed_sampling:
119 raise ValueError(
120 f"Expected argument ``sampling_strategy`` to be one of {allowed_sampling}"
121 f" but received {sampling_strategy}"
122 )
123 self.sampling_strategy = sampling_strategy
124
125 def update(self, *args: Any, **kwargs: Any) -> None:
126 """Update the state of the base metric.
127
128 Any tensor passed in will be bootstrapped along dimension 0.
129
130 """
131 args_sizes = apply_to_collection(args, Tensor, len)
132 kwargs_sizes = apply_to_collection(kwargs, Tensor, len)
133 if len(args_sizes) > 0:
134 size = args_sizes[0]
135 elif len(kwargs_sizes) > 0:
136 size = next(iter(kwargs_sizes.values()))
137 else:
138 raise ValueError("None of the input contained tensors, so could not determine the sampling size")
139
140 for idx in range(self.num_bootstraps):
141 sample_idx = _bootstrap_sampler(size, sampling_strategy=self.sampling_strategy).to(self.device)
142 if sample_idx.numel() == 0:
143 continue
144 new_args = apply_to_collection(args, Tensor, torch.index_select, dim=0, index=sample_idx)
145 new_kwargs = apply_to_collection(kwargs, Tensor, torch.index_select, dim=0, index=sample_idx)
146 self.metrics[idx].update(*new_args, **new_kwargs)
147
148 def compute(self) -> Dict[str, Tensor]:
149 """Compute the bootstrapped metric values.
150
151 Always returns a dict of tensors, which can contain the following keys: ``mean``, ``std``, ``quantile`` and
152 ``raw`` depending on how the class was initialized.
153
154 """
155 computed_vals = torch.stack([m.compute() for m in self.metrics], dim=0)
156 output_dict = {}
157 if self.mean:
158 output_dict["mean"] = computed_vals.mean(dim=0)
159 if self.std:
160 output_dict["std"] = computed_vals.std(dim=0)
161 if self.quantile is not None:
162 output_dict["quantile"] = torch.quantile(computed_vals, self.quantile)
163 if self.raw:
164 output_dict["raw"] = computed_vals
165 return output_dict
166
167 def forward(self, *args: Any, **kwargs: Any) -> Any:
168 """Use the original forward method of the base metric class."""
169 return super(WrapperMetric, self).forward(*args, **kwargs)
170
171 def plot(
172 self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: Optional[_AX_TYPE] = None
173 ) -> _PLOT_OUT_TYPE:
174 """Plot a single or multiple values from the metric.
175
176 Args:
177 val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.
178 If no value is provided, will automatically call `metric.compute` and plot that result.
179 ax: An matplotlib axis object. If provided will add plot to that axis
180
181 Returns:
182 Figure and Axes object
183
184 Raises:
185 ModuleNotFoundError:
186 If `matplotlib` is not installed
187
188 .. plot::
189 :scale: 75
190
191 >>> # Example plotting a single value
192 >>> import torch
193 >>> from torchmetrics.wrappers import BootStrapper
194 >>> from torchmetrics.regression import MeanSquaredError
195 >>> metric = BootStrapper(MeanSquaredError(), num_bootstraps=20)
196 >>> metric.update(torch.randn(100,), torch.randn(100,))
197 >>> fig_, ax_ = metric.plot()
198
199 .. plot::
200 :scale: 75
201
202 >>> # Example plotting multiple values
203 >>> import torch
204 >>> from torchmetrics.wrappers import BootStrapper
205 >>> from torchmetrics.regression import MeanSquaredError
206 >>> metric = BootStrapper(MeanSquaredError(), num_bootstraps=20)
207 >>> values = [ ]
208 >>> for _ in range(3):
209 ... values.append(metric(torch.randn(100,), torch.randn(100,)))
210 >>> fig_, ax_ = metric.plot(values)
211
212 """
213 return self._plot(val, ax)
214
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/torchmetrics/wrappers/bootstrapping.py b/src/torchmetrics/wrappers/bootstrapping.py
--- a/src/torchmetrics/wrappers/bootstrapping.py
+++ b/src/torchmetrics/wrappers/bootstrapping.py
@@ -168,6 +168,12 @@
"""Use the original forward method of the base metric class."""
return super(WrapperMetric, self).forward(*args, **kwargs)
+ def reset(self) -> None:
+ """Reset the state of the base metric."""
+ for m in self.metrics:
+ m.reset()
+ super().reset()
+
def plot(
self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: Optional[_AX_TYPE] = None
) -> _PLOT_OUT_TYPE:
|
{"golden_diff": "diff --git a/src/torchmetrics/wrappers/bootstrapping.py b/src/torchmetrics/wrappers/bootstrapping.py\n--- a/src/torchmetrics/wrappers/bootstrapping.py\n+++ b/src/torchmetrics/wrappers/bootstrapping.py\n@@ -168,6 +168,12 @@\n \"\"\"Use the original forward method of the base metric class.\"\"\"\n return super(WrapperMetric, self).forward(*args, **kwargs)\n \n+ def reset(self) -> None:\n+ \"\"\"Reset the state of the base metric.\"\"\"\n+ for m in self.metrics:\n+ m.reset()\n+ super().reset()\n+\n def plot(\n self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: Optional[_AX_TYPE] = None\n ) -> _PLOT_OUT_TYPE:\n", "issue": "BootStrapper.reset() does not reset properly\n## \ud83d\udc1b Bug\r\n\r\nCalling `BootStrapper.reset()` does not reset `self.metrics` properly.\r\n\r\n### To Reproduce\r\n\r\n```py\r\nimport torch\r\nfrom torchmetrics.wrappers import BootStrapper\r\nfrom torchmetrics.classification import MulticlassAccuracy\r\n\r\nmetric = BootStrapper(MulticlassAccuracy(num_classes=10))\r\n\r\nfor i in range(10):\r\n output = torch.randn((2000, 10))\r\n target = torch.randint(10, (2000,))\r\n # output = 0.5 * (target + output)\r\n metric.update(output, target)\r\n\r\nprint(metric.compute())\r\n# {'mean': tensor(0.0990), 'std': tensor(0.0029)} <-- ok\r\nprint(metric.metrics[0].update_count)\r\n# 10 <-- ok\r\n\r\nmetric.reset()\r\n\r\nprint(metric.compute())\r\n# {'mean': tensor(0.0990), 'std': tensor(0.0029)} <-- ERROR, should be undefined after reset\r\nprint(metric.metrics[0].update_count)\r\n# 10 <-- ERROR, should be 0 after reset\r\n\r\n```\r\n\r\n### Environment\r\n\r\n- TorchMetrics version 1.4.0.post0\r\n- Python version 3.11.9\r\n- torch version 2.2.1+cu118\r\n\n", "before_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom copy import deepcopy\nfrom typing import Any, Dict, Optional, Sequence, Union\n\nimport torch\nfrom lightning_utilities import apply_to_collection\nfrom torch import Tensor\nfrom torch.nn import ModuleList\n\nfrom torchmetrics.metric import Metric\nfrom torchmetrics.utilities.imports import _MATPLOTLIB_AVAILABLE\nfrom torchmetrics.utilities.plot import _AX_TYPE, _PLOT_OUT_TYPE\nfrom torchmetrics.wrappers.abstract import WrapperMetric\n\nif not _MATPLOTLIB_AVAILABLE:\n __doctest_skip__ = [\"BootStrapper.plot\"]\n\n\ndef _bootstrap_sampler(\n size: int,\n sampling_strategy: str = \"poisson\",\n) -> Tensor:\n \"\"\"Resample a tensor along its first dimension with replacement.\n\n Args:\n size: number of samples\n sampling_strategy: the strategy to use for sampling, either ``'poisson'`` or ``'multinomial'``\n\n Returns:\n resampled tensor\n\n \"\"\"\n if sampling_strategy == \"poisson\":\n p = torch.distributions.Poisson(1)\n n = p.sample((size,))\n return torch.arange(size).repeat_interleave(n.long(), dim=0)\n if sampling_strategy == \"multinomial\":\n return torch.multinomial(torch.ones(size), num_samples=size, replacement=True)\n raise ValueError(\"Unknown sampling 
strategy\")\n\n\nclass BootStrapper(WrapperMetric):\n r\"\"\"Using `Turn a Metric into a Bootstrapped`_.\n\n That can automate the process of getting confidence intervals for metric values. This wrapper\n class basically keeps multiple copies of the same base metric in memory and whenever ``update`` or\n ``forward`` is called, all input tensors are resampled (with replacement) along the first dimension.\n\n Args:\n base_metric: base metric class to wrap\n num_bootstraps: number of copies to make of the base metric for bootstrapping\n mean: if ``True`` return the mean of the bootstraps\n std: if ``True`` return the standard deviation of the bootstraps\n quantile: if given, returns the quantile of the bootstraps. Can only be used with pytorch version 1.6 or higher\n raw: if ``True``, return all bootstrapped values\n sampling_strategy:\n Determines how to produce bootstrapped samplings. Either ``'poisson'`` or ``multinomial``.\n If ``'possion'`` is chosen, the number of times each sample will be included in the bootstrap\n will be given by :math:`n\\sim Poisson(\\lambda=1)`, which approximates the true bootstrap distribution\n when the number of samples is large. If ``'multinomial'`` is chosen, we will apply true bootstrapping\n at the batch level to approximate bootstrapping over the hole dataset.\n kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.\n\n Example::\n >>> from pprint import pprint\n >>> from torchmetrics.wrappers import BootStrapper\n >>> from torchmetrics.classification import MulticlassAccuracy\n >>> _ = torch.manual_seed(123)\n >>> base_metric = MulticlassAccuracy(num_classes=5, average='micro')\n >>> bootstrap = BootStrapper(base_metric, num_bootstraps=20)\n >>> bootstrap.update(torch.randint(5, (20,)), torch.randint(5, (20,)))\n >>> output = bootstrap.compute()\n >>> pprint(output)\n {'mean': tensor(0.2205), 'std': tensor(0.0859)}\n\n \"\"\"\n\n full_state_update: Optional[bool] = True\n\n def __init__(\n self,\n base_metric: Metric,\n num_bootstraps: int = 10,\n mean: bool = True,\n std: bool = True,\n quantile: Optional[Union[float, Tensor]] = None,\n raw: bool = False,\n sampling_strategy: str = \"poisson\",\n **kwargs: Any,\n ) -> None:\n super().__init__(**kwargs)\n if not isinstance(base_metric, Metric):\n raise ValueError(\n f\"Expected base metric to be an instance of torchmetrics.Metric but received {base_metric}\"\n )\n\n self.metrics = ModuleList([deepcopy(base_metric) for _ in range(num_bootstraps)])\n self.num_bootstraps = num_bootstraps\n\n self.mean = mean\n self.std = std\n self.quantile = quantile\n self.raw = raw\n\n allowed_sampling = (\"poisson\", \"multinomial\")\n if sampling_strategy not in allowed_sampling:\n raise ValueError(\n f\"Expected argument ``sampling_strategy`` to be one of {allowed_sampling}\"\n f\" but received {sampling_strategy}\"\n )\n self.sampling_strategy = sampling_strategy\n\n def update(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"Update the state of the base metric.\n\n Any tensor passed in will be bootstrapped along dimension 0.\n\n \"\"\"\n args_sizes = apply_to_collection(args, Tensor, len)\n kwargs_sizes = apply_to_collection(kwargs, Tensor, len)\n if len(args_sizes) > 0:\n size = args_sizes[0]\n elif len(kwargs_sizes) > 0:\n size = next(iter(kwargs_sizes.values()))\n else:\n raise ValueError(\"None of the input contained tensors, so could not determine the sampling size\")\n\n for idx in range(self.num_bootstraps):\n sample_idx = _bootstrap_sampler(size, 
sampling_strategy=self.sampling_strategy).to(self.device)\n if sample_idx.numel() == 0:\n continue\n new_args = apply_to_collection(args, Tensor, torch.index_select, dim=0, index=sample_idx)\n new_kwargs = apply_to_collection(kwargs, Tensor, torch.index_select, dim=0, index=sample_idx)\n self.metrics[idx].update(*new_args, **new_kwargs)\n\n def compute(self) -> Dict[str, Tensor]:\n \"\"\"Compute the bootstrapped metric values.\n\n Always returns a dict of tensors, which can contain the following keys: ``mean``, ``std``, ``quantile`` and\n ``raw`` depending on how the class was initialized.\n\n \"\"\"\n computed_vals = torch.stack([m.compute() for m in self.metrics], dim=0)\n output_dict = {}\n if self.mean:\n output_dict[\"mean\"] = computed_vals.mean(dim=0)\n if self.std:\n output_dict[\"std\"] = computed_vals.std(dim=0)\n if self.quantile is not None:\n output_dict[\"quantile\"] = torch.quantile(computed_vals, self.quantile)\n if self.raw:\n output_dict[\"raw\"] = computed_vals\n return output_dict\n\n def forward(self, *args: Any, **kwargs: Any) -> Any:\n \"\"\"Use the original forward method of the base metric class.\"\"\"\n return super(WrapperMetric, self).forward(*args, **kwargs)\n\n def plot(\n self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: Optional[_AX_TYPE] = None\n ) -> _PLOT_OUT_TYPE:\n \"\"\"Plot a single or multiple values from the metric.\n\n Args:\n val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.\n If no value is provided, will automatically call `metric.compute` and plot that result.\n ax: An matplotlib axis object. If provided will add plot to that axis\n\n Returns:\n Figure and Axes object\n\n Raises:\n ModuleNotFoundError:\n If `matplotlib` is not installed\n\n .. plot::\n :scale: 75\n\n >>> # Example plotting a single value\n >>> import torch\n >>> from torchmetrics.wrappers import BootStrapper\n >>> from torchmetrics.regression import MeanSquaredError\n >>> metric = BootStrapper(MeanSquaredError(), num_bootstraps=20)\n >>> metric.update(torch.randn(100,), torch.randn(100,))\n >>> fig_, ax_ = metric.plot()\n\n .. plot::\n :scale: 75\n\n >>> # Example plotting multiple values\n >>> import torch\n >>> from torchmetrics.wrappers import BootStrapper\n >>> from torchmetrics.regression import MeanSquaredError\n >>> metric = BootStrapper(MeanSquaredError(), num_bootstraps=20)\n >>> values = [ ]\n >>> for _ in range(3):\n ... 
values.append(metric(torch.randn(100,), torch.randn(100,)))\n >>> fig_, ax_ = metric.plot(values)\n\n \"\"\"\n return self._plot(val, ax)\n", "path": "src/torchmetrics/wrappers/bootstrapping.py"}], "after_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom copy import deepcopy\nfrom typing import Any, Dict, Optional, Sequence, Union\n\nimport torch\nfrom lightning_utilities import apply_to_collection\nfrom torch import Tensor\nfrom torch.nn import ModuleList\n\nfrom torchmetrics.metric import Metric\nfrom torchmetrics.utilities.imports import _MATPLOTLIB_AVAILABLE\nfrom torchmetrics.utilities.plot import _AX_TYPE, _PLOT_OUT_TYPE\nfrom torchmetrics.wrappers.abstract import WrapperMetric\n\nif not _MATPLOTLIB_AVAILABLE:\n __doctest_skip__ = [\"BootStrapper.plot\"]\n\n\ndef _bootstrap_sampler(\n size: int,\n sampling_strategy: str = \"poisson\",\n) -> Tensor:\n \"\"\"Resample a tensor along its first dimension with replacement.\n\n Args:\n size: number of samples\n sampling_strategy: the strategy to use for sampling, either ``'poisson'`` or ``'multinomial'``\n\n Returns:\n resampled tensor\n\n \"\"\"\n if sampling_strategy == \"poisson\":\n p = torch.distributions.Poisson(1)\n n = p.sample((size,))\n return torch.arange(size).repeat_interleave(n.long(), dim=0)\n if sampling_strategy == \"multinomial\":\n return torch.multinomial(torch.ones(size), num_samples=size, replacement=True)\n raise ValueError(\"Unknown sampling strategy\")\n\n\nclass BootStrapper(WrapperMetric):\n r\"\"\"Using `Turn a Metric into a Bootstrapped`_.\n\n That can automate the process of getting confidence intervals for metric values. This wrapper\n class basically keeps multiple copies of the same base metric in memory and whenever ``update`` or\n ``forward`` is called, all input tensors are resampled (with replacement) along the first dimension.\n\n Args:\n base_metric: base metric class to wrap\n num_bootstraps: number of copies to make of the base metric for bootstrapping\n mean: if ``True`` return the mean of the bootstraps\n std: if ``True`` return the standard deviation of the bootstraps\n quantile: if given, returns the quantile of the bootstraps. Can only be used with pytorch version 1.6 or higher\n raw: if ``True``, return all bootstrapped values\n sampling_strategy:\n Determines how to produce bootstrapped samplings. Either ``'poisson'`` or ``multinomial``.\n If ``'possion'`` is chosen, the number of times each sample will be included in the bootstrap\n will be given by :math:`n\\sim Poisson(\\lambda=1)`, which approximates the true bootstrap distribution\n when the number of samples is large. 
If ``'multinomial'`` is chosen, we will apply true bootstrapping\n at the batch level to approximate bootstrapping over the hole dataset.\n kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.\n\n Example::\n >>> from pprint import pprint\n >>> from torchmetrics.wrappers import BootStrapper\n >>> from torchmetrics.classification import MulticlassAccuracy\n >>> _ = torch.manual_seed(123)\n >>> base_metric = MulticlassAccuracy(num_classes=5, average='micro')\n >>> bootstrap = BootStrapper(base_metric, num_bootstraps=20)\n >>> bootstrap.update(torch.randint(5, (20,)), torch.randint(5, (20,)))\n >>> output = bootstrap.compute()\n >>> pprint(output)\n {'mean': tensor(0.2205), 'std': tensor(0.0859)}\n\n \"\"\"\n\n full_state_update: Optional[bool] = True\n\n def __init__(\n self,\n base_metric: Metric,\n num_bootstraps: int = 10,\n mean: bool = True,\n std: bool = True,\n quantile: Optional[Union[float, Tensor]] = None,\n raw: bool = False,\n sampling_strategy: str = \"poisson\",\n **kwargs: Any,\n ) -> None:\n super().__init__(**kwargs)\n if not isinstance(base_metric, Metric):\n raise ValueError(\n f\"Expected base metric to be an instance of torchmetrics.Metric but received {base_metric}\"\n )\n\n self.metrics = ModuleList([deepcopy(base_metric) for _ in range(num_bootstraps)])\n self.num_bootstraps = num_bootstraps\n\n self.mean = mean\n self.std = std\n self.quantile = quantile\n self.raw = raw\n\n allowed_sampling = (\"poisson\", \"multinomial\")\n if sampling_strategy not in allowed_sampling:\n raise ValueError(\n f\"Expected argument ``sampling_strategy`` to be one of {allowed_sampling}\"\n f\" but received {sampling_strategy}\"\n )\n self.sampling_strategy = sampling_strategy\n\n def update(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"Update the state of the base metric.\n\n Any tensor passed in will be bootstrapped along dimension 0.\n\n \"\"\"\n args_sizes = apply_to_collection(args, Tensor, len)\n kwargs_sizes = apply_to_collection(kwargs, Tensor, len)\n if len(args_sizes) > 0:\n size = args_sizes[0]\n elif len(kwargs_sizes) > 0:\n size = next(iter(kwargs_sizes.values()))\n else:\n raise ValueError(\"None of the input contained tensors, so could not determine the sampling size\")\n\n for idx in range(self.num_bootstraps):\n sample_idx = _bootstrap_sampler(size, sampling_strategy=self.sampling_strategy).to(self.device)\n if sample_idx.numel() == 0:\n continue\n new_args = apply_to_collection(args, Tensor, torch.index_select, dim=0, index=sample_idx)\n new_kwargs = apply_to_collection(kwargs, Tensor, torch.index_select, dim=0, index=sample_idx)\n self.metrics[idx].update(*new_args, **new_kwargs)\n\n def compute(self) -> Dict[str, Tensor]:\n \"\"\"Compute the bootstrapped metric values.\n\n Always returns a dict of tensors, which can contain the following keys: ``mean``, ``std``, ``quantile`` and\n ``raw`` depending on how the class was initialized.\n\n \"\"\"\n computed_vals = torch.stack([m.compute() for m in self.metrics], dim=0)\n output_dict = {}\n if self.mean:\n output_dict[\"mean\"] = computed_vals.mean(dim=0)\n if self.std:\n output_dict[\"std\"] = computed_vals.std(dim=0)\n if self.quantile is not None:\n output_dict[\"quantile\"] = torch.quantile(computed_vals, self.quantile)\n if self.raw:\n output_dict[\"raw\"] = computed_vals\n return output_dict\n\n def forward(self, *args: Any, **kwargs: Any) -> Any:\n \"\"\"Use the original forward method of the base metric class.\"\"\"\n return super(WrapperMetric, self).forward(*args, **kwargs)\n\n def 
reset(self) -> None:\n \"\"\"Reset the state of the base metric.\"\"\"\n for m in self.metrics:\n m.reset()\n super().reset()\n\n def plot(\n self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: Optional[_AX_TYPE] = None\n ) -> _PLOT_OUT_TYPE:\n \"\"\"Plot a single or multiple values from the metric.\n\n Args:\n val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.\n If no value is provided, will automatically call `metric.compute` and plot that result.\n ax: An matplotlib axis object. If provided will add plot to that axis\n\n Returns:\n Figure and Axes object\n\n Raises:\n ModuleNotFoundError:\n If `matplotlib` is not installed\n\n .. plot::\n :scale: 75\n\n >>> # Example plotting a single value\n >>> import torch\n >>> from torchmetrics.wrappers import BootStrapper\n >>> from torchmetrics.regression import MeanSquaredError\n >>> metric = BootStrapper(MeanSquaredError(), num_bootstraps=20)\n >>> metric.update(torch.randn(100,), torch.randn(100,))\n >>> fig_, ax_ = metric.plot()\n\n .. plot::\n :scale: 75\n\n >>> # Example plotting multiple values\n >>> import torch\n >>> from torchmetrics.wrappers import BootStrapper\n >>> from torchmetrics.regression import MeanSquaredError\n >>> metric = BootStrapper(MeanSquaredError(), num_bootstraps=20)\n >>> values = [ ]\n >>> for _ in range(3):\n ... values.append(metric(torch.randn(100,), torch.randn(100,)))\n >>> fig_, ax_ = metric.plot(values)\n\n \"\"\"\n return self._plot(val, ax)\n", "path": "src/torchmetrics/wrappers/bootstrapping.py"}]}
| 3,098 | 183 |
gh_patches_debug_4127
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-4074
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Latest docs specify an RC instead of a Release version
## Description
`update_version.sh` bumps version string in docs even when an RC is created. There might be a small period of time (e.g. during release QA) where that Tag exists (albeit not signed)
## Steps to Reproduce
https://docs.securedrop.org/en/latest/set_up_admin_tails.html?highlight=git%20checkout and observe instructions to check out 0.12.0~rc1 tag
## Expected Behavior
The tag should be the latest release (as of today, 0.11.1)
## Actual Behavior
The tag is 0.12.0~rc1
## Comments
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # SecureDrop documentation build configuration file, created by
4 # sphinx-quickstart on Tue Oct 13 12:08:52 2015.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import os
16
17 # Detect if we're being built by Read the Docs
18 # https://docs.readthedocs.org/en/latest/faq.html#how-do-i-change-behavior-for-read-the-docs
19 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
20
21 # If extensions (or modules to document with autodoc) are in another directory,
22 # add these directories to sys.path here. If the directory is relative to the
23 # documentation root, use os.path.abspath to make it absolute, like shown here.
24 # sys.path.insert(0, os.path.abspath('.'))
25
26 # -- General configuration ------------------------------------------------
27
28 # If your documentation needs a minimal Sphinx version, state it here.
29 # needs_sphinx = '1.0'
30
31 # Add any Sphinx extension module names here, as strings. They can be
32 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
33 # ones.
34 extensions = ['sphinx.ext.todo', ]
35
36 # Add any paths that contain templates here, relative to this directory.
37 templates_path = ['_templates']
38
39 # The suffix(es) of source filenames.
40 # You can specify multiple suffix as a list of string:
41 # source_suffix = ['.rst', '.md']
42 source_suffix = '.rst'
43
44 # The encoding of source files.
45 # source_encoding = 'utf-8-sig'
46
47 # The master toctree document.
48 master_doc = 'index'
49
50 # General information about the project.
51 project = u'SecureDrop'
52 copyright = u'2017, Freedom of the Press Foundation'
53 author = u'SecureDrop Team and Contributors'
54
55 # The version info for the project you're documenting, acts as replacement for
56 # |version| and |release|, also used in various other places throughout the
57 # built documents.
58 #
59 # The short X.Y version.
60 version = '0.13.0~rc1'
61 # The full version, including alpha/beta/rc tags.
62 release = '0.13.0~rc1'
63
64 # The language for content autogenerated by Sphinx. Refer to documentation
65 # for a list of supported languages.
66 #
67 # This is also used if you do content translation via gettext catalogs.
68 # Usually you set "language" from the command line for these cases.
69 language = None
70
71 # There are two options for replacing |today|: either, you set today to some
72 # non-false value, then it is used:
73 # today = ''
74 # Else, today_fmt is used as the format for a strftime call.
75 # today_fmt = '%B %d, %Y'
76
77 # List of patterns, relative to source directory, that match files and
78 # directories to ignore when looking for source files.
79 exclude_patterns = ['_build']
80
81 # The reST default role (used for this markup: `text`) to use for all
82 # documents.
83 # default_role = None
84
85 # If true, '()' will be appended to :func: etc. cross-reference text.
86 # add_function_parentheses = True
87
88 # If true, the current module name will be prepended to all description
89 # unit titles (such as .. function::).
90 # add_module_names = True
91
92 # If true, sectionauthor and moduleauthor directives will be shown in the
93 # output. They are ignored by default.
94 # show_authors = False
95
96 # The name of the Pygments (syntax highlighting) style to use.
97 pygments_style = 'sphinx'
98
99 # A list of ignored prefixes for module index sorting.
100 # modindex_common_prefix = []
101
102 # If true, keep warnings as "system message" paragraphs in the built documents.
103 # keep_warnings = False
104
105 # If true, `todo` and `todoList` produce output, else they produce nothing.
106 todo_include_todos = False
107
108
109 # -- Options for HTML output ----------------------------------------------
110
111 # The theme to use for HTML and HTML Help pages. See the documentation for
112 # a list of builtin themes.
113 if on_rtd:
114 html_theme = 'default'
115 else:
116 try:
117 # If you want to build the docs locally using the RTD theme,
118 # you may need to install it: ``pip install sphinx_rtd_theme``.
119 # https://github.com/snide/sphinx_rtd_theme#via-package
120 import sphinx_rtd_theme
121 html_theme = "sphinx_rtd_theme"
122 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
123 except ImportError:
124 # This theme is included with Sphinx and is quite nice (based
125 # on the Pocoo themes), but since we're using the RTD theme
126 # for the production docs, it's best to use that to avoid
127 # issues due to discrepancies between the themes.
128 html_theme = 'alabaster'
129
130 # Theme options are theme-specific and customize the look and feel of a theme
131 # further. For a list of options available for each theme, see the
132 # documentation.
133 # html_theme_options = {}
134
135 # Add any paths that contain custom themes here, relative to this directory.
136 # html_theme_path = []
137
138 # The name for this set of Sphinx documents. If None, it defaults to
139 # "<project> v<release> documentation".
140 # html_title = None
141
142 # A shorter title for the navigation bar. Default is the same as html_title.
143 # html_short_title = None
144
145 # The name of an image file (relative to this directory) to place at the top
146 # of the sidebar.
147 html_logo = '../securedrop/static/i/favicon.png'
148
149 # The name of an image file (within the static path) to use as favicon of the
150 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
151 # pixels large.
152 # html_favicon = None
153
154 # Add any paths that contain custom static files (such as style sheets) here,
155 # relative to this directory. They are copied after the builtin static files,
156 # so a file named "default.css" will overwrite the builtin "default.css".
157 # html_static_path = ['_static']
158
159 # Add any extra paths that contain custom files (such as robots.txt or
160 # .htaccess) here, relative to this directory. These files are copied
161 # directly to the root of the documentation.
162 # html_extra_path = []
163
164 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
165 # using the given strftime format.
166 # html_last_updated_fmt = '%b %d, %Y'
167
168 # If true, SmartyPants will be used to convert quotes and dashes to
169 # typographically correct entities.
170 # html_use_smartypants = True
171
172 # Custom sidebar templates, maps document names to template names.
173 # html_sidebars = {}
174
175 # Additional templates that should be rendered to pages, maps page names to
176 # template names.
177 # html_additional_pages = {}
178
179 # If false, no module index is generated.
180 # html_domain_indices = True
181
182 # If false, no index is generated.
183 # html_use_index = True
184
185 # If true, the index is split into individual pages for each letter.
186 # html_split_index = False
187
188 # If true, links to the reST sources are added to the pages.
189 # html_show_sourcelink = True
190
191 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
192 # html_show_sphinx = True
193
194 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
195 # html_show_copyright = True
196
197 # If true, an OpenSearch description file will be output, and all pages will
198 # contain a <link> tag referring to it. The value of this option must be the
199 # base URL from which the finished HTML is served.
200 # html_use_opensearch = ''
201
202 # This is the file name suffix for HTML files (e.g. ".xhtml").
203 # html_file_suffix = None
204
205 # Language to be used for generating the HTML full-text search index.
206 # Sphinx supports the following languages:
207 # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
208 # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
209 # html_search_language = 'en'
210
211 # A dictionary with options for the search language support, empty by default.
212 # Now only 'ja' uses this config value
213 # html_search_options = {'type': 'default'}
214
215 # The name of a javascript file (relative to the configuration directory) that
216 # implements a search results scorer. If empty, the default will be used.
217 # html_search_scorer = 'scorer.js'
218
219 # Output file base name for HTML help builder.
220 htmlhelp_basename = 'SecureDropdoc'
221
222 # -- Options for LaTeX output ---------------------------------------------
223
224 latex_elements = {
225 # The paper size ('letterpaper' or 'a4paper').
226 # 'papersize': 'letterpaper',
227
228 # The font size ('10pt', '11pt' or '12pt').
229 # 'pointsize': '10pt',
230
231 # Additional stuff for the LaTeX preamble.
232 # 'preamble': '',
233
234 # Latex figure (float) alignment
235 # 'figure_align': 'htbp',
236 }
237
238 # Grouping the document tree into LaTeX files. List of tuples
239 # (source start file, target name, title,
240 # author, documentclass [howto, manual, or own class]).
241 latex_documents = [
242 (master_doc, 'SecureDrop.tex', u'SecureDrop Documentation',
243 author, 'manual'),
244 ]
245
246 # The name of an image file (relative to this directory) to place at the top of
247 # the title page.
248 # latex_logo = None
249
250 # For "manual" documents, if this is true, then toplevel headings are parts,
251 # not chapters.
252 # latex_use_parts = False
253
254 # If true, show page references after internal links.
255 # latex_show_pagerefs = False
256
257 # If true, show URL addresses after external links.
258 # latex_show_urls = False
259
260 # Documents to append as an appendix to all manuals.
261 # latex_appendices = []
262
263 # If false, no module index is generated.
264 # latex_domain_indices = True
265
266
267 # -- Options for manual page output ---------------------------------------
268
269 # One entry per manual page. List of tuples
270 # (source start file, name, description, authors, manual section).
271 man_pages = [
272 (master_doc, 'securedrop', u'SecureDrop Documentation',
273 [author], 1)
274 ]
275
276 # If true, show URL addresses after external links.
277 # man_show_urls = False
278
279
280 # -- Options for Texinfo output -------------------------------------------
281
282 # Grouping the document tree into Texinfo files. List of tuples
283 # (source start file, target name, title, author,
284 # dir menu entry, description, category)
285 texinfo_documents = [
286 (master_doc, 'SecureDrop', u'SecureDrop Documentation',
287 author, 'SecureDrop', 'One line description of project.',
288 'Miscellaneous'),
289 ]
290
291 # Documents to append as an appendix to all manuals.
292 # texinfo_appendices = []
293
294 # If false, no module index is generated.
295 # texinfo_domain_indices = True
296
297 # How to display URL addresses: 'footnote', 'no', or 'inline'.
298 # texinfo_show_urls = 'footnote'
299
300 # If true, do not generate a @detailmenu in the "Top" node's menu.
301 # texinfo_no_detailmenu = False
302
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -57,9 +57,9 @@
# built documents.
#
# The short X.Y version.
-version = '0.13.0~rc1'
+version = '0.12.1'
# The full version, including alpha/beta/rc tags.
-release = '0.13.0~rc1'
+release = '0.12.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -57,9 +57,9 @@\n # built documents.\n #\n # The short X.Y version.\n-version = '0.13.0~rc1'\n+version = '0.12.1'\n # The full version, including alpha/beta/rc tags.\n-release = '0.13.0~rc1'\n+release = '0.12.1'\n \n # The language for content autogenerated by Sphinx. Refer to documentation\n # for a list of supported languages.\n", "issue": "Latest docs specify an RC instead of a Release version\n## Description\r\n\r\n`update_version.sh` bumps version string in docs even when an RC is created. There might be a small period of time (e.g. during release QA) where that Tag exists (albeit not signed)\r\n\r\n## Steps to Reproduce\r\n\r\nhttps://docs.securedrop.org/en/latest/set_up_admin_tails.html?highlight=git%20checkout and observe instructions to check out 0.12.0~rc1 tag\r\n\r\n## Expected Behavior\r\n\r\nThe tag should be the latest release (as of today, 0.11.1)\r\n## Actual Behavior\r\n\r\nThe tag is 0.12.0~rc1\r\n\r\n## Comments\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# SecureDrop documentation build configuration file, created by\n# sphinx-quickstart on Tue Oct 13 12:08:52 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport os\n\n# Detect if we're being built by Read the Docs\n# https://docs.readthedocs.org/en/latest/faq.html#how-do-i-change-behavior-for-read-the-docs\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n# sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.todo', ]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'SecureDrop'\ncopyright = u'2017, Freedom of the Press Foundation'\nauthor = u'SecureDrop Team and Contributors'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.13.0~rc1'\n# The full version, including alpha/beta/rc tags.\nrelease = '0.13.0~rc1'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n# keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nif on_rtd:\n html_theme = 'default'\nelse:\n try:\n # If you want to build the docs locally using the RTD theme,\n # you may need to install it: ``pip install sphinx_rtd_theme``.\n # https://github.com/snide/sphinx_rtd_theme#via-package\n import sphinx_rtd_theme\n html_theme = \"sphinx_rtd_theme\"\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n except ImportError:\n # This theme is included with Sphinx and is quite nice (based\n # on the Pocoo themes), but since we're using the RTD theme\n # for the production docs, it's best to use that to avoid\n # issues due to discrepancies between the themes.\n html_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n# html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n# html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = '../securedrop/static/i/favicon.png'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n# html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n# html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n# html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n# html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n# html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n# html_domain_indices = True\n\n# If false, no index is generated.\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n# html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n# html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n# html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n# html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'SecureDropdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n\n # Latex figure (float) alignment\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'SecureDrop.tex', u'SecureDrop Documentation',\n author, 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n# latex_use_parts = False\n\n# If true, show page references after internal links.\n# latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n# latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n# latex_appendices = []\n\n# If false, no module index is generated.\n# latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'securedrop', u'SecureDrop Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n# man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'SecureDrop', u'SecureDrop Documentation',\n author, 'SecureDrop', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n# texinfo_appendices = []\n\n# If false, no module index is generated.\n# texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n# texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n# texinfo_no_detailmenu = False\n", "path": "docs/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# SecureDrop documentation build configuration file, created by\n# sphinx-quickstart on Tue Oct 13 12:08:52 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport os\n\n# Detect if we're being built by Read the Docs\n# https://docs.readthedocs.org/en/latest/faq.html#how-do-i-change-behavior-for-read-the-docs\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n# sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.todo', ]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'SecureDrop'\ncopyright = u'2017, Freedom of the Press Foundation'\nauthor = u'SecureDrop Team and Contributors'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.12.1'\n# The full version, including alpha/beta/rc tags.\nrelease = '0.12.1'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n# keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nif on_rtd:\n html_theme = 'default'\nelse:\n try:\n # If you want to build the docs locally using the RTD theme,\n # you may need to install it: ``pip install sphinx_rtd_theme``.\n # https://github.com/snide/sphinx_rtd_theme#via-package\n import sphinx_rtd_theme\n html_theme = \"sphinx_rtd_theme\"\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n except ImportError:\n # This theme is included with Sphinx and is quite nice (based\n # on the Pocoo themes), but since we're using the RTD theme\n # for the production docs, it's best to use that to avoid\n # issues due to discrepancies between the themes.\n html_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. 
For a list of options available for each theme, see the\n# documentation.\n# html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n# html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = '../securedrop/static/i/favicon.png'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n# html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n# html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n# html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n# html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n# html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n# html_domain_indices = True\n\n# If false, no index is generated.\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n# html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n# html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n# html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. 
If empty, the default will be used.\n# html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'SecureDropdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n\n # Latex figure (float) alignment\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'SecureDrop.tex', u'SecureDrop Documentation',\n author, 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n# latex_use_parts = False\n\n# If true, show page references after internal links.\n# latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n# latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n# latex_appendices = []\n\n# If false, no module index is generated.\n# latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'securedrop', u'SecureDrop Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n# man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'SecureDrop', u'SecureDrop Documentation',\n author, 'SecureDrop', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n# texinfo_appendices = []\n\n# If false, no module index is generated.\n# texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n# texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n# texinfo_no_detailmenu = False\n", "path": "docs/conf.py"}]}
| 3,778 | 130 |
gh_patches_debug_28259
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-302
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add option to pass additional dependencies to hooks
I am currently working on implementing this framework and one of the things I am trying to run is eslint. As part of that I have a number of plugins that are in my configuration file. I think that, rather than forcing anyone who is using plugins to create a new hook definition with a corresponding package.json, it might be useful to add a global option to pass a list of dependencies in the configuration file.
For instance, something like this:
``` yaml
- repo: https://github.com/pre-commit/mirrors-eslint
sha: 135f285caf8e6e886b28c8e98fdff402b69c4490
hooks:
- id: eslint
language_version: '0.12.7'
dependencies: [eslint-plugin-react, eslint-plugin-html]
```
and have those dependencies installed into the generated environment for that language.
I am going to work on implementing this in my forked repo but would like feedback on whether this is a desired feature or any implementation advice on how best to facilitate this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/output.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import os
4 import subprocess
5 import sys
6
7 from pre_commit import color
8 from pre_commit import five
9
10
11 # TODO: smell: import side-effects
12 try:
13 if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)
14 raise OSError('Cannot determine width without TERM')
15 COLS = int(
16 subprocess.Popen(
17 ('tput', 'cols'), stdout=subprocess.PIPE,
18 ).communicate()[0] or
19 # Default in the case of no terminal
20 80
21 )
22 except OSError: # pragma: no cover (windows)
23 COLS = 80
24
25
26 def get_hook_message(
27 start,
28 postfix='',
29 end_msg=None,
30 end_len=0,
31 end_color=None,
32 use_color=None,
33 cols=COLS,
34 ):
35 """Prints a message for running a hook.
36
37 This currently supports three approaches:
38
39 # Print `start` followed by dots, leaving 6 characters at the end
40 >>> print_hook_message('start', end_len=6)
41 start...............................................................
42
43 # Print `start` followed by dots with the end message colored if coloring
44 # is specified and a newline afterwards
45 >>> print_hook_message(
46 'start',
47 end_msg='end',
48 end_color=color.RED,
49 use_color=True,
50 )
51 start...................................................................end
52
53 # Print `start` followed by dots, followed by the `postfix` message
54 # uncolored, followed by the `end_msg` colored if specified and a newline
55 # afterwards
56 >>> print_hook_message(
57 'start',
58 postfix='postfix ',
59 end_msg='end',
60 end_color=color.RED,
61 use_color=True,
62 )
63 start...........................................................postfix end
64 """
65 if bool(end_msg) == bool(end_len):
66 raise ValueError('Expected one of (`end_msg`, `end_len`)')
67 if end_msg is not None and (end_color is None or use_color is None):
68 raise ValueError(
69 '`end_color` and `use_color` are required with `end_msg`'
70 )
71
72 if end_len:
73 return start + '.' * (cols - len(start) - end_len - 1)
74 else:
75 return '{0}{1}{2}{3}\n'.format(
76 start,
77 '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),
78 postfix,
79 color.format_color(end_msg, end_color, use_color),
80 )
81
82
83 stdout_byte_stream = getattr(sys.stdout, 'buffer', sys.stdout)
84
85
86 def sys_stdout_write_wrapper(s, stream=stdout_byte_stream):
87 stream.write(five.to_bytes(s))
88
```
Path: `pre_commit/languages/python.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import contextlib
4 import distutils.spawn
5 import os
6 import sys
7
8 import virtualenv
9
10 from pre_commit.languages import helpers
11 from pre_commit.util import clean_path_on_failure
12 from pre_commit.util import shell_escape
13
14
15 ENVIRONMENT_DIR = 'py_env'
16
17
18 class PythonEnv(helpers.Environment):
19 @property
20 def env_prefix(self):
21 return ". '{{prefix}}{0}activate' &&".format(
22 virtualenv.path_locations(
23 helpers.environment_dir(ENVIRONMENT_DIR, self.language_version)
24 )[-1].rstrip(os.sep) + os.sep,
25 )
26
27
28 @contextlib.contextmanager
29 def in_env(repo_cmd_runner, language_version):
30 yield PythonEnv(repo_cmd_runner, language_version)
31
32
33 def norm_version(version):
34 version = os.path.expanduser(version)
35 if os.name == 'nt': # pragma: no cover (windows)
36 if not distutils.spawn.find_executable(version):
37 # expanduser introduces a leading slash
38 version = version.strip('\\')
39 # The default place for python on windows is:
40 # C:\PythonXX\python.exe
41 version = r'C:\{0}\python.exe'.format(version.replace('.', ''))
42 return version
43
44
45 def install_environment(
46 repo_cmd_runner,
47 version='default',
48 additional_dependencies=None,
49 ):
50 assert repo_cmd_runner.exists('setup.py')
51 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
52
53 # Install a virtualenv
54 with clean_path_on_failure(repo_cmd_runner.path(directory)):
55 venv_cmd = [
56 sys.executable, '-m', 'virtualenv',
57 '{{prefix}}{0}'.format(directory)
58 ]
59 if version != 'default':
60 venv_cmd.extend(['-p', norm_version(version)])
61 repo_cmd_runner.run(venv_cmd)
62 with in_env(repo_cmd_runner, version) as env:
63 env.run("cd '{prefix}' && pip install .")
64 if additional_dependencies:
65 env.run(
66 "cd '{prefix}' && pip install " +
67 ' '.join(
68 shell_escape(dep) for dep in additional_dependencies
69 )
70 )
71
72
73 def run_hook(repo_cmd_runner, hook, file_args):
74 with in_env(repo_cmd_runner, hook['language_version']) as env:
75 return helpers.run_hook(env, hook, file_args)
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -31,15 +31,18 @@
def norm_version(version):
- version = os.path.expanduser(version)
if os.name == 'nt': # pragma: no cover (windows)
- if not distutils.spawn.find_executable(version):
- # expanduser introduces a leading slash
- version = version.strip('\\')
- # The default place for python on windows is:
- # C:\PythonXX\python.exe
- version = r'C:\{0}\python.exe'.format(version.replace('.', ''))
- return version
+ # Try looking up by name
+ if distutils.spawn.find_executable(version):
+ return version
+
+ # If it is in the form pythonx.x search in the default
+ # place on windows
+ if version.startswith('python'):
+ return r'C:\{0}\python.exe'.format(version.replace('.', ''))
+
+ # Otherwise assume it is a path
+ return os.path.expanduser(version)
def install_environment(
diff --git a/pre_commit/output.py b/pre_commit/output.py
--- a/pre_commit/output.py
+++ b/pre_commit/output.py
@@ -12,13 +12,14 @@
try:
if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)
raise OSError('Cannot determine width without TERM')
- COLS = int(
- subprocess.Popen(
- ('tput', 'cols'), stdout=subprocess.PIPE,
- ).communicate()[0] or
- # Default in the case of no terminal
- 80
- )
+ else: # pragma no cover (windows)
+ COLS = int(
+ subprocess.Popen(
+ ('tput', 'cols'), stdout=subprocess.PIPE,
+ ).communicate()[0] or
+ # Default in the case of no terminal
+ 80
+ )
except OSError: # pragma: no cover (windows)
COLS = 80
|
{"golden_diff": "diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py\n--- a/pre_commit/languages/python.py\n+++ b/pre_commit/languages/python.py\n@@ -31,15 +31,18 @@\n \n \n def norm_version(version):\n- version = os.path.expanduser(version)\n if os.name == 'nt': # pragma: no cover (windows)\n- if not distutils.spawn.find_executable(version):\n- # expanduser introduces a leading slash\n- version = version.strip('\\\\')\n- # The default place for python on windows is:\n- # C:\\PythonXX\\python.exe\n- version = r'C:\\{0}\\python.exe'.format(version.replace('.', ''))\n- return version\n+ # Try looking up by name\n+ if distutils.spawn.find_executable(version):\n+ return version\n+\n+ # If it is in the form pythonx.x search in the default\n+ # place on windows\n+ if version.startswith('python'):\n+ return r'C:\\{0}\\python.exe'.format(version.replace('.', ''))\n+\n+ # Otherwise assume it is a path\n+ return os.path.expanduser(version)\n \n \n def install_environment(\ndiff --git a/pre_commit/output.py b/pre_commit/output.py\n--- a/pre_commit/output.py\n+++ b/pre_commit/output.py\n@@ -12,13 +12,14 @@\n try:\n if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)\n raise OSError('Cannot determine width without TERM')\n- COLS = int(\n- subprocess.Popen(\n- ('tput', 'cols'), stdout=subprocess.PIPE,\n- ).communicate()[0] or\n- # Default in the case of no terminal\n- 80\n- )\n+ else: # pragma no cover (windows)\n+ COLS = int(\n+ subprocess.Popen(\n+ ('tput', 'cols'), stdout=subprocess.PIPE,\n+ ).communicate()[0] or\n+ # Default in the case of no terminal\n+ 80\n+ )\n except OSError: # pragma: no cover (windows)\n COLS = 80\n", "issue": "Add option to pass additional dependencies to hooks\nI am currently working on implementing this framework and one of the things I am trying to run is eslint. As part of that I have a number of plugins that are in my configuration file. 
I think that, rather than forcing anyone who is using plugins to create a new hook definition with a corresponding package.json it might be useful to add a global option to pass a list of dependencies in the configuration file.\n\nFor instance, something lilke this:\n\n``` yaml\n- repo: https://github.com/pre-commit/mirrors-eslint\n sha: 135f285caf8e6e886b28c8e98fdff402b69c4490\n hooks:\n - id: eslint\n language_version: '0.12.7'\n dependencies: [eslint-plugin-react, eslint-plugin-html]\n```\n\nand have those dependencies installed into the generated environment for that language.\n\nI am going to work on implementing this in my forked repo but would like feedback on whether this is a desired feature or any implementation advice on how best to facilitate this.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport os\nimport subprocess\nimport sys\n\nfrom pre_commit import color\nfrom pre_commit import five\n\n\n# TODO: smell: import side-effects\ntry:\n if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)\n raise OSError('Cannot determine width without TERM')\n COLS = int(\n subprocess.Popen(\n ('tput', 'cols'), stdout=subprocess.PIPE,\n ).communicate()[0] or\n # Default in the case of no terminal\n 80\n )\nexcept OSError: # pragma: no cover (windows)\n COLS = 80\n\n\ndef get_hook_message(\n start,\n postfix='',\n end_msg=None,\n end_len=0,\n end_color=None,\n use_color=None,\n cols=COLS,\n):\n \"\"\"Prints a message for running a hook.\n\n This currently supports three approaches:\n\n # Print `start` followed by dots, leaving 6 characters at the end\n >>> print_hook_message('start', end_len=6)\n start...............................................................\n\n # Print `start` followed by dots with the end message colored if coloring\n # is specified and a newline afterwards\n >>> print_hook_message(\n 'start',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...................................................................end\n\n # Print `start` followed by dots, followed by the `postfix` message\n # uncolored, followed by the `end_msg` colored if specified and a newline\n # afterwards\n >>> print_hook_message(\n 'start',\n postfix='postfix ',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...........................................................postfix end\n \"\"\"\n if bool(end_msg) == bool(end_len):\n raise ValueError('Expected one of (`end_msg`, `end_len`)')\n if end_msg is not None and (end_color is None or use_color is None):\n raise ValueError(\n '`end_color` and `use_color` are required with `end_msg`'\n )\n\n if end_len:\n return start + '.' * (cols - len(start) - end_len - 1)\n else:\n return '{0}{1}{2}{3}\\n'.format(\n start,\n '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),\n postfix,\n color.format_color(end_msg, end_color, use_color),\n )\n\n\nstdout_byte_stream = getattr(sys.stdout, 'buffer', sys.stdout)\n\n\ndef sys_stdout_write_wrapper(s, stream=stdout_byte_stream):\n stream.write(five.to_bytes(s))\n", "path": "pre_commit/output.py"}, {"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport distutils.spawn\nimport os\nimport sys\n\nimport virtualenv\n\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import shell_escape\n\n\nENVIRONMENT_DIR = 'py_env'\n\n\nclass PythonEnv(helpers.Environment):\n @property\n def env_prefix(self):\n return \". 
'{{prefix}}{0}activate' &&\".format(\n virtualenv.path_locations(\n helpers.environment_dir(ENVIRONMENT_DIR, self.language_version)\n )[-1].rstrip(os.sep) + os.sep,\n )\n\n\[email protected]\ndef in_env(repo_cmd_runner, language_version):\n yield PythonEnv(repo_cmd_runner, language_version)\n\n\ndef norm_version(version):\n version = os.path.expanduser(version)\n if os.name == 'nt': # pragma: no cover (windows)\n if not distutils.spawn.find_executable(version):\n # expanduser introduces a leading slash\n version = version.strip('\\\\')\n # The default place for python on windows is:\n # C:\\PythonXX\\python.exe\n version = r'C:\\{0}\\python.exe'.format(version.replace('.', ''))\n return version\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=None,\n):\n assert repo_cmd_runner.exists('setup.py')\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n # Install a virtualenv\n with clean_path_on_failure(repo_cmd_runner.path(directory)):\n venv_cmd = [\n sys.executable, '-m', 'virtualenv',\n '{{prefix}}{0}'.format(directory)\n ]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n repo_cmd_runner.run(venv_cmd)\n with in_env(repo_cmd_runner, version) as env:\n env.run(\"cd '{prefix}' && pip install .\")\n if additional_dependencies:\n env.run(\n \"cd '{prefix}' && pip install \" +\n ' '.join(\n shell_escape(dep) for dep in additional_dependencies\n )\n )\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner, hook['language_version']) as env:\n return helpers.run_hook(env, hook, file_args)\n", "path": "pre_commit/languages/python.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport os\nimport subprocess\nimport sys\n\nfrom pre_commit import color\nfrom pre_commit import five\n\n\n# TODO: smell: import side-effects\ntry:\n if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)\n raise OSError('Cannot determine width without TERM')\n else: # pragma no cover (windows)\n COLS = int(\n subprocess.Popen(\n ('tput', 'cols'), stdout=subprocess.PIPE,\n ).communicate()[0] or\n # Default in the case of no terminal\n 80\n )\nexcept OSError: # pragma: no cover (windows)\n COLS = 80\n\n\ndef get_hook_message(\n start,\n postfix='',\n end_msg=None,\n end_len=0,\n end_color=None,\n use_color=None,\n cols=COLS,\n):\n \"\"\"Prints a message for running a hook.\n\n This currently supports three approaches:\n\n # Print `start` followed by dots, leaving 6 characters at the end\n >>> print_hook_message('start', end_len=6)\n start...............................................................\n\n # Print `start` followed by dots with the end message colored if coloring\n # is specified and a newline afterwards\n >>> print_hook_message(\n 'start',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...................................................................end\n\n # Print `start` followed by dots, followed by the `postfix` message\n # uncolored, followed by the `end_msg` colored if specified and a newline\n # afterwards\n >>> print_hook_message(\n 'start',\n postfix='postfix ',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...........................................................postfix end\n \"\"\"\n if bool(end_msg) == bool(end_len):\n raise ValueError('Expected one of (`end_msg`, `end_len`)')\n if end_msg is not None and (end_color is None or use_color is None):\n raise ValueError(\n '`end_color` and `use_color` 
are required with `end_msg`'\n )\n\n if end_len:\n return start + '.' * (cols - len(start) - end_len - 1)\n else:\n return '{0}{1}{2}{3}\\n'.format(\n start,\n '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),\n postfix,\n color.format_color(end_msg, end_color, use_color),\n )\n\n\nstdout_byte_stream = getattr(sys.stdout, 'buffer', sys.stdout)\n\n\ndef sys_stdout_write_wrapper(s, stream=stdout_byte_stream):\n stream.write(five.to_bytes(s))\n", "path": "pre_commit/output.py"}, {"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport distutils.spawn\nimport os\nimport sys\n\nimport virtualenv\n\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import shell_escape\n\n\nENVIRONMENT_DIR = 'py_env'\n\n\nclass PythonEnv(helpers.Environment):\n @property\n def env_prefix(self):\n return \". '{{prefix}}{0}activate' &&\".format(\n virtualenv.path_locations(\n helpers.environment_dir(ENVIRONMENT_DIR, self.language_version)\n )[-1].rstrip(os.sep) + os.sep,\n )\n\n\[email protected]\ndef in_env(repo_cmd_runner, language_version):\n yield PythonEnv(repo_cmd_runner, language_version)\n\n\ndef norm_version(version):\n if os.name == 'nt': # pragma: no cover (windows)\n # Try looking up by name\n if distutils.spawn.find_executable(version):\n return version\n\n # If it is in the form pythonx.x search in the default\n # place on windows\n if version.startswith('python'):\n return r'C:\\{0}\\python.exe'.format(version.replace('.', ''))\n\n # Otherwise assume it is a path\n return os.path.expanduser(version)\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=None,\n):\n assert repo_cmd_runner.exists('setup.py')\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n # Install a virtualenv\n with clean_path_on_failure(repo_cmd_runner.path(directory)):\n venv_cmd = [\n sys.executable, '-m', 'virtualenv',\n '{{prefix}}{0}'.format(directory)\n ]\n if version != 'default':\n venv_cmd.extend(['-p', norm_version(version)])\n repo_cmd_runner.run(venv_cmd)\n with in_env(repo_cmd_runner, version) as env:\n env.run(\"cd '{prefix}' && pip install .\")\n if additional_dependencies:\n env.run(\n \"cd '{prefix}' && pip install \" +\n ' '.join(\n shell_escape(dep) for dep in additional_dependencies\n )\n )\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n with in_env(repo_cmd_runner, hook['language_version']) as env:\n return helpers.run_hook(env, hook, file_args)\n", "path": "pre_commit/languages/python.py"}]}
| 1,913 | 477 |
gh_patches_debug_18579
|
rasdani/github-patches
|
git_diff
|
falconry__falcon-62
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove responder exception handling
It can hide problems and encourage bad coding practices.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `falcon/api.py`
Content:
```
1 """Defines the API class.
2
3 Copyright 2013 by Rackspace Hosting, Inc.
4
5 Licensed under the Apache License, Version 2.0 (the "License");
6 you may not use this file except in compliance with the License.
7 You may obtain a copy of the License at
8
9 http://www.apache.org/licenses/LICENSE-2.0
10
11 Unless required by applicable law or agreed to in writing, software
12 distributed under the License is distributed on an "AS IS" BASIS,
13 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 See the License for the specific language governing permissions and
15 limitations under the License.
16
17 """
18
19 import traceback
20
21 from .request import Request
22 from .response import Response
23 from . import responders
24 from .status_codes import *
25 from .api_helpers import *
26
27 from .http_error import HTTPError
28
29
30 class API(object):
31 """Provides routing and such for building a web service application
32
33 This class is the main entry point into a Falcon-based app. It provides a
34 callable WSGI interface and a simple routing engine based on URI templates.
35
36 """
37
38 __slots__ = ('_routes')
39
40 def __init__(self):
41 """Initialize default values"""
42 self._routes = []
43
44 def __call__(self, env, start_response):
45 """WSGI "app" method
46
47 Makes instances of API callable by any WSGI server. See also PEP 333.
48
49 Args:
50 env: A WSGI environment dictionary
51 start_response: A WSGI helper method for setting status and headers
52 on a response.
53
54 """
55
56 req = Request(env)
57 resp = Response()
58
59 responder, params = self._get_responder(req.path, req.method)
60
61 try:
62 responder(req, resp, **params)
63
64 except HTTPError as ex:
65 resp.status = ex.status
66 if ex.headers is not None:
67 resp.set_headers(ex.headers)
68
69 if req.client_accepts_json():
70 resp.body = ex.json()
71
72 except Exception as ex:
73 # Reset to a known state and respond with a generic error
74 req = Request(env)
75 resp = Response()
76
77 message = ['Responder raised ', ex.__class__.__name__]
78
79 details = str(ex)
80 if details:
81 message.append(': ')
82 message.append(details)
83
84 stack = traceback.format_exc()
85 message.append('\n')
86 message.append(stack)
87
88 req.log_error(''.join(message))
89 responders.server_error(req, resp)
90
91 #
92 # Set status and headers
93 #
94 use_body = not should_ignore_body(resp.status, req.method)
95 if use_body:
96 set_content_length(resp)
97
98 start_response(resp.status, resp._wsgi_headers())
99
100 # Return an iterable for the body, per the WSGI spec
101 if use_body:
102 return prepare_wsgi_content(resp)
103
104 # Default: return an empty body
105 return []
106
107 def add_route(self, uri_template, resource):
108 """Associate a URI path with a resource
109
110 Args:
111 uri_template: Relative URI template. Currently only Level 1
112 templates are supported. See also RFC 6570.
113 resource: Object which represents an HTTP/REST "resource". Falcon
114 will pass "GET" requests to on_get, "PUT" requests to on_put,
115 etc. If any HTTP methods are not supported by your resource,
116 simply don't define the corresponding request handlers, and
117 Falcon will do the right thing.
118
119 """
120
121 if not uri_template:
122 uri_template = '/'
123
124 path_template = compile_uri_template(uri_template)
125 method_map = create_http_method_map(resource)
126
127 # Insert at the head of the list in case we get duplicate
128 # adds (will cause the last one to win).
129 self._routes.insert(0, (path_template, method_map))
130
131 def _get_responder(self, path, method):
132 """Searches routes for a matching responder
133
134 Args:
135 path: URI path to search (without query stirng)
136 method: HTTP method (uppercase) requested
137 Returns:
138 A 2-member tuple, containing a responder callable and a dict
139 containing parsed path fields, if any were specified in
140 the matching route's URI template
141
142 """
143
144 for path_template, method_map in self._routes:
145 m = path_template.match(path)
146 if m:
147 params = m.groupdict()
148
149 try:
150 responder = method_map[method]
151 except KeyError:
152 responder = responders.bad_request
153
154 break
155 else:
156 responder = responders.path_not_found
157 params = {}
158
159 return (responder, params)
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/falcon/api.py b/falcon/api.py
--- a/falcon/api.py
+++ b/falcon/api.py
@@ -16,8 +16,6 @@
"""
-import traceback
-
from .request import Request
from .response import Response
from . import responders
@@ -69,25 +67,6 @@
if req.client_accepts_json():
resp.body = ex.json()
- except Exception as ex:
- # Reset to a known state and respond with a generic error
- req = Request(env)
- resp = Response()
-
- message = ['Responder raised ', ex.__class__.__name__]
-
- details = str(ex)
- if details:
- message.append(': ')
- message.append(details)
-
- stack = traceback.format_exc()
- message.append('\n')
- message.append(stack)
-
- req.log_error(''.join(message))
- responders.server_error(req, resp)
-
#
# Set status and headers
#
|
{"golden_diff": "diff --git a/falcon/api.py b/falcon/api.py\n--- a/falcon/api.py\n+++ b/falcon/api.py\n@@ -16,8 +16,6 @@\n \n \"\"\"\n \n-import traceback\n-\n from .request import Request\n from .response import Response\n from . import responders\n@@ -69,25 +67,6 @@\n if req.client_accepts_json():\n resp.body = ex.json()\n \n- except Exception as ex:\n- # Reset to a known state and respond with a generic error\n- req = Request(env)\n- resp = Response()\n-\n- message = ['Responder raised ', ex.__class__.__name__]\n-\n- details = str(ex)\n- if details:\n- message.append(': ')\n- message.append(details)\n-\n- stack = traceback.format_exc()\n- message.append('\\n')\n- message.append(stack)\n-\n- req.log_error(''.join(message))\n- responders.server_error(req, resp)\n-\n #\n # Set status and headers\n #\n", "issue": "Remove responder exception handling\nCan hide problems, encourage bad coding practices.\n\n", "before_files": [{"content": "\"\"\"Defines the API class.\n\nCopyright 2013 by Rackspace Hosting, Inc.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\n\"\"\"\n\nimport traceback\n\nfrom .request import Request\nfrom .response import Response\nfrom . import responders\nfrom .status_codes import *\nfrom .api_helpers import *\n\nfrom .http_error import HTTPError\n\n\nclass API(object):\n \"\"\"Provides routing and such for building a web service application\n\n This class is the main entry point into a Falcon-based app. It provides a\n callable WSGI interface and a simple routing engine based on URI templates.\n\n \"\"\"\n\n __slots__ = ('_routes')\n\n def __init__(self):\n \"\"\"Initialize default values\"\"\"\n self._routes = []\n\n def __call__(self, env, start_response):\n \"\"\"WSGI \"app\" method\n\n Makes instances of API callable by any WSGI server. 
See also PEP 333.\n\n Args:\n env: A WSGI environment dictionary\n start_response: A WSGI helper method for setting status and headers\n on a response.\n\n \"\"\"\n\n req = Request(env)\n resp = Response()\n\n responder, params = self._get_responder(req.path, req.method)\n\n try:\n responder(req, resp, **params)\n\n except HTTPError as ex:\n resp.status = ex.status\n if ex.headers is not None:\n resp.set_headers(ex.headers)\n\n if req.client_accepts_json():\n resp.body = ex.json()\n\n except Exception as ex:\n # Reset to a known state and respond with a generic error\n req = Request(env)\n resp = Response()\n\n message = ['Responder raised ', ex.__class__.__name__]\n\n details = str(ex)\n if details:\n message.append(': ')\n message.append(details)\n\n stack = traceback.format_exc()\n message.append('\\n')\n message.append(stack)\n\n req.log_error(''.join(message))\n responders.server_error(req, resp)\n\n #\n # Set status and headers\n #\n use_body = not should_ignore_body(resp.status, req.method)\n if use_body:\n set_content_length(resp)\n\n start_response(resp.status, resp._wsgi_headers())\n\n # Return an iterable for the body, per the WSGI spec\n if use_body:\n return prepare_wsgi_content(resp)\n\n # Default: return an empty body\n return []\n\n def add_route(self, uri_template, resource):\n \"\"\"Associate a URI path with a resource\n\n Args:\n uri_template: Relative URI template. Currently only Level 1\n templates are supported. See also RFC 6570.\n resource: Object which represents an HTTP/REST \"resource\". Falcon\n will pass \"GET\" requests to on_get, \"PUT\" requests to on_put,\n etc. If any HTTP methods are not supported by your resource,\n simply don't define the corresponding request handlers, and\n Falcon will do the right thing.\n\n \"\"\"\n\n if not uri_template:\n uri_template = '/'\n\n path_template = compile_uri_template(uri_template)\n method_map = create_http_method_map(resource)\n\n # Insert at the head of the list in case we get duplicate\n # adds (will cause the last one to win).\n self._routes.insert(0, (path_template, method_map))\n\n def _get_responder(self, path, method):\n \"\"\"Searches routes for a matching responder\n\n Args:\n path: URI path to search (without query stirng)\n method: HTTP method (uppercase) requested\n Returns:\n A 2-member tuple, containing a responder callable and a dict\n containing parsed path fields, if any were specified in\n the matching route's URI template\n\n \"\"\"\n\n for path_template, method_map in self._routes:\n m = path_template.match(path)\n if m:\n params = m.groupdict()\n\n try:\n responder = method_map[method]\n except KeyError:\n responder = responders.bad_request\n\n break\n else:\n responder = responders.path_not_found\n params = {}\n\n return (responder, params)\n", "path": "falcon/api.py"}], "after_files": [{"content": "\"\"\"Defines the API class.\n\nCopyright 2013 by Rackspace Hosting, Inc.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\n\"\"\"\n\nfrom .request import Request\nfrom .response import Response\nfrom . 
import responders\nfrom .status_codes import *\nfrom .api_helpers import *\n\nfrom .http_error import HTTPError\n\n\nclass API(object):\n \"\"\"Provides routing and such for building a web service application\n\n This class is the main entry point into a Falcon-based app. It provides a\n callable WSGI interface and a simple routing engine based on URI templates.\n\n \"\"\"\n\n __slots__ = ('_routes')\n\n def __init__(self):\n \"\"\"Initialize default values\"\"\"\n self._routes = []\n\n def __call__(self, env, start_response):\n \"\"\"WSGI \"app\" method\n\n Makes instances of API callable by any WSGI server. See also PEP 333.\n\n Args:\n env: A WSGI environment dictionary\n start_response: A WSGI helper method for setting status and headers\n on a response.\n\n \"\"\"\n\n req = Request(env)\n resp = Response()\n\n responder, params = self._get_responder(req.path, req.method)\n\n try:\n responder(req, resp, **params)\n\n except HTTPError as ex:\n resp.status = ex.status\n if ex.headers is not None:\n resp.set_headers(ex.headers)\n\n if req.client_accepts_json():\n resp.body = ex.json()\n\n #\n # Set status and headers\n #\n use_body = not should_ignore_body(resp.status, req.method)\n if use_body:\n set_content_length(resp)\n\n start_response(resp.status, resp._wsgi_headers())\n\n # Return an iterable for the body, per the WSGI spec\n if use_body:\n return prepare_wsgi_content(resp)\n\n # Default: return an empty body\n return []\n\n def add_route(self, uri_template, resource):\n \"\"\"Associate a URI path with a resource\n\n Args:\n uri_template: Relative URI template. Currently only Level 1\n templates are supported. See also RFC 6570.\n resource: Object which represents an HTTP/REST \"resource\". Falcon\n will pass \"GET\" requests to on_get, \"PUT\" requests to on_put,\n etc. If any HTTP methods are not supported by your resource,\n simply don't define the corresponding request handlers, and\n Falcon will do the right thing.\n\n \"\"\"\n\n if not uri_template:\n uri_template = '/'\n\n path_template = compile_uri_template(uri_template)\n method_map = create_http_method_map(resource)\n\n # Insert at the head of the list in case we get duplicate\n # adds (will cause the last one to win).\n self._routes.insert(0, (path_template, method_map))\n\n def _get_responder(self, path, method):\n \"\"\"Searches routes for a matching responder\n\n Args:\n path: URI path to search (without query stirng)\n method: HTTP method (uppercase) requested\n Returns:\n A 2-member tuple, containing a responder callable and a dict\n containing parsed path fields, if any were specified in\n the matching route's URI template\n\n \"\"\"\n\n for path_template, method_map in self._routes:\n m = path_template.match(path)\n if m:\n params = m.groupdict()\n\n try:\n responder = method_map[method]\n except KeyError:\n responder = responders.bad_request\n\n break\n else:\n responder = responders.path_not_found\n params = {}\n\n return (responder, params)\n", "path": "falcon/api.py"}]}
| 1,651 | 223 |
gh_patches_debug_31655
|
rasdani/github-patches
|
git_diff
|
ocadotechnology__codeforlife-portal-686
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CMS upgrade
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from setuptools import find_packages, setup
3 import versioneer
4
5
6 setup(name='codeforlife-portal',
7 cmdclass=versioneer.get_cmdclass(),
8 version=versioneer.get_version(),
9 packages=find_packages(),
10 include_package_data=True,
11 install_requires=[
12 'django==1.8.2',
13 'django-appconf==1.0.1',
14 'django-countries==3.4.1',
15 'djangorestframework==3.1.3',
16 'django-jquery==1.9.1',
17 'django-autoconfig==0.3.6',
18 'django-pipeline==1.5.4',
19 'django-recaptcha==1.3.1', # 1.4 dropped support for < 1.11
20
21 'pyyaml==3.10',
22 'rapid-router >= 1.0.0.post.dev1',
23 'six==1.9.0',
24 'docutils==0.12',
25 'reportlab==3.2.0',
26 'postcodes==0.1',
27 'django-formtools==1.0',
28 'django-two-factor-auth==1.2.0',
29 'urllib3==1.10.4',
30 'requests==2.7.0',
31
32 'django-cms==3.1.2',
33
34 'django-classy-tags==0.6.1',
35 'django-treebeard==3.0',
36 'django-sekizai==0.8.2',
37 'djangocms-admin-style==0.2.8',
38
39 'djangocms-text-ckeditor==2.6.0',
40 'djangocms-link==1.6.2',
41 'djangocms-snippet==1.5',
42 'djangocms-style==1.5',
43 'djangocms-column==1.5',
44 'djangocms-grid==1.2',
45 'djangocms-oembed==0.5',
46 'djangocms-table==1.2',
47 'djangocms-file==0.1',
48 'djangocms_flash==0.2.0',
49 'djangocms_googlemap==0.3',
50 'djangocms_inherit==0.1',
51 'djangocms_picture==0.1',
52 'djangocms_teaser==0.1',
53 'djangocms_video==0.1',
54 'django-online-status==0.1.0',
55
56
57 'Pillow==2.9.0',
58 'django-reversion==1.9.3',
59 'sqlparse',
60 'libsass',
61 ],
62 tests_require=[
63 'django-setuptest',
64 'django-selenium-clean==0.2.1',
65 'responses==0.4.0',
66 'selenium==2.48.0',
67 ],
68 test_suite='setuptest.setuptest.SetupTestSuite',
69 zip_safe=False,
70 )
71
```
Path: `portal/autoconfig.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Code for Life
3 #
4 # Copyright (C) 2018, Ocado Innovation Limited
5 #
6 # This program is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU Affero General Public License as
8 # published by the Free Software Foundation, either version 3 of the
9 # License, or (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU Affero General Public License for more details.
15 #
16 # You should have received a copy of the GNU Affero General Public License
17 # along with this program. If not, see <http://www.gnu.org/licenses/>.
18 #
19 # ADDITIONAL TERMS – Section 7 GNU General Public Licence
20 #
21 # This licence does not grant any right, title or interest in any “Ocado” logos,
22 # trade names or the trademark “Ocado” or any other trademarks or domain names
23 # owned by Ocado Innovation Limited or the Ocado group of companies or any other
24 # distinctive brand features of “Ocado” as may be secured from time to time. You
25 # must not distribute any modification of this program using the trademark
26 # “Ocado” or claim any affiliation or association with Ocado or its employees.
27 #
28 # You are not authorised to use the name Ocado (or any of its trade names) or
29 # the names of any author or contributor in advertising or for publicity purposes
30 # pertaining to the distribution of this program, without the prior written
31 # authorisation of Ocado.
32 #
33 # Any propagation, distribution or conveyance of this program must include this
34 # copyright notice and these terms. You must not misrepresent the origins of this
35 # program; modified versions of the program must be marked as such and not
36 # identified as the original program.
37 '''Portal autoconfig'''
38 import os
39
40 from django_autoconfig.autoconfig import OrderingRelationship
41
42
43 DEFAULT_SETTINGS = {
44 'AUTOCONFIG_INDEX_VIEW': 'home',
45 'LANGUAGE_CODE': 'en-gb',
46 'SITE_ID': 1,
47 'MEDIA_ROOT': os.path.join(os.path.join(os.path.dirname(__file__), 'static'), 'email_media/')
48 }
49
50 SETTINGS = {
51 'AUTOCONFIG_DISABLED_APPS': [
52 'django_otp',
53 'django_otp.plugins.otp_static',
54 'django_otp.plugins.otp_totp',
55 ],
56 'PIPELINE_COMPILERS': (
57 'pipeline.compilers.sass.SASSCompiler',
58 ),
59 'PIPELINE_CSS': {
60 'css': {
61 'source_filenames': (
62 'portal/sass/bootstrap.scss',
63 'portal/sass/colorbox.scss',
64 'portal/sass/styles.scss',
65 ),
66 'output_filename': 'portal.css',
67 },
68 'base': {
69 'source_filenames': (
70 'portal/sass/old_styles.scss',
71 ),
72 'output_filename': 'base.css',
73 },
74 },
75 'PIPELINE_CSS_COMPRESSOR': None,
76 'INSTALLED_APPS': [
77 'cms',
78 'game',
79 'pipeline',
80 'portal',
81 'ratelimit',
82 'django.contrib.admin',
83 'django.contrib.admindocs',
84 'django.contrib.auth',
85 'django.contrib.contenttypes',
86 'django.contrib.sessions',
87 'django.contrib.messages',
88 'django.contrib.sites',
89 'django.contrib.staticfiles',
90 'rest_framework',
91 'jquery',
92 'django_otp',
93 'django_otp.plugins.otp_static',
94 'django_otp.plugins.otp_totp',
95 'sekizai', # for javascript and css management
96 'treebeard',
97 'two_factor',
98 ],
99 'LANGUAGES': [
100 ('en-gb', 'English'),
101 ],
102 'STATICFILES_FINDERS': [
103 'pipeline.finders.PipelineFinder',
104 ],
105 'STATICFILES_STORAGE': 'pipeline.storage.PipelineStorage',
106 'MESSAGE_STORAGE': 'django.contrib.messages.storage.session.SessionStorage',
107 'MIDDLEWARE_CLASSES': [
108 'django.contrib.sessions.middleware.SessionMiddleware',
109 'django.middleware.locale.LocaleMiddleware',
110 'django.middleware.common.CommonMiddleware',
111 'django.middleware.csrf.CsrfViewMiddleware',
112 'django.contrib.auth.middleware.AuthenticationMiddleware',
113 'online_status.middleware.OnlineStatusMiddleware',
114 'django.contrib.messages.middleware.MessageMiddleware',
115 'django.middleware.clickjacking.XFrameOptionsMiddleware',
116 'deploy.middleware.exceptionlogging.ExceptionLoggingMiddleware',
117 'cms.middleware.user.CurrentUserMiddleware',
118 'cms.middleware.page.CurrentPageMiddleware',
119 'cms.middleware.toolbar.ToolbarMiddleware',
120 'cms.middleware.language.LanguageCookieMiddleware',
121 'portal.middleware.ratelimit_login_attempts.RateLimitLoginAttemptsMiddleware',
122 'django_otp.middleware.OTPMiddleware',
123 ],
124
125 'TEMPLATES': [
126 {
127 'BACKEND': 'django.template.backends.django.DjangoTemplates',
128 'APP_DIRS': True,
129 'OPTIONS': {
130 'context_processors': [
131 'django.contrib.auth.context_processors.auth',
132 'django.template.context_processors.request',
133 'django.contrib.messages.context_processors.messages',
134 'sekizai.context_processors.sekizai',
135 ]
136 }
137 }
138 ],
139
140 'CODEFORLIFE_WEBSITE': 'www.codeforlife.education',
141
142 'CLOUD_STORAGE_PREFIX': '//storage.googleapis.com/codeforlife-assets/',
143
144 'LOGGING': {
145 'version': 1,
146 'disable_existing_loggers': False,
147 'handlers': {
148 'console': {
149 'level': 'DEBUG',
150 'class': 'logging.StreamHandler',
151 },
152 },
153 'loggers': {
154 'two_factor': {
155 'handlers': ['console'],
156 'level': 'INFO',
157 }
158 }
159 },
160
161 'RAPID_ROUTER_EARLY_ACCESS_FUNCTION_NAME': 'portal.beta.has_beta_access',
162 }
163
164 RELATIONSHIPS = [
165 OrderingRelationship(
166 'MIDDLEWARE_CLASSES',
167 'cms.middleware.toolbar.ToolbarMiddleware',
168 after=[
169 'django.contrib.auth.middleware.AuthenticationMiddleware',
170 ],
171 add_missing=False,
172 ),
173 OrderingRelationship(
174 'MIDDLEWARE_CLASSES',
175 'online_status.middleware.OnlineStatusMiddleware',
176 after=[
177 'django.contrib.auth.middleware.AuthenticationMiddleware',
178 ],
179 add_missing=False,
180 ),
181 OrderingRelationship(
182 'MIDDLEWARE_CLASSES',
183 'django_otp.middleware.OTPMiddleware',
184 after=[
185 'django.contrib.auth.middleware.AuthenticationMiddleware',
186 ],
187 add_missing=False,
188 ),
189 ]
190
191 try:
192 import django_pandasso
193 SETTINGS['INSTALLED_APPS'].append('django_pandasso')
194 SETTINGS['INSTALLED_APPS'].append('social.apps.django_app.default')
195 except ImportError:
196 pass
197
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/portal/autoconfig.py b/portal/autoconfig.py
--- a/portal/autoconfig.py
+++ b/portal/autoconfig.py
@@ -74,7 +74,6 @@
},
'PIPELINE_CSS_COMPRESSOR': None,
'INSTALLED_APPS': [
- 'cms',
'game',
'pipeline',
'portal',
@@ -114,10 +113,6 @@
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'deploy.middleware.exceptionlogging.ExceptionLoggingMiddleware',
- 'cms.middleware.user.CurrentUserMiddleware',
- 'cms.middleware.page.CurrentPageMiddleware',
- 'cms.middleware.toolbar.ToolbarMiddleware',
- 'cms.middleware.language.LanguageCookieMiddleware',
'portal.middleware.ratelimit_login_attempts.RateLimitLoginAttemptsMiddleware',
'django_otp.middleware.OTPMiddleware',
],
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -29,28 +29,10 @@
'urllib3==1.10.4',
'requests==2.7.0',
- 'django-cms==3.1.2',
-
'django-classy-tags==0.6.1',
'django-treebeard==3.0',
'django-sekizai==0.8.2',
- 'djangocms-admin-style==0.2.8',
- 'djangocms-text-ckeditor==2.6.0',
- 'djangocms-link==1.6.2',
- 'djangocms-snippet==1.5',
- 'djangocms-style==1.5',
- 'djangocms-column==1.5',
- 'djangocms-grid==1.2',
- 'djangocms-oembed==0.5',
- 'djangocms-table==1.2',
- 'djangocms-file==0.1',
- 'djangocms_flash==0.2.0',
- 'djangocms_googlemap==0.3',
- 'djangocms_inherit==0.1',
- 'djangocms_picture==0.1',
- 'djangocms_teaser==0.1',
- 'djangocms_video==0.1',
'django-online-status==0.1.0',
|
{"golden_diff": "diff --git a/portal/autoconfig.py b/portal/autoconfig.py\n--- a/portal/autoconfig.py\n+++ b/portal/autoconfig.py\n@@ -74,7 +74,6 @@\n },\n 'PIPELINE_CSS_COMPRESSOR': None,\n 'INSTALLED_APPS': [\n- 'cms',\n 'game',\n 'pipeline',\n 'portal',\n@@ -114,10 +113,6 @@\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'deploy.middleware.exceptionlogging.ExceptionLoggingMiddleware',\n- 'cms.middleware.user.CurrentUserMiddleware',\n- 'cms.middleware.page.CurrentPageMiddleware',\n- 'cms.middleware.toolbar.ToolbarMiddleware',\n- 'cms.middleware.language.LanguageCookieMiddleware',\n 'portal.middleware.ratelimit_login_attempts.RateLimitLoginAttemptsMiddleware',\n 'django_otp.middleware.OTPMiddleware',\n ],\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -29,28 +29,10 @@\n 'urllib3==1.10.4',\n 'requests==2.7.0',\n \n- 'django-cms==3.1.2',\n-\n 'django-classy-tags==0.6.1',\n 'django-treebeard==3.0',\n 'django-sekizai==0.8.2',\n- 'djangocms-admin-style==0.2.8',\n \n- 'djangocms-text-ckeditor==2.6.0',\n- 'djangocms-link==1.6.2',\n- 'djangocms-snippet==1.5',\n- 'djangocms-style==1.5',\n- 'djangocms-column==1.5',\n- 'djangocms-grid==1.2',\n- 'djangocms-oembed==0.5',\n- 'djangocms-table==1.2',\n- 'djangocms-file==0.1',\n- 'djangocms_flash==0.2.0',\n- 'djangocms_googlemap==0.3',\n- 'djangocms_inherit==0.1',\n- 'djangocms_picture==0.1',\n- 'djangocms_teaser==0.1',\n- 'djangocms_video==0.1',\n 'django-online-status==0.1.0',\n", "issue": "CMS upgrade\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom setuptools import find_packages, setup\nimport versioneer\n\n\nsetup(name='codeforlife-portal',\n cmdclass=versioneer.get_cmdclass(),\n version=versioneer.get_version(),\n packages=find_packages(),\n include_package_data=True,\n install_requires=[\n 'django==1.8.2',\n 'django-appconf==1.0.1',\n 'django-countries==3.4.1',\n 'djangorestframework==3.1.3',\n 'django-jquery==1.9.1',\n 'django-autoconfig==0.3.6',\n 'django-pipeline==1.5.4',\n 'django-recaptcha==1.3.1', # 1.4 dropped support for < 1.11\n\n 'pyyaml==3.10',\n 'rapid-router >= 1.0.0.post.dev1',\n 'six==1.9.0',\n 'docutils==0.12',\n 'reportlab==3.2.0',\n 'postcodes==0.1',\n 'django-formtools==1.0',\n 'django-two-factor-auth==1.2.0',\n 'urllib3==1.10.4',\n 'requests==2.7.0',\n\n 'django-cms==3.1.2',\n\n 'django-classy-tags==0.6.1',\n 'django-treebeard==3.0',\n 'django-sekizai==0.8.2',\n 'djangocms-admin-style==0.2.8',\n\n 'djangocms-text-ckeditor==2.6.0',\n 'djangocms-link==1.6.2',\n 'djangocms-snippet==1.5',\n 'djangocms-style==1.5',\n 'djangocms-column==1.5',\n 'djangocms-grid==1.2',\n 'djangocms-oembed==0.5',\n 'djangocms-table==1.2',\n 'djangocms-file==0.1',\n 'djangocms_flash==0.2.0',\n 'djangocms_googlemap==0.3',\n 'djangocms_inherit==0.1',\n 'djangocms_picture==0.1',\n 'djangocms_teaser==0.1',\n 'djangocms_video==0.1',\n 'django-online-status==0.1.0',\n\n\n 'Pillow==2.9.0',\n 'django-reversion==1.9.3',\n 'sqlparse',\n 'libsass',\n ],\n tests_require=[\n 'django-setuptest',\n 'django-selenium-clean==0.2.1',\n 'responses==0.4.0',\n 'selenium==2.48.0',\n ],\n test_suite='setuptest.setuptest.SetupTestSuite',\n zip_safe=False,\n )\n", "path": "setup.py"}, {"content": "# -*- coding: utf-8 -*-\n# Code for Life\n#\n# Copyright (C) 2018, Ocado Innovation Limited\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as\n# published by the Free Software 
Foundation, either version 3 of the\n# License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n# ADDITIONAL TERMS \u2013 Section 7 GNU General Public Licence\n#\n# This licence does not grant any right, title or interest in any \u201cOcado\u201d logos,\n# trade names or the trademark \u201cOcado\u201d or any other trademarks or domain names\n# owned by Ocado Innovation Limited or the Ocado group of companies or any other\n# distinctive brand features of \u201cOcado\u201d as may be secured from time to time. You\n# must not distribute any modification of this program using the trademark\n# \u201cOcado\u201d or claim any affiliation or association with Ocado or its employees.\n#\n# You are not authorised to use the name Ocado (or any of its trade names) or\n# the names of any author or contributor in advertising or for publicity purposes\n# pertaining to the distribution of this program, without the prior written\n# authorisation of Ocado.\n#\n# Any propagation, distribution or conveyance of this program must include this\n# copyright notice and these terms. You must not misrepresent the origins of this\n# program; modified versions of the program must be marked as such and not\n# identified as the original program.\n'''Portal autoconfig'''\nimport os\n\nfrom django_autoconfig.autoconfig import OrderingRelationship\n\n\nDEFAULT_SETTINGS = {\n 'AUTOCONFIG_INDEX_VIEW': 'home',\n 'LANGUAGE_CODE': 'en-gb',\n 'SITE_ID': 1,\n 'MEDIA_ROOT': os.path.join(os.path.join(os.path.dirname(__file__), 'static'), 'email_media/')\n}\n\nSETTINGS = {\n 'AUTOCONFIG_DISABLED_APPS': [\n 'django_otp',\n 'django_otp.plugins.otp_static',\n 'django_otp.plugins.otp_totp',\n ],\n 'PIPELINE_COMPILERS': (\n 'pipeline.compilers.sass.SASSCompiler',\n ),\n 'PIPELINE_CSS': {\n 'css': {\n 'source_filenames': (\n 'portal/sass/bootstrap.scss',\n 'portal/sass/colorbox.scss',\n 'portal/sass/styles.scss',\n ),\n 'output_filename': 'portal.css',\n },\n 'base': {\n 'source_filenames': (\n 'portal/sass/old_styles.scss',\n ),\n 'output_filename': 'base.css',\n },\n },\n 'PIPELINE_CSS_COMPRESSOR': None,\n 'INSTALLED_APPS': [\n 'cms',\n 'game',\n 'pipeline',\n 'portal',\n 'ratelimit',\n 'django.contrib.admin',\n 'django.contrib.admindocs',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.sites',\n 'django.contrib.staticfiles',\n 'rest_framework',\n 'jquery',\n 'django_otp',\n 'django_otp.plugins.otp_static',\n 'django_otp.plugins.otp_totp',\n 'sekizai', # for javascript and css management\n 'treebeard',\n 'two_factor',\n ],\n 'LANGUAGES': [\n ('en-gb', 'English'),\n ],\n 'STATICFILES_FINDERS': [\n 'pipeline.finders.PipelineFinder',\n ],\n 'STATICFILES_STORAGE': 'pipeline.storage.PipelineStorage',\n 'MESSAGE_STORAGE': 'django.contrib.messages.storage.session.SessionStorage',\n 'MIDDLEWARE_CLASSES': [\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 
'online_status.middleware.OnlineStatusMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'deploy.middleware.exceptionlogging.ExceptionLoggingMiddleware',\n 'cms.middleware.user.CurrentUserMiddleware',\n 'cms.middleware.page.CurrentPageMiddleware',\n 'cms.middleware.toolbar.ToolbarMiddleware',\n 'cms.middleware.language.LanguageCookieMiddleware',\n 'portal.middleware.ratelimit_login_attempts.RateLimitLoginAttemptsMiddleware',\n 'django_otp.middleware.OTPMiddleware',\n ],\n\n 'TEMPLATES': [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.contrib.auth.context_processors.auth',\n 'django.template.context_processors.request',\n 'django.contrib.messages.context_processors.messages',\n 'sekizai.context_processors.sekizai',\n ]\n }\n }\n ],\n\n 'CODEFORLIFE_WEBSITE': 'www.codeforlife.education',\n\n 'CLOUD_STORAGE_PREFIX': '//storage.googleapis.com/codeforlife-assets/',\n\n 'LOGGING': {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'handlers': {\n 'console': {\n 'level': 'DEBUG',\n 'class': 'logging.StreamHandler',\n },\n },\n 'loggers': {\n 'two_factor': {\n 'handlers': ['console'],\n 'level': 'INFO',\n }\n }\n },\n\n 'RAPID_ROUTER_EARLY_ACCESS_FUNCTION_NAME': 'portal.beta.has_beta_access',\n}\n\nRELATIONSHIPS = [\n OrderingRelationship(\n 'MIDDLEWARE_CLASSES',\n 'cms.middleware.toolbar.ToolbarMiddleware',\n after=[\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n ],\n add_missing=False,\n ),\n OrderingRelationship(\n 'MIDDLEWARE_CLASSES',\n 'online_status.middleware.OnlineStatusMiddleware',\n after=[\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n ],\n add_missing=False,\n ),\n OrderingRelationship(\n 'MIDDLEWARE_CLASSES',\n 'django_otp.middleware.OTPMiddleware',\n after=[\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n ],\n add_missing=False,\n ),\n]\n\ntry:\n import django_pandasso\n SETTINGS['INSTALLED_APPS'].append('django_pandasso')\n SETTINGS['INSTALLED_APPS'].append('social.apps.django_app.default')\nexcept ImportError:\n pass\n", "path": "portal/autoconfig.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom setuptools import find_packages, setup\nimport versioneer\n\n\nsetup(name='codeforlife-portal',\n cmdclass=versioneer.get_cmdclass(),\n version=versioneer.get_version(),\n packages=find_packages(),\n include_package_data=True,\n install_requires=[\n 'django==1.8.2',\n 'django-appconf==1.0.1',\n 'django-countries==3.4.1',\n 'djangorestframework==3.1.3',\n 'django-jquery==1.9.1',\n 'django-autoconfig==0.3.6',\n 'django-pipeline==1.5.4',\n 'django-recaptcha==1.3.1', # 1.4 dropped support for < 1.11\n\n 'pyyaml==3.10',\n 'rapid-router >= 1.0.0.post.dev1',\n 'six==1.9.0',\n 'docutils==0.12',\n 'reportlab==3.2.0',\n 'postcodes==0.1',\n 'django-formtools==1.0',\n 'django-two-factor-auth==1.2.0',\n 'urllib3==1.10.4',\n 'requests==2.7.0',\n\n 'django-classy-tags==0.6.1',\n 'django-treebeard==3.0',\n 'django-sekizai==0.8.2',\n\n 'django-online-status==0.1.0',\n\n\n 'Pillow==2.9.0',\n 'django-reversion==1.9.3',\n 'sqlparse',\n 'libsass',\n ],\n tests_require=[\n 'django-setuptest',\n 'django-selenium-clean==0.2.1',\n 'responses==0.4.0',\n 'selenium==2.48.0',\n ],\n test_suite='setuptest.setuptest.SetupTestSuite',\n zip_safe=False,\n )\n", "path": "setup.py"}, {"content": "# -*- coding: utf-8 -*-\n# Code for Life\n#\n# Copyright (C) 2018, Ocado Innovation 
Limited\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as\n# published by the Free Software Foundation, either version 3 of the\n# License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n# ADDITIONAL TERMS \u2013 Section 7 GNU General Public Licence\n#\n# This licence does not grant any right, title or interest in any \u201cOcado\u201d logos,\n# trade names or the trademark \u201cOcado\u201d or any other trademarks or domain names\n# owned by Ocado Innovation Limited or the Ocado group of companies or any other\n# distinctive brand features of \u201cOcado\u201d as may be secured from time to time. You\n# must not distribute any modification of this program using the trademark\n# \u201cOcado\u201d or claim any affiliation or association with Ocado or its employees.\n#\n# You are not authorised to use the name Ocado (or any of its trade names) or\n# the names of any author or contributor in advertising or for publicity purposes\n# pertaining to the distribution of this program, without the prior written\n# authorisation of Ocado.\n#\n# Any propagation, distribution or conveyance of this program must include this\n# copyright notice and these terms. You must not misrepresent the origins of this\n# program; modified versions of the program must be marked as such and not\n# identified as the original program.\n'''Portal autoconfig'''\nimport os\n\nfrom django_autoconfig.autoconfig import OrderingRelationship\n\n\nDEFAULT_SETTINGS = {\n 'AUTOCONFIG_INDEX_VIEW': 'home',\n 'LANGUAGE_CODE': 'en-gb',\n 'SITE_ID': 1,\n 'MEDIA_ROOT': os.path.join(os.path.join(os.path.dirname(__file__), 'static'), 'email_media/')\n}\n\nSETTINGS = {\n 'AUTOCONFIG_DISABLED_APPS': [\n 'django_otp',\n 'django_otp.plugins.otp_static',\n 'django_otp.plugins.otp_totp',\n ],\n 'PIPELINE_COMPILERS': (\n 'pipeline.compilers.sass.SASSCompiler',\n ),\n 'PIPELINE_CSS': {\n 'css': {\n 'source_filenames': (\n 'portal/sass/bootstrap.scss',\n 'portal/sass/colorbox.scss',\n 'portal/sass/styles.scss',\n ),\n 'output_filename': 'portal.css',\n },\n 'base': {\n 'source_filenames': (\n 'portal/sass/old_styles.scss',\n ),\n 'output_filename': 'base.css',\n },\n },\n 'PIPELINE_CSS_COMPRESSOR': None,\n 'INSTALLED_APPS': [\n 'game',\n 'pipeline',\n 'portal',\n 'ratelimit',\n 'django.contrib.admin',\n 'django.contrib.admindocs',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.sites',\n 'django.contrib.staticfiles',\n 'rest_framework',\n 'jquery',\n 'django_otp',\n 'django_otp.plugins.otp_static',\n 'django_otp.plugins.otp_totp',\n 'sekizai', # for javascript and css management\n 'treebeard',\n 'two_factor',\n ],\n 'LANGUAGES': [\n ('en-gb', 'English'),\n ],\n 'STATICFILES_FINDERS': [\n 'pipeline.finders.PipelineFinder',\n ],\n 'STATICFILES_STORAGE': 'pipeline.storage.PipelineStorage',\n 'MESSAGE_STORAGE': 'django.contrib.messages.storage.session.SessionStorage',\n 'MIDDLEWARE_CLASSES': [\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 
'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'online_status.middleware.OnlineStatusMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'deploy.middleware.exceptionlogging.ExceptionLoggingMiddleware',\n 'portal.middleware.ratelimit_login_attempts.RateLimitLoginAttemptsMiddleware',\n 'django_otp.middleware.OTPMiddleware',\n ],\n\n 'TEMPLATES': [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.contrib.auth.context_processors.auth',\n 'django.template.context_processors.request',\n 'django.contrib.messages.context_processors.messages',\n 'sekizai.context_processors.sekizai',\n ]\n }\n }\n ],\n\n 'CODEFORLIFE_WEBSITE': 'www.codeforlife.education',\n\n 'CLOUD_STORAGE_PREFIX': '//storage.googleapis.com/codeforlife-assets/',\n\n 'LOGGING': {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'handlers': {\n 'console': {\n 'level': 'DEBUG',\n 'class': 'logging.StreamHandler',\n },\n },\n 'loggers': {\n 'two_factor': {\n 'handlers': ['console'],\n 'level': 'INFO',\n }\n }\n },\n\n 'RAPID_ROUTER_EARLY_ACCESS_FUNCTION_NAME': 'portal.beta.has_beta_access',\n}\n\nRELATIONSHIPS = [\n OrderingRelationship(\n 'MIDDLEWARE_CLASSES',\n 'cms.middleware.toolbar.ToolbarMiddleware',\n after=[\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n ],\n add_missing=False,\n ),\n OrderingRelationship(\n 'MIDDLEWARE_CLASSES',\n 'online_status.middleware.OnlineStatusMiddleware',\n after=[\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n ],\n add_missing=False,\n ),\n OrderingRelationship(\n 'MIDDLEWARE_CLASSES',\n 'django_otp.middleware.OTPMiddleware',\n after=[\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n ],\n add_missing=False,\n ),\n]\n\ntry:\n import django_pandasso\n SETTINGS['INSTALLED_APPS'].append('django_pandasso')\n SETTINGS['INSTALLED_APPS'].append('social.apps.django_app.default')\nexcept ImportError:\n pass\n", "path": "portal/autoconfig.py"}]}
| 2,991 | 542 |
gh_patches_debug_29686
|
rasdani/github-patches
|
git_diff
|
easybuilders__easybuild-easyblocks-1842
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Better error message when BLAS is expected in the Easyblock but the toolchain does not provide it
I tried to build SuperLU in GCCcore and got the following error:
```
$ eb SAIGE-0.35.8.8-foss-2019a-R-3.6.0.eb -Tr
== temporary log file in case of crash /scratch/branfosj-admin/eb-5y3HuT/easybuild-HPGR5T.log
== resolving dependencies ...
== processing EasyBuild easyconfig /rds/bear-apps/devel/2019a/branfosj-eb-4/src/easybuild-easyconfigs/easybuild/easyconfigs/s/SuperLU/SuperLU-5.2.1-GCCcore-8.2.0.eb
== building and installing SuperLU/5.2.1-GCCcore-8.2.0...
>> installation prefix: /rds/bear-apps/devel/2019a/branfosj-eb-4/EL7/EL7-cascadelake/software/SuperLU/5.2.1-GCCcore-8.2.0
== fetching files...
>> sources:
>> /rds/bear-sysadmin/configmgmt/easybuild/sources/s/SuperLU/superlu_5.2.1.tar.gz [SHA256: 28fb66d6107ee66248d5cf508c79de03d0621852a0ddeba7301801d3d859f463]
== creating build dir, resetting environment...
>> build dir: /dev/shm/build-branfosj-admin/branfosj-admin-4/SuperLU/5.2.1/GCCcore-8.2.0
== unpacking...
>> running command:
[started at: 2019-10-14 13:48:47]
[output logged in /scratch/branfosj-admin/eb-5y3HuT/easybuild-run_cmd-3y3GN5.log]
tar xzf /rds/bear-sysadmin/configmgmt/easybuild/sources/s/SuperLU/superlu_5.2.1.tar.gz
>> command completed: exit 0, ran in < 1s
== patching...
== preparing...
>> loading toolchain module: GCCcore/8.2.0
>> loading modules for build dependencies:
>> * CMake/3.13.3-GCCcore-8.2.0
>> (no (runtime) dependencies specified)
>> defining build environment for GCCcore/8.2.0 toolchain
== configuring...
ERROR: Traceback (most recent call last):
File "/rds/bear-apps/devel/2019a/branfosj-eb-4/src/easybuild-framework/easybuild/main.py", line 112, in build_and_install_software
(ec_res['success'], app_log, err) = build_and_install_one(ec, init_env)
File "/rds/bear-apps/devel/2019a/branfosj-eb-4/src/easybuild-framework/easybuild/framework/easyblock.py", line 3046, in build_and_install_one
result = app.run_all_steps(run_test_cases=run_test_cases)
File "/rds/bear-apps/devel/2019a/branfosj-eb-4/src/easybuild-framework/easybuild/framework/easyblock.py", line 2956, in run_all_steps
self.run_step(step_name, step_methods)
File "/rds/bear-apps/devel/2019a/branfosj-eb-4/src/easybuild-framework/easybuild/framework/easyblock.py", line 2826, in run_step
step_method(self)()
File "/rds/bear-apps/devel/2019a/branfosj-eb-4/src/easybuild-easyblocks/easybuild/easyblocks/s/superlu.py", line 80, in configure_step
toolchain_blas = self.toolchain.definition().get('BLAS', None)[0]
TypeError: 'NoneType' object has no attribute '__getitem__
```
Moving SuperLU to foss fixed the issue (as per a suggestion from Boegel).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `easybuild/easyblocks/s/superlu.py`
Content:
```
1 ##
2 # Copyright 2009-2019 Ghent University, University of Luxembourg
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
6 # with support of Ghent University (http://ugent.be/hpc),
7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en)
9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
10 #
11 # https://github.com/easybuilders/easybuild
12 #
13 # EasyBuild is free software: you can redistribute it and/or modify
14 # it under the terms of the GNU General Public License as published by
15 # the Free Software Foundation v2.
16 #
17 # EasyBuild is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
21 #
22 # You should have received a copy of the GNU General Public License
23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
24 ##
25 """
26 EasyBuild support for building and installing the SuperLU library, implemented as an easyblock
27
28 @author: Xavier Besseron (University of Luxembourg)
29 """
30
31 import os
32 from distutils.version import LooseVersion
33
34 from easybuild.easyblocks.generic.cmakemake import CMakeMake
35 from easybuild.framework.easyconfig import CUSTOM
36 from easybuild.tools.build_log import EasyBuildError
37 from easybuild.tools.systemtools import get_shared_lib_ext
38 from easybuild.tools.modules import get_software_root, get_software_version, get_software_libdir
39
40
41 class EB_SuperLU(CMakeMake):
42 """
43 Support for building the SuperLU library
44 """
45
46 @staticmethod
47 def extra_options():
48 """
49 Define custom easyconfig parameters for SuperLU.
50 """
51 extra_vars = {
52 'build_shared_libs': [False, "Build shared library (instead of static library)", CUSTOM],
53 }
54 return CMakeMake.extra_options(extra_vars)
55
56 def configure_step(self):
57 """
58 Set the CMake options for SuperLU
59 """
60 self.cfg['separate_build_dir'] = True
61
62 if self.cfg['build_shared_libs']:
63 self.cfg.update('configopts', '-DBUILD_SHARED_LIBS=ON')
64 self.lib_ext = get_shared_lib_ext()
65
66 else:
67 self.cfg.update('configopts', '-DBUILD_SHARED_LIBS=OFF')
68 self.lib_ext = 'a'
69
70 # Add -fPIC flag if necessary
71 pic_flag = ('OFF', 'ON')[self.toolchain.options['pic']]
72 self.cfg.update('configopts', '-DCMAKE_POSITION_INDEPENDENT_CODE=%s' % pic_flag)
73
74 # Make sure not to build the slow BLAS library included in the package
75 self.cfg.update('configopts', '-Denable_blaslib=OFF')
76
77 # Set the BLAS library to use
78 # For this, use the BLA_VENDOR option from the FindBLAS module of CMake
79 # Check for all possible values at https://cmake.org/cmake/help/latest/module/FindBLAS.html
80 toolchain_blas = self.toolchain.definition().get('BLAS', None)[0]
81 if toolchain_blas == 'imkl':
82 imkl_version = get_software_version('imkl')
83 if LooseVersion(imkl_version) >= LooseVersion('10'):
84 # 'Intel10_64lp' -> For Intel mkl v10 64 bit,lp thread model, lp64 model
85 # It should work for Intel MKL 10 and above, as long as the library names stay the same
86 # SuperLU requires thread, 'Intel10_64lp_seq' will not work!
87 self.cfg.update('configopts', '-DBLA_VENDOR="Intel10_64lp"')
88
89 else:
90 # 'Intel' -> For older versions of mkl 32 and 64 bit
91 self.cfg.update('configopts', '-DBLA_VENDOR="Intel"')
92
93 elif toolchain_blas in ['ACML', 'ATLAS']:
94 self.cfg.update('configopts', '-DBLA_VENDOR="%s"' % toolchain_blas)
95
96 elif toolchain_blas == 'OpenBLAS':
97 # Unfortunately, OpenBLAS is not recognized by FindBLAS from CMake,
98 # we have to specify the OpenBLAS library manually
99 openblas_lib = os.path.join(get_software_root('OpenBLAS'), get_software_libdir('OpenBLAS'), "libopenblas.a")
100 self.cfg.update('configopts', '-DBLAS_LIBRARIES="%s;-pthread"' % openblas_lib)
101
102 elif toolchain_blas is None:
103 # This toolchain has no BLAS library
104 raise EasyBuildError("No BLAS library found in the toolchain")
105
106 else:
107 # This BLAS library is not supported yet
108 raise EasyBuildError("BLAS library '%s' is not supported yet", toolchain_blas)
109
110 super(EB_SuperLU, self).configure_step()
111
112 def test_step(self):
113 """
114 Run the testsuite of SuperLU
115 """
116 if self.cfg['runtest'] is None:
117 self.cfg['runtest'] = 'test'
118 super(EB_SuperLU, self).test_step()
119
120 def install_step(self):
121 """
122 Custom install procedure for SuperLU
123 """
124 super(EB_SuperLU, self).install_step()
125
126 self.libbits = 'lib'
127 if not os.path.exists(os.path.join(self.installdir, self.libbits)):
128 self.libbits = 'lib64'
129
130 if not os.path.exists(os.path.join(self.installdir, self.libbits)):
131 raise EasyBuildError("No lib or lib64 subdirectory exist in %s", self.installdir)
132
133 expected_libpath = os.path.join(self.installdir, self.libbits, "libsuperlu.%s" % self.lib_ext)
134 actual_libpath = os.path.join(self.installdir, self.libbits, "libsuperlu_%s.%s" %
135 (self.cfg['version'], self.lib_ext))
136
137 if not os.path.exists(expected_libpath):
138 try:
139 os.symlink(actual_libpath, expected_libpath)
140 except OSError as err:
141 raise EasyBuildError("Failed to create symlink '%s' -> '%s: %s", expected_libpath, actual_libpath, err)
142
143 def sanity_check_step(self):
144 """
145 Check for main library files for SuperLU
146 """
147 custom_paths = {
148 'files': ["include/supermatrix.h", os.path.join(self.libbits, "libsuperlu.%s" % self.lib_ext)],
149 'dirs': [],
150 }
151 super(EB_SuperLU, self).sanity_check_step(custom_paths=custom_paths)
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/easybuild/easyblocks/s/superlu.py b/easybuild/easyblocks/s/superlu.py
--- a/easybuild/easyblocks/s/superlu.py
+++ b/easybuild/easyblocks/s/superlu.py
@@ -77,7 +77,12 @@
# Set the BLAS library to use
# For this, use the BLA_VENDOR option from the FindBLAS module of CMake
# Check for all possible values at https://cmake.org/cmake/help/latest/module/FindBLAS.html
- toolchain_blas = self.toolchain.definition().get('BLAS', None)[0]
+ toolchain_blas_list = self.toolchain.definition().get('BLAS', None)
+ if toolchain_blas_list is None:
+ # This toolchain has no BLAS library
+ raise EasyBuildError("No BLAS library found in the toolchain")
+
+ toolchain_blas = toolchain_blas_list[0]
if toolchain_blas == 'imkl':
imkl_version = get_software_version('imkl')
if LooseVersion(imkl_version) >= LooseVersion('10'):
@@ -99,10 +104,6 @@
openblas_lib = os.path.join(get_software_root('OpenBLAS'), get_software_libdir('OpenBLAS'), "libopenblas.a")
self.cfg.update('configopts', '-DBLAS_LIBRARIES="%s;-pthread"' % openblas_lib)
- elif toolchain_blas is None:
- # This toolchain has no BLAS library
- raise EasyBuildError("No BLAS library found in the toolchain")
-
else:
# This BLAS library is not supported yet
raise EasyBuildError("BLAS library '%s' is not supported yet", toolchain_blas)
|
{"golden_diff": "diff --git a/easybuild/easyblocks/s/superlu.py b/easybuild/easyblocks/s/superlu.py\n--- a/easybuild/easyblocks/s/superlu.py\n+++ b/easybuild/easyblocks/s/superlu.py\n@@ -77,7 +77,12 @@\n # Set the BLAS library to use\n # For this, use the BLA_VENDOR option from the FindBLAS module of CMake\n # Check for all possible values at https://cmake.org/cmake/help/latest/module/FindBLAS.html\n- toolchain_blas = self.toolchain.definition().get('BLAS', None)[0]\n+ toolchain_blas_list = self.toolchain.definition().get('BLAS', None)\n+ if toolchain_blas_list is None:\n+ # This toolchain has no BLAS library\n+ raise EasyBuildError(\"No BLAS library found in the toolchain\")\n+\n+ toolchain_blas = toolchain_blas_list[0]\n if toolchain_blas == 'imkl':\n imkl_version = get_software_version('imkl')\n if LooseVersion(imkl_version) >= LooseVersion('10'):\n@@ -99,10 +104,6 @@\n openblas_lib = os.path.join(get_software_root('OpenBLAS'), get_software_libdir('OpenBLAS'), \"libopenblas.a\")\n self.cfg.update('configopts', '-DBLAS_LIBRARIES=\"%s;-pthread\"' % openblas_lib)\n \n- elif toolchain_blas is None:\n- # This toolchain has no BLAS library\n- raise EasyBuildError(\"No BLAS library found in the toolchain\")\n-\n else:\n # This BLAS library is not supported yet\n raise EasyBuildError(\"BLAS library '%s' is not supported yet\", toolchain_blas)\n", "issue": "Better error message when BLAS is expected in the Easyblock but the toolchain does not provide it\nI tried to build SuperLU in GCCcore and got the following error:\r\n```\r\n$ eb SAIGE-0.35.8.8-foss-2019a-R-3.6.0.eb -Tr\r\n== temporary log file in case of crash /scratch/branfosj-admin/eb-5y3HuT/easybuild-HPGR5T.log\r\n== resolving dependencies ...\r\n== processing EasyBuild easyconfig /rds/bear-apps/devel/2019a/branfosj-eb-4/src/easybuild-easyconfigs/easybuild/easyconfigs/s/SuperLU/SuperLU-5.2.1-GCCcore-8.2.0.eb\r\n== building and installing SuperLU/5.2.1-GCCcore-8.2.0...\r\n >> installation prefix: /rds/bear-apps/devel/2019a/branfosj-eb-4/EL7/EL7-cascadelake/software/SuperLU/5.2.1-GCCcore-8.2.0\r\n== fetching files...\r\n >> sources:\r\n >> /rds/bear-sysadmin/configmgmt/easybuild/sources/s/SuperLU/superlu_5.2.1.tar.gz [SHA256: 28fb66d6107ee66248d5cf508c79de03d0621852a0ddeba7301801d3d859f463]\r\n== creating build dir, resetting environment...\r\n >> build dir: /dev/shm/build-branfosj-admin/branfosj-admin-4/SuperLU/5.2.1/GCCcore-8.2.0\r\n== unpacking...\r\n >> running command:\r\n [started at: 2019-10-14 13:48:47]\r\n [output logged in /scratch/branfosj-admin/eb-5y3HuT/easybuild-run_cmd-3y3GN5.log]\r\n tar xzf /rds/bear-sysadmin/configmgmt/easybuild/sources/s/SuperLU/superlu_5.2.1.tar.gz\r\n >> command completed: exit 0, ran in < 1s\r\n== patching...\r\n== preparing...\r\n >> loading toolchain module: GCCcore/8.2.0\r\n >> loading modules for build dependencies:\r\n >> * CMake/3.13.3-GCCcore-8.2.0\r\n >> (no (runtime) dependencies specified)\r\n >> defining build environment for GCCcore/8.2.0 toolchain\r\n== configuring...\r\nERROR: Traceback (most recent call last):\r\n File \"/rds/bear-apps/devel/2019a/branfosj-eb-4/src/easybuild-framework/easybuild/main.py\", line 112, in build_and_install_software\r\n (ec_res['success'], app_log, err) = build_and_install_one(ec, init_env)\r\n File \"/rds/bear-apps/devel/2019a/branfosj-eb-4/src/easybuild-framework/easybuild/framework/easyblock.py\", line 3046, in build_and_install_one\r\n result = app.run_all_steps(run_test_cases=run_test_cases)\r\n File 
\"/rds/bear-apps/devel/2019a/branfosj-eb-4/src/easybuild-framework/easybuild/framework/easyblock.py\", line 2956, in run_all_steps\r\n self.run_step(step_name, step_methods)\r\n File \"/rds/bear-apps/devel/2019a/branfosj-eb-4/src/easybuild-framework/easybuild/framework/easyblock.py\", line 2826, in run_step\r\n step_method(self)()\r\n File \"/rds/bear-apps/devel/2019a/branfosj-eb-4/src/easybuild-easyblocks/easybuild/easyblocks/s/superlu.py\", line 80, in configure_step\r\n toolchain_blas = self.toolchain.definition().get('BLAS', None)[0]\r\nTypeError: 'NoneType' object has no attribute '__getitem__\r\n```\r\n\r\nMoving SuperLU to foss fixed the issue (as per a suggestion from Boegel).\n", "before_files": [{"content": "##\n# Copyright 2009-2019 Ghent University, University of Luxembourg\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for building and installing the SuperLU library, implemented as an easyblock\n\n@author: Xavier Besseron (University of Luxembourg)\n\"\"\"\n\nimport os\nfrom distutils.version import LooseVersion\n\nfrom easybuild.easyblocks.generic.cmakemake import CMakeMake\nfrom easybuild.framework.easyconfig import CUSTOM\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.systemtools import get_shared_lib_ext\nfrom easybuild.tools.modules import get_software_root, get_software_version, get_software_libdir\n\n\nclass EB_SuperLU(CMakeMake):\n \"\"\"\n Support for building the SuperLU library\n \"\"\"\n\n @staticmethod\n def extra_options():\n \"\"\"\n Define custom easyconfig parameters for SuperLU.\n \"\"\"\n extra_vars = {\n 'build_shared_libs': [False, \"Build shared library (instead of static library)\", CUSTOM],\n }\n return CMakeMake.extra_options(extra_vars)\n\n def configure_step(self):\n \"\"\"\n Set the CMake options for SuperLU\n \"\"\"\n self.cfg['separate_build_dir'] = True\n\n if self.cfg['build_shared_libs']:\n self.cfg.update('configopts', '-DBUILD_SHARED_LIBS=ON')\n self.lib_ext = get_shared_lib_ext()\n\n else:\n self.cfg.update('configopts', '-DBUILD_SHARED_LIBS=OFF')\n self.lib_ext = 'a'\n\n # Add -fPIC flag if necessary\n pic_flag = ('OFF', 'ON')[self.toolchain.options['pic']]\n self.cfg.update('configopts', '-DCMAKE_POSITION_INDEPENDENT_CODE=%s' % pic_flag)\n\n # Make sure not to build the slow BLAS library included in the package\n self.cfg.update('configopts', '-Denable_blaslib=OFF')\n\n # Set the BLAS library to use\n # For this, use the BLA_VENDOR option from the FindBLAS module of CMake\n # Check for all possible values at 
https://cmake.org/cmake/help/latest/module/FindBLAS.html\n toolchain_blas = self.toolchain.definition().get('BLAS', None)[0]\n if toolchain_blas == 'imkl':\n imkl_version = get_software_version('imkl')\n if LooseVersion(imkl_version) >= LooseVersion('10'):\n # 'Intel10_64lp' -> For Intel mkl v10 64 bit,lp thread model, lp64 model\n # It should work for Intel MKL 10 and above, as long as the library names stay the same\n # SuperLU requires thread, 'Intel10_64lp_seq' will not work!\n self.cfg.update('configopts', '-DBLA_VENDOR=\"Intel10_64lp\"')\n\n else:\n # 'Intel' -> For older versions of mkl 32 and 64 bit\n self.cfg.update('configopts', '-DBLA_VENDOR=\"Intel\"')\n\n elif toolchain_blas in ['ACML', 'ATLAS']:\n self.cfg.update('configopts', '-DBLA_VENDOR=\"%s\"' % toolchain_blas)\n\n elif toolchain_blas == 'OpenBLAS':\n # Unfortunately, OpenBLAS is not recognized by FindBLAS from CMake,\n # we have to specify the OpenBLAS library manually\n openblas_lib = os.path.join(get_software_root('OpenBLAS'), get_software_libdir('OpenBLAS'), \"libopenblas.a\")\n self.cfg.update('configopts', '-DBLAS_LIBRARIES=\"%s;-pthread\"' % openblas_lib)\n\n elif toolchain_blas is None:\n # This toolchain has no BLAS library\n raise EasyBuildError(\"No BLAS library found in the toolchain\")\n\n else:\n # This BLAS library is not supported yet\n raise EasyBuildError(\"BLAS library '%s' is not supported yet\", toolchain_blas)\n\n super(EB_SuperLU, self).configure_step()\n\n def test_step(self):\n \"\"\"\n Run the testsuite of SuperLU\n \"\"\"\n if self.cfg['runtest'] is None:\n self.cfg['runtest'] = 'test'\n super(EB_SuperLU, self).test_step()\n\n def install_step(self):\n \"\"\"\n Custom install procedure for SuperLU\n \"\"\"\n super(EB_SuperLU, self).install_step()\n\n self.libbits = 'lib'\n if not os.path.exists(os.path.join(self.installdir, self.libbits)):\n self.libbits = 'lib64'\n\n if not os.path.exists(os.path.join(self.installdir, self.libbits)):\n raise EasyBuildError(\"No lib or lib64 subdirectory exist in %s\", self.installdir)\n\n expected_libpath = os.path.join(self.installdir, self.libbits, \"libsuperlu.%s\" % self.lib_ext)\n actual_libpath = os.path.join(self.installdir, self.libbits, \"libsuperlu_%s.%s\" %\n (self.cfg['version'], self.lib_ext))\n\n if not os.path.exists(expected_libpath):\n try:\n os.symlink(actual_libpath, expected_libpath)\n except OSError as err:\n raise EasyBuildError(\"Failed to create symlink '%s' -> '%s: %s\", expected_libpath, actual_libpath, err)\n\n def sanity_check_step(self):\n \"\"\"\n Check for main library files for SuperLU\n \"\"\"\n custom_paths = {\n 'files': [\"include/supermatrix.h\", os.path.join(self.libbits, \"libsuperlu.%s\" % self.lib_ext)],\n 'dirs': [],\n }\n super(EB_SuperLU, self).sanity_check_step(custom_paths=custom_paths)\n", "path": "easybuild/easyblocks/s/superlu.py"}], "after_files": [{"content": "##\n# Copyright 2009-2019 Ghent University, University of Luxembourg\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as 
published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for building and installing the SuperLU library, implemented as an easyblock\n\n@author: Xavier Besseron (University of Luxembourg)\n\"\"\"\n\nimport os\nfrom distutils.version import LooseVersion\n\nfrom easybuild.easyblocks.generic.cmakemake import CMakeMake\nfrom easybuild.framework.easyconfig import CUSTOM\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.systemtools import get_shared_lib_ext\nfrom easybuild.tools.modules import get_software_root, get_software_version, get_software_libdir\n\n\nclass EB_SuperLU(CMakeMake):\n \"\"\"\n Support for building the SuperLU library\n \"\"\"\n\n @staticmethod\n def extra_options():\n \"\"\"\n Define custom easyconfig parameters for SuperLU.\n \"\"\"\n extra_vars = {\n 'build_shared_libs': [False, \"Build shared library (instead of static library)\", CUSTOM],\n }\n return CMakeMake.extra_options(extra_vars)\n\n def configure_step(self):\n \"\"\"\n Set the CMake options for SuperLU\n \"\"\"\n self.cfg['separate_build_dir'] = True\n\n if self.cfg['build_shared_libs']:\n self.cfg.update('configopts', '-DBUILD_SHARED_LIBS=ON')\n self.lib_ext = get_shared_lib_ext()\n\n else:\n self.cfg.update('configopts', '-DBUILD_SHARED_LIBS=OFF')\n self.lib_ext = 'a'\n\n # Add -fPIC flag if necessary\n pic_flag = ('OFF', 'ON')[self.toolchain.options['pic']]\n self.cfg.update('configopts', '-DCMAKE_POSITION_INDEPENDENT_CODE=%s' % pic_flag)\n\n # Make sure not to build the slow BLAS library included in the package\n self.cfg.update('configopts', '-Denable_blaslib=OFF')\n\n # Set the BLAS library to use\n # For this, use the BLA_VENDOR option from the FindBLAS module of CMake\n # Check for all possible values at https://cmake.org/cmake/help/latest/module/FindBLAS.html\n toolchain_blas_list = self.toolchain.definition().get('BLAS', None)\n if toolchain_blas_list is None:\n # This toolchain has no BLAS library\n raise EasyBuildError(\"No BLAS library found in the toolchain\")\n\n toolchain_blas = toolchain_blas_list[0]\n if toolchain_blas == 'imkl':\n imkl_version = get_software_version('imkl')\n if LooseVersion(imkl_version) >= LooseVersion('10'):\n # 'Intel10_64lp' -> For Intel mkl v10 64 bit,lp thread model, lp64 model\n # It should work for Intel MKL 10 and above, as long as the library names stay the same\n # SuperLU requires thread, 'Intel10_64lp_seq' will not work!\n self.cfg.update('configopts', '-DBLA_VENDOR=\"Intel10_64lp\"')\n\n else:\n # 'Intel' -> For older versions of mkl 32 and 64 bit\n self.cfg.update('configopts', '-DBLA_VENDOR=\"Intel\"')\n\n elif toolchain_blas in ['ACML', 'ATLAS']:\n self.cfg.update('configopts', '-DBLA_VENDOR=\"%s\"' % toolchain_blas)\n\n elif toolchain_blas == 'OpenBLAS':\n # Unfortunately, OpenBLAS is not recognized by FindBLAS from CMake,\n # we have to specify the OpenBLAS library manually\n openblas_lib = os.path.join(get_software_root('OpenBLAS'), get_software_libdir('OpenBLAS'), \"libopenblas.a\")\n self.cfg.update('configopts', '-DBLAS_LIBRARIES=\"%s;-pthread\"' % openblas_lib)\n\n else:\n # This BLAS library is not supported yet\n raise 
EasyBuildError(\"BLAS library '%s' is not supported yet\", toolchain_blas)\n\n super(EB_SuperLU, self).configure_step()\n\n def test_step(self):\n \"\"\"\n Run the testsuite of SuperLU\n \"\"\"\n if self.cfg['runtest'] is None:\n self.cfg['runtest'] = 'test'\n super(EB_SuperLU, self).test_step()\n\n def install_step(self):\n \"\"\"\n Custom install procedure for SuperLU\n \"\"\"\n super(EB_SuperLU, self).install_step()\n\n self.libbits = 'lib'\n if not os.path.exists(os.path.join(self.installdir, self.libbits)):\n self.libbits = 'lib64'\n\n if not os.path.exists(os.path.join(self.installdir, self.libbits)):\n raise EasyBuildError(\"No lib or lib64 subdirectory exist in %s\", self.installdir)\n\n expected_libpath = os.path.join(self.installdir, self.libbits, \"libsuperlu.%s\" % self.lib_ext)\n actual_libpath = os.path.join(self.installdir, self.libbits, \"libsuperlu_%s.%s\" %\n (self.cfg['version'], self.lib_ext))\n\n if not os.path.exists(expected_libpath):\n try:\n os.symlink(actual_libpath, expected_libpath)\n except OSError as err:\n raise EasyBuildError(\"Failed to create symlink '%s' -> '%s: %s\", expected_libpath, actual_libpath, err)\n\n def sanity_check_step(self):\n \"\"\"\n Check for main library files for SuperLU\n \"\"\"\n custom_paths = {\n 'files': [\"include/supermatrix.h\", os.path.join(self.libbits, \"libsuperlu.%s\" % self.lib_ext)],\n 'dirs': [],\n }\n super(EB_SuperLU, self).sanity_check_step(custom_paths=custom_paths)\n", "path": "easybuild/easyblocks/s/superlu.py"}]}
| 3,135 | 408 |
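The patch above reduces to a simple guard: look up the `'BLAS'` entry once, check the result for `None`, and only then index into it. A minimal, self-contained sketch of that pattern follows; the plain dict and the `pick_blas` helper are illustrative stand-ins, not the real EasyBuild toolchain API.

```python
# Sketch of the None-guard pattern from the SuperLU patch above.
# `toolchain_definition` is a plain dict standing in for self.toolchain.definition().

def pick_blas(toolchain_definition: dict) -> str:
    """Return the toolchain's BLAS library name, or fail with a clear message."""
    blas_list = toolchain_definition.get('BLAS', None)
    if blas_list is None:
        # A GCCcore-style toolchain ends up here instead of indexing into None,
        # which is what produced the TypeError in the issue's traceback.
        raise RuntimeError("No BLAS library found in the toolchain")
    return blas_list[0]


print(pick_blas({'BLAS': ['OpenBLAS']}))  # OpenBLAS
try:
    pick_blas({})  # no BLAS entry, as with GCCcore
except RuntimeError as err:
    print(err)  # No BLAS library found in the toolchain
```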
gh_patches_debug_17836
|
rasdani/github-patches
|
git_diff
|
DDMAL__CantusDB-1023
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Hide "Number of chants" and "Number of melodies" fields in Source admin
On the Source detail page in the admin area, we are currently displaying the number of melodies and number of chants for the source.

We only use this information behind the scenes, so we should not allow users to edit these fields, since they will be updated automatically as chants or melodies are added to or removed from the Source.
Earlier, I found an issue where these fields weren't being updated correctly. I found this because the only place we can see the number of chants and melodies is in the admin area. For this reason and for future situations like this, I think we should make these fields `read_only` instead of hidden altogether.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/admin.py`
Content:
```
1 from django.contrib import admin
2 from main_app.models import *
3 from main_app.forms import (
4 AdminCenturyForm,
5 AdminChantForm,
6 AdminFeastForm,
7 AdminGenreForm,
8 AdminNotationForm,
9 AdminOfficeForm,
10 AdminProvenanceForm,
11 AdminRismSiglumForm,
12 AdminSegmentForm,
13 AdminSequenceForm,
14 AdminSourceForm,
15 )
16
17 # these fields should not be editable by all classes
18 EXCLUDE = (
19 "created_by",
20 "last_updated_by",
21 "json_info",
22 )
23
24
25 class BaseModelAdmin(admin.ModelAdmin):
26 exclude = EXCLUDE
27
28 # if an object is created in the admin interface, assign the user to the created_by field
29 # else if an object is updated in the admin interface, assign the user to the last_updated_by field
30 def save_model(self, request, obj, form, change):
31 if change:
32 obj.last_updated_by = request.user
33 else:
34 obj.created_by = request.user
35 super().save_model(request, obj, form, change)
36
37
38 class CenturyAdmin(BaseModelAdmin):
39 search_fields = ("name",)
40 form = AdminCenturyForm
41
42
43 class ChantAdmin(BaseModelAdmin):
44 @admin.display(description="Source Siglum")
45 def get_source_siglum(self, obj):
46 if obj.source:
47 return obj.source.siglum
48
49 list_display = (
50 "incipit",
51 "get_source_siglum",
52 "genre",
53 )
54 search_fields = (
55 "title",
56 "incipit",
57 "cantus_id",
58 "id",
59 )
60 list_filter = (
61 "genre",
62 "office",
63 )
64 exclude = EXCLUDE + (
65 "col1",
66 "col2",
67 "col3",
68 "next_chant",
69 "s_sequence",
70 "is_last_chant_in_feast",
71 "visible_status",
72 "date",
73 )
74 form = AdminChantForm
75 raw_id_fields = (
76 "source",
77 "feast",
78 )
79 ordering = ("source__siglum",)
80
81
82 class FeastAdmin(BaseModelAdmin):
83 search_fields = (
84 "name",
85 "feast_code",
86 )
87 list_display = (
88 "name",
89 "month",
90 "day",
91 "feast_code",
92 )
93 form = AdminFeastForm
94
95
96 class GenreAdmin(BaseModelAdmin):
97 search_fields = ("name",)
98 form = AdminGenreForm
99
100
101 class NotationAdmin(BaseModelAdmin):
102 search_fields = ("name",)
103 form = AdminNotationForm
104
105
106 class OfficeAdmin(BaseModelAdmin):
107 search_fields = ("name",)
108 form = AdminOfficeForm
109
110
111 class ProvenanceAdmin(BaseModelAdmin):
112 search_fields = ("name",)
113 form = AdminProvenanceForm
114
115
116 class RismSiglumAdmin(BaseModelAdmin):
117 search_fields = ("name",)
118 form = AdminRismSiglumForm
119
120
121 class SegmentAdmin(BaseModelAdmin):
122 search_fields = ("name",)
123 form = AdminSegmentForm
124
125
126 class SequenceAdmin(BaseModelAdmin):
127 @admin.display(description="Source Siglum")
128 def get_source_siglum(self, obj):
129 if obj.source:
130 return obj.source.siglum
131
132 search_fields = (
133 "title",
134 "incipit",
135 "cantus_id",
136 "id",
137 )
138 exclude = EXCLUDE + (
139 "c_sequence",
140 "next_chant",
141 "is_last_chant_in_feast",
142 "visible_status",
143 )
144 list_display = ("incipit", "get_source_siglum", "genre")
145 list_filter = (
146 "genre",
147 "office",
148 )
149 raw_id_fields = (
150 "source",
151 "feast",
152 )
153 ordering = ("source__siglum",)
154 form = AdminSequenceForm
155
156
157 class SourceAdmin(BaseModelAdmin):
158 # These search fields are also available on the user-source inline relationship in the user admin page
159 search_fields = (
160 "siglum",
161 "title",
162 "id",
163 )
164 # from the Django docs:
165 # Adding a ManyToManyField to this list will instead use a nifty unobtrusive JavaScript “filter” interface
166 # that allows searching within the options. The unselected and selected options appear in two boxes side by side.
167 filter_horizontal = (
168 "century",
169 "notation",
170 "current_editors",
171 "inventoried_by",
172 "full_text_entered_by",
173 "melodies_entered_by",
174 "proofreaders",
175 "other_editors",
176 )
177
178 list_display = (
179 "title",
180 "siglum",
181 "id",
182 )
183
184 list_filter = (
185 "full_source",
186 "segment",
187 "source_status",
188 "published",
189 "century",
190 )
191
192 ordering = ("siglum",)
193
194 form = AdminSourceForm
195
196
197 admin.site.register(Century, CenturyAdmin)
198 admin.site.register(Chant, ChantAdmin)
199 admin.site.register(Feast, FeastAdmin)
200 admin.site.register(Genre, GenreAdmin)
201 admin.site.register(Notation, NotationAdmin)
202 admin.site.register(Office, OfficeAdmin)
203 admin.site.register(Provenance, ProvenanceAdmin)
204 admin.site.register(RismSiglum, RismSiglumAdmin)
205 admin.site.register(Segment, SegmentAdmin)
206 admin.site.register(Sequence, SequenceAdmin)
207 admin.site.register(Source, SourceAdmin)
208
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django/cantusdb_project/main_app/admin.py b/django/cantusdb_project/main_app/admin.py
--- a/django/cantusdb_project/main_app/admin.py
+++ b/django/cantusdb_project/main_app/admin.py
@@ -57,6 +57,12 @@
"cantus_id",
"id",
)
+
+ readonly_fields = (
+ "date_created",
+ "date_updated",
+ )
+
list_filter = (
"genre",
"office",
@@ -161,6 +167,12 @@
"title",
"id",
)
+ readonly_fields = (
+ "number_of_chants",
+ "number_of_melodies",
+ "date_created",
+ "date_updated",
+ )
# from the Django docs:
# Adding a ManyToManyField to this list will instead use a nifty unobtrusive JavaScript “filter” interface
# that allows searching within the options. The unselected and selected options appear in two boxes side by side.
|
{"golden_diff": "diff --git a/django/cantusdb_project/main_app/admin.py b/django/cantusdb_project/main_app/admin.py\n--- a/django/cantusdb_project/main_app/admin.py\n+++ b/django/cantusdb_project/main_app/admin.py\n@@ -57,6 +57,12 @@\n \"cantus_id\",\n \"id\",\n )\n+\n+ readonly_fields = (\n+ \"date_created\",\n+ \"date_updated\",\n+ )\n+\n list_filter = (\n \"genre\",\n \"office\",\n@@ -161,6 +167,12 @@\n \"title\",\n \"id\",\n )\n+ readonly_fields = (\n+ \"number_of_chants\",\n+ \"number_of_melodies\",\n+ \"date_created\",\n+ \"date_updated\",\n+ )\n # from the Django docs:\n # Adding a ManyToManyField to this list will instead use a nifty unobtrusive JavaScript \u201cfilter\u201d interface\n # that allows searching within the options. The unselected and selected options appear in two boxes side by side.\n", "issue": "Hide \"Number of chants\" and \"Number of melodies\" fields in Source admin\nOn the Source detail page in the admin area, we are currently displaying the number of melodies and number of chants for the source.\r\n\r\n\r\nWe only use this information behind the scenes, so we should not allow users to edit this field since they will be automatically updated as chants or melodies are added/removed from the Source.\r\n\r\nEarlier, I found an issue where these fields weren't being updated correctly. I found this because the only place we can see the number of chants and melodies is in the admin area. For this reason and for future situations like this, I think we should make these fields `read_only` instead of hidden altogether.\n", "before_files": [{"content": "from django.contrib import admin\nfrom main_app.models import *\nfrom main_app.forms import (\n AdminCenturyForm,\n AdminChantForm,\n AdminFeastForm,\n AdminGenreForm,\n AdminNotationForm,\n AdminOfficeForm,\n AdminProvenanceForm,\n AdminRismSiglumForm,\n AdminSegmentForm,\n AdminSequenceForm,\n AdminSourceForm,\n)\n\n# these fields should not be editable by all classes\nEXCLUDE = (\n \"created_by\",\n \"last_updated_by\",\n \"json_info\",\n)\n\n\nclass BaseModelAdmin(admin.ModelAdmin):\n exclude = EXCLUDE\n\n # if an object is created in the admin interface, assign the user to the created_by field\n # else if an object is updated in the admin interface, assign the user to the last_updated_by field\n def save_model(self, request, obj, form, change):\n if change:\n obj.last_updated_by = request.user\n else:\n obj.created_by = request.user\n super().save_model(request, obj, form, change)\n\n\nclass CenturyAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminCenturyForm\n\n\nclass ChantAdmin(BaseModelAdmin):\n @admin.display(description=\"Source Siglum\")\n def get_source_siglum(self, obj):\n if obj.source:\n return obj.source.siglum\n\n list_display = (\n \"incipit\",\n \"get_source_siglum\",\n \"genre\",\n )\n search_fields = (\n \"title\",\n \"incipit\",\n \"cantus_id\",\n \"id\",\n )\n list_filter = (\n \"genre\",\n \"office\",\n )\n exclude = EXCLUDE + (\n \"col1\",\n \"col2\",\n \"col3\",\n \"next_chant\",\n \"s_sequence\",\n \"is_last_chant_in_feast\",\n \"visible_status\",\n \"date\",\n )\n form = AdminChantForm\n raw_id_fields = (\n \"source\",\n \"feast\",\n )\n ordering = (\"source__siglum\",)\n\n\nclass FeastAdmin(BaseModelAdmin):\n search_fields = (\n \"name\",\n \"feast_code\",\n )\n list_display = (\n \"name\",\n \"month\",\n \"day\",\n \"feast_code\",\n )\n form = AdminFeastForm\n\n\nclass GenreAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminGenreForm\n\n\nclass 
NotationAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminNotationForm\n\n\nclass OfficeAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminOfficeForm\n\n\nclass ProvenanceAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminProvenanceForm\n\n\nclass RismSiglumAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminRismSiglumForm\n\n\nclass SegmentAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminSegmentForm\n\n\nclass SequenceAdmin(BaseModelAdmin):\n @admin.display(description=\"Source Siglum\")\n def get_source_siglum(self, obj):\n if obj.source:\n return obj.source.siglum\n\n search_fields = (\n \"title\",\n \"incipit\",\n \"cantus_id\",\n \"id\",\n )\n exclude = EXCLUDE + (\n \"c_sequence\",\n \"next_chant\",\n \"is_last_chant_in_feast\",\n \"visible_status\",\n )\n list_display = (\"incipit\", \"get_source_siglum\", \"genre\")\n list_filter = (\n \"genre\",\n \"office\",\n )\n raw_id_fields = (\n \"source\",\n \"feast\",\n )\n ordering = (\"source__siglum\",)\n form = AdminSequenceForm\n\n\nclass SourceAdmin(BaseModelAdmin):\n # These search fields are also available on the user-source inline relationship in the user admin page\n search_fields = (\n \"siglum\",\n \"title\",\n \"id\",\n )\n # from the Django docs:\n # Adding a ManyToManyField to this list will instead use a nifty unobtrusive JavaScript \u201cfilter\u201d interface\n # that allows searching within the options. The unselected and selected options appear in two boxes side by side.\n filter_horizontal = (\n \"century\",\n \"notation\",\n \"current_editors\",\n \"inventoried_by\",\n \"full_text_entered_by\",\n \"melodies_entered_by\",\n \"proofreaders\",\n \"other_editors\",\n )\n\n list_display = (\n \"title\",\n \"siglum\",\n \"id\",\n )\n\n list_filter = (\n \"full_source\",\n \"segment\",\n \"source_status\",\n \"published\",\n \"century\",\n )\n\n ordering = (\"siglum\",)\n\n form = AdminSourceForm\n\n\nadmin.site.register(Century, CenturyAdmin)\nadmin.site.register(Chant, ChantAdmin)\nadmin.site.register(Feast, FeastAdmin)\nadmin.site.register(Genre, GenreAdmin)\nadmin.site.register(Notation, NotationAdmin)\nadmin.site.register(Office, OfficeAdmin)\nadmin.site.register(Provenance, ProvenanceAdmin)\nadmin.site.register(RismSiglum, RismSiglumAdmin)\nadmin.site.register(Segment, SegmentAdmin)\nadmin.site.register(Sequence, SequenceAdmin)\nadmin.site.register(Source, SourceAdmin)\n", "path": "django/cantusdb_project/main_app/admin.py"}], "after_files": [{"content": "from django.contrib import admin\nfrom main_app.models import *\nfrom main_app.forms import (\n AdminCenturyForm,\n AdminChantForm,\n AdminFeastForm,\n AdminGenreForm,\n AdminNotationForm,\n AdminOfficeForm,\n AdminProvenanceForm,\n AdminRismSiglumForm,\n AdminSegmentForm,\n AdminSequenceForm,\n AdminSourceForm,\n)\n\n# these fields should not be editable by all classes\nEXCLUDE = (\n \"created_by\",\n \"last_updated_by\",\n \"json_info\",\n)\n\n\nclass BaseModelAdmin(admin.ModelAdmin):\n exclude = EXCLUDE\n\n # if an object is created in the admin interface, assign the user to the created_by field\n # else if an object is updated in the admin interface, assign the user to the last_updated_by field\n def save_model(self, request, obj, form, change):\n if change:\n obj.last_updated_by = request.user\n else:\n obj.created_by = request.user\n super().save_model(request, obj, form, change)\n\n\nclass CenturyAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = 
AdminCenturyForm\n\n\nclass ChantAdmin(BaseModelAdmin):\n @admin.display(description=\"Source Siglum\")\n def get_source_siglum(self, obj):\n if obj.source:\n return obj.source.siglum\n\n list_display = (\n \"incipit\",\n \"get_source_siglum\",\n \"genre\",\n )\n search_fields = (\n \"title\",\n \"incipit\",\n \"cantus_id\",\n \"id\",\n )\n\n readonly_fields = (\n \"date_created\",\n \"date_updated\",\n )\n\n list_filter = (\n \"genre\",\n \"office\",\n )\n exclude = EXCLUDE + (\n \"col1\",\n \"col2\",\n \"col3\",\n \"next_chant\",\n \"s_sequence\",\n \"is_last_chant_in_feast\",\n \"visible_status\",\n \"date\",\n )\n form = AdminChantForm\n raw_id_fields = (\n \"source\",\n \"feast\",\n )\n ordering = (\"source__siglum\",)\n\n\nclass FeastAdmin(BaseModelAdmin):\n search_fields = (\n \"name\",\n \"feast_code\",\n )\n list_display = (\n \"name\",\n \"month\",\n \"day\",\n \"feast_code\",\n )\n form = AdminFeastForm\n\n\nclass GenreAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminGenreForm\n\n\nclass NotationAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminNotationForm\n\n\nclass OfficeAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminOfficeForm\n\n\nclass ProvenanceAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminProvenanceForm\n\n\nclass RismSiglumAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminRismSiglumForm\n\n\nclass SegmentAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminSegmentForm\n\n\nclass SequenceAdmin(BaseModelAdmin):\n @admin.display(description=\"Source Siglum\")\n def get_source_siglum(self, obj):\n if obj.source:\n return obj.source.siglum\n\n search_fields = (\n \"title\",\n \"incipit\",\n \"cantus_id\",\n \"id\",\n )\n exclude = EXCLUDE + (\n \"c_sequence\",\n \"next_chant\",\n \"is_last_chant_in_feast\",\n \"visible_status\",\n )\n list_display = (\"incipit\", \"get_source_siglum\", \"genre\")\n list_filter = (\n \"genre\",\n \"office\",\n )\n raw_id_fields = (\n \"source\",\n \"feast\",\n )\n ordering = (\"source__siglum\",)\n form = AdminSequenceForm\n\n\nclass SourceAdmin(BaseModelAdmin):\n # These search fields are also available on the user-source inline relationship in the user admin page\n search_fields = (\n \"siglum\",\n \"title\",\n \"id\",\n )\n readonly_fields = (\n \"number_of_chants\",\n \"number_of_melodies\",\n \"date_created\",\n \"date_updated\",\n )\n # from the Django docs:\n # Adding a ManyToManyField to this list will instead use a nifty unobtrusive JavaScript \u201cfilter\u201d interface\n # that allows searching within the options. 
The unselected and selected options appear in two boxes side by side.\n filter_horizontal = (\n \"century\",\n \"notation\",\n \"current_editors\",\n \"inventoried_by\",\n \"full_text_entered_by\",\n \"melodies_entered_by\",\n \"proofreaders\",\n \"other_editors\",\n )\n\n list_display = (\n \"title\",\n \"siglum\",\n \"id\",\n )\n\n list_filter = (\n \"full_source\",\n \"segment\",\n \"source_status\",\n \"published\",\n \"century\",\n )\n\n ordering = (\"siglum\",)\n\n form = AdminSourceForm\n\n\nadmin.site.register(Century, CenturyAdmin)\nadmin.site.register(Chant, ChantAdmin)\nadmin.site.register(Feast, FeastAdmin)\nadmin.site.register(Genre, GenreAdmin)\nadmin.site.register(Notation, NotationAdmin)\nadmin.site.register(Office, OfficeAdmin)\nadmin.site.register(Provenance, ProvenanceAdmin)\nadmin.site.register(RismSiglum, RismSiglumAdmin)\nadmin.site.register(Segment, SegmentAdmin)\nadmin.site.register(Sequence, SequenceAdmin)\nadmin.site.register(Source, SourceAdmin)\n", "path": "django/cantusdb_project/main_app/admin.py"}]}
| 2,175 | 236 |
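The fix above leans on Django's `ModelAdmin.readonly_fields`, which keeps a field visible on the change form while blocking edits. Below is a minimal sketch of that mechanism; it assumes a configured Django project, and the model is a placeholder rather than the real CantusDB `Source` model.

```python
from django.contrib import admin
from django.db import models


class Source(models.Model):
    # Placeholder fields; in CantusDB these counts are updated by application
    # code as chants and melodies are added or removed.
    title = models.CharField(max_length=255)
    number_of_chants = models.IntegerField(default=0)
    number_of_melodies = models.IntegerField(default=0)

    class Meta:
        app_label = "example"


@admin.register(Source)
class SourceAdmin(admin.ModelAdmin):
    # Visible in the admin change form, but not editable by users.
    readonly_fields = ("number_of_chants", "number_of_melodies")
```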
gh_patches_debug_8483
|
rasdani/github-patches
|
git_diff
|
PaddlePaddle__PaddleSpeech-2171
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Switching to English speech synthesis fails with: get_input_ids() got an unexpected keyword argument 'get_tone_ids'
When switching to English speech synthesis, I changed the acoustic model and vocoder under tts_python in the config file /paddlespeech/server/conf/application.yaml: the acoustic model is fastspeech2_ljspeech, the vocoder is pwgan_ljspeech, and lang is set to en. But it then reports the error: get_input_ids() got an unexpected keyword argument 'get_tone_ids'
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `paddlespeech/server/engine/engine_warmup.py`
Content:
```
1 # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import time
15
16 from paddlespeech.cli.log import logger
17 from paddlespeech.server.engine.engine_pool import get_engine_pool
18
19
20 def warm_up(engine_and_type: str, warm_up_time: int=3) -> bool:
21 engine_pool = get_engine_pool()
22
23 if "tts" in engine_and_type:
24 tts_engine = engine_pool['tts']
25 flag_online = False
26 if tts_engine.lang == 'zh':
27 sentence = "您好,欢迎使用语音合成服务。"
28 elif tts_engine.lang == 'en':
29 sentence = "Hello and welcome to the speech synthesis service."
30 else:
31 logger.error("tts engine only support lang: zh or en.")
32 sys.exit(-1)
33
34 if engine_and_type == "tts_python":
35 from paddlespeech.server.engine.tts.python.tts_engine import PaddleTTSConnectionHandler
36 elif engine_and_type == "tts_inference":
37 from paddlespeech.server.engine.tts.paddleinference.tts_engine import PaddleTTSConnectionHandler
38 elif engine_and_type == "tts_online":
39 from paddlespeech.server.engine.tts.online.python.tts_engine import PaddleTTSConnectionHandler
40 flag_online = True
41 elif engine_and_type == "tts_online-onnx":
42 from paddlespeech.server.engine.tts.online.onnx.tts_engine import PaddleTTSConnectionHandler
43 flag_online = True
44 else:
45 logger.error("Please check tte engine type.")
46
47 try:
48 logger.debug("Start to warm up tts engine.")
49 for i in range(warm_up_time):
50 connection_handler = PaddleTTSConnectionHandler(tts_engine)
51 if flag_online:
52 for wav in connection_handler.infer(
53 text=sentence,
54 lang=tts_engine.lang,
55 am=tts_engine.config.am):
56 logger.debug(
57 f"The first response time of the {i} warm up: {connection_handler.first_response_time} s"
58 )
59 break
60
61 else:
62 st = time.time()
63 connection_handler.infer(text=sentence)
64 et = time.time()
65 logger.debug(
66 f"The response time of the {i} warm up: {et - st} s")
67 except Exception as e:
68 logger.error("Failed to warm up on tts engine.")
69 logger.error(e)
70 return False
71
72 else:
73 pass
74
75 return True
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/paddlespeech/server/engine/engine_warmup.py b/paddlespeech/server/engine/engine_warmup.py
--- a/paddlespeech/server/engine/engine_warmup.py
+++ b/paddlespeech/server/engine/engine_warmup.py
@@ -60,7 +60,10 @@
else:
st = time.time()
- connection_handler.infer(text=sentence)
+ connection_handler.infer(
+ text=sentence,
+ lang=tts_engine.lang,
+ am=tts_engine.config.am)
et = time.time()
logger.debug(
f"The response time of the {i} warm up: {et - st} s")
|
{"golden_diff": "diff --git a/paddlespeech/server/engine/engine_warmup.py b/paddlespeech/server/engine/engine_warmup.py\n--- a/paddlespeech/server/engine/engine_warmup.py\n+++ b/paddlespeech/server/engine/engine_warmup.py\n@@ -60,7 +60,10 @@\n \n else:\n st = time.time()\n- connection_handler.infer(text=sentence)\n+ connection_handler.infer(\n+ text=sentence,\n+ lang=tts_engine.lang,\n+ am=tts_engine.config.am)\n et = time.time()\n logger.debug(\n f\"The response time of the {i} warm up: {et - st} s\")\n", "issue": "\u5207\u6362\u82f1\u6587\u8bed\u97f3\u5408\u6210\u62a5\u9519 get_input_ids() got an unexpected keyword argument 'get_tone_ids'\n\u8981\u5207\u6362\u6210\u82f1\u6587\u8bed\u97f3\u5408\u6210\u65f6\uff0c\u66f4\u6539\u4e86/paddlespeech/server/conf/application.yaml\u8fd9\u4e2a\u914d\u7f6e\u6587\u4ef6\u4e2d\u7684tts_python\u91cc\u9762\u7684\u58f0\u5b66\u6a21\u578b\u548c\u58f0\u7801\u5668\uff0c\u58f0\u5b66\u6a21\u578b\u7528\u7684\u662ffastspeech2_ljspeech\uff0c\u58f0\u7801\u5668\u7528\u7684pwgan_ljspeech\uff0c\u5e76\u4e14lang\u6539\u4e3aen\uff0c\u4f46\u662f\u62a5\u9519 get_input_ids() got an unexpected keyword argument 'get_tone_ids'\n", "before_files": [{"content": "# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport time\n\nfrom paddlespeech.cli.log import logger\nfrom paddlespeech.server.engine.engine_pool import get_engine_pool\n\n\ndef warm_up(engine_and_type: str, warm_up_time: int=3) -> bool:\n engine_pool = get_engine_pool()\n\n if \"tts\" in engine_and_type:\n tts_engine = engine_pool['tts']\n flag_online = False\n if tts_engine.lang == 'zh':\n sentence = \"\u60a8\u597d\uff0c\u6b22\u8fce\u4f7f\u7528\u8bed\u97f3\u5408\u6210\u670d\u52a1\u3002\"\n elif tts_engine.lang == 'en':\n sentence = \"Hello and welcome to the speech synthesis service.\"\n else:\n logger.error(\"tts engine only support lang: zh or en.\")\n sys.exit(-1)\n\n if engine_and_type == \"tts_python\":\n from paddlespeech.server.engine.tts.python.tts_engine import PaddleTTSConnectionHandler\n elif engine_and_type == \"tts_inference\":\n from paddlespeech.server.engine.tts.paddleinference.tts_engine import PaddleTTSConnectionHandler\n elif engine_and_type == \"tts_online\":\n from paddlespeech.server.engine.tts.online.python.tts_engine import PaddleTTSConnectionHandler\n flag_online = True\n elif engine_and_type == \"tts_online-onnx\":\n from paddlespeech.server.engine.tts.online.onnx.tts_engine import PaddleTTSConnectionHandler\n flag_online = True\n else:\n logger.error(\"Please check tte engine type.\")\n\n try:\n logger.debug(\"Start to warm up tts engine.\")\n for i in range(warm_up_time):\n connection_handler = PaddleTTSConnectionHandler(tts_engine)\n if flag_online:\n for wav in connection_handler.infer(\n text=sentence,\n lang=tts_engine.lang,\n am=tts_engine.config.am):\n logger.debug(\n f\"The first response time of the {i} warm up: {connection_handler.first_response_time} s\"\n )\n break\n\n else:\n st = time.time()\n 
connection_handler.infer(text=sentence)\n et = time.time()\n logger.debug(\n f\"The response time of the {i} warm up: {et - st} s\")\n except Exception as e:\n logger.error(\"Failed to warm up on tts engine.\")\n logger.error(e)\n return False\n\n else:\n pass\n\n return True\n", "path": "paddlespeech/server/engine/engine_warmup.py"}], "after_files": [{"content": "# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport time\n\nfrom paddlespeech.cli.log import logger\nfrom paddlespeech.server.engine.engine_pool import get_engine_pool\n\n\ndef warm_up(engine_and_type: str, warm_up_time: int=3) -> bool:\n engine_pool = get_engine_pool()\n\n if \"tts\" in engine_and_type:\n tts_engine = engine_pool['tts']\n flag_online = False\n if tts_engine.lang == 'zh':\n sentence = \"\u60a8\u597d\uff0c\u6b22\u8fce\u4f7f\u7528\u8bed\u97f3\u5408\u6210\u670d\u52a1\u3002\"\n elif tts_engine.lang == 'en':\n sentence = \"Hello and welcome to the speech synthesis service.\"\n else:\n logger.error(\"tts engine only support lang: zh or en.\")\n sys.exit(-1)\n\n if engine_and_type == \"tts_python\":\n from paddlespeech.server.engine.tts.python.tts_engine import PaddleTTSConnectionHandler\n elif engine_and_type == \"tts_inference\":\n from paddlespeech.server.engine.tts.paddleinference.tts_engine import PaddleTTSConnectionHandler\n elif engine_and_type == \"tts_online\":\n from paddlespeech.server.engine.tts.online.python.tts_engine import PaddleTTSConnectionHandler\n flag_online = True\n elif engine_and_type == \"tts_online-onnx\":\n from paddlespeech.server.engine.tts.online.onnx.tts_engine import PaddleTTSConnectionHandler\n flag_online = True\n else:\n logger.error(\"Please check tte engine type.\")\n\n try:\n logger.debug(\"Start to warm up tts engine.\")\n for i in range(warm_up_time):\n connection_handler = PaddleTTSConnectionHandler(tts_engine)\n if flag_online:\n for wav in connection_handler.infer(\n text=sentence,\n lang=tts_engine.lang,\n am=tts_engine.config.am):\n logger.debug(\n f\"The first response time of the {i} warm up: {connection_handler.first_response_time} s\"\n )\n break\n\n else:\n st = time.time()\n connection_handler.infer(\n text=sentence,\n lang=tts_engine.lang,\n am=tts_engine.config.am)\n et = time.time()\n logger.debug(\n f\"The response time of the {i} warm up: {et - st} s\")\n except Exception as e:\n logger.error(\"Failed to warm up on tts engine.\")\n logger.error(e)\n return False\n\n else:\n pass\n\n return True\n", "path": "paddlespeech/server/engine/engine_warmup.py"}]}
| 1,158 | 149 |
gh_patches_debug_35491
|
rasdani/github-patches
|
git_diff
|
aws__aws-cli-4874
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Proposal: aws ecr get-login-password
This is a proposal for a new AWS CLI command for ECR
```
$ aws ecr get-login-password
cGFzc3dvcmQ=
```
This command can be used in the following ways:
```
$ aws ecr get-login-password | docker login --username AWS --password-stdin 111111111111.dkr.ecr.us-west-2.amazonaws.com
Login Succeeded
$ docker login --username AWS --password "$(aws ecr get-login-password)" 111111111111.dkr.ecr.us-west-2.amazonaws.com
Login Succeeded
```
This idea has been previously proposed by @theY4Kman https://github.com/aws/aws-cli/issues/2875#issuecomment-433565983 and @kojiromike https://github.com/aws/aws-cli/issues/3687#issue-374397564
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awscli/customizations/ecr.py`
Content:
```
1 # Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 from awscli.customizations.commands import BasicCommand
14 from awscli.customizations.utils import create_client_from_parsed_globals
15
16 from base64 import b64decode
17 import sys
18
19
20 def register_ecr_commands(cli):
21 cli.register('building-command-table.ecr', _inject_get_login)
22
23
24 def _inject_get_login(command_table, session, **kwargs):
25 command_table['get-login'] = ECRLogin(session)
26
27
28 class ECRLogin(BasicCommand):
29 """Log in with docker login"""
30 NAME = 'get-login'
31
32 DESCRIPTION = BasicCommand.FROM_FILE('ecr/get-login_description.rst')
33
34 ARG_TABLE = [
35 {
36 'name': 'registry-ids',
37 'help_text': 'A list of AWS account IDs that correspond to the '
38 'Amazon ECR registries that you want to log in to.',
39 'required': False,
40 'nargs': '+'
41 },
42 {
43 'name': 'include-email',
44 'action': 'store_true',
45 'group_name': 'include-email',
46 'dest': 'include_email',
47 'default': True,
48 'required': False,
49 'help_text': (
50 "Specify if the '-e' flag should be included in the "
51 "'docker login' command. The '-e' option has been deprecated "
52 "and is removed in docker version 17.06 and later. You must "
53 "specify --no-include-email if you're using docker version "
54 "17.06 or later. The default behavior is to include the "
55 "'-e' flag in the 'docker login' output."),
56 },
57 {
58 'name': 'no-include-email',
59 'help_text': 'Include email arg',
60 'action': 'store_false',
61 'default': True,
62 'group_name': 'include-email',
63 'dest': 'include_email',
64 'required': False,
65 },
66 ]
67
68 def _run_main(self, parsed_args, parsed_globals):
69 ecr_client = create_client_from_parsed_globals(
70 self._session, 'ecr', parsed_globals)
71 if not parsed_args.registry_ids:
72 result = ecr_client.get_authorization_token()
73 else:
74 result = ecr_client.get_authorization_token(
75 registryIds=parsed_args.registry_ids)
76 for auth in result['authorizationData']:
77 auth_token = b64decode(auth['authorizationToken']).decode()
78 username, password = auth_token.split(':')
79 command = ['docker', 'login', '-u', username, '-p', password]
80 if parsed_args.include_email:
81 command.extend(['-e', 'none'])
82 command.append(auth['proxyEndpoint'])
83 sys.stdout.write(' '.join(command))
84 sys.stdout.write('\n')
85 return 0
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/awscli/customizations/ecr.py b/awscli/customizations/ecr.py
--- a/awscli/customizations/ecr.py
+++ b/awscli/customizations/ecr.py
@@ -18,15 +18,16 @@
def register_ecr_commands(cli):
- cli.register('building-command-table.ecr', _inject_get_login)
+ cli.register('building-command-table.ecr', _inject_commands)
-def _inject_get_login(command_table, session, **kwargs):
+def _inject_commands(command_table, session, **kwargs):
command_table['get-login'] = ECRLogin(session)
+ command_table['get-login-password'] = ECRGetLoginPassword(session)
class ECRLogin(BasicCommand):
- """Log in with docker login"""
+ """Log in with 'docker login'"""
NAME = 'get-login'
DESCRIPTION = BasicCommand.FROM_FILE('ecr/get-login_description.rst')
@@ -49,8 +50,8 @@
'help_text': (
"Specify if the '-e' flag should be included in the "
"'docker login' command. The '-e' option has been deprecated "
- "and is removed in docker version 17.06 and later. You must "
- "specify --no-include-email if you're using docker version "
+ "and is removed in Docker version 17.06 and later. You must "
+ "specify --no-include-email if you're using Docker version "
"17.06 or later. The default behavior is to include the "
"'-e' flag in the 'docker login' output."),
},
@@ -83,3 +84,24 @@
sys.stdout.write(' '.join(command))
sys.stdout.write('\n')
return 0
+
+
+class ECRGetLoginPassword(BasicCommand):
+ """Get a password to be used with container clients such as Docker"""
+ NAME = 'get-login-password'
+
+ DESCRIPTION = BasicCommand.FROM_FILE(
+ 'ecr/get-login-password_description.rst')
+
+ def _run_main(self, parsed_args, parsed_globals):
+ ecr_client = create_client_from_parsed_globals(
+ self._session,
+ 'ecr',
+ parsed_globals)
+ result = ecr_client.get_authorization_token()
+ auth = result['authorizationData'][0]
+ auth_token = b64decode(auth['authorizationToken']).decode()
+ _, password = auth_token.split(':')
+ sys.stdout.write(password)
+ sys.stdout.write('\n')
+ return 0
|
{"golden_diff": "diff --git a/awscli/customizations/ecr.py b/awscli/customizations/ecr.py\n--- a/awscli/customizations/ecr.py\n+++ b/awscli/customizations/ecr.py\n@@ -18,15 +18,16 @@\n \n \n def register_ecr_commands(cli):\n- cli.register('building-command-table.ecr', _inject_get_login)\n+ cli.register('building-command-table.ecr', _inject_commands)\n \n \n-def _inject_get_login(command_table, session, **kwargs):\n+def _inject_commands(command_table, session, **kwargs):\n command_table['get-login'] = ECRLogin(session)\n+ command_table['get-login-password'] = ECRGetLoginPassword(session)\n \n \n class ECRLogin(BasicCommand):\n- \"\"\"Log in with docker login\"\"\"\n+ \"\"\"Log in with 'docker login'\"\"\"\n NAME = 'get-login'\n \n DESCRIPTION = BasicCommand.FROM_FILE('ecr/get-login_description.rst')\n@@ -49,8 +50,8 @@\n 'help_text': (\n \"Specify if the '-e' flag should be included in the \"\n \"'docker login' command. The '-e' option has been deprecated \"\n- \"and is removed in docker version 17.06 and later. You must \"\n- \"specify --no-include-email if you're using docker version \"\n+ \"and is removed in Docker version 17.06 and later. You must \"\n+ \"specify --no-include-email if you're using Docker version \"\n \"17.06 or later. The default behavior is to include the \"\n \"'-e' flag in the 'docker login' output.\"),\n },\n@@ -83,3 +84,24 @@\n sys.stdout.write(' '.join(command))\n sys.stdout.write('\\n')\n return 0\n+\n+\n+class ECRGetLoginPassword(BasicCommand):\n+ \"\"\"Get a password to be used with container clients such as Docker\"\"\"\n+ NAME = 'get-login-password'\n+\n+ DESCRIPTION = BasicCommand.FROM_FILE(\n+ 'ecr/get-login-password_description.rst')\n+\n+ def _run_main(self, parsed_args, parsed_globals):\n+ ecr_client = create_client_from_parsed_globals(\n+ self._session,\n+ 'ecr',\n+ parsed_globals)\n+ result = ecr_client.get_authorization_token()\n+ auth = result['authorizationData'][0]\n+ auth_token = b64decode(auth['authorizationToken']).decode()\n+ _, password = auth_token.split(':')\n+ sys.stdout.write(password)\n+ sys.stdout.write('\\n')\n+ return 0\n", "issue": "Proposal: aws ecr get-login-password\nThis is a proposal for a new AWS CLI command for ECR\r\n\r\n```\r\n$ aws ecr get-login-password\r\ncGFzc3dvcmQ=\r\n```\r\n\r\nThis command can be used in the following ways:\r\n\r\n```\r\n$ aws ecr get-login-password | docker login --username AWS --password-stdin 111111111111.dkr.ecr.us-west-2.amazonaws.com\r\nLogin Succeeded\r\n\r\n$ docker login --username AWS --password \"$(aws ecr get-login-password)\" 111111111111.dkr.ecr.us-west-2.amazonaws.com\r\nLogin Succeeded\r\n```\r\n\r\nThis idea has been previously proposed by @theY4Kman https://github.com/aws/aws-cli/issues/2875#issuecomment-433565983 and @kojiromike https://github.com/aws/aws-cli/issues/3687#issue-374397564\n", "before_files": [{"content": "# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific\n# language governing permissions and limitations under the License.\nfrom awscli.customizations.commands import BasicCommand\nfrom awscli.customizations.utils import create_client_from_parsed_globals\n\nfrom base64 import b64decode\nimport sys\n\n\ndef register_ecr_commands(cli):\n cli.register('building-command-table.ecr', _inject_get_login)\n\n\ndef _inject_get_login(command_table, session, **kwargs):\n command_table['get-login'] = ECRLogin(session)\n\n\nclass ECRLogin(BasicCommand):\n \"\"\"Log in with docker login\"\"\"\n NAME = 'get-login'\n\n DESCRIPTION = BasicCommand.FROM_FILE('ecr/get-login_description.rst')\n\n ARG_TABLE = [\n {\n 'name': 'registry-ids',\n 'help_text': 'A list of AWS account IDs that correspond to the '\n 'Amazon ECR registries that you want to log in to.',\n 'required': False,\n 'nargs': '+'\n },\n {\n 'name': 'include-email',\n 'action': 'store_true',\n 'group_name': 'include-email',\n 'dest': 'include_email',\n 'default': True,\n 'required': False,\n 'help_text': (\n \"Specify if the '-e' flag should be included in the \"\n \"'docker login' command. The '-e' option has been deprecated \"\n \"and is removed in docker version 17.06 and later. You must \"\n \"specify --no-include-email if you're using docker version \"\n \"17.06 or later. The default behavior is to include the \"\n \"'-e' flag in the 'docker login' output.\"),\n },\n {\n 'name': 'no-include-email',\n 'help_text': 'Include email arg',\n 'action': 'store_false',\n 'default': True,\n 'group_name': 'include-email',\n 'dest': 'include_email',\n 'required': False,\n },\n ]\n\n def _run_main(self, parsed_args, parsed_globals):\n ecr_client = create_client_from_parsed_globals(\n self._session, 'ecr', parsed_globals)\n if not parsed_args.registry_ids:\n result = ecr_client.get_authorization_token()\n else:\n result = ecr_client.get_authorization_token(\n registryIds=parsed_args.registry_ids)\n for auth in result['authorizationData']:\n auth_token = b64decode(auth['authorizationToken']).decode()\n username, password = auth_token.split(':')\n command = ['docker', 'login', '-u', username, '-p', password]\n if parsed_args.include_email:\n command.extend(['-e', 'none'])\n command.append(auth['proxyEndpoint'])\n sys.stdout.write(' '.join(command))\n sys.stdout.write('\\n')\n return 0\n", "path": "awscli/customizations/ecr.py"}], "after_files": [{"content": "# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific\n# language governing permissions and limitations under the License.\nfrom awscli.customizations.commands import BasicCommand\nfrom awscli.customizations.utils import create_client_from_parsed_globals\n\nfrom base64 import b64decode\nimport sys\n\n\ndef register_ecr_commands(cli):\n cli.register('building-command-table.ecr', _inject_commands)\n\n\ndef _inject_commands(command_table, session, **kwargs):\n command_table['get-login'] = ECRLogin(session)\n command_table['get-login-password'] = ECRGetLoginPassword(session)\n\n\nclass ECRLogin(BasicCommand):\n \"\"\"Log in with 'docker login'\"\"\"\n NAME = 'get-login'\n\n DESCRIPTION = BasicCommand.FROM_FILE('ecr/get-login_description.rst')\n\n ARG_TABLE = [\n {\n 'name': 'registry-ids',\n 'help_text': 'A list of AWS account IDs that correspond to the '\n 'Amazon ECR registries that you want to log in to.',\n 'required': False,\n 'nargs': '+'\n },\n {\n 'name': 'include-email',\n 'action': 'store_true',\n 'group_name': 'include-email',\n 'dest': 'include_email',\n 'default': True,\n 'required': False,\n 'help_text': (\n \"Specify if the '-e' flag should be included in the \"\n \"'docker login' command. The '-e' option has been deprecated \"\n \"and is removed in Docker version 17.06 and later. You must \"\n \"specify --no-include-email if you're using Docker version \"\n \"17.06 or later. The default behavior is to include the \"\n \"'-e' flag in the 'docker login' output.\"),\n },\n {\n 'name': 'no-include-email',\n 'help_text': 'Include email arg',\n 'action': 'store_false',\n 'default': True,\n 'group_name': 'include-email',\n 'dest': 'include_email',\n 'required': False,\n },\n ]\n\n def _run_main(self, parsed_args, parsed_globals):\n ecr_client = create_client_from_parsed_globals(\n self._session, 'ecr', parsed_globals)\n if not parsed_args.registry_ids:\n result = ecr_client.get_authorization_token()\n else:\n result = ecr_client.get_authorization_token(\n registryIds=parsed_args.registry_ids)\n for auth in result['authorizationData']:\n auth_token = b64decode(auth['authorizationToken']).decode()\n username, password = auth_token.split(':')\n command = ['docker', 'login', '-u', username, '-p', password]\n if parsed_args.include_email:\n command.extend(['-e', 'none'])\n command.append(auth['proxyEndpoint'])\n sys.stdout.write(' '.join(command))\n sys.stdout.write('\\n')\n return 0\n\n\nclass ECRGetLoginPassword(BasicCommand):\n \"\"\"Get a password to be used with container clients such as Docker\"\"\"\n NAME = 'get-login-password'\n\n DESCRIPTION = BasicCommand.FROM_FILE(\n 'ecr/get-login-password_description.rst')\n\n def _run_main(self, parsed_args, parsed_globals):\n ecr_client = create_client_from_parsed_globals(\n self._session,\n 'ecr',\n parsed_globals)\n result = ecr_client.get_authorization_token()\n auth = result['authorizationData'][0]\n auth_token = b64decode(auth['authorizationToken']).decode()\n _, password = auth_token.split(':')\n sys.stdout.write(password)\n sys.stdout.write('\\n')\n return 0\n", "path": "awscli/customizations/ecr.py"}]}
| 1,372 | 576 |
gh_patches_debug_24070
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-1727
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pre-commit fails for git >=2.25 if repo is on a Windows subst drive
Cross reference for another issue with same apparent root cause: https://github.com/microsoft/vscode/issues/100274#issuecomment-646499795
Issue observed with pre-commit==2.7.1 and git 2.27.
Issue resolved by downgrading git to 2.21 (I only have access to certain versions on my work machine).
Steps to recreate for pre-commit (some taken from the above cross-reference):
- Install git >= 2.25 on Windows
- Create a subst drive (`mkdir C:\subst_dir && subst Z: C:\subst_dir`)
- Create a git repo in there (`mkdir Z:\repo && cd /d Z:\repo && git init`)
- Add some python code, configure pre-commit, and run pre-commit.
Failure observed: `An unexpected error has occurred: ValueError: path is on mount 'Z:', start on mount 'C:'`
Diagnosis - it appears that the use of `git rev-parse --show-toplevel` in `pre_commit.main.get_root()` is suffering the same issue as seen in cross-referenced ticket; git will "see through" the subst command and rather than return a path on the subst-defined Z: drive, it will return the path from the C: drive. With this, after `pre_commit.main._adjust_args_and_chdir()` calls `pre_commit.main.get_root()` and does a chdir to the returned location, the following call to `os.path.relpath(args.config)` then fails with the ValueError as above, because it sees the path to the config file being on `Z:` but the current location being on `C:`.
Afraid I don't have a suggested resolution but wanted to flag this up. I'm not too familiar with Windows systems and I'm a long way from Admin access on my work machine so opportunities for testing are limited; this was discovered as my scratch space for repos is a subst drive.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/git.py`
Content:
```
1 import logging
2 import os.path
3 import sys
4 from typing import Dict
5 from typing import List
6 from typing import MutableMapping
7 from typing import Optional
8 from typing import Set
9
10 from pre_commit.errors import FatalError
11 from pre_commit.util import CalledProcessError
12 from pre_commit.util import cmd_output
13 from pre_commit.util import cmd_output_b
14
15
16 logger = logging.getLogger(__name__)
17
18
19 def zsplit(s: str) -> List[str]:
20 s = s.strip('\0')
21 if s:
22 return s.split('\0')
23 else:
24 return []
25
26
27 def no_git_env(
28 _env: Optional[MutableMapping[str, str]] = None,
29 ) -> Dict[str, str]:
30 # Too many bugs dealing with environment variables and GIT:
31 # https://github.com/pre-commit/pre-commit/issues/300
32 # In git 2.6.3 (maybe others), git exports GIT_WORK_TREE while running
33 # pre-commit hooks
34 # In git 1.9.1 (maybe others), git exports GIT_DIR and GIT_INDEX_FILE
35 # while running pre-commit hooks in submodules.
36 # GIT_DIR: Causes git clone to clone wrong thing
37 # GIT_INDEX_FILE: Causes 'error invalid object ...' during commit
38 _env = _env if _env is not None else os.environ
39 return {
40 k: v for k, v in _env.items()
41 if not k.startswith('GIT_') or
42 k in {
43 'GIT_EXEC_PATH', 'GIT_SSH', 'GIT_SSH_COMMAND', 'GIT_SSL_CAINFO',
44 'GIT_SSL_NO_VERIFY',
45 }
46 }
47
48
49 def get_root() -> str:
50 try:
51 root = cmd_output('git', 'rev-parse', '--show-toplevel')[1].strip()
52 except CalledProcessError:
53 raise FatalError(
54 'git failed. Is it installed, and are you in a Git repository '
55 'directory?',
56 )
57 else:
58 if root == '': # pragma: no cover (old git)
59 raise FatalError(
60 'git toplevel unexpectedly empty! make sure you are not '
61 'inside the `.git` directory of your repository.',
62 )
63 else:
64 return root
65
66
67 def get_git_dir(git_root: str = '.') -> str:
68 opts = ('--git-common-dir', '--git-dir')
69 _, out, _ = cmd_output('git', 'rev-parse', *opts, cwd=git_root)
70 for line, opt in zip(out.splitlines(), opts):
71 if line != opt: # pragma: no branch (git < 2.5)
72 return os.path.normpath(os.path.join(git_root, line))
73 else:
74 raise AssertionError('unreachable: no git dir')
75
76
77 def get_remote_url(git_root: str) -> str:
78 _, out, _ = cmd_output('git', 'config', 'remote.origin.url', cwd=git_root)
79 return out.strip()
80
81
82 def is_in_merge_conflict() -> bool:
83 git_dir = get_git_dir('.')
84 return (
85 os.path.exists(os.path.join(git_dir, 'MERGE_MSG')) and
86 os.path.exists(os.path.join(git_dir, 'MERGE_HEAD'))
87 )
88
89
90 def parse_merge_msg_for_conflicts(merge_msg: bytes) -> List[str]:
91 # Conflicted files start with tabs
92 return [
93 line.lstrip(b'#').strip().decode()
94 for line in merge_msg.splitlines()
95 # '#\t' for git 2.4.1
96 if line.startswith((b'\t', b'#\t'))
97 ]
98
99
100 def get_conflicted_files() -> Set[str]:
101 logger.info('Checking merge-conflict files only.')
102 # Need to get the conflicted files from the MERGE_MSG because they could
103 # have resolved the conflict by choosing one side or the other
104 with open(os.path.join(get_git_dir('.'), 'MERGE_MSG'), 'rb') as f:
105 merge_msg = f.read()
106 merge_conflict_filenames = parse_merge_msg_for_conflicts(merge_msg)
107
108 # This will get the rest of the changes made after the merge.
109 # If they resolved the merge conflict by choosing a mesh of both sides
110 # this will also include the conflicted files
111 tree_hash = cmd_output('git', 'write-tree')[1].strip()
112 merge_diff_filenames = zsplit(
113 cmd_output(
114 'git', 'diff', '--name-only', '--no-ext-diff', '-z',
115 '-m', tree_hash, 'HEAD', 'MERGE_HEAD',
116 )[1],
117 )
118 return set(merge_conflict_filenames) | set(merge_diff_filenames)
119
120
121 def get_staged_files(cwd: Optional[str] = None) -> List[str]:
122 return zsplit(
123 cmd_output(
124 'git', 'diff', '--staged', '--name-only', '--no-ext-diff', '-z',
125 # Everything except for D
126 '--diff-filter=ACMRTUXB',
127 cwd=cwd,
128 )[1],
129 )
130
131
132 def intent_to_add_files() -> List[str]:
133 _, stdout, _ = cmd_output(
134 'git', 'status', '--ignore-submodules', '--porcelain', '-z',
135 )
136 parts = list(reversed(zsplit(stdout)))
137 intent_to_add = []
138 while parts:
139 line = parts.pop()
140 status, filename = line[:3], line[3:]
141 if status[0] in {'C', 'R'}: # renames / moves have an additional arg
142 parts.pop()
143 if status[1] == 'A':
144 intent_to_add.append(filename)
145 return intent_to_add
146
147
148 def get_all_files() -> List[str]:
149 return zsplit(cmd_output('git', 'ls-files', '-z')[1])
150
151
152 def get_changed_files(old: str, new: str) -> List[str]:
153 return zsplit(
154 cmd_output(
155 'git', 'diff', '--name-only', '--no-ext-diff', '-z',
156 f'{old}...{new}',
157 )[1],
158 )
159
160
161 def head_rev(remote: str) -> str:
162 _, out, _ = cmd_output('git', 'ls-remote', '--exit-code', remote, 'HEAD')
163 return out.split()[0]
164
165
166 def has_diff(*args: str, repo: str = '.') -> bool:
167 cmd = ('git', 'diff', '--quiet', '--no-ext-diff', *args)
168 return cmd_output_b(*cmd, cwd=repo, retcode=None)[0] == 1
169
170
171 def has_core_hookpaths_set() -> bool:
172 _, out, _ = cmd_output_b('git', 'config', 'core.hooksPath', retcode=None)
173 return bool(out.strip())
174
175
176 def init_repo(path: str, remote: str) -> None:
177 if os.path.isdir(remote):
178 remote = os.path.abspath(remote)
179
180 env = no_git_env()
181 # avoid the user's template so that hooks do not recurse
182 cmd_output_b('git', 'init', '--template=', path, env=env)
183 cmd_output_b('git', 'remote', 'add', 'origin', remote, cwd=path, env=env)
184
185
186 def commit(repo: str = '.') -> None:
187 env = no_git_env()
188 name, email = 'pre-commit', '[email protected]'
189 env['GIT_AUTHOR_NAME'] = env['GIT_COMMITTER_NAME'] = name
190 env['GIT_AUTHOR_EMAIL'] = env['GIT_COMMITTER_EMAIL'] = email
191 cmd = ('git', 'commit', '--no-edit', '--no-gpg-sign', '-n', '-minit')
192 cmd_output_b(*cmd, cwd=repo, env=env)
193
194
195 def git_path(name: str, repo: str = '.') -> str:
196 _, out, _ = cmd_output('git', 'rev-parse', '--git-path', name, cwd=repo)
197 return os.path.join(repo, out.strip())
198
199
200 def check_for_cygwin_mismatch() -> None:
201 """See https://github.com/pre-commit/pre-commit/issues/354"""
202 if sys.platform in ('cygwin', 'win32'): # pragma: no cover (windows)
203 is_cygwin_python = sys.platform == 'cygwin'
204 try:
205 toplevel = get_root()
206 except FatalError: # skip the check if we're not in a git repo
207 return
208 is_cygwin_git = toplevel.startswith('/')
209
210 if is_cygwin_python ^ is_cygwin_git:
211 exe_type = {True: '(cygwin)', False: '(windows)'}
212 logger.warn(
213 f'pre-commit has detected a mix of cygwin python / git\n'
214 f'This combination is not supported, it is likely you will '
215 f'receive an error later in the program.\n'
216 f'Make sure to use cygwin git+python while using cygwin\n'
217 f'These can be installed through the cygwin installer.\n'
218 f' - python {exe_type[is_cygwin_python]}\n'
219 f' - git {exe_type[is_cygwin_git]}\n',
220 )
221
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/git.py b/pre_commit/git.py
--- a/pre_commit/git.py
+++ b/pre_commit/git.py
@@ -47,21 +47,26 @@
def get_root() -> str:
+ # Git 2.25 introduced a change to "rev-parse --show-toplevel" that exposed
+ # underlying volumes for Windows drives mapped with SUBST. We use
+ # "rev-parse --show-cdup" to get the appropriate path, but must perform
+ # an extra check to see if we are in the .git directory.
try:
- root = cmd_output('git', 'rev-parse', '--show-toplevel')[1].strip()
+ root = os.path.realpath(
+ cmd_output('git', 'rev-parse', '--show-cdup')[1].strip(),
+ )
+ git_dir = os.path.realpath(get_git_dir())
except CalledProcessError:
raise FatalError(
'git failed. Is it installed, and are you in a Git repository '
'directory?',
)
- else:
- if root == '': # pragma: no cover (old git)
- raise FatalError(
- 'git toplevel unexpectedly empty! make sure you are not '
- 'inside the `.git` directory of your repository.',
- )
- else:
- return root
+ if os.path.commonpath((root, git_dir)) == git_dir:
+ raise FatalError(
+ 'git toplevel unexpectedly empty! make sure you are not '
+ 'inside the `.git` directory of your repository.',
+ )
+ return root
def get_git_dir(git_root: str = '.') -> str:
|
{"golden_diff": "diff --git a/pre_commit/git.py b/pre_commit/git.py\n--- a/pre_commit/git.py\n+++ b/pre_commit/git.py\n@@ -47,21 +47,26 @@\n \n \n def get_root() -> str:\n+ # Git 2.25 introduced a change to \"rev-parse --show-toplevel\" that exposed\n+ # underlying volumes for Windows drives mapped with SUBST. We use\n+ # \"rev-parse --show-cdup\" to get the appropriate path, but must perform\n+ # an extra check to see if we are in the .git directory.\n try:\n- root = cmd_output('git', 'rev-parse', '--show-toplevel')[1].strip()\n+ root = os.path.realpath(\n+ cmd_output('git', 'rev-parse', '--show-cdup')[1].strip(),\n+ )\n+ git_dir = os.path.realpath(get_git_dir())\n except CalledProcessError:\n raise FatalError(\n 'git failed. Is it installed, and are you in a Git repository '\n 'directory?',\n )\n- else:\n- if root == '': # pragma: no cover (old git)\n- raise FatalError(\n- 'git toplevel unexpectedly empty! make sure you are not '\n- 'inside the `.git` directory of your repository.',\n- )\n- else:\n- return root\n+ if os.path.commonpath((root, git_dir)) == git_dir:\n+ raise FatalError(\n+ 'git toplevel unexpectedly empty! make sure you are not '\n+ 'inside the `.git` directory of your repository.',\n+ )\n+ return root\n \n \n def get_git_dir(git_root: str = '.') -> str:\n", "issue": "Pre-commit fails for git >=2.25 if repo is on a Windows subst drive\nCross reference for another issue with same apparent root cause: https://github.com/microsoft/vscode/issues/100274#issuecomment-646499795\r\n\r\nIssue observed with pre-commit==2.7.1 and git 2.27.\r\nIssue resolved with downgrading git to 2.21 (I only have access to certain versions on my work machine).\r\n\r\nSteps to recreate for pre-commit (some taken from the above cross-reference):\r\n\r\n- Install git >= 2.25 on Windows\r\n\r\n- Create a subst drive (`mkdir C:\\subst_dir && subst Z: C:\\subst_dir`)\r\n\r\n- Create a git repo in there (`mkdir Z:\\repo && cd /d Z:\\repo && git init`)\r\n\r\n- Add some python code, configure pre-commit, and run pre-commit.\r\n\r\nFailure observed: `An unexpected error has occurred: ValueError: path is on mount 'Z:', start on mount 'C:'`\r\n\r\nDiagnosis - it appears that the use of `git rev-parse --show-toplevel` in `pre_commit.main.get_root()` is suffering the same issue as seen in cross-referenced ticket; git will \"see through\" the subst command and rather than return a path on the subst-defined Z: drive, it will return the path from the C: drive. With this, after `pre_commit.main._adjust_args_and_chdir()` calls `pre_commit.main.get_root()` and does a chdir to the returned location, the following call to `os.path.relpath(args.config)` then fails with the ValueError as above, because it sees the path to the config file being on `Z:` but the current location being on `C:`.\r\n\r\nAfraid I don't have a suggested resolution but wanted to flag this up. 
I'm not too familiar with Windows systems and I'm a long way from Admin access on my work machine so opportunities for testing are limited; this was discovered as my scratch space for repos is a subst drive.\r\n\n", "before_files": [{"content": "import logging\nimport os.path\nimport sys\nfrom typing import Dict\nfrom typing import List\nfrom typing import MutableMapping\nfrom typing import Optional\nfrom typing import Set\n\nfrom pre_commit.errors import FatalError\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef zsplit(s: str) -> List[str]:\n s = s.strip('\\0')\n if s:\n return s.split('\\0')\n else:\n return []\n\n\ndef no_git_env(\n _env: Optional[MutableMapping[str, str]] = None,\n) -> Dict[str, str]:\n # Too many bugs dealing with environment variables and GIT:\n # https://github.com/pre-commit/pre-commit/issues/300\n # In git 2.6.3 (maybe others), git exports GIT_WORK_TREE while running\n # pre-commit hooks\n # In git 1.9.1 (maybe others), git exports GIT_DIR and GIT_INDEX_FILE\n # while running pre-commit hooks in submodules.\n # GIT_DIR: Causes git clone to clone wrong thing\n # GIT_INDEX_FILE: Causes 'error invalid object ...' during commit\n _env = _env if _env is not None else os.environ\n return {\n k: v for k, v in _env.items()\n if not k.startswith('GIT_') or\n k in {\n 'GIT_EXEC_PATH', 'GIT_SSH', 'GIT_SSH_COMMAND', 'GIT_SSL_CAINFO',\n 'GIT_SSL_NO_VERIFY',\n }\n }\n\n\ndef get_root() -> str:\n try:\n root = cmd_output('git', 'rev-parse', '--show-toplevel')[1].strip()\n except CalledProcessError:\n raise FatalError(\n 'git failed. Is it installed, and are you in a Git repository '\n 'directory?',\n )\n else:\n if root == '': # pragma: no cover (old git)\n raise FatalError(\n 'git toplevel unexpectedly empty! 
make sure you are not '\n 'inside the `.git` directory of your repository.',\n )\n else:\n return root\n\n\ndef get_git_dir(git_root: str = '.') -> str:\n opts = ('--git-common-dir', '--git-dir')\n _, out, _ = cmd_output('git', 'rev-parse', *opts, cwd=git_root)\n for line, opt in zip(out.splitlines(), opts):\n if line != opt: # pragma: no branch (git < 2.5)\n return os.path.normpath(os.path.join(git_root, line))\n else:\n raise AssertionError('unreachable: no git dir')\n\n\ndef get_remote_url(git_root: str) -> str:\n _, out, _ = cmd_output('git', 'config', 'remote.origin.url', cwd=git_root)\n return out.strip()\n\n\ndef is_in_merge_conflict() -> bool:\n git_dir = get_git_dir('.')\n return (\n os.path.exists(os.path.join(git_dir, 'MERGE_MSG')) and\n os.path.exists(os.path.join(git_dir, 'MERGE_HEAD'))\n )\n\n\ndef parse_merge_msg_for_conflicts(merge_msg: bytes) -> List[str]:\n # Conflicted files start with tabs\n return [\n line.lstrip(b'#').strip().decode()\n for line in merge_msg.splitlines()\n # '#\\t' for git 2.4.1\n if line.startswith((b'\\t', b'#\\t'))\n ]\n\n\ndef get_conflicted_files() -> Set[str]:\n logger.info('Checking merge-conflict files only.')\n # Need to get the conflicted files from the MERGE_MSG because they could\n # have resolved the conflict by choosing one side or the other\n with open(os.path.join(get_git_dir('.'), 'MERGE_MSG'), 'rb') as f:\n merge_msg = f.read()\n merge_conflict_filenames = parse_merge_msg_for_conflicts(merge_msg)\n\n # This will get the rest of the changes made after the merge.\n # If they resolved the merge conflict by choosing a mesh of both sides\n # this will also include the conflicted files\n tree_hash = cmd_output('git', 'write-tree')[1].strip()\n merge_diff_filenames = zsplit(\n cmd_output(\n 'git', 'diff', '--name-only', '--no-ext-diff', '-z',\n '-m', tree_hash, 'HEAD', 'MERGE_HEAD',\n )[1],\n )\n return set(merge_conflict_filenames) | set(merge_diff_filenames)\n\n\ndef get_staged_files(cwd: Optional[str] = None) -> List[str]:\n return zsplit(\n cmd_output(\n 'git', 'diff', '--staged', '--name-only', '--no-ext-diff', '-z',\n # Everything except for D\n '--diff-filter=ACMRTUXB',\n cwd=cwd,\n )[1],\n )\n\n\ndef intent_to_add_files() -> List[str]:\n _, stdout, _ = cmd_output(\n 'git', 'status', '--ignore-submodules', '--porcelain', '-z',\n )\n parts = list(reversed(zsplit(stdout)))\n intent_to_add = []\n while parts:\n line = parts.pop()\n status, filename = line[:3], line[3:]\n if status[0] in {'C', 'R'}: # renames / moves have an additional arg\n parts.pop()\n if status[1] == 'A':\n intent_to_add.append(filename)\n return intent_to_add\n\n\ndef get_all_files() -> List[str]:\n return zsplit(cmd_output('git', 'ls-files', '-z')[1])\n\n\ndef get_changed_files(old: str, new: str) -> List[str]:\n return zsplit(\n cmd_output(\n 'git', 'diff', '--name-only', '--no-ext-diff', '-z',\n f'{old}...{new}',\n )[1],\n )\n\n\ndef head_rev(remote: str) -> str:\n _, out, _ = cmd_output('git', 'ls-remote', '--exit-code', remote, 'HEAD')\n return out.split()[0]\n\n\ndef has_diff(*args: str, repo: str = '.') -> bool:\n cmd = ('git', 'diff', '--quiet', '--no-ext-diff', *args)\n return cmd_output_b(*cmd, cwd=repo, retcode=None)[0] == 1\n\n\ndef has_core_hookpaths_set() -> bool:\n _, out, _ = cmd_output_b('git', 'config', 'core.hooksPath', retcode=None)\n return bool(out.strip())\n\n\ndef init_repo(path: str, remote: str) -> None:\n if os.path.isdir(remote):\n remote = os.path.abspath(remote)\n\n env = no_git_env()\n # avoid the user's template so that hooks do not 
recurse\n cmd_output_b('git', 'init', '--template=', path, env=env)\n cmd_output_b('git', 'remote', 'add', 'origin', remote, cwd=path, env=env)\n\n\ndef commit(repo: str = '.') -> None:\n env = no_git_env()\n name, email = 'pre-commit', '[email protected]'\n env['GIT_AUTHOR_NAME'] = env['GIT_COMMITTER_NAME'] = name\n env['GIT_AUTHOR_EMAIL'] = env['GIT_COMMITTER_EMAIL'] = email\n cmd = ('git', 'commit', '--no-edit', '--no-gpg-sign', '-n', '-minit')\n cmd_output_b(*cmd, cwd=repo, env=env)\n\n\ndef git_path(name: str, repo: str = '.') -> str:\n _, out, _ = cmd_output('git', 'rev-parse', '--git-path', name, cwd=repo)\n return os.path.join(repo, out.strip())\n\n\ndef check_for_cygwin_mismatch() -> None:\n \"\"\"See https://github.com/pre-commit/pre-commit/issues/354\"\"\"\n if sys.platform in ('cygwin', 'win32'): # pragma: no cover (windows)\n is_cygwin_python = sys.platform == 'cygwin'\n try:\n toplevel = get_root()\n except FatalError: # skip the check if we're not in a git repo\n return\n is_cygwin_git = toplevel.startswith('/')\n\n if is_cygwin_python ^ is_cygwin_git:\n exe_type = {True: '(cygwin)', False: '(windows)'}\n logger.warn(\n f'pre-commit has detected a mix of cygwin python / git\\n'\n f'This combination is not supported, it is likely you will '\n f'receive an error later in the program.\\n'\n f'Make sure to use cygwin git+python while using cygwin\\n'\n f'These can be installed through the cygwin installer.\\n'\n f' - python {exe_type[is_cygwin_python]}\\n'\n f' - git {exe_type[is_cygwin_git]}\\n',\n )\n", "path": "pre_commit/git.py"}], "after_files": [{"content": "import logging\nimport os.path\nimport sys\nfrom typing import Dict\nfrom typing import List\nfrom typing import MutableMapping\nfrom typing import Optional\nfrom typing import Set\n\nfrom pre_commit.errors import FatalError\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef zsplit(s: str) -> List[str]:\n s = s.strip('\\0')\n if s:\n return s.split('\\0')\n else:\n return []\n\n\ndef no_git_env(\n _env: Optional[MutableMapping[str, str]] = None,\n) -> Dict[str, str]:\n # Too many bugs dealing with environment variables and GIT:\n # https://github.com/pre-commit/pre-commit/issues/300\n # In git 2.6.3 (maybe others), git exports GIT_WORK_TREE while running\n # pre-commit hooks\n # In git 1.9.1 (maybe others), git exports GIT_DIR and GIT_INDEX_FILE\n # while running pre-commit hooks in submodules.\n # GIT_DIR: Causes git clone to clone wrong thing\n # GIT_INDEX_FILE: Causes 'error invalid object ...' during commit\n _env = _env if _env is not None else os.environ\n return {\n k: v for k, v in _env.items()\n if not k.startswith('GIT_') or\n k in {\n 'GIT_EXEC_PATH', 'GIT_SSH', 'GIT_SSH_COMMAND', 'GIT_SSL_CAINFO',\n 'GIT_SSL_NO_VERIFY',\n }\n }\n\n\ndef get_root() -> str:\n # Git 2.25 introduced a change to \"rev-parse --show-toplevel\" that exposed\n # underlying volumes for Windows drives mapped with SUBST. We use\n # \"rev-parse --show-cdup\" to get the appropriate path, but must perform\n # an extra check to see if we are in the .git directory.\n try:\n root = os.path.realpath(\n cmd_output('git', 'rev-parse', '--show-cdup')[1].strip(),\n )\n git_dir = os.path.realpath(get_git_dir())\n except CalledProcessError:\n raise FatalError(\n 'git failed. 
Is it installed, and are you in a Git repository '\n 'directory?',\n )\n if os.path.commonpath((root, git_dir)) == git_dir:\n raise FatalError(\n 'git toplevel unexpectedly empty! make sure you are not '\n 'inside the `.git` directory of your repository.',\n )\n return root\n\n\ndef get_git_dir(git_root: str = '.') -> str:\n opts = ('--git-common-dir', '--git-dir')\n _, out, _ = cmd_output('git', 'rev-parse', *opts, cwd=git_root)\n for line, opt in zip(out.splitlines(), opts):\n if line != opt: # pragma: no branch (git < 2.5)\n return os.path.normpath(os.path.join(git_root, line))\n else:\n raise AssertionError('unreachable: no git dir')\n\n\ndef get_remote_url(git_root: str) -> str:\n _, out, _ = cmd_output('git', 'config', 'remote.origin.url', cwd=git_root)\n return out.strip()\n\n\ndef is_in_merge_conflict() -> bool:\n git_dir = get_git_dir('.')\n return (\n os.path.exists(os.path.join(git_dir, 'MERGE_MSG')) and\n os.path.exists(os.path.join(git_dir, 'MERGE_HEAD'))\n )\n\n\ndef parse_merge_msg_for_conflicts(merge_msg: bytes) -> List[str]:\n # Conflicted files start with tabs\n return [\n line.lstrip(b'#').strip().decode()\n for line in merge_msg.splitlines()\n # '#\\t' for git 2.4.1\n if line.startswith((b'\\t', b'#\\t'))\n ]\n\n\ndef get_conflicted_files() -> Set[str]:\n logger.info('Checking merge-conflict files only.')\n # Need to get the conflicted files from the MERGE_MSG because they could\n # have resolved the conflict by choosing one side or the other\n with open(os.path.join(get_git_dir('.'), 'MERGE_MSG'), 'rb') as f:\n merge_msg = f.read()\n merge_conflict_filenames = parse_merge_msg_for_conflicts(merge_msg)\n\n # This will get the rest of the changes made after the merge.\n # If they resolved the merge conflict by choosing a mesh of both sides\n # this will also include the conflicted files\n tree_hash = cmd_output('git', 'write-tree')[1].strip()\n merge_diff_filenames = zsplit(\n cmd_output(\n 'git', 'diff', '--name-only', '--no-ext-diff', '-z',\n '-m', tree_hash, 'HEAD', 'MERGE_HEAD',\n )[1],\n )\n return set(merge_conflict_filenames) | set(merge_diff_filenames)\n\n\ndef get_staged_files(cwd: Optional[str] = None) -> List[str]:\n return zsplit(\n cmd_output(\n 'git', 'diff', '--staged', '--name-only', '--no-ext-diff', '-z',\n # Everything except for D\n '--diff-filter=ACMRTUXB',\n cwd=cwd,\n )[1],\n )\n\n\ndef intent_to_add_files() -> List[str]:\n _, stdout, _ = cmd_output(\n 'git', 'status', '--ignore-submodules', '--porcelain', '-z',\n )\n parts = list(reversed(zsplit(stdout)))\n intent_to_add = []\n while parts:\n line = parts.pop()\n status, filename = line[:3], line[3:]\n if status[0] in {'C', 'R'}: # renames / moves have an additional arg\n parts.pop()\n if status[1] == 'A':\n intent_to_add.append(filename)\n return intent_to_add\n\n\ndef get_all_files() -> List[str]:\n return zsplit(cmd_output('git', 'ls-files', '-z')[1])\n\n\ndef get_changed_files(old: str, new: str) -> List[str]:\n return zsplit(\n cmd_output(\n 'git', 'diff', '--name-only', '--no-ext-diff', '-z',\n f'{old}...{new}',\n )[1],\n )\n\n\ndef head_rev(remote: str) -> str:\n _, out, _ = cmd_output('git', 'ls-remote', '--exit-code', remote, 'HEAD')\n return out.split()[0]\n\n\ndef has_diff(*args: str, repo: str = '.') -> bool:\n cmd = ('git', 'diff', '--quiet', '--no-ext-diff', *args)\n return cmd_output_b(*cmd, cwd=repo, retcode=None)[0] == 1\n\n\ndef has_core_hookpaths_set() -> bool:\n _, out, _ = cmd_output_b('git', 'config', 'core.hooksPath', retcode=None)\n return bool(out.strip())\n\n\ndef 
init_repo(path: str, remote: str) -> None:\n if os.path.isdir(remote):\n remote = os.path.abspath(remote)\n\n env = no_git_env()\n # avoid the user's template so that hooks do not recurse\n cmd_output_b('git', 'init', '--template=', path, env=env)\n cmd_output_b('git', 'remote', 'add', 'origin', remote, cwd=path, env=env)\n\n\ndef commit(repo: str = '.') -> None:\n env = no_git_env()\n name, email = 'pre-commit', '[email protected]'\n env['GIT_AUTHOR_NAME'] = env['GIT_COMMITTER_NAME'] = name\n env['GIT_AUTHOR_EMAIL'] = env['GIT_COMMITTER_EMAIL'] = email\n cmd = ('git', 'commit', '--no-edit', '--no-gpg-sign', '-n', '-minit')\n cmd_output_b(*cmd, cwd=repo, env=env)\n\n\ndef git_path(name: str, repo: str = '.') -> str:\n _, out, _ = cmd_output('git', 'rev-parse', '--git-path', name, cwd=repo)\n return os.path.join(repo, out.strip())\n\n\ndef check_for_cygwin_mismatch() -> None:\n \"\"\"See https://github.com/pre-commit/pre-commit/issues/354\"\"\"\n if sys.platform in ('cygwin', 'win32'): # pragma: no cover (windows)\n is_cygwin_python = sys.platform == 'cygwin'\n try:\n toplevel = get_root()\n except FatalError: # skip the check if we're not in a git repo\n return\n is_cygwin_git = toplevel.startswith('/')\n\n if is_cygwin_python ^ is_cygwin_git:\n exe_type = {True: '(cygwin)', False: '(windows)'}\n logger.warn(\n f'pre-commit has detected a mix of cygwin python / git\\n'\n f'This combination is not supported, it is likely you will '\n f'receive an error later in the program.\\n'\n f'Make sure to use cygwin git+python while using cygwin\\n'\n f'These can be installed through the cygwin installer.\\n'\n f' - python {exe_type[is_cygwin_python]}\\n'\n f' - git {exe_type[is_cygwin_git]}\\n',\n )\n", "path": "pre_commit/git.py"}]}
| 3,236 | 372 |
gh_patches_debug_13272
|
rasdani/github-patches
|
git_diff
|
arviz-devs__arviz-1133
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add `Matplotlib` framework classifier to `setup.py`
`Matplotlib` now has a [trove classifier on pypi](https://twitter.com/matplotlib/status/1235216347925286913). We can add:
```python
classifiers = [
'Framework :: Matplotlib',
]
```
to `arviz`'s `setup.py` to acknowledge that it is part of `Matplotlib` ecosystem.
I believe that `arviz` currently doesn't have any classifiers ([there are many!](https://pypi.org/classifiers/)). We could add something like the following to `setup.py`:
```python
classifiers = [
'Framework :: Matplotlib',
'Intended Audience :: Science/Research',
'License :: OSI Approved :: Apache Software License'
'Programming Language :: Python',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Topic :: Scientific/Engineering :: Visualization',
]
```
I'm not sure if you would say if `arviz` is:
```
'Development Status :: 5 - Production/Stable',
```
or
```
'Development Status :: 4 - Beta',
```
There may be thoughts on other classifiers to add, but I can quickly put together a PR for this
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import codecs
2 import os
3 import re
4
5 import setuptools
6 from setuptools import setup, find_packages
7 from setuptools.command.install import install
8 from setuptools.command.develop import develop
9
10
11 PROJECT_ROOT = os.path.dirname(os.path.realpath(__file__))
12 REQUIREMENTS_FILE = os.path.join(PROJECT_ROOT, "requirements.txt")
13 REQUIREMENTS_OPTIONAL_FILE = os.path.join(PROJECT_ROOT, "requirements-optional.txt")
14 REQUIREMENTS_DEV_FILE = os.path.join(PROJECT_ROOT, "requirements-dev.txt")
15 README_FILE = os.path.join(PROJECT_ROOT, "README.md")
16 VERSION_FILE = os.path.join(PROJECT_ROOT, "arviz", "__init__.py")
17
18
19 def get_requirements():
20 with codecs.open(REQUIREMENTS_FILE) as buff:
21 return buff.read().splitlines()
22
23
24 def get_requirements_dev():
25 with codecs.open(REQUIREMENTS_DEV_FILE) as buff:
26 return buff.read().splitlines()
27
28
29 def get_requirements_optional():
30 with codecs.open(REQUIREMENTS_OPTIONAL_FILE) as buff:
31 return buff.read().splitlines()
32
33
34 def get_long_description():
35 with codecs.open(README_FILE, "rt") as buff:
36 return buff.read()
37
38
39 def get_version():
40 lines = open(VERSION_FILE, "rt").readlines()
41 version_regex = r"^__version__ = ['\"]([^'\"]*)['\"]"
42 for line in lines:
43 mo = re.search(version_regex, line, re.M)
44 if mo:
45 return mo.group(1)
46 raise RuntimeError("Unable to find version in %s." % (VERSION_FILE,))
47
48
49 setup(
50 name="arviz",
51 license="Apache-2.0",
52 version=get_version(),
53 description="Exploratory analysis of Bayesian models",
54 author="ArviZ Developers",
55 url="http://github.com/arviz-devs/arviz",
56 packages=find_packages(),
57 install_requires=get_requirements(),
58 extras_require=dict(all=get_requirements_optional()), # test=get_requirements_dev(),
59 long_description=get_long_description(),
60 long_description_content_type="text/markdown",
61 include_package_data=True,
62 )
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -59,4 +59,19 @@
long_description=get_long_description(),
long_description_content_type="text/markdown",
include_package_data=True,
+ classifiers=[
+ "Development Status :: 4 - Beta",
+ "Framework :: Matplotlib",
+ "Intended Audience :: Science/Research",
+ "Intended Audience :: Education",
+ "License :: OSI Approved :: Apache Software License",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.6",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Topic :: Scientific/Engineering",
+ "Topic :: Scientific/Engineering :: Visualization",
+ "Topic :: Scientific/Engineering :: Mathematics",
+ ],
)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -59,4 +59,19 @@\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n+ classifiers=[\n+ \"Development Status :: 4 - Beta\",\n+ \"Framework :: Matplotlib\",\n+ \"Intended Audience :: Science/Research\",\n+ \"Intended Audience :: Education\",\n+ \"License :: OSI Approved :: Apache Software License\",\n+ \"Programming Language :: Python\",\n+ \"Programming Language :: Python :: 3\",\n+ \"Programming Language :: Python :: 3.6\",\n+ \"Programming Language :: Python :: 3.7\",\n+ \"Programming Language :: Python :: 3.8\",\n+ \"Topic :: Scientific/Engineering\",\n+ \"Topic :: Scientific/Engineering :: Visualization\",\n+ \"Topic :: Scientific/Engineering :: Mathematics\",\n+ ],\n )\n", "issue": "Add `Matplotlib` framework classifier to `setup.py`\n`Matplotlib` now has a [trove classifier on pypi](https://twitter.com/matplotlib/status/1235216347925286913). We can add:\r\n\r\n```python\r\nclassifiers = [\r\n 'Framework :: Matplotlib',\r\n ]\r\n```\r\nto `arviz`'s `setup.py` to acknowledge that it is part of `Matplotlib` ecosystem.\r\n\r\nI believe that `arviz` currently doesn't have any classifiers ([there are many!](https://pypi.org/classifiers/)). We could add something like the following to `setup.py`:\r\n\r\n```python\r\nclassifiers = [\r\n 'Framework :: Matplotlib',\r\n 'Intended Audience :: Science/Research',\r\n 'License :: OSI Approved :: Apache Software License'\r\n 'Programming Language :: Python',\r\n 'Programming Language :: Python :: 3',\r\n 'Programming Language :: Python :: 3.5',\r\n 'Programming Language :: Python :: 3.6',\r\n 'Programming Language :: Python :: 3.7',\r\n 'Topic :: Scientific/Engineering :: Visualization',\r\n ]\r\n```\r\n\r\nI'm not sure if you would say if `arviz` is:\r\n```\r\n'Development Status :: 5 - Production/Stable',\r\n```\r\nor\r\n```\r\n'Development Status :: 4 - Beta',\r\n```\r\n\r\nThere may be thoughts on other classifiers to add, but I can quickly put together a PR for this\n", "before_files": [{"content": "import codecs\nimport os\nimport re\n\nimport setuptools\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\n\n\nPROJECT_ROOT = os.path.dirname(os.path.realpath(__file__))\nREQUIREMENTS_FILE = os.path.join(PROJECT_ROOT, \"requirements.txt\")\nREQUIREMENTS_OPTIONAL_FILE = os.path.join(PROJECT_ROOT, \"requirements-optional.txt\")\nREQUIREMENTS_DEV_FILE = os.path.join(PROJECT_ROOT, \"requirements-dev.txt\")\nREADME_FILE = os.path.join(PROJECT_ROOT, \"README.md\")\nVERSION_FILE = os.path.join(PROJECT_ROOT, \"arviz\", \"__init__.py\")\n\n\ndef get_requirements():\n with codecs.open(REQUIREMENTS_FILE) as buff:\n return buff.read().splitlines()\n\n\ndef get_requirements_dev():\n with codecs.open(REQUIREMENTS_DEV_FILE) as buff:\n return buff.read().splitlines()\n\n\ndef get_requirements_optional():\n with codecs.open(REQUIREMENTS_OPTIONAL_FILE) as buff:\n return buff.read().splitlines()\n\n\ndef get_long_description():\n with codecs.open(README_FILE, \"rt\") as buff:\n return buff.read()\n\n\ndef get_version():\n lines = open(VERSION_FILE, \"rt\").readlines()\n version_regex = r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\"\n for line in lines:\n mo = re.search(version_regex, line, re.M)\n if mo:\n return mo.group(1)\n raise RuntimeError(\"Unable to find version in %s.\" % (VERSION_FILE,))\n\n\nsetup(\n 
name=\"arviz\",\n license=\"Apache-2.0\",\n version=get_version(),\n description=\"Exploratory analysis of Bayesian models\",\n author=\"ArviZ Developers\",\n url=\"http://github.com/arviz-devs/arviz\",\n packages=find_packages(),\n install_requires=get_requirements(),\n extras_require=dict(all=get_requirements_optional()), # test=get_requirements_dev(),\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n)\n", "path": "setup.py"}], "after_files": [{"content": "import codecs\nimport os\nimport re\n\nimport setuptools\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\n\n\nPROJECT_ROOT = os.path.dirname(os.path.realpath(__file__))\nREQUIREMENTS_FILE = os.path.join(PROJECT_ROOT, \"requirements.txt\")\nREQUIREMENTS_OPTIONAL_FILE = os.path.join(PROJECT_ROOT, \"requirements-optional.txt\")\nREQUIREMENTS_DEV_FILE = os.path.join(PROJECT_ROOT, \"requirements-dev.txt\")\nREADME_FILE = os.path.join(PROJECT_ROOT, \"README.md\")\nVERSION_FILE = os.path.join(PROJECT_ROOT, \"arviz\", \"__init__.py\")\n\n\ndef get_requirements():\n with codecs.open(REQUIREMENTS_FILE) as buff:\n return buff.read().splitlines()\n\n\ndef get_requirements_dev():\n with codecs.open(REQUIREMENTS_DEV_FILE) as buff:\n return buff.read().splitlines()\n\n\ndef get_requirements_optional():\n with codecs.open(REQUIREMENTS_OPTIONAL_FILE) as buff:\n return buff.read().splitlines()\n\n\ndef get_long_description():\n with codecs.open(README_FILE, \"rt\") as buff:\n return buff.read()\n\n\ndef get_version():\n lines = open(VERSION_FILE, \"rt\").readlines()\n version_regex = r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\"\n for line in lines:\n mo = re.search(version_regex, line, re.M)\n if mo:\n return mo.group(1)\n raise RuntimeError(\"Unable to find version in %s.\" % (VERSION_FILE,))\n\n\nsetup(\n name=\"arviz\",\n license=\"Apache-2.0\",\n version=get_version(),\n description=\"Exploratory analysis of Bayesian models\",\n author=\"ArviZ Developers\",\n url=\"http://github.com/arviz-devs/arviz\",\n packages=find_packages(),\n install_requires=get_requirements(),\n extras_require=dict(all=get_requirements_optional()), # test=get_requirements_dev(),\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Framework :: Matplotlib\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Education\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n ],\n)\n", "path": "setup.py"}]}
| 1,122 | 204 |
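
The row above ends with the classifier patch for arviz. As a side illustration, the sketch below shows one way to check a proposed classifier list against the official trove list before committing it to `setup.py`. It assumes the third-party `trove-classifiers` package is installed and exposes a `classifiers` set; both that package and the sample list here are assumptions added for illustration, not part of the patch.

```python
# Illustrative check of proposed trove classifiers.
# Assumes `pip install trove-classifiers`, which exposes a `classifiers` set.
from trove_classifiers import classifiers as official_classifiers

proposed = [
    "Development Status :: 4 - Beta",
    "Framework :: Matplotlib",
    "Intended Audience :: Science/Research",
    "License :: OSI Approved :: Apache Software License",
    "Programming Language :: Python :: 3",
    "Topic :: Scientific/Engineering :: Visualization",
]

# Any string not in the official set would be rejected by PyPI at upload time.
unknown = [c for c in proposed if c not in official_classifiers]
if unknown:
    raise SystemExit(f"Unknown classifiers: {unknown}")
print("All proposed classifiers are valid.")
```

PyPI performs an equivalent validation when a distribution is uploaded, so a check like this only moves the failure earlier in the workflow.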
gh_patches_debug_6966
|
rasdani/github-patches
|
git_diff
|
encode__starlette-706
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WSGI mount error
When I mount a Django application and try to access it, the following error is raised:
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 375, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\starlette\applications.py", line 134, in __call__
await self.error_middleware(scope, receive, send)
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\starlette\middleware\errors.py", line 178, in __call__
raise exc from None
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\starlette\middleware\errors.py", line 156, in __call__
await self.app(scope, receive, _send)
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\starlette\exceptions.py", line 73, in __call__
raise exc from None
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\starlette\exceptions.py", line 62, in __call__
await self.app(scope, receive, sender)
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\starlette\routing.py", line 590, in __call__
await route(scope, receive, send)
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\starlette\routing.py", line 352, in __call__
await self.app(scope, receive, send)
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\starlette\middleware\wsgi.py", line 62, in __call__
await responder(receive, send)
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\starlette\middleware\wsgi.py", line 91, in __call__
await asyncio.wait_for(sender, None)
File "c:\users\abers\appdata\local\programs\python\python37\Lib\asyncio\tasks.py", line 414, in wait_for
return await fut
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\starlette\middleware\wsgi.py", line 106, in sender
await send(message)
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\starlette\exceptions.py", line 59, in sender
await send(message)
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\starlette\middleware\errors.py", line 153, in _send
await send(message)
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 449, in send
status_code=status_code, headers=headers, reason=reason
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\h11\_events.py", line 47, in __init__
self.headers, _parsed=_parsed)
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\h11\_headers.py", line 75, in normalize_and_validate
validate(_field_value_re, value, "Illegal header value {!r}", value)
File "C:\Users\AberS\Documents\Coding\lexiang\lebu\.venv\lib\site-packages\h11\_util.py", line 96, in validate
raise LocalProtocolError(msg)
h11._util.LocalProtocolError: Illegal header value b' sessionid=""; expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/'
```
This is my minimal implementation code; `lebu` is my Django project:
```python
from starlette.applications import Starlette
from starlette.middleware.wsgi import WSGIMiddleware
import uvicorn
from lebu.wsgi import application
app = Starlette(debug=True)
app.mount("/api", WSGIMiddleware(application))
if __name__ == "__main__":
uvicorn.run(app)
```
By the way, the Starlette version is 0.12.9.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/middleware/wsgi.py`
Content:
```
1 import asyncio
2 import io
3 import sys
4 import typing
5
6 from starlette.concurrency import run_in_threadpool
7 from starlette.types import Message, Receive, Scope, Send
8
9
10 def build_environ(scope: Scope, body: bytes) -> dict:
11 """
12 Builds a scope and request body into a WSGI environ object.
13 """
14 environ = {
15 "REQUEST_METHOD": scope["method"],
16 "SCRIPT_NAME": scope.get("root_path", ""),
17 "PATH_INFO": scope["path"],
18 "QUERY_STRING": scope["query_string"].decode("ascii"),
19 "SERVER_PROTOCOL": f"HTTP/{scope['http_version']}",
20 "wsgi.version": (1, 0),
21 "wsgi.url_scheme": scope.get("scheme", "http"),
22 "wsgi.input": io.BytesIO(body),
23 "wsgi.errors": sys.stdout,
24 "wsgi.multithread": True,
25 "wsgi.multiprocess": True,
26 "wsgi.run_once": False,
27 }
28
29 # Get server name and port - required in WSGI, not in ASGI
30 server = scope.get("server") or ("localhost", 80)
31 environ["SERVER_NAME"] = server[0]
32 environ["SERVER_PORT"] = server[1]
33
34 # Get client IP address
35 if scope.get("client"):
36 environ["REMOTE_ADDR"] = scope["client"][0]
37
38 # Go through headers and make them into environ entries
39 for name, value in scope.get("headers", []):
40 name = name.decode("latin1")
41 if name == "content-length":
42 corrected_name = "CONTENT_LENGTH"
43 elif name == "content-type":
44 corrected_name = "CONTENT_TYPE"
45 else:
46 corrected_name = f"HTTP_{name}".upper().replace("-", "_")
47 # HTTPbis say only ASCII chars are allowed in headers, but we latin1 just in case
48 value = value.decode("latin1")
49 if corrected_name in environ:
50 value = environ[corrected_name] + "," + value
51 environ[corrected_name] = value
52 return environ
53
54
55 class WSGIMiddleware:
56 def __init__(self, app: typing.Callable, workers: int = 10) -> None:
57 self.app = app
58
59 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
60 assert scope["type"] == "http"
61 responder = WSGIResponder(self.app, scope)
62 await responder(receive, send)
63
64
65 class WSGIResponder:
66 def __init__(self, app: typing.Callable, scope: Scope) -> None:
67 self.app = app
68 self.scope = scope
69 self.status = None
70 self.response_headers = None
71 self.send_event = asyncio.Event()
72 self.send_queue = [] # type: typing.List[typing.Optional[Message]]
73 self.loop = asyncio.get_event_loop()
74 self.response_started = False
75 self.exc_info = None # type: typing.Any
76
77 async def __call__(self, receive: Receive, send: Send) -> None:
78 body = b""
79 more_body = True
80 while more_body:
81 message = await receive()
82 body += message.get("body", b"")
83 more_body = message.get("more_body", False)
84 environ = build_environ(self.scope, body)
85 sender = None
86 try:
87 sender = self.loop.create_task(self.sender(send))
88 await run_in_threadpool(self.wsgi, environ, self.start_response)
89 self.send_queue.append(None)
90 self.send_event.set()
91 await asyncio.wait_for(sender, None)
92 if self.exc_info is not None:
93 raise self.exc_info[0].with_traceback(
94 self.exc_info[1], self.exc_info[2]
95 )
96 finally:
97 if sender and not sender.done():
98 sender.cancel() # pragma: no cover
99
100 async def sender(self, send: Send) -> None:
101 while True:
102 if self.send_queue:
103 message = self.send_queue.pop(0)
104 if message is None:
105 return
106 await send(message)
107 else:
108 await self.send_event.wait()
109 self.send_event.clear()
110
111 def start_response(
112 self,
113 status: str,
114 response_headers: typing.List[typing.Tuple[str, str]],
115 exc_info: typing.Any = None,
116 ) -> None:
117 self.exc_info = exc_info
118 if not self.response_started:
119 self.response_started = True
120 status_code_string, _ = status.split(" ", 1)
121 status_code = int(status_code_string)
122 headers = [
123 (name.encode("ascii"), value.encode("ascii"))
124 for name, value in response_headers
125 ]
126 self.send_queue.append(
127 {
128 "type": "http.response.start",
129 "status": status_code,
130 "headers": headers,
131 }
132 )
133 self.loop.call_soon_threadsafe(self.send_event.set)
134
135 def wsgi(self, environ: dict, start_response: typing.Callable) -> None:
136 for chunk in self.app(environ, start_response):
137 self.send_queue.append(
138 {"type": "http.response.body", "body": chunk, "more_body": True}
139 )
140 self.loop.call_soon_threadsafe(self.send_event.set)
141
142 self.send_queue.append({"type": "http.response.body", "body": b""})
143 self.loop.call_soon_threadsafe(self.send_event.set)
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/starlette/middleware/wsgi.py b/starlette/middleware/wsgi.py
--- a/starlette/middleware/wsgi.py
+++ b/starlette/middleware/wsgi.py
@@ -120,7 +120,7 @@
status_code_string, _ = status.split(" ", 1)
status_code = int(status_code_string)
headers = [
- (name.encode("ascii"), value.encode("ascii"))
+ (name.strip().encode("ascii"), value.strip().encode("ascii"))
for name, value in response_headers
]
self.send_queue.append(
|
{"golden_diff": "diff --git a/starlette/middleware/wsgi.py b/starlette/middleware/wsgi.py\n--- a/starlette/middleware/wsgi.py\n+++ b/starlette/middleware/wsgi.py\n@@ -120,7 +120,7 @@\n status_code_string, _ = status.split(\" \", 1)\n status_code = int(status_code_string)\n headers = [\n- (name.encode(\"ascii\"), value.encode(\"ascii\"))\n+ (name.strip().encode(\"ascii\"), value.strip().encode(\"ascii\"))\n for name, value in response_headers\n ]\n self.send_queue.append(\n", "issue": "WSGI mount error\nWhen I mount a django application and try to access it\r\n\r\n```\r\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 375, in run_asgi\r\n result = await app(self.scope, self.receive, self.send)\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\starlette\\applications.py\", line 134, in __call__\r\n await self.error_middleware(scope, receive, send)\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\starlette\\middleware\\errors.py\", line 178, in __call__\r\n raise exc from None\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\starlette\\middleware\\errors.py\", line 156, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\starlette\\exceptions.py\", line 73, in __call__\r\n raise exc from None\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\starlette\\exceptions.py\", line 62, in __call__\r\n await self.app(scope, receive, sender)\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\starlette\\routing.py\", line 590, in __call__\r\n await route(scope, receive, send)\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\starlette\\routing.py\", line 352, in __call__\r\n await self.app(scope, receive, send)\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\starlette\\middleware\\wsgi.py\", line 62, in __call__\r\n await responder(receive, send)\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\starlette\\middleware\\wsgi.py\", line 91, in __call__\r\n await asyncio.wait_for(sender, None)\r\n File \"c:\\users\\abers\\appdata\\local\\programs\\python\\python37\\Lib\\asyncio\\tasks.py\", line 414, in wait_for\r\n return await fut\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\starlette\\middleware\\wsgi.py\", line 106, in sender\r\n await send(message)\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\starlette\\exceptions.py\", line 59, in sender\r\n await send(message)\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\starlette\\middleware\\errors.py\", line 153, in _send\r\n await send(message)\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 449, in send\r\n status_code=status_code, headers=headers, reason=reason\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\h11\\_events.py\", line 47, in __init__\r\n self.headers, _parsed=_parsed)\r\n File 
\"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\h11\\_headers.py\", line 75, in normalize_and_validate\r\n validate(_field_value_re, value, \"Illegal header value {!r}\", value)\r\n File \"C:\\Users\\AberS\\Documents\\Coding\\lexiang\\lebu\\.venv\\lib\\site-packages\\h11\\_util.py\", line 96, in validate\r\n raise LocalProtocolError(msg)\r\nh11._util.LocalProtocolError: Illegal header value b' sessionid=\"\"; expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/'\r\n```\r\n\r\nThis is my minimal implementation code, `lebu` is my django program\r\n\r\n```python\r\nfrom starlette.applications import Starlette\r\nfrom starlette.middleware.wsgi import WSGIMiddleware\r\nimport uvicorn\r\n\r\nfrom lebu.wsgi import application\r\n\r\napp = Starlette(debug=True)\r\napp.mount(\"/api\", WSGIMiddleware(application))\r\n\r\nif __name__ == \"__main__\":\r\n uvicorn.run(app)\r\n```\r\n\r\nBy the way, starlette version is 0.12.9\n", "before_files": [{"content": "import asyncio\nimport io\nimport sys\nimport typing\n\nfrom starlette.concurrency import run_in_threadpool\nfrom starlette.types import Message, Receive, Scope, Send\n\n\ndef build_environ(scope: Scope, body: bytes) -> dict:\n \"\"\"\n Builds a scope and request body into a WSGI environ object.\n \"\"\"\n environ = {\n \"REQUEST_METHOD\": scope[\"method\"],\n \"SCRIPT_NAME\": scope.get(\"root_path\", \"\"),\n \"PATH_INFO\": scope[\"path\"],\n \"QUERY_STRING\": scope[\"query_string\"].decode(\"ascii\"),\n \"SERVER_PROTOCOL\": f\"HTTP/{scope['http_version']}\",\n \"wsgi.version\": (1, 0),\n \"wsgi.url_scheme\": scope.get(\"scheme\", \"http\"),\n \"wsgi.input\": io.BytesIO(body),\n \"wsgi.errors\": sys.stdout,\n \"wsgi.multithread\": True,\n \"wsgi.multiprocess\": True,\n \"wsgi.run_once\": False,\n }\n\n # Get server name and port - required in WSGI, not in ASGI\n server = scope.get(\"server\") or (\"localhost\", 80)\n environ[\"SERVER_NAME\"] = server[0]\n environ[\"SERVER_PORT\"] = server[1]\n\n # Get client IP address\n if scope.get(\"client\"):\n environ[\"REMOTE_ADDR\"] = scope[\"client\"][0]\n\n # Go through headers and make them into environ entries\n for name, value in scope.get(\"headers\", []):\n name = name.decode(\"latin1\")\n if name == \"content-length\":\n corrected_name = \"CONTENT_LENGTH\"\n elif name == \"content-type\":\n corrected_name = \"CONTENT_TYPE\"\n else:\n corrected_name = f\"HTTP_{name}\".upper().replace(\"-\", \"_\")\n # HTTPbis say only ASCII chars are allowed in headers, but we latin1 just in case\n value = value.decode(\"latin1\")\n if corrected_name in environ:\n value = environ[corrected_name] + \",\" + value\n environ[corrected_name] = value\n return environ\n\n\nclass WSGIMiddleware:\n def __init__(self, app: typing.Callable, workers: int = 10) -> None:\n self.app = app\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n assert scope[\"type\"] == \"http\"\n responder = WSGIResponder(self.app, scope)\n await responder(receive, send)\n\n\nclass WSGIResponder:\n def __init__(self, app: typing.Callable, scope: Scope) -> None:\n self.app = app\n self.scope = scope\n self.status = None\n self.response_headers = None\n self.send_event = asyncio.Event()\n self.send_queue = [] # type: typing.List[typing.Optional[Message]]\n self.loop = asyncio.get_event_loop()\n self.response_started = False\n self.exc_info = None # type: typing.Any\n\n async def __call__(self, receive: Receive, send: Send) -> None:\n body = b\"\"\n more_body = True\n while more_body:\n 
message = await receive()\n body += message.get(\"body\", b\"\")\n more_body = message.get(\"more_body\", False)\n environ = build_environ(self.scope, body)\n sender = None\n try:\n sender = self.loop.create_task(self.sender(send))\n await run_in_threadpool(self.wsgi, environ, self.start_response)\n self.send_queue.append(None)\n self.send_event.set()\n await asyncio.wait_for(sender, None)\n if self.exc_info is not None:\n raise self.exc_info[0].with_traceback(\n self.exc_info[1], self.exc_info[2]\n )\n finally:\n if sender and not sender.done():\n sender.cancel() # pragma: no cover\n\n async def sender(self, send: Send) -> None:\n while True:\n if self.send_queue:\n message = self.send_queue.pop(0)\n if message is None:\n return\n await send(message)\n else:\n await self.send_event.wait()\n self.send_event.clear()\n\n def start_response(\n self,\n status: str,\n response_headers: typing.List[typing.Tuple[str, str]],\n exc_info: typing.Any = None,\n ) -> None:\n self.exc_info = exc_info\n if not self.response_started:\n self.response_started = True\n status_code_string, _ = status.split(\" \", 1)\n status_code = int(status_code_string)\n headers = [\n (name.encode(\"ascii\"), value.encode(\"ascii\"))\n for name, value in response_headers\n ]\n self.send_queue.append(\n {\n \"type\": \"http.response.start\",\n \"status\": status_code,\n \"headers\": headers,\n }\n )\n self.loop.call_soon_threadsafe(self.send_event.set)\n\n def wsgi(self, environ: dict, start_response: typing.Callable) -> None:\n for chunk in self.app(environ, start_response):\n self.send_queue.append(\n {\"type\": \"http.response.body\", \"body\": chunk, \"more_body\": True}\n )\n self.loop.call_soon_threadsafe(self.send_event.set)\n\n self.send_queue.append({\"type\": \"http.response.body\", \"body\": b\"\"})\n self.loop.call_soon_threadsafe(self.send_event.set)\n", "path": "starlette/middleware/wsgi.py"}], "after_files": [{"content": "import asyncio\nimport io\nimport sys\nimport typing\n\nfrom starlette.concurrency import run_in_threadpool\nfrom starlette.types import Message, Receive, Scope, Send\n\n\ndef build_environ(scope: Scope, body: bytes) -> dict:\n \"\"\"\n Builds a scope and request body into a WSGI environ object.\n \"\"\"\n environ = {\n \"REQUEST_METHOD\": scope[\"method\"],\n \"SCRIPT_NAME\": scope.get(\"root_path\", \"\"),\n \"PATH_INFO\": scope[\"path\"],\n \"QUERY_STRING\": scope[\"query_string\"].decode(\"ascii\"),\n \"SERVER_PROTOCOL\": f\"HTTP/{scope['http_version']}\",\n \"wsgi.version\": (1, 0),\n \"wsgi.url_scheme\": scope.get(\"scheme\", \"http\"),\n \"wsgi.input\": io.BytesIO(body),\n \"wsgi.errors\": sys.stdout,\n \"wsgi.multithread\": True,\n \"wsgi.multiprocess\": True,\n \"wsgi.run_once\": False,\n }\n\n # Get server name and port - required in WSGI, not in ASGI\n server = scope.get(\"server\") or (\"localhost\", 80)\n environ[\"SERVER_NAME\"] = server[0]\n environ[\"SERVER_PORT\"] = server[1]\n\n # Get client IP address\n if scope.get(\"client\"):\n environ[\"REMOTE_ADDR\"] = scope[\"client\"][0]\n\n # Go through headers and make them into environ entries\n for name, value in scope.get(\"headers\", []):\n name = name.decode(\"latin1\")\n if name == \"content-length\":\n corrected_name = \"CONTENT_LENGTH\"\n elif name == \"content-type\":\n corrected_name = \"CONTENT_TYPE\"\n else:\n corrected_name = f\"HTTP_{name}\".upper().replace(\"-\", \"_\")\n # HTTPbis say only ASCII chars are allowed in headers, but we latin1 just in case\n value = value.decode(\"latin1\")\n if corrected_name in 
environ:\n value = environ[corrected_name] + \",\" + value\n environ[corrected_name] = value\n return environ\n\n\nclass WSGIMiddleware:\n def __init__(self, app: typing.Callable, workers: int = 10) -> None:\n self.app = app\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n assert scope[\"type\"] == \"http\"\n responder = WSGIResponder(self.app, scope)\n await responder(receive, send)\n\n\nclass WSGIResponder:\n def __init__(self, app: typing.Callable, scope: Scope) -> None:\n self.app = app\n self.scope = scope\n self.status = None\n self.response_headers = None\n self.send_event = asyncio.Event()\n self.send_queue = [] # type: typing.List[typing.Optional[Message]]\n self.loop = asyncio.get_event_loop()\n self.response_started = False\n self.exc_info = None # type: typing.Any\n\n async def __call__(self, receive: Receive, send: Send) -> None:\n body = b\"\"\n more_body = True\n while more_body:\n message = await receive()\n body += message.get(\"body\", b\"\")\n more_body = message.get(\"more_body\", False)\n environ = build_environ(self.scope, body)\n sender = None\n try:\n sender = self.loop.create_task(self.sender(send))\n await run_in_threadpool(self.wsgi, environ, self.start_response)\n self.send_queue.append(None)\n self.send_event.set()\n await asyncio.wait_for(sender, None)\n if self.exc_info is not None:\n raise self.exc_info[0].with_traceback(\n self.exc_info[1], self.exc_info[2]\n )\n finally:\n if sender and not sender.done():\n sender.cancel() # pragma: no cover\n\n async def sender(self, send: Send) -> None:\n while True:\n if self.send_queue:\n message = self.send_queue.pop(0)\n if message is None:\n return\n await send(message)\n else:\n await self.send_event.wait()\n self.send_event.clear()\n\n def start_response(\n self,\n status: str,\n response_headers: typing.List[typing.Tuple[str, str]],\n exc_info: typing.Any = None,\n ) -> None:\n self.exc_info = exc_info\n if not self.response_started:\n self.response_started = True\n status_code_string, _ = status.split(\" \", 1)\n status_code = int(status_code_string)\n headers = [\n (name.strip().encode(\"ascii\"), value.strip().encode(\"ascii\"))\n for name, value in response_headers\n ]\n self.send_queue.append(\n {\n \"type\": \"http.response.start\",\n \"status\": status_code,\n \"headers\": headers,\n }\n )\n self.loop.call_soon_threadsafe(self.send_event.set)\n\n def wsgi(self, environ: dict, start_response: typing.Callable) -> None:\n for chunk in self.app(environ, start_response):\n self.send_queue.append(\n {\"type\": \"http.response.body\", \"body\": chunk, \"more_body\": True}\n )\n self.loop.call_soon_threadsafe(self.send_event.set)\n\n self.send_queue.append({\"type\": \"http.response.body\", \"body\": b\"\"})\n self.loop.call_soon_threadsafe(self.send_event.set)\n", "path": "starlette/middleware/wsgi.py"}]}
| 2,939 | 128 |
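
The Starlette row above traces the failure to a `Set-Cookie` value that reaches h11 with a leading space, and the golden diff strips header names and values before encoding them. The sketch below replays that normalization outside of Starlette; the `normalize_wsgi_headers` helper is a hypothetical name introduced only for this illustration, while the strip-and-encode expression mirrors the patched list comprehension.

```python
import typing


def normalize_wsgi_headers(
    response_headers: typing.List[typing.Tuple[str, str]],
) -> typing.List[typing.Tuple[bytes, bytes]]:
    # Same transformation as the patched WSGIResponder.start_response:
    # drop surrounding whitespace, then encode each name/value as ASCII.
    return [
        (name.strip().encode("ascii"), value.strip().encode("ascii"))
        for name, value in response_headers
    ]


# The leading space below is the shape of the value h11 rejected in the traceback.
raw = [("Set-Cookie", ' sessionid=""; expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/')]
print(normalize_wsgi_headers(raw))
# [(b'Set-Cookie', b'sessionid=""; expires=Thu, 01 Jan 1970 00:00:00 GMT; Max-Age=0; Path=/')]
```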
gh_patches_debug_30956
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-3602
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider superdrug is broken
During the global build at 2021-06-30-14-42-26, spider **superdrug** failed with **0 features** and **2 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/logs/superdrug.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/output/superdrug.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/output/superdrug.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/superdrug.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import json
3
4 import scrapy
5
6 from locations.items import GeojsonPointItem
7
8
9 class SuperdrugSpider(scrapy.Spider):
10 name = "superdrug"
11 item_attributes = {"brand": "Superdrug", "brand_wikidata": "Q7643261"}
12 allowed_domains = ["superdrug.com"]
13 download_delay = 0.5
14
15 start_urls = ["https://www.superdrug.com/stores/a-to-z"]
16
17 def parse(self, response):
18 urls = response.xpath('//a[@class="row store-link"]/@href').extract()
19
20 for url in urls:
21 yield scrapy.Request(response.urljoin(url), callback=self.parse_location)
22
23 def parse_location(self, response):
24 data = json.loads(
25 response.xpath(
26 '//script[@type="application/ld+json" and contains(text(), "streetAddress")]/text()'
27 ).extract_first()
28 )
29
30 properties = {
31 "name": data["name"],
32 "ref": data["name"],
33 "addr_full": data["address"]["streetAddress"],
34 "city": data["address"]["addressLocality"],
35 "state": data["address"]["addressRegion"],
36 "postcode": data["address"]["postalCode"],
37 "country": data["address"]["addressCountry"],
38 "phone": data.get("telephone"),
39 "website": response.url,
40 "lat": float(
41 response.xpath(
42 '//div[@class="store-locator store-locator__overview"]/@data-lat'
43 ).extract_first()
44 ),
45 "lon": float(
46 response.xpath(
47 '//div[@class="store-locator store-locator__overview"]/@data-lng'
48 ).extract_first()
49 ),
50 }
51 yield GeojsonPointItem(**properties)
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/superdrug.py b/locations/spiders/superdrug.py
--- a/locations/spiders/superdrug.py
+++ b/locations/spiders/superdrug.py
@@ -4,6 +4,7 @@
import scrapy
from locations.items import GeojsonPointItem
+from locations.hours import OpeningHours
class SuperdrugSpider(scrapy.Spider):
@@ -14,6 +15,10 @@
start_urls = ["https://www.superdrug.com/stores/a-to-z"]
+ custom_settings = {
+ "USER_AGENT": "Mozilla/5.0 (X11; Linux x86_64; rv:99.0) Gecko/20100101 Firefox/99.0"
+ }
+
def parse(self, response):
urls = response.xpath('//a[@class="row store-link"]/@href').extract()
@@ -28,9 +33,11 @@
)
properties = {
- "name": data["name"],
- "ref": data["name"],
- "addr_full": data["address"]["streetAddress"],
+ "name": data["name"].replace("Superdrug", "").strip(),
+ "ref": data["@id"],
+ "street_address": data["address"]["streetAddress"]
+ .replace("Superdrug", "")
+ .strip(),
"city": data["address"]["addressLocality"],
"state": data["address"]["addressRegion"],
"postcode": data["address"]["postalCode"],
@@ -48,4 +55,15 @@
).extract_first()
),
}
+
+ oh = OpeningHours()
+
+ for rule in data["OpeningHoursSpecification"]:
+ oh.add_range(
+ day=rule["dayOfWeek"][0:2],
+ open_time=rule["opens"],
+ close_time=rule["closes"],
+ time_format="%I:%M %p",
+ )
+
yield GeojsonPointItem(**properties)
|
{"golden_diff": "diff --git a/locations/spiders/superdrug.py b/locations/spiders/superdrug.py\n--- a/locations/spiders/superdrug.py\n+++ b/locations/spiders/superdrug.py\n@@ -4,6 +4,7 @@\n import scrapy\n \n from locations.items import GeojsonPointItem\n+from locations.hours import OpeningHours\n \n \n class SuperdrugSpider(scrapy.Spider):\n@@ -14,6 +15,10 @@\n \n start_urls = [\"https://www.superdrug.com/stores/a-to-z\"]\n \n+ custom_settings = {\n+ \"USER_AGENT\": \"Mozilla/5.0 (X11; Linux x86_64; rv:99.0) Gecko/20100101 Firefox/99.0\"\n+ }\n+\n def parse(self, response):\n urls = response.xpath('//a[@class=\"row store-link\"]/@href').extract()\n \n@@ -28,9 +33,11 @@\n )\n \n properties = {\n- \"name\": data[\"name\"],\n- \"ref\": data[\"name\"],\n- \"addr_full\": data[\"address\"][\"streetAddress\"],\n+ \"name\": data[\"name\"].replace(\"Superdrug\", \"\").strip(),\n+ \"ref\": data[\"@id\"],\n+ \"street_address\": data[\"address\"][\"streetAddress\"]\n+ .replace(\"Superdrug\", \"\")\n+ .strip(),\n \"city\": data[\"address\"][\"addressLocality\"],\n \"state\": data[\"address\"][\"addressRegion\"],\n \"postcode\": data[\"address\"][\"postalCode\"],\n@@ -48,4 +55,15 @@\n ).extract_first()\n ),\n }\n+\n+ oh = OpeningHours()\n+\n+ for rule in data[\"OpeningHoursSpecification\"]:\n+ oh.add_range(\n+ day=rule[\"dayOfWeek\"][0:2],\n+ open_time=rule[\"opens\"],\n+ close_time=rule[\"closes\"],\n+ time_format=\"%I:%M %p\",\n+ )\n+\n yield GeojsonPointItem(**properties)\n", "issue": "Spider superdrug is broken\nDuring the global build at 2021-06-30-14-42-26, spider **superdrug** failed with **0 features** and **2 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/logs/superdrug.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/output/superdrug.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-30-14-42-26/output/superdrug.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\n\n\nclass SuperdrugSpider(scrapy.Spider):\n name = \"superdrug\"\n item_attributes = {\"brand\": \"Superdrug\", \"brand_wikidata\": \"Q7643261\"}\n allowed_domains = [\"superdrug.com\"]\n download_delay = 0.5\n\n start_urls = [\"https://www.superdrug.com/stores/a-to-z\"]\n\n def parse(self, response):\n urls = response.xpath('//a[@class=\"row store-link\"]/@href').extract()\n\n for url in urls:\n yield scrapy.Request(response.urljoin(url), callback=self.parse_location)\n\n def parse_location(self, response):\n data = json.loads(\n response.xpath(\n '//script[@type=\"application/ld+json\" and contains(text(), \"streetAddress\")]/text()'\n ).extract_first()\n )\n\n properties = {\n \"name\": data[\"name\"],\n \"ref\": data[\"name\"],\n \"addr_full\": data[\"address\"][\"streetAddress\"],\n \"city\": data[\"address\"][\"addressLocality\"],\n \"state\": data[\"address\"][\"addressRegion\"],\n \"postcode\": data[\"address\"][\"postalCode\"],\n \"country\": data[\"address\"][\"addressCountry\"],\n \"phone\": data.get(\"telephone\"),\n \"website\": response.url,\n \"lat\": float(\n response.xpath(\n '//div[@class=\"store-locator store-locator__overview\"]/@data-lat'\n ).extract_first()\n ),\n \"lon\": float(\n response.xpath(\n '//div[@class=\"store-locator store-locator__overview\"]/@data-lng'\n ).extract_first()\n ),\n }\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/superdrug.py"}], "after_files": 
[{"content": "# -*- coding: utf-8 -*-\nimport json\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass SuperdrugSpider(scrapy.Spider):\n name = \"superdrug\"\n item_attributes = {\"brand\": \"Superdrug\", \"brand_wikidata\": \"Q7643261\"}\n allowed_domains = [\"superdrug.com\"]\n download_delay = 0.5\n\n start_urls = [\"https://www.superdrug.com/stores/a-to-z\"]\n\n custom_settings = {\n \"USER_AGENT\": \"Mozilla/5.0 (X11; Linux x86_64; rv:99.0) Gecko/20100101 Firefox/99.0\"\n }\n\n def parse(self, response):\n urls = response.xpath('//a[@class=\"row store-link\"]/@href').extract()\n\n for url in urls:\n yield scrapy.Request(response.urljoin(url), callback=self.parse_location)\n\n def parse_location(self, response):\n data = json.loads(\n response.xpath(\n '//script[@type=\"application/ld+json\" and contains(text(), \"streetAddress\")]/text()'\n ).extract_first()\n )\n\n properties = {\n \"name\": data[\"name\"].replace(\"Superdrug\", \"\").strip(),\n \"ref\": data[\"@id\"],\n \"street_address\": data[\"address\"][\"streetAddress\"]\n .replace(\"Superdrug\", \"\")\n .strip(),\n \"city\": data[\"address\"][\"addressLocality\"],\n \"state\": data[\"address\"][\"addressRegion\"],\n \"postcode\": data[\"address\"][\"postalCode\"],\n \"country\": data[\"address\"][\"addressCountry\"],\n \"phone\": data.get(\"telephone\"),\n \"website\": response.url,\n \"lat\": float(\n response.xpath(\n '//div[@class=\"store-locator store-locator__overview\"]/@data-lat'\n ).extract_first()\n ),\n \"lon\": float(\n response.xpath(\n '//div[@class=\"store-locator store-locator__overview\"]/@data-lng'\n ).extract_first()\n ),\n }\n\n oh = OpeningHours()\n\n for rule in data[\"OpeningHoursSpecification\"]:\n oh.add_range(\n day=rule[\"dayOfWeek\"][0:2],\n open_time=rule[\"opens\"],\n close_time=rule[\"closes\"],\n time_format=\"%I:%M %p\",\n )\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/superdrug.py"}]}
| 921 | 444 |
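
The superdrug patch above adds a desktop User-Agent and feeds schema.org `OpeningHoursSpecification` entries through the repo's `OpeningHours` helper. The sketch below performs the same day and time conversion in plain Python so the transformation can be inspected without the alltheplaces codebase; the sample input shape (full day names plus 12-hour `opens`/`closes` strings) is an assumption inferred from the field names and `"%I:%M %p"` format used in the patch.

```python
from datetime import datetime


def to_osm_hours(spec: list) -> str:
    """Collapse schema.org-style opening hours rules into an OSM-style string."""
    parts = []
    for rule in spec:
        day = rule["dayOfWeek"][:2]  # "Monday" -> "Mo", as in the patch
        opens = datetime.strptime(rule["opens"], "%I:%M %p").strftime("%H:%M")
        closes = datetime.strptime(rule["closes"], "%I:%M %p").strftime("%H:%M")
        parts.append(f"{day} {opens}-{closes}")
    return "; ".join(parts)


sample = [
    {"dayOfWeek": "Monday", "opens": "9:00 AM", "closes": "5:30 PM"},
    {"dayOfWeek": "Sunday", "opens": "10:00 AM", "closes": "4:00 PM"},
]
print(to_osm_hours(sample))  # Mo 09:00-17:30; Su 10:00-16:00
```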
gh_patches_debug_996
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-2522
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unpin pytest
Revert https://github.com/pyca/cryptography/pull/2513.
This is waiting on a pytest release that includes the fix for https://github.com/pytest-dev/pytest/issues/1238.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 # This file is dual licensed under the terms of the Apache License, Version
4 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
5 # for complete details.
6
7 from __future__ import absolute_import, division, print_function
8
9 import os
10 import platform
11 import subprocess
12 import sys
13 from distutils.command.build import build
14
15 import pkg_resources
16
17 from setuptools import find_packages, setup
18 from setuptools.command.install import install
19 from setuptools.command.test import test
20
21
22 base_dir = os.path.dirname(__file__)
23 src_dir = os.path.join(base_dir, "src")
24
25 # When executing the setup.py, we need to be able to import ourselves, this
26 # means that we need to add the src/ directory to the sys.path.
27 sys.path.insert(0, src_dir)
28
29 about = {}
30 with open(os.path.join(src_dir, "cryptography", "__about__.py")) as f:
31 exec(f.read(), about)
32
33
34 VECTORS_DEPENDENCY = "cryptography_vectors=={0}".format(about['__version__'])
35
36 requirements = [
37 "idna>=2.0",
38 "pyasn1>=0.1.8",
39 "six>=1.4.1",
40 "setuptools",
41 ]
42 setup_requirements = []
43
44 if sys.version_info < (3, 4):
45 requirements.append("enum34")
46
47 if sys.version_info < (3, 3):
48 requirements.append("ipaddress")
49
50 if platform.python_implementation() == "PyPy":
51 if sys.pypy_version_info < (2, 6):
52 raise RuntimeError(
53 "cryptography 1.0 is not compatible with PyPy < 2.6. Please "
54 "upgrade PyPy to use this library."
55 )
56 else:
57 requirements.append("cffi>=1.1.0")
58 setup_requirements.append("cffi>=1.1.0")
59
60 # If you add a new dep here you probably need to add it in the tox.ini as well
61 test_requirements = [
62 "pytest!=2.8.4",
63 "pretend",
64 "iso8601",
65 "hypothesis",
66 "pyasn1_modules",
67 ]
68
69 # If there's no vectors locally that probably means we are in a tarball and
70 # need to go and get the matching vectors package from PyPi
71 if not os.path.exists(os.path.join(base_dir, "vectors/setup.py")):
72 test_requirements.append(VECTORS_DEPENDENCY)
73
74
75 def cc_is_available():
76 return sys.platform == "darwin" and list(map(
77 int, platform.mac_ver()[0].split("."))) >= [10, 8, 0]
78
79
80 backends = [
81 "openssl = cryptography.hazmat.backends.openssl:backend"
82 ]
83
84 if cc_is_available():
85 backends.append(
86 "commoncrypto = cryptography.hazmat.backends.commoncrypto:backend",
87 )
88
89
90 class PyTest(test):
91 def finalize_options(self):
92 test.finalize_options(self)
93 self.test_args = []
94 self.test_suite = True
95
96 # This means there's a vectors/ folder with the package in here.
97 # cd into it, install the vectors package and then refresh sys.path
98 if VECTORS_DEPENDENCY not in test_requirements:
99 subprocess.check_call(
100 [sys.executable, "setup.py", "install"], cwd="vectors"
101 )
102 pkg_resources.get_distribution("cryptography_vectors").activate()
103
104 def run_tests(self):
105 # Import here because in module scope the eggs are not loaded.
106 import pytest
107 test_args = [os.path.join(base_dir, "tests")]
108 errno = pytest.main(test_args)
109 sys.exit(errno)
110
111
112 def keywords_with_side_effects(argv):
113 """
114 Get a dictionary with setup keywords that (can) have side effects.
115
116 :param argv: A list of strings with command line arguments.
117 :returns: A dictionary with keyword arguments for the ``setup()`` function.
118
119 This setup.py script uses the setuptools 'setup_requires' feature because
120 this is required by the cffi package to compile extension modules. The
121 purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi
122 build process as a result of setup.py invocations that don't need the cffi
123 module to be built (setup.py serves the dual purpose of exposing package
124 metadata).
125
126 All of the options listed by ``python setup.py --help`` that print
127 information should be recognized here. The commands ``clean``,
128 ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.
129 Any combination of these options and commands is also supported.
130
131 This function was originally based on the `setup.py script`_ of SciPy (see
132 also the discussion in `pip issue #25`_).
133
134 .. _pip issue #25: https://github.com/pypa/pip/issues/25
135 .. _setup.py script: https://github.com/scipy/scipy/blob/master/setup.py
136 """
137 no_setup_requires_arguments = (
138 '-h', '--help',
139 '-n', '--dry-run',
140 '-q', '--quiet',
141 '-v', '--verbose',
142 '-V', '--version',
143 '--author',
144 '--author-email',
145 '--classifiers',
146 '--contact',
147 '--contact-email',
148 '--description',
149 '--egg-base',
150 '--fullname',
151 '--help-commands',
152 '--keywords',
153 '--licence',
154 '--license',
155 '--long-description',
156 '--maintainer',
157 '--maintainer-email',
158 '--name',
159 '--no-user-cfg',
160 '--obsoletes',
161 '--platforms',
162 '--provides',
163 '--requires',
164 '--url',
165 'clean',
166 'egg_info',
167 'register',
168 'sdist',
169 'upload',
170 )
171
172 def is_short_option(argument):
173 """Check whether a command line argument is a short option."""
174 return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'
175
176 def expand_short_options(argument):
177 """Expand combined short options into canonical short options."""
178 return ('-' + char for char in argument[1:])
179
180 def argument_without_setup_requirements(argv, i):
181 """Check whether a command line argument needs setup requirements."""
182 if argv[i] in no_setup_requires_arguments:
183 # Simple case: An argument which is either an option or a command
184 # which doesn't need setup requirements.
185 return True
186 elif (is_short_option(argv[i]) and
187 all(option in no_setup_requires_arguments
188 for option in expand_short_options(argv[i]))):
189 # Not so simple case: Combined short options none of which need
190 # setup requirements.
191 return True
192 elif argv[i - 1:i] == ['--egg-base']:
193 # Tricky case: --egg-info takes an argument which should not make
194 # us use setup_requires (defeating the purpose of this code).
195 return True
196 else:
197 return False
198
199 if all(argument_without_setup_requirements(argv, i)
200 for i in range(1, len(argv))):
201 return {
202 "cmdclass": {
203 "build": DummyBuild,
204 "install": DummyInstall,
205 "test": DummyPyTest,
206 }
207 }
208 else:
209 cffi_modules = [
210 "src/_cffi_src/build_openssl.py:ffi",
211 "src/_cffi_src/build_constant_time.py:ffi",
212 "src/_cffi_src/build_padding.py:ffi",
213 ]
214 if cc_is_available():
215 cffi_modules.append("src/_cffi_src/build_commoncrypto.py:ffi")
216
217 return {
218 "setup_requires": setup_requirements,
219 "cmdclass": {
220 "test": PyTest,
221 },
222 "cffi_modules": cffi_modules
223 }
224
225
226 setup_requires_error = ("Requested setup command that needs 'setup_requires' "
227 "while command line arguments implied a side effect "
228 "free command or option.")
229
230
231 class DummyBuild(build):
232 """
233 This class makes it very obvious when ``keywords_with_side_effects()`` has
234 incorrectly interpreted the command line arguments to ``setup.py build`` as
235 one of the 'side effect free' commands or options.
236 """
237
238 def run(self):
239 raise RuntimeError(setup_requires_error)
240
241
242 class DummyInstall(install):
243 """
244 This class makes it very obvious when ``keywords_with_side_effects()`` has
245 incorrectly interpreted the command line arguments to ``setup.py install``
246 as one of the 'side effect free' commands or options.
247 """
248
249 def run(self):
250 raise RuntimeError(setup_requires_error)
251
252
253 class DummyPyTest(test):
254 """
255 This class makes it very obvious when ``keywords_with_side_effects()`` has
256 incorrectly interpreted the command line arguments to ``setup.py test`` as
257 one of the 'side effect free' commands or options.
258 """
259
260 def run_tests(self):
261 raise RuntimeError(setup_requires_error)
262
263
264 with open(os.path.join(base_dir, "README.rst")) as f:
265 long_description = f.read()
266
267
268 setup(
269 name=about["__title__"],
270 version=about["__version__"],
271
272 description=about["__summary__"],
273 long_description=long_description,
274 license=about["__license__"],
275 url=about["__uri__"],
276
277 author=about["__author__"],
278 author_email=about["__email__"],
279
280 classifiers=[
281 "Intended Audience :: Developers",
282 "License :: OSI Approved :: Apache Software License",
283 "License :: OSI Approved :: BSD License",
284 "Natural Language :: English",
285 "Operating System :: MacOS :: MacOS X",
286 "Operating System :: POSIX",
287 "Operating System :: POSIX :: BSD",
288 "Operating System :: POSIX :: Linux",
289 "Operating System :: Microsoft :: Windows",
290 "Programming Language :: Python",
291 "Programming Language :: Python :: 2",
292 "Programming Language :: Python :: 2.6",
293 "Programming Language :: Python :: 2.7",
294 "Programming Language :: Python :: 3",
295 "Programming Language :: Python :: 3.3",
296 "Programming Language :: Python :: 3.4",
297 "Programming Language :: Python :: 3.5",
298 "Programming Language :: Python :: Implementation :: CPython",
299 "Programming Language :: Python :: Implementation :: PyPy",
300 "Topic :: Security :: Cryptography",
301 ],
302
303 package_dir={"": "src"},
304 packages=find_packages(
305 where="src", exclude=["_cffi_src", "_cffi_src.*", "tests", "tests.*"]
306 ),
307 include_package_data=True,
308
309 install_requires=requirements,
310 tests_require=test_requirements,
311
312 # for cffi
313 zip_safe=False,
314 ext_package="cryptography.hazmat.bindings",
315 entry_points={
316 "cryptography.backends": backends,
317 },
318 **keywords_with_side_effects(sys.argv)
319 )
320
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -59,7 +59,7 @@
# If you add a new dep here you probably need to add it in the tox.ini as well
test_requirements = [
- "pytest!=2.8.4",
+ "pytest",
"pretend",
"iso8601",
"hypothesis",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -59,7 +59,7 @@\n \n # If you add a new dep here you probably need to add it in the tox.ini as well\n test_requirements = [\n- \"pytest!=2.8.4\",\n+ \"pytest\",\n \"pretend\",\n \"iso8601\",\n \"hypothesis\",\n", "issue": "Unpin pytest\nrevert https://github.com/pyca/cryptography/pull/2513\n\nwaiting on a pytest release with https://github.com/pytest-dev/pytest/issues/1238 landed\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport platform\nimport subprocess\nimport sys\nfrom distutils.command.build import build\n\nimport pkg_resources\n\nfrom setuptools import find_packages, setup\nfrom setuptools.command.install import install\nfrom setuptools.command.test import test\n\n\nbase_dir = os.path.dirname(__file__)\nsrc_dir = os.path.join(base_dir, \"src\")\n\n# When executing the setup.py, we need to be able to import ourselves, this\n# means that we need to add the src/ directory to the sys.path.\nsys.path.insert(0, src_dir)\n\nabout = {}\nwith open(os.path.join(src_dir, \"cryptography\", \"__about__.py\")) as f:\n exec(f.read(), about)\n\n\nVECTORS_DEPENDENCY = \"cryptography_vectors=={0}\".format(about['__version__'])\n\nrequirements = [\n \"idna>=2.0\",\n \"pyasn1>=0.1.8\",\n \"six>=1.4.1\",\n \"setuptools\",\n]\nsetup_requirements = []\n\nif sys.version_info < (3, 4):\n requirements.append(\"enum34\")\n\nif sys.version_info < (3, 3):\n requirements.append(\"ipaddress\")\n\nif platform.python_implementation() == \"PyPy\":\n if sys.pypy_version_info < (2, 6):\n raise RuntimeError(\n \"cryptography 1.0 is not compatible with PyPy < 2.6. 
Please \"\n \"upgrade PyPy to use this library.\"\n )\nelse:\n requirements.append(\"cffi>=1.1.0\")\n setup_requirements.append(\"cffi>=1.1.0\")\n\n# If you add a new dep here you probably need to add it in the tox.ini as well\ntest_requirements = [\n \"pytest!=2.8.4\",\n \"pretend\",\n \"iso8601\",\n \"hypothesis\",\n \"pyasn1_modules\",\n]\n\n# If there's no vectors locally that probably means we are in a tarball and\n# need to go and get the matching vectors package from PyPi\nif not os.path.exists(os.path.join(base_dir, \"vectors/setup.py\")):\n test_requirements.append(VECTORS_DEPENDENCY)\n\n\ndef cc_is_available():\n return sys.platform == \"darwin\" and list(map(\n int, platform.mac_ver()[0].split(\".\"))) >= [10, 8, 0]\n\n\nbackends = [\n \"openssl = cryptography.hazmat.backends.openssl:backend\"\n]\n\nif cc_is_available():\n backends.append(\n \"commoncrypto = cryptography.hazmat.backends.commoncrypto:backend\",\n )\n\n\nclass PyTest(test):\n def finalize_options(self):\n test.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n # This means there's a vectors/ folder with the package in here.\n # cd into it, install the vectors package and then refresh sys.path\n if VECTORS_DEPENDENCY not in test_requirements:\n subprocess.check_call(\n [sys.executable, \"setup.py\", \"install\"], cwd=\"vectors\"\n )\n pkg_resources.get_distribution(\"cryptography_vectors\").activate()\n\n def run_tests(self):\n # Import here because in module scope the eggs are not loaded.\n import pytest\n test_args = [os.path.join(base_dir, \"tests\")]\n errno = pytest.main(test_args)\n sys.exit(errno)\n\n\ndef keywords_with_side_effects(argv):\n \"\"\"\n Get a dictionary with setup keywords that (can) have side effects.\n\n :param argv: A list of strings with command line arguments.\n :returns: A dictionary with keyword arguments for the ``setup()`` function.\n\n This setup.py script uses the setuptools 'setup_requires' feature because\n this is required by the cffi package to compile extension modules. The\n purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi\n build process as a result of setup.py invocations that don't need the cffi\n module to be built (setup.py serves the dual purpose of exposing package\n metadata).\n\n All of the options listed by ``python setup.py --help`` that print\n information should be recognized here. The commands ``clean``,\n ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.\n Any combination of these options and commands is also supported.\n\n This function was originally based on the `setup.py script`_ of SciPy (see\n also the discussion in `pip issue #25`_).\n\n .. _pip issue #25: https://github.com/pypa/pip/issues/25\n .. 
_setup.py script: https://github.com/scipy/scipy/blob/master/setup.py\n \"\"\"\n no_setup_requires_arguments = (\n '-h', '--help',\n '-n', '--dry-run',\n '-q', '--quiet',\n '-v', '--verbose',\n '-V', '--version',\n '--author',\n '--author-email',\n '--classifiers',\n '--contact',\n '--contact-email',\n '--description',\n '--egg-base',\n '--fullname',\n '--help-commands',\n '--keywords',\n '--licence',\n '--license',\n '--long-description',\n '--maintainer',\n '--maintainer-email',\n '--name',\n '--no-user-cfg',\n '--obsoletes',\n '--platforms',\n '--provides',\n '--requires',\n '--url',\n 'clean',\n 'egg_info',\n 'register',\n 'sdist',\n 'upload',\n )\n\n def is_short_option(argument):\n \"\"\"Check whether a command line argument is a short option.\"\"\"\n return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'\n\n def expand_short_options(argument):\n \"\"\"Expand combined short options into canonical short options.\"\"\"\n return ('-' + char for char in argument[1:])\n\n def argument_without_setup_requirements(argv, i):\n \"\"\"Check whether a command line argument needs setup requirements.\"\"\"\n if argv[i] in no_setup_requires_arguments:\n # Simple case: An argument which is either an option or a command\n # which doesn't need setup requirements.\n return True\n elif (is_short_option(argv[i]) and\n all(option in no_setup_requires_arguments\n for option in expand_short_options(argv[i]))):\n # Not so simple case: Combined short options none of which need\n # setup requirements.\n return True\n elif argv[i - 1:i] == ['--egg-base']:\n # Tricky case: --egg-info takes an argument which should not make\n # us use setup_requires (defeating the purpose of this code).\n return True\n else:\n return False\n\n if all(argument_without_setup_requirements(argv, i)\n for i in range(1, len(argv))):\n return {\n \"cmdclass\": {\n \"build\": DummyBuild,\n \"install\": DummyInstall,\n \"test\": DummyPyTest,\n }\n }\n else:\n cffi_modules = [\n \"src/_cffi_src/build_openssl.py:ffi\",\n \"src/_cffi_src/build_constant_time.py:ffi\",\n \"src/_cffi_src/build_padding.py:ffi\",\n ]\n if cc_is_available():\n cffi_modules.append(\"src/_cffi_src/build_commoncrypto.py:ffi\")\n\n return {\n \"setup_requires\": setup_requirements,\n \"cmdclass\": {\n \"test\": PyTest,\n },\n \"cffi_modules\": cffi_modules\n }\n\n\nsetup_requires_error = (\"Requested setup command that needs 'setup_requires' \"\n \"while command line arguments implied a side effect \"\n \"free command or option.\")\n\n\nclass DummyBuild(build):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py build`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyInstall(install):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py install``\n as one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyPyTest(test):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py test`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run_tests(self):\n raise RuntimeError(setup_requires_error)\n\n\nwith open(os.path.join(base_dir, \"README.rst\")) as f:\n 
long_description = f.read()\n\n\nsetup(\n name=about[\"__title__\"],\n version=about[\"__version__\"],\n\n description=about[\"__summary__\"],\n long_description=long_description,\n license=about[\"__license__\"],\n url=about[\"__uri__\"],\n\n author=about[\"__author__\"],\n author_email=about[\"__email__\"],\n\n classifiers=[\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Security :: Cryptography\",\n ],\n\n package_dir={\"\": \"src\"},\n packages=find_packages(\n where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\", \"tests\", \"tests.*\"]\n ),\n include_package_data=True,\n\n install_requires=requirements,\n tests_require=test_requirements,\n\n # for cffi\n zip_safe=False,\n ext_package=\"cryptography.hazmat.bindings\",\n entry_points={\n \"cryptography.backends\": backends,\n },\n **keywords_with_side_effects(sys.argv)\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport platform\nimport subprocess\nimport sys\nfrom distutils.command.build import build\n\nimport pkg_resources\n\nfrom setuptools import find_packages, setup\nfrom setuptools.command.install import install\nfrom setuptools.command.test import test\n\n\nbase_dir = os.path.dirname(__file__)\nsrc_dir = os.path.join(base_dir, \"src\")\n\n# When executing the setup.py, we need to be able to import ourselves, this\n# means that we need to add the src/ directory to the sys.path.\nsys.path.insert(0, src_dir)\n\nabout = {}\nwith open(os.path.join(src_dir, \"cryptography\", \"__about__.py\")) as f:\n exec(f.read(), about)\n\n\nVECTORS_DEPENDENCY = \"cryptography_vectors=={0}\".format(about['__version__'])\n\nrequirements = [\n \"idna>=2.0\",\n \"pyasn1>=0.1.8\",\n \"six>=1.4.1\",\n \"setuptools\",\n]\nsetup_requirements = []\n\nif sys.version_info < (3, 4):\n requirements.append(\"enum34\")\n\nif sys.version_info < (3, 3):\n requirements.append(\"ipaddress\")\n\nif platform.python_implementation() == \"PyPy\":\n if sys.pypy_version_info < (2, 6):\n raise RuntimeError(\n \"cryptography 1.0 is not compatible with PyPy < 2.6. 
Please \"\n \"upgrade PyPy to use this library.\"\n )\nelse:\n requirements.append(\"cffi>=1.1.0\")\n setup_requirements.append(\"cffi>=1.1.0\")\n\n# If you add a new dep here you probably need to add it in the tox.ini as well\ntest_requirements = [\n \"pytest\",\n \"pretend\",\n \"iso8601\",\n \"hypothesis\",\n \"pyasn1_modules\",\n]\n\n# If there's no vectors locally that probably means we are in a tarball and\n# need to go and get the matching vectors package from PyPi\nif not os.path.exists(os.path.join(base_dir, \"vectors/setup.py\")):\n test_requirements.append(VECTORS_DEPENDENCY)\n\n\ndef cc_is_available():\n return sys.platform == \"darwin\" and list(map(\n int, platform.mac_ver()[0].split(\".\"))) >= [10, 8, 0]\n\n\nbackends = [\n \"openssl = cryptography.hazmat.backends.openssl:backend\"\n]\n\nif cc_is_available():\n backends.append(\n \"commoncrypto = cryptography.hazmat.backends.commoncrypto:backend\",\n )\n\n\nclass PyTest(test):\n def finalize_options(self):\n test.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n # This means there's a vectors/ folder with the package in here.\n # cd into it, install the vectors package and then refresh sys.path\n if VECTORS_DEPENDENCY not in test_requirements:\n subprocess.check_call(\n [sys.executable, \"setup.py\", \"install\"], cwd=\"vectors\"\n )\n pkg_resources.get_distribution(\"cryptography_vectors\").activate()\n\n def run_tests(self):\n # Import here because in module scope the eggs are not loaded.\n import pytest\n test_args = [os.path.join(base_dir, \"tests\")]\n errno = pytest.main(test_args)\n sys.exit(errno)\n\n\ndef keywords_with_side_effects(argv):\n \"\"\"\n Get a dictionary with setup keywords that (can) have side effects.\n\n :param argv: A list of strings with command line arguments.\n :returns: A dictionary with keyword arguments for the ``setup()`` function.\n\n This setup.py script uses the setuptools 'setup_requires' feature because\n this is required by the cffi package to compile extension modules. The\n purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi\n build process as a result of setup.py invocations that don't need the cffi\n module to be built (setup.py serves the dual purpose of exposing package\n metadata).\n\n All of the options listed by ``python setup.py --help`` that print\n information should be recognized here. The commands ``clean``,\n ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.\n Any combination of these options and commands is also supported.\n\n This function was originally based on the `setup.py script`_ of SciPy (see\n also the discussion in `pip issue #25`_).\n\n .. _pip issue #25: https://github.com/pypa/pip/issues/25\n .. 
_setup.py script: https://github.com/scipy/scipy/blob/master/setup.py\n \"\"\"\n no_setup_requires_arguments = (\n '-h', '--help',\n '-n', '--dry-run',\n '-q', '--quiet',\n '-v', '--verbose',\n '-V', '--version',\n '--author',\n '--author-email',\n '--classifiers',\n '--contact',\n '--contact-email',\n '--description',\n '--egg-base',\n '--fullname',\n '--help-commands',\n '--keywords',\n '--licence',\n '--license',\n '--long-description',\n '--maintainer',\n '--maintainer-email',\n '--name',\n '--no-user-cfg',\n '--obsoletes',\n '--platforms',\n '--provides',\n '--requires',\n '--url',\n 'clean',\n 'egg_info',\n 'register',\n 'sdist',\n 'upload',\n )\n\n def is_short_option(argument):\n \"\"\"Check whether a command line argument is a short option.\"\"\"\n return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'\n\n def expand_short_options(argument):\n \"\"\"Expand combined short options into canonical short options.\"\"\"\n return ('-' + char for char in argument[1:])\n\n def argument_without_setup_requirements(argv, i):\n \"\"\"Check whether a command line argument needs setup requirements.\"\"\"\n if argv[i] in no_setup_requires_arguments:\n # Simple case: An argument which is either an option or a command\n # which doesn't need setup requirements.\n return True\n elif (is_short_option(argv[i]) and\n all(option in no_setup_requires_arguments\n for option in expand_short_options(argv[i]))):\n # Not so simple case: Combined short options none of which need\n # setup requirements.\n return True\n elif argv[i - 1:i] == ['--egg-base']:\n # Tricky case: --egg-info takes an argument which should not make\n # us use setup_requires (defeating the purpose of this code).\n return True\n else:\n return False\n\n if all(argument_without_setup_requirements(argv, i)\n for i in range(1, len(argv))):\n return {\n \"cmdclass\": {\n \"build\": DummyBuild,\n \"install\": DummyInstall,\n \"test\": DummyPyTest,\n }\n }\n else:\n cffi_modules = [\n \"src/_cffi_src/build_openssl.py:ffi\",\n \"src/_cffi_src/build_constant_time.py:ffi\",\n \"src/_cffi_src/build_padding.py:ffi\",\n ]\n if cc_is_available():\n cffi_modules.append(\"src/_cffi_src/build_commoncrypto.py:ffi\")\n\n return {\n \"setup_requires\": setup_requirements,\n \"cmdclass\": {\n \"test\": PyTest,\n },\n \"cffi_modules\": cffi_modules\n }\n\n\nsetup_requires_error = (\"Requested setup command that needs 'setup_requires' \"\n \"while command line arguments implied a side effect \"\n \"free command or option.\")\n\n\nclass DummyBuild(build):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py build`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyInstall(install):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py install``\n as one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyPyTest(test):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py test`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run_tests(self):\n raise RuntimeError(setup_requires_error)\n\n\nwith open(os.path.join(base_dir, \"README.rst\")) as f:\n 
long_description = f.read()\n\n\nsetup(\n name=about[\"__title__\"],\n version=about[\"__version__\"],\n\n description=about[\"__summary__\"],\n long_description=long_description,\n license=about[\"__license__\"],\n url=about[\"__uri__\"],\n\n author=about[\"__author__\"],\n author_email=about[\"__email__\"],\n\n classifiers=[\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Security :: Cryptography\",\n ],\n\n package_dir={\"\": \"src\"},\n packages=find_packages(\n where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\", \"tests\", \"tests.*\"]\n ),\n include_package_data=True,\n\n install_requires=requirements,\n tests_require=test_requirements,\n\n # for cffi\n zip_safe=False,\n ext_package=\"cryptography.hazmat.bindings\",\n entry_points={\n \"cryptography.backends\": backends,\n },\n **keywords_with_side_effects(sys.argv)\n)\n", "path": "setup.py"}]}
| 3,500 | 93 |
gh_patches_debug_21214
|
rasdani/github-patches
|
git_diff
|
jupyterhub__jupyterhub-893
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
notebook_dir ~ expands incorrectly
**How to reproduce the issue**
Configure jupyterhub 0.7.0b1 with:
```
c.SudoSpawner.sudospawner_path = "/some/where/bin/sudospawner"
c.SudoSpawner.sudo_args = ['-nH']
c.Spawner.notebook_dir = '~/notebooks'
```
Try to log in. Notebook server startup logs:
```
[C 2016-11-21 12:32:15.936 SingleUserNotebookApp application:91] No such notebook dir: '/home/pparente/~/notebooks'
```
**What you expected to happen**
Path should be expanded properly.
**What actually happens**
Path is expanded but also gets the ~ part tacked back on.
**Share what version of JupyterHub you are using**
0.7.0b1
I put a print in the jupyterhub-singleuser script and confirmed that it is receiving `--notebook-dir="~/notebooks"` in `sys.argv`, so the incorrect expansion appears to happen somewhere after that.
--- END ISSUE ---
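For context, the mis-expansion described in the issue is easy to reproduce with plain `os.path` calls; a minimal sketch, assuming the process working directory is the user's home (e.g. `/home/pparente`):

```python
import os

# Hypothetical value handed to the single-user server, as reported above.
notebook_dir = "~/notebooks"

# Correct handling: expand the user marker first, then absolutize.
good = os.path.abspath(os.path.expanduser(notebook_dir))

# Buggy handling: absolutizing without expanding keeps the literal '~'
# segment, so with the working directory set to the user's home this
# yields something like '/home/pparente/~/notebooks'.
bad = os.path.abspath(notebook_dir)

print(good)
print(bad)
```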
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jupyterhub/singleuser.py`
Content:
```
1 #!/usr/bin/env python
2 """Extend regular notebook server to be aware of multiuser things."""
3
4 # Copyright (c) Jupyter Development Team.
5 # Distributed under the terms of the Modified BSD License.
6
7 import os
8
9 from jinja2 import ChoiceLoader, FunctionLoader
10
11 from tornado import ioloop
12 from textwrap import dedent
13
14 try:
15 import notebook
16 except ImportError:
17 raise ImportError("JupyterHub single-user server requires notebook >= 4.0")
18
19 from traitlets import (
20 Bool,
21 Unicode,
22 CUnicode,
23 default,
24 validate,
25 )
26
27 from notebook.notebookapp import (
28 NotebookApp,
29 aliases as notebook_aliases,
30 flags as notebook_flags,
31 )
32 from notebook.auth.login import LoginHandler
33 from notebook.auth.logout import LogoutHandler
34
35 from jupyterhub import __version__
36 from .services.auth import HubAuth, HubAuthenticated
37 from .utils import url_path_join
38
39 # Authenticate requests with the Hub
40
41 class HubAuthenticatedHandler(HubAuthenticated):
42 """Class we are going to patch-in for authentication with the Hub"""
43 @property
44 def hub_auth(self):
45 return self.settings['hub_auth']
46 @property
47 def hub_users(self):
48 return { self.settings['user'] }
49
50
51 class JupyterHubLoginHandler(LoginHandler):
52 """LoginHandler that hooks up Hub authentication"""
53 @staticmethod
54 def login_available(settings):
55 return True
56
57 @staticmethod
58 def get_user(handler):
59 """alternative get_current_user to query the Hub"""
60 # patch in HubAuthenticated class for querying the Hub for cookie authentication
61 name = 'NowHubAuthenticated'
62 if handler.__class__.__name__ != name:
63 handler.__class__ = type(name, (HubAuthenticatedHandler, handler.__class__), {})
64 return handler.get_current_user()
65
66
67 class JupyterHubLogoutHandler(LogoutHandler):
68 def get(self):
69 self.redirect(
70 self.settings['hub_host'] +
71 url_path_join(self.settings['hub_prefix'], 'logout'))
72
73
74 # register new hub related command-line aliases
75 aliases = dict(notebook_aliases)
76 aliases.update({
77 'user' : 'SingleUserNotebookApp.user',
78 'cookie-name': 'HubAuth.cookie_name',
79 'hub-prefix': 'SingleUserNotebookApp.hub_prefix',
80 'hub-host': 'SingleUserNotebookApp.hub_host',
81 'hub-api-url': 'SingleUserNotebookApp.hub_api_url',
82 'base-url': 'SingleUserNotebookApp.base_url',
83 })
84 flags = dict(notebook_flags)
85 flags.update({
86 'disable-user-config': ({
87 'SingleUserNotebookApp': {
88 'disable_user_config': True
89 }
90 }, "Disable user-controlled configuration of the notebook server.")
91 })
92
93 page_template = """
94 {% extends "templates/page.html" %}
95
96 {% block header_buttons %}
97 {{super()}}
98
99 <a href='{{hub_control_panel_url}}'
100 class='btn btn-default btn-sm navbar-btn pull-right'
101 style='margin-right: 4px; margin-left: 2px;'
102 >
103 Control Panel</a>
104 {% endblock %}
105 {% block logo %}
106 <img src='{{logo_url}}' alt='Jupyter Notebook'/>
107 {% endblock logo %}
108 """
109
110 def _exclude_home(path_list):
111 """Filter out any entries in a path list that are in my home directory.
112
113 Used to disable per-user configuration.
114 """
115 home = os.path.expanduser('~')
116 for p in path_list:
117 if not p.startswith(home):
118 yield p
119
120 class SingleUserNotebookApp(NotebookApp):
121 """A Subclass of the regular NotebookApp that is aware of the parent multiuser context."""
122 description = dedent("""
123 Single-user server for JupyterHub. Extends the Jupyter Notebook server.
124
125 Meant to be invoked by JupyterHub Spawners, and not directly.
126 """)
127
128 examples = ""
129 subcommands = {}
130 version = __version__
131 classes = NotebookApp.classes + [HubAuth]
132
133 user = CUnicode(config=True)
134 def _user_changed(self, name, old, new):
135 self.log.name = new
136 hub_prefix = Unicode().tag(config=True)
137 hub_host = Unicode().tag(config=True)
138 hub_api_url = Unicode().tag(config=True)
139 aliases = aliases
140 flags = flags
141 open_browser = False
142 trust_xheaders = True
143 login_handler_class = JupyterHubLoginHandler
144 logout_handler_class = JupyterHubLogoutHandler
145 port_retries = 0 # disable port-retries, since the Spawner will tell us what port to use
146
147 disable_user_config = Bool(False,
148 help="""Disable user configuration of single-user server.
149
150 Prevents user-writable files that normally configure the single-user server
151 from being loaded, ensuring admins have full control of configuration.
152 """
153 ).tag(config=True)
154
155 @default('log_datefmt')
156 def _log_datefmt_default(self):
157 """Exclude date from default date format"""
158 return "%Y-%m-%d %H:%M:%S"
159
160 @default('log_format')
161 def _log_format_default(self):
162 """override default log format to include time"""
163 return "%(color)s[%(levelname)1.1s %(asctime)s.%(msecs).03d %(name)s %(module)s:%(lineno)d]%(end_color)s %(message)s"
164
165 def _confirm_exit(self):
166 # disable the exit confirmation for background notebook processes
167 ioloop.IOLoop.instance().stop()
168
169 def migrate_config(self):
170 if self.disable_user_config:
171 # disable config-migration when user config is disabled
172 return
173 else:
174 super(SingleUserNotebookApp, self).migrate_config()
175
176 @property
177 def config_file_paths(self):
178 path = super(SingleUserNotebookApp, self).config_file_paths
179
180 if self.disable_user_config:
181 # filter out user-writable config dirs if user config is disabled
182 path = list(_exclude_home(path))
183 return path
184
185 @property
186 def nbextensions_path(self):
187 path = super(SingleUserNotebookApp, self).nbextensions_path
188
189 if self.disable_user_config:
190 path = list(_exclude_home(path))
191 return path
192
193 @validate('static_custom_path')
194 def _validate_static_custom_path(self, proposal):
195 path = proposal['value']
196 if self.disable_user_config:
197 path = list(_exclude_home(path))
198 return path
199
200 def start(self):
201 super(SingleUserNotebookApp, self).start()
202
203 def init_hub_auth(self):
204 if not os.environ.get('JPY_API_TOKEN'):
205 self.exit("JPY_API_TOKEN env is required to run jupyterhub-singleuser. Did you launch it manually?")
206 self.hub_auth = HubAuth(
207 parent=self,
208 api_token=os.environ.pop('JPY_API_TOKEN'),
209 api_url=self.hub_api_url,
210 )
211
212 def init_webapp(self):
213 # load the hub related settings into the tornado settings dict
214 self.init_hub_auth()
215 s = self.tornado_settings
216 s['user'] = self.user
217 s['hub_prefix'] = self.hub_prefix
218 s['hub_host'] = self.hub_host
219 s['hub_auth'] = self.hub_auth
220 s['login_url'] = self.hub_host + self.hub_prefix
221 s['csp_report_uri'] = self.hub_host + url_path_join(self.hub_prefix, 'security/csp-report')
222 super(SingleUserNotebookApp, self).init_webapp()
223 self.patch_templates()
224
225 def patch_templates(self):
226 """Patch page templates to add Hub-related buttons"""
227
228 self.jinja_template_vars['logo_url'] = self.hub_host + url_path_join(self.hub_prefix, 'logo')
229 self.jinja_template_vars['hub_host'] = self.hub_host
230 self.jinja_template_vars['hub_prefix'] = self.hub_prefix
231 env = self.web_app.settings['jinja2_env']
232
233 env.globals['hub_control_panel_url'] = \
234 self.hub_host + url_path_join(self.hub_prefix, 'home')
235
236 # patch jinja env loading to modify page template
237 def get_page(name):
238 if name == 'page.html':
239 return page_template
240
241 orig_loader = env.loader
242 env.loader = ChoiceLoader([
243 FunctionLoader(get_page),
244 orig_loader,
245 ])
246
247
248 def main(argv=None):
249 return SingleUserNotebookApp.launch_instance(argv)
250
251
252 if __name__ == "__main__":
253 main()
254
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/jupyterhub/singleuser.py b/jupyterhub/singleuser.py
--- a/jupyterhub/singleuser.py
+++ b/jupyterhub/singleuser.py
@@ -22,6 +22,7 @@
CUnicode,
default,
validate,
+ TraitError,
)
from notebook.notebookapp import (
@@ -151,7 +152,23 @@
from being loaded, ensuring admins have full control of configuration.
"""
).tag(config=True)
-
+
+ @validate('notebook_dir')
+ def _notebook_dir_validate(self, proposal):
+ value = os.path.expanduser(proposal['value'])
+ # Strip any trailing slashes
+ # *except* if it's root
+ _, path = os.path.splitdrive(value)
+ if path == os.sep:
+ return value
+ value = value.rstrip(os.sep)
+ if not os.path.isabs(value):
+ # If we receive a non-absolute path, make it absolute.
+ value = os.path.abspath(value)
+ if not os.path.isdir(value):
+ raise TraitError("No such notebook dir: %r" % value)
+ return value
+
@default('log_datefmt')
def _log_datefmt_default(self):
"""Exclude date from default date format"""
|
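For readers skimming the hunk above, the normalization it introduces can be sketched as a standalone function; the real validator is a traitlets `@validate('notebook_dir')` hook and raises `TraitError`, while this simplified mirror raises `ValueError`:

```python
import os

def normalize_notebook_dir(value: str) -> str:
    """Simplified mirror of the validator added in the patch: expand '~',
    strip a trailing separator (except for the filesystem root), absolutize
    relative paths, and require that the directory exists."""
    value = os.path.expanduser(value)
    _, path = os.path.splitdrive(value)
    if path == os.sep:
        return value
    value = value.rstrip(os.sep)
    if not os.path.isabs(value):
        value = os.path.abspath(value)
    if not os.path.isdir(value):
        raise ValueError("No such notebook dir: %r" % value)
    return value
```

With this normalization, `--notebook-dir="~/notebooks"` resolves to `/home/<user>/notebooks` before the existence check runs, instead of the `/home/<user>/~/notebooks` path seen in the log above.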
{"golden_diff": "diff --git a/jupyterhub/singleuser.py b/jupyterhub/singleuser.py\n--- a/jupyterhub/singleuser.py\n+++ b/jupyterhub/singleuser.py\n@@ -22,6 +22,7 @@\n CUnicode,\n default,\n validate,\n+ TraitError,\n )\n \n from notebook.notebookapp import (\n@@ -151,7 +152,23 @@\n from being loaded, ensuring admins have full control of configuration.\n \"\"\"\n ).tag(config=True)\n- \n+\n+ @validate('notebook_dir')\n+ def _notebook_dir_validate(self, proposal):\n+ value = os.path.expanduser(proposal['value'])\n+ # Strip any trailing slashes\n+ # *except* if it's root\n+ _, path = os.path.splitdrive(value)\n+ if path == os.sep:\n+ return value\n+ value = value.rstrip(os.sep)\n+ if not os.path.isabs(value):\n+ # If we receive a non-absolute path, make it absolute.\n+ value = os.path.abspath(value)\n+ if not os.path.isdir(value):\n+ raise TraitError(\"No such notebook dir: %r\" % value)\n+ return value\n+\n @default('log_datefmt')\n def _log_datefmt_default(self):\n \"\"\"Exclude date from default date format\"\"\"\n", "issue": "notebooks_dir ~ expands incorrectly\n**How to reproduce the issue**\r\n\r\nConfigure jupyterhub 0.7.0b1 with:\r\n\r\n```\r\nc.SudoSpawner.sudospawner_path = \"/some/where/bin/sudospawner\"\r\nc.SudoSpawner.sudo_args = ['-nH']\r\nc.Spawner.notebook_dir = '~/notebooks'\r\n```\r\n\r\nTry to login. Notebook server startup logs:\r\n\r\n```\r\n[C 2016-11-21 12:32:15.936 SingleUserNotebookApp application:91] No such notebook dir: '/home/pparente/~/notebooks'\r\n```\r\n\r\n**What you expected to happen**\r\n\r\nPath should be expanded properly.\r\n\r\n**What actually happens**\r\n\r\nPath is expanded but also gets the ~ part tacked back on.\r\n\r\n**Share what version of JupyterHub you are using**\r\n\r\n0.7.0b1\r\n\r\nI put a print in the jupyterhub-singleuser script and confirmed that it is receiving `--notebook-dir=\"~/notebooks\"` as in `sys.argv`. 
So it appears the incorrect expansion is happening somewhere after that.\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"Extend regular notebook server to be aware of multiuser things.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport os\n\nfrom jinja2 import ChoiceLoader, FunctionLoader\n\nfrom tornado import ioloop\nfrom textwrap import dedent\n\ntry:\n import notebook\nexcept ImportError:\n raise ImportError(\"JupyterHub single-user server requires notebook >= 4.0\")\n\nfrom traitlets import (\n Bool,\n Unicode,\n CUnicode,\n default,\n validate,\n)\n\nfrom notebook.notebookapp import (\n NotebookApp,\n aliases as notebook_aliases,\n flags as notebook_flags,\n)\nfrom notebook.auth.login import LoginHandler\nfrom notebook.auth.logout import LogoutHandler\n\nfrom jupyterhub import __version__\nfrom .services.auth import HubAuth, HubAuthenticated\nfrom .utils import url_path_join\n\n# Authenticate requests with the Hub\n\nclass HubAuthenticatedHandler(HubAuthenticated):\n \"\"\"Class we are going to patch-in for authentication with the Hub\"\"\"\n @property\n def hub_auth(self):\n return self.settings['hub_auth']\n @property\n def hub_users(self):\n return { self.settings['user'] }\n\n\nclass JupyterHubLoginHandler(LoginHandler):\n \"\"\"LoginHandler that hooks up Hub authentication\"\"\"\n @staticmethod\n def login_available(settings):\n return True\n\n @staticmethod\n def get_user(handler):\n \"\"\"alternative get_current_user to query the Hub\"\"\"\n # patch in HubAuthenticated class for querying the Hub for cookie authentication\n name = 'NowHubAuthenticated'\n if handler.__class__.__name__ != name:\n handler.__class__ = type(name, (HubAuthenticatedHandler, handler.__class__), {})\n return handler.get_current_user()\n\n\nclass JupyterHubLogoutHandler(LogoutHandler):\n def get(self):\n self.redirect(\n self.settings['hub_host'] +\n url_path_join(self.settings['hub_prefix'], 'logout'))\n\n\n# register new hub related command-line aliases\naliases = dict(notebook_aliases)\naliases.update({\n 'user' : 'SingleUserNotebookApp.user',\n 'cookie-name': 'HubAuth.cookie_name',\n 'hub-prefix': 'SingleUserNotebookApp.hub_prefix',\n 'hub-host': 'SingleUserNotebookApp.hub_host',\n 'hub-api-url': 'SingleUserNotebookApp.hub_api_url',\n 'base-url': 'SingleUserNotebookApp.base_url',\n})\nflags = dict(notebook_flags)\nflags.update({\n 'disable-user-config': ({\n 'SingleUserNotebookApp': {\n 'disable_user_config': True\n }\n }, \"Disable user-controlled configuration of the notebook server.\")\n})\n\npage_template = \"\"\"\n{% extends \"templates/page.html\" %}\n\n{% block header_buttons %}\n{{super()}}\n\n<a href='{{hub_control_panel_url}}'\n class='btn btn-default btn-sm navbar-btn pull-right'\n style='margin-right: 4px; margin-left: 2px;'\n>\nControl Panel</a>\n{% endblock %}\n{% block logo %}\n<img src='{{logo_url}}' alt='Jupyter Notebook'/>\n{% endblock logo %}\n\"\"\"\n\ndef _exclude_home(path_list):\n \"\"\"Filter out any entries in a path list that are in my home directory.\n\n Used to disable per-user configuration.\n \"\"\"\n home = os.path.expanduser('~')\n for p in path_list:\n if not p.startswith(home):\n yield p\n\nclass SingleUserNotebookApp(NotebookApp):\n \"\"\"A Subclass of the regular NotebookApp that is aware of the parent multiuser context.\"\"\"\n description = dedent(\"\"\"\n Single-user server for JupyterHub. 
Extends the Jupyter Notebook server.\n \n Meant to be invoked by JupyterHub Spawners, and not directly.\n \"\"\")\n \n examples = \"\"\n subcommands = {}\n version = __version__\n classes = NotebookApp.classes + [HubAuth]\n\n user = CUnicode(config=True)\n def _user_changed(self, name, old, new):\n self.log.name = new\n hub_prefix = Unicode().tag(config=True)\n hub_host = Unicode().tag(config=True)\n hub_api_url = Unicode().tag(config=True)\n aliases = aliases\n flags = flags\n open_browser = False\n trust_xheaders = True\n login_handler_class = JupyterHubLoginHandler\n logout_handler_class = JupyterHubLogoutHandler\n port_retries = 0 # disable port-retries, since the Spawner will tell us what port to use\n\n disable_user_config = Bool(False,\n help=\"\"\"Disable user configuration of single-user server.\n\n Prevents user-writable files that normally configure the single-user server\n from being loaded, ensuring admins have full control of configuration.\n \"\"\"\n ).tag(config=True)\n \n @default('log_datefmt')\n def _log_datefmt_default(self):\n \"\"\"Exclude date from default date format\"\"\"\n return \"%Y-%m-%d %H:%M:%S\"\n\n @default('log_format')\n def _log_format_default(self):\n \"\"\"override default log format to include time\"\"\"\n return \"%(color)s[%(levelname)1.1s %(asctime)s.%(msecs).03d %(name)s %(module)s:%(lineno)d]%(end_color)s %(message)s\"\n\n def _confirm_exit(self):\n # disable the exit confirmation for background notebook processes\n ioloop.IOLoop.instance().stop()\n\n def migrate_config(self):\n if self.disable_user_config:\n # disable config-migration when user config is disabled\n return\n else:\n super(SingleUserNotebookApp, self).migrate_config()\n\n @property\n def config_file_paths(self):\n path = super(SingleUserNotebookApp, self).config_file_paths\n\n if self.disable_user_config:\n # filter out user-writable config dirs if user config is disabled\n path = list(_exclude_home(path))\n return path\n\n @property\n def nbextensions_path(self):\n path = super(SingleUserNotebookApp, self).nbextensions_path\n\n if self.disable_user_config:\n path = list(_exclude_home(path))\n return path\n\n @validate('static_custom_path')\n def _validate_static_custom_path(self, proposal):\n path = proposal['value']\n if self.disable_user_config:\n path = list(_exclude_home(path))\n return path\n\n def start(self):\n super(SingleUserNotebookApp, self).start()\n\n def init_hub_auth(self):\n if not os.environ.get('JPY_API_TOKEN'):\n self.exit(\"JPY_API_TOKEN env is required to run jupyterhub-singleuser. 
Did you launch it manually?\")\n self.hub_auth = HubAuth(\n parent=self,\n api_token=os.environ.pop('JPY_API_TOKEN'),\n api_url=self.hub_api_url,\n )\n\n def init_webapp(self):\n # load the hub related settings into the tornado settings dict\n self.init_hub_auth()\n s = self.tornado_settings\n s['user'] = self.user\n s['hub_prefix'] = self.hub_prefix\n s['hub_host'] = self.hub_host\n s['hub_auth'] = self.hub_auth\n s['login_url'] = self.hub_host + self.hub_prefix\n s['csp_report_uri'] = self.hub_host + url_path_join(self.hub_prefix, 'security/csp-report')\n super(SingleUserNotebookApp, self).init_webapp()\n self.patch_templates()\n\n def patch_templates(self):\n \"\"\"Patch page templates to add Hub-related buttons\"\"\"\n\n self.jinja_template_vars['logo_url'] = self.hub_host + url_path_join(self.hub_prefix, 'logo')\n self.jinja_template_vars['hub_host'] = self.hub_host\n self.jinja_template_vars['hub_prefix'] = self.hub_prefix\n env = self.web_app.settings['jinja2_env']\n\n env.globals['hub_control_panel_url'] = \\\n self.hub_host + url_path_join(self.hub_prefix, 'home')\n\n # patch jinja env loading to modify page template\n def get_page(name):\n if name == 'page.html':\n return page_template\n\n orig_loader = env.loader\n env.loader = ChoiceLoader([\n FunctionLoader(get_page),\n orig_loader,\n ])\n\n\ndef main(argv=None):\n return SingleUserNotebookApp.launch_instance(argv)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "jupyterhub/singleuser.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"Extend regular notebook server to be aware of multiuser things.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport os\n\nfrom jinja2 import ChoiceLoader, FunctionLoader\n\nfrom tornado import ioloop\nfrom textwrap import dedent\n\ntry:\n import notebook\nexcept ImportError:\n raise ImportError(\"JupyterHub single-user server requires notebook >= 4.0\")\n\nfrom traitlets import (\n Bool,\n Unicode,\n CUnicode,\n default,\n validate,\n TraitError,\n)\n\nfrom notebook.notebookapp import (\n NotebookApp,\n aliases as notebook_aliases,\n flags as notebook_flags,\n)\nfrom notebook.auth.login import LoginHandler\nfrom notebook.auth.logout import LogoutHandler\n\nfrom jupyterhub import __version__\nfrom .services.auth import HubAuth, HubAuthenticated\nfrom .utils import url_path_join\n\n# Authenticate requests with the Hub\n\nclass HubAuthenticatedHandler(HubAuthenticated):\n \"\"\"Class we are going to patch-in for authentication with the Hub\"\"\"\n @property\n def hub_auth(self):\n return self.settings['hub_auth']\n @property\n def hub_users(self):\n return { self.settings['user'] }\n\n\nclass JupyterHubLoginHandler(LoginHandler):\n \"\"\"LoginHandler that hooks up Hub authentication\"\"\"\n @staticmethod\n def login_available(settings):\n return True\n\n @staticmethod\n def get_user(handler):\n \"\"\"alternative get_current_user to query the Hub\"\"\"\n # patch in HubAuthenticated class for querying the Hub for cookie authentication\n name = 'NowHubAuthenticated'\n if handler.__class__.__name__ != name:\n handler.__class__ = type(name, (HubAuthenticatedHandler, handler.__class__), {})\n return handler.get_current_user()\n\n\nclass JupyterHubLogoutHandler(LogoutHandler):\n def get(self):\n self.redirect(\n self.settings['hub_host'] +\n url_path_join(self.settings['hub_prefix'], 'logout'))\n\n\n# register new hub related command-line aliases\naliases = dict(notebook_aliases)\naliases.update({\n 'user' : 
'SingleUserNotebookApp.user',\n 'cookie-name': 'HubAuth.cookie_name',\n 'hub-prefix': 'SingleUserNotebookApp.hub_prefix',\n 'hub-host': 'SingleUserNotebookApp.hub_host',\n 'hub-api-url': 'SingleUserNotebookApp.hub_api_url',\n 'base-url': 'SingleUserNotebookApp.base_url',\n})\nflags = dict(notebook_flags)\nflags.update({\n 'disable-user-config': ({\n 'SingleUserNotebookApp': {\n 'disable_user_config': True\n }\n }, \"Disable user-controlled configuration of the notebook server.\")\n})\n\npage_template = \"\"\"\n{% extends \"templates/page.html\" %}\n\n{% block header_buttons %}\n{{super()}}\n\n<a href='{{hub_control_panel_url}}'\n class='btn btn-default btn-sm navbar-btn pull-right'\n style='margin-right: 4px; margin-left: 2px;'\n>\nControl Panel</a>\n{% endblock %}\n{% block logo %}\n<img src='{{logo_url}}' alt='Jupyter Notebook'/>\n{% endblock logo %}\n\"\"\"\n\ndef _exclude_home(path_list):\n \"\"\"Filter out any entries in a path list that are in my home directory.\n\n Used to disable per-user configuration.\n \"\"\"\n home = os.path.expanduser('~')\n for p in path_list:\n if not p.startswith(home):\n yield p\n\nclass SingleUserNotebookApp(NotebookApp):\n \"\"\"A Subclass of the regular NotebookApp that is aware of the parent multiuser context.\"\"\"\n description = dedent(\"\"\"\n Single-user server for JupyterHub. Extends the Jupyter Notebook server.\n \n Meant to be invoked by JupyterHub Spawners, and not directly.\n \"\"\")\n \n examples = \"\"\n subcommands = {}\n version = __version__\n classes = NotebookApp.classes + [HubAuth]\n\n user = CUnicode(config=True)\n def _user_changed(self, name, old, new):\n self.log.name = new\n hub_prefix = Unicode().tag(config=True)\n hub_host = Unicode().tag(config=True)\n hub_api_url = Unicode().tag(config=True)\n aliases = aliases\n flags = flags\n open_browser = False\n trust_xheaders = True\n login_handler_class = JupyterHubLoginHandler\n logout_handler_class = JupyterHubLogoutHandler\n port_retries = 0 # disable port-retries, since the Spawner will tell us what port to use\n\n disable_user_config = Bool(False,\n help=\"\"\"Disable user configuration of single-user server.\n\n Prevents user-writable files that normally configure the single-user server\n from being loaded, ensuring admins have full control of configuration.\n \"\"\"\n ).tag(config=True)\n\n @validate('notebook_dir')\n def _notebook_dir_validate(self, proposal):\n value = os.path.expanduser(proposal['value'])\n # Strip any trailing slashes\n # *except* if it's root\n _, path = os.path.splitdrive(value)\n if path == os.sep:\n return value\n value = value.rstrip(os.sep)\n if not os.path.isabs(value):\n # If we receive a non-absolute path, make it absolute.\n value = os.path.abspath(value)\n if not os.path.isdir(value):\n raise TraitError(\"No such notebook dir: %r\" % value)\n return value\n\n @default('log_datefmt')\n def _log_datefmt_default(self):\n \"\"\"Exclude date from default date format\"\"\"\n return \"%Y-%m-%d %H:%M:%S\"\n\n @default('log_format')\n def _log_format_default(self):\n \"\"\"override default log format to include time\"\"\"\n return \"%(color)s[%(levelname)1.1s %(asctime)s.%(msecs).03d %(name)s %(module)s:%(lineno)d]%(end_color)s %(message)s\"\n\n def _confirm_exit(self):\n # disable the exit confirmation for background notebook processes\n ioloop.IOLoop.instance().stop()\n\n def migrate_config(self):\n if self.disable_user_config:\n # disable config-migration when user config is disabled\n return\n else:\n super(SingleUserNotebookApp, 
self).migrate_config()\n\n @property\n def config_file_paths(self):\n path = super(SingleUserNotebookApp, self).config_file_paths\n\n if self.disable_user_config:\n # filter out user-writable config dirs if user config is disabled\n path = list(_exclude_home(path))\n return path\n\n @property\n def nbextensions_path(self):\n path = super(SingleUserNotebookApp, self).nbextensions_path\n\n if self.disable_user_config:\n path = list(_exclude_home(path))\n return path\n\n @validate('static_custom_path')\n def _validate_static_custom_path(self, proposal):\n path = proposal['value']\n if self.disable_user_config:\n path = list(_exclude_home(path))\n return path\n\n def start(self):\n super(SingleUserNotebookApp, self).start()\n\n def init_hub_auth(self):\n if not os.environ.get('JPY_API_TOKEN'):\n self.exit(\"JPY_API_TOKEN env is required to run jupyterhub-singleuser. Did you launch it manually?\")\n self.hub_auth = HubAuth(\n parent=self,\n api_token=os.environ.pop('JPY_API_TOKEN'),\n api_url=self.hub_api_url,\n )\n\n def init_webapp(self):\n # load the hub related settings into the tornado settings dict\n self.init_hub_auth()\n s = self.tornado_settings\n s['user'] = self.user\n s['hub_prefix'] = self.hub_prefix\n s['hub_host'] = self.hub_host\n s['hub_auth'] = self.hub_auth\n s['login_url'] = self.hub_host + self.hub_prefix\n s['csp_report_uri'] = self.hub_host + url_path_join(self.hub_prefix, 'security/csp-report')\n super(SingleUserNotebookApp, self).init_webapp()\n self.patch_templates()\n\n def patch_templates(self):\n \"\"\"Patch page templates to add Hub-related buttons\"\"\"\n\n self.jinja_template_vars['logo_url'] = self.hub_host + url_path_join(self.hub_prefix, 'logo')\n self.jinja_template_vars['hub_host'] = self.hub_host\n self.jinja_template_vars['hub_prefix'] = self.hub_prefix\n env = self.web_app.settings['jinja2_env']\n\n env.globals['hub_control_panel_url'] = \\\n self.hub_host + url_path_join(self.hub_prefix, 'home')\n\n # patch jinja env loading to modify page template\n def get_page(name):\n if name == 'page.html':\n return page_template\n\n orig_loader = env.loader\n env.loader = ChoiceLoader([\n FunctionLoader(get_page),\n orig_loader,\n ])\n\n\ndef main(argv=None):\n return SingleUserNotebookApp.launch_instance(argv)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "jupyterhub/singleuser.py"}]}
| 2,993 | 290 |
gh_patches_debug_39330
|
rasdani/github-patches
|
git_diff
|
nonebot__nonebot2-546
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Feature: respond to nudge (poke) events
**Did you run into a problem in use that calls for a new feature? Please describe:**
It is already possible to check `event.type` to tell whether an event is a nudge, but I think adding full support for nudge events would still be better.
**Describe the feature you need:**
Support responding to nudge events
--- END ISSUE ---
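For context, a nudge arrives from mirai-api-http as a payload whose `type` is `"NudgeEvent"`. Because the `Event.new` factory shown below picks a subclass purely by matching the class name against that string, an unknown type falls back to the generic `Event`, which is why plugins currently have to inspect `event.type` by hand. A rough illustration of the payload shape and the manual check (all field values are invented):

```python
# Roughly what mirai-api-http delivers for a nudge (values are invented):
nudge_payload = {
    "type": "NudgeEvent",
    "fromId": 123456789,        # who poked
    "target": 987654321,        # who got poked (the bot, here)
    "subject": {"id": 1111111, "kind": "Group"},
    "action": "poked",
    "suffix": "",
}

# Pre-patch, Event.new(nudge_payload) finds no subclass named 'NudgeEvent'
# and returns a bare Event, so plugins must compare the type string themselves:
is_nudge = nudge_payload["type"] == "NudgeEvent"
print(is_nudge)
```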
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/base.py`
Content:
```
1 import json
2 from enum import Enum
3 from typing import Any, Dict, Optional, Type
4
5 from pydantic import BaseModel, Field, ValidationError
6 from typing_extensions import Literal
7
8 from nonebot.adapters import Event as BaseEvent
9 from nonebot.adapters import Message as BaseMessage
10 from nonebot.log import logger
11 from nonebot.typing import overrides
12 from nonebot.utils import escape_tag
13
14
15 class UserPermission(str, Enum):
16 """
17 :说明:
18
19 用户权限枚举类
20
21 * ``OWNER``: 群主
22 * ``ADMINISTRATOR``: 群管理
23 * ``MEMBER``: 普通群成员
24 """
25 OWNER = 'OWNER'
26 ADMINISTRATOR = 'ADMINISTRATOR'
27 MEMBER = 'MEMBER'
28
29
30 class GroupInfo(BaseModel):
31 id: int
32 name: str
33 permission: UserPermission
34
35
36 class GroupChatInfo(BaseModel):
37 id: int
38 name: str = Field(alias='memberName')
39 permission: UserPermission
40 group: GroupInfo
41
42
43 class PrivateChatInfo(BaseModel):
44 id: int
45 nickname: str
46 remark: str
47
48
49 class Event(BaseEvent):
50 """
51 mirai-api-http 协议事件,字段与 mirai-api-http 一致。各事件字段参考 `mirai-api-http 事件类型`_
52
53 .. _mirai-api-http 事件类型:
54 https://github.com/project-mirai/mirai-api-http/blob/master/docs/EventType.md
55 """
56 self_id: int
57 type: str
58
59 @classmethod
60 def new(cls, data: Dict[str, Any]) -> "Event":
61 """
62 此事件类的工厂函数, 能够通过事件数据选择合适的子类进行序列化
63 """
64 type = data['type']
65
66 def all_subclasses(cls: Type[Event]):
67 return set(cls.__subclasses__()).union(
68 [s for c in cls.__subclasses__() for s in all_subclasses(c)])
69
70 event_class: Optional[Type[Event]] = None
71 for subclass in all_subclasses(cls):
72 if subclass.__name__ != type:
73 continue
74 event_class = subclass
75
76 if event_class is None:
77 return Event.parse_obj(data)
78
79 while event_class and issubclass(event_class, Event):
80 try:
81 return event_class.parse_obj(data)
82 except ValidationError as e:
83 logger.info(
84 f'Failed to parse {data} to class {event_class.__name__}: '
85 f'{e.errors()!r}. Fallback to parent class.')
86 event_class = event_class.__base__ # type: ignore
87
88 raise ValueError(f'Failed to serialize {data}.')
89
90 @overrides(BaseEvent)
91 def get_type(self) -> Literal["message", "notice", "request", "meta_event"]:
92 from . import message, meta, notice, request
93 if isinstance(self, message.MessageEvent):
94 return 'message'
95 elif isinstance(self, notice.NoticeEvent):
96 return 'notice'
97 elif isinstance(self, request.RequestEvent):
98 return 'request'
99 else:
100 return 'meta_event'
101
102 @overrides(BaseEvent)
103 def get_event_name(self) -> str:
104 return self.type
105
106 @overrides(BaseEvent)
107 def get_event_description(self) -> str:
108 return escape_tag(str(self.normalize_dict()))
109
110 @overrides(BaseEvent)
111 def get_message(self) -> BaseMessage:
112 raise ValueError("Event has no message!")
113
114 @overrides(BaseEvent)
115 def get_plaintext(self) -> str:
116 raise ValueError("Event has no message!")
117
118 @overrides(BaseEvent)
119 def get_user_id(self) -> str:
120 raise ValueError("Event has no message!")
121
122 @overrides(BaseEvent)
123 def get_session_id(self) -> str:
124 raise ValueError("Event has no message!")
125
126 @overrides(BaseEvent)
127 def is_tome(self) -> bool:
128 return False
129
130 def normalize_dict(self, **kwargs) -> Dict[str, Any]:
131 """
132 返回可以被json正常反序列化的结构体
133 """
134 return json.loads(self.json(**kwargs))
135
```
Path: `packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/notice.py`
Content:
```
1 from typing import Any, Optional
2
3 from pydantic import Field
4
5 from .base import Event, GroupChatInfo, GroupInfo, UserPermission
6
7
8 class NoticeEvent(Event):
9 """通知事件基类"""
10 pass
11
12
13 class MuteEvent(NoticeEvent):
14 """禁言类事件基类"""
15 operator: GroupChatInfo
16
17
18 class BotMuteEvent(MuteEvent):
19 """Bot被禁言"""
20 pass
21
22
23 class BotUnmuteEvent(MuteEvent):
24 """Bot被取消禁言"""
25 pass
26
27
28 class MemberMuteEvent(MuteEvent):
29 """群成员被禁言事件(该成员不是Bot)"""
30 duration_seconds: int = Field(alias='durationSeconds')
31 member: GroupChatInfo
32 operator: Optional[GroupChatInfo] = None
33
34
35 class MemberUnmuteEvent(MuteEvent):
36 """群成员被取消禁言事件(该成员不是Bot)"""
37 member: GroupChatInfo
38 operator: Optional[GroupChatInfo] = None
39
40
41 class BotJoinGroupEvent(NoticeEvent):
42 """Bot加入了一个新群"""
43 group: GroupInfo
44
45
46 class BotLeaveEventActive(BotJoinGroupEvent):
47 """Bot主动退出一个群"""
48 pass
49
50
51 class BotLeaveEventKick(BotJoinGroupEvent):
52 """Bot被踢出一个群"""
53 pass
54
55
56 class MemberJoinEvent(NoticeEvent):
57 """新人入群的事件"""
58 member: GroupChatInfo
59
60
61 class MemberLeaveEventKick(MemberJoinEvent):
62 """成员被踢出群(该成员不是Bot)"""
63 operator: Optional[GroupChatInfo] = None
64
65
66 class MemberLeaveEventQuit(MemberJoinEvent):
67 """成员主动离群(该成员不是Bot)"""
68 pass
69
70
71 class FriendRecallEvent(NoticeEvent):
72 """好友消息撤回"""
73 author_id: int = Field(alias='authorId')
74 message_id: int = Field(alias='messageId')
75 time: int
76 operator: int
77
78
79 class GroupRecallEvent(FriendRecallEvent):
80 """群消息撤回"""
81 group: GroupInfo
82 operator: Optional[GroupChatInfo] = None
83
84
85 class GroupStateChangeEvent(NoticeEvent):
86 """群变化事件基类"""
87 origin: Any
88 current: Any
89 group: GroupInfo
90 operator: Optional[GroupChatInfo] = None
91
92
93 class GroupNameChangeEvent(GroupStateChangeEvent):
94 """某个群名改变"""
95 origin: str
96 current: str
97
98
99 class GroupEntranceAnnouncementChangeEvent(GroupStateChangeEvent):
100 """某群入群公告改变"""
101 origin: str
102 current: str
103
104
105 class GroupMuteAllEvent(GroupStateChangeEvent):
106 """全员禁言"""
107 origin: bool
108 current: bool
109
110
111 class GroupAllowAnonymousChatEvent(GroupStateChangeEvent):
112 """匿名聊天"""
113 origin: bool
114 current: bool
115
116
117 class GroupAllowConfessTalkEvent(GroupStateChangeEvent):
118 """坦白说"""
119 origin: bool
120 current: bool
121
122
123 class GroupAllowMemberInviteEvent(GroupStateChangeEvent):
124 """允许群员邀请好友加群"""
125 origin: bool
126 current: bool
127
128
129 class MemberStateChangeEvent(NoticeEvent):
130 """群成员变化事件基类"""
131 member: GroupChatInfo
132 operator: Optional[GroupChatInfo] = None
133
134
135 class MemberCardChangeEvent(MemberStateChangeEvent):
136 """群名片改动"""
137 origin: str
138 current: str
139
140
141 class MemberSpecialTitleChangeEvent(MemberStateChangeEvent):
142 """群头衔改动(只有群主有操作限权)"""
143 origin: str
144 current: str
145
146
147 class BotGroupPermissionChangeEvent(MemberStateChangeEvent):
148 """Bot在群里的权限被改变"""
149 origin: UserPermission
150 current: UserPermission
151
152
153 class MemberPermissionChangeEvent(MemberStateChangeEvent):
154 """成员权限改变的事件(该成员不是Bot)"""
155 origin: UserPermission
156 current: UserPermission
157
```
--- END FILES ---
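Worth noting from `base.py` above: `Event.new` dispatches purely on the subclass name matching the incoming `type` field and walks back up the class hierarchy when validation fails, so adding a correctly named model class is all the parsing layer needs. A toy illustration of that pattern (class and field names are made up, pydantic v1 API):

```python
from pydantic import BaseModel, ValidationError

class Toy(BaseModel):
    type: str

class ToyNudge(Toy):
    target: int

def new(data: dict) -> Toy:
    # Same idea as Event.new above: try the subclass whose name matches
    # data['type'], falling back to the parent class on validation errors.
    cls = {c.__name__: c for c in Toy.__subclasses__()}.get(data["type"], Toy)
    while issubclass(cls, Toy):
        try:
            return cls.parse_obj(data)
        except ValidationError:
            cls = cls.__base__
    raise ValueError(data)

print(type(new({"type": "ToyNudge", "target": 1})).__name__)  # ToyNudge
print(type(new({"type": "ToyNudge"})).__name__)               # Toy (fallback)
```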
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/base.py b/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/base.py
--- a/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/base.py
+++ b/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/base.py
@@ -1,15 +1,15 @@
import json
from enum import Enum
-from typing import Any, Dict, Optional, Type
-
-from pydantic import BaseModel, Field, ValidationError
from typing_extensions import Literal
+from typing import Any, Dict, Type, Optional
+
+from pydantic import Field, BaseModel, ValidationError
-from nonebot.adapters import Event as BaseEvent
-from nonebot.adapters import Message as BaseMessage
from nonebot.log import logger
from nonebot.typing import overrides
from nonebot.utils import escape_tag
+from nonebot.adapters import Event as BaseEvent
+from nonebot.adapters import Message as BaseMessage
class UserPermission(str, Enum):
@@ -18,15 +18,28 @@
用户权限枚举类
- * ``OWNER``: 群主
- * ``ADMINISTRATOR``: 群管理
- * ``MEMBER``: 普通群成员
+ * ``OWNER``: 群主
+ * ``ADMINISTRATOR``: 群管理
+ * ``MEMBER``: 普通群成员
"""
OWNER = 'OWNER'
ADMINISTRATOR = 'ADMINISTRATOR'
MEMBER = 'MEMBER'
+class NudgeSubjectKind(str, Enum):
+ """
+ :说明:
+
+ 戳一戳类型枚举类
+
+ * ``Group``: 群
+ * ``Friend``: 好友
+ """
+ Group = 'Group'
+ Friend = 'Friend'
+
+
class GroupInfo(BaseModel):
id: int
name: str
@@ -46,6 +59,11 @@
remark: str
+class NudgeSubject(BaseModel):
+ id: int
+ kind: NudgeSubjectKind
+
+
class Event(BaseEvent):
"""
mirai-api-http 协议事件,字段与 mirai-api-http 一致。各事件字段参考 `mirai-api-http 事件类型`_
@@ -89,7 +107,7 @@
@overrides(BaseEvent)
def get_type(self) -> Literal["message", "notice", "request", "meta_event"]:
- from . import message, meta, notice, request
+ from . import meta, notice, message, request
if isinstance(self, message.MessageEvent):
return 'message'
elif isinstance(self, notice.NoticeEvent):
diff --git a/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/notice.py b/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/notice.py
--- a/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/notice.py
+++ b/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/notice.py
@@ -2,7 +2,7 @@
from pydantic import Field
-from .base import Event, GroupChatInfo, GroupInfo, UserPermission
+from .base import Event, GroupChatInfo, GroupInfo, NudgeSubject, UserPermission
class NoticeEvent(Event):
@@ -154,3 +154,12 @@
"""成员权限改变的事件(该成员不是Bot)"""
origin: UserPermission
current: UserPermission
+
+
+class NudgeEvent(NoticeEvent):
+ """戳一戳触发事件"""
+ from_id: int = Field(alias='fromId')
+ target: int
+ subject: NudgeSubject
+ action: str
+ suffix: str
|
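With the `NudgeEvent` model from the patch in place, a plugin can subscribe through a notice matcher and narrow to nudges with an `isinstance` check instead of comparing raw type strings. A minimal sketch; the matcher and handler names and the reaction itself are illustrative, and the import paths follow the adapter layout shown in this record:

```python
from nonebot import on_notice
from nonebot.adapters.mirai import Bot
from nonebot.adapters.mirai.event.notice import NoticeEvent, NudgeEvent
from nonebot.typing import T_State

nudge_matcher = on_notice()

@nudge_matcher.handle()
async def handle_nudge(bot: Bot, event: NoticeEvent, state: T_State):
    # Skip every notice that is not a nudge.
    if not isinstance(event, NudgeEvent):
        return
    # Only react when the bot itself is the target of the nudge.
    if event.target == event.self_id:
        ...  # e.g. nudge back or send a message via the bot API
```

Because `NudgeEvent` subclasses `NoticeEvent`, `get_type()` now reports `notice`, which is what lets `on_notice()` pick it up.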
{"golden_diff": "diff --git a/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/base.py b/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/base.py\n--- a/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/base.py\n+++ b/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/base.py\n@@ -1,15 +1,15 @@\n import json\n from enum import Enum\n-from typing import Any, Dict, Optional, Type\n-\n-from pydantic import BaseModel, Field, ValidationError\n from typing_extensions import Literal\n+from typing import Any, Dict, Type, Optional\n+\n+from pydantic import Field, BaseModel, ValidationError\n \n-from nonebot.adapters import Event as BaseEvent\n-from nonebot.adapters import Message as BaseMessage\n from nonebot.log import logger\n from nonebot.typing import overrides\n from nonebot.utils import escape_tag\n+from nonebot.adapters import Event as BaseEvent\n+from nonebot.adapters import Message as BaseMessage\n \n \n class UserPermission(str, Enum):\n@@ -18,15 +18,28 @@\n \n \u7528\u6237\u6743\u9650\u679a\u4e3e\u7c7b\n \n- * ``OWNER``: \u7fa4\u4e3b\n- * ``ADMINISTRATOR``: \u7fa4\u7ba1\u7406\n- * ``MEMBER``: \u666e\u901a\u7fa4\u6210\u5458\n+ * ``OWNER``: \u7fa4\u4e3b\n+ * ``ADMINISTRATOR``: \u7fa4\u7ba1\u7406\n+ * ``MEMBER``: \u666e\u901a\u7fa4\u6210\u5458\n \"\"\"\n OWNER = 'OWNER'\n ADMINISTRATOR = 'ADMINISTRATOR'\n MEMBER = 'MEMBER'\n \n \n+class NudgeSubjectKind(str, Enum):\n+ \"\"\"\n+ :\u8bf4\u660e:\n+\n+ \u6233\u4e00\u6233\u7c7b\u578b\u679a\u4e3e\u7c7b\n+\n+ * ``Group``: \u7fa4\n+ * ``Friend``: \u597d\u53cb\n+ \"\"\"\n+ Group = 'Group'\n+ Friend = 'Friend'\n+\n+\n class GroupInfo(BaseModel):\n id: int\n name: str\n@@ -46,6 +59,11 @@\n remark: str\n \n \n+class NudgeSubject(BaseModel):\n+ id: int\n+ kind: NudgeSubjectKind\n+\n+\n class Event(BaseEvent):\n \"\"\"\n mirai-api-http \u534f\u8bae\u4e8b\u4ef6\uff0c\u5b57\u6bb5\u4e0e mirai-api-http \u4e00\u81f4\u3002\u5404\u4e8b\u4ef6\u5b57\u6bb5\u53c2\u8003 `mirai-api-http \u4e8b\u4ef6\u7c7b\u578b`_\n@@ -89,7 +107,7 @@\n \n @overrides(BaseEvent)\n def get_type(self) -> Literal[\"message\", \"notice\", \"request\", \"meta_event\"]:\n- from . import message, meta, notice, request\n+ from . 
import meta, notice, message, request\n if isinstance(self, message.MessageEvent):\n return 'message'\n elif isinstance(self, notice.NoticeEvent):\ndiff --git a/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/notice.py b/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/notice.py\n--- a/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/notice.py\n+++ b/packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/notice.py\n@@ -2,7 +2,7 @@\n \n from pydantic import Field\n \n-from .base import Event, GroupChatInfo, GroupInfo, UserPermission\n+from .base import Event, GroupChatInfo, GroupInfo, NudgeSubject, UserPermission\n \n \n class NoticeEvent(Event):\n@@ -154,3 +154,12 @@\n \"\"\"\u6210\u5458\u6743\u9650\u6539\u53d8\u7684\u4e8b\u4ef6\uff08\u8be5\u6210\u5458\u4e0d\u662fBot\uff09\"\"\"\n origin: UserPermission\n current: UserPermission\n+\n+\n+class NudgeEvent(NoticeEvent):\n+ \"\"\"\u6233\u4e00\u6233\u89e6\u53d1\u4e8b\u4ef6\"\"\"\n+ from_id: int = Field(alias='fromId')\n+ target: int\n+ subject: NudgeSubject\n+ action: str\n+ suffix: str\n", "issue": "Feature: \u5bf9\u6233\u4e00\u6233\u4e8b\u4ef6\u7684\u54cd\u5e94\n**\u662f\u5426\u5728\u4f7f\u7528\u4e2d\u9047\u5230\u67d0\u4e9b\u95ee\u9898\u800c\u9700\u8981\u65b0\u7684\u7279\u6027\uff1f\u8bf7\u63cf\u8ff0\uff1a**\r\n\r\n\u867d\u7136\u53ef\u4ee5\u901a\u8fc7event.type\u6765\u5224\u65ad\u662f\u5426\u662f\u6233\u4e00\u6233\uff0c\u4f46\u52a0\u4e00\u4e2a\u5bf9\u4e8e\u6233\u4e00\u6233\u4e8b\u4ef6\u7684\u5b8c\u6574\u652f\u6301\u6211\u89c9\u5f97\u8fd8\u662f\u66f4\u597d\u4e00\u70b9\r\n\r\n**\u63cf\u8ff0\u4f60\u6240\u9700\u8981\u7684\u7279\u6027\uff1a**\r\n\r\n\u652f\u6301\u5bf9\u6233\u4e00\u6233\u4e8b\u4ef6\u7684\u54cd\u5e94\r\n\nFeature: \u5bf9\u6233\u4e00\u6233\u4e8b\u4ef6\u7684\u54cd\u5e94\n**\u662f\u5426\u5728\u4f7f\u7528\u4e2d\u9047\u5230\u67d0\u4e9b\u95ee\u9898\u800c\u9700\u8981\u65b0\u7684\u7279\u6027\uff1f\u8bf7\u63cf\u8ff0\uff1a**\r\n\r\n\u867d\u7136\u53ef\u4ee5\u901a\u8fc7event.type\u6765\u5224\u65ad\u662f\u5426\u662f\u6233\u4e00\u6233\uff0c\u4f46\u52a0\u4e00\u4e2a\u5bf9\u4e8e\u6233\u4e00\u6233\u4e8b\u4ef6\u7684\u5b8c\u6574\u652f\u6301\u6211\u89c9\u5f97\u8fd8\u662f\u66f4\u597d\u4e00\u70b9\r\n\r\n**\u63cf\u8ff0\u4f60\u6240\u9700\u8981\u7684\u7279\u6027\uff1a**\r\n\r\n\u652f\u6301\u5bf9\u6233\u4e00\u6233\u4e8b\u4ef6\u7684\u54cd\u5e94\r\n\n", "before_files": [{"content": "import json\nfrom enum import Enum\nfrom typing import Any, Dict, Optional, Type\n\nfrom pydantic import BaseModel, Field, ValidationError\nfrom typing_extensions import Literal\n\nfrom nonebot.adapters import Event as BaseEvent\nfrom nonebot.adapters import Message as BaseMessage\nfrom nonebot.log import logger\nfrom nonebot.typing import overrides\nfrom nonebot.utils import escape_tag\n\n\nclass UserPermission(str, Enum):\n \"\"\"\n :\u8bf4\u660e:\n\n \u7528\u6237\u6743\u9650\u679a\u4e3e\u7c7b\n\n * ``OWNER``: \u7fa4\u4e3b\n * ``ADMINISTRATOR``: \u7fa4\u7ba1\u7406\n * ``MEMBER``: \u666e\u901a\u7fa4\u6210\u5458\n \"\"\"\n OWNER = 'OWNER'\n ADMINISTRATOR = 'ADMINISTRATOR'\n MEMBER = 'MEMBER'\n\n\nclass GroupInfo(BaseModel):\n id: int\n name: str\n permission: UserPermission\n\n\nclass GroupChatInfo(BaseModel):\n id: int\n name: str = Field(alias='memberName')\n permission: UserPermission\n group: GroupInfo\n\n\nclass PrivateChatInfo(BaseModel):\n id: int\n nickname: str\n remark: str\n\n\nclass Event(BaseEvent):\n \"\"\"\n mirai-api-http \u534f\u8bae\u4e8b\u4ef6\uff0c\u5b57\u6bb5\u4e0e mirai-api-http 
\u4e00\u81f4\u3002\u5404\u4e8b\u4ef6\u5b57\u6bb5\u53c2\u8003 `mirai-api-http \u4e8b\u4ef6\u7c7b\u578b`_\n\n .. _mirai-api-http \u4e8b\u4ef6\u7c7b\u578b:\n https://github.com/project-mirai/mirai-api-http/blob/master/docs/EventType.md\n \"\"\"\n self_id: int\n type: str\n\n @classmethod\n def new(cls, data: Dict[str, Any]) -> \"Event\":\n \"\"\"\n \u6b64\u4e8b\u4ef6\u7c7b\u7684\u5de5\u5382\u51fd\u6570, \u80fd\u591f\u901a\u8fc7\u4e8b\u4ef6\u6570\u636e\u9009\u62e9\u5408\u9002\u7684\u5b50\u7c7b\u8fdb\u884c\u5e8f\u5217\u5316\n \"\"\"\n type = data['type']\n\n def all_subclasses(cls: Type[Event]):\n return set(cls.__subclasses__()).union(\n [s for c in cls.__subclasses__() for s in all_subclasses(c)])\n\n event_class: Optional[Type[Event]] = None\n for subclass in all_subclasses(cls):\n if subclass.__name__ != type:\n continue\n event_class = subclass\n\n if event_class is None:\n return Event.parse_obj(data)\n\n while event_class and issubclass(event_class, Event):\n try:\n return event_class.parse_obj(data)\n except ValidationError as e:\n logger.info(\n f'Failed to parse {data} to class {event_class.__name__}: '\n f'{e.errors()!r}. Fallback to parent class.')\n event_class = event_class.__base__ # type: ignore\n\n raise ValueError(f'Failed to serialize {data}.')\n\n @overrides(BaseEvent)\n def get_type(self) -> Literal[\"message\", \"notice\", \"request\", \"meta_event\"]:\n from . import message, meta, notice, request\n if isinstance(self, message.MessageEvent):\n return 'message'\n elif isinstance(self, notice.NoticeEvent):\n return 'notice'\n elif isinstance(self, request.RequestEvent):\n return 'request'\n else:\n return 'meta_event'\n\n @overrides(BaseEvent)\n def get_event_name(self) -> str:\n return self.type\n\n @overrides(BaseEvent)\n def get_event_description(self) -> str:\n return escape_tag(str(self.normalize_dict()))\n\n @overrides(BaseEvent)\n def get_message(self) -> BaseMessage:\n raise ValueError(\"Event has no message!\")\n\n @overrides(BaseEvent)\n def get_plaintext(self) -> str:\n raise ValueError(\"Event has no message!\")\n\n @overrides(BaseEvent)\n def get_user_id(self) -> str:\n raise ValueError(\"Event has no message!\")\n\n @overrides(BaseEvent)\n def get_session_id(self) -> str:\n raise ValueError(\"Event has no message!\")\n\n @overrides(BaseEvent)\n def is_tome(self) -> bool:\n return False\n\n def normalize_dict(self, **kwargs) -> Dict[str, Any]:\n \"\"\"\n \u8fd4\u56de\u53ef\u4ee5\u88abjson\u6b63\u5e38\u53cd\u5e8f\u5217\u5316\u7684\u7ed3\u6784\u4f53\n \"\"\"\n return json.loads(self.json(**kwargs))\n", "path": "packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/base.py"}, {"content": "from typing import Any, Optional\n\nfrom pydantic import Field\n\nfrom .base import Event, GroupChatInfo, GroupInfo, UserPermission\n\n\nclass NoticeEvent(Event):\n \"\"\"\u901a\u77e5\u4e8b\u4ef6\u57fa\u7c7b\"\"\"\n pass\n\n\nclass MuteEvent(NoticeEvent):\n \"\"\"\u7981\u8a00\u7c7b\u4e8b\u4ef6\u57fa\u7c7b\"\"\"\n operator: GroupChatInfo\n\n\nclass BotMuteEvent(MuteEvent):\n \"\"\"Bot\u88ab\u7981\u8a00\"\"\"\n pass\n\n\nclass BotUnmuteEvent(MuteEvent):\n \"\"\"Bot\u88ab\u53d6\u6d88\u7981\u8a00\"\"\"\n pass\n\n\nclass MemberMuteEvent(MuteEvent):\n \"\"\"\u7fa4\u6210\u5458\u88ab\u7981\u8a00\u4e8b\u4ef6\uff08\u8be5\u6210\u5458\u4e0d\u662fBot\uff09\"\"\"\n duration_seconds: int = Field(alias='durationSeconds')\n member: GroupChatInfo\n operator: Optional[GroupChatInfo] = None\n\n\nclass MemberUnmuteEvent(MuteEvent):\n 
\"\"\"\u7fa4\u6210\u5458\u88ab\u53d6\u6d88\u7981\u8a00\u4e8b\u4ef6\uff08\u8be5\u6210\u5458\u4e0d\u662fBot\uff09\"\"\"\n member: GroupChatInfo\n operator: Optional[GroupChatInfo] = None\n\n\nclass BotJoinGroupEvent(NoticeEvent):\n \"\"\"Bot\u52a0\u5165\u4e86\u4e00\u4e2a\u65b0\u7fa4\"\"\"\n group: GroupInfo\n\n\nclass BotLeaveEventActive(BotJoinGroupEvent):\n \"\"\"Bot\u4e3b\u52a8\u9000\u51fa\u4e00\u4e2a\u7fa4\"\"\"\n pass\n\n\nclass BotLeaveEventKick(BotJoinGroupEvent):\n \"\"\"Bot\u88ab\u8e22\u51fa\u4e00\u4e2a\u7fa4\"\"\"\n pass\n\n\nclass MemberJoinEvent(NoticeEvent):\n \"\"\"\u65b0\u4eba\u5165\u7fa4\u7684\u4e8b\u4ef6\"\"\"\n member: GroupChatInfo\n\n\nclass MemberLeaveEventKick(MemberJoinEvent):\n \"\"\"\u6210\u5458\u88ab\u8e22\u51fa\u7fa4\uff08\u8be5\u6210\u5458\u4e0d\u662fBot\uff09\"\"\"\n operator: Optional[GroupChatInfo] = None\n\n\nclass MemberLeaveEventQuit(MemberJoinEvent):\n \"\"\"\u6210\u5458\u4e3b\u52a8\u79bb\u7fa4\uff08\u8be5\u6210\u5458\u4e0d\u662fBot\uff09\"\"\"\n pass\n\n\nclass FriendRecallEvent(NoticeEvent):\n \"\"\"\u597d\u53cb\u6d88\u606f\u64a4\u56de\"\"\"\n author_id: int = Field(alias='authorId')\n message_id: int = Field(alias='messageId')\n time: int\n operator: int\n\n\nclass GroupRecallEvent(FriendRecallEvent):\n \"\"\"\u7fa4\u6d88\u606f\u64a4\u56de\"\"\"\n group: GroupInfo\n operator: Optional[GroupChatInfo] = None\n\n\nclass GroupStateChangeEvent(NoticeEvent):\n \"\"\"\u7fa4\u53d8\u5316\u4e8b\u4ef6\u57fa\u7c7b\"\"\"\n origin: Any\n current: Any\n group: GroupInfo\n operator: Optional[GroupChatInfo] = None\n\n\nclass GroupNameChangeEvent(GroupStateChangeEvent):\n \"\"\"\u67d0\u4e2a\u7fa4\u540d\u6539\u53d8\"\"\"\n origin: str\n current: str\n\n\nclass GroupEntranceAnnouncementChangeEvent(GroupStateChangeEvent):\n \"\"\"\u67d0\u7fa4\u5165\u7fa4\u516c\u544a\u6539\u53d8\"\"\"\n origin: str\n current: str\n\n\nclass GroupMuteAllEvent(GroupStateChangeEvent):\n \"\"\"\u5168\u5458\u7981\u8a00\"\"\"\n origin: bool\n current: bool\n\n\nclass GroupAllowAnonymousChatEvent(GroupStateChangeEvent):\n \"\"\"\u533f\u540d\u804a\u5929\"\"\"\n origin: bool\n current: bool\n\n\nclass GroupAllowConfessTalkEvent(GroupStateChangeEvent):\n \"\"\"\u5766\u767d\u8bf4\"\"\"\n origin: bool\n current: bool\n\n\nclass GroupAllowMemberInviteEvent(GroupStateChangeEvent):\n \"\"\"\u5141\u8bb8\u7fa4\u5458\u9080\u8bf7\u597d\u53cb\u52a0\u7fa4\"\"\"\n origin: bool\n current: bool\n\n\nclass MemberStateChangeEvent(NoticeEvent):\n \"\"\"\u7fa4\u6210\u5458\u53d8\u5316\u4e8b\u4ef6\u57fa\u7c7b\"\"\"\n member: GroupChatInfo\n operator: Optional[GroupChatInfo] = None\n\n\nclass MemberCardChangeEvent(MemberStateChangeEvent):\n \"\"\"\u7fa4\u540d\u7247\u6539\u52a8\"\"\"\n origin: str\n current: str\n\n\nclass MemberSpecialTitleChangeEvent(MemberStateChangeEvent):\n \"\"\"\u7fa4\u5934\u8854\u6539\u52a8\uff08\u53ea\u6709\u7fa4\u4e3b\u6709\u64cd\u4f5c\u9650\u6743\uff09\"\"\"\n origin: str\n current: str\n\n\nclass BotGroupPermissionChangeEvent(MemberStateChangeEvent):\n \"\"\"Bot\u5728\u7fa4\u91cc\u7684\u6743\u9650\u88ab\u6539\u53d8\"\"\"\n origin: UserPermission\n current: UserPermission\n\n\nclass MemberPermissionChangeEvent(MemberStateChangeEvent):\n \"\"\"\u6210\u5458\u6743\u9650\u6539\u53d8\u7684\u4e8b\u4ef6\uff08\u8be5\u6210\u5458\u4e0d\u662fBot\uff09\"\"\"\n origin: UserPermission\n current: UserPermission\n", "path": "packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/notice.py"}], "after_files": [{"content": "import json\nfrom enum import Enum\nfrom typing_extensions import Literal\nfrom typing 
import Any, Dict, Type, Optional\n\nfrom pydantic import Field, BaseModel, ValidationError\n\nfrom nonebot.log import logger\nfrom nonebot.typing import overrides\nfrom nonebot.utils import escape_tag\nfrom nonebot.adapters import Event as BaseEvent\nfrom nonebot.adapters import Message as BaseMessage\n\n\nclass UserPermission(str, Enum):\n \"\"\"\n :\u8bf4\u660e:\n\n \u7528\u6237\u6743\u9650\u679a\u4e3e\u7c7b\n\n * ``OWNER``: \u7fa4\u4e3b\n * ``ADMINISTRATOR``: \u7fa4\u7ba1\u7406\n * ``MEMBER``: \u666e\u901a\u7fa4\u6210\u5458\n \"\"\"\n OWNER = 'OWNER'\n ADMINISTRATOR = 'ADMINISTRATOR'\n MEMBER = 'MEMBER'\n\n\nclass NudgeSubjectKind(str, Enum):\n \"\"\"\n :\u8bf4\u660e:\n\n \u6233\u4e00\u6233\u7c7b\u578b\u679a\u4e3e\u7c7b\n\n * ``Group``: \u7fa4\n * ``Friend``: \u597d\u53cb\n \"\"\"\n Group = 'Group'\n Friend = 'Friend'\n\n\nclass GroupInfo(BaseModel):\n id: int\n name: str\n permission: UserPermission\n\n\nclass GroupChatInfo(BaseModel):\n id: int\n name: str = Field(alias='memberName')\n permission: UserPermission\n group: GroupInfo\n\n\nclass PrivateChatInfo(BaseModel):\n id: int\n nickname: str\n remark: str\n\n\nclass NudgeSubject(BaseModel):\n id: int\n kind: NudgeSubjectKind\n\n\nclass Event(BaseEvent):\n \"\"\"\n mirai-api-http \u534f\u8bae\u4e8b\u4ef6\uff0c\u5b57\u6bb5\u4e0e mirai-api-http \u4e00\u81f4\u3002\u5404\u4e8b\u4ef6\u5b57\u6bb5\u53c2\u8003 `mirai-api-http \u4e8b\u4ef6\u7c7b\u578b`_\n\n .. _mirai-api-http \u4e8b\u4ef6\u7c7b\u578b:\n https://github.com/project-mirai/mirai-api-http/blob/master/docs/EventType.md\n \"\"\"\n self_id: int\n type: str\n\n @classmethod\n def new(cls, data: Dict[str, Any]) -> \"Event\":\n \"\"\"\n \u6b64\u4e8b\u4ef6\u7c7b\u7684\u5de5\u5382\u51fd\u6570, \u80fd\u591f\u901a\u8fc7\u4e8b\u4ef6\u6570\u636e\u9009\u62e9\u5408\u9002\u7684\u5b50\u7c7b\u8fdb\u884c\u5e8f\u5217\u5316\n \"\"\"\n type = data['type']\n\n def all_subclasses(cls: Type[Event]):\n return set(cls.__subclasses__()).union(\n [s for c in cls.__subclasses__() for s in all_subclasses(c)])\n\n event_class: Optional[Type[Event]] = None\n for subclass in all_subclasses(cls):\n if subclass.__name__ != type:\n continue\n event_class = subclass\n\n if event_class is None:\n return Event.parse_obj(data)\n\n while event_class and issubclass(event_class, Event):\n try:\n return event_class.parse_obj(data)\n except ValidationError as e:\n logger.info(\n f'Failed to parse {data} to class {event_class.__name__}: '\n f'{e.errors()!r}. Fallback to parent class.')\n event_class = event_class.__base__ # type: ignore\n\n raise ValueError(f'Failed to serialize {data}.')\n\n @overrides(BaseEvent)\n def get_type(self) -> Literal[\"message\", \"notice\", \"request\", \"meta_event\"]:\n from . 
import meta, notice, message, request\n if isinstance(self, message.MessageEvent):\n return 'message'\n elif isinstance(self, notice.NoticeEvent):\n return 'notice'\n elif isinstance(self, request.RequestEvent):\n return 'request'\n else:\n return 'meta_event'\n\n @overrides(BaseEvent)\n def get_event_name(self) -> str:\n return self.type\n\n @overrides(BaseEvent)\n def get_event_description(self) -> str:\n return escape_tag(str(self.normalize_dict()))\n\n @overrides(BaseEvent)\n def get_message(self) -> BaseMessage:\n raise ValueError(\"Event has no message!\")\n\n @overrides(BaseEvent)\n def get_plaintext(self) -> str:\n raise ValueError(\"Event has no message!\")\n\n @overrides(BaseEvent)\n def get_user_id(self) -> str:\n raise ValueError(\"Event has no message!\")\n\n @overrides(BaseEvent)\n def get_session_id(self) -> str:\n raise ValueError(\"Event has no message!\")\n\n @overrides(BaseEvent)\n def is_tome(self) -> bool:\n return False\n\n def normalize_dict(self, **kwargs) -> Dict[str, Any]:\n \"\"\"\n \u8fd4\u56de\u53ef\u4ee5\u88abjson\u6b63\u5e38\u53cd\u5e8f\u5217\u5316\u7684\u7ed3\u6784\u4f53\n \"\"\"\n return json.loads(self.json(**kwargs))\n", "path": "packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/base.py"}, {"content": "from typing import Any, Optional\n\nfrom pydantic import Field\n\nfrom .base import Event, GroupChatInfo, GroupInfo, NudgeSubject, UserPermission\n\n\nclass NoticeEvent(Event):\n \"\"\"\u901a\u77e5\u4e8b\u4ef6\u57fa\u7c7b\"\"\"\n pass\n\n\nclass MuteEvent(NoticeEvent):\n \"\"\"\u7981\u8a00\u7c7b\u4e8b\u4ef6\u57fa\u7c7b\"\"\"\n operator: GroupChatInfo\n\n\nclass BotMuteEvent(MuteEvent):\n \"\"\"Bot\u88ab\u7981\u8a00\"\"\"\n pass\n\n\nclass BotUnmuteEvent(MuteEvent):\n \"\"\"Bot\u88ab\u53d6\u6d88\u7981\u8a00\"\"\"\n pass\n\n\nclass MemberMuteEvent(MuteEvent):\n \"\"\"\u7fa4\u6210\u5458\u88ab\u7981\u8a00\u4e8b\u4ef6\uff08\u8be5\u6210\u5458\u4e0d\u662fBot\uff09\"\"\"\n duration_seconds: int = Field(alias='durationSeconds')\n member: GroupChatInfo\n operator: Optional[GroupChatInfo] = None\n\n\nclass MemberUnmuteEvent(MuteEvent):\n \"\"\"\u7fa4\u6210\u5458\u88ab\u53d6\u6d88\u7981\u8a00\u4e8b\u4ef6\uff08\u8be5\u6210\u5458\u4e0d\u662fBot\uff09\"\"\"\n member: GroupChatInfo\n operator: Optional[GroupChatInfo] = None\n\n\nclass BotJoinGroupEvent(NoticeEvent):\n \"\"\"Bot\u52a0\u5165\u4e86\u4e00\u4e2a\u65b0\u7fa4\"\"\"\n group: GroupInfo\n\n\nclass BotLeaveEventActive(BotJoinGroupEvent):\n \"\"\"Bot\u4e3b\u52a8\u9000\u51fa\u4e00\u4e2a\u7fa4\"\"\"\n pass\n\n\nclass BotLeaveEventKick(BotJoinGroupEvent):\n \"\"\"Bot\u88ab\u8e22\u51fa\u4e00\u4e2a\u7fa4\"\"\"\n pass\n\n\nclass MemberJoinEvent(NoticeEvent):\n \"\"\"\u65b0\u4eba\u5165\u7fa4\u7684\u4e8b\u4ef6\"\"\"\n member: GroupChatInfo\n\n\nclass MemberLeaveEventKick(MemberJoinEvent):\n \"\"\"\u6210\u5458\u88ab\u8e22\u51fa\u7fa4\uff08\u8be5\u6210\u5458\u4e0d\u662fBot\uff09\"\"\"\n operator: Optional[GroupChatInfo] = None\n\n\nclass MemberLeaveEventQuit(MemberJoinEvent):\n \"\"\"\u6210\u5458\u4e3b\u52a8\u79bb\u7fa4\uff08\u8be5\u6210\u5458\u4e0d\u662fBot\uff09\"\"\"\n pass\n\n\nclass FriendRecallEvent(NoticeEvent):\n \"\"\"\u597d\u53cb\u6d88\u606f\u64a4\u56de\"\"\"\n author_id: int = Field(alias='authorId')\n message_id: int = Field(alias='messageId')\n time: int\n operator: int\n\n\nclass GroupRecallEvent(FriendRecallEvent):\n \"\"\"\u7fa4\u6d88\u606f\u64a4\u56de\"\"\"\n group: GroupInfo\n operator: Optional[GroupChatInfo] = None\n\n\nclass GroupStateChangeEvent(NoticeEvent):\n 
\"\"\"\u7fa4\u53d8\u5316\u4e8b\u4ef6\u57fa\u7c7b\"\"\"\n origin: Any\n current: Any\n group: GroupInfo\n operator: Optional[GroupChatInfo] = None\n\n\nclass GroupNameChangeEvent(GroupStateChangeEvent):\n \"\"\"\u67d0\u4e2a\u7fa4\u540d\u6539\u53d8\"\"\"\n origin: str\n current: str\n\n\nclass GroupEntranceAnnouncementChangeEvent(GroupStateChangeEvent):\n \"\"\"\u67d0\u7fa4\u5165\u7fa4\u516c\u544a\u6539\u53d8\"\"\"\n origin: str\n current: str\n\n\nclass GroupMuteAllEvent(GroupStateChangeEvent):\n \"\"\"\u5168\u5458\u7981\u8a00\"\"\"\n origin: bool\n current: bool\n\n\nclass GroupAllowAnonymousChatEvent(GroupStateChangeEvent):\n \"\"\"\u533f\u540d\u804a\u5929\"\"\"\n origin: bool\n current: bool\n\n\nclass GroupAllowConfessTalkEvent(GroupStateChangeEvent):\n \"\"\"\u5766\u767d\u8bf4\"\"\"\n origin: bool\n current: bool\n\n\nclass GroupAllowMemberInviteEvent(GroupStateChangeEvent):\n \"\"\"\u5141\u8bb8\u7fa4\u5458\u9080\u8bf7\u597d\u53cb\u52a0\u7fa4\"\"\"\n origin: bool\n current: bool\n\n\nclass MemberStateChangeEvent(NoticeEvent):\n \"\"\"\u7fa4\u6210\u5458\u53d8\u5316\u4e8b\u4ef6\u57fa\u7c7b\"\"\"\n member: GroupChatInfo\n operator: Optional[GroupChatInfo] = None\n\n\nclass MemberCardChangeEvent(MemberStateChangeEvent):\n \"\"\"\u7fa4\u540d\u7247\u6539\u52a8\"\"\"\n origin: str\n current: str\n\n\nclass MemberSpecialTitleChangeEvent(MemberStateChangeEvent):\n \"\"\"\u7fa4\u5934\u8854\u6539\u52a8\uff08\u53ea\u6709\u7fa4\u4e3b\u6709\u64cd\u4f5c\u9650\u6743\uff09\"\"\"\n origin: str\n current: str\n\n\nclass BotGroupPermissionChangeEvent(MemberStateChangeEvent):\n \"\"\"Bot\u5728\u7fa4\u91cc\u7684\u6743\u9650\u88ab\u6539\u53d8\"\"\"\n origin: UserPermission\n current: UserPermission\n\n\nclass MemberPermissionChangeEvent(MemberStateChangeEvent):\n \"\"\"\u6210\u5458\u6743\u9650\u6539\u53d8\u7684\u4e8b\u4ef6\uff08\u8be5\u6210\u5458\u4e0d\u662fBot\uff09\"\"\"\n origin: UserPermission\n current: UserPermission\n\n\nclass NudgeEvent(NoticeEvent):\n \"\"\"\u6233\u4e00\u6233\u89e6\u53d1\u4e8b\u4ef6\"\"\"\n from_id: int = Field(alias='fromId')\n target: int\n subject: NudgeSubject\n action: str\n suffix: str\n", "path": "packages/nonebot-adapter-mirai/nonebot/adapters/mirai/event/notice.py"}]}
| 2,874 | 890 |
gh_patches_debug_1104
|
rasdani/github-patches
|
git_diff
|
blaze__blaze-872
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Truncate column name is too verbose
Do we have to have a unique name for the result of such operations?
How about having it renamed to the unit, i.e. instead of `when_datetimetruncate` we use `when_day` or `when_week`, etc?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `blaze/expr/datetime.py`
Content:
```
1 from __future__ import absolute_import, division, print_function
2
3 from .expressions import Expr, ElemWise
4 from datashape import dshape, Record, DataShape, Unit, Option, date_, datetime_
5 import datashape
6
7 __all__ = ['DateTime', 'Date', 'date', 'Year', 'year', 'Month', 'month', 'Day',
8 'day', 'Hour', 'hour', 'Second', 'second', 'Millisecond',
9 'millisecond', 'Microsecond', 'microsecond', 'Date', 'date', 'Time',
10 'time', 'UTCFromTimestamp', 'DateTimeTruncate']
11
12 class DateTime(ElemWise):
13 """ Superclass for datetime accessors """
14 __slots__ = '_hash', '_child',
15
16 def __str__(self):
17 return '%s.%s' % (str(self._child), type(self).__name__.lower())
18
19 @property
20 def schema(self):
21 return dshape(self._dtype)
22
23 @property
24 def _name(self):
25 return '%s_%s' % (self._child._name, self.attr)
26
27 @property
28 def attr(self):
29 return type(self).__name__.lower()
30
31
32 class Date(DateTime):
33 _dtype = datashape.date_
34
35 def date(expr):
36 return Date(expr)
37
38 class Year(DateTime):
39 _dtype = datashape.int32
40
41 def year(expr):
42 return Year(expr)
43
44 class Month(DateTime):
45 _dtype = datashape.int32
46
47 def month(expr):
48 return Month(expr)
49
50 class Day(DateTime):
51 _dtype = datashape.int32
52
53 def day(expr):
54 return Day(expr)
55
56 class Time(DateTime):
57 _dtype = datashape.time_
58
59 def time(expr):
60 return Time(Expr)
61
62 class Hour(DateTime):
63 _dtype = datashape.int32
64
65 def hour(expr):
66 return Hour(expr)
67
68 class Minute(DateTime):
69 _dtype = datashape.int32
70
71 def minute(expr):
72 return Minute(expr)
73
74 class Second(DateTime):
75 _dtype = datashape.int32
76
77 def second(expr):
78 return Second(expr)
79
80 class Millisecond(DateTime):
81 _dtype = datashape.int64
82
83 def millisecond(expr):
84 return Millisecond(expr)
85
86 class Microsecond(DateTime):
87 _dtype = datashape.int64
88
89 def microsecond(expr):
90 return Microsecond(expr)
91
92 class UTCFromTimestamp(DateTime):
93 _dtype = datashape.datetime_
94
95 def utcfromtimestamp(expr):
96 return UTCFromTimestamp(expr)
97
98 units = ['year', 'month', 'week', 'day', 'hour', 'minute', 'second',
99 'millisecond', 'microsecond', 'nanosecond']
100
101
102 _unit_aliases = {'y': 'year', 'w': 'week', 'd': 'day', 'date': 'day',
103 'h': 'hour', 's': 'second', 'ms': 'millisecond', 'us': 'microsecond',
104 'ns': 'nanosecond'}
105
106 def normalize_time_unit(s):
107 """ Normalize time input to one of 'year', 'second', 'millisecond', etc..
108
109 Example
110 -------
111
112 >>> normalize_time_unit('milliseconds')
113 'millisecond'
114 >>> normalize_time_unit('ms')
115 'millisecond'
116 """
117 s = s.lower().strip()
118 if s in units:
119 return s
120 if s in _unit_aliases:
121 return _unit_aliases[s]
122 if s[-1] == 's':
123 return normalize_time_unit(s.rstrip('s'))
124
125 raise ValueError("Do not understand time unit %s" % s)
126
127
128 class DateTimeTruncate(DateTime):
129 __slots__ = '_hash', '_child', 'measure', 'unit'
130
131 @property
132 def _dtype(self):
133 if units.index('day') >= units.index(self.unit):
134 return datashape.date_
135 else:
136 return datashape.datetime_
137
138
139 def truncate(expr, *args, **kwargs):
140 """ Truncate datetime expression
141
142 Example
143 -------
144
145 >>> from blaze import symbol, compute
146 >>> from datetime import datetime
147 >>> s = symbol('s', 'datetime')
148
149 >>> expr = s.truncate(10, 'minutes')
150 >>> compute(expr, datetime(2000, 6, 25, 12, 35, 10))
151 datetime.datetime(2000, 6, 25, 12, 30)
152
153 >>> expr = s.truncate(1, 'week')
154 >>> compute(expr, datetime(2000, 6, 25, 12, 35, 10))
155 datetime.date(2000, 6, 25)
156
157 Alternatively use keyword arguments to specify unit and measure
158
159 >>> # expr = s.truncate(2, 'weeks')
160 >>> expr = s.truncate(weeks=2)
161 """
162 if args:
163 assert not kwargs
164 measure, unit = args
165 if kwargs:
166 assert not args
167 [(unit, measure)] = kwargs.items()
168 return DateTimeTruncate(expr, measure, normalize_time_unit(unit))
169
170
171 from .expressions import schema_method_list, method_properties
172 from datashape.predicates import isdatelike, isnumeric
173
174 schema_method_list.extend([
175 (isdatelike, set([year, month, day, hour, minute, date, time, second,
176 millisecond, microsecond, truncate])),
177 (isnumeric, set([utcfromtimestamp]))
178 ])
179
180 method_properties |= set([year, month, day, hour, minute, second, millisecond,
181 microsecond, date, time, utcfromtimestamp])
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/blaze/expr/datetime.py b/blaze/expr/datetime.py
--- a/blaze/expr/datetime.py
+++ b/blaze/expr/datetime.py
@@ -135,6 +135,10 @@
else:
return datashape.datetime_
+ @property
+ def _name(self):
+ return self._child._name
+
def truncate(expr, *args, **kwargs):
""" Truncate datetime expression
|
{"golden_diff": "diff --git a/blaze/expr/datetime.py b/blaze/expr/datetime.py\n--- a/blaze/expr/datetime.py\n+++ b/blaze/expr/datetime.py\n@@ -135,6 +135,10 @@\n else:\n return datashape.datetime_\n \n+ @property\n+ def _name(self):\n+ return self._child._name\n+\n \n def truncate(expr, *args, **kwargs):\n \"\"\" Truncate datetime expression\n", "issue": "Truncate column name is too verbose\nDo we have to have a unique name for the result of such operations?\n\nHow about having it renamed to the unit, i.e. instead of `when_datetimetruncate` we use `when_day` or `when_week`, etc?\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nfrom .expressions import Expr, ElemWise\nfrom datashape import dshape, Record, DataShape, Unit, Option, date_, datetime_\nimport datashape\n\n__all__ = ['DateTime', 'Date', 'date', 'Year', 'year', 'Month', 'month', 'Day',\n 'day', 'Hour', 'hour', 'Second', 'second', 'Millisecond',\n 'millisecond', 'Microsecond', 'microsecond', 'Date', 'date', 'Time',\n 'time', 'UTCFromTimestamp', 'DateTimeTruncate']\n\nclass DateTime(ElemWise):\n \"\"\" Superclass for datetime accessors \"\"\"\n __slots__ = '_hash', '_child',\n\n def __str__(self):\n return '%s.%s' % (str(self._child), type(self).__name__.lower())\n\n @property\n def schema(self):\n return dshape(self._dtype)\n\n @property\n def _name(self):\n return '%s_%s' % (self._child._name, self.attr)\n\n @property\n def attr(self):\n return type(self).__name__.lower()\n\n\nclass Date(DateTime):\n _dtype = datashape.date_\n\ndef date(expr):\n return Date(expr)\n\nclass Year(DateTime):\n _dtype = datashape.int32\n\ndef year(expr):\n return Year(expr)\n\nclass Month(DateTime):\n _dtype = datashape.int32\n\ndef month(expr):\n return Month(expr)\n\nclass Day(DateTime):\n _dtype = datashape.int32\n\ndef day(expr):\n return Day(expr)\n\nclass Time(DateTime):\n _dtype = datashape.time_\n\ndef time(expr):\n return Time(Expr)\n\nclass Hour(DateTime):\n _dtype = datashape.int32\n\ndef hour(expr):\n return Hour(expr)\n\nclass Minute(DateTime):\n _dtype = datashape.int32\n\ndef minute(expr):\n return Minute(expr)\n\nclass Second(DateTime):\n _dtype = datashape.int32\n\ndef second(expr):\n return Second(expr)\n\nclass Millisecond(DateTime):\n _dtype = datashape.int64\n\ndef millisecond(expr):\n return Millisecond(expr)\n\nclass Microsecond(DateTime):\n _dtype = datashape.int64\n\ndef microsecond(expr):\n return Microsecond(expr)\n\nclass UTCFromTimestamp(DateTime):\n _dtype = datashape.datetime_\n\ndef utcfromtimestamp(expr):\n return UTCFromTimestamp(expr)\n\nunits = ['year', 'month', 'week', 'day', 'hour', 'minute', 'second',\n'millisecond', 'microsecond', 'nanosecond']\n\n\n_unit_aliases = {'y': 'year', 'w': 'week', 'd': 'day', 'date': 'day',\n 'h': 'hour', 's': 'second', 'ms': 'millisecond', 'us': 'microsecond',\n 'ns': 'nanosecond'}\n\ndef normalize_time_unit(s):\n \"\"\" Normalize time input to one of 'year', 'second', 'millisecond', etc..\n\n Example\n -------\n\n >>> normalize_time_unit('milliseconds')\n 'millisecond'\n >>> normalize_time_unit('ms')\n 'millisecond'\n \"\"\"\n s = s.lower().strip()\n if s in units:\n return s\n if s in _unit_aliases:\n return _unit_aliases[s]\n if s[-1] == 's':\n return normalize_time_unit(s.rstrip('s'))\n\n raise ValueError(\"Do not understand time unit %s\" % s)\n\n\nclass DateTimeTruncate(DateTime):\n __slots__ = '_hash', '_child', 'measure', 'unit'\n\n @property\n def _dtype(self):\n if units.index('day') >= units.index(self.unit):\n return 
datashape.date_\n else:\n return datashape.datetime_\n\n\ndef truncate(expr, *args, **kwargs):\n \"\"\" Truncate datetime expression\n\n Example\n -------\n\n >>> from blaze import symbol, compute\n >>> from datetime import datetime\n >>> s = symbol('s', 'datetime')\n\n >>> expr = s.truncate(10, 'minutes')\n >>> compute(expr, datetime(2000, 6, 25, 12, 35, 10))\n datetime.datetime(2000, 6, 25, 12, 30)\n\n >>> expr = s.truncate(1, 'week')\n >>> compute(expr, datetime(2000, 6, 25, 12, 35, 10))\n datetime.date(2000, 6, 25)\n\n Alternatively use keyword arguments to specify unit and measure\n\n >>> # expr = s.truncate(2, 'weeks')\n >>> expr = s.truncate(weeks=2)\n \"\"\"\n if args:\n assert not kwargs\n measure, unit = args\n if kwargs:\n assert not args\n [(unit, measure)] = kwargs.items()\n return DateTimeTruncate(expr, measure, normalize_time_unit(unit))\n\n\nfrom .expressions import schema_method_list, method_properties\nfrom datashape.predicates import isdatelike, isnumeric\n\nschema_method_list.extend([\n (isdatelike, set([year, month, day, hour, minute, date, time, second,\n millisecond, microsecond, truncate])),\n (isnumeric, set([utcfromtimestamp]))\n ])\n\nmethod_properties |= set([year, month, day, hour, minute, second, millisecond,\n microsecond, date, time, utcfromtimestamp])\n", "path": "blaze/expr/datetime.py"}], "after_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nfrom .expressions import Expr, ElemWise\nfrom datashape import dshape, Record, DataShape, Unit, Option, date_, datetime_\nimport datashape\n\n__all__ = ['DateTime', 'Date', 'date', 'Year', 'year', 'Month', 'month', 'Day',\n 'day', 'Hour', 'hour', 'Second', 'second', 'Millisecond',\n 'millisecond', 'Microsecond', 'microsecond', 'Date', 'date', 'Time',\n 'time', 'UTCFromTimestamp', 'DateTimeTruncate']\n\nclass DateTime(ElemWise):\n \"\"\" Superclass for datetime accessors \"\"\"\n __slots__ = '_hash', '_child',\n\n def __str__(self):\n return '%s.%s' % (str(self._child), type(self).__name__.lower())\n\n @property\n def schema(self):\n return dshape(self._dtype)\n\n @property\n def _name(self):\n return '%s_%s' % (self._child._name, self.attr)\n\n @property\n def attr(self):\n return type(self).__name__.lower()\n\n\nclass Date(DateTime):\n _dtype = datashape.date_\n\ndef date(expr):\n return Date(expr)\n\nclass Year(DateTime):\n _dtype = datashape.int32\n\ndef year(expr):\n return Year(expr)\n\nclass Month(DateTime):\n _dtype = datashape.int32\n\ndef month(expr):\n return Month(expr)\n\nclass Day(DateTime):\n _dtype = datashape.int32\n\ndef day(expr):\n return Day(expr)\n\nclass Time(DateTime):\n _dtype = datashape.time_\n\ndef time(expr):\n return Time(Expr)\n\nclass Hour(DateTime):\n _dtype = datashape.int32\n\ndef hour(expr):\n return Hour(expr)\n\nclass Minute(DateTime):\n _dtype = datashape.int32\n\ndef minute(expr):\n return Minute(expr)\n\nclass Second(DateTime):\n _dtype = datashape.int32\n\ndef second(expr):\n return Second(expr)\n\nclass Millisecond(DateTime):\n _dtype = datashape.int64\n\ndef millisecond(expr):\n return Millisecond(expr)\n\nclass Microsecond(DateTime):\n _dtype = datashape.int64\n\ndef microsecond(expr):\n return Microsecond(expr)\n\nclass UTCFromTimestamp(DateTime):\n _dtype = datashape.datetime_\n\ndef utcfromtimestamp(expr):\n return UTCFromTimestamp(expr)\n\nunits = ['year', 'month', 'week', 'day', 'hour', 'minute', 'second',\n'millisecond', 'microsecond', 'nanosecond']\n\n\n_unit_aliases = {'y': 'year', 'w': 'week', 'd': 'day', 'date': 'day',\n 
'h': 'hour', 's': 'second', 'ms': 'millisecond', 'us': 'microsecond',\n 'ns': 'nanosecond'}\n\ndef normalize_time_unit(s):\n \"\"\" Normalize time input to one of 'year', 'second', 'millisecond', etc..\n\n Example\n -------\n\n >>> normalize_time_unit('milliseconds')\n 'millisecond'\n >>> normalize_time_unit('ms')\n 'millisecond'\n \"\"\"\n s = s.lower().strip()\n if s in units:\n return s\n if s in _unit_aliases:\n return _unit_aliases[s]\n if s[-1] == 's':\n return normalize_time_unit(s.rstrip('s'))\n\n raise ValueError(\"Do not understand time unit %s\" % s)\n\n\nclass DateTimeTruncate(DateTime):\n __slots__ = '_hash', '_child', 'measure', 'unit'\n\n @property\n def _dtype(self):\n if units.index('day') >= units.index(self.unit):\n return datashape.date_\n else:\n return datashape.datetime_\n\n @property\n def _name(self):\n return self._child._name\n\n\ndef truncate(expr, *args, **kwargs):\n \"\"\" Truncate datetime expression\n\n Example\n -------\n\n >>> from blaze import symbol, compute\n >>> from datetime import datetime\n >>> s = symbol('s', 'datetime')\n\n >>> expr = s.truncate(10, 'minutes')\n >>> compute(expr, datetime(2000, 6, 25, 12, 35, 10))\n datetime.datetime(2000, 6, 25, 12, 30)\n\n >>> expr = s.truncate(1, 'week')\n >>> compute(expr, datetime(2000, 6, 25, 12, 35, 10))\n datetime.date(2000, 6, 25)\n\n Alternatively use keyword arguments to specify unit and measure\n\n >>> # expr = s.truncate(2, 'weeks')\n >>> expr = s.truncate(weeks=2)\n \"\"\"\n if args:\n assert not kwargs\n measure, unit = args\n if kwargs:\n assert not args\n [(unit, measure)] = kwargs.items()\n return DateTimeTruncate(expr, measure, normalize_time_unit(unit))\n\n\nfrom .expressions import schema_method_list, method_properties\nfrom datashape.predicates import isdatelike, isnumeric\n\nschema_method_list.extend([\n (isdatelike, set([year, month, day, hour, minute, date, time, second,\n millisecond, microsecond, truncate])),\n (isnumeric, set([utcfromtimestamp]))\n ])\n\nmethod_properties |= set([year, month, day, hour, minute, second, millisecond,\n microsecond, date, time, utcfromtimestamp])\n", "path": "blaze/expr/datetime.py"}]}
| 1,999 | 106 |
gh_patches_debug_59056
|
rasdani/github-patches
|
git_diff
|
google__jax-19166
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unexpected behavior of `jax.scipy.stats.binom.pmf`
### Description
pmf of a random variable should be zero outside of its range. While plotting the graph for `jax.scipy.stats.binom.pmf`, I notice that for $n>5$ and $p>0.5$, there are some oscillations in the values of the pmf, which should not be there. For evidence, I am attaching a plot too.
```python
import jax
from jax import numpy as jnp
from matplotlib import pyplot as plt
x = jnp.linspace(-1, 10, 1000)
xxf = jax.scipy.stats.binom.pmf(k=x, n=5, p=0.8)
plt.plot(x, xxf)
plt.tight_layout()
plt.show()
```

The side left to the zero is as expected.
### What jax/jaxlib version are you using?
jax v0.4.23
### Which accelerator(s) are you using?
CPU
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jax/_src/scipy/stats/binom.py`
Content:
```
1 # Copyright 2023 The JAX Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License
14
15
16 import scipy.stats as osp_stats
17
18 from jax import lax
19 import jax.numpy as jnp
20 from jax._src.numpy.util import _wraps, promote_args_inexact
21 from jax._src.scipy.special import gammaln, xlogy, xlog1py
22 from jax._src.typing import Array, ArrayLike
23
24
25 @_wraps(osp_stats.nbinom.logpmf, update_doc=False)
26 def logpmf(k: ArrayLike, n: ArrayLike, p: ArrayLike, loc: ArrayLike = 0) -> Array:
27 """JAX implementation of scipy.stats.binom.logpmf."""
28 k, n, p, loc = promote_args_inexact("binom.logpmf", k, n, p, loc)
29 y = lax.sub(k, loc)
30 comb_term = lax.sub(
31 gammaln(n + 1),
32 lax.add(gammaln(y + 1), gammaln(n - y + 1))
33 )
34 log_linear_term = lax.add(xlogy(y, p), xlog1py(lax.sub(n, y), lax.neg(p)))
35 log_probs = lax.add(comb_term, log_linear_term)
36 return jnp.where(lax.lt(k, loc), -jnp.inf, log_probs)
37
38
39 @_wraps(osp_stats.nbinom.pmf, update_doc=False)
40 def pmf(k: ArrayLike, n: ArrayLike, p: ArrayLike, loc: ArrayLike = 0) -> Array:
41 """JAX implementation of scipy.stats.binom.pmf."""
42 return lax.exp(logpmf(k, n, p, loc))
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/jax/_src/scipy/stats/binom.py b/jax/_src/scipy/stats/binom.py
--- a/jax/_src/scipy/stats/binom.py
+++ b/jax/_src/scipy/stats/binom.py
@@ -33,7 +33,7 @@
)
log_linear_term = lax.add(xlogy(y, p), xlog1py(lax.sub(n, y), lax.neg(p)))
log_probs = lax.add(comb_term, log_linear_term)
- return jnp.where(lax.lt(k, loc), -jnp.inf, log_probs)
+ return jnp.where(lax.ge(k, loc) & lax.lt(k, loc + n + 1), log_probs, -jnp.inf)
@_wraps(osp_stats.nbinom.pmf, update_doc=False)
|
{"golden_diff": "diff --git a/jax/_src/scipy/stats/binom.py b/jax/_src/scipy/stats/binom.py\n--- a/jax/_src/scipy/stats/binom.py\n+++ b/jax/_src/scipy/stats/binom.py\n@@ -33,7 +33,7 @@\n )\n log_linear_term = lax.add(xlogy(y, p), xlog1py(lax.sub(n, y), lax.neg(p)))\n log_probs = lax.add(comb_term, log_linear_term)\n- return jnp.where(lax.lt(k, loc), -jnp.inf, log_probs)\n+ return jnp.where(lax.ge(k, loc) & lax.lt(k, loc + n + 1), log_probs, -jnp.inf)\n \n \n @_wraps(osp_stats.nbinom.pmf, update_doc=False)\n", "issue": "Unexpected behavior of `jax.scipy.stats.binom.pmf`\n### Description\r\n\r\npmf of a random variable should be zero outside of its range. While plotting the graph for `jax.scipy.stats.binom.pmf`, I notice that for $n>5$ and $p>0.5$, there are some oscillations in the values of the pmf, which should not be there. For evidence, I am attaching a plot too.\r\n\r\n```python\r\nimport jax\r\nfrom jax import numpy as jnp\r\nfrom matplotlib import pyplot as plt\r\n\r\nx = jnp.linspace(-1, 10, 1000)\r\nxxf = jax.scipy.stats.binom.pmf(k=x, n=5, p=0.8)\r\n\r\nplt.plot(x, xxf)\r\nplt.tight_layout()\r\nplt.show()\r\n```\r\n\r\nThe side left to the zero is as expected.\r\n\r\n### What jax/jaxlib version are you using?\r\n\r\njax v0.4.23\r\n\r\n### Which accelerator(s) are you using?\r\n\r\nCPU\n", "before_files": [{"content": "# Copyright 2023 The JAX Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License\n\n\nimport scipy.stats as osp_stats\n\nfrom jax import lax\nimport jax.numpy as jnp\nfrom jax._src.numpy.util import _wraps, promote_args_inexact\nfrom jax._src.scipy.special import gammaln, xlogy, xlog1py\nfrom jax._src.typing import Array, ArrayLike\n\n\n@_wraps(osp_stats.nbinom.logpmf, update_doc=False)\ndef logpmf(k: ArrayLike, n: ArrayLike, p: ArrayLike, loc: ArrayLike = 0) -> Array:\n \"\"\"JAX implementation of scipy.stats.binom.logpmf.\"\"\"\n k, n, p, loc = promote_args_inexact(\"binom.logpmf\", k, n, p, loc)\n y = lax.sub(k, loc)\n comb_term = lax.sub(\n gammaln(n + 1),\n lax.add(gammaln(y + 1), gammaln(n - y + 1))\n )\n log_linear_term = lax.add(xlogy(y, p), xlog1py(lax.sub(n, y), lax.neg(p)))\n log_probs = lax.add(comb_term, log_linear_term)\n return jnp.where(lax.lt(k, loc), -jnp.inf, log_probs)\n\n\n@_wraps(osp_stats.nbinom.pmf, update_doc=False)\ndef pmf(k: ArrayLike, n: ArrayLike, p: ArrayLike, loc: ArrayLike = 0) -> Array:\n \"\"\"JAX implementation of scipy.stats.binom.pmf.\"\"\"\n return lax.exp(logpmf(k, n, p, loc))\n", "path": "jax/_src/scipy/stats/binom.py"}], "after_files": [{"content": "# Copyright 2023 The JAX Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied.\n# See the License for the specific language governing permissions and\n# limitations under the License\n\n\nimport scipy.stats as osp_stats\n\nfrom jax import lax\nimport jax.numpy as jnp\nfrom jax._src.numpy.util import _wraps, promote_args_inexact\nfrom jax._src.scipy.special import gammaln, xlogy, xlog1py\nfrom jax._src.typing import Array, ArrayLike\n\n\n@_wraps(osp_stats.nbinom.logpmf, update_doc=False)\ndef logpmf(k: ArrayLike, n: ArrayLike, p: ArrayLike, loc: ArrayLike = 0) -> Array:\n \"\"\"JAX implementation of scipy.stats.binom.logpmf.\"\"\"\n k, n, p, loc = promote_args_inexact(\"binom.logpmf\", k, n, p, loc)\n y = lax.sub(k, loc)\n comb_term = lax.sub(\n gammaln(n + 1),\n lax.add(gammaln(y + 1), gammaln(n - y + 1))\n )\n log_linear_term = lax.add(xlogy(y, p), xlog1py(lax.sub(n, y), lax.neg(p)))\n log_probs = lax.add(comb_term, log_linear_term)\n return jnp.where(lax.ge(k, loc) & lax.lt(k, loc + n + 1), log_probs, -jnp.inf)\n\n\n@_wraps(osp_stats.nbinom.pmf, update_doc=False)\ndef pmf(k: ArrayLike, n: ArrayLike, p: ArrayLike, loc: ArrayLike = 0) -> Array:\n \"\"\"JAX implementation of scipy.stats.binom.pmf.\"\"\"\n return lax.exp(logpmf(k, n, p, loc))\n", "path": "jax/_src/scipy/stats/binom.py"}]}
| 1,092 | 178 |
gh_patches_debug_43748
|
rasdani/github-patches
|
git_diff
|
twisted__twisted-11636
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`t.w.http_headers.Headers` methods `addRawHeader` and `setRawHeaders` are typed `AnyStr`
A call like `headers.addRawHeader(b'foo', 'bar')` is correct at runtime, but fails to typecheck because `AnyStr` is a type variable that constrains both parameters to be the same type.
Similar for `headers.setRawHeaders('foo', [b'ar'])`.
These calls are valid, so the constraint should be removed from `Headers`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/twisted/web/http_headers.py`
Content:
```
1 # -*- test-case-name: twisted.web.test.test_http_headers -*-
2 # Copyright (c) Twisted Matrix Laboratories.
3 # See LICENSE for details.
4
5 """
6 An API for storing HTTP header names and values.
7 """
8
9 from collections.abc import Sequence as _Sequence
10 from typing import (
11 AnyStr,
12 Dict,
13 Iterator,
14 List,
15 Mapping,
16 Optional,
17 Sequence,
18 Tuple,
19 TypeVar,
20 Union,
21 cast,
22 overload,
23 )
24
25 from twisted.python.compat import cmp, comparable
26
27 _T = TypeVar("_T")
28
29
30 def _dashCapitalize(name: bytes) -> bytes:
31 """
32 Return a byte string which is capitalized using '-' as a word separator.
33
34 @param name: The name of the header to capitalize.
35
36 @return: The given header capitalized using '-' as a word separator.
37 """
38 return b"-".join([word.capitalize() for word in name.split(b"-")])
39
40
41 def _sanitizeLinearWhitespace(headerComponent: bytes) -> bytes:
42 r"""
43 Replace linear whitespace (C{\n}, C{\r\n}, C{\r}) in a header key
44 or value with a single space.
45
46 @param headerComponent: The header key or value to sanitize.
47
48 @return: The sanitized header key or value.
49 """
50 return b" ".join(headerComponent.splitlines())
51
52
53 @comparable
54 class Headers:
55 """
56 Stores HTTP headers in a key and multiple value format.
57
58 When passed L{str}, header names (e.g. 'Content-Type')
59 are encoded using ISO-8859-1 and header values (e.g.
60 'text/html;charset=utf-8') are encoded using UTF-8. Some methods that return
61 values will return them in the same type as the name given.
62
63 If the header keys or values cannot be encoded or decoded using the rules
64 above, using just L{bytes} arguments to the methods of this class will
65 ensure no decoding or encoding is done, and L{Headers} will treat the keys
66 and values as opaque byte strings.
67
68 @cvar _caseMappings: A L{dict} that maps lowercase header names
69 to their canonicalized representation.
70
71 @ivar _rawHeaders: A L{dict} mapping header names as L{bytes} to L{list}s of
72 header values as L{bytes}.
73 """
74
75 _caseMappings = {
76 b"content-md5": b"Content-MD5",
77 b"dnt": b"DNT",
78 b"etag": b"ETag",
79 b"p3p": b"P3P",
80 b"te": b"TE",
81 b"www-authenticate": b"WWW-Authenticate",
82 b"x-xss-protection": b"X-XSS-Protection",
83 }
84
85 def __init__(
86 self,
87 rawHeaders: Optional[Mapping[AnyStr, Sequence[AnyStr]]] = None,
88 ):
89 self._rawHeaders: Dict[bytes, List[bytes]] = {}
90 if rawHeaders is not None:
91 for name, values in rawHeaders.items():
92 self.setRawHeaders(name, values)
93
94 def __repr__(self) -> str:
95 """
96 Return a string fully describing the headers set on this object.
97 """
98 return "{}({!r})".format(
99 self.__class__.__name__,
100 self._rawHeaders,
101 )
102
103 def __cmp__(self, other):
104 """
105 Define L{Headers} instances as being equal to each other if they have
106 the same raw headers.
107 """
108 if isinstance(other, Headers):
109 return cmp(
110 sorted(self._rawHeaders.items()), sorted(other._rawHeaders.items())
111 )
112 return NotImplemented
113
114 def _encodeName(self, name: AnyStr) -> bytes:
115 """
116 Encode the name of a header (eg 'Content-Type') to an ISO-8859-1 encoded
117 bytestring if required.
118
119 @param name: A HTTP header name
120
121 @return: C{name}, encoded if required, lowercased
122 """
123 if isinstance(name, str):
124 return name.lower().encode("iso-8859-1")
125 return name.lower()
126
127 def copy(self):
128 """
129 Return a copy of itself with the same headers set.
130
131 @return: A new L{Headers}
132 """
133 return self.__class__(self._rawHeaders)
134
135 def hasHeader(self, name: AnyStr) -> bool:
136 """
137 Check for the existence of a given header.
138
139 @param name: The name of the HTTP header to check for.
140
141 @return: C{True} if the header exists, otherwise C{False}.
142 """
143 return self._encodeName(name) in self._rawHeaders
144
145 def removeHeader(self, name: AnyStr) -> None:
146 """
147 Remove the named header from this header object.
148
149 @param name: The name of the HTTP header to remove.
150
151 @return: L{None}
152 """
153 self._rawHeaders.pop(self._encodeName(name), None)
154
155 def setRawHeaders(self, name: AnyStr, values: Sequence[AnyStr]) -> None:
156 """
157 Sets the raw representation of the given header.
158
159 @param name: The name of the HTTP header to set the values for.
160
161 @param values: A list of strings each one being a header value of
162 the given name.
163
164 @raise TypeError: Raised if C{values} is not a L{list} of L{bytes}
165 or L{str} strings, or if C{name} is not a L{bytes} or
166 L{str} string.
167
168 @return: L{None}
169 """
170 if not isinstance(values, _Sequence):
171 raise TypeError(
172 "Header entry %r should be sequence but found "
173 "instance of %r instead" % (name, type(values))
174 )
175
176 if not isinstance(name, (bytes, str)):
177 raise TypeError(
178 "Header name is an instance of %r, " "not bytes or str" % (type(name),)
179 )
180
181 for count, value in enumerate(values):
182 if not isinstance(value, (bytes, str)):
183 raise TypeError(
184 "Header value at position %s is an instance of %r, not "
185 "bytes or str"
186 % (
187 count,
188 type(value),
189 )
190 )
191
192 _name = _sanitizeLinearWhitespace(self._encodeName(name))
193 encodedValues: List[bytes] = []
194 for v in values:
195 if isinstance(v, str):
196 _v = v.encode("utf8")
197 else:
198 _v = v
199 encodedValues.append(_sanitizeLinearWhitespace(_v))
200
201 self._rawHeaders[_name] = encodedValues
202
203 def addRawHeader(self, name: AnyStr, value: AnyStr) -> None:
204 """
205 Add a new raw value for the given header.
206
207 @param name: The name of the header for which to set the value.
208
209 @param value: The value to set for the named header.
210 """
211 if not isinstance(name, (bytes, str)):
212 raise TypeError(
213 "Header name is an instance of %r, " "not bytes or str" % (type(name),)
214 )
215
216 if not isinstance(value, (bytes, str)):
217 raise TypeError(
218 "Header value is an instance of %r, not "
219 "bytes or str" % (type(value),)
220 )
221
222 # We secretly know getRawHeaders is really returning a list
223 values = cast(List[AnyStr], self.getRawHeaders(name, default=[]))
224 values.append(value)
225
226 self.setRawHeaders(name, values)
227
228 @overload
229 def getRawHeaders(self, name: AnyStr) -> Optional[Sequence[AnyStr]]:
230 ...
231
232 @overload
233 def getRawHeaders(self, name: AnyStr, default: _T) -> Union[Sequence[AnyStr], _T]:
234 ...
235
236 def getRawHeaders(
237 self, name: AnyStr, default: Optional[_T] = None
238 ) -> Union[Sequence[AnyStr], Optional[_T]]:
239 """
240 Returns a sequence of headers matching the given name as the raw string
241 given.
242
243 @param name: The name of the HTTP header to get the values of.
244
245 @param default: The value to return if no header with the given C{name}
246 exists.
247
248 @return: If the named header is present, a sequence of its
249 values. Otherwise, C{default}.
250 """
251 encodedName = self._encodeName(name)
252 values = self._rawHeaders.get(encodedName, [])
253 if not values:
254 return default
255
256 if isinstance(name, str):
257 return [v.decode("utf8") for v in values]
258 return values
259
260 def getAllRawHeaders(self) -> Iterator[Tuple[bytes, Sequence[bytes]]]:
261 """
262 Return an iterator of key, value pairs of all headers contained in this
263 object, as L{bytes}. The keys are capitalized in canonical
264 capitalization.
265 """
266 for k, v in self._rawHeaders.items():
267 yield self._canonicalNameCaps(k), v
268
269 def _canonicalNameCaps(self, name: bytes) -> bytes:
270 """
271 Return the canonical name for the given header.
272
273 @param name: The all-lowercase header name to capitalize in its
274 canonical form.
275
276 @return: The canonical name of the header.
277 """
278 return self._caseMappings.get(name, _dashCapitalize(name))
279
280
281 __all__ = ["Headers"]
282
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/twisted/web/http_headers.py b/src/twisted/web/http_headers.py
--- a/src/twisted/web/http_headers.py
+++ b/src/twisted/web/http_headers.py
@@ -18,7 +18,6 @@
Tuple,
TypeVar,
Union,
- cast,
overload,
)
@@ -85,7 +84,7 @@
def __init__(
self,
rawHeaders: Optional[Mapping[AnyStr, Sequence[AnyStr]]] = None,
- ):
+ ) -> None:
self._rawHeaders: Dict[bytes, List[bytes]] = {}
if rawHeaders is not None:
for name, values in rawHeaders.items():
@@ -111,7 +110,7 @@
)
return NotImplemented
- def _encodeName(self, name: AnyStr) -> bytes:
+ def _encodeName(self, name: Union[str, bytes]) -> bytes:
"""
Encode the name of a header (eg 'Content-Type') to an ISO-8859-1 encoded
bytestring if required.
@@ -152,7 +151,21 @@
"""
self._rawHeaders.pop(self._encodeName(name), None)
- def setRawHeaders(self, name: AnyStr, values: Sequence[AnyStr]) -> None:
+ @overload
+ def setRawHeaders(self, name: Union[str, bytes], values: Sequence[bytes]) -> None:
+ ...
+
+ @overload
+ def setRawHeaders(self, name: Union[str, bytes], values: Sequence[str]) -> None:
+ ...
+
+ @overload
+ def setRawHeaders(
+ self, name: Union[str, bytes], values: Sequence[Union[str, bytes]]
+ ) -> None:
+ ...
+
+ def setRawHeaders(self, name: Union[str, bytes], values: object) -> None:
"""
Sets the raw representation of the given header.
@@ -161,9 +174,8 @@
@param values: A list of strings each one being a header value of
the given name.
- @raise TypeError: Raised if C{values} is not a L{list} of L{bytes}
- or L{str} strings, or if C{name} is not a L{bytes} or
- L{str} string.
+ @raise TypeError: Raised if C{values} is not a sequence of L{bytes}
+ or L{str}, or if C{name} is not L{bytes} or L{str}.
@return: L{None}
"""
@@ -175,7 +187,7 @@
if not isinstance(name, (bytes, str)):
raise TypeError(
- "Header name is an instance of %r, " "not bytes or str" % (type(name),)
+ f"Header name is an instance of {type(name)!r}, not bytes or str"
)
for count, value in enumerate(values):
@@ -200,7 +212,7 @@
self._rawHeaders[_name] = encodedValues
- def addRawHeader(self, name: AnyStr, value: AnyStr) -> None:
+ def addRawHeader(self, name: Union[str, bytes], value: Union[str, bytes]) -> None:
"""
Add a new raw value for the given header.
@@ -210,7 +222,7 @@
"""
if not isinstance(name, (bytes, str)):
raise TypeError(
- "Header name is an instance of %r, " "not bytes or str" % (type(name),)
+ f"Header name is an instance of {type(name)!r}, not bytes or str"
)
if not isinstance(value, (bytes, str)):
@@ -219,11 +231,13 @@
"bytes or str" % (type(value),)
)
- # We secretly know getRawHeaders is really returning a list
- values = cast(List[AnyStr], self.getRawHeaders(name, default=[]))
- values.append(value)
-
- self.setRawHeaders(name, values)
+ self._rawHeaders.setdefault(
+ _sanitizeLinearWhitespace(self._encodeName(name)), []
+ ).append(
+ _sanitizeLinearWhitespace(
+ value.encode("utf8") if isinstance(value, str) else value
+ )
+ )
@overload
def getRawHeaders(self, name: AnyStr) -> Optional[Sequence[AnyStr]]:
|
{"golden_diff": "diff --git a/src/twisted/web/http_headers.py b/src/twisted/web/http_headers.py\n--- a/src/twisted/web/http_headers.py\n+++ b/src/twisted/web/http_headers.py\n@@ -18,7 +18,6 @@\n Tuple,\n TypeVar,\n Union,\n- cast,\n overload,\n )\n \n@@ -85,7 +84,7 @@\n def __init__(\n self,\n rawHeaders: Optional[Mapping[AnyStr, Sequence[AnyStr]]] = None,\n- ):\n+ ) -> None:\n self._rawHeaders: Dict[bytes, List[bytes]] = {}\n if rawHeaders is not None:\n for name, values in rawHeaders.items():\n@@ -111,7 +110,7 @@\n )\n return NotImplemented\n \n- def _encodeName(self, name: AnyStr) -> bytes:\n+ def _encodeName(self, name: Union[str, bytes]) -> bytes:\n \"\"\"\n Encode the name of a header (eg 'Content-Type') to an ISO-8859-1 encoded\n bytestring if required.\n@@ -152,7 +151,21 @@\n \"\"\"\n self._rawHeaders.pop(self._encodeName(name), None)\n \n- def setRawHeaders(self, name: AnyStr, values: Sequence[AnyStr]) -> None:\n+ @overload\n+ def setRawHeaders(self, name: Union[str, bytes], values: Sequence[bytes]) -> None:\n+ ...\n+\n+ @overload\n+ def setRawHeaders(self, name: Union[str, bytes], values: Sequence[str]) -> None:\n+ ...\n+\n+ @overload\n+ def setRawHeaders(\n+ self, name: Union[str, bytes], values: Sequence[Union[str, bytes]]\n+ ) -> None:\n+ ...\n+\n+ def setRawHeaders(self, name: Union[str, bytes], values: object) -> None:\n \"\"\"\n Sets the raw representation of the given header.\n \n@@ -161,9 +174,8 @@\n @param values: A list of strings each one being a header value of\n the given name.\n \n- @raise TypeError: Raised if C{values} is not a L{list} of L{bytes}\n- or L{str} strings, or if C{name} is not a L{bytes} or\n- L{str} string.\n+ @raise TypeError: Raised if C{values} is not a sequence of L{bytes}\n+ or L{str}, or if C{name} is not L{bytes} or L{str}.\n \n @return: L{None}\n \"\"\"\n@@ -175,7 +187,7 @@\n \n if not isinstance(name, (bytes, str)):\n raise TypeError(\n- \"Header name is an instance of %r, \" \"not bytes or str\" % (type(name),)\n+ f\"Header name is an instance of {type(name)!r}, not bytes or str\"\n )\n \n for count, value in enumerate(values):\n@@ -200,7 +212,7 @@\n \n self._rawHeaders[_name] = encodedValues\n \n- def addRawHeader(self, name: AnyStr, value: AnyStr) -> None:\n+ def addRawHeader(self, name: Union[str, bytes], value: Union[str, bytes]) -> None:\n \"\"\"\n Add a new raw value for the given header.\n \n@@ -210,7 +222,7 @@\n \"\"\"\n if not isinstance(name, (bytes, str)):\n raise TypeError(\n- \"Header name is an instance of %r, \" \"not bytes or str\" % (type(name),)\n+ f\"Header name is an instance of {type(name)!r}, not bytes or str\"\n )\n \n if not isinstance(value, (bytes, str)):\n@@ -219,11 +231,13 @@\n \"bytes or str\" % (type(value),)\n )\n \n- # We secretly know getRawHeaders is really returning a list\n- values = cast(List[AnyStr], self.getRawHeaders(name, default=[]))\n- values.append(value)\n-\n- self.setRawHeaders(name, values)\n+ self._rawHeaders.setdefault(\n+ _sanitizeLinearWhitespace(self._encodeName(name)), []\n+ ).append(\n+ _sanitizeLinearWhitespace(\n+ value.encode(\"utf8\") if isinstance(value, str) else value\n+ )\n+ )\n \n @overload\n def getRawHeaders(self, name: AnyStr) -> Optional[Sequence[AnyStr]]:\n", "issue": "`t.w.http_headers.Headers` methods `addRawHeader` and `setRawHeaders` are typed `AnyStr`\nA call like `headers.addRawHeader(b'foo', 'bar')` is correct at runtime, but fails to typecheck because `AnyStr` is a type variable that constrains both parameters to be the same type.\r\n\r\nSimilar for 
`headers.setRawHeaders('foo', [b'ar'])`.\r\n\r\nThese calls are valid, so the constraint should be removed from `Headers`.\n", "before_files": [{"content": "# -*- test-case-name: twisted.web.test.test_http_headers -*-\n# Copyright (c) Twisted Matrix Laboratories.\n# See LICENSE for details.\n\n\"\"\"\nAn API for storing HTTP header names and values.\n\"\"\"\n\nfrom collections.abc import Sequence as _Sequence\nfrom typing import (\n AnyStr,\n Dict,\n Iterator,\n List,\n Mapping,\n Optional,\n Sequence,\n Tuple,\n TypeVar,\n Union,\n cast,\n overload,\n)\n\nfrom twisted.python.compat import cmp, comparable\n\n_T = TypeVar(\"_T\")\n\n\ndef _dashCapitalize(name: bytes) -> bytes:\n \"\"\"\n Return a byte string which is capitalized using '-' as a word separator.\n\n @param name: The name of the header to capitalize.\n\n @return: The given header capitalized using '-' as a word separator.\n \"\"\"\n return b\"-\".join([word.capitalize() for word in name.split(b\"-\")])\n\n\ndef _sanitizeLinearWhitespace(headerComponent: bytes) -> bytes:\n r\"\"\"\n Replace linear whitespace (C{\\n}, C{\\r\\n}, C{\\r}) in a header key\n or value with a single space.\n\n @param headerComponent: The header key or value to sanitize.\n\n @return: The sanitized header key or value.\n \"\"\"\n return b\" \".join(headerComponent.splitlines())\n\n\n@comparable\nclass Headers:\n \"\"\"\n Stores HTTP headers in a key and multiple value format.\n\n When passed L{str}, header names (e.g. 'Content-Type')\n are encoded using ISO-8859-1 and header values (e.g.\n 'text/html;charset=utf-8') are encoded using UTF-8. Some methods that return\n values will return them in the same type as the name given.\n\n If the header keys or values cannot be encoded or decoded using the rules\n above, using just L{bytes} arguments to the methods of this class will\n ensure no decoding or encoding is done, and L{Headers} will treat the keys\n and values as opaque byte strings.\n\n @cvar _caseMappings: A L{dict} that maps lowercase header names\n to their canonicalized representation.\n\n @ivar _rawHeaders: A L{dict} mapping header names as L{bytes} to L{list}s of\n header values as L{bytes}.\n \"\"\"\n\n _caseMappings = {\n b\"content-md5\": b\"Content-MD5\",\n b\"dnt\": b\"DNT\",\n b\"etag\": b\"ETag\",\n b\"p3p\": b\"P3P\",\n b\"te\": b\"TE\",\n b\"www-authenticate\": b\"WWW-Authenticate\",\n b\"x-xss-protection\": b\"X-XSS-Protection\",\n }\n\n def __init__(\n self,\n rawHeaders: Optional[Mapping[AnyStr, Sequence[AnyStr]]] = None,\n ):\n self._rawHeaders: Dict[bytes, List[bytes]] = {}\n if rawHeaders is not None:\n for name, values in rawHeaders.items():\n self.setRawHeaders(name, values)\n\n def __repr__(self) -> str:\n \"\"\"\n Return a string fully describing the headers set on this object.\n \"\"\"\n return \"{}({!r})\".format(\n self.__class__.__name__,\n self._rawHeaders,\n )\n\n def __cmp__(self, other):\n \"\"\"\n Define L{Headers} instances as being equal to each other if they have\n the same raw headers.\n \"\"\"\n if isinstance(other, Headers):\n return cmp(\n sorted(self._rawHeaders.items()), sorted(other._rawHeaders.items())\n )\n return NotImplemented\n\n def _encodeName(self, name: AnyStr) -> bytes:\n \"\"\"\n Encode the name of a header (eg 'Content-Type') to an ISO-8859-1 encoded\n bytestring if required.\n\n @param name: A HTTP header name\n\n @return: C{name}, encoded if required, lowercased\n \"\"\"\n if isinstance(name, str):\n return name.lower().encode(\"iso-8859-1\")\n return name.lower()\n\n def copy(self):\n \"\"\"\n 
Return a copy of itself with the same headers set.\n\n @return: A new L{Headers}\n \"\"\"\n return self.__class__(self._rawHeaders)\n\n def hasHeader(self, name: AnyStr) -> bool:\n \"\"\"\n Check for the existence of a given header.\n\n @param name: The name of the HTTP header to check for.\n\n @return: C{True} if the header exists, otherwise C{False}.\n \"\"\"\n return self._encodeName(name) in self._rawHeaders\n\n def removeHeader(self, name: AnyStr) -> None:\n \"\"\"\n Remove the named header from this header object.\n\n @param name: The name of the HTTP header to remove.\n\n @return: L{None}\n \"\"\"\n self._rawHeaders.pop(self._encodeName(name), None)\n\n def setRawHeaders(self, name: AnyStr, values: Sequence[AnyStr]) -> None:\n \"\"\"\n Sets the raw representation of the given header.\n\n @param name: The name of the HTTP header to set the values for.\n\n @param values: A list of strings each one being a header value of\n the given name.\n\n @raise TypeError: Raised if C{values} is not a L{list} of L{bytes}\n or L{str} strings, or if C{name} is not a L{bytes} or\n L{str} string.\n\n @return: L{None}\n \"\"\"\n if not isinstance(values, _Sequence):\n raise TypeError(\n \"Header entry %r should be sequence but found \"\n \"instance of %r instead\" % (name, type(values))\n )\n\n if not isinstance(name, (bytes, str)):\n raise TypeError(\n \"Header name is an instance of %r, \" \"not bytes or str\" % (type(name),)\n )\n\n for count, value in enumerate(values):\n if not isinstance(value, (bytes, str)):\n raise TypeError(\n \"Header value at position %s is an instance of %r, not \"\n \"bytes or str\"\n % (\n count,\n type(value),\n )\n )\n\n _name = _sanitizeLinearWhitespace(self._encodeName(name))\n encodedValues: List[bytes] = []\n for v in values:\n if isinstance(v, str):\n _v = v.encode(\"utf8\")\n else:\n _v = v\n encodedValues.append(_sanitizeLinearWhitespace(_v))\n\n self._rawHeaders[_name] = encodedValues\n\n def addRawHeader(self, name: AnyStr, value: AnyStr) -> None:\n \"\"\"\n Add a new raw value for the given header.\n\n @param name: The name of the header for which to set the value.\n\n @param value: The value to set for the named header.\n \"\"\"\n if not isinstance(name, (bytes, str)):\n raise TypeError(\n \"Header name is an instance of %r, \" \"not bytes or str\" % (type(name),)\n )\n\n if not isinstance(value, (bytes, str)):\n raise TypeError(\n \"Header value is an instance of %r, not \"\n \"bytes or str\" % (type(value),)\n )\n\n # We secretly know getRawHeaders is really returning a list\n values = cast(List[AnyStr], self.getRawHeaders(name, default=[]))\n values.append(value)\n\n self.setRawHeaders(name, values)\n\n @overload\n def getRawHeaders(self, name: AnyStr) -> Optional[Sequence[AnyStr]]:\n ...\n\n @overload\n def getRawHeaders(self, name: AnyStr, default: _T) -> Union[Sequence[AnyStr], _T]:\n ...\n\n def getRawHeaders(\n self, name: AnyStr, default: Optional[_T] = None\n ) -> Union[Sequence[AnyStr], Optional[_T]]:\n \"\"\"\n Returns a sequence of headers matching the given name as the raw string\n given.\n\n @param name: The name of the HTTP header to get the values of.\n\n @param default: The value to return if no header with the given C{name}\n exists.\n\n @return: If the named header is present, a sequence of its\n values. 
Otherwise, C{default}.\n \"\"\"\n encodedName = self._encodeName(name)\n values = self._rawHeaders.get(encodedName, [])\n if not values:\n return default\n\n if isinstance(name, str):\n return [v.decode(\"utf8\") for v in values]\n return values\n\n def getAllRawHeaders(self) -> Iterator[Tuple[bytes, Sequence[bytes]]]:\n \"\"\"\n Return an iterator of key, value pairs of all headers contained in this\n object, as L{bytes}. The keys are capitalized in canonical\n capitalization.\n \"\"\"\n for k, v in self._rawHeaders.items():\n yield self._canonicalNameCaps(k), v\n\n def _canonicalNameCaps(self, name: bytes) -> bytes:\n \"\"\"\n Return the canonical name for the given header.\n\n @param name: The all-lowercase header name to capitalize in its\n canonical form.\n\n @return: The canonical name of the header.\n \"\"\"\n return self._caseMappings.get(name, _dashCapitalize(name))\n\n\n__all__ = [\"Headers\"]\n", "path": "src/twisted/web/http_headers.py"}], "after_files": [{"content": "# -*- test-case-name: twisted.web.test.test_http_headers -*-\n# Copyright (c) Twisted Matrix Laboratories.\n# See LICENSE for details.\n\n\"\"\"\nAn API for storing HTTP header names and values.\n\"\"\"\n\nfrom collections.abc import Sequence as _Sequence\nfrom typing import (\n AnyStr,\n Dict,\n Iterator,\n List,\n Mapping,\n Optional,\n Sequence,\n Tuple,\n TypeVar,\n Union,\n overload,\n)\n\nfrom twisted.python.compat import cmp, comparable\n\n_T = TypeVar(\"_T\")\n\n\ndef _dashCapitalize(name: bytes) -> bytes:\n \"\"\"\n Return a byte string which is capitalized using '-' as a word separator.\n\n @param name: The name of the header to capitalize.\n\n @return: The given header capitalized using '-' as a word separator.\n \"\"\"\n return b\"-\".join([word.capitalize() for word in name.split(b\"-\")])\n\n\ndef _sanitizeLinearWhitespace(headerComponent: bytes) -> bytes:\n r\"\"\"\n Replace linear whitespace (C{\\n}, C{\\r\\n}, C{\\r}) in a header key\n or value with a single space.\n\n @param headerComponent: The header key or value to sanitize.\n\n @return: The sanitized header key or value.\n \"\"\"\n return b\" \".join(headerComponent.splitlines())\n\n\n@comparable\nclass Headers:\n \"\"\"\n Stores HTTP headers in a key and multiple value format.\n\n When passed L{str}, header names (e.g. 'Content-Type')\n are encoded using ISO-8859-1 and header values (e.g.\n 'text/html;charset=utf-8') are encoded using UTF-8. 
Some methods that return\n values will return them in the same type as the name given.\n\n If the header keys or values cannot be encoded or decoded using the rules\n above, using just L{bytes} arguments to the methods of this class will\n ensure no decoding or encoding is done, and L{Headers} will treat the keys\n and values as opaque byte strings.\n\n @cvar _caseMappings: A L{dict} that maps lowercase header names\n to their canonicalized representation.\n\n @ivar _rawHeaders: A L{dict} mapping header names as L{bytes} to L{list}s of\n header values as L{bytes}.\n \"\"\"\n\n _caseMappings = {\n b\"content-md5\": b\"Content-MD5\",\n b\"dnt\": b\"DNT\",\n b\"etag\": b\"ETag\",\n b\"p3p\": b\"P3P\",\n b\"te\": b\"TE\",\n b\"www-authenticate\": b\"WWW-Authenticate\",\n b\"x-xss-protection\": b\"X-XSS-Protection\",\n }\n\n def __init__(\n self,\n rawHeaders: Optional[Mapping[AnyStr, Sequence[AnyStr]]] = None,\n ) -> None:\n self._rawHeaders: Dict[bytes, List[bytes]] = {}\n if rawHeaders is not None:\n for name, values in rawHeaders.items():\n self.setRawHeaders(name, values)\n\n def __repr__(self) -> str:\n \"\"\"\n Return a string fully describing the headers set on this object.\n \"\"\"\n return \"{}({!r})\".format(\n self.__class__.__name__,\n self._rawHeaders,\n )\n\n def __cmp__(self, other):\n \"\"\"\n Define L{Headers} instances as being equal to each other if they have\n the same raw headers.\n \"\"\"\n if isinstance(other, Headers):\n return cmp(\n sorted(self._rawHeaders.items()), sorted(other._rawHeaders.items())\n )\n return NotImplemented\n\n def _encodeName(self, name: Union[str, bytes]) -> bytes:\n \"\"\"\n Encode the name of a header (eg 'Content-Type') to an ISO-8859-1 encoded\n bytestring if required.\n\n @param name: A HTTP header name\n\n @return: C{name}, encoded if required, lowercased\n \"\"\"\n if isinstance(name, str):\n return name.lower().encode(\"iso-8859-1\")\n return name.lower()\n\n def copy(self):\n \"\"\"\n Return a copy of itself with the same headers set.\n\n @return: A new L{Headers}\n \"\"\"\n return self.__class__(self._rawHeaders)\n\n def hasHeader(self, name: AnyStr) -> bool:\n \"\"\"\n Check for the existence of a given header.\n\n @param name: The name of the HTTP header to check for.\n\n @return: C{True} if the header exists, otherwise C{False}.\n \"\"\"\n return self._encodeName(name) in self._rawHeaders\n\n def removeHeader(self, name: AnyStr) -> None:\n \"\"\"\n Remove the named header from this header object.\n\n @param name: The name of the HTTP header to remove.\n\n @return: L{None}\n \"\"\"\n self._rawHeaders.pop(self._encodeName(name), None)\n\n @overload\n def setRawHeaders(self, name: Union[str, bytes], values: Sequence[bytes]) -> None:\n ...\n\n @overload\n def setRawHeaders(self, name: Union[str, bytes], values: Sequence[str]) -> None:\n ...\n\n @overload\n def setRawHeaders(\n self, name: Union[str, bytes], values: Sequence[Union[str, bytes]]\n ) -> None:\n ...\n\n def setRawHeaders(self, name: Union[str, bytes], values: object) -> None:\n \"\"\"\n Sets the raw representation of the given header.\n\n @param name: The name of the HTTP header to set the values for.\n\n @param values: A list of strings each one being a header value of\n the given name.\n\n @raise TypeError: Raised if C{values} is not a sequence of L{bytes}\n or L{str}, or if C{name} is not L{bytes} or L{str}.\n\n @return: L{None}\n \"\"\"\n if not isinstance(values, _Sequence):\n raise TypeError(\n \"Header entry %r should be sequence but found \"\n \"instance of %r 
instead\" % (name, type(values))\n )\n\n if not isinstance(name, (bytes, str)):\n raise TypeError(\n f\"Header name is an instance of {type(name)!r}, not bytes or str\"\n )\n\n for count, value in enumerate(values):\n if not isinstance(value, (bytes, str)):\n raise TypeError(\n \"Header value at position %s is an instance of %r, not \"\n \"bytes or str\"\n % (\n count,\n type(value),\n )\n )\n\n _name = _sanitizeLinearWhitespace(self._encodeName(name))\n encodedValues: List[bytes] = []\n for v in values:\n if isinstance(v, str):\n _v = v.encode(\"utf8\")\n else:\n _v = v\n encodedValues.append(_sanitizeLinearWhitespace(_v))\n\n self._rawHeaders[_name] = encodedValues\n\n def addRawHeader(self, name: Union[str, bytes], value: Union[str, bytes]) -> None:\n \"\"\"\n Add a new raw value for the given header.\n\n @param name: The name of the header for which to set the value.\n\n @param value: The value to set for the named header.\n \"\"\"\n if not isinstance(name, (bytes, str)):\n raise TypeError(\n f\"Header name is an instance of {type(name)!r}, not bytes or str\"\n )\n\n if not isinstance(value, (bytes, str)):\n raise TypeError(\n \"Header value is an instance of %r, not \"\n \"bytes or str\" % (type(value),)\n )\n\n self._rawHeaders.setdefault(\n _sanitizeLinearWhitespace(self._encodeName(name)), []\n ).append(\n _sanitizeLinearWhitespace(\n value.encode(\"utf8\") if isinstance(value, str) else value\n )\n )\n\n @overload\n def getRawHeaders(self, name: AnyStr) -> Optional[Sequence[AnyStr]]:\n ...\n\n @overload\n def getRawHeaders(self, name: AnyStr, default: _T) -> Union[Sequence[AnyStr], _T]:\n ...\n\n def getRawHeaders(\n self, name: AnyStr, default: Optional[_T] = None\n ) -> Union[Sequence[AnyStr], Optional[_T]]:\n \"\"\"\n Returns a sequence of headers matching the given name as the raw string\n given.\n\n @param name: The name of the HTTP header to get the values of.\n\n @param default: The value to return if no header with the given C{name}\n exists.\n\n @return: If the named header is present, a sequence of its\n values. Otherwise, C{default}.\n \"\"\"\n encodedName = self._encodeName(name)\n values = self._rawHeaders.get(encodedName, [])\n if not values:\n return default\n\n if isinstance(name, str):\n return [v.decode(\"utf8\") for v in values]\n return values\n\n def getAllRawHeaders(self) -> Iterator[Tuple[bytes, Sequence[bytes]]]:\n \"\"\"\n Return an iterator of key, value pairs of all headers contained in this\n object, as L{bytes}. The keys are capitalized in canonical\n capitalization.\n \"\"\"\n for k, v in self._rawHeaders.items():\n yield self._canonicalNameCaps(k), v\n\n def _canonicalNameCaps(self, name: bytes) -> bytes:\n \"\"\"\n Return the canonical name for the given header.\n\n @param name: The all-lowercase header name to capitalize in its\n canonical form.\n\n @return: The canonical name of the header.\n \"\"\"\n return self._caseMappings.get(name, _dashCapitalize(name))\n\n\n__all__ = [\"Headers\"]\n", "path": "src/twisted/web/http_headers.py"}]}
| 3,172 | 1,017 |
gh_patches_debug_20004
|
rasdani/github-patches
|
git_diff
|
nautobot__nautobot-5739
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exception thrown when deleting Software Version or Image from list views
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Nautobot version (Docker tag too if applicable): 2.2.3
* Python version: 3.11
* Database platform, version: PostgreSQL
* Middleware(s):
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
### Steps to Reproduce
1. Create a Software Version object
2. Create a Software Image File object and associate it with the Software Version object.
3. Associate a device with the new software and image objects
4. Delete either of the new objects using bulk delete from their respective list views.
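A rough Django shell sketch of the steps above (model and field names are assumptions and may not match the exact Nautobot 2.2 schema):

```
# Hypothetical reproduction sketch; field names, required attributes and the
# status lookup are assumptions, not verified against the Nautobot 2.2 models.
from nautobot.dcim.models import Device, Platform, SoftwareImageFile, SoftwareVersion
from nautobot.extras.models import Status

active = Status.objects.get(name="Active")
platform = Platform.objects.first()

version = SoftwareVersion.objects.create(platform=platform, version="1.0.0", status=active)
image = SoftwareImageFile.objects.create(
    software_version=version, image_file_name="firmware-1.0.0.bin", status=active
)

device = Device.objects.first()
device.software_version = version          # step 3: associate a device
device.software_image_files.add(image)
device.save()

# Step 4 roughly corresponds to what the bulk-delete list view does:
SoftwareVersion.objects.filter(pk=version.pk).delete()
```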
<!-- What did you expect to happen? -->
### Expected Behavior
A warning should be displayed because of the protected relationships.
<!-- What happened instead? -->
### Observed Behavior
AttributeError and traceback
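
The report omits the traceback, but judging from the `handle_protectederror` helper included below, the likely failure point is the unconditional `get_absolute_url()` call on each protected object: dependents without a detail view (such as M2M through-table rows) do not define that method. A minimal sketch of the suspected failure mode, with `qs` standing in for the queryset selected in the bulk-delete view:

```
# Illustrative only: "qs" is a placeholder for the SoftwareVersion (or
# SoftwareImageFile) queryset chosen in the list view's bulk-delete form.
from django.db.models.deletion import ProtectedError

try:
    qs.delete()
except ProtectedError as e:
    for dependent in e.protected_objects:
        # Objects without a detail view have no get_absolute_url(), so this
        # raises AttributeError before the friendly warning can be rendered.
        url = dependent.get_absolute_url()
```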
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nautobot/core/views/utils.py`
Content:
```
1 import datetime
2 from io import BytesIO
3 import urllib.parse
4
5 from django.contrib import messages
6 from django.core.exceptions import FieldError, ValidationError
7 from django.db.models import ForeignKey
8 from django.utils.html import format_html, format_html_join
9 from django.utils.safestring import mark_safe
10 from rest_framework import exceptions, serializers
11
12 from nautobot.core.api.fields import ChoiceField, ContentTypeField, TimeZoneSerializerField
13 from nautobot.core.api.parsers import NautobotCSVParser
14 from nautobot.core.models.utils import is_taggable
15 from nautobot.core.utils.data import is_uuid
16 from nautobot.core.utils.filtering import get_filter_field_label
17 from nautobot.core.utils.lookup import get_form_for_model
18
19
20 def check_filter_for_display(filters, field_name, values):
21 """
22 Return any additional context data for the template.
23
24 Args:
25 filters (OrderedDict): The output of `.get_filters()` of a desired FilterSet
26 field_name (str): The name of the filter to get a label for and lookup values
27 values (list[str]): List of strings that may be PKs to look up
28
29 Returns:
30 (dict): A dict containing:
31 - name: (str) Field name
32 - display: (str) Resolved field name, whether that's a field label or fallback to inputted `field_name` if label unavailable
33 - values: (list) List of dictionaries with the same `name` and `display` keys
34 """
35 values = values if isinstance(values, (list, tuple)) else [values]
36
37 resolved_filter = {
38 "name": field_name,
39 "display": field_name,
40 "values": [{"name": value, "display": value} for value in values],
41 }
42
43 if field_name not in filters.keys():
44 return resolved_filter
45
46 filter_field = filters[field_name]
47
48 resolved_filter["display"] = get_filter_field_label(filter_field)
49
50 if len(values) == 0 or not hasattr(filter_field, "queryset") or not is_uuid(values[0]):
51 return resolved_filter
52 else:
53 try:
54 new_values = []
55 for value in filter_field.queryset.filter(pk__in=values):
56 new_values.append({"name": str(value.pk), "display": getattr(value, "display", str(value))})
57 resolved_filter["values"] = new_values
58 except (FieldError, AttributeError):
59 pass
60
61 return resolved_filter
62
63
64 # 2.2 TODO: remove this method as it's no longer used in core.
65 def csv_format(data):
66 """
67 Convert the given list of data to a CSV row string.
68
69 Encapsulate any data which contains a comma within double quotes.
70
71 Obsolete, as CSV rendering in Nautobot core is now handled by nautobot.core.api.renderers.NautobotCSVRenderer.
72 """
73 csv = []
74 for value in data:
75 # Represent None or False with empty string
76 if value is None or value is False:
77 csv.append("")
78 continue
79
80 # Convert dates to ISO format
81 if isinstance(value, (datetime.date, datetime.datetime)):
82 value = value.isoformat()
83
84 # Force conversion to string first so we can check for any commas
85 if not isinstance(value, str):
86 value = f"{value}"
87
88 # Double-quote the value if it contains a comma or line break
89 if "," in value or "\n" in value:
90 value = value.replace('"', '""') # Escape double-quotes
91 csv.append(f'"{value}"')
92 else:
93 csv.append(f"{value}")
94
95 return ",".join(csv)
96
97
98 def get_csv_form_fields_from_serializer_class(serializer_class):
99 """From the given serializer class, build a list of field dicts suitable for rendering in the CSV import form."""
100 serializer = serializer_class(context={"request": None, "depth": 0})
101 fields = []
102 # Note lots of "noqa: S308" in this function. That's `suspicious-mark-safe-usage`, but in all of the below cases
103 # we control the input string and it's known to be safe, so mark_safe() is being used correctly here.
104 for field_name, field in serializer.fields.items():
105 if field.read_only:
106 continue
107 if field_name == "custom_fields":
108 from nautobot.extras.choices import CustomFieldTypeChoices
109 from nautobot.extras.models import CustomField
110
111 cfs = CustomField.objects.get_for_model(serializer_class.Meta.model)
112 for cf in cfs:
113 cf_form_field = cf.to_form_field(set_initial=False)
114 field_info = {
115 "name": cf.add_prefix_to_cf_key(),
116 "required": cf_form_field.required,
117 "foreign_key": False,
118 "label": cf_form_field.label,
119 "help_text": cf_form_field.help_text,
120 }
121 if cf.type == CustomFieldTypeChoices.TYPE_BOOLEAN:
122 field_info["format"] = mark_safe("<code>true</code> or <code>false</code>") # noqa: S308
123 elif cf.type == CustomFieldTypeChoices.TYPE_DATE:
124 field_info["format"] = mark_safe("<code>YYYY-MM-DD</code>") # noqa: S308
125 elif cf.type == CustomFieldTypeChoices.TYPE_SELECT:
126 field_info["choices"] = {cfc.value: cfc.value for cfc in cf.custom_field_choices.all()}
127 elif cf.type == CustomFieldTypeChoices.TYPE_MULTISELECT:
128 field_info["format"] = mark_safe('<code>"value,value"</code>') # noqa: S308
129 field_info["choices"] = {cfc.value: cfc.value for cfc in cf.custom_field_choices.all()}
130 fields.append(field_info)
131 continue
132
133 field_info = {
134 "name": field_name,
135 "required": field.required,
136 "foreign_key": False,
137 "label": field.label,
138 "help_text": field.help_text,
139 }
140 if isinstance(field, serializers.BooleanField):
141 field_info["format"] = mark_safe("<code>true</code> or <code>false</code>") # noqa: S308
142 elif isinstance(field, serializers.DateField):
143 field_info["format"] = mark_safe("<code>YYYY-MM-DD</code>") # noqa: S308
144 elif isinstance(field, TimeZoneSerializerField):
145 field_info["format"] = mark_safe( # noqa: S308
146 '<a href="https://en.wikipedia.org/wiki/List_of_tz_database_time_zones">available options</a>'
147 )
148 elif isinstance(field, serializers.ManyRelatedField):
149 if field.field_name == "tags":
150 field_info["format"] = mark_safe('<code>"name,name"</code> or <code>"UUID,UUID"</code>') # noqa: S308
151 elif isinstance(field.child_relation, ContentTypeField):
152 field_info["format"] = mark_safe('<code>"app_label.model,app_label.model"</code>') # noqa: S308
153 else:
154 field_info["foreign_key"] = field.child_relation.queryset.model._meta.label_lower
155 field_info["format"] = mark_safe('<code>"UUID,UUID"</code> or combination of fields') # noqa: S308
156 elif isinstance(field, serializers.RelatedField):
157 if isinstance(field, ContentTypeField):
158 field_info["format"] = mark_safe("<code>app_label.model</code>") # noqa: S308
159 else:
160 field_info["foreign_key"] = field.queryset.model._meta.label_lower
161 field_info["format"] = mark_safe("<code>UUID</code> or combination of fields") # noqa: S308
162 elif isinstance(field, (serializers.ListField, serializers.MultipleChoiceField)):
163 field_info["format"] = mark_safe('<code>"value,value"</code>') # noqa: S308
164 elif isinstance(field, (serializers.DictField, serializers.JSONField)):
165 pass # Not trivial to specify a format as it could be a JSON dict or a comma-separated string
166
167 if isinstance(field, ChoiceField):
168 field_info["choices"] = field.choices
169
170 fields.append(field_info)
171
172 # Move all required fields to the start of the list
173 # TODO this ordering should be defined by the serializer instead...
174 fields = sorted(fields, key=lambda info: 1 if info["required"] else 2)
175 return fields
176
177
178 def import_csv_helper(*, request, form, serializer_class):
179 field_name = "csv_file" if request.FILES else "csv_data"
180 csvtext = form.cleaned_data[field_name]
181 try:
182 data = NautobotCSVParser().parse(
183 stream=BytesIO(csvtext.encode("utf-8")),
184 parser_context={"request": request, "serializer_class": serializer_class},
185 )
186 new_objs = []
187 validation_failed = False
188 for row, entry in enumerate(data, start=1):
189 serializer = serializer_class(data=entry, context={"request": request})
190 if serializer.is_valid():
191 new_objs.append(serializer.save())
192 else:
193 validation_failed = True
194 for field, err in serializer.errors.items():
195 form.add_error(field_name, f"Row {row}: {field}: {err[0]}")
196 except exceptions.ParseError as exc:
197 validation_failed = True
198 form.add_error(None, str(exc))
199
200 if validation_failed:
201 raise ValidationError("")
202
203 return new_objs
204
205
206 def handle_protectederror(obj_list, request, e):
207 """
208 Generate a user-friendly error message in response to a ProtectedError exception.
209 """
210 protected_objects = list(e.protected_objects)
211 protected_count = len(protected_objects) if len(protected_objects) <= 50 else "More than 50"
212 err_message = format_html(
213 "Unable to delete <strong>{}</strong>. {} dependent objects were found: ",
214 ", ".join(str(obj) for obj in obj_list),
215 protected_count,
216 )
217
218 # Append dependent objects to error message
219 err_message += format_html_join(
220 ", ",
221 '<a href="{}">{}</a>',
222 ((dependent.get_absolute_url(), dependent) for dependent in protected_objects[:50]),
223 )
224
225 messages.error(request, err_message)
226
227
228 def prepare_cloned_fields(instance):
229 """
230 Compile an object's `clone_fields` list into a string of URL query parameters. Tags are automatically cloned where
231 applicable.
232 """
233 form_class = get_form_for_model(instance)
234 form = form_class() if form_class is not None else None
235 params = []
236 for field_name in getattr(instance, "clone_fields", []):
237 field = instance._meta.get_field(field_name)
238 field_value = field.value_from_object(instance)
239
240 # For foreign-key fields, if the ModelForm's field has a defined `to_field_name`,
241 # use that field from the related object instead of its PK.
242 # Example: Location.parent, LocationForm().fields["parent"].to_field_name = "name", so use name rather than PK.
243 if isinstance(field, ForeignKey):
244 related_object = getattr(instance, field_name)
245 if (
246 related_object is not None
247 and form is not None
248 and field_name in form.fields
249 and hasattr(form.fields[field_name], "to_field_name")
250 and form.fields[field_name].to_field_name is not None
251 ):
252 field_value = getattr(related_object, form.fields[field_name].to_field_name)
253
254 # Swap out False with URL-friendly value
255 if field_value is False:
256 field_value = ""
257
258 # This is likely an m2m field
259 if isinstance(field_value, list):
260 for fv in field_value:
261 item_value = getattr(fv, "pk", str(fv)) # pk or str()
262 params.append((field_name, item_value))
263
264 # Omit empty values
265 elif field_value not in (None, ""):
266 params.append((field_name, field_value))
267
268 # Copy tags
269 if is_taggable(instance):
270 for tag in instance.tags.all():
271 params.append(("tags", tag.pk))
272
273 # Encode the parameters into a URL query string
274 param_string = urllib.parse.urlencode(params)
275
276 return param_string
277
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nautobot/core/views/utils.py b/nautobot/core/views/utils.py
--- a/nautobot/core/views/utils.py
+++ b/nautobot/core/views/utils.py
@@ -215,11 +215,28 @@
protected_count,
)
+ # Format objects based on whether they have a detail view/absolute url
+ objects_with_absolute_url = []
+ objects_without_absolute_url = []
# Append dependent objects to error message
+ for dependent in protected_objects[:50]:
+ try:
+ dependent.get_absolute_url()
+ objects_with_absolute_url.append(dependent)
+ except AttributeError:
+ objects_without_absolute_url.append(dependent)
+
err_message += format_html_join(
", ",
'<a href="{}">{}</a>',
- ((dependent.get_absolute_url(), dependent) for dependent in protected_objects[:50]),
+ ((dependent.get_absolute_url(), dependent) for dependent in objects_with_absolute_url),
+ )
+ if objects_with_absolute_url and objects_without_absolute_url:
+ err_message += format_html(", ")
+ err_message += format_html_join(
+ ", ",
+ "<span>{}</span>",
+ ((dependent,) for dependent in objects_without_absolute_url),
)
messages.error(request, err_message)
|
{"golden_diff": "diff --git a/nautobot/core/views/utils.py b/nautobot/core/views/utils.py\n--- a/nautobot/core/views/utils.py\n+++ b/nautobot/core/views/utils.py\n@@ -215,11 +215,28 @@\n protected_count,\n )\n \n+ # Format objects based on whether they have a detail view/absolute url\n+ objects_with_absolute_url = []\n+ objects_without_absolute_url = []\n # Append dependent objects to error message\n+ for dependent in protected_objects[:50]:\n+ try:\n+ dependent.get_absolute_url()\n+ objects_with_absolute_url.append(dependent)\n+ except AttributeError:\n+ objects_without_absolute_url.append(dependent)\n+\n err_message += format_html_join(\n \", \",\n '<a href=\"{}\">{}</a>',\n- ((dependent.get_absolute_url(), dependent) for dependent in protected_objects[:50]),\n+ ((dependent.get_absolute_url(), dependent) for dependent in objects_with_absolute_url),\n+ )\n+ if objects_with_absolute_url and objects_without_absolute_url:\n+ err_message += format_html(\", \")\n+ err_message += format_html_join(\n+ \", \",\n+ \"<span>{}</span>\",\n+ ((dependent,) for dependent in objects_without_absolute_url),\n )\n \n messages.error(request, err_message)\n", "issue": "Exception thrown when deleting Software Version or Image from list views\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Nautobot version (Docker tag too if applicable): 2.2.3\r\n* Python version: 3.11\r\n* Database platform, version: PostgreSQL\r\n* Middleware(s):\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n### Steps to Reproduce\r\n1. Create a Software Version object\r\n2. Create a Software Image File object and associate it with the Software Version object.\r\n3. Associate a device with the new software and image objects\r\n4. Delete either of the new objects using bulk delete from their respective list views.\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\nAn warning should be displayed because of the protected relationships.\r\n\r\n<!-- What happened instead? 
-->\r\n### Observed Behavior\r\nAttributeError and traceback\r\n\n", "before_files": [{"content": "import datetime\nfrom io import BytesIO\nimport urllib.parse\n\nfrom django.contrib import messages\nfrom django.core.exceptions import FieldError, ValidationError\nfrom django.db.models import ForeignKey\nfrom django.utils.html import format_html, format_html_join\nfrom django.utils.safestring import mark_safe\nfrom rest_framework import exceptions, serializers\n\nfrom nautobot.core.api.fields import ChoiceField, ContentTypeField, TimeZoneSerializerField\nfrom nautobot.core.api.parsers import NautobotCSVParser\nfrom nautobot.core.models.utils import is_taggable\nfrom nautobot.core.utils.data import is_uuid\nfrom nautobot.core.utils.filtering import get_filter_field_label\nfrom nautobot.core.utils.lookup import get_form_for_model\n\n\ndef check_filter_for_display(filters, field_name, values):\n \"\"\"\n Return any additional context data for the template.\n\n Args:\n filters (OrderedDict): The output of `.get_filters()` of a desired FilterSet\n field_name (str): The name of the filter to get a label for and lookup values\n values (list[str]): List of strings that may be PKs to look up\n\n Returns:\n (dict): A dict containing:\n - name: (str) Field name\n - display: (str) Resolved field name, whether that's a field label or fallback to inputted `field_name` if label unavailable\n - values: (list) List of dictionaries with the same `name` and `display` keys\n \"\"\"\n values = values if isinstance(values, (list, tuple)) else [values]\n\n resolved_filter = {\n \"name\": field_name,\n \"display\": field_name,\n \"values\": [{\"name\": value, \"display\": value} for value in values],\n }\n\n if field_name not in filters.keys():\n return resolved_filter\n\n filter_field = filters[field_name]\n\n resolved_filter[\"display\"] = get_filter_field_label(filter_field)\n\n if len(values) == 0 or not hasattr(filter_field, \"queryset\") or not is_uuid(values[0]):\n return resolved_filter\n else:\n try:\n new_values = []\n for value in filter_field.queryset.filter(pk__in=values):\n new_values.append({\"name\": str(value.pk), \"display\": getattr(value, \"display\", str(value))})\n resolved_filter[\"values\"] = new_values\n except (FieldError, AttributeError):\n pass\n\n return resolved_filter\n\n\n# 2.2 TODO: remove this method as it's no longer used in core.\ndef csv_format(data):\n \"\"\"\n Convert the given list of data to a CSV row string.\n\n Encapsulate any data which contains a comma within double quotes.\n\n Obsolete, as CSV rendering in Nautobot core is now handled by nautobot.core.api.renderers.NautobotCSVRenderer.\n \"\"\"\n csv = []\n for value in data:\n # Represent None or False with empty string\n if value is None or value is False:\n csv.append(\"\")\n continue\n\n # Convert dates to ISO format\n if isinstance(value, (datetime.date, datetime.datetime)):\n value = value.isoformat()\n\n # Force conversion to string first so we can check for any commas\n if not isinstance(value, str):\n value = f\"{value}\"\n\n # Double-quote the value if it contains a comma or line break\n if \",\" in value or \"\\n\" in value:\n value = value.replace('\"', '\"\"') # Escape double-quotes\n csv.append(f'\"{value}\"')\n else:\n csv.append(f\"{value}\")\n\n return \",\".join(csv)\n\n\ndef get_csv_form_fields_from_serializer_class(serializer_class):\n \"\"\"From the given serializer class, build a list of field dicts suitable for rendering in the CSV import form.\"\"\"\n serializer = 
serializer_class(context={\"request\": None, \"depth\": 0})\n fields = []\n # Note lots of \"noqa: S308\" in this function. That's `suspicious-mark-safe-usage`, but in all of the below cases\n # we control the input string and it's known to be safe, so mark_safe() is being used correctly here.\n for field_name, field in serializer.fields.items():\n if field.read_only:\n continue\n if field_name == \"custom_fields\":\n from nautobot.extras.choices import CustomFieldTypeChoices\n from nautobot.extras.models import CustomField\n\n cfs = CustomField.objects.get_for_model(serializer_class.Meta.model)\n for cf in cfs:\n cf_form_field = cf.to_form_field(set_initial=False)\n field_info = {\n \"name\": cf.add_prefix_to_cf_key(),\n \"required\": cf_form_field.required,\n \"foreign_key\": False,\n \"label\": cf_form_field.label,\n \"help_text\": cf_form_field.help_text,\n }\n if cf.type == CustomFieldTypeChoices.TYPE_BOOLEAN:\n field_info[\"format\"] = mark_safe(\"<code>true</code> or <code>false</code>\") # noqa: S308\n elif cf.type == CustomFieldTypeChoices.TYPE_DATE:\n field_info[\"format\"] = mark_safe(\"<code>YYYY-MM-DD</code>\") # noqa: S308\n elif cf.type == CustomFieldTypeChoices.TYPE_SELECT:\n field_info[\"choices\"] = {cfc.value: cfc.value for cfc in cf.custom_field_choices.all()}\n elif cf.type == CustomFieldTypeChoices.TYPE_MULTISELECT:\n field_info[\"format\"] = mark_safe('<code>\"value,value\"</code>') # noqa: S308\n field_info[\"choices\"] = {cfc.value: cfc.value for cfc in cf.custom_field_choices.all()}\n fields.append(field_info)\n continue\n\n field_info = {\n \"name\": field_name,\n \"required\": field.required,\n \"foreign_key\": False,\n \"label\": field.label,\n \"help_text\": field.help_text,\n }\n if isinstance(field, serializers.BooleanField):\n field_info[\"format\"] = mark_safe(\"<code>true</code> or <code>false</code>\") # noqa: S308\n elif isinstance(field, serializers.DateField):\n field_info[\"format\"] = mark_safe(\"<code>YYYY-MM-DD</code>\") # noqa: S308\n elif isinstance(field, TimeZoneSerializerField):\n field_info[\"format\"] = mark_safe( # noqa: S308\n '<a href=\"https://en.wikipedia.org/wiki/List_of_tz_database_time_zones\">available options</a>'\n )\n elif isinstance(field, serializers.ManyRelatedField):\n if field.field_name == \"tags\":\n field_info[\"format\"] = mark_safe('<code>\"name,name\"</code> or <code>\"UUID,UUID\"</code>') # noqa: S308\n elif isinstance(field.child_relation, ContentTypeField):\n field_info[\"format\"] = mark_safe('<code>\"app_label.model,app_label.model\"</code>') # noqa: S308\n else:\n field_info[\"foreign_key\"] = field.child_relation.queryset.model._meta.label_lower\n field_info[\"format\"] = mark_safe('<code>\"UUID,UUID\"</code> or combination of fields') # noqa: S308\n elif isinstance(field, serializers.RelatedField):\n if isinstance(field, ContentTypeField):\n field_info[\"format\"] = mark_safe(\"<code>app_label.model</code>\") # noqa: S308\n else:\n field_info[\"foreign_key\"] = field.queryset.model._meta.label_lower\n field_info[\"format\"] = mark_safe(\"<code>UUID</code> or combination of fields\") # noqa: S308\n elif isinstance(field, (serializers.ListField, serializers.MultipleChoiceField)):\n field_info[\"format\"] = mark_safe('<code>\"value,value\"</code>') # noqa: S308\n elif isinstance(field, (serializers.DictField, serializers.JSONField)):\n pass # Not trivial to specify a format as it could be a JSON dict or a comma-separated string\n\n if isinstance(field, ChoiceField):\n field_info[\"choices\"] = field.choices\n\n 
fields.append(field_info)\n\n # Move all required fields to the start of the list\n # TODO this ordering should be defined by the serializer instead...\n fields = sorted(fields, key=lambda info: 1 if info[\"required\"] else 2)\n return fields\n\n\ndef import_csv_helper(*, request, form, serializer_class):\n field_name = \"csv_file\" if request.FILES else \"csv_data\"\n csvtext = form.cleaned_data[field_name]\n try:\n data = NautobotCSVParser().parse(\n stream=BytesIO(csvtext.encode(\"utf-8\")),\n parser_context={\"request\": request, \"serializer_class\": serializer_class},\n )\n new_objs = []\n validation_failed = False\n for row, entry in enumerate(data, start=1):\n serializer = serializer_class(data=entry, context={\"request\": request})\n if serializer.is_valid():\n new_objs.append(serializer.save())\n else:\n validation_failed = True\n for field, err in serializer.errors.items():\n form.add_error(field_name, f\"Row {row}: {field}: {err[0]}\")\n except exceptions.ParseError as exc:\n validation_failed = True\n form.add_error(None, str(exc))\n\n if validation_failed:\n raise ValidationError(\"\")\n\n return new_objs\n\n\ndef handle_protectederror(obj_list, request, e):\n \"\"\"\n Generate a user-friendly error message in response to a ProtectedError exception.\n \"\"\"\n protected_objects = list(e.protected_objects)\n protected_count = len(protected_objects) if len(protected_objects) <= 50 else \"More than 50\"\n err_message = format_html(\n \"Unable to delete <strong>{}</strong>. {} dependent objects were found: \",\n \", \".join(str(obj) for obj in obj_list),\n protected_count,\n )\n\n # Append dependent objects to error message\n err_message += format_html_join(\n \", \",\n '<a href=\"{}\">{}</a>',\n ((dependent.get_absolute_url(), dependent) for dependent in protected_objects[:50]),\n )\n\n messages.error(request, err_message)\n\n\ndef prepare_cloned_fields(instance):\n \"\"\"\n Compile an object's `clone_fields` list into a string of URL query parameters. 
Tags are automatically cloned where\n applicable.\n \"\"\"\n form_class = get_form_for_model(instance)\n form = form_class() if form_class is not None else None\n params = []\n for field_name in getattr(instance, \"clone_fields\", []):\n field = instance._meta.get_field(field_name)\n field_value = field.value_from_object(instance)\n\n # For foreign-key fields, if the ModelForm's field has a defined `to_field_name`,\n # use that field from the related object instead of its PK.\n # Example: Location.parent, LocationForm().fields[\"parent\"].to_field_name = \"name\", so use name rather than PK.\n if isinstance(field, ForeignKey):\n related_object = getattr(instance, field_name)\n if (\n related_object is not None\n and form is not None\n and field_name in form.fields\n and hasattr(form.fields[field_name], \"to_field_name\")\n and form.fields[field_name].to_field_name is not None\n ):\n field_value = getattr(related_object, form.fields[field_name].to_field_name)\n\n # Swap out False with URL-friendly value\n if field_value is False:\n field_value = \"\"\n\n # This is likely an m2m field\n if isinstance(field_value, list):\n for fv in field_value:\n item_value = getattr(fv, \"pk\", str(fv)) # pk or str()\n params.append((field_name, item_value))\n\n # Omit empty values\n elif field_value not in (None, \"\"):\n params.append((field_name, field_value))\n\n # Copy tags\n if is_taggable(instance):\n for tag in instance.tags.all():\n params.append((\"tags\", tag.pk))\n\n # Encode the parameters into a URL query string\n param_string = urllib.parse.urlencode(params)\n\n return param_string\n", "path": "nautobot/core/views/utils.py"}], "after_files": [{"content": "import datetime\nfrom io import BytesIO\nimport urllib.parse\n\nfrom django.contrib import messages\nfrom django.core.exceptions import FieldError, ValidationError\nfrom django.db.models import ForeignKey\nfrom django.utils.html import format_html, format_html_join\nfrom django.utils.safestring import mark_safe\nfrom rest_framework import exceptions, serializers\n\nfrom nautobot.core.api.fields import ChoiceField, ContentTypeField, TimeZoneSerializerField\nfrom nautobot.core.api.parsers import NautobotCSVParser\nfrom nautobot.core.models.utils import is_taggable\nfrom nautobot.core.utils.data import is_uuid\nfrom nautobot.core.utils.filtering import get_filter_field_label\nfrom nautobot.core.utils.lookup import get_form_for_model\n\n\ndef check_filter_for_display(filters, field_name, values):\n \"\"\"\n Return any additional context data for the template.\n\n Args:\n filters (OrderedDict): The output of `.get_filters()` of a desired FilterSet\n field_name (str): The name of the filter to get a label for and lookup values\n values (list[str]): List of strings that may be PKs to look up\n\n Returns:\n (dict): A dict containing:\n - name: (str) Field name\n - display: (str) Resolved field name, whether that's a field label or fallback to inputted `field_name` if label unavailable\n - values: (list) List of dictionaries with the same `name` and `display` keys\n \"\"\"\n values = values if isinstance(values, (list, tuple)) else [values]\n\n resolved_filter = {\n \"name\": field_name,\n \"display\": field_name,\n \"values\": [{\"name\": value, \"display\": value} for value in values],\n }\n\n if field_name not in filters.keys():\n return resolved_filter\n\n filter_field = filters[field_name]\n\n resolved_filter[\"display\"] = get_filter_field_label(filter_field)\n\n if len(values) == 0 or not hasattr(filter_field, \"queryset\") or not 
is_uuid(values[0]):\n return resolved_filter\n else:\n try:\n new_values = []\n for value in filter_field.queryset.filter(pk__in=values):\n new_values.append({\"name\": str(value.pk), \"display\": getattr(value, \"display\", str(value))})\n resolved_filter[\"values\"] = new_values\n except (FieldError, AttributeError):\n pass\n\n return resolved_filter\n\n\n# 2.2 TODO: remove this method as it's no longer used in core.\ndef csv_format(data):\n \"\"\"\n Convert the given list of data to a CSV row string.\n\n Encapsulate any data which contains a comma within double quotes.\n\n Obsolete, as CSV rendering in Nautobot core is now handled by nautobot.core.api.renderers.NautobotCSVRenderer.\n \"\"\"\n csv = []\n for value in data:\n # Represent None or False with empty string\n if value is None or value is False:\n csv.append(\"\")\n continue\n\n # Convert dates to ISO format\n if isinstance(value, (datetime.date, datetime.datetime)):\n value = value.isoformat()\n\n # Force conversion to string first so we can check for any commas\n if not isinstance(value, str):\n value = f\"{value}\"\n\n # Double-quote the value if it contains a comma or line break\n if \",\" in value or \"\\n\" in value:\n value = value.replace('\"', '\"\"') # Escape double-quotes\n csv.append(f'\"{value}\"')\n else:\n csv.append(f\"{value}\")\n\n return \",\".join(csv)\n\n\ndef get_csv_form_fields_from_serializer_class(serializer_class):\n \"\"\"From the given serializer class, build a list of field dicts suitable for rendering in the CSV import form.\"\"\"\n serializer = serializer_class(context={\"request\": None, \"depth\": 0})\n fields = []\n # Note lots of \"noqa: S308\" in this function. That's `suspicious-mark-safe-usage`, but in all of the below cases\n # we control the input string and it's known to be safe, so mark_safe() is being used correctly here.\n for field_name, field in serializer.fields.items():\n if field.read_only:\n continue\n if field_name == \"custom_fields\":\n from nautobot.extras.choices import CustomFieldTypeChoices\n from nautobot.extras.models import CustomField\n\n cfs = CustomField.objects.get_for_model(serializer_class.Meta.model)\n for cf in cfs:\n cf_form_field = cf.to_form_field(set_initial=False)\n field_info = {\n \"name\": cf.add_prefix_to_cf_key(),\n \"required\": cf_form_field.required,\n \"foreign_key\": False,\n \"label\": cf_form_field.label,\n \"help_text\": cf_form_field.help_text,\n }\n if cf.type == CustomFieldTypeChoices.TYPE_BOOLEAN:\n field_info[\"format\"] = mark_safe(\"<code>true</code> or <code>false</code>\") # noqa: S308\n elif cf.type == CustomFieldTypeChoices.TYPE_DATE:\n field_info[\"format\"] = mark_safe(\"<code>YYYY-MM-DD</code>\") # noqa: S308\n elif cf.type == CustomFieldTypeChoices.TYPE_SELECT:\n field_info[\"choices\"] = {cfc.value: cfc.value for cfc in cf.custom_field_choices.all()}\n elif cf.type == CustomFieldTypeChoices.TYPE_MULTISELECT:\n field_info[\"format\"] = mark_safe('<code>\"value,value\"</code>') # noqa: S308\n field_info[\"choices\"] = {cfc.value: cfc.value for cfc in cf.custom_field_choices.all()}\n fields.append(field_info)\n continue\n\n field_info = {\n \"name\": field_name,\n \"required\": field.required,\n \"foreign_key\": False,\n \"label\": field.label,\n \"help_text\": field.help_text,\n }\n if isinstance(field, serializers.BooleanField):\n field_info[\"format\"] = mark_safe(\"<code>true</code> or <code>false</code>\") # noqa: S308\n elif isinstance(field, serializers.DateField):\n field_info[\"format\"] = 
mark_safe(\"<code>YYYY-MM-DD</code>\") # noqa: S308\n elif isinstance(field, TimeZoneSerializerField):\n field_info[\"format\"] = mark_safe( # noqa: S308\n '<a href=\"https://en.wikipedia.org/wiki/List_of_tz_database_time_zones\">available options</a>'\n )\n elif isinstance(field, serializers.ManyRelatedField):\n if field.field_name == \"tags\":\n field_info[\"format\"] = mark_safe('<code>\"name,name\"</code> or <code>\"UUID,UUID\"</code>') # noqa: S308\n elif isinstance(field.child_relation, ContentTypeField):\n field_info[\"format\"] = mark_safe('<code>\"app_label.model,app_label.model\"</code>') # noqa: S308\n else:\n field_info[\"foreign_key\"] = field.child_relation.queryset.model._meta.label_lower\n field_info[\"format\"] = mark_safe('<code>\"UUID,UUID\"</code> or combination of fields') # noqa: S308\n elif isinstance(field, serializers.RelatedField):\n if isinstance(field, ContentTypeField):\n field_info[\"format\"] = mark_safe(\"<code>app_label.model</code>\") # noqa: S308\n else:\n field_info[\"foreign_key\"] = field.queryset.model._meta.label_lower\n field_info[\"format\"] = mark_safe(\"<code>UUID</code> or combination of fields\") # noqa: S308\n elif isinstance(field, (serializers.ListField, serializers.MultipleChoiceField)):\n field_info[\"format\"] = mark_safe('<code>\"value,value\"</code>') # noqa: S308\n elif isinstance(field, (serializers.DictField, serializers.JSONField)):\n pass # Not trivial to specify a format as it could be a JSON dict or a comma-separated string\n\n if isinstance(field, ChoiceField):\n field_info[\"choices\"] = field.choices\n\n fields.append(field_info)\n\n # Move all required fields to the start of the list\n # TODO this ordering should be defined by the serializer instead...\n fields = sorted(fields, key=lambda info: 1 if info[\"required\"] else 2)\n return fields\n\n\ndef import_csv_helper(*, request, form, serializer_class):\n field_name = \"csv_file\" if request.FILES else \"csv_data\"\n csvtext = form.cleaned_data[field_name]\n try:\n data = NautobotCSVParser().parse(\n stream=BytesIO(csvtext.encode(\"utf-8\")),\n parser_context={\"request\": request, \"serializer_class\": serializer_class},\n )\n new_objs = []\n validation_failed = False\n for row, entry in enumerate(data, start=1):\n serializer = serializer_class(data=entry, context={\"request\": request})\n if serializer.is_valid():\n new_objs.append(serializer.save())\n else:\n validation_failed = True\n for field, err in serializer.errors.items():\n form.add_error(field_name, f\"Row {row}: {field}: {err[0]}\")\n except exceptions.ParseError as exc:\n validation_failed = True\n form.add_error(None, str(exc))\n\n if validation_failed:\n raise ValidationError(\"\")\n\n return new_objs\n\n\ndef handle_protectederror(obj_list, request, e):\n \"\"\"\n Generate a user-friendly error message in response to a ProtectedError exception.\n \"\"\"\n protected_objects = list(e.protected_objects)\n protected_count = len(protected_objects) if len(protected_objects) <= 50 else \"More than 50\"\n err_message = format_html(\n \"Unable to delete <strong>{}</strong>. 
{} dependent objects were found: \",\n \", \".join(str(obj) for obj in obj_list),\n protected_count,\n )\n\n # Format objects based on whether they have a detail view/absolute url\n objects_with_absolute_url = []\n objects_without_absolute_url = []\n # Append dependent objects to error message\n for dependent in protected_objects[:50]:\n try:\n dependent.get_absolute_url()\n objects_with_absolute_url.append(dependent)\n except AttributeError:\n objects_without_absolute_url.append(dependent)\n\n err_message += format_html_join(\n \", \",\n '<a href=\"{}\">{}</a>',\n ((dependent.get_absolute_url(), dependent) for dependent in objects_with_absolute_url),\n )\n if objects_with_absolute_url and objects_without_absolute_url:\n err_message += format_html(\", \")\n err_message += format_html_join(\n \", \",\n \"<span>{}</span>\",\n ((dependent,) for dependent in objects_without_absolute_url),\n )\n\n messages.error(request, err_message)\n\n\ndef prepare_cloned_fields(instance):\n \"\"\"\n Compile an object's `clone_fields` list into a string of URL query parameters. Tags are automatically cloned where\n applicable.\n \"\"\"\n form_class = get_form_for_model(instance)\n form = form_class() if form_class is not None else None\n params = []\n for field_name in getattr(instance, \"clone_fields\", []):\n field = instance._meta.get_field(field_name)\n field_value = field.value_from_object(instance)\n\n # For foreign-key fields, if the ModelForm's field has a defined `to_field_name`,\n # use that field from the related object instead of its PK.\n # Example: Location.parent, LocationForm().fields[\"parent\"].to_field_name = \"name\", so use name rather than PK.\n if isinstance(field, ForeignKey):\n related_object = getattr(instance, field_name)\n if (\n related_object is not None\n and form is not None\n and field_name in form.fields\n and hasattr(form.fields[field_name], \"to_field_name\")\n and form.fields[field_name].to_field_name is not None\n ):\n field_value = getattr(related_object, form.fields[field_name].to_field_name)\n\n # Swap out False with URL-friendly value\n if field_value is False:\n field_value = \"\"\n\n # This is likely an m2m field\n if isinstance(field_value, list):\n for fv in field_value:\n item_value = getattr(fv, \"pk\", str(fv)) # pk or str()\n params.append((field_name, item_value))\n\n # Omit empty values\n elif field_value not in (None, \"\"):\n params.append((field_name, field_value))\n\n # Copy tags\n if is_taggable(instance):\n for tag in instance.tags.all():\n params.append((\"tags\", tag.pk))\n\n # Encode the parameters into a URL query string\n param_string = urllib.parse.urlencode(params)\n\n return param_string\n", "path": "nautobot/core/views/utils.py"}]}
| 3,975 | 281 |
gh_patches_debug_4030
|
rasdani/github-patches
|
git_diff
|
jazzband__pip-tools-1419
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Order of packages in requirements.txt -- pip-compile vs. pip freeze
The issue regards packages with names containing a `-` character or a digit. This may cause an unexpected order of packages in the output file, different from the order produced by `pip freeze`. It seems that the output file is sorted by the whole lines, together with `==` and the version, instead of just by the package names.
#### Environment Versions
1. Windows 10
2. Python version: 3.7.6
3. pip version: 21.1.2
4. pip-tools version: 6.1.0
#### Steps to replicate
`pip-compile` with `requirements.in` file:
```
django
django-redis
django-sendfile
django-sendfile2
djangorestframework
```
#### Expected result
(without comments and additional libraries, for clarity)
```
django==3.2.4
django-redis==5.0.0
django-sendfile==0.3.11
django-sendfile2==0.6.0
djangorestframework==3.12.4
```
#### Actual result
```
django-redis==5.0.0
django-sendfile2==0.6.0
django-sendfile==0.3.11
django==3.2.4
djangorestframework==3.12.4
```
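
For reference, the observed order is exactly what plain string sorting of the full `name==version` lines produces: in ASCII, `-` and the digits sort before `=`, while `=` sorts before letters. Sorting by the bare project name gives the same order as `pip freeze`. A quick illustration:

```
lines = [
    "django==3.2.4",
    "django-redis==5.0.0",
    "django-sendfile==0.3.11",
    "django-sendfile2==0.6.0",
    "djangorestframework==3.12.4",
]

# Sorting the whole lines reproduces the "actual result" above:
sorted(lines)
# ['django-redis==5.0.0', 'django-sendfile2==0.6.0', 'django-sendfile==0.3.11',
#  'django==3.2.4', 'djangorestframework==3.12.4']

# Sorting by the project name alone restores the expected order:
sorted(line.split("==")[0] for line in lines)
# ['django', 'django-redis', 'django-sendfile', 'django-sendfile2', 'djangorestframework']
```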
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `piptools/writer.py`
Content:
```
1 import os
2 import re
3 import sys
4 from itertools import chain
5 from typing import BinaryIO, Dict, Iterable, Iterator, List, Optional, Set, Tuple
6
7 from click import unstyle
8 from click.core import Context
9 from pip._internal.models.format_control import FormatControl
10 from pip._internal.req.req_install import InstallRequirement
11 from pip._vendor.packaging.markers import Marker
12
13 from .logging import log
14 from .utils import (
15 UNSAFE_PACKAGES,
16 comment,
17 dedup,
18 format_requirement,
19 get_compile_command,
20 key_from_ireq,
21 )
22
23 MESSAGE_UNHASHED_PACKAGE = comment(
24 "# WARNING: pip install will require the following package to be hashed."
25 "\n# Consider using a hashable URL like "
26 "https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip"
27 )
28
29 MESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(
30 "# WARNING: The following packages were not pinned, but pip requires them to be"
31 "\n# pinned when the requirements file includes hashes. "
32 "Consider using the --allow-unsafe flag."
33 )
34
35 MESSAGE_UNSAFE_PACKAGES = comment(
36 "# The following packages are considered to be unsafe in a requirements file:"
37 )
38
39 MESSAGE_UNINSTALLABLE = (
40 "The generated requirements file may be rejected by pip install. "
41 "See # WARNING lines for details."
42 )
43
44
45 strip_comes_from_line_re = re.compile(r" \(line \d+\)$")
46
47
48 def _comes_from_as_string(ireq: InstallRequirement) -> str:
49 if isinstance(ireq.comes_from, str):
50 return strip_comes_from_line_re.sub("", ireq.comes_from)
51 return key_from_ireq(ireq.comes_from)
52
53
54 class OutputWriter:
55 def __init__(
56 self,
57 dst_file: BinaryIO,
58 click_ctx: Context,
59 dry_run: bool,
60 emit_header: bool,
61 emit_index_url: bool,
62 emit_trusted_host: bool,
63 annotate: bool,
64 strip_extras: bool,
65 generate_hashes: bool,
66 default_index_url: str,
67 index_urls: Iterable[str],
68 trusted_hosts: Iterable[str],
69 format_control: FormatControl,
70 allow_unsafe: bool,
71 find_links: List[str],
72 emit_find_links: bool,
73 ) -> None:
74 self.dst_file = dst_file
75 self.click_ctx = click_ctx
76 self.dry_run = dry_run
77 self.emit_header = emit_header
78 self.emit_index_url = emit_index_url
79 self.emit_trusted_host = emit_trusted_host
80 self.annotate = annotate
81 self.strip_extras = strip_extras
82 self.generate_hashes = generate_hashes
83 self.default_index_url = default_index_url
84 self.index_urls = index_urls
85 self.trusted_hosts = trusted_hosts
86 self.format_control = format_control
87 self.allow_unsafe = allow_unsafe
88 self.find_links = find_links
89 self.emit_find_links = emit_find_links
90
91 def _sort_key(self, ireq: InstallRequirement) -> Tuple[bool, str]:
92 return (not ireq.editable, str(ireq.req).lower())
93
94 def write_header(self) -> Iterator[str]:
95 if self.emit_header:
96 yield comment("#")
97 yield comment(
98 "# This file is autogenerated by pip-compile with python "
99 f"{sys.version_info.major}.{sys.version_info.minor}"
100 )
101 yield comment("# To update, run:")
102 yield comment("#")
103 compile_command = os.environ.get(
104 "CUSTOM_COMPILE_COMMAND"
105 ) or get_compile_command(self.click_ctx)
106 yield comment(f"# {compile_command}")
107 yield comment("#")
108
109 def write_index_options(self) -> Iterator[str]:
110 if self.emit_index_url:
111 for index, index_url in enumerate(dedup(self.index_urls)):
112 if index == 0 and index_url.rstrip("/") == self.default_index_url:
113 continue
114 flag = "--index-url" if index == 0 else "--extra-index-url"
115 yield f"{flag} {index_url}"
116
117 def write_trusted_hosts(self) -> Iterator[str]:
118 if self.emit_trusted_host:
119 for trusted_host in dedup(self.trusted_hosts):
120 yield f"--trusted-host {trusted_host}"
121
122 def write_format_controls(self) -> Iterator[str]:
123 for nb in dedup(sorted(self.format_control.no_binary)):
124 yield f"--no-binary {nb}"
125 for ob in dedup(sorted(self.format_control.only_binary)):
126 yield f"--only-binary {ob}"
127
128 def write_find_links(self) -> Iterator[str]:
129 if self.emit_find_links:
130 for find_link in dedup(self.find_links):
131 yield f"--find-links {find_link}"
132
133 def write_flags(self) -> Iterator[str]:
134 emitted = False
135 for line in chain(
136 self.write_index_options(),
137 self.write_find_links(),
138 self.write_trusted_hosts(),
139 self.write_format_controls(),
140 ):
141 emitted = True
142 yield line
143 if emitted:
144 yield ""
145
146 def _iter_lines(
147 self,
148 results: Set[InstallRequirement],
149 unsafe_requirements: Optional[Set[InstallRequirement]] = None,
150 markers: Optional[Dict[str, Marker]] = None,
151 hashes: Optional[Dict[InstallRequirement, Set[str]]] = None,
152 ) -> Iterator[str]:
153 # default values
154 unsafe_requirements = unsafe_requirements or set()
155 markers = markers or {}
156 hashes = hashes or {}
157
158 # Check for unhashed or unpinned packages if at least one package does have
159 # hashes, which will trigger pip install's --require-hashes mode.
160 warn_uninstallable = False
161 has_hashes = hashes and any(hash for hash in hashes.values())
162
163 yielded = False
164
165 for line in self.write_header():
166 yield line
167 yielded = True
168 for line in self.write_flags():
169 yield line
170 yielded = True
171
172 unsafe_requirements = (
173 {r for r in results if r.name in UNSAFE_PACKAGES}
174 if not unsafe_requirements
175 else unsafe_requirements
176 )
177 packages = {r for r in results if r.name not in UNSAFE_PACKAGES}
178
179 if packages:
180 for ireq in sorted(packages, key=self._sort_key):
181 if has_hashes and not hashes.get(ireq):
182 yield MESSAGE_UNHASHED_PACKAGE
183 warn_uninstallable = True
184 line = self._format_requirement(
185 ireq, markers.get(key_from_ireq(ireq)), hashes=hashes
186 )
187 yield line
188 yielded = True
189
190 if unsafe_requirements:
191 yield ""
192 yielded = True
193 if has_hashes and not self.allow_unsafe:
194 yield MESSAGE_UNSAFE_PACKAGES_UNPINNED
195 warn_uninstallable = True
196 else:
197 yield MESSAGE_UNSAFE_PACKAGES
198
199 for ireq in sorted(unsafe_requirements, key=self._sort_key):
200 ireq_key = key_from_ireq(ireq)
201 if not self.allow_unsafe:
202 yield comment(f"# {ireq_key}")
203 else:
204 line = self._format_requirement(
205 ireq, marker=markers.get(ireq_key), hashes=hashes
206 )
207 yield line
208
209 # Yield even when there's no real content, so that blank files are written
210 if not yielded:
211 yield ""
212
213 if warn_uninstallable:
214 log.warning(MESSAGE_UNINSTALLABLE)
215
216 def write(
217 self,
218 results: Set[InstallRequirement],
219 unsafe_requirements: Set[InstallRequirement],
220 markers: Dict[str, Marker],
221 hashes: Optional[Dict[InstallRequirement, Set[str]]],
222 ) -> None:
223
224 for line in self._iter_lines(results, unsafe_requirements, markers, hashes):
225 log.info(line)
226 if not self.dry_run:
227 self.dst_file.write(unstyle(line).encode())
228 self.dst_file.write(os.linesep.encode())
229
230 def _format_requirement(
231 self,
232 ireq: InstallRequirement,
233 marker: Optional[Marker] = None,
234 hashes: Optional[Dict[InstallRequirement, Set[str]]] = None,
235 ) -> str:
236 ireq_hashes = (hashes if hashes is not None else {}).get(ireq)
237
238 line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)
239 if self.strip_extras:
240 line = re.sub(r"\[.+?\]", "", line)
241
242 if not self.annotate:
243 return line
244
245 # Annotate what packages or reqs-ins this package is required by
246 required_by = set()
247 if hasattr(ireq, "_source_ireqs"):
248 required_by |= {
249 _comes_from_as_string(src_ireq)
250 for src_ireq in ireq._source_ireqs
251 if src_ireq.comes_from
252 }
253 elif ireq.comes_from:
254 required_by.add(_comes_from_as_string(ireq))
255
256 if required_by:
257 sorted_required_by = sorted(required_by)
258 if len(sorted_required_by) == 1:
259 source = sorted_required_by[0]
260 annotation = " # via " + source
261 else:
262 annotation_lines = [" # via"]
263 for source in sorted_required_by:
264 annotation_lines.append(" # " + source)
265 annotation = "\n".join(annotation_lines)
266 line = f"{line}\n{comment(annotation)}"
267
268 return line
269
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/piptools/writer.py b/piptools/writer.py
--- a/piptools/writer.py
+++ b/piptools/writer.py
@@ -89,7 +89,7 @@
self.emit_find_links = emit_find_links
def _sort_key(self, ireq: InstallRequirement) -> Tuple[bool, str]:
- return (not ireq.editable, str(ireq.req).lower())
+ return (not ireq.editable, key_from_ireq(ireq))
def write_header(self) -> Iterator[str]:
if self.emit_header:
|
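For context on why the patch fixes the reported ordering: the old key, `str(ireq.req).lower()`, compares the full requirement string including the `==version` part, and `=` sorts after `-` in ASCII, while `key_from_ireq` is assumed to compare only the normalized project name, which is what `pip freeze` effectively does. A minimal stand-alone sketch (the inline lambda is an illustrative stand-in for the real helper, not taken from the codebase):

```python
# Illustrative sketch -- the lambda below stands in for key_from_ireq, which is
# assumed to normalize the project name (lowercase, '_' replaced by '-').
reqs = ["django==3.2.4", "django-redis==5.0.0", "django-sendfile==0.3.11",
        "django-sendfile2==0.6.0", "djangorestframework==3.12.4"]

# Old behaviour: the whole requirement string is compared, and '=' (0x3D)
# sorts after '-' (0x2D), so "django==3.2.4" lands below the "django-*" lines.
print(sorted(r.lower() for r in reqs))

# Fixed behaviour: compare only the project name, matching `pip freeze`.
print(sorted(reqs, key=lambda r: r.split("==")[0].lower().replace("_", "-")))
```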
{"golden_diff": "diff --git a/piptools/writer.py b/piptools/writer.py\n--- a/piptools/writer.py\n+++ b/piptools/writer.py\n@@ -89,7 +89,7 @@\n self.emit_find_links = emit_find_links\n \n def _sort_key(self, ireq: InstallRequirement) -> Tuple[bool, str]:\n- return (not ireq.editable, str(ireq.req).lower())\n+ return (not ireq.editable, key_from_ireq(ireq))\n \n def write_header(self) -> Iterator[str]:\n if self.emit_header:\n", "issue": "Order of packages in requirements.txt -- pip-compile vs. pip freeze\nThe issue regards packages with names containing `-` character or a digit. This may cause an unexpected order of packages in the output file, different than with `pip freeze`. It seems that the output file is sorted by the whole lines, together with `==` and version name, instead of just by the package names.\r\n\r\n#### Environment Versions\r\n\r\n1. Windows 10\r\n2. Python version: 3.7.6\r\n3. pip version: 21.1.2\r\n4. pip-tools version: 6.1.0\r\n\r\n#### Steps to replicate\r\n\r\n`pip-compile` with `requirements.in` file:\r\n\r\n```\r\ndjango\r\ndjango-redis\r\ndjango-sendfile\r\ndjango-sendfile2\r\ndjangorestframework\r\n```\r\n\r\n#### Expected result\r\n\r\n(without comments and additional libraries, for clarity)\r\n\r\n```\r\ndjango==3.2.4\r\ndjango-redis==5.0.0\r\ndjango-sendfile==0.3.11\r\ndjango-sendfile2==0.6.0\r\ndjangorestframework==3.12.4\r\n```\r\n\r\n#### Actual result\r\n\r\n```\r\ndjango-redis==5.0.0\r\ndjango-sendfile2==0.6.0\r\ndjango-sendfile==0.3.11\r\ndjango==3.2.4\r\ndjangorestframework==3.12.4\r\n```\r\n\n", "before_files": [{"content": "import os\nimport re\nimport sys\nfrom itertools import chain\nfrom typing import BinaryIO, Dict, Iterable, Iterator, List, Optional, Set, Tuple\n\nfrom click import unstyle\nfrom click.core import Context\nfrom pip._internal.models.format_control import FormatControl\nfrom pip._internal.req.req_install import InstallRequirement\nfrom pip._vendor.packaging.markers import Marker\n\nfrom .logging import log\nfrom .utils import (\n UNSAFE_PACKAGES,\n comment,\n dedup,\n format_requirement,\n get_compile_command,\n key_from_ireq,\n)\n\nMESSAGE_UNHASHED_PACKAGE = comment(\n \"# WARNING: pip install will require the following package to be hashed.\"\n \"\\n# Consider using a hashable URL like \"\n \"https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip\"\n)\n\nMESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(\n \"# WARNING: The following packages were not pinned, but pip requires them to be\"\n \"\\n# pinned when the requirements file includes hashes. \"\n \"Consider using the --allow-unsafe flag.\"\n)\n\nMESSAGE_UNSAFE_PACKAGES = comment(\n \"# The following packages are considered to be unsafe in a requirements file:\"\n)\n\nMESSAGE_UNINSTALLABLE = (\n \"The generated requirements file may be rejected by pip install. 
\"\n \"See # WARNING lines for details.\"\n)\n\n\nstrip_comes_from_line_re = re.compile(r\" \\(line \\d+\\)$\")\n\n\ndef _comes_from_as_string(ireq: InstallRequirement) -> str:\n if isinstance(ireq.comes_from, str):\n return strip_comes_from_line_re.sub(\"\", ireq.comes_from)\n return key_from_ireq(ireq.comes_from)\n\n\nclass OutputWriter:\n def __init__(\n self,\n dst_file: BinaryIO,\n click_ctx: Context,\n dry_run: bool,\n emit_header: bool,\n emit_index_url: bool,\n emit_trusted_host: bool,\n annotate: bool,\n strip_extras: bool,\n generate_hashes: bool,\n default_index_url: str,\n index_urls: Iterable[str],\n trusted_hosts: Iterable[str],\n format_control: FormatControl,\n allow_unsafe: bool,\n find_links: List[str],\n emit_find_links: bool,\n ) -> None:\n self.dst_file = dst_file\n self.click_ctx = click_ctx\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index_url = emit_index_url\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.strip_extras = strip_extras\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.allow_unsafe = allow_unsafe\n self.find_links = find_links\n self.emit_find_links = emit_find_links\n\n def _sort_key(self, ireq: InstallRequirement) -> Tuple[bool, str]:\n return (not ireq.editable, str(ireq.req).lower())\n\n def write_header(self) -> Iterator[str]:\n if self.emit_header:\n yield comment(\"#\")\n yield comment(\n \"# This file is autogenerated by pip-compile with python \"\n f\"{sys.version_info.major}.{sys.version_info.minor}\"\n )\n yield comment(\"# To update, run:\")\n yield comment(\"#\")\n compile_command = os.environ.get(\n \"CUSTOM_COMPILE_COMMAND\"\n ) or get_compile_command(self.click_ctx)\n yield comment(f\"# {compile_command}\")\n yield comment(\"#\")\n\n def write_index_options(self) -> Iterator[str]:\n if self.emit_index_url:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index == 0 and index_url.rstrip(\"/\") == self.default_index_url:\n continue\n flag = \"--index-url\" if index == 0 else \"--extra-index-url\"\n yield f\"{flag} {index_url}\"\n\n def write_trusted_hosts(self) -> Iterator[str]:\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield f\"--trusted-host {trusted_host}\"\n\n def write_format_controls(self) -> Iterator[str]:\n for nb in dedup(sorted(self.format_control.no_binary)):\n yield f\"--no-binary {nb}\"\n for ob in dedup(sorted(self.format_control.only_binary)):\n yield f\"--only-binary {ob}\"\n\n def write_find_links(self) -> Iterator[str]:\n if self.emit_find_links:\n for find_link in dedup(self.find_links):\n yield f\"--find-links {find_link}\"\n\n def write_flags(self) -> Iterator[str]:\n emitted = False\n for line in chain(\n self.write_index_options(),\n self.write_find_links(),\n self.write_trusted_hosts(),\n self.write_format_controls(),\n ):\n emitted = True\n yield line\n if emitted:\n yield \"\"\n\n def _iter_lines(\n self,\n results: Set[InstallRequirement],\n unsafe_requirements: Optional[Set[InstallRequirement]] = None,\n markers: Optional[Dict[str, Marker]] = None,\n hashes: Optional[Dict[InstallRequirement, Set[str]]] = None,\n ) -> Iterator[str]:\n # default values\n unsafe_requirements = unsafe_requirements or set()\n markers = markers or {}\n hashes = hashes or {}\n\n # Check for unhashed or unpinned packages if at least one package does have\n # hashes, which will 
trigger pip install's --require-hashes mode.\n warn_uninstallable = False\n has_hashes = hashes and any(hash for hash in hashes.values())\n\n yielded = False\n\n for line in self.write_header():\n yield line\n yielded = True\n for line in self.write_flags():\n yield line\n yielded = True\n\n unsafe_requirements = (\n {r for r in results if r.name in UNSAFE_PACKAGES}\n if not unsafe_requirements\n else unsafe_requirements\n )\n packages = {r for r in results if r.name not in UNSAFE_PACKAGES}\n\n if packages:\n for ireq in sorted(packages, key=self._sort_key):\n if has_hashes and not hashes.get(ireq):\n yield MESSAGE_UNHASHED_PACKAGE\n warn_uninstallable = True\n line = self._format_requirement(\n ireq, markers.get(key_from_ireq(ireq)), hashes=hashes\n )\n yield line\n yielded = True\n\n if unsafe_requirements:\n yield \"\"\n yielded = True\n if has_hashes and not self.allow_unsafe:\n yield MESSAGE_UNSAFE_PACKAGES_UNPINNED\n warn_uninstallable = True\n else:\n yield MESSAGE_UNSAFE_PACKAGES\n\n for ireq in sorted(unsafe_requirements, key=self._sort_key):\n ireq_key = key_from_ireq(ireq)\n if not self.allow_unsafe:\n yield comment(f\"# {ireq_key}\")\n else:\n line = self._format_requirement(\n ireq, marker=markers.get(ireq_key), hashes=hashes\n )\n yield line\n\n # Yield even when there's no real content, so that blank files are written\n if not yielded:\n yield \"\"\n\n if warn_uninstallable:\n log.warning(MESSAGE_UNINSTALLABLE)\n\n def write(\n self,\n results: Set[InstallRequirement],\n unsafe_requirements: Set[InstallRequirement],\n markers: Dict[str, Marker],\n hashes: Optional[Dict[InstallRequirement, Set[str]]],\n ) -> None:\n\n for line in self._iter_lines(results, unsafe_requirements, markers, hashes):\n log.info(line)\n if not self.dry_run:\n self.dst_file.write(unstyle(line).encode())\n self.dst_file.write(os.linesep.encode())\n\n def _format_requirement(\n self,\n ireq: InstallRequirement,\n marker: Optional[Marker] = None,\n hashes: Optional[Dict[InstallRequirement, Set[str]]] = None,\n ) -> str:\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n if self.strip_extras:\n line = re.sub(r\"\\[.+?\\]\", \"\", line)\n\n if not self.annotate:\n return line\n\n # Annotate what packages or reqs-ins this package is required by\n required_by = set()\n if hasattr(ireq, \"_source_ireqs\"):\n required_by |= {\n _comes_from_as_string(src_ireq)\n for src_ireq in ireq._source_ireqs\n if src_ireq.comes_from\n }\n elif ireq.comes_from:\n required_by.add(_comes_from_as_string(ireq))\n\n if required_by:\n sorted_required_by = sorted(required_by)\n if len(sorted_required_by) == 1:\n source = sorted_required_by[0]\n annotation = \" # via \" + source\n else:\n annotation_lines = [\" # via\"]\n for source in sorted_required_by:\n annotation_lines.append(\" # \" + source)\n annotation = \"\\n\".join(annotation_lines)\n line = f\"{line}\\n{comment(annotation)}\"\n\n return line\n", "path": "piptools/writer.py"}], "after_files": [{"content": "import os\nimport re\nimport sys\nfrom itertools import chain\nfrom typing import BinaryIO, Dict, Iterable, Iterator, List, Optional, Set, Tuple\n\nfrom click import unstyle\nfrom click.core import Context\nfrom pip._internal.models.format_control import FormatControl\nfrom pip._internal.req.req_install import InstallRequirement\nfrom pip._vendor.packaging.markers import Marker\n\nfrom .logging import log\nfrom .utils import (\n UNSAFE_PACKAGES,\n comment,\n dedup,\n 
format_requirement,\n get_compile_command,\n key_from_ireq,\n)\n\nMESSAGE_UNHASHED_PACKAGE = comment(\n \"# WARNING: pip install will require the following package to be hashed.\"\n \"\\n# Consider using a hashable URL like \"\n \"https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip\"\n)\n\nMESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(\n \"# WARNING: The following packages were not pinned, but pip requires them to be\"\n \"\\n# pinned when the requirements file includes hashes. \"\n \"Consider using the --allow-unsafe flag.\"\n)\n\nMESSAGE_UNSAFE_PACKAGES = comment(\n \"# The following packages are considered to be unsafe in a requirements file:\"\n)\n\nMESSAGE_UNINSTALLABLE = (\n \"The generated requirements file may be rejected by pip install. \"\n \"See # WARNING lines for details.\"\n)\n\n\nstrip_comes_from_line_re = re.compile(r\" \\(line \\d+\\)$\")\n\n\ndef _comes_from_as_string(ireq: InstallRequirement) -> str:\n if isinstance(ireq.comes_from, str):\n return strip_comes_from_line_re.sub(\"\", ireq.comes_from)\n return key_from_ireq(ireq.comes_from)\n\n\nclass OutputWriter:\n def __init__(\n self,\n dst_file: BinaryIO,\n click_ctx: Context,\n dry_run: bool,\n emit_header: bool,\n emit_index_url: bool,\n emit_trusted_host: bool,\n annotate: bool,\n strip_extras: bool,\n generate_hashes: bool,\n default_index_url: str,\n index_urls: Iterable[str],\n trusted_hosts: Iterable[str],\n format_control: FormatControl,\n allow_unsafe: bool,\n find_links: List[str],\n emit_find_links: bool,\n ) -> None:\n self.dst_file = dst_file\n self.click_ctx = click_ctx\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index_url = emit_index_url\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.strip_extras = strip_extras\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.allow_unsafe = allow_unsafe\n self.find_links = find_links\n self.emit_find_links = emit_find_links\n\n def _sort_key(self, ireq: InstallRequirement) -> Tuple[bool, str]:\n return (not ireq.editable, key_from_ireq(ireq))\n\n def write_header(self) -> Iterator[str]:\n if self.emit_header:\n yield comment(\"#\")\n yield comment(\n \"# This file is autogenerated by pip-compile with python \"\n f\"{sys.version_info.major}.{sys.version_info.minor}\"\n )\n yield comment(\"# To update, run:\")\n yield comment(\"#\")\n compile_command = os.environ.get(\n \"CUSTOM_COMPILE_COMMAND\"\n ) or get_compile_command(self.click_ctx)\n yield comment(f\"# {compile_command}\")\n yield comment(\"#\")\n\n def write_index_options(self) -> Iterator[str]:\n if self.emit_index_url:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index == 0 and index_url.rstrip(\"/\") == self.default_index_url:\n continue\n flag = \"--index-url\" if index == 0 else \"--extra-index-url\"\n yield f\"{flag} {index_url}\"\n\n def write_trusted_hosts(self) -> Iterator[str]:\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield f\"--trusted-host {trusted_host}\"\n\n def write_format_controls(self) -> Iterator[str]:\n for nb in dedup(sorted(self.format_control.no_binary)):\n yield f\"--no-binary {nb}\"\n for ob in dedup(sorted(self.format_control.only_binary)):\n yield f\"--only-binary {ob}\"\n\n def write_find_links(self) -> Iterator[str]:\n if self.emit_find_links:\n for find_link in dedup(self.find_links):\n yield f\"--find-links 
{find_link}\"\n\n def write_flags(self) -> Iterator[str]:\n emitted = False\n for line in chain(\n self.write_index_options(),\n self.write_find_links(),\n self.write_trusted_hosts(),\n self.write_format_controls(),\n ):\n emitted = True\n yield line\n if emitted:\n yield \"\"\n\n def _iter_lines(\n self,\n results: Set[InstallRequirement],\n unsafe_requirements: Optional[Set[InstallRequirement]] = None,\n markers: Optional[Dict[str, Marker]] = None,\n hashes: Optional[Dict[InstallRequirement, Set[str]]] = None,\n ) -> Iterator[str]:\n # default values\n unsafe_requirements = unsafe_requirements or set()\n markers = markers or {}\n hashes = hashes or {}\n\n # Check for unhashed or unpinned packages if at least one package does have\n # hashes, which will trigger pip install's --require-hashes mode.\n warn_uninstallable = False\n has_hashes = hashes and any(hash for hash in hashes.values())\n\n yielded = False\n\n for line in self.write_header():\n yield line\n yielded = True\n for line in self.write_flags():\n yield line\n yielded = True\n\n unsafe_requirements = (\n {r for r in results if r.name in UNSAFE_PACKAGES}\n if not unsafe_requirements\n else unsafe_requirements\n )\n packages = {r for r in results if r.name not in UNSAFE_PACKAGES}\n\n if packages:\n for ireq in sorted(packages, key=self._sort_key):\n if has_hashes and not hashes.get(ireq):\n yield MESSAGE_UNHASHED_PACKAGE\n warn_uninstallable = True\n line = self._format_requirement(\n ireq, markers.get(key_from_ireq(ireq)), hashes=hashes\n )\n yield line\n yielded = True\n\n if unsafe_requirements:\n yield \"\"\n yielded = True\n if has_hashes and not self.allow_unsafe:\n yield MESSAGE_UNSAFE_PACKAGES_UNPINNED\n warn_uninstallable = True\n else:\n yield MESSAGE_UNSAFE_PACKAGES\n\n for ireq in sorted(unsafe_requirements, key=self._sort_key):\n ireq_key = key_from_ireq(ireq)\n if not self.allow_unsafe:\n yield comment(f\"# {ireq_key}\")\n else:\n line = self._format_requirement(\n ireq, marker=markers.get(ireq_key), hashes=hashes\n )\n yield line\n\n # Yield even when there's no real content, so that blank files are written\n if not yielded:\n yield \"\"\n\n if warn_uninstallable:\n log.warning(MESSAGE_UNINSTALLABLE)\n\n def write(\n self,\n results: Set[InstallRequirement],\n unsafe_requirements: Set[InstallRequirement],\n markers: Dict[str, Marker],\n hashes: Optional[Dict[InstallRequirement, Set[str]]],\n ) -> None:\n\n for line in self._iter_lines(results, unsafe_requirements, markers, hashes):\n log.info(line)\n if not self.dry_run:\n self.dst_file.write(unstyle(line).encode())\n self.dst_file.write(os.linesep.encode())\n\n def _format_requirement(\n self,\n ireq: InstallRequirement,\n marker: Optional[Marker] = None,\n hashes: Optional[Dict[InstallRequirement, Set[str]]] = None,\n ) -> str:\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n if self.strip_extras:\n line = re.sub(r\"\\[.+?\\]\", \"\", line)\n\n if not self.annotate:\n return line\n\n # Annotate what packages or reqs-ins this package is required by\n required_by = set()\n if hasattr(ireq, \"_source_ireqs\"):\n required_by |= {\n _comes_from_as_string(src_ireq)\n for src_ireq in ireq._source_ireqs\n if src_ireq.comes_from\n }\n elif ireq.comes_from:\n required_by.add(_comes_from_as_string(ireq))\n\n if required_by:\n sorted_required_by = sorted(required_by)\n if len(sorted_required_by) == 1:\n source = sorted_required_by[0]\n annotation = \" # via \" + source\n else:\n 
annotation_lines = [\" # via\"]\n for source in sorted_required_by:\n annotation_lines.append(\" # \" + source)\n annotation = \"\\n\".join(annotation_lines)\n line = f\"{line}\\n{comment(annotation)}\"\n\n return line\n", "path": "piptools/writer.py"}]}
| 3,283 | 132 |
gh_patches_debug_13311
|
rasdani/github-patches
|
git_diff
|
plone__Products.CMFPlone-3367
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DX-Site-Root: ZMI Nav-Tree is no longer expandable
After migrating to dx-site-root, the navtree within the ZMI is no longer expandable.

https://github.com/plone/Products.CMFPlone/issues/2454 @jaroel @ale-rt
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Products/CMFPlone/Portal.py`
Content:
```
1 from AccessControl import ClassSecurityInfo
2 from AccessControl import Unauthorized
3 from AccessControl.class_init import InitializeClass
4 from Acquisition import aq_base
5 from ComputedAttribute import ComputedAttribute
6 from five.localsitemanager.registry import PersistentComponents
7 from OFS.ObjectManager import REPLACEABLE
8 from plone.dexterity.content import Container
9 from Products.CMFCore import permissions
10 from Products.CMFCore.interfaces import IContentish
11 from Products.CMFCore.interfaces import ISiteRoot
12 from Products.CMFCore.permissions import AccessContentsInformation
13 from Products.CMFCore.permissions import AddPortalMember
14 from Products.CMFCore.permissions import MailForgottenPassword
15 from Products.CMFCore.permissions import RequestReview
16 from Products.CMFCore.permissions import ReviewPortalContent
17 from Products.CMFCore.permissions import SetOwnPassword
18 from Products.CMFCore.permissions import SetOwnProperties
19 from Products.CMFCore.PortalFolder import PortalFolderBase
20 from Products.CMFCore.PortalObject import PortalObjectBase
21 from Products.CMFCore.Skinnable import SkinnableObjectManager
22 from Products.CMFCore.utils import _checkPermission
23 from Products.CMFCore.utils import getToolByName
24 from Products.CMFCore.utils import UniqueObject
25 from Products.CMFPlone import bbb
26 from Products.CMFPlone import PloneMessageFactory as _
27 from Products.CMFPlone.interfaces.siteroot import IPloneSiteRoot
28 from Products.CMFPlone.interfaces.syndication import ISyndicatable
29 from Products.CMFPlone.permissions import AddPortalContent
30 from Products.CMFPlone.permissions import AddPortalFolders
31 from Products.CMFPlone.permissions import ListPortalMembers
32 from Products.CMFPlone.permissions import ModifyPortalContent
33 from Products.CMFPlone.permissions import ReplyToItem
34 from Products.CMFPlone.permissions import View
35 from Products.Five.component.interfaces import IObjectManagerSite
36 from zope.interface.interfaces import ComponentLookupError
37 from zope.event import notify
38 from zope.interface import classImplementsOnly
39 from zope.interface import implementedBy
40 from zope.interface import implementer
41 from zope.traversing.interfaces import BeforeTraverseEvent
42
43
44 if bbb.HAS_ZSERVER:
45 from webdav.NullResource import NullResource
46
47
48 @implementer(IPloneSiteRoot, ISiteRoot, ISyndicatable, IObjectManagerSite)
49 class PloneSite(Container, SkinnableObjectManager, UniqueObject):
50 """ The Plone site object. """
51
52 security = ClassSecurityInfo()
53 meta_type = portal_type = 'Plone Site'
54
55 # Ensure certain attributes come from the correct base class.
56 _checkId = SkinnableObjectManager._checkId
57 manage_main = PortalFolderBase.manage_main
58
59 def __getattr__(self, name):
60 try:
61 # Try DX
62 return super().__getattr__(name)
63 except AttributeError:
64 # Check portal_skins
65 return SkinnableObjectManager.__getattr__(self, name)
66
67 def __setattr__(self, name, obj):
68 # handle re setting an item as an attribute
69 if self._tree is not None and name in self:
70 del self[name]
71 self[name] = obj
72 else:
73 super().__setattr__(name, obj)
74
75 def __delattr__(self, name):
76 try:
77 return super().__delattr__(name)
78 except AttributeError:
79 return self.__delitem__(name)
80
81 # Removes the 'Components Folder'
82
83 manage_options = (
84 Container.manage_options[:2] +
85 Container.manage_options[3:])
86
87 __ac_permissions__ = (
88 (AccessContentsInformation, ()),
89 (AddPortalMember, ()),
90 (SetOwnPassword, ()),
91 (SetOwnProperties, ()),
92 (MailForgottenPassword, ()),
93 (RequestReview, ()),
94 (ReviewPortalContent, ()),
95 (AddPortalContent, ()),
96 (AddPortalFolders, ()),
97 (ListPortalMembers, ()),
98 (ReplyToItem, ()),
99 (View, ('isEffective',)),
100 (ModifyPortalContent, ('manage_cutObjects', 'manage_pasteObjects',
101 'manage_renameForm', 'manage_renameObject',
102 'manage_renameObjects')))
103
104 # Switch off ZMI ordering interface as it assumes a slightly
105 # different functionality
106 has_order_support = 0
107 management_page_charset = 'utf-8'
108 _default_sort_key = 'id'
109 _properties = (
110 {'id': 'title', 'type': 'string', 'mode': 'w'},
111 {'id': 'description', 'type': 'text', 'mode': 'w'},
112 )
113 title = ''
114 description = ''
115 icon = 'misc_/CMFPlone/tool.gif'
116
117 # From PortalObjectBase
118 def __init__(self, id, title=''):
119 super(PloneSite, self).__init__(id, title=title)
120 components = PersistentComponents('++etc++site')
121 components.__parent__ = self
122 self.setSiteManager(components)
123
124 # From PortalObjectBase
125 def __before_publishing_traverse__(self, arg1, arg2=None):
126 """ Pre-traversal hook.
127 """
128 # XXX hack around a bug(?) in BeforeTraverse.MultiHook
129 REQUEST = arg2 or arg1
130
131 try:
132 notify(BeforeTraverseEvent(self, REQUEST))
133 except ComponentLookupError:
134 # allow ZMI access, even if the portal's site manager is missing
135 pass
136 self.setupCurrentSkin(REQUEST)
137
138 super(PloneSite, self).__before_publishing_traverse__(arg1, arg2)
139
140 def __browser_default__(self, request):
141 """ Set default so we can return whatever we want instead
142 of index_html """
143 return getToolByName(self, 'plone_utils').browserDefault(self)
144
145 def index_html(self):
146 """ Acquire if not present. """
147 request = getattr(self, 'REQUEST', None)
148 if (
149 request is not None
150 and 'REQUEST_METHOD' in request
151 and request.maybe_webdav_client
152 ):
153 method = request['REQUEST_METHOD']
154 if bbb.HAS_ZSERVER and method in ('PUT', ):
155 # Very likely a WebDAV client trying to create something
156 result = NullResource(self, 'index_html')
157 setattr(result, '__replaceable__', REPLACEABLE)
158 return result
159 elif method not in ('GET', 'HEAD', 'POST'):
160 raise AttributeError('index_html')
161 # Acquire from skin.
162 _target = self.__getattr__('index_html')
163 result = aq_base(_target).__of__(self)
164 setattr(result, '__replaceable__', REPLACEABLE)
165 return result
166
167 index_html = ComputedAttribute(index_html, 1)
168
169 def manage_beforeDelete(self, container, item):
170 # Should send out an Event before Site is being deleted.
171 self.removal_inprogress = 1
172 PloneSite.inheritedAttribute('manage_beforeDelete')(self, container,
173 item)
174
175 @security.protected(permissions.DeleteObjects)
176 def manage_delObjects(self, ids=None, REQUEST=None):
177 """We need to enforce security."""
178 if ids is None:
179 ids = []
180 if isinstance(ids, str):
181 ids = [ids]
182 for id in ids:
183 item = self._getOb(id)
184 if not _checkPermission(permissions.DeleteObjects, item):
185 raise Unauthorized(
186 "Do not have permissions to remove this object")
187 return PortalObjectBase.manage_delObjects(self, ids, REQUEST=REQUEST)
188
189 def view(self):
190 """ Ensure that we get a plain view of the object, via a delegation to
191 __call__(), which is defined in BrowserDefaultMixin
192 """
193 return self()
194
195 @security.protected(permissions.AccessContentsInformation)
196 def folderlistingFolderContents(self, contentFilter=None):
197 """Calls listFolderContents in protected only by ACI so that
198 folder_listing can work without the List folder contents permission.
199
200 This is copied from Archetypes Basefolder and is needed by the
201 reference browser.
202 """
203 return self.listFolderContents(contentFilter)
204
205 def isEffective(self, date):
206 # Override DefaultDublinCoreImpl's test, since we are always viewable.
207 return 1
208
209
210 # Remove the IContentish interface so we don't listen to events that won't
211 # apply to the site root, ie handleUidAnnotationEvent
212 classImplementsOnly(PloneSite, implementedBy(PloneSite) - IContentish)
213
214 InitializeClass(PloneSite)
215
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/Products/CMFPlone/Portal.py b/Products/CMFPlone/Portal.py
--- a/Products/CMFPlone/Portal.py
+++ b/Products/CMFPlone/Portal.py
@@ -137,6 +137,16 @@
super(PloneSite, self).__before_publishing_traverse__(arg1, arg2)
+ # Concept from OFS.OrderSupport
+ @security.protected(permissions.AccessContentsInformation)
+ def tpValues(self):
+ # Return a list of subobjects, used by ZMI tree tag (and only there).
+ # see also https://github.com/plone/Products.CMFPlone/issues/3323
+ return sorted(
+ (obj for obj in self.objectValues() if getattr(aq_base(obj), 'isPrincipiaFolderish', False)),
+ key=lambda obj: obj.getId(),
+ )
+
def __browser_default__(self, request):
""" Set default so we can return whatever we want instead
of index_html """
|
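To see what the new `tpValues()` hands to the ZMI tree tag, here is a self-contained stand-in sketch; the `SimpleNamespace` objects and ids are made up for illustration, and the real method additionally unwraps with `aq_base`. In a live site the same thing could be checked from a debug shell (e.g. `bin/instance debug`) by calling `portal.tpValues()` on the site root.

```python
# Stand-in sketch of the patched tpValues(): only folderish children become
# expandable tree nodes, sorted by their id (objects here are illustrative).
from types import SimpleNamespace

def make(obj_id, folderish):
    return SimpleNamespace(getId=lambda obj_id=obj_id: obj_id,
                           isPrincipiaFolderish=folderish)

children = [make("portal_skins", True), make("front-page", False),
            make("Members", True)]

tree_nodes = sorted(
    (obj for obj in children if getattr(obj, "isPrincipiaFolderish", False)),
    key=lambda obj: obj.getId(),
)
print([obj.getId() for obj in tree_nodes])  # ['Members', 'portal_skins']
```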
{"golden_diff": "diff --git a/Products/CMFPlone/Portal.py b/Products/CMFPlone/Portal.py\n--- a/Products/CMFPlone/Portal.py\n+++ b/Products/CMFPlone/Portal.py\n@@ -137,6 +137,16 @@\n \n super(PloneSite, self).__before_publishing_traverse__(arg1, arg2)\n \n+ # Concept from OFS.OrderSupport\n+ @security.protected(permissions.AccessContentsInformation)\n+ def tpValues(self):\n+ # Return a list of subobjects, used by ZMI tree tag (and only there).\n+ # see also https://github.com/plone/Products.CMFPlone/issues/3323\n+ return sorted(\n+ (obj for obj in self.objectValues() if getattr(aq_base(obj), 'isPrincipiaFolderish', False)),\n+ key=lambda obj: obj.getId(),\n+ )\n+\n def __browser_default__(self, request):\n \"\"\" Set default so we can return whatever we want instead\n of index_html \"\"\"\n", "issue": "DX-Site-Root: ZMI Nav-Tree is no longer expandable\nAfter migrating to dx-site-root, the navtree within the zmi is no longer expandable\r\n\r\n\r\n\r\nhttps://github.com/plone/Products.CMFPlone/issues/2454 @jaroel @ale-rt \n", "before_files": [{"content": "from AccessControl import ClassSecurityInfo\nfrom AccessControl import Unauthorized\nfrom AccessControl.class_init import InitializeClass\nfrom Acquisition import aq_base\nfrom ComputedAttribute import ComputedAttribute\nfrom five.localsitemanager.registry import PersistentComponents\nfrom OFS.ObjectManager import REPLACEABLE\nfrom plone.dexterity.content import Container\nfrom Products.CMFCore import permissions\nfrom Products.CMFCore.interfaces import IContentish\nfrom Products.CMFCore.interfaces import ISiteRoot\nfrom Products.CMFCore.permissions import AccessContentsInformation\nfrom Products.CMFCore.permissions import AddPortalMember\nfrom Products.CMFCore.permissions import MailForgottenPassword\nfrom Products.CMFCore.permissions import RequestReview\nfrom Products.CMFCore.permissions import ReviewPortalContent\nfrom Products.CMFCore.permissions import SetOwnPassword\nfrom Products.CMFCore.permissions import SetOwnProperties\nfrom Products.CMFCore.PortalFolder import PortalFolderBase\nfrom Products.CMFCore.PortalObject import PortalObjectBase\nfrom Products.CMFCore.Skinnable import SkinnableObjectManager\nfrom Products.CMFCore.utils import _checkPermission\nfrom Products.CMFCore.utils import getToolByName\nfrom Products.CMFCore.utils import UniqueObject\nfrom Products.CMFPlone import bbb\nfrom Products.CMFPlone import PloneMessageFactory as _\nfrom Products.CMFPlone.interfaces.siteroot import IPloneSiteRoot\nfrom Products.CMFPlone.interfaces.syndication import ISyndicatable\nfrom Products.CMFPlone.permissions import AddPortalContent\nfrom Products.CMFPlone.permissions import AddPortalFolders\nfrom Products.CMFPlone.permissions import ListPortalMembers\nfrom Products.CMFPlone.permissions import ModifyPortalContent\nfrom Products.CMFPlone.permissions import ReplyToItem\nfrom Products.CMFPlone.permissions import View\nfrom Products.Five.component.interfaces import IObjectManagerSite\nfrom zope.interface.interfaces import ComponentLookupError\nfrom zope.event import notify\nfrom zope.interface import classImplementsOnly\nfrom zope.interface import implementedBy\nfrom zope.interface import implementer\nfrom zope.traversing.interfaces import BeforeTraverseEvent\n\n\nif bbb.HAS_ZSERVER:\n from webdav.NullResource import NullResource\n\n\n@implementer(IPloneSiteRoot, ISiteRoot, ISyndicatable, IObjectManagerSite)\nclass PloneSite(Container, SkinnableObjectManager, UniqueObject):\n \"\"\" The Plone site object. 
\"\"\"\n\n security = ClassSecurityInfo()\n meta_type = portal_type = 'Plone Site'\n\n # Ensure certain attributes come from the correct base class.\n _checkId = SkinnableObjectManager._checkId\n manage_main = PortalFolderBase.manage_main\n\n def __getattr__(self, name):\n try:\n # Try DX\n return super().__getattr__(name)\n except AttributeError:\n # Check portal_skins\n return SkinnableObjectManager.__getattr__(self, name)\n\n def __setattr__(self, name, obj):\n # handle re setting an item as an attribute\n if self._tree is not None and name in self:\n del self[name]\n self[name] = obj\n else:\n super().__setattr__(name, obj)\n\n def __delattr__(self, name):\n try:\n return super().__delattr__(name)\n except AttributeError:\n return self.__delitem__(name)\n\n # Removes the 'Components Folder'\n\n manage_options = (\n Container.manage_options[:2] +\n Container.manage_options[3:])\n\n __ac_permissions__ = (\n (AccessContentsInformation, ()),\n (AddPortalMember, ()),\n (SetOwnPassword, ()),\n (SetOwnProperties, ()),\n (MailForgottenPassword, ()),\n (RequestReview, ()),\n (ReviewPortalContent, ()),\n (AddPortalContent, ()),\n (AddPortalFolders, ()),\n (ListPortalMembers, ()),\n (ReplyToItem, ()),\n (View, ('isEffective',)),\n (ModifyPortalContent, ('manage_cutObjects', 'manage_pasteObjects',\n 'manage_renameForm', 'manage_renameObject',\n 'manage_renameObjects')))\n\n # Switch off ZMI ordering interface as it assumes a slightly\n # different functionality\n has_order_support = 0\n management_page_charset = 'utf-8'\n _default_sort_key = 'id'\n _properties = (\n {'id': 'title', 'type': 'string', 'mode': 'w'},\n {'id': 'description', 'type': 'text', 'mode': 'w'},\n )\n title = ''\n description = ''\n icon = 'misc_/CMFPlone/tool.gif'\n\n # From PortalObjectBase\n def __init__(self, id, title=''):\n super(PloneSite, self).__init__(id, title=title)\n components = PersistentComponents('++etc++site')\n components.__parent__ = self\n self.setSiteManager(components)\n\n # From PortalObjectBase\n def __before_publishing_traverse__(self, arg1, arg2=None):\n \"\"\" Pre-traversal hook.\n \"\"\"\n # XXX hack around a bug(?) in BeforeTraverse.MultiHook\n REQUEST = arg2 or arg1\n\n try:\n notify(BeforeTraverseEvent(self, REQUEST))\n except ComponentLookupError:\n # allow ZMI access, even if the portal's site manager is missing\n pass\n self.setupCurrentSkin(REQUEST)\n\n super(PloneSite, self).__before_publishing_traverse__(arg1, arg2)\n\n def __browser_default__(self, request):\n \"\"\" Set default so we can return whatever we want instead\n of index_html \"\"\"\n return getToolByName(self, 'plone_utils').browserDefault(self)\n\n def index_html(self):\n \"\"\" Acquire if not present. 
\"\"\"\n request = getattr(self, 'REQUEST', None)\n if (\n request is not None\n and 'REQUEST_METHOD' in request\n and request.maybe_webdav_client\n ):\n method = request['REQUEST_METHOD']\n if bbb.HAS_ZSERVER and method in ('PUT', ):\n # Very likely a WebDAV client trying to create something\n result = NullResource(self, 'index_html')\n setattr(result, '__replaceable__', REPLACEABLE)\n return result\n elif method not in ('GET', 'HEAD', 'POST'):\n raise AttributeError('index_html')\n # Acquire from skin.\n _target = self.__getattr__('index_html')\n result = aq_base(_target).__of__(self)\n setattr(result, '__replaceable__', REPLACEABLE)\n return result\n\n index_html = ComputedAttribute(index_html, 1)\n\n def manage_beforeDelete(self, container, item):\n # Should send out an Event before Site is being deleted.\n self.removal_inprogress = 1\n PloneSite.inheritedAttribute('manage_beforeDelete')(self, container,\n item)\n\n @security.protected(permissions.DeleteObjects)\n def manage_delObjects(self, ids=None, REQUEST=None):\n \"\"\"We need to enforce security.\"\"\"\n if ids is None:\n ids = []\n if isinstance(ids, str):\n ids = [ids]\n for id in ids:\n item = self._getOb(id)\n if not _checkPermission(permissions.DeleteObjects, item):\n raise Unauthorized(\n \"Do not have permissions to remove this object\")\n return PortalObjectBase.manage_delObjects(self, ids, REQUEST=REQUEST)\n\n def view(self):\n \"\"\" Ensure that we get a plain view of the object, via a delegation to\n __call__(), which is defined in BrowserDefaultMixin\n \"\"\"\n return self()\n\n @security.protected(permissions.AccessContentsInformation)\n def folderlistingFolderContents(self, contentFilter=None):\n \"\"\"Calls listFolderContents in protected only by ACI so that\n folder_listing can work without the List folder contents permission.\n\n This is copied from Archetypes Basefolder and is needed by the\n reference browser.\n \"\"\"\n return self.listFolderContents(contentFilter)\n\n def isEffective(self, date):\n # Override DefaultDublinCoreImpl's test, since we are always viewable.\n return 1\n\n\n# Remove the IContentish interface so we don't listen to events that won't\n# apply to the site root, ie handleUidAnnotationEvent\nclassImplementsOnly(PloneSite, implementedBy(PloneSite) - IContentish)\n\nInitializeClass(PloneSite)\n", "path": "Products/CMFPlone/Portal.py"}], "after_files": [{"content": "from AccessControl import ClassSecurityInfo\nfrom AccessControl import Unauthorized\nfrom AccessControl.class_init import InitializeClass\nfrom Acquisition import aq_base\nfrom ComputedAttribute import ComputedAttribute\nfrom five.localsitemanager.registry import PersistentComponents\nfrom OFS.ObjectManager import REPLACEABLE\nfrom plone.dexterity.content import Container\nfrom Products.CMFCore import permissions\nfrom Products.CMFCore.interfaces import IContentish\nfrom Products.CMFCore.interfaces import ISiteRoot\nfrom Products.CMFCore.permissions import AccessContentsInformation\nfrom Products.CMFCore.permissions import AddPortalMember\nfrom Products.CMFCore.permissions import MailForgottenPassword\nfrom Products.CMFCore.permissions import RequestReview\nfrom Products.CMFCore.permissions import ReviewPortalContent\nfrom Products.CMFCore.permissions import SetOwnPassword\nfrom Products.CMFCore.permissions import SetOwnProperties\nfrom Products.CMFCore.PortalFolder import PortalFolderBase\nfrom Products.CMFCore.PortalObject import PortalObjectBase\nfrom Products.CMFCore.Skinnable import SkinnableObjectManager\nfrom 
Products.CMFCore.utils import _checkPermission\nfrom Products.CMFCore.utils import getToolByName\nfrom Products.CMFCore.utils import UniqueObject\nfrom Products.CMFPlone import bbb\nfrom Products.CMFPlone import PloneMessageFactory as _\nfrom Products.CMFPlone.interfaces.siteroot import IPloneSiteRoot\nfrom Products.CMFPlone.interfaces.syndication import ISyndicatable\nfrom Products.CMFPlone.permissions import AddPortalContent\nfrom Products.CMFPlone.permissions import AddPortalFolders\nfrom Products.CMFPlone.permissions import ListPortalMembers\nfrom Products.CMFPlone.permissions import ModifyPortalContent\nfrom Products.CMFPlone.permissions import ReplyToItem\nfrom Products.CMFPlone.permissions import View\nfrom Products.Five.component.interfaces import IObjectManagerSite\nfrom zope.interface.interfaces import ComponentLookupError\nfrom zope.event import notify\nfrom zope.interface import classImplementsOnly\nfrom zope.interface import implementedBy\nfrom zope.interface import implementer\nfrom zope.traversing.interfaces import BeforeTraverseEvent\n\n\nif bbb.HAS_ZSERVER:\n from webdav.NullResource import NullResource\n\n\n@implementer(IPloneSiteRoot, ISiteRoot, ISyndicatable, IObjectManagerSite)\nclass PloneSite(Container, SkinnableObjectManager, UniqueObject):\n \"\"\" The Plone site object. \"\"\"\n\n security = ClassSecurityInfo()\n meta_type = portal_type = 'Plone Site'\n\n # Ensure certain attributes come from the correct base class.\n _checkId = SkinnableObjectManager._checkId\n manage_main = PortalFolderBase.manage_main\n\n def __getattr__(self, name):\n try:\n # Try DX\n return super().__getattr__(name)\n except AttributeError:\n # Check portal_skins\n return SkinnableObjectManager.__getattr__(self, name)\n\n def __setattr__(self, name, obj):\n # handle re setting an item as an attribute\n if self._tree is not None and name in self:\n del self[name]\n self[name] = obj\n else:\n super().__setattr__(name, obj)\n\n def __delattr__(self, name):\n try:\n return super().__delattr__(name)\n except AttributeError:\n return self.__delitem__(name)\n\n # Removes the 'Components Folder'\n\n manage_options = (\n Container.manage_options[:2] +\n Container.manage_options[3:])\n\n __ac_permissions__ = (\n (AccessContentsInformation, ()),\n (AddPortalMember, ()),\n (SetOwnPassword, ()),\n (SetOwnProperties, ()),\n (MailForgottenPassword, ()),\n (RequestReview, ()),\n (ReviewPortalContent, ()),\n (AddPortalContent, ()),\n (AddPortalFolders, ()),\n (ListPortalMembers, ()),\n (ReplyToItem, ()),\n (View, ('isEffective',)),\n (ModifyPortalContent, ('manage_cutObjects', 'manage_pasteObjects',\n 'manage_renameForm', 'manage_renameObject',\n 'manage_renameObjects')))\n\n # Switch off ZMI ordering interface as it assumes a slightly\n # different functionality\n has_order_support = 0\n management_page_charset = 'utf-8'\n _default_sort_key = 'id'\n _properties = (\n {'id': 'title', 'type': 'string', 'mode': 'w'},\n {'id': 'description', 'type': 'text', 'mode': 'w'},\n )\n title = ''\n description = ''\n icon = 'misc_/CMFPlone/tool.gif'\n\n # From PortalObjectBase\n def __init__(self, id, title=''):\n super(PloneSite, self).__init__(id, title=title)\n components = PersistentComponents('++etc++site')\n components.__parent__ = self\n self.setSiteManager(components)\n\n # From PortalObjectBase\n def __before_publishing_traverse__(self, arg1, arg2=None):\n \"\"\" Pre-traversal hook.\n \"\"\"\n # XXX hack around a bug(?) 
in BeforeTraverse.MultiHook\n REQUEST = arg2 or arg1\n\n try:\n notify(BeforeTraverseEvent(self, REQUEST))\n except ComponentLookupError:\n # allow ZMI access, even if the portal's site manager is missing\n pass\n self.setupCurrentSkin(REQUEST)\n\n super(PloneSite, self).__before_publishing_traverse__(arg1, arg2)\n\n # Concept from OFS.OrderSupport\n @security.protected(permissions.AccessContentsInformation)\n def tpValues(self):\n # Return a list of subobjects, used by ZMI tree tag (and only there).\n # see also https://github.com/plone/Products.CMFPlone/issues/3323\n return sorted(\n (obj for obj in self.objectValues() if getattr(aq_base(obj), 'isPrincipiaFolderish', False)),\n key=lambda obj: obj.getId(),\n )\n\n def __browser_default__(self, request):\n \"\"\" Set default so we can return whatever we want instead\n of index_html \"\"\"\n return getToolByName(self, 'plone_utils').browserDefault(self)\n\n def index_html(self):\n \"\"\" Acquire if not present. \"\"\"\n request = getattr(self, 'REQUEST', None)\n if (\n request is not None\n and 'REQUEST_METHOD' in request\n and request.maybe_webdav_client\n ):\n method = request['REQUEST_METHOD']\n if bbb.HAS_ZSERVER and method in ('PUT', ):\n # Very likely a WebDAV client trying to create something\n result = NullResource(self, 'index_html')\n setattr(result, '__replaceable__', REPLACEABLE)\n return result\n elif method not in ('GET', 'HEAD', 'POST'):\n raise AttributeError('index_html')\n # Acquire from skin.\n _target = self.__getattr__('index_html')\n result = aq_base(_target).__of__(self)\n setattr(result, '__replaceable__', REPLACEABLE)\n return result\n\n index_html = ComputedAttribute(index_html, 1)\n\n def manage_beforeDelete(self, container, item):\n # Should send out an Event before Site is being deleted.\n self.removal_inprogress = 1\n PloneSite.inheritedAttribute('manage_beforeDelete')(self, container,\n item)\n\n @security.protected(permissions.DeleteObjects)\n def manage_delObjects(self, ids=None, REQUEST=None):\n \"\"\"We need to enforce security.\"\"\"\n if ids is None:\n ids = []\n if isinstance(ids, str):\n ids = [ids]\n for id in ids:\n item = self._getOb(id)\n if not _checkPermission(permissions.DeleteObjects, item):\n raise Unauthorized(\n \"Do not have permissions to remove this object\")\n return PortalObjectBase.manage_delObjects(self, ids, REQUEST=REQUEST)\n\n def view(self):\n \"\"\" Ensure that we get a plain view of the object, via a delegation to\n __call__(), which is defined in BrowserDefaultMixin\n \"\"\"\n return self()\n\n @security.protected(permissions.AccessContentsInformation)\n def folderlistingFolderContents(self, contentFilter=None):\n \"\"\"Calls listFolderContents in protected only by ACI so that\n folder_listing can work without the List folder contents permission.\n\n This is copied from Archetypes Basefolder and is needed by the\n reference browser.\n \"\"\"\n return self.listFolderContents(contentFilter)\n\n def isEffective(self, date):\n # Override DefaultDublinCoreImpl's test, since we are always viewable.\n return 1\n\n\n# Remove the IContentish interface so we don't listen to events that won't\n# apply to the site root, ie handleUidAnnotationEvent\nclassImplementsOnly(PloneSite, implementedBy(PloneSite) - IContentish)\n\nInitializeClass(PloneSite)\n", "path": "Products/CMFPlone/Portal.py"}]}
| 2,769 | 238 |
gh_patches_debug_9029
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-1420
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sort evaluations in email lists by name
When sending emails which include lists of evaluations (when asking for preparation, reminding for preparation, publishing results), these lists should be sorted alphabetically by the name of the evaluation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/evaluation/templatetags/evaluation_filters.py`
Content:
```
1 from collections import namedtuple
2
3 from django.forms import TypedChoiceField
4 from django.template import Library
5 from django.utils.translation import gettext_lazy as _
6
7 from evap.evaluation.models import BASE_UNIPOLAR_CHOICES
8 from evap.rewards.tools import can_reward_points_be_used_by
9 from evap.student.forms import HeadingField
10
11
12 # the names displayed for contributors
13 STATE_NAMES = {
14 'new': _('new'),
15 'prepared': _('prepared'),
16 'editor_approved': _('editor approved'),
17 'approved': _('approved'),
18 'in_evaluation': _('in evaluation'),
19 'evaluated': _('evaluated'),
20 'reviewed': _('reviewed'),
21 'published': _('published'),
22 }
23
24
25 # the descriptions used in tooltips for contributors
26 STATE_DESCRIPTIONS = {
27 'new': _('The evaluation was newly created and will be prepared by the evaluation team.'),
28 'prepared': _('The evaluation was prepared by the evaluation team and is now available for editors.'),
29 'editor_approved': _('The evaluation was approved by an editor and will now be checked by the evaluation team.'),
30 'approved': _('All preparations are finished. The evaluation will begin once the defined start date is reached.'),
31 'in_evaluation': _('The evaluation is currently running until the defined end date is reached.'),
32 'evaluated': _('The evaluation has finished and will now be reviewed by the evaluation team.'),
33 'reviewed': _('The evaluation has finished and was reviewed by the evaluation team. You will receive an email when its results are published.'),
34 'published': _('The results for this evaluation have been published.'),
35 }
36
37
38 # values for approval states shown to staff
39 StateValues = namedtuple('StateValues', ('order', 'icon', 'filter', 'description'))
40 APPROVAL_STATES = {
41 'new': StateValues(0, 'fas fa-circle icon-yellow', 'new', _('In preparation')),
42 'prepared': StateValues(2, 'far fa-square icon-gray', 'prepared', _('Awaiting editor review')),
43 'editor_approved': StateValues(1, 'far fa-check-square icon-yellow', 'editor_approved', _('Approved by editor, awaiting manager review')),
44 'approved': StateValues(3, 'far fa-check-square icon-green', 'approved', _('Approved by manager')),
45 }
46
47
48 register = Library()
49
50
51 @register.filter(name='zip')
52 def _zip(a, b):
53 return zip(a, b)
54
55
56 @register.filter()
57 def zip_choices(counts, choices):
58 return zip(counts, choices.names, choices.colors, choices.values)
59
60
61 @register.filter
62 def ordering_index(evaluation):
63 if evaluation.state in ['new', 'prepared', 'editor_approved', 'approved']:
64 return evaluation.days_until_evaluation
65 if evaluation.state == "in_evaluation":
66 return 100000 + evaluation.days_left_for_evaluation
67 return 200000 + evaluation.days_left_for_evaluation
68
69
70 # from http://www.jongales.com/blog/2009/10/19/percentage-django-template-tag/
71 @register.filter
72 def percentage(fraction, population):
73 try:
74 return "{0:.0f}%".format(int(float(fraction) / float(population) * 100))
75 except ValueError:
76 return None
77 except ZeroDivisionError:
78 return None
79
80
81 @register.filter
82 def percentage_one_decimal(fraction, population):
83 try:
84 return "{0:.1f}%".format((float(fraction) / float(population)) * 100)
85 except ValueError:
86 return None
87 except ZeroDivisionError:
88 return None
89
90
91 @register.filter
92 def to_colors(choices):
93 if not choices:
94 # When displaying the course distribution, there are no associated voting choices.
95 # In that case, we just use the colors of a unipolar scale.
96 return BASE_UNIPOLAR_CHOICES['colors']
97 return choices.colors
98
99
100 @register.filter
101 def weight_info(evaluation):
102 try:
103 course = evaluation.course
104 except AttributeError:
105 return None
106 if course.evaluation_weight_sum and course.evaluation_count > 1:
107 return percentage(evaluation.weight, course.evaluation_weight_sum)
108 return None
109
110
111 @register.filter
112 def statename(state):
113 return STATE_NAMES.get(state)
114
115
116 @register.filter
117 def statedescription(state):
118 return STATE_DESCRIPTIONS.get(state)
119
120
121 @register.filter
122 def approval_state_values(state):
123 if state in APPROVAL_STATES:
124 return APPROVAL_STATES[state]
125 if state in ['in_evaluation', 'evaluated', 'reviewed', 'published']:
126 return APPROVAL_STATES['approved']
127 return None
128
129
130 @register.filter
131 def approval_state_icon(state):
132 if state in APPROVAL_STATES:
133 return APPROVAL_STATES[state].icon
134 if state in ['in_evaluation', 'evaluated', 'reviewed', 'published']:
135 return APPROVAL_STATES['approved'].icon
136 return None
137
138
139 @register.filter
140 def can_results_page_be_seen_by(evaluation, user):
141 return evaluation.can_results_page_be_seen_by(user)
142
143
144 @register.filter(name='can_reward_points_be_used_by')
145 def _can_reward_points_be_used_by(user):
146 return can_reward_points_be_used_by(user)
147
148
149 @register.filter
150 def is_choice_field(field):
151 return isinstance(field.field, TypedChoiceField)
152
153
154 @register.filter
155 def is_heading_field(field):
156 return isinstance(field.field, HeadingField)
157
158
159 @register.filter
160 def is_user_editor_or_delegate(evaluation, user):
161 return evaluation.is_user_editor_or_delegate(user)
162
163
164 @register.filter
165 def is_user_responsible_or_contributor_or_delegate(evaluation, user):
166 return evaluation.is_user_responsible_or_contributor_or_delegate(user)
167
168
169 @register.filter
170 def message_class(level):
171 return {
172 'debug': 'info',
173 'info': 'info',
174 'success': 'success',
175 'warning': 'warning',
176 'error': 'danger',
177 }.get(level, 'info')
178
179
180 @register.filter
181 def hours_and_minutes(time_left_for_evaluation):
182 hours = time_left_for_evaluation.seconds // 3600
183 minutes = (time_left_for_evaluation.seconds // 60) % 60
184 return "{:02}:{:02}".format(hours, minutes)
185
186
187 @register.filter
188 def has_nonresponsible_editor(evaluation):
189 return evaluation.contributions.filter(can_edit=True).exclude(contributor__in=evaluation.course.responsibles.all()).exists()
190
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/evap/evaluation/templatetags/evaluation_filters.py b/evap/evaluation/templatetags/evaluation_filters.py
--- a/evap/evaluation/templatetags/evaluation_filters.py
+++ b/evap/evaluation/templatetags/evaluation_filters.py
@@ -187,3 +187,13 @@
@register.filter
def has_nonresponsible_editor(evaluation):
return evaluation.contributions.filter(can_edit=True).exclude(contributor__in=evaluation.course.responsibles.all()).exists()
+
+
[email protected]
+def order_by(iterable, attribute):
+ return sorted(iterable, key=lambda item: getattr(item, attribute))
+
+
[email protected]
+def order_due_evaluations_by(due_evaluations, attribute):
+ return sorted(due_evaluations, key=lambda due_evaluation: getattr(due_evaluation[1], attribute))
|
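A short stand-alone sketch of how the added `order_by` filter behaves; the filter body is copied from the diff, while the stand-in objects and the `full_name` attribute are assumptions for illustration. In a template the same call would be spelled `evaluations|order_by:'full_name'` inside the email-rendering loop.

```python
# Self-contained sketch of the new order_by filter (body copied from the diff;
# the SimpleNamespace objects and 'full_name' attribute are illustrative).
from types import SimpleNamespace

def order_by(iterable, attribute):
    return sorted(iterable, key=lambda item: getattr(item, attribute))

evaluations = [SimpleNamespace(full_name="Operating Systems"),
               SimpleNamespace(full_name="Algorithms"),
               SimpleNamespace(full_name="Databases")]

print([e.full_name for e in order_by(evaluations, "full_name")])
# -> ['Algorithms', 'Databases', 'Operating Systems']
```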
{"golden_diff": "diff --git a/evap/evaluation/templatetags/evaluation_filters.py b/evap/evaluation/templatetags/evaluation_filters.py\n--- a/evap/evaluation/templatetags/evaluation_filters.py\n+++ b/evap/evaluation/templatetags/evaluation_filters.py\n@@ -187,3 +187,13 @@\n @register.filter\n def has_nonresponsible_editor(evaluation):\n return evaluation.contributions.filter(can_edit=True).exclude(contributor__in=evaluation.course.responsibles.all()).exists()\n+\n+\[email protected]\n+def order_by(iterable, attribute):\n+ return sorted(iterable, key=lambda item: getattr(item, attribute))\n+\n+\[email protected]\n+def order_due_evaluations_by(due_evaluations, attribute):\n+ return sorted(due_evaluations, key=lambda due_evaluation: getattr(due_evaluation[1], attribute))\n", "issue": "Sort evaluations in email lists by name\nWhen sending emails which include lists of evaluations (when asking for preparation, reminding for preparation, publishing results), these lists should be sorted alphabetically by the name of the evaluation.\n", "before_files": [{"content": "from collections import namedtuple\n\nfrom django.forms import TypedChoiceField\nfrom django.template import Library\nfrom django.utils.translation import gettext_lazy as _\n\nfrom evap.evaluation.models import BASE_UNIPOLAR_CHOICES\nfrom evap.rewards.tools import can_reward_points_be_used_by\nfrom evap.student.forms import HeadingField\n\n\n# the names displayed for contributors\nSTATE_NAMES = {\n 'new': _('new'),\n 'prepared': _('prepared'),\n 'editor_approved': _('editor approved'),\n 'approved': _('approved'),\n 'in_evaluation': _('in evaluation'),\n 'evaluated': _('evaluated'),\n 'reviewed': _('reviewed'),\n 'published': _('published'),\n}\n\n\n# the descriptions used in tooltips for contributors\nSTATE_DESCRIPTIONS = {\n 'new': _('The evaluation was newly created and will be prepared by the evaluation team.'),\n 'prepared': _('The evaluation was prepared by the evaluation team and is now available for editors.'),\n 'editor_approved': _('The evaluation was approved by an editor and will now be checked by the evaluation team.'),\n 'approved': _('All preparations are finished. The evaluation will begin once the defined start date is reached.'),\n 'in_evaluation': _('The evaluation is currently running until the defined end date is reached.'),\n 'evaluated': _('The evaluation has finished and will now be reviewed by the evaluation team.'),\n 'reviewed': _('The evaluation has finished and was reviewed by the evaluation team. 
You will receive an email when its results are published.'),\n 'published': _('The results for this evaluation have been published.'),\n}\n\n\n# values for approval states shown to staff\nStateValues = namedtuple('StateValues', ('order', 'icon', 'filter', 'description'))\nAPPROVAL_STATES = {\n 'new': StateValues(0, 'fas fa-circle icon-yellow', 'new', _('In preparation')),\n 'prepared': StateValues(2, 'far fa-square icon-gray', 'prepared', _('Awaiting editor review')),\n 'editor_approved': StateValues(1, 'far fa-check-square icon-yellow', 'editor_approved', _('Approved by editor, awaiting manager review')),\n 'approved': StateValues(3, 'far fa-check-square icon-green', 'approved', _('Approved by manager')),\n}\n\n\nregister = Library()\n\n\[email protected](name='zip')\ndef _zip(a, b):\n return zip(a, b)\n\n\[email protected]()\ndef zip_choices(counts, choices):\n return zip(counts, choices.names, choices.colors, choices.values)\n\n\[email protected]\ndef ordering_index(evaluation):\n if evaluation.state in ['new', 'prepared', 'editor_approved', 'approved']:\n return evaluation.days_until_evaluation\n if evaluation.state == \"in_evaluation\":\n return 100000 + evaluation.days_left_for_evaluation\n return 200000 + evaluation.days_left_for_evaluation\n\n\n# from http://www.jongales.com/blog/2009/10/19/percentage-django-template-tag/\[email protected]\ndef percentage(fraction, population):\n try:\n return \"{0:.0f}%\".format(int(float(fraction) / float(population) * 100))\n except ValueError:\n return None\n except ZeroDivisionError:\n return None\n\n\[email protected]\ndef percentage_one_decimal(fraction, population):\n try:\n return \"{0:.1f}%\".format((float(fraction) / float(population)) * 100)\n except ValueError:\n return None\n except ZeroDivisionError:\n return None\n\n\[email protected]\ndef to_colors(choices):\n if not choices:\n # When displaying the course distribution, there are no associated voting choices.\n # In that case, we just use the colors of a unipolar scale.\n return BASE_UNIPOLAR_CHOICES['colors']\n return choices.colors\n\n\[email protected]\ndef weight_info(evaluation):\n try:\n course = evaluation.course\n except AttributeError:\n return None\n if course.evaluation_weight_sum and course.evaluation_count > 1:\n return percentage(evaluation.weight, course.evaluation_weight_sum)\n return None\n\n\[email protected]\ndef statename(state):\n return STATE_NAMES.get(state)\n\n\[email protected]\ndef statedescription(state):\n return STATE_DESCRIPTIONS.get(state)\n\n\[email protected]\ndef approval_state_values(state):\n if state in APPROVAL_STATES:\n return APPROVAL_STATES[state]\n if state in ['in_evaluation', 'evaluated', 'reviewed', 'published']:\n return APPROVAL_STATES['approved']\n return None\n\n\[email protected]\ndef approval_state_icon(state):\n if state in APPROVAL_STATES:\n return APPROVAL_STATES[state].icon\n if state in ['in_evaluation', 'evaluated', 'reviewed', 'published']:\n return APPROVAL_STATES['approved'].icon\n return None\n\n\[email protected]\ndef can_results_page_be_seen_by(evaluation, user):\n return evaluation.can_results_page_be_seen_by(user)\n\n\[email protected](name='can_reward_points_be_used_by')\ndef _can_reward_points_be_used_by(user):\n return can_reward_points_be_used_by(user)\n\n\[email protected]\ndef is_choice_field(field):\n return isinstance(field.field, TypedChoiceField)\n\n\[email protected]\ndef is_heading_field(field):\n return isinstance(field.field, HeadingField)\n\n\[email protected]\ndef is_user_editor_or_delegate(evaluation, 
user):\n return evaluation.is_user_editor_or_delegate(user)\n\n\[email protected]\ndef is_user_responsible_or_contributor_or_delegate(evaluation, user):\n return evaluation.is_user_responsible_or_contributor_or_delegate(user)\n\n\[email protected]\ndef message_class(level):\n return {\n 'debug': 'info',\n 'info': 'info',\n 'success': 'success',\n 'warning': 'warning',\n 'error': 'danger',\n }.get(level, 'info')\n\n\[email protected]\ndef hours_and_minutes(time_left_for_evaluation):\n hours = time_left_for_evaluation.seconds // 3600\n minutes = (time_left_for_evaluation.seconds // 60) % 60\n return \"{:02}:{:02}\".format(hours, minutes)\n\n\[email protected]\ndef has_nonresponsible_editor(evaluation):\n return evaluation.contributions.filter(can_edit=True).exclude(contributor__in=evaluation.course.responsibles.all()).exists()\n", "path": "evap/evaluation/templatetags/evaluation_filters.py"}], "after_files": [{"content": "from collections import namedtuple\n\nfrom django.forms import TypedChoiceField\nfrom django.template import Library\nfrom django.utils.translation import gettext_lazy as _\n\nfrom evap.evaluation.models import BASE_UNIPOLAR_CHOICES\nfrom evap.rewards.tools import can_reward_points_be_used_by\nfrom evap.student.forms import HeadingField\n\n\n# the names displayed for contributors\nSTATE_NAMES = {\n 'new': _('new'),\n 'prepared': _('prepared'),\n 'editor_approved': _('editor approved'),\n 'approved': _('approved'),\n 'in_evaluation': _('in evaluation'),\n 'evaluated': _('evaluated'),\n 'reviewed': _('reviewed'),\n 'published': _('published'),\n}\n\n\n# the descriptions used in tooltips for contributors\nSTATE_DESCRIPTIONS = {\n 'new': _('The evaluation was newly created and will be prepared by the evaluation team.'),\n 'prepared': _('The evaluation was prepared by the evaluation team and is now available for editors.'),\n 'editor_approved': _('The evaluation was approved by an editor and will now be checked by the evaluation team.'),\n 'approved': _('All preparations are finished. The evaluation will begin once the defined start date is reached.'),\n 'in_evaluation': _('The evaluation is currently running until the defined end date is reached.'),\n 'evaluated': _('The evaluation has finished and will now be reviewed by the evaluation team.'),\n 'reviewed': _('The evaluation has finished and was reviewed by the evaluation team. 
You will receive an email when its results are published.'),\n 'published': _('The results for this evaluation have been published.'),\n}\n\n\n# values for approval states shown to staff\nStateValues = namedtuple('StateValues', ('order', 'icon', 'filter', 'description'))\nAPPROVAL_STATES = {\n 'new': StateValues(0, 'fas fa-circle icon-yellow', 'new', _('In preparation')),\n 'prepared': StateValues(2, 'far fa-square icon-gray', 'prepared', _('Awaiting editor review')),\n 'editor_approved': StateValues(1, 'far fa-check-square icon-yellow', 'editor_approved', _('Approved by editor, awaiting manager review')),\n 'approved': StateValues(3, 'far fa-check-square icon-green', 'approved', _('Approved by manager')),\n}\n\n\nregister = Library()\n\n\[email protected](name='zip')\ndef _zip(a, b):\n return zip(a, b)\n\n\[email protected]()\ndef zip_choices(counts, choices):\n return zip(counts, choices.names, choices.colors, choices.values)\n\n\[email protected]\ndef ordering_index(evaluation):\n if evaluation.state in ['new', 'prepared', 'editor_approved', 'approved']:\n return evaluation.days_until_evaluation\n if evaluation.state == \"in_evaluation\":\n return 100000 + evaluation.days_left_for_evaluation\n return 200000 + evaluation.days_left_for_evaluation\n\n\n# from http://www.jongales.com/blog/2009/10/19/percentage-django-template-tag/\[email protected]\ndef percentage(fraction, population):\n try:\n return \"{0:.0f}%\".format(int(float(fraction) / float(population) * 100))\n except ValueError:\n return None\n except ZeroDivisionError:\n return None\n\n\[email protected]\ndef percentage_one_decimal(fraction, population):\n try:\n return \"{0:.1f}%\".format((float(fraction) / float(population)) * 100)\n except ValueError:\n return None\n except ZeroDivisionError:\n return None\n\n\[email protected]\ndef to_colors(choices):\n if not choices:\n # When displaying the course distribution, there are no associated voting choices.\n # In that case, we just use the colors of a unipolar scale.\n return BASE_UNIPOLAR_CHOICES['colors']\n return choices.colors\n\n\[email protected]\ndef weight_info(evaluation):\n try:\n course = evaluation.course\n except AttributeError:\n return None\n if course.evaluation_weight_sum and course.evaluation_count > 1:\n return percentage(evaluation.weight, course.evaluation_weight_sum)\n return None\n\n\[email protected]\ndef statename(state):\n return STATE_NAMES.get(state)\n\n\[email protected]\ndef statedescription(state):\n return STATE_DESCRIPTIONS.get(state)\n\n\[email protected]\ndef approval_state_values(state):\n if state in APPROVAL_STATES:\n return APPROVAL_STATES[state]\n if state in ['in_evaluation', 'evaluated', 'reviewed', 'published']:\n return APPROVAL_STATES['approved']\n return None\n\n\[email protected]\ndef approval_state_icon(state):\n if state in APPROVAL_STATES:\n return APPROVAL_STATES[state].icon\n if state in ['in_evaluation', 'evaluated', 'reviewed', 'published']:\n return APPROVAL_STATES['approved'].icon\n return None\n\n\[email protected]\ndef can_results_page_be_seen_by(evaluation, user):\n return evaluation.can_results_page_be_seen_by(user)\n\n\[email protected](name='can_reward_points_be_used_by')\ndef _can_reward_points_be_used_by(user):\n return can_reward_points_be_used_by(user)\n\n\[email protected]\ndef is_choice_field(field):\n return isinstance(field.field, TypedChoiceField)\n\n\[email protected]\ndef is_heading_field(field):\n return isinstance(field.field, HeadingField)\n\n\[email protected]\ndef is_user_editor_or_delegate(evaluation, 
user):\n return evaluation.is_user_editor_or_delegate(user)\n\n\[email protected]\ndef is_user_responsible_or_contributor_or_delegate(evaluation, user):\n return evaluation.is_user_responsible_or_contributor_or_delegate(user)\n\n\[email protected]\ndef message_class(level):\n return {\n 'debug': 'info',\n 'info': 'info',\n 'success': 'success',\n 'warning': 'warning',\n 'error': 'danger',\n }.get(level, 'info')\n\n\[email protected]\ndef hours_and_minutes(time_left_for_evaluation):\n hours = time_left_for_evaluation.seconds // 3600\n minutes = (time_left_for_evaluation.seconds // 60) % 60\n return \"{:02}:{:02}\".format(hours, minutes)\n\n\[email protected]\ndef has_nonresponsible_editor(evaluation):\n return evaluation.contributions.filter(can_edit=True).exclude(contributor__in=evaluation.course.responsibles.all()).exists()\n\n\[email protected]\ndef order_by(iterable, attribute):\n return sorted(iterable, key=lambda item: getattr(item, attribute))\n\n\[email protected]\ndef order_due_evaluations_by(due_evaluations, attribute):\n return sorted(due_evaluations, key=lambda due_evaluation: getattr(due_evaluation[1], attribute))\n", "path": "evap/evaluation/templatetags/evaluation_filters.py"}]}
| 2,142 | 198 |
gh_patches_debug_1261
|
rasdani/github-patches
|
git_diff
|
swcarpentry__python-novice-inflammation-736
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Lesson 10 - numpy.mean(data) and data.mean
In lesson 10, when the lesson refers to readings_03.py, the code shows that to calculate the mean over 'data' across all days, numpy.mean is used: numpy.mean(data, axis=1). However, the file readings_03.py itself (at least the version I downloaded recently) uses the instruction data.mean(axis=1). Both lead to the same result, but for consistency I would suggest either modifying the readings_*.py files to use numpy.mean (as this is what has been used throughout the entire lesson), or explaining explicitly that both expressions lead to the same result (it would be a good time to remind students about object attributes).
--- END ISSUE ---
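
The equivalence the issue describes — `numpy.mean(data, axis=1)` versus `data.mean(axis=1)` — is quick to demonstrate; the small array below is a made-up stand-in for the inflammation data:

```python
import numpy

# Made-up stand-in for the inflammation data: 2 patients x 3 days.
data = numpy.array([[1.0, 2.0, 3.0],
                    [4.0, 5.0, 6.0]])

via_function = numpy.mean(data, axis=1)  # module-level function, as used in the lesson text
via_method = data.mean(axis=1)           # ndarray method (attribute), as in readings_03.py

assert numpy.array_equal(via_function, via_method)
print(via_function)  # [2. 5.]
```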
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `code/readings_03.py`
Content:
```
1 import sys
2 import numpy
3
4
5 def main():
6 script = sys.argv[0]
7 for filename in sys.argv[1:]:
8 data = numpy.loadtxt(filename, delimiter=',')
9 for m in data.mean(axis=1):
10 print(m)
11
12
13 if __name__ == '__main__':
14 main()
15
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/code/readings_03.py b/code/readings_03.py
--- a/code/readings_03.py
+++ b/code/readings_03.py
@@ -6,7 +6,7 @@
script = sys.argv[0]
for filename in sys.argv[1:]:
data = numpy.loadtxt(filename, delimiter=',')
- for m in data.mean(axis=1):
+ for m in numpy.mean(data, axis=1):
print(m)
|
{"golden_diff": "diff --git a/code/readings_03.py b/code/readings_03.py\n--- a/code/readings_03.py\n+++ b/code/readings_03.py\n@@ -6,7 +6,7 @@\n script = sys.argv[0]\n for filename in sys.argv[1:]:\n data = numpy.loadtxt(filename, delimiter=',')\n- for m in data.mean(axis=1):\n+ for m in numpy.mean(data, axis=1):\n print(m)\n", "issue": "Lesson 10 - numpy.mean(data) and data.mean\nIn lesson 10, when the lesson refers to readings_03.py, the code shows that to calculate the mean over 'data' across all days, numpy.mean is used: numpy.mean(data, axis=1). However when looking at the file readings_03.py (at least the version I downloaded recently) uses the instruction data.mean(axis=1). Both lead to the same result, but for consistency I would suggest to either modify the readings_*.py to use numpy.mean (as this is what it has been used throughout the entire lesson), or explain explicitly that both expressions lead to the same result (it would be a good time to remind students about object attributes). \n", "before_files": [{"content": "import sys\nimport numpy\n\n\ndef main():\n script = sys.argv[0]\n for filename in sys.argv[1:]:\n data = numpy.loadtxt(filename, delimiter=',')\n for m in data.mean(axis=1):\n print(m)\n\n\nif __name__ == '__main__':\n main()\n", "path": "code/readings_03.py"}], "after_files": [{"content": "import sys\nimport numpy\n\n\ndef main():\n script = sys.argv[0]\n for filename in sys.argv[1:]:\n data = numpy.loadtxt(filename, delimiter=',')\n for m in numpy.mean(data, axis=1):\n print(m)\n\n\nif __name__ == '__main__':\n main()\n", "path": "code/readings_03.py"}]}
| 502 | 105 |
gh_patches_debug_37853
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-9611
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
api: support config/credentials passing
dvcfs now supports config passing https://github.com/iterative/dvc/issues/9154 and we need to allow for the same with api methods.
Related https://github.com/iterative/dvc/issues/4336
--- END ISSUE ---
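
A rough sketch of the call pattern the issue is asking for, mirroring what dvcfs already accepts — the `config` keyword and the shape of the dict (modelled on DVC's config sections) are assumptions at this point rather than released API, and the paths and URLs are placeholders:

```python
import dvc.api

# Hypothetical per-call remote configuration instead of relying on local .dvc/config.
config = {
    "core": {"remote": "myremote"},
    "remote": {
        "myremote": {
            "url": "s3://my-bucket/dvc-storage",
            # Credential option names follow DVC's S3 remote settings.
            "access_key_id": "AKIA...",
            "secret_access_key": "...",
        },
    },
}

with dvc.api.open(
    "data/features.csv",                          # placeholder path
    repo="https://github.com/org/private-repo",   # placeholder repo
    config=config,                                 # assumed keyword; the patch further down settles the name
) as fd:
    print(fd.readline())
```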
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/api/data.py`
Content:
```
1 from contextlib import _GeneratorContextManager as GCM
2 from contextlib import contextmanager
3 from typing import Any, Dict, Optional
4
5 from funcy import reraise
6
7 from dvc.exceptions import FileMissingError, OutputNotFoundError, PathMissingError
8 from dvc.repo import Repo
9
10
11 @contextmanager
12 def _wrap_exceptions(repo, url):
13 from dvc.config import NoRemoteError
14 from dvc.exceptions import NoOutputInExternalRepoError, NoRemoteInExternalRepoError
15
16 try:
17 yield
18 except NoRemoteError as exc:
19 raise NoRemoteInExternalRepoError(url) from exc
20 except OutputNotFoundError as exc:
21 if exc.repo is repo:
22 raise NoOutputInExternalRepoError(exc.output, repo.root_dir, url) from exc
23 raise
24 except FileMissingError as exc:
25 raise PathMissingError(exc.path, url) from exc
26
27
28 def get_url(path, repo=None, rev=None, remote=None):
29 """
30 Returns the URL to the storage location of a data file or directory tracked
31 in a DVC repo. For Git repos, HEAD is used unless a rev argument is
32 supplied. The default remote is tried unless a remote argument is supplied.
33
34 Raises OutputNotFoundError if the file is not tracked by DVC.
35
36 NOTE: This function does not check for the actual existence of the file or
37 directory in the remote storage.
38 """
39 with Repo.open(repo, rev=rev, subrepos=True, uninitialized=True) as _repo:
40 with _wrap_exceptions(_repo, path):
41 fs_path = _repo.dvcfs.from_os_path(path)
42
43 with reraise(FileNotFoundError, PathMissingError(path, repo)):
44 info = _repo.dvcfs.info(fs_path)
45
46 dvc_info = info.get("dvc_info")
47 if not dvc_info:
48 raise OutputNotFoundError(path, repo)
49
50 dvc_repo = info["repo"] # pylint: disable=unsubscriptable-object
51 md5 = dvc_info["md5"]
52
53 return dvc_repo.cloud.get_url_for(remote, checksum=md5)
54
55
56 class _OpenContextManager(GCM):
57 def __init__(self, func, args, kwds): # pylint: disable=super-init-not-called
58 self.gen = func(*args, **kwds)
59 self.func, self.args, self.kwds = ( # type: ignore[assignment]
60 func,
61 args,
62 kwds,
63 )
64
65 def __getattr__(self, name):
66 raise AttributeError("dvc.api.open() should be used in a with statement.")
67
68
69 def open( # noqa, pylint: disable=redefined-builtin
70 path: str,
71 repo: Optional[str] = None,
72 rev: Optional[str] = None,
73 remote: Optional[str] = None,
74 mode: str = "r",
75 encoding: Optional[str] = None,
76 ):
77 """
78 Opens a file tracked in a DVC project.
79
80 This function may only be used as a context manager (using the `with`
81 keyword, as shown in the examples).
82
83 This function makes a direct connection to the remote storage, so the file
84 contents can be streamed. Your code can process the data buffer as it's
85 streamed, which optimizes memory usage.
86
87 Note:
88 Use dvc.api.read() to load the complete file contents
89 in a single function call, no context manager involved.
90 Neither function utilizes disc space.
91
92 Args:
93 path (str): location and file name of the target to open,
94 relative to the root of `repo`.
95 repo (str, optional): location of the DVC project or Git Repo.
96 Defaults to the current DVC project (found by walking up from the
97 current working directory tree).
98 It can be a URL or a file system path.
99 Both HTTP and SSH protocols are supported for online Git repos
100 (e.g. [user@]server:project.git).
101 rev (str, optional): Any `Git revision`_ such as a branch or tag name,
102 a commit hash or a dvc experiment name.
103 Defaults to HEAD.
104 If `repo` is not a Git repo, this option is ignored.
105 remote (str, optional): Name of the `DVC remote`_ used to form the
106 returned URL string.
107 Defaults to the `default remote`_ of `repo`.
108 For local projects, the cache is tried before the default remote.
109 mode (str, optional): Specifies the mode in which the file is opened.
110 Defaults to "r" (read).
111 Mirrors the namesake parameter in builtin `open()`_.
112 Only reading `mode` is supported.
113 encoding(str, optional): `Codec`_ used to decode the file contents.
114 Defaults to None.
115 This should only be used in text mode.
116 Mirrors the namesake parameter in builtin `open()`_.
117
118 Returns:
119 _OpenContextManager: A context manager that generatse a corresponding
120 `file object`_.
121 The exact type of file object depends on the mode used.
122 For more details, please refer to Python's `open()`_ built-in,
123 which is used under the hood.
124
125 Raises:
126 AttributeError: If this method is not used as a context manager.
127 ValueError: If non-read `mode` is used.
128
129 Examples:
130
131 - Use data or models from a DVC repository.
132
133 Any file tracked in a DVC project (and stored remotely) can be
134 processed directly in your Python code with this API.
135 For example, an XML file tracked in a public DVC repo on GitHub can be
136 processed like this:
137
138 >>> from xml.sax import parse
139 >>> import dvc.api
140 >>> from mymodule import mySAXHandler
141
142 >>> with dvc.api.open(
143 ... 'get-started/data.xml',
144 ... repo='https://github.com/iterative/dataset-registry'
145 ... ) as fd:
146 ... parse(fd, mySAXHandler)
147
148 We use a SAX XML parser here because dvc.api.open() is able to stream
149 the data from remote storage.
150 The mySAXHandler object should handle the event-driven parsing of the
151 document in this case.
152 This increases the performance of the code (minimizing memory usage),
153 and is typically faster than loading the whole data into memory.
154
155 - Accessing private repos
156
157 This is just a matter of using the right repo argument, for example an
158 SSH URL (requires that the credentials are configured locally):
159
160 >>> import dvc.api
161
162 >>> with dvc.api.open(
163 ... 'features.dat',
164          ...     repo='[email protected]:path/to/repo.git'
165 ... ) as fd:
166 ... # ... Process 'features'
167 ... pass
168
169 - Use different versions of data
170
171 Any git revision (see `rev`) can be accessed programmatically.
172 For example, if your DVC repo has tagged releases of a CSV dataset:
173
174 >>> import csv
175 >>> import dvc.api
176 >>> with dvc.api.open(
177 ... 'clean.csv',
178 ... rev='v1.1.0'
179 ... ) as fd:
180 ... reader = csv.reader(fd)
181 ... # ... Process 'clean' data from version 1.1.0
182
183 .. _Git revision:
184 https://git-scm.com/docs/revisions
185
186 .. _DVC remote:
187 https://dvc.org/doc/command-reference/remote
188
189 .. _default remote:
190 https://dvc.org/doc/command-reference/remote/default
191
192 .. _open():
193 https://docs.python.org/3/library/functions.html#open
194
195 .. _Codec:
196 https://docs.python.org/3/library/codecs.html#standard-encodings
197
198 .. _file object:
199 https://docs.python.org/3/glossary.html#term-file-object
200
201 """
202 if "r" not in mode:
203 raise ValueError("Only reading `mode` is supported.")
204
205 args = (path,)
206 kwargs = {
207 "repo": repo,
208 "remote": remote,
209 "rev": rev,
210 "mode": mode,
211 "encoding": encoding,
212 }
213 return _OpenContextManager(_open, args, kwargs)
214
215
216 def _open(path, repo=None, rev=None, remote=None, mode="r", encoding=None):
217 repo_kwargs: Dict[str, Any] = {"subrepos": True, "uninitialized": True}
218 if remote:
219 repo_kwargs["config"] = {"core": {"remote": remote}}
220
221 with Repo.open(repo, rev=rev, **repo_kwargs) as _repo:
222 with _wrap_exceptions(_repo, path):
223 import os
224 from typing import TYPE_CHECKING, Union
225
226 from dvc.exceptions import IsADirectoryError as DvcIsADirectoryError
227 from dvc.fs.data import DataFileSystem
228 from dvc.fs.dvc import DVCFileSystem
229
230 if TYPE_CHECKING:
231 from dvc.fs import FileSystem
232
233 fs: Union["FileSystem", DataFileSystem, DVCFileSystem]
234 if os.path.isabs(path):
235 fs = DataFileSystem(index=_repo.index.data["local"])
236 fs_path = path
237 else:
238 fs = DVCFileSystem(repo=_repo, subrepos=True)
239 fs_path = fs.from_os_path(path)
240
241 try:
242 with fs.open(
243 fs_path,
244 mode=mode,
245 encoding=encoding,
246 ) as fobj:
247 yield fobj
248 except FileNotFoundError as exc:
249 raise FileMissingError(path) from exc
250 except IsADirectoryError as exc:
251 raise DvcIsADirectoryError(f"'{path}' is a directory") from exc
252
253
254 def read(path, repo=None, rev=None, remote=None, mode="r", encoding=None):
255 """
256 Returns the contents of a tracked file (by DVC or Git). For Git repos, HEAD
257 is used unless a rev argument is supplied. The default remote is tried
258 unless a remote argument is supplied.
259 """
260 with open(
261 path, repo=repo, rev=rev, remote=remote, mode=mode, encoding=encoding
262 ) as fd:
263 return fd.read()
264
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dvc/api/data.py b/dvc/api/data.py
--- a/dvc/api/data.py
+++ b/dvc/api/data.py
@@ -73,6 +73,7 @@
remote: Optional[str] = None,
mode: str = "r",
encoding: Optional[str] = None,
+ config: Optional[Dict[str, Any]] = None,
):
"""
Opens a file tracked in a DVC project.
@@ -114,6 +115,8 @@
Defaults to None.
This should only be used in text mode.
Mirrors the namesake parameter in builtin `open()`_.
+ config(dict, optional): config to be passed to the DVC repository.
+ Defaults to None.
Returns:
_OpenContextManager: A context manager that generatse a corresponding
@@ -209,14 +212,24 @@
"rev": rev,
"mode": mode,
"encoding": encoding,
+ "config": config,
}
return _OpenContextManager(_open, args, kwargs)
-def _open(path, repo=None, rev=None, remote=None, mode="r", encoding=None):
- repo_kwargs: Dict[str, Any] = {"subrepos": True, "uninitialized": True}
+def _open(path, repo=None, rev=None, remote=None, mode="r", encoding=None, config=None):
if remote:
- repo_kwargs["config"] = {"core": {"remote": remote}}
+ if config is not None:
+ raise ValueError(
+ "can't specify both `remote` and `config` at the same time"
+ )
+ config = {"core": {"remote": remote}}
+
+ repo_kwargs: Dict[str, Any] = {
+ "subrepos": True,
+ "uninitialized": True,
+ "config": config,
+ }
with Repo.open(repo, rev=rev, **repo_kwargs) as _repo:
with _wrap_exceptions(_repo, path):
@@ -251,13 +264,19 @@
raise DvcIsADirectoryError(f"'{path}' is a directory") from exc
-def read(path, repo=None, rev=None, remote=None, mode="r", encoding=None):
+def read(path, repo=None, rev=None, remote=None, mode="r", encoding=None, config=None):
"""
Returns the contents of a tracked file (by DVC or Git). For Git repos, HEAD
is used unless a rev argument is supplied. The default remote is tried
unless a remote argument is supplied.
"""
with open(
- path, repo=repo, rev=rev, remote=remote, mode=mode, encoding=encoding
+ path,
+ repo=repo,
+ rev=rev,
+ remote=remote,
+ mode=mode,
+ encoding=encoding,
+ config=config,
) as fd:
return fd.read()
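
One behavioural detail of the patch above: `remote=` and `config=` are treated as mutually exclusive rather than merged. A minimal sketch of what a caller would see (repository URL and file path are placeholders; the error message is quoted from the diff):

```python
import dvc.api

try:
    dvc.api.read(
        "data/features.csv",                 # placeholder path
        repo="https://github.com/org/repo",  # placeholder repo
        remote="myremote",
        config={"core": {"remote": "other"}},
    )
except ValueError as exc:
    print(exc)  # can't specify both `remote` and `config` at the same time
```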
|
{"golden_diff": "diff --git a/dvc/api/data.py b/dvc/api/data.py\n--- a/dvc/api/data.py\n+++ b/dvc/api/data.py\n@@ -73,6 +73,7 @@\n remote: Optional[str] = None,\n mode: str = \"r\",\n encoding: Optional[str] = None,\n+ config: Optional[Dict[str, Any]] = None,\n ):\n \"\"\"\n Opens a file tracked in a DVC project.\n@@ -114,6 +115,8 @@\n Defaults to None.\n This should only be used in text mode.\n Mirrors the namesake parameter in builtin `open()`_.\n+ config(dict, optional): config to be passed to the DVC repository.\n+ Defaults to None.\n \n Returns:\n _OpenContextManager: A context manager that generatse a corresponding\n@@ -209,14 +212,24 @@\n \"rev\": rev,\n \"mode\": mode,\n \"encoding\": encoding,\n+ \"config\": config,\n }\n return _OpenContextManager(_open, args, kwargs)\n \n \n-def _open(path, repo=None, rev=None, remote=None, mode=\"r\", encoding=None):\n- repo_kwargs: Dict[str, Any] = {\"subrepos\": True, \"uninitialized\": True}\n+def _open(path, repo=None, rev=None, remote=None, mode=\"r\", encoding=None, config=None):\n if remote:\n- repo_kwargs[\"config\"] = {\"core\": {\"remote\": remote}}\n+ if config is not None:\n+ raise ValueError(\n+ \"can't specify both `remote` and `config` at the same time\"\n+ )\n+ config = {\"core\": {\"remote\": remote}}\n+\n+ repo_kwargs: Dict[str, Any] = {\n+ \"subrepos\": True,\n+ \"uninitialized\": True,\n+ \"config\": config,\n+ }\n \n with Repo.open(repo, rev=rev, **repo_kwargs) as _repo:\n with _wrap_exceptions(_repo, path):\n@@ -251,13 +264,19 @@\n raise DvcIsADirectoryError(f\"'{path}' is a directory\") from exc\n \n \n-def read(path, repo=None, rev=None, remote=None, mode=\"r\", encoding=None):\n+def read(path, repo=None, rev=None, remote=None, mode=\"r\", encoding=None, config=None):\n \"\"\"\n Returns the contents of a tracked file (by DVC or Git). For Git repos, HEAD\n is used unless a rev argument is supplied. The default remote is tried\n unless a remote argument is supplied.\n \"\"\"\n with open(\n- path, repo=repo, rev=rev, remote=remote, mode=mode, encoding=encoding\n+ path,\n+ repo=repo,\n+ rev=rev,\n+ remote=remote,\n+ mode=mode,\n+ encoding=encoding,\n+ config=config,\n ) as fd:\n return fd.read()\n", "issue": "api: support config/credentials passing\ndvcfs now supports config passing https://github.com/iterative/dvc/issues/9154 and we need to allow for the same with api methods.\r\n\r\nRelated https://github.com/iterative/dvc/issues/4336\n", "before_files": [{"content": "from contextlib import _GeneratorContextManager as GCM\nfrom contextlib import contextmanager\nfrom typing import Any, Dict, Optional\n\nfrom funcy import reraise\n\nfrom dvc.exceptions import FileMissingError, OutputNotFoundError, PathMissingError\nfrom dvc.repo import Repo\n\n\n@contextmanager\ndef _wrap_exceptions(repo, url):\n from dvc.config import NoRemoteError\n from dvc.exceptions import NoOutputInExternalRepoError, NoRemoteInExternalRepoError\n\n try:\n yield\n except NoRemoteError as exc:\n raise NoRemoteInExternalRepoError(url) from exc\n except OutputNotFoundError as exc:\n if exc.repo is repo:\n raise NoOutputInExternalRepoError(exc.output, repo.root_dir, url) from exc\n raise\n except FileMissingError as exc:\n raise PathMissingError(exc.path, url) from exc\n\n\ndef get_url(path, repo=None, rev=None, remote=None):\n \"\"\"\n Returns the URL to the storage location of a data file or directory tracked\n in a DVC repo. For Git repos, HEAD is used unless a rev argument is\n supplied. 
The default remote is tried unless a remote argument is supplied.\n\n Raises OutputNotFoundError if the file is not tracked by DVC.\n\n NOTE: This function does not check for the actual existence of the file or\n directory in the remote storage.\n \"\"\"\n with Repo.open(repo, rev=rev, subrepos=True, uninitialized=True) as _repo:\n with _wrap_exceptions(_repo, path):\n fs_path = _repo.dvcfs.from_os_path(path)\n\n with reraise(FileNotFoundError, PathMissingError(path, repo)):\n info = _repo.dvcfs.info(fs_path)\n\n dvc_info = info.get(\"dvc_info\")\n if not dvc_info:\n raise OutputNotFoundError(path, repo)\n\n dvc_repo = info[\"repo\"] # pylint: disable=unsubscriptable-object\n md5 = dvc_info[\"md5\"]\n\n return dvc_repo.cloud.get_url_for(remote, checksum=md5)\n\n\nclass _OpenContextManager(GCM):\n def __init__(self, func, args, kwds): # pylint: disable=super-init-not-called\n self.gen = func(*args, **kwds)\n self.func, self.args, self.kwds = ( # type: ignore[assignment]\n func,\n args,\n kwds,\n )\n\n def __getattr__(self, name):\n raise AttributeError(\"dvc.api.open() should be used in a with statement.\")\n\n\ndef open( # noqa, pylint: disable=redefined-builtin\n path: str,\n repo: Optional[str] = None,\n rev: Optional[str] = None,\n remote: Optional[str] = None,\n mode: str = \"r\",\n encoding: Optional[str] = None,\n):\n \"\"\"\n Opens a file tracked in a DVC project.\n\n This function may only be used as a context manager (using the `with`\n keyword, as shown in the examples).\n\n This function makes a direct connection to the remote storage, so the file\n contents can be streamed. Your code can process the data buffer as it's\n streamed, which optimizes memory usage.\n\n Note:\n Use dvc.api.read() to load the complete file contents\n in a single function call, no context manager involved.\n Neither function utilizes disc space.\n\n Args:\n path (str): location and file name of the target to open,\n relative to the root of `repo`.\n repo (str, optional): location of the DVC project or Git Repo.\n Defaults to the current DVC project (found by walking up from the\n current working directory tree).\n It can be a URL or a file system path.\n Both HTTP and SSH protocols are supported for online Git repos\n (e.g. 
[user@]server:project.git).\n rev (str, optional): Any `Git revision`_ such as a branch or tag name,\n a commit hash or a dvc experiment name.\n Defaults to HEAD.\n If `repo` is not a Git repo, this option is ignored.\n remote (str, optional): Name of the `DVC remote`_ used to form the\n returned URL string.\n Defaults to the `default remote`_ of `repo`.\n For local projects, the cache is tried before the default remote.\n mode (str, optional): Specifies the mode in which the file is opened.\n Defaults to \"r\" (read).\n Mirrors the namesake parameter in builtin `open()`_.\n Only reading `mode` is supported.\n encoding(str, optional): `Codec`_ used to decode the file contents.\n Defaults to None.\n This should only be used in text mode.\n Mirrors the namesake parameter in builtin `open()`_.\n\n Returns:\n _OpenContextManager: A context manager that generatse a corresponding\n `file object`_.\n The exact type of file object depends on the mode used.\n For more details, please refer to Python's `open()`_ built-in,\n which is used under the hood.\n\n Raises:\n AttributeError: If this method is not used as a context manager.\n ValueError: If non-read `mode` is used.\n\n Examples:\n\n - Use data or models from a DVC repository.\n\n Any file tracked in a DVC project (and stored remotely) can be\n processed directly in your Python code with this API.\n For example, an XML file tracked in a public DVC repo on GitHub can be\n processed like this:\n\n >>> from xml.sax import parse\n >>> import dvc.api\n >>> from mymodule import mySAXHandler\n\n >>> with dvc.api.open(\n ... 'get-started/data.xml',\n ... repo='https://github.com/iterative/dataset-registry'\n ... ) as fd:\n ... parse(fd, mySAXHandler)\n\n We use a SAX XML parser here because dvc.api.open() is able to stream\n the data from remote storage.\n The mySAXHandler object should handle the event-driven parsing of the\n document in this case.\n This increases the performance of the code (minimizing memory usage),\n and is typically faster than loading the whole data into memory.\n\n - Accessing private repos\n\n This is just a matter of using the right repo argument, for example an\n SSH URL (requires that the credentials are configured locally):\n\n >>> import dvc.api\n\n >>> with dvc.api.open(\n ... 'features.dat',\n ... repo='[email protected]:path/to/repo.git'\n ... ) as fd:\n ... # ... Process 'features'\n ... pass\n\n - Use different versions of data\n\n Any git revision (see `rev`) can be accessed programmatically.\n For example, if your DVC repo has tagged releases of a CSV dataset:\n\n >>> import csv\n >>> import dvc.api\n >>> with dvc.api.open(\n ... 'clean.csv',\n ... rev='v1.1.0'\n ... ) as fd:\n ... reader = csv.reader(fd)\n ... # ... Process 'clean' data from version 1.1.0\n\n .. _Git revision:\n https://git-scm.com/docs/revisions\n\n .. _DVC remote:\n https://dvc.org/doc/command-reference/remote\n\n .. _default remote:\n https://dvc.org/doc/command-reference/remote/default\n\n .. _open():\n https://docs.python.org/3/library/functions.html#open\n\n .. _Codec:\n https://docs.python.org/3/library/codecs.html#standard-encodings\n\n .. 
_file object:\n https://docs.python.org/3/glossary.html#term-file-object\n\n \"\"\"\n if \"r\" not in mode:\n raise ValueError(\"Only reading `mode` is supported.\")\n\n args = (path,)\n kwargs = {\n \"repo\": repo,\n \"remote\": remote,\n \"rev\": rev,\n \"mode\": mode,\n \"encoding\": encoding,\n }\n return _OpenContextManager(_open, args, kwargs)\n\n\ndef _open(path, repo=None, rev=None, remote=None, mode=\"r\", encoding=None):\n repo_kwargs: Dict[str, Any] = {\"subrepos\": True, \"uninitialized\": True}\n if remote:\n repo_kwargs[\"config\"] = {\"core\": {\"remote\": remote}}\n\n with Repo.open(repo, rev=rev, **repo_kwargs) as _repo:\n with _wrap_exceptions(_repo, path):\n import os\n from typing import TYPE_CHECKING, Union\n\n from dvc.exceptions import IsADirectoryError as DvcIsADirectoryError\n from dvc.fs.data import DataFileSystem\n from dvc.fs.dvc import DVCFileSystem\n\n if TYPE_CHECKING:\n from dvc.fs import FileSystem\n\n fs: Union[\"FileSystem\", DataFileSystem, DVCFileSystem]\n if os.path.isabs(path):\n fs = DataFileSystem(index=_repo.index.data[\"local\"])\n fs_path = path\n else:\n fs = DVCFileSystem(repo=_repo, subrepos=True)\n fs_path = fs.from_os_path(path)\n\n try:\n with fs.open(\n fs_path,\n mode=mode,\n encoding=encoding,\n ) as fobj:\n yield fobj\n except FileNotFoundError as exc:\n raise FileMissingError(path) from exc\n except IsADirectoryError as exc:\n raise DvcIsADirectoryError(f\"'{path}' is a directory\") from exc\n\n\ndef read(path, repo=None, rev=None, remote=None, mode=\"r\", encoding=None):\n \"\"\"\n Returns the contents of a tracked file (by DVC or Git). For Git repos, HEAD\n is used unless a rev argument is supplied. The default remote is tried\n unless a remote argument is supplied.\n \"\"\"\n with open(\n path, repo=repo, rev=rev, remote=remote, mode=mode, encoding=encoding\n ) as fd:\n return fd.read()\n", "path": "dvc/api/data.py"}], "after_files": [{"content": "from contextlib import _GeneratorContextManager as GCM\nfrom contextlib import contextmanager\nfrom typing import Any, Dict, Optional\n\nfrom funcy import reraise\n\nfrom dvc.exceptions import FileMissingError, OutputNotFoundError, PathMissingError\nfrom dvc.repo import Repo\n\n\n@contextmanager\ndef _wrap_exceptions(repo, url):\n from dvc.config import NoRemoteError\n from dvc.exceptions import NoOutputInExternalRepoError, NoRemoteInExternalRepoError\n\n try:\n yield\n except NoRemoteError as exc:\n raise NoRemoteInExternalRepoError(url) from exc\n except OutputNotFoundError as exc:\n if exc.repo is repo:\n raise NoOutputInExternalRepoError(exc.output, repo.root_dir, url) from exc\n raise\n except FileMissingError as exc:\n raise PathMissingError(exc.path, url) from exc\n\n\ndef get_url(path, repo=None, rev=None, remote=None):\n \"\"\"\n Returns the URL to the storage location of a data file or directory tracked\n in a DVC repo. For Git repos, HEAD is used unless a rev argument is\n supplied. 
The default remote is tried unless a remote argument is supplied.\n\n Raises OutputNotFoundError if the file is not tracked by DVC.\n\n NOTE: This function does not check for the actual existence of the file or\n directory in the remote storage.\n \"\"\"\n with Repo.open(repo, rev=rev, subrepos=True, uninitialized=True) as _repo:\n with _wrap_exceptions(_repo, path):\n fs_path = _repo.dvcfs.from_os_path(path)\n\n with reraise(FileNotFoundError, PathMissingError(path, repo)):\n info = _repo.dvcfs.info(fs_path)\n\n dvc_info = info.get(\"dvc_info\")\n if not dvc_info:\n raise OutputNotFoundError(path, repo)\n\n dvc_repo = info[\"repo\"] # pylint: disable=unsubscriptable-object\n md5 = dvc_info[\"md5\"]\n\n return dvc_repo.cloud.get_url_for(remote, checksum=md5)\n\n\nclass _OpenContextManager(GCM):\n def __init__(self, func, args, kwds): # pylint: disable=super-init-not-called\n self.gen = func(*args, **kwds)\n self.func, self.args, self.kwds = ( # type: ignore[assignment]\n func,\n args,\n kwds,\n )\n\n def __getattr__(self, name):\n raise AttributeError(\"dvc.api.open() should be used in a with statement.\")\n\n\ndef open( # noqa, pylint: disable=redefined-builtin\n path: str,\n repo: Optional[str] = None,\n rev: Optional[str] = None,\n remote: Optional[str] = None,\n mode: str = \"r\",\n encoding: Optional[str] = None,\n config: Optional[Dict[str, Any]] = None,\n):\n \"\"\"\n Opens a file tracked in a DVC project.\n\n This function may only be used as a context manager (using the `with`\n keyword, as shown in the examples).\n\n This function makes a direct connection to the remote storage, so the file\n contents can be streamed. Your code can process the data buffer as it's\n streamed, which optimizes memory usage.\n\n Note:\n Use dvc.api.read() to load the complete file contents\n in a single function call, no context manager involved.\n Neither function utilizes disc space.\n\n Args:\n path (str): location and file name of the target to open,\n relative to the root of `repo`.\n repo (str, optional): location of the DVC project or Git Repo.\n Defaults to the current DVC project (found by walking up from the\n current working directory tree).\n It can be a URL or a file system path.\n Both HTTP and SSH protocols are supported for online Git repos\n (e.g. 
[user@]server:project.git).\n rev (str, optional): Any `Git revision`_ such as a branch or tag name,\n a commit hash or a dvc experiment name.\n Defaults to HEAD.\n If `repo` is not a Git repo, this option is ignored.\n remote (str, optional): Name of the `DVC remote`_ used to form the\n returned URL string.\n Defaults to the `default remote`_ of `repo`.\n For local projects, the cache is tried before the default remote.\n mode (str, optional): Specifies the mode in which the file is opened.\n Defaults to \"r\" (read).\n Mirrors the namesake parameter in builtin `open()`_.\n Only reading `mode` is supported.\n encoding(str, optional): `Codec`_ used to decode the file contents.\n Defaults to None.\n This should only be used in text mode.\n Mirrors the namesake parameter in builtin `open()`_.\n config(dict, optional): config to be passed to the DVC repository.\n Defaults to None.\n\n Returns:\n _OpenContextManager: A context manager that generatse a corresponding\n `file object`_.\n The exact type of file object depends on the mode used.\n For more details, please refer to Python's `open()`_ built-in,\n which is used under the hood.\n\n Raises:\n AttributeError: If this method is not used as a context manager.\n ValueError: If non-read `mode` is used.\n\n Examples:\n\n - Use data or models from a DVC repository.\n\n Any file tracked in a DVC project (and stored remotely) can be\n processed directly in your Python code with this API.\n For example, an XML file tracked in a public DVC repo on GitHub can be\n processed like this:\n\n >>> from xml.sax import parse\n >>> import dvc.api\n >>> from mymodule import mySAXHandler\n\n >>> with dvc.api.open(\n ... 'get-started/data.xml',\n ... repo='https://github.com/iterative/dataset-registry'\n ... ) as fd:\n ... parse(fd, mySAXHandler)\n\n We use a SAX XML parser here because dvc.api.open() is able to stream\n the data from remote storage.\n The mySAXHandler object should handle the event-driven parsing of the\n document in this case.\n This increases the performance of the code (minimizing memory usage),\n and is typically faster than loading the whole data into memory.\n\n - Accessing private repos\n\n This is just a matter of using the right repo argument, for example an\n SSH URL (requires that the credentials are configured locally):\n\n >>> import dvc.api\n\n >>> with dvc.api.open(\n ... 'features.dat',\n ... repo='[email protected]:path/to/repo.git'\n ... ) as fd:\n ... # ... Process 'features'\n ... pass\n\n - Use different versions of data\n\n Any git revision (see `rev`) can be accessed programmatically.\n For example, if your DVC repo has tagged releases of a CSV dataset:\n\n >>> import csv\n >>> import dvc.api\n >>> with dvc.api.open(\n ... 'clean.csv',\n ... rev='v1.1.0'\n ... ) as fd:\n ... reader = csv.reader(fd)\n ... # ... Process 'clean' data from version 1.1.0\n\n .. _Git revision:\n https://git-scm.com/docs/revisions\n\n .. _DVC remote:\n https://dvc.org/doc/command-reference/remote\n\n .. _default remote:\n https://dvc.org/doc/command-reference/remote/default\n\n .. _open():\n https://docs.python.org/3/library/functions.html#open\n\n .. _Codec:\n https://docs.python.org/3/library/codecs.html#standard-encodings\n\n .. 
_file object:\n https://docs.python.org/3/glossary.html#term-file-object\n\n \"\"\"\n if \"r\" not in mode:\n raise ValueError(\"Only reading `mode` is supported.\")\n\n args = (path,)\n kwargs = {\n \"repo\": repo,\n \"remote\": remote,\n \"rev\": rev,\n \"mode\": mode,\n \"encoding\": encoding,\n \"config\": config,\n }\n return _OpenContextManager(_open, args, kwargs)\n\n\ndef _open(path, repo=None, rev=None, remote=None, mode=\"r\", encoding=None, config=None):\n if remote:\n if config is not None:\n raise ValueError(\n \"can't specify both `remote` and `config` at the same time\"\n )\n config = {\"core\": {\"remote\": remote}}\n\n repo_kwargs: Dict[str, Any] = {\n \"subrepos\": True,\n \"uninitialized\": True,\n \"config\": config,\n }\n\n with Repo.open(repo, rev=rev, **repo_kwargs) as _repo:\n with _wrap_exceptions(_repo, path):\n import os\n from typing import TYPE_CHECKING, Union\n\n from dvc.exceptions import IsADirectoryError as DvcIsADirectoryError\n from dvc.fs.data import DataFileSystem\n from dvc.fs.dvc import DVCFileSystem\n\n if TYPE_CHECKING:\n from dvc.fs import FileSystem\n\n fs: Union[\"FileSystem\", DataFileSystem, DVCFileSystem]\n if os.path.isabs(path):\n fs = DataFileSystem(index=_repo.index.data[\"local\"])\n fs_path = path\n else:\n fs = DVCFileSystem(repo=_repo, subrepos=True)\n fs_path = fs.from_os_path(path)\n\n try:\n with fs.open(\n fs_path,\n mode=mode,\n encoding=encoding,\n ) as fobj:\n yield fobj\n except FileNotFoundError as exc:\n raise FileMissingError(path) from exc\n except IsADirectoryError as exc:\n raise DvcIsADirectoryError(f\"'{path}' is a directory\") from exc\n\n\ndef read(path, repo=None, rev=None, remote=None, mode=\"r\", encoding=None, config=None):\n \"\"\"\n Returns the contents of a tracked file (by DVC or Git). For Git repos, HEAD\n is used unless a rev argument is supplied. The default remote is tried\n unless a remote argument is supplied.\n \"\"\"\n with open(\n path,\n repo=repo,\n rev=rev,\n remote=remote,\n mode=mode,\n encoding=encoding,\n config=config,\n ) as fd:\n return fd.read()\n", "path": "dvc/api/data.py"}]}
| 3,216 | 652 |
gh_patches_debug_7288
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-1378
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Merge changes from 0.3.8 into develop
This is blocking on #1313 and #1345.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/version.py`
Content:
```
1 __version__ = '0.3.7'
2
```
Path: `docs/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # SecureDrop documentation build configuration file, created by
4 # sphinx-quickstart on Tue Oct 13 12:08:52 2015.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import sys
16 import os
17 import shlex
18
19 # Detect if we're being built by Read the Docs
20 # https://docs.readthedocs.org/en/latest/faq.html#how-do-i-change-behavior-for-read-the-docs
21 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
22
23 # If extensions (or modules to document with autodoc) are in another directory,
24 # add these directories to sys.path here. If the directory is relative to the
25 # documentation root, use os.path.abspath to make it absolute, like shown here.
26 #sys.path.insert(0, os.path.abspath('.'))
27
28 # -- General configuration ------------------------------------------------
29
30 # If your documentation needs a minimal Sphinx version, state it here.
31 #needs_sphinx = '1.0'
32
33 # Add any Sphinx extension module names here, as strings. They can be
34 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
35 # ones.
36 extensions = ['sphinx.ext.todo', ]
37
38 # Add any paths that contain templates here, relative to this directory.
39 templates_path = ['_templates']
40
41 # The suffix(es) of source filenames.
42 # You can specify multiple suffix as a list of string:
43 # source_suffix = ['.rst', '.md']
44 source_suffix = '.rst'
45
46 # The encoding of source files.
47 #source_encoding = 'utf-8-sig'
48
49 # The master toctree document.
50 master_doc = 'index'
51
52 # General information about the project.
53 project = u'SecureDrop'
54 copyright = u'2015, Freedom of the Press Foundation'
55 author = u'SecureDrop Team and Contributors'
56
57 # The version info for the project you're documenting, acts as replacement for
58 # |version| and |release|, also used in various other places throughout the
59 # built documents.
60 #
61 # The short X.Y version.
62 version = '0.3.7'
63 # The full version, including alpha/beta/rc tags.
64 release = '0.3.7'
65
66 # The language for content autogenerated by Sphinx. Refer to documentation
67 # for a list of supported languages.
68 #
69 # This is also used if you do content translation via gettext catalogs.
70 # Usually you set "language" from the command line for these cases.
71 language = None
72
73 # There are two options for replacing |today|: either, you set today to some
74 # non-false value, then it is used:
75 #today = ''
76 # Else, today_fmt is used as the format for a strftime call.
77 #today_fmt = '%B %d, %Y'
78
79 # List of patterns, relative to source directory, that match files and
80 # directories to ignore when looking for source files.
81 exclude_patterns = ['_build']
82
83 # The reST default role (used for this markup: `text`) to use for all
84 # documents.
85 #default_role = None
86
87 # If true, '()' will be appended to :func: etc. cross-reference text.
88 #add_function_parentheses = True
89
90 # If true, the current module name will be prepended to all description
91 # unit titles (such as .. function::).
92 #add_module_names = True
93
94 # If true, sectionauthor and moduleauthor directives will be shown in the
95 # output. They are ignored by default.
96 #show_authors = False
97
98 # The name of the Pygments (syntax highlighting) style to use.
99 pygments_style = 'sphinx'
100
101 # A list of ignored prefixes for module index sorting.
102 #modindex_common_prefix = []
103
104 # If true, keep warnings as "system message" paragraphs in the built documents.
105 #keep_warnings = False
106
107 # If true, `todo` and `todoList` produce output, else they produce nothing.
108 todo_include_todos = False
109
110
111 # -- Options for HTML output ----------------------------------------------
112
113 # The theme to use for HTML and HTML Help pages. See the documentation for
114 # a list of builtin themes.
115 if on_rtd:
116 html_theme = 'default'
117 else:
118 try:
119 # If you want to build the docs locally using the RTD theme,
120 # you may need to install it: ``pip install sphinx_rtd_theme``.
121 # https://github.com/snide/sphinx_rtd_theme#via-package
122 import sphinx_rtd_theme
123 html_theme = "sphinx_rtd_theme"
124 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
125 except ImportError:
126 # This theme is included with Sphinx and is quite nice (based
127 # on the Pocoo themes), but since we're using the RTD theme
128 # for the production docs, it's best to use that to avoid
129 # issues due to discrepancies between the themes.
130 html_theme = 'alabaster'
131
132 # Theme options are theme-specific and customize the look and feel of a theme
133 # further. For a list of options available for each theme, see the
134 # documentation.
135 #html_theme_options = {}
136
137 # Add any paths that contain custom themes here, relative to this directory.
138 #html_theme_path = []
139
140 # The name for this set of Sphinx documents. If None, it defaults to
141 # "<project> v<release> documentation".
142 #html_title = None
143
144 # A shorter title for the navigation bar. Default is the same as html_title.
145 #html_short_title = None
146
147 # The name of an image file (relative to this directory) to place at the top
148 # of the sidebar.
149 #html_logo = None
150
151 # The name of an image file (within the static path) to use as favicon of the
152 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
153 # pixels large.
154 #html_favicon = None
155
156 # Add any paths that contain custom static files (such as style sheets) here,
157 # relative to this directory. They are copied after the builtin static files,
158 # so a file named "default.css" will overwrite the builtin "default.css".
159 html_static_path = ['_static']
160
161 # Add any extra paths that contain custom files (such as robots.txt or
162 # .htaccess) here, relative to this directory. These files are copied
163 # directly to the root of the documentation.
164 #html_extra_path = []
165
166 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
167 # using the given strftime format.
168 #html_last_updated_fmt = '%b %d, %Y'
169
170 # If true, SmartyPants will be used to convert quotes and dashes to
171 # typographically correct entities.
172 #html_use_smartypants = True
173
174 # Custom sidebar templates, maps document names to template names.
175 #html_sidebars = {}
176
177 # Additional templates that should be rendered to pages, maps page names to
178 # template names.
179 #html_additional_pages = {}
180
181 # If false, no module index is generated.
182 #html_domain_indices = True
183
184 # If false, no index is generated.
185 #html_use_index = True
186
187 # If true, the index is split into individual pages for each letter.
188 #html_split_index = False
189
190 # If true, links to the reST sources are added to the pages.
191 #html_show_sourcelink = True
192
193 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
194 #html_show_sphinx = True
195
196 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
197 #html_show_copyright = True
198
199 # If true, an OpenSearch description file will be output, and all pages will
200 # contain a <link> tag referring to it. The value of this option must be the
201 # base URL from which the finished HTML is served.
202 #html_use_opensearch = ''
203
204 # This is the file name suffix for HTML files (e.g. ".xhtml").
205 #html_file_suffix = None
206
207 # Language to be used for generating the HTML full-text search index.
208 # Sphinx supports the following languages:
209 # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
210 # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
211 #html_search_language = 'en'
212
213 # A dictionary with options for the search language support, empty by default.
214 # Now only 'ja' uses this config value
215 #html_search_options = {'type': 'default'}
216
217 # The name of a javascript file (relative to the configuration directory) that
218 # implements a search results scorer. If empty, the default will be used.
219 #html_search_scorer = 'scorer.js'
220
221 # Output file base name for HTML help builder.
222 htmlhelp_basename = 'SecureDropdoc'
223
224 # -- Options for LaTeX output ---------------------------------------------
225
226 latex_elements = {
227 # The paper size ('letterpaper' or 'a4paper').
228 #'papersize': 'letterpaper',
229
230 # The font size ('10pt', '11pt' or '12pt').
231 #'pointsize': '10pt',
232
233 # Additional stuff for the LaTeX preamble.
234 #'preamble': '',
235
236 # Latex figure (float) alignment
237 #'figure_align': 'htbp',
238 }
239
240 # Grouping the document tree into LaTeX files. List of tuples
241 # (source start file, target name, title,
242 # author, documentclass [howto, manual, or own class]).
243 latex_documents = [
244 (master_doc, 'SecureDrop.tex', u'SecureDrop Documentation',
245 author, 'manual'),
246 ]
247
248 # The name of an image file (relative to this directory) to place at the top of
249 # the title page.
250 #latex_logo = None
251
252 # For "manual" documents, if this is true, then toplevel headings are parts,
253 # not chapters.
254 #latex_use_parts = False
255
256 # If true, show page references after internal links.
257 #latex_show_pagerefs = False
258
259 # If true, show URL addresses after external links.
260 #latex_show_urls = False
261
262 # Documents to append as an appendix to all manuals.
263 #latex_appendices = []
264
265 # If false, no module index is generated.
266 #latex_domain_indices = True
267
268
269 # -- Options for manual page output ---------------------------------------
270
271 # One entry per manual page. List of tuples
272 # (source start file, name, description, authors, manual section).
273 man_pages = [
274 (master_doc, 'securedrop', u'SecureDrop Documentation',
275 [author], 1)
276 ]
277
278 # If true, show URL addresses after external links.
279 #man_show_urls = False
280
281
282 # -- Options for Texinfo output -------------------------------------------
283
284 # Grouping the document tree into Texinfo files. List of tuples
285 # (source start file, target name, title, author,
286 # dir menu entry, description, category)
287 texinfo_documents = [
288 (master_doc, 'SecureDrop', u'SecureDrop Documentation',
289 author, 'SecureDrop', 'One line description of project.',
290 'Miscellaneous'),
291 ]
292
293 # Documents to append as an appendix to all manuals.
294 #texinfo_appendices = []
295
296 # If false, no module index is generated.
297 #texinfo_domain_indices = True
298
299 # How to display URL addresses: 'footnote', 'no', or 'inline'.
300 #texinfo_show_urls = 'footnote'
301
302 # If true, do not generate a @detailmenu in the "Top" node's menu.
303 #texinfo_no_detailmenu = False
304
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -59,9 +59,9 @@
# built documents.
#
# The short X.Y version.
-version = '0.3.7'
+version = '0.3.8'
# The full version, including alpha/beta/rc tags.
-release = '0.3.7'
+release = '0.3.8'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
diff --git a/securedrop/version.py b/securedrop/version.py
--- a/securedrop/version.py
+++ b/securedrop/version.py
@@ -1 +1 @@
-__version__ = '0.3.7'
+__version__ = '0.3.8'
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -59,9 +59,9 @@\n # built documents.\n #\n # The short X.Y version.\n-version = '0.3.7'\n+version = '0.3.8'\n # The full version, including alpha/beta/rc tags.\n-release = '0.3.7'\n+release = '0.3.8'\n \n # The language for content autogenerated by Sphinx. Refer to documentation\n # for a list of supported languages.\ndiff --git a/securedrop/version.py b/securedrop/version.py\n--- a/securedrop/version.py\n+++ b/securedrop/version.py\n@@ -1 +1 @@\n-__version__ = '0.3.7'\n+__version__ = '0.3.8'\n", "issue": "Merge changes from 0.3.8 into develop\nThis is blocking on #1313 and #1345.\n\n", "before_files": [{"content": "__version__ = '0.3.7'\n", "path": "securedrop/version.py"}, {"content": "# -*- coding: utf-8 -*-\n#\n# SecureDrop documentation build configuration file, created by\n# sphinx-quickstart on Tue Oct 13 12:08:52 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\nimport shlex\n\n# Detect if we're being built by Read the Docs\n# https://docs.readthedocs.org/en/latest/faq.html#how-do-i-change-behavior-for-read-the-docs\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.todo', ]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'SecureDrop'\ncopyright = u'2015, Freedom of the Press Foundation'\nauthor = u'SecureDrop Team and Contributors'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.3.7'\n# The full version, including alpha/beta/rc tags.\nrelease = '0.3.7'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nif on_rtd:\n html_theme = 'default'\nelse:\n try:\n # If you want to build the docs locally using the RTD theme,\n # you may need to install it: ``pip install sphinx_rtd_theme``.\n # https://github.com/snide/sphinx_rtd_theme#via-package\n import sphinx_rtd_theme\n html_theme = \"sphinx_rtd_theme\"\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n except ImportError:\n # This theme is included with Sphinx and is quite nice (based\n # on the Pocoo themes), but since we're using the RTD theme\n # for the production docs, it's best to use that to avoid\n # issues due to discrepancies between the themes.\n html_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n#html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n#html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n#html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'SecureDropdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n\n# Latex figure (float) alignment\n#'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'SecureDrop.tex', u'SecureDrop Documentation',\n author, 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'securedrop', u'SecureDrop Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'SecureDrop', u'SecureDrop Documentation',\n author, 'SecureDrop', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n", "path": "docs/conf.py"}], "after_files": [{"content": "__version__ = '0.3.8'\n", "path": "securedrop/version.py"}, {"content": "# -*- coding: utf-8 -*-\n#\n# SecureDrop documentation build configuration file, created by\n# sphinx-quickstart on Tue Oct 13 12:08:52 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\nimport shlex\n\n# Detect if we're being built by Read the Docs\n# https://docs.readthedocs.org/en/latest/faq.html#how-do-i-change-behavior-for-read-the-docs\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.todo', ]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'SecureDrop'\ncopyright = u'2015, Freedom of the Press Foundation'\nauthor = u'SecureDrop Team and Contributors'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.3.8'\n# The full version, including alpha/beta/rc tags.\nrelease = '0.3.8'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nif on_rtd:\n html_theme = 'default'\nelse:\n try:\n # If you want to build the docs locally using the RTD theme,\n # you may need to install it: ``pip install sphinx_rtd_theme``.\n # https://github.com/snide/sphinx_rtd_theme#via-package\n import sphinx_rtd_theme\n html_theme = \"sphinx_rtd_theme\"\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n except ImportError:\n # This theme is included with Sphinx and is quite nice (based\n # on the Pocoo themes), but since we're using the RTD theme\n # for the production docs, it's best to use that to avoid\n # issues due to discrepancies between the themes.\n html_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. 
For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n#html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n#html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. 
If empty, the default will be used.\n#html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'SecureDropdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n\n# Latex figure (float) alignment\n#'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'SecureDrop.tex', u'SecureDrop Documentation',\n author, 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'securedrop', u'SecureDrop Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'SecureDrop', u'SecureDrop Documentation',\n author, 'SecureDrop', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n", "path": "docs/conf.py"}]}
| 3,670 | 182 |
gh_patches_debug_25732
|
rasdani/github-patches
|
git_diff
|
borgbackup__borg-524
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
borg binary: Exception when trying to find/load libc
I was reticent to mention this, since it's a very specific situation, but somebody in IRC suggested opening a bug. Anyhow, I'm attempting to use Borg via a bootable ISO based off of CentOS 6 using the Relax and Recover disaster recovery system.
When attempting to run either 0.28.2 or 0.29.0 from inside of the bootable recovery system I get the Traceback mentioned in the title. I ran a stack trace to see if I was missing a library, but I didn't see anything amiss off-hand. Also running ldd against borg doesn't seem to turn up any missing items.
I don't know much about Python, so I'm not sure what you need in order to assist, but here's the last couple of lines from the Traceback:
```
File "<string>", line 121, in __init__
File "/home/vagrant/.pyenv/versions/3.5.1/lib/python3.5/posixpath.py", line 139, in basename
AttributeError: 'NoneType' object has no attribute 'rfind'
__main__ returned -1
```
More Traceback info can be found here:
[borg_0.29.0_traceback.txt](https://github.com/borgbackup/borg/files/63456/borg_0.29.0_traceback.txt)
Before I found Borg, I was using this setup with Attic and that binary seemed to work okay.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `borg/xattr.py`
Content:
```
1 """A basic extended attributes (xattr) implementation for Linux and MacOS X
2 """
3 import errno
4 import os
5 import sys
6 import tempfile
7 from ctypes import CDLL, create_string_buffer, c_ssize_t, c_size_t, c_char_p, c_int, c_uint32, get_errno
8 from ctypes.util import find_library
9
10
11 def is_enabled(path=None):
12 """Determine if xattr is enabled on the filesystem
13 """
14 with tempfile.NamedTemporaryFile(dir=path, prefix='borg-tmp') as fd:
15 try:
16 setxattr(fd.fileno(), 'user.name', b'value')
17 except OSError:
18 return False
19 return getxattr(fd.fileno(), 'user.name') == b'value'
20
21
22 def get_all(path, follow_symlinks=True):
23 try:
24 return dict((name, getxattr(path, name, follow_symlinks=follow_symlinks))
25 for name in listxattr(path, follow_symlinks=follow_symlinks))
26 except OSError as e:
27 if e.errno in (errno.ENOTSUP, errno.EPERM):
28 return {}
29
30
31 libc = CDLL(find_library('c'), use_errno=True)
32
33
34 def _check(rv, path=None):
35 if rv < 0:
36 raise OSError(get_errno(), path)
37 return rv
38
39 if sys.platform.startswith('linux'): # pragma: linux only
40 libc.llistxattr.argtypes = (c_char_p, c_char_p, c_size_t)
41 libc.llistxattr.restype = c_ssize_t
42 libc.flistxattr.argtypes = (c_int, c_char_p, c_size_t)
43 libc.flistxattr.restype = c_ssize_t
44 libc.lsetxattr.argtypes = (c_char_p, c_char_p, c_char_p, c_size_t, c_int)
45 libc.lsetxattr.restype = c_int
46 libc.fsetxattr.argtypes = (c_int, c_char_p, c_char_p, c_size_t, c_int)
47 libc.fsetxattr.restype = c_int
48 libc.lgetxattr.argtypes = (c_char_p, c_char_p, c_char_p, c_size_t)
49 libc.lgetxattr.restype = c_ssize_t
50 libc.fgetxattr.argtypes = (c_int, c_char_p, c_char_p, c_size_t)
51 libc.fgetxattr.restype = c_ssize_t
52
53 def listxattr(path, *, follow_symlinks=True):
54 if isinstance(path, str):
55 path = os.fsencode(path)
56 if isinstance(path, int):
57 func = libc.flistxattr
58 elif follow_symlinks:
59 func = libc.listxattr
60 else:
61 func = libc.llistxattr
62 n = _check(func(path, None, 0), path)
63 if n == 0:
64 return []
65 namebuf = create_string_buffer(n)
66 n2 = _check(func(path, namebuf, n), path)
67 if n2 != n:
68 raise Exception('listxattr failed')
69 return [os.fsdecode(name) for name in namebuf.raw.split(b'\0')[:-1] if not name.startswith(b'system.posix_acl_')]
70
71 def getxattr(path, name, *, follow_symlinks=True):
72 name = os.fsencode(name)
73 if isinstance(path, str):
74 path = os.fsencode(path)
75 if isinstance(path, int):
76 func = libc.fgetxattr
77 elif follow_symlinks:
78 func = libc.getxattr
79 else:
80 func = libc.lgetxattr
81 n = _check(func(path, name, None, 0))
82 if n == 0:
83 return
84 valuebuf = create_string_buffer(n)
85 n2 = _check(func(path, name, valuebuf, n), path)
86 if n2 != n:
87 raise Exception('getxattr failed')
88 return valuebuf.raw
89
90 def setxattr(path, name, value, *, follow_symlinks=True):
91 name = os.fsencode(name)
92 value = value and os.fsencode(value)
93 if isinstance(path, str):
94 path = os.fsencode(path)
95 if isinstance(path, int):
96 func = libc.fsetxattr
97 elif follow_symlinks:
98 func = libc.setxattr
99 else:
100 func = libc.lsetxattr
101 _check(func(path, name, value, len(value) if value else 0, 0), path)
102
103 elif sys.platform == 'darwin': # pragma: darwin only
104 libc.listxattr.argtypes = (c_char_p, c_char_p, c_size_t, c_int)
105 libc.listxattr.restype = c_ssize_t
106 libc.flistxattr.argtypes = (c_int, c_char_p, c_size_t)
107 libc.flistxattr.restype = c_ssize_t
108 libc.setxattr.argtypes = (c_char_p, c_char_p, c_char_p, c_size_t, c_uint32, c_int)
109 libc.setxattr.restype = c_int
110 libc.fsetxattr.argtypes = (c_int, c_char_p, c_char_p, c_size_t, c_uint32, c_int)
111 libc.fsetxattr.restype = c_int
112 libc.getxattr.argtypes = (c_char_p, c_char_p, c_char_p, c_size_t, c_uint32, c_int)
113 libc.getxattr.restype = c_ssize_t
114 libc.fgetxattr.argtypes = (c_int, c_char_p, c_char_p, c_size_t, c_uint32, c_int)
115 libc.fgetxattr.restype = c_ssize_t
116
117 XATTR_NOFOLLOW = 0x0001
118
119 def listxattr(path, *, follow_symlinks=True):
120 func = libc.listxattr
121 flags = 0
122 if isinstance(path, str):
123 path = os.fsencode(path)
124 if isinstance(path, int):
125 func = libc.flistxattr
126 elif not follow_symlinks:
127 flags = XATTR_NOFOLLOW
128 n = _check(func(path, None, 0, flags), path)
129 if n == 0:
130 return []
131 namebuf = create_string_buffer(n)
132 n2 = _check(func(path, namebuf, n, flags), path)
133 if n2 != n:
134 raise Exception('listxattr failed')
135 return [os.fsdecode(name) for name in namebuf.raw.split(b'\0')[:-1]]
136
137 def getxattr(path, name, *, follow_symlinks=True):
138 name = os.fsencode(name)
139 func = libc.getxattr
140 flags = 0
141 if isinstance(path, str):
142 path = os.fsencode(path)
143 if isinstance(path, int):
144 func = libc.fgetxattr
145 elif not follow_symlinks:
146 flags = XATTR_NOFOLLOW
147 n = _check(func(path, name, None, 0, 0, flags))
148 if n == 0:
149 return
150 valuebuf = create_string_buffer(n)
151 n2 = _check(func(path, name, valuebuf, n, 0, flags), path)
152 if n2 != n:
153 raise Exception('getxattr failed')
154 return valuebuf.raw
155
156 def setxattr(path, name, value, *, follow_symlinks=True):
157 name = os.fsencode(name)
158 value = value and os.fsencode(value)
159 func = libc.setxattr
160 flags = 0
161 if isinstance(path, str):
162 path = os.fsencode(path)
163 if isinstance(path, int):
164 func = libc.fsetxattr
165 elif not follow_symlinks:
166 flags = XATTR_NOFOLLOW
167 _check(func(path, name, value, len(value) if value else 0, 0, flags), path)
168
169 elif sys.platform.startswith('freebsd'): # pragma: freebsd only
170 EXTATTR_NAMESPACE_USER = 0x0001
171 libc.extattr_list_fd.argtypes = (c_int, c_int, c_char_p, c_size_t)
172 libc.extattr_list_fd.restype = c_ssize_t
173 libc.extattr_list_link.argtypes = (c_char_p, c_int, c_char_p, c_size_t)
174 libc.extattr_list_link.restype = c_ssize_t
175 libc.extattr_list_file.argtypes = (c_char_p, c_int, c_char_p, c_size_t)
176 libc.extattr_list_file.restype = c_ssize_t
177 libc.extattr_get_fd.argtypes = (c_int, c_int, c_char_p, c_char_p, c_size_t)
178 libc.extattr_get_fd.restype = c_ssize_t
179 libc.extattr_get_link.argtypes = (c_char_p, c_int, c_char_p, c_char_p, c_size_t)
180 libc.extattr_get_link.restype = c_ssize_t
181 libc.extattr_get_file.argtypes = (c_char_p, c_int, c_char_p, c_char_p, c_size_t)
182 libc.extattr_get_file.restype = c_ssize_t
183 libc.extattr_set_fd.argtypes = (c_int, c_int, c_char_p, c_char_p, c_size_t)
184 libc.extattr_set_fd.restype = c_int
185 libc.extattr_set_link.argtypes = (c_char_p, c_int, c_char_p, c_char_p, c_size_t)
186 libc.extattr_set_link.restype = c_int
187 libc.extattr_set_file.argtypes = (c_char_p, c_int, c_char_p, c_char_p, c_size_t)
188 libc.extattr_set_file.restype = c_int
189
190 def listxattr(path, *, follow_symlinks=True):
191 ns = EXTATTR_NAMESPACE_USER
192 if isinstance(path, str):
193 path = os.fsencode(path)
194 if isinstance(path, int):
195 func = libc.extattr_list_fd
196 elif follow_symlinks:
197 func = libc.extattr_list_file
198 else:
199 func = libc.extattr_list_link
200 n = _check(func(path, ns, None, 0), path)
201 if n == 0:
202 return []
203 namebuf = create_string_buffer(n)
204 n2 = _check(func(path, ns, namebuf, n), path)
205 if n2 != n:
206 raise Exception('listxattr failed')
207 names = []
208 mv = memoryview(namebuf.raw)
209 while mv:
210 length = mv[0]
211 # Python < 3.3 returns bytes instead of int
212 if isinstance(length, bytes):
213 length = ord(length)
214 names.append(os.fsdecode(bytes(mv[1:1+length])))
215 mv = mv[1+length:]
216 return names
217
218 def getxattr(path, name, *, follow_symlinks=True):
219 name = os.fsencode(name)
220 if isinstance(path, str):
221 path = os.fsencode(path)
222 if isinstance(path, int):
223 func = libc.extattr_get_fd
224 elif follow_symlinks:
225 func = libc.extattr_get_file
226 else:
227 func = libc.extattr_get_link
228 n = _check(func(path, EXTATTR_NAMESPACE_USER, name, None, 0))
229 if n == 0:
230 return
231 valuebuf = create_string_buffer(n)
232 n2 = _check(func(path, EXTATTR_NAMESPACE_USER, name, valuebuf, n), path)
233 if n2 != n:
234 raise Exception('getxattr failed')
235 return valuebuf.raw
236
237 def setxattr(path, name, value, *, follow_symlinks=True):
238 name = os.fsencode(name)
239 value = value and os.fsencode(value)
240 if isinstance(path, str):
241 path = os.fsencode(path)
242 if isinstance(path, int):
243 func = libc.extattr_set_fd
244 elif follow_symlinks:
245 func = libc.extattr_set_file
246 else:
247 func = libc.extattr_set_link
248 _check(func(path, EXTATTR_NAMESPACE_USER, name, value, len(value) if value else 0), path)
249
250 else: # pragma: unknown platform only
251 def listxattr(path, *, follow_symlinks=True):
252 return []
253
254 def getxattr(path, name, *, follow_symlinks=True):
255 pass
256
257 def setxattr(path, name, value, *, follow_symlinks=True):
258 pass
259
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/borg/xattr.py b/borg/xattr.py
--- a/borg/xattr.py
+++ b/borg/xattr.py
@@ -7,6 +7,9 @@
from ctypes import CDLL, create_string_buffer, c_ssize_t, c_size_t, c_char_p, c_int, c_uint32, get_errno
from ctypes.util import find_library
+from .logger import create_logger
+logger = create_logger()
+
def is_enabled(path=None):
"""Determine if xattr is enabled on the filesystem
@@ -27,8 +30,28 @@
if e.errno in (errno.ENOTSUP, errno.EPERM):
return {}
+libc_name = find_library('c')
+if libc_name is None:
+ # find_library didn't work, maybe we are on some minimal system that misses essential
+ # tools used by find_library, like ldconfig, gcc/cc, objdump.
+ # so we can only try some "usual" names for the C library:
+ if sys.platform.startswith('linux'):
+ libc_name = 'libc.so.6'
+ elif sys.platform.startswith(('freebsd', 'netbsd')):
+ libc_name = 'libc.so'
+ elif sys.platform == 'darwin':
+ libc_name = 'libc.dylib'
+ else:
+ msg = "Can't find C library. No fallback known. Try installing ldconfig, gcc/cc or objdump."
+ logger.error(msg)
+ raise Exception(msg)
-libc = CDLL(find_library('c'), use_errno=True)
+try:
+ libc = CDLL(libc_name, use_errno=True)
+except OSError as e:
+ msg = "Can't find C library [%s]. Try installing ldconfig, gcc/cc or objdump." % e
+ logger.error(msg)
+ raise Exception(msg)
def _check(rv, path=None):
|
{"golden_diff": "diff --git a/borg/xattr.py b/borg/xattr.py\n--- a/borg/xattr.py\n+++ b/borg/xattr.py\n@@ -7,6 +7,9 @@\n from ctypes import CDLL, create_string_buffer, c_ssize_t, c_size_t, c_char_p, c_int, c_uint32, get_errno\n from ctypes.util import find_library\n \n+from .logger import create_logger\n+logger = create_logger()\n+\n \n def is_enabled(path=None):\n \"\"\"Determine if xattr is enabled on the filesystem\n@@ -27,8 +30,28 @@\n if e.errno in (errno.ENOTSUP, errno.EPERM):\n return {}\n \n+libc_name = find_library('c')\n+if libc_name is None:\n+ # find_library didn't work, maybe we are on some minimal system that misses essential\n+ # tools used by find_library, like ldconfig, gcc/cc, objdump.\n+ # so we can only try some \"usual\" names for the C library:\n+ if sys.platform.startswith('linux'):\n+ libc_name = 'libc.so.6'\n+ elif sys.platform.startswith(('freebsd', 'netbsd')):\n+ libc_name = 'libc.so'\n+ elif sys.platform == 'darwin':\n+ libc_name = 'libc.dylib'\n+ else:\n+ msg = \"Can't find C library. No fallback known. Try installing ldconfig, gcc/cc or objdump.\"\n+ logger.error(msg)\n+ raise Exception(msg)\n \n-libc = CDLL(find_library('c'), use_errno=True)\n+try:\n+ libc = CDLL(libc_name, use_errno=True)\n+except OSError as e:\n+ msg = \"Can't find C library [%s]. Try installing ldconfig, gcc/cc or objdump.\" % e\n+ logger.error(msg)\n+ raise Exception(msg)\n \n \n def _check(rv, path=None):\n", "issue": "borg binary: Exception when trying to find/load libc\nI was reticent to mention this, since it's a very specific situation, but somebody in IRC suggested opening a bug. Anyhow, I'm attempting to use Borg via a bootable ISO based off of CentOS 6 using the Relax and Recover disaster recovery system.\n\nWhen attempting to run either 0.28.2 or 0.29.0 from inside of the bootable recovery system I get the Traceback mentioned in the title. I ran a stack trace to see if I was missing a library, but I didn't see anything amiss off-hand. 
Also running ldd against borg doesn't seem to turn up any missing items.\n\nI don't know much about Python, so I'm not sure what you need in order to assist, but here's the last couple of lines from the Traceback:\n\n```\nFile \"<string>\", line 121, in __init__\nFile \"/home/vagrant/.pyenv/versions/3.5.1/lib/python3.5/posixpath.py\", line 139, in basename\nAttributeError: 'NoneType' object has no attribute 'rfind'\n__main__ returned -1\n```\n\nMore Traceback info can be found here:\n[borg_0.29.0_traceback.txt](https://github.com/borgbackup/borg/files/63456/borg_0.29.0_traceback.txt)\n\nBefore I found Borg, I was using this setup with Attic and that binary seemed to work okay.\n\n", "before_files": [{"content": "\"\"\"A basic extended attributes (xattr) implementation for Linux and MacOS X\n\"\"\"\nimport errno\nimport os\nimport sys\nimport tempfile\nfrom ctypes import CDLL, create_string_buffer, c_ssize_t, c_size_t, c_char_p, c_int, c_uint32, get_errno\nfrom ctypes.util import find_library\n\n\ndef is_enabled(path=None):\n \"\"\"Determine if xattr is enabled on the filesystem\n \"\"\"\n with tempfile.NamedTemporaryFile(dir=path, prefix='borg-tmp') as fd:\n try:\n setxattr(fd.fileno(), 'user.name', b'value')\n except OSError:\n return False\n return getxattr(fd.fileno(), 'user.name') == b'value'\n\n\ndef get_all(path, follow_symlinks=True):\n try:\n return dict((name, getxattr(path, name, follow_symlinks=follow_symlinks))\n for name in listxattr(path, follow_symlinks=follow_symlinks))\n except OSError as e:\n if e.errno in (errno.ENOTSUP, errno.EPERM):\n return {}\n\n\nlibc = CDLL(find_library('c'), use_errno=True)\n\n\ndef _check(rv, path=None):\n if rv < 0:\n raise OSError(get_errno(), path)\n return rv\n\nif sys.platform.startswith('linux'): # pragma: linux only\n libc.llistxattr.argtypes = (c_char_p, c_char_p, c_size_t)\n libc.llistxattr.restype = c_ssize_t\n libc.flistxattr.argtypes = (c_int, c_char_p, c_size_t)\n libc.flistxattr.restype = c_ssize_t\n libc.lsetxattr.argtypes = (c_char_p, c_char_p, c_char_p, c_size_t, c_int)\n libc.lsetxattr.restype = c_int\n libc.fsetxattr.argtypes = (c_int, c_char_p, c_char_p, c_size_t, c_int)\n libc.fsetxattr.restype = c_int\n libc.lgetxattr.argtypes = (c_char_p, c_char_p, c_char_p, c_size_t)\n libc.lgetxattr.restype = c_ssize_t\n libc.fgetxattr.argtypes = (c_int, c_char_p, c_char_p, c_size_t)\n libc.fgetxattr.restype = c_ssize_t\n\n def listxattr(path, *, follow_symlinks=True):\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.flistxattr\n elif follow_symlinks:\n func = libc.listxattr\n else:\n func = libc.llistxattr\n n = _check(func(path, None, 0), path)\n if n == 0:\n return []\n namebuf = create_string_buffer(n)\n n2 = _check(func(path, namebuf, n), path)\n if n2 != n:\n raise Exception('listxattr failed')\n return [os.fsdecode(name) for name in namebuf.raw.split(b'\\0')[:-1] if not name.startswith(b'system.posix_acl_')]\n\n def getxattr(path, name, *, follow_symlinks=True):\n name = os.fsencode(name)\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.fgetxattr\n elif follow_symlinks:\n func = libc.getxattr\n else:\n func = libc.lgetxattr\n n = _check(func(path, name, None, 0))\n if n == 0:\n return\n valuebuf = create_string_buffer(n)\n n2 = _check(func(path, name, valuebuf, n), path)\n if n2 != n:\n raise Exception('getxattr failed')\n return valuebuf.raw\n\n def setxattr(path, name, value, *, follow_symlinks=True):\n name = os.fsencode(name)\n value 
= value and os.fsencode(value)\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.fsetxattr\n elif follow_symlinks:\n func = libc.setxattr\n else:\n func = libc.lsetxattr\n _check(func(path, name, value, len(value) if value else 0, 0), path)\n\nelif sys.platform == 'darwin': # pragma: darwin only\n libc.listxattr.argtypes = (c_char_p, c_char_p, c_size_t, c_int)\n libc.listxattr.restype = c_ssize_t\n libc.flistxattr.argtypes = (c_int, c_char_p, c_size_t)\n libc.flistxattr.restype = c_ssize_t\n libc.setxattr.argtypes = (c_char_p, c_char_p, c_char_p, c_size_t, c_uint32, c_int)\n libc.setxattr.restype = c_int\n libc.fsetxattr.argtypes = (c_int, c_char_p, c_char_p, c_size_t, c_uint32, c_int)\n libc.fsetxattr.restype = c_int\n libc.getxattr.argtypes = (c_char_p, c_char_p, c_char_p, c_size_t, c_uint32, c_int)\n libc.getxattr.restype = c_ssize_t\n libc.fgetxattr.argtypes = (c_int, c_char_p, c_char_p, c_size_t, c_uint32, c_int)\n libc.fgetxattr.restype = c_ssize_t\n\n XATTR_NOFOLLOW = 0x0001\n\n def listxattr(path, *, follow_symlinks=True):\n func = libc.listxattr\n flags = 0\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.flistxattr\n elif not follow_symlinks:\n flags = XATTR_NOFOLLOW\n n = _check(func(path, None, 0, flags), path)\n if n == 0:\n return []\n namebuf = create_string_buffer(n)\n n2 = _check(func(path, namebuf, n, flags), path)\n if n2 != n:\n raise Exception('listxattr failed')\n return [os.fsdecode(name) for name in namebuf.raw.split(b'\\0')[:-1]]\n\n def getxattr(path, name, *, follow_symlinks=True):\n name = os.fsencode(name)\n func = libc.getxattr\n flags = 0\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.fgetxattr\n elif not follow_symlinks:\n flags = XATTR_NOFOLLOW\n n = _check(func(path, name, None, 0, 0, flags))\n if n == 0:\n return\n valuebuf = create_string_buffer(n)\n n2 = _check(func(path, name, valuebuf, n, 0, flags), path)\n if n2 != n:\n raise Exception('getxattr failed')\n return valuebuf.raw\n\n def setxattr(path, name, value, *, follow_symlinks=True):\n name = os.fsencode(name)\n value = value and os.fsencode(value)\n func = libc.setxattr\n flags = 0\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.fsetxattr\n elif not follow_symlinks:\n flags = XATTR_NOFOLLOW\n _check(func(path, name, value, len(value) if value else 0, 0, flags), path)\n\nelif sys.platform.startswith('freebsd'): # pragma: freebsd only\n EXTATTR_NAMESPACE_USER = 0x0001\n libc.extattr_list_fd.argtypes = (c_int, c_int, c_char_p, c_size_t)\n libc.extattr_list_fd.restype = c_ssize_t\n libc.extattr_list_link.argtypes = (c_char_p, c_int, c_char_p, c_size_t)\n libc.extattr_list_link.restype = c_ssize_t\n libc.extattr_list_file.argtypes = (c_char_p, c_int, c_char_p, c_size_t)\n libc.extattr_list_file.restype = c_ssize_t\n libc.extattr_get_fd.argtypes = (c_int, c_int, c_char_p, c_char_p, c_size_t)\n libc.extattr_get_fd.restype = c_ssize_t\n libc.extattr_get_link.argtypes = (c_char_p, c_int, c_char_p, c_char_p, c_size_t)\n libc.extattr_get_link.restype = c_ssize_t\n libc.extattr_get_file.argtypes = (c_char_p, c_int, c_char_p, c_char_p, c_size_t)\n libc.extattr_get_file.restype = c_ssize_t\n libc.extattr_set_fd.argtypes = (c_int, c_int, c_char_p, c_char_p, c_size_t)\n libc.extattr_set_fd.restype = c_int\n libc.extattr_set_link.argtypes = (c_char_p, c_int, c_char_p, c_char_p, c_size_t)\n libc.extattr_set_link.restype = 
c_int\n libc.extattr_set_file.argtypes = (c_char_p, c_int, c_char_p, c_char_p, c_size_t)\n libc.extattr_set_file.restype = c_int\n\n def listxattr(path, *, follow_symlinks=True):\n ns = EXTATTR_NAMESPACE_USER\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.extattr_list_fd\n elif follow_symlinks:\n func = libc.extattr_list_file\n else:\n func = libc.extattr_list_link\n n = _check(func(path, ns, None, 0), path)\n if n == 0:\n return []\n namebuf = create_string_buffer(n)\n n2 = _check(func(path, ns, namebuf, n), path)\n if n2 != n:\n raise Exception('listxattr failed')\n names = []\n mv = memoryview(namebuf.raw)\n while mv:\n length = mv[0]\n # Python < 3.3 returns bytes instead of int\n if isinstance(length, bytes):\n length = ord(length)\n names.append(os.fsdecode(bytes(mv[1:1+length])))\n mv = mv[1+length:]\n return names\n\n def getxattr(path, name, *, follow_symlinks=True):\n name = os.fsencode(name)\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.extattr_get_fd\n elif follow_symlinks:\n func = libc.extattr_get_file\n else:\n func = libc.extattr_get_link\n n = _check(func(path, EXTATTR_NAMESPACE_USER, name, None, 0))\n if n == 0:\n return\n valuebuf = create_string_buffer(n)\n n2 = _check(func(path, EXTATTR_NAMESPACE_USER, name, valuebuf, n), path)\n if n2 != n:\n raise Exception('getxattr failed')\n return valuebuf.raw\n\n def setxattr(path, name, value, *, follow_symlinks=True):\n name = os.fsencode(name)\n value = value and os.fsencode(value)\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.extattr_set_fd\n elif follow_symlinks:\n func = libc.extattr_set_file\n else:\n func = libc.extattr_set_link\n _check(func(path, EXTATTR_NAMESPACE_USER, name, value, len(value) if value else 0), path)\n\nelse: # pragma: unknown platform only\n def listxattr(path, *, follow_symlinks=True):\n return []\n\n def getxattr(path, name, *, follow_symlinks=True):\n pass\n\n def setxattr(path, name, value, *, follow_symlinks=True):\n pass\n", "path": "borg/xattr.py"}], "after_files": [{"content": "\"\"\"A basic extended attributes (xattr) implementation for Linux and MacOS X\n\"\"\"\nimport errno\nimport os\nimport sys\nimport tempfile\nfrom ctypes import CDLL, create_string_buffer, c_ssize_t, c_size_t, c_char_p, c_int, c_uint32, get_errno\nfrom ctypes.util import find_library\n\nfrom .logger import create_logger\nlogger = create_logger()\n\n\ndef is_enabled(path=None):\n \"\"\"Determine if xattr is enabled on the filesystem\n \"\"\"\n with tempfile.NamedTemporaryFile(dir=path, prefix='borg-tmp') as fd:\n try:\n setxattr(fd.fileno(), 'user.name', b'value')\n except OSError:\n return False\n return getxattr(fd.fileno(), 'user.name') == b'value'\n\n\ndef get_all(path, follow_symlinks=True):\n try:\n return dict((name, getxattr(path, name, follow_symlinks=follow_symlinks))\n for name in listxattr(path, follow_symlinks=follow_symlinks))\n except OSError as e:\n if e.errno in (errno.ENOTSUP, errno.EPERM):\n return {}\n\nlibc_name = find_library('c')\nif libc_name is None:\n # find_library didn't work, maybe we are on some minimal system that misses essential\n # tools used by find_library, like ldconfig, gcc/cc, objdump.\n # so we can only try some \"usual\" names for the C library:\n if sys.platform.startswith('linux'):\n libc_name = 'libc.so.6'\n elif sys.platform.startswith(('freebsd', 'netbsd')):\n libc_name = 'libc.so'\n elif sys.platform == 'darwin':\n libc_name 
= 'libc.dylib'\n else:\n msg = \"Can't find C library. No fallback known. Try installing ldconfig, gcc/cc or objdump.\"\n logger.error(msg)\n raise Exception(msg)\n\ntry:\n libc = CDLL(libc_name, use_errno=True)\nexcept OSError as e:\n msg = \"Can't find C library [%s]. Try installing ldconfig, gcc/cc or objdump.\" % e\n logger.error(msg)\n raise Exception(msg)\n\n\ndef _check(rv, path=None):\n if rv < 0:\n raise OSError(get_errno(), path)\n return rv\n\nif sys.platform.startswith('linux'): # pragma: linux only\n libc.llistxattr.argtypes = (c_char_p, c_char_p, c_size_t)\n libc.llistxattr.restype = c_ssize_t\n libc.flistxattr.argtypes = (c_int, c_char_p, c_size_t)\n libc.flistxattr.restype = c_ssize_t\n libc.lsetxattr.argtypes = (c_char_p, c_char_p, c_char_p, c_size_t, c_int)\n libc.lsetxattr.restype = c_int\n libc.fsetxattr.argtypes = (c_int, c_char_p, c_char_p, c_size_t, c_int)\n libc.fsetxattr.restype = c_int\n libc.lgetxattr.argtypes = (c_char_p, c_char_p, c_char_p, c_size_t)\n libc.lgetxattr.restype = c_ssize_t\n libc.fgetxattr.argtypes = (c_int, c_char_p, c_char_p, c_size_t)\n libc.fgetxattr.restype = c_ssize_t\n\n def listxattr(path, *, follow_symlinks=True):\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.flistxattr\n elif follow_symlinks:\n func = libc.listxattr\n else:\n func = libc.llistxattr\n n = _check(func(path, None, 0), path)\n if n == 0:\n return []\n namebuf = create_string_buffer(n)\n n2 = _check(func(path, namebuf, n), path)\n if n2 != n:\n raise Exception('listxattr failed')\n return [os.fsdecode(name) for name in namebuf.raw.split(b'\\0')[:-1] if not name.startswith(b'system.posix_acl_')]\n\n def getxattr(path, name, *, follow_symlinks=True):\n name = os.fsencode(name)\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.fgetxattr\n elif follow_symlinks:\n func = libc.getxattr\n else:\n func = libc.lgetxattr\n n = _check(func(path, name, None, 0))\n if n == 0:\n return\n valuebuf = create_string_buffer(n)\n n2 = _check(func(path, name, valuebuf, n), path)\n if n2 != n:\n raise Exception('getxattr failed')\n return valuebuf.raw\n\n def setxattr(path, name, value, *, follow_symlinks=True):\n name = os.fsencode(name)\n value = value and os.fsencode(value)\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.fsetxattr\n elif follow_symlinks:\n func = libc.setxattr\n else:\n func = libc.lsetxattr\n _check(func(path, name, value, len(value) if value else 0, 0), path)\n\nelif sys.platform == 'darwin': # pragma: darwin only\n libc.listxattr.argtypes = (c_char_p, c_char_p, c_size_t, c_int)\n libc.listxattr.restype = c_ssize_t\n libc.flistxattr.argtypes = (c_int, c_char_p, c_size_t)\n libc.flistxattr.restype = c_ssize_t\n libc.setxattr.argtypes = (c_char_p, c_char_p, c_char_p, c_size_t, c_uint32, c_int)\n libc.setxattr.restype = c_int\n libc.fsetxattr.argtypes = (c_int, c_char_p, c_char_p, c_size_t, c_uint32, c_int)\n libc.fsetxattr.restype = c_int\n libc.getxattr.argtypes = (c_char_p, c_char_p, c_char_p, c_size_t, c_uint32, c_int)\n libc.getxattr.restype = c_ssize_t\n libc.fgetxattr.argtypes = (c_int, c_char_p, c_char_p, c_size_t, c_uint32, c_int)\n libc.fgetxattr.restype = c_ssize_t\n\n XATTR_NOFOLLOW = 0x0001\n\n def listxattr(path, *, follow_symlinks=True):\n func = libc.listxattr\n flags = 0\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.flistxattr\n elif not follow_symlinks:\n flags = 
XATTR_NOFOLLOW\n n = _check(func(path, None, 0, flags), path)\n if n == 0:\n return []\n namebuf = create_string_buffer(n)\n n2 = _check(func(path, namebuf, n, flags), path)\n if n2 != n:\n raise Exception('listxattr failed')\n return [os.fsdecode(name) for name in namebuf.raw.split(b'\\0')[:-1]]\n\n def getxattr(path, name, *, follow_symlinks=True):\n name = os.fsencode(name)\n func = libc.getxattr\n flags = 0\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.fgetxattr\n elif not follow_symlinks:\n flags = XATTR_NOFOLLOW\n n = _check(func(path, name, None, 0, 0, flags))\n if n == 0:\n return\n valuebuf = create_string_buffer(n)\n n2 = _check(func(path, name, valuebuf, n, 0, flags), path)\n if n2 != n:\n raise Exception('getxattr failed')\n return valuebuf.raw\n\n def setxattr(path, name, value, *, follow_symlinks=True):\n name = os.fsencode(name)\n value = value and os.fsencode(value)\n func = libc.setxattr\n flags = 0\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.fsetxattr\n elif not follow_symlinks:\n flags = XATTR_NOFOLLOW\n _check(func(path, name, value, len(value) if value else 0, 0, flags), path)\n\nelif sys.platform.startswith('freebsd'): # pragma: freebsd only\n EXTATTR_NAMESPACE_USER = 0x0001\n libc.extattr_list_fd.argtypes = (c_int, c_int, c_char_p, c_size_t)\n libc.extattr_list_fd.restype = c_ssize_t\n libc.extattr_list_link.argtypes = (c_char_p, c_int, c_char_p, c_size_t)\n libc.extattr_list_link.restype = c_ssize_t\n libc.extattr_list_file.argtypes = (c_char_p, c_int, c_char_p, c_size_t)\n libc.extattr_list_file.restype = c_ssize_t\n libc.extattr_get_fd.argtypes = (c_int, c_int, c_char_p, c_char_p, c_size_t)\n libc.extattr_get_fd.restype = c_ssize_t\n libc.extattr_get_link.argtypes = (c_char_p, c_int, c_char_p, c_char_p, c_size_t)\n libc.extattr_get_link.restype = c_ssize_t\n libc.extattr_get_file.argtypes = (c_char_p, c_int, c_char_p, c_char_p, c_size_t)\n libc.extattr_get_file.restype = c_ssize_t\n libc.extattr_set_fd.argtypes = (c_int, c_int, c_char_p, c_char_p, c_size_t)\n libc.extattr_set_fd.restype = c_int\n libc.extattr_set_link.argtypes = (c_char_p, c_int, c_char_p, c_char_p, c_size_t)\n libc.extattr_set_link.restype = c_int\n libc.extattr_set_file.argtypes = (c_char_p, c_int, c_char_p, c_char_p, c_size_t)\n libc.extattr_set_file.restype = c_int\n\n def listxattr(path, *, follow_symlinks=True):\n ns = EXTATTR_NAMESPACE_USER\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.extattr_list_fd\n elif follow_symlinks:\n func = libc.extattr_list_file\n else:\n func = libc.extattr_list_link\n n = _check(func(path, ns, None, 0), path)\n if n == 0:\n return []\n namebuf = create_string_buffer(n)\n n2 = _check(func(path, ns, namebuf, n), path)\n if n2 != n:\n raise Exception('listxattr failed')\n names = []\n mv = memoryview(namebuf.raw)\n while mv:\n length = mv[0]\n # Python < 3.3 returns bytes instead of int\n if isinstance(length, bytes):\n length = ord(length)\n names.append(os.fsdecode(bytes(mv[1:1+length])))\n mv = mv[1+length:]\n return names\n\n def getxattr(path, name, *, follow_symlinks=True):\n name = os.fsencode(name)\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.extattr_get_fd\n elif follow_symlinks:\n func = libc.extattr_get_file\n else:\n func = libc.extattr_get_link\n n = _check(func(path, EXTATTR_NAMESPACE_USER, name, None, 0))\n if n == 0:\n return\n valuebuf = 
create_string_buffer(n)\n n2 = _check(func(path, EXTATTR_NAMESPACE_USER, name, valuebuf, n), path)\n if n2 != n:\n raise Exception('getxattr failed')\n return valuebuf.raw\n\n def setxattr(path, name, value, *, follow_symlinks=True):\n name = os.fsencode(name)\n value = value and os.fsencode(value)\n if isinstance(path, str):\n path = os.fsencode(path)\n if isinstance(path, int):\n func = libc.extattr_set_fd\n elif follow_symlinks:\n func = libc.extattr_set_file\n else:\n func = libc.extattr_set_link\n _check(func(path, EXTATTR_NAMESPACE_USER, name, value, len(value) if value else 0), path)\n\nelse: # pragma: unknown platform only\n def listxattr(path, *, follow_symlinks=True):\n return []\n\n def getxattr(path, name, *, follow_symlinks=True):\n pass\n\n def setxattr(path, name, value, *, follow_symlinks=True):\n pass\n", "path": "borg/xattr.py"}]}
| 3,971 | 414 |
gh_patches_debug_29770
|
rasdani/github-patches
|
git_diff
|
huggingface__trl-1415
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can not import name top_k_top_p_filtering
Hello everyone, I just ran into an error: I cannot import trl because of the following:
```ImportError: cannot import name 'top_k_top_p_filtering' from 'transformers' (/usr/local/lib/python3.10/dist-packages/transformers/__init__.py)```
Although I upgraded the transformers library, it still did not work.
https://github.com/huggingface/trl/blob/22b4f548f4954319ece6f17cab226a26e2db65be/trl/core.py#L25
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `trl/core.py`
Content:
```
1 # Copyright 2022 The HuggingFace Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import gc
15 import random
16 import warnings
17 from contextlib import contextmanager
18 from typing import Dict, List, Optional, Tuple, Union
19
20 import numpy as np
21 import torch
22 import torch.nn as nn
23 import torch.nn.functional as F
24 from torch.nn.utils.rnn import pad_sequence
25 from transformers import top_k_top_p_filtering
26
27 from .import_utils import is_npu_available, is_xpu_available
28
29
30 try:
31 from collections.abc import Mapping
32 except ImportError:
33 from collections.abc import Mapping
34
35
36 WANDB_PADDING = -1
37
38
39 def flatten_dict(nested: Dict, sep: str = "/") -> Dict:
40 """Flatten dictionary and concatenate nested keys with separator."""
41
42 def recurse(nest: Dict, prefix: str, into: Dict) -> None:
43 for k, v in nest.items():
44 if sep in k:
45 raise ValueError(f"separator '{sep}' not allowed to be in key '{k}'")
46 if isinstance(v, Mapping):
47 recurse(v, prefix + k + sep, into)
48 else:
49 into[prefix + k] = v
50
51 flat = {}
52 recurse(nested, "", flat)
53 return flat
54
55
56 def convert_to_scalar(stats: Dict) -> Dict:
57 """
58 Converts the stats from a flattened dict to single scalar dicts
59 """
60 tensorboard_stats = {}
61 for k, v in stats.items():
62 # for tensorboard compatibility - arrays and tensors are ignored with tensorboard
63 # therefore we convert single element tensors to scalars
64 if (isinstance(v, torch.Tensor) or isinstance(v, np.ndarray)) and (
65 len(v.shape) == 0 or (len(v.shape) == 1 and v.shape[0] == 1)
66 ):
67 v = v.item()
68 tensorboard_stats[k] = v
69 return tensorboard_stats
70
71
72 def stack_dicts(stats_dicts: List[Dict]) -> Dict:
73 """Stack the values of a dict."""
74 results = dict()
75 for k in stats_dicts[0]:
76 stats_list = [torch.flatten(d[k]) for d in stats_dicts]
77 results[k] = pad_sequence(stats_list, batch_first=True, padding_value=WANDB_PADDING)
78 return results
79
80
81 def add_suffix(input_dict: Dict, suffix: str) -> Dict:
82 """Add suffix to dict keys."""
83 return {k + suffix: v for k, v in input_dict.items()}
84
85
86 def pad_to_size(tensor: torch.Tensor, size: int, dim: int = 1, padding: int = 50256) -> torch.Tensor:
87 """Pad tensor to size."""
88 t_size = tensor.size()[dim]
89 if t_size == size:
90 return tensor
91 else:
92 return torch.nn.functional.pad(tensor, (0, size - t_size), "constant", padding)
93
94
95 def logprobs_from_logits(logits: torch.Tensor, labels: torch.Tensor, gather: bool = True) -> torch.Tensor:
96 """
97 See: https://github.com/pytorch/pytorch/issues/563#issuecomment-330103591
98 """
99 logp = F.log_softmax(logits, dim=2)
100
101 if not gather:
102 return logp
103 logpy = torch.gather(logp, 2, labels.unsqueeze(2)).squeeze(-1)
104 return logpy
105
106
107 def whiten(values: torch.Tensor, shift_mean: bool = True) -> torch.Tensor:
108 """Whiten values."""
109 mean, var = torch.mean(values), torch.var(values)
110 whitened = (values - mean) * torch.rsqrt(var + 1e-8)
111 if not shift_mean:
112 whitened += mean
113 return whitened
114
115
116 def masked_mean(values: torch.Tensor, mask: torch.Tensor, axis: Optional[bool] = None) -> torch.Tensor:
117 """Compute mean of tensor with a masked values."""
118 if axis is not None:
119 return (values * mask).sum(axis=axis) / mask.sum(axis=axis)
120 else:
121 return (values * mask).sum() / mask.sum()
122
123
124 def masked_var(values: torch.Tensor, mask: torch.Tensor, unbiased: bool = True) -> torch.Tensor:
125 """Compute variance of tensor with masked values."""
126 mean = masked_mean(values, mask)
127 centered_values = values - mean
128 variance = masked_mean(centered_values**2, mask)
129 if unbiased:
130 mask_sum = mask.sum()
131 if mask_sum == 0:
132 raise ValueError(
133 "The sum of the mask is zero, which can happen when `mini_batch_size=1`;"
134 "try increase the `mini_batch_size` or `gradient_accumulation_steps`"
135 )
136 # note that if mask_sum == 1, then there is a division by zero issue
137 # to avoid it you just need to use a larger minibatch_size
138 bessel_correction = mask_sum / (mask_sum - 1)
139 variance = variance * bessel_correction
140 return variance
141
142
143 def masked_whiten(values: torch.Tensor, mask: torch.Tensor, shift_mean: bool = True) -> torch.Tensor:
144 """Whiten values with masked values."""
145 mean, var = masked_mean(values, mask), masked_var(values, mask)
146 whitened = (values - mean) * torch.rsqrt(var + 1e-8)
147 if not shift_mean:
148 whitened += mean
149 return whitened
150
151
152 def clip_by_value(x: torch.Tensor, tensor_min: float, tensor_max: float) -> torch.Tensor:
153 """
154 Tensor extension to torch.clamp
155 https://github.com/pytorch/pytorch/issues/2793#issuecomment-428784713
156 """
157 clipped = torch.max(torch.min(x, tensor_max), tensor_min)
158 return clipped
159
160
161 def entropy_from_logits(logits: torch.Tensor) -> torch.Tensor:
162 """Calculate entropy from logits."""
163 pd = torch.nn.functional.softmax(logits, dim=-1)
164 entropy = torch.logsumexp(logits, axis=-1) - torch.sum(pd * logits, axis=-1)
165 return entropy
166
167
168 def average_torch_dicts(list_of_dicts: List[Dict]) -> Dict:
169 """Average values of a list of dicts with torch tensors."""
170 average_dict = dict()
171 for key in list_of_dicts[0].keys():
172 average_dict[key] = torch.mean(torch.stack([d[key] for d in list_of_dicts]), axis=0)
173 return average_dict
174
175
176 def stats_to_np(stats_dict: Dict) -> Dict:
177 """Cast all torch.tensors in dict to numpy arrays."""
178 new_dict = dict()
179 for k, v in stats_dict.items():
180 if isinstance(v, torch.Tensor):
181 new_dict[k] = v.detach().cpu()
182 if new_dict[k].dtype == torch.bfloat16:
183 new_dict[k] = new_dict[k].float()
184 new_dict[k] = new_dict[k].numpy()
185 else:
186 new_dict[k] = v
187 if np.isscalar(new_dict[k]):
188 new_dict[k] = float(new_dict[k])
189 return new_dict
190
191
192 def respond_to_batch(
193 model: nn.Module, queries: List[torch.LongTensor], txt_len: int = 20, top_k: int = 0, top_p: float = 1.0
194 ) -> torch.LongTensor:
195 """Sample text from language model."""
196 input_ids = queries
197 for _i in range(txt_len):
198 # Get Logits
199 outputs = model(input_ids)
200 next_token_logits = outputs[0][:, -1, :]
201 next_token_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p)
202 # Sample
203 probs = F.softmax(next_token_logits, dim=-1)
204 next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
205 input_ids = torch.cat([input_ids, next_token.unsqueeze(-1)], dim=-1)
206 return input_ids[:, -txt_len:]
207
208
209 def set_seed(seed: int) -> None:
210 """
211 Helper function for reproducible behavior to set the seed in `random`, `numpy`, and `torch`.
212
213 Args:
214 seed (`int`): The seed to set.
215 """
216 random.seed(seed)
217 np.random.seed(seed)
218 torch.manual_seed(seed)
219 if is_xpu_available():
220 torch.xpu.manual_seed_all(seed)
221 elif is_npu_available():
222 torch.npu.manual_seed_all(seed)
223 else:
224 torch.cuda.manual_seed_all(seed)
225
226
227 class LengthSampler:
228 """
229 Samples a length
230 """
231
232 def __init__(self, min_value: int, max_value: int):
233 self.values = list(range(min_value, max_value))
234
235 def __call__(self) -> int:
236 return np.random.choice(self.values)
237
238
239 class PPODecorators:
240 optimize_device_cache = False
241
242 @classmethod
243 @contextmanager
244 def empty_device_cache(cls):
245 yield
246 if cls.optimize_device_cache:
247 if is_xpu_available():
248 gc.collect()
249 torch.xpu.empty_cache()
250 gc.collect()
251 elif is_npu_available():
252 gc.collect()
253 torch.npu.empty_cache()
254 gc.collect()
255 elif torch.cuda.is_available():
256 gc.collect()
257 torch.cuda.empty_cache()
258 gc.collect()
259
260
261 def randn_tensor(
262 shape: Union[Tuple, List],
263 generator: Optional[Union[List[torch.Generator], torch.Generator]] = None,
264 device: Optional[torch.device] = None,
265 dtype: Optional[torch.dtype] = None,
266 layout: Optional[torch.layout] = None,
267 ) -> torch.Tensor:
268 """A helper function to create random tensors on the desired `device` with the desired `dtype`. When
269 passing a list of generators, you can seed each batch size individually. If CPU generators are passed, the tensor
270 is always created on the CPU.
271 """
272 # device on which tensor is created defaults to device
273 rand_device = device
274 batch_size = shape[0]
275
276 layout = layout or torch.strided
277 device = device or torch.device("cpu")
278
279 if generator is not None:
280 gen_device_type = generator.device.type if not isinstance(generator, list) else generator[0].device.type
281 if gen_device_type != device.type and gen_device_type == "cpu":
282 rand_device = "cpu"
283 if device != "mps":
284 warnings.warn(
285 f"The passed generator was created on 'cpu' even though a tensor on {device} was expected."
286 f" Tensors will be created on 'cpu' and then moved to {device}. Note that one can probably"
287 f" slighly speed up this function by passing a generator that was created on the {device} device."
288 )
289 elif gen_device_type != device.type and gen_device_type == "cuda":
290 raise ValueError(f"Cannot generate a {device} tensor from a generator of type {gen_device_type}.")
291
292 # make sure generator list of length 1 is treated like a non-list
293 if isinstance(generator, list) and len(generator) == 1:
294 generator = generator[0]
295
296 if isinstance(generator, list):
297 shape = (1,) + shape[1:]
298 latents = [
299 torch.randn(shape, generator=generator[i], device=rand_device, dtype=dtype, layout=layout)
300 for i in range(batch_size)
301 ]
302 latents = torch.cat(latents, dim=0).to(device)
303 else:
304 latents = torch.randn(shape, generator=generator, device=rand_device, dtype=dtype, layout=layout).to(device)
305
306 return latents
307
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/trl/core.py b/trl/core.py
--- a/trl/core.py
+++ b/trl/core.py
@@ -22,7 +22,7 @@
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils.rnn import pad_sequence
-from transformers import top_k_top_p_filtering
+from transformers.generation import TopKLogitsWarper, TopPLogitsWarper
from .import_utils import is_npu_available, is_xpu_available
@@ -36,6 +36,42 @@
WANDB_PADDING = -1
+def top_k_top_p_filtering(
+ logits: torch.FloatTensor,
+ top_k: int = 0,
+ top_p: float = 1.0,
+ filter_value: float = -float("Inf"),
+ min_tokens_to_keep: int = 1,
+) -> torch.FloatTensor:
+ """
+ Filter a distribution of logits using top-k and/or nucleus (top-p) filtering.
+
+ Args:
+ logits: logits distribution shape (batch size, vocabulary size)
+ top_k (`int`, *optional*, defaults to 0):
+ If > 0, only keep the top k tokens with highest probability (top-k filtering)
+ top_p (`float`, *optional*, defaults to 1.0):
+ If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus
+ filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)
+ min_tokens_to_keep (`int`, *optional*, defaults to 1):
+ Minimumber of tokens we keep per batch example in the output.
+
+ From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317
+ """
+
+ if top_k > 0:
+ logits = TopKLogitsWarper(top_k=top_k, filter_value=filter_value, min_tokens_to_keep=min_tokens_to_keep)(
+ None, logits
+ )
+
+ if 0 <= top_p <= 1.0:
+ logits = TopPLogitsWarper(top_p=top_p, filter_value=filter_value, min_tokens_to_keep=min_tokens_to_keep)(
+ None, logits
+ )
+
+ return logits
+
+
def flatten_dict(nested: Dict, sep: str = "/") -> Dict:
"""Flatten dictionary and concatenate nested keys with separator."""
|
{"golden_diff": "diff --git a/trl/core.py b/trl/core.py\n--- a/trl/core.py\n+++ b/trl/core.py\n@@ -22,7 +22,7 @@\n import torch.nn as nn\n import torch.nn.functional as F\n from torch.nn.utils.rnn import pad_sequence\n-from transformers import top_k_top_p_filtering\n+from transformers.generation import TopKLogitsWarper, TopPLogitsWarper\n \n from .import_utils import is_npu_available, is_xpu_available\n \n@@ -36,6 +36,42 @@\n WANDB_PADDING = -1\n \n \n+def top_k_top_p_filtering(\n+ logits: torch.FloatTensor,\n+ top_k: int = 0,\n+ top_p: float = 1.0,\n+ filter_value: float = -float(\"Inf\"),\n+ min_tokens_to_keep: int = 1,\n+) -> torch.FloatTensor:\n+ \"\"\"\n+ Filter a distribution of logits using top-k and/or nucleus (top-p) filtering.\n+\n+ Args:\n+ logits: logits distribution shape (batch size, vocabulary size)\n+ top_k (`int`, *optional*, defaults to 0):\n+ If > 0, only keep the top k tokens with highest probability (top-k filtering)\n+ top_p (`float`, *optional*, defaults to 1.0):\n+ If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus\n+ filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751)\n+ min_tokens_to_keep (`int`, *optional*, defaults to 1):\n+ Minimumber of tokens we keep per batch example in the output.\n+\n+ From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317\n+ \"\"\"\n+\n+ if top_k > 0:\n+ logits = TopKLogitsWarper(top_k=top_k, filter_value=filter_value, min_tokens_to_keep=min_tokens_to_keep)(\n+ None, logits\n+ )\n+\n+ if 0 <= top_p <= 1.0:\n+ logits = TopPLogitsWarper(top_p=top_p, filter_value=filter_value, min_tokens_to_keep=min_tokens_to_keep)(\n+ None, logits\n+ )\n+\n+ return logits\n+\n+\n def flatten_dict(nested: Dict, sep: str = \"/\") -> Dict:\n \"\"\"Flatten dictionary and concatenate nested keys with separator.\"\"\"\n", "issue": "Can not import name top_k_top_p_filtering\nHello everyone, I just came across this kind of error, I cannot import trl because of this kind of error: \r\n```ImportError: cannot import name 'top_k_top_p_filtering' from 'transformers' (/usr/local/lib/python3.10/dist-packages/transformers/__init__.py)```\r\nalthough I upgrade transformers library, it remained not working\r\n\r\nhttps://github.com/huggingface/trl/blob/22b4f548f4954319ece6f17cab226a26e2db65be/trl/core.py#L25\n", "before_files": [{"content": "# Copyright 2022 The HuggingFace Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport gc\nimport random\nimport warnings\nfrom contextlib import contextmanager\nfrom typing import Dict, List, Optional, Tuple, Union\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.nn.utils.rnn import pad_sequence\nfrom transformers import top_k_top_p_filtering\n\nfrom .import_utils import is_npu_available, is_xpu_available\n\n\ntry:\n from collections.abc import Mapping\nexcept ImportError:\n from collections.abc import Mapping\n\n\nWANDB_PADDING = -1\n\n\ndef flatten_dict(nested: Dict, sep: str = \"/\") -> Dict:\n \"\"\"Flatten dictionary and concatenate nested keys with separator.\"\"\"\n\n def recurse(nest: Dict, prefix: str, into: Dict) -> None:\n for k, v in nest.items():\n if sep in k:\n raise ValueError(f\"separator '{sep}' not allowed to be in key '{k}'\")\n if isinstance(v, Mapping):\n recurse(v, prefix + k + sep, into)\n else:\n into[prefix + k] = v\n\n flat = {}\n recurse(nested, \"\", flat)\n return flat\n\n\ndef convert_to_scalar(stats: Dict) -> Dict:\n \"\"\"\n Converts the stats from a flattened dict to single scalar dicts\n \"\"\"\n tensorboard_stats = {}\n for k, v in stats.items():\n # for tensorboard compatibility - arrays and tensors are ignored with tensorboard\n # therefore we convert single element tensors to scalars\n if (isinstance(v, torch.Tensor) or isinstance(v, np.ndarray)) and (\n len(v.shape) == 0 or (len(v.shape) == 1 and v.shape[0] == 1)\n ):\n v = v.item()\n tensorboard_stats[k] = v\n return tensorboard_stats\n\n\ndef stack_dicts(stats_dicts: List[Dict]) -> Dict:\n \"\"\"Stack the values of a dict.\"\"\"\n results = dict()\n for k in stats_dicts[0]:\n stats_list = [torch.flatten(d[k]) for d in stats_dicts]\n results[k] = pad_sequence(stats_list, batch_first=True, padding_value=WANDB_PADDING)\n return results\n\n\ndef add_suffix(input_dict: Dict, suffix: str) -> Dict:\n \"\"\"Add suffix to dict keys.\"\"\"\n return {k + suffix: v for k, v in input_dict.items()}\n\n\ndef pad_to_size(tensor: torch.Tensor, size: int, dim: int = 1, padding: int = 50256) -> torch.Tensor:\n \"\"\"Pad tensor to size.\"\"\"\n t_size = tensor.size()[dim]\n if t_size == size:\n return tensor\n else:\n return torch.nn.functional.pad(tensor, (0, size - t_size), \"constant\", padding)\n\n\ndef logprobs_from_logits(logits: torch.Tensor, labels: torch.Tensor, gather: bool = True) -> torch.Tensor:\n \"\"\"\n See: https://github.com/pytorch/pytorch/issues/563#issuecomment-330103591\n \"\"\"\n logp = F.log_softmax(logits, dim=2)\n\n if not gather:\n return logp\n logpy = torch.gather(logp, 2, labels.unsqueeze(2)).squeeze(-1)\n return logpy\n\n\ndef whiten(values: torch.Tensor, shift_mean: bool = True) -> torch.Tensor:\n \"\"\"Whiten values.\"\"\"\n mean, var = torch.mean(values), torch.var(values)\n whitened = (values - mean) * torch.rsqrt(var + 1e-8)\n if not shift_mean:\n whitened += mean\n return whitened\n\n\ndef masked_mean(values: torch.Tensor, mask: 
torch.Tensor, axis: Optional[bool] = None) -> torch.Tensor:\n \"\"\"Compute mean of tensor with a masked values.\"\"\"\n if axis is not None:\n return (values * mask).sum(axis=axis) / mask.sum(axis=axis)\n else:\n return (values * mask).sum() / mask.sum()\n\n\ndef masked_var(values: torch.Tensor, mask: torch.Tensor, unbiased: bool = True) -> torch.Tensor:\n \"\"\"Compute variance of tensor with masked values.\"\"\"\n mean = masked_mean(values, mask)\n centered_values = values - mean\n variance = masked_mean(centered_values**2, mask)\n if unbiased:\n mask_sum = mask.sum()\n if mask_sum == 0:\n raise ValueError(\n \"The sum of the mask is zero, which can happen when `mini_batch_size=1`;\"\n \"try increase the `mini_batch_size` or `gradient_accumulation_steps`\"\n )\n # note that if mask_sum == 1, then there is a division by zero issue\n # to avoid it you just need to use a larger minibatch_size\n bessel_correction = mask_sum / (mask_sum - 1)\n variance = variance * bessel_correction\n return variance\n\n\ndef masked_whiten(values: torch.Tensor, mask: torch.Tensor, shift_mean: bool = True) -> torch.Tensor:\n \"\"\"Whiten values with masked values.\"\"\"\n mean, var = masked_mean(values, mask), masked_var(values, mask)\n whitened = (values - mean) * torch.rsqrt(var + 1e-8)\n if not shift_mean:\n whitened += mean\n return whitened\n\n\ndef clip_by_value(x: torch.Tensor, tensor_min: float, tensor_max: float) -> torch.Tensor:\n \"\"\"\n Tensor extension to torch.clamp\n https://github.com/pytorch/pytorch/issues/2793#issuecomment-428784713\n \"\"\"\n clipped = torch.max(torch.min(x, tensor_max), tensor_min)\n return clipped\n\n\ndef entropy_from_logits(logits: torch.Tensor) -> torch.Tensor:\n \"\"\"Calculate entropy from logits.\"\"\"\n pd = torch.nn.functional.softmax(logits, dim=-1)\n entropy = torch.logsumexp(logits, axis=-1) - torch.sum(pd * logits, axis=-1)\n return entropy\n\n\ndef average_torch_dicts(list_of_dicts: List[Dict]) -> Dict:\n \"\"\"Average values of a list of dicts with torch tensors.\"\"\"\n average_dict = dict()\n for key in list_of_dicts[0].keys():\n average_dict[key] = torch.mean(torch.stack([d[key] for d in list_of_dicts]), axis=0)\n return average_dict\n\n\ndef stats_to_np(stats_dict: Dict) -> Dict:\n \"\"\"Cast all torch.tensors in dict to numpy arrays.\"\"\"\n new_dict = dict()\n for k, v in stats_dict.items():\n if isinstance(v, torch.Tensor):\n new_dict[k] = v.detach().cpu()\n if new_dict[k].dtype == torch.bfloat16:\n new_dict[k] = new_dict[k].float()\n new_dict[k] = new_dict[k].numpy()\n else:\n new_dict[k] = v\n if np.isscalar(new_dict[k]):\n new_dict[k] = float(new_dict[k])\n return new_dict\n\n\ndef respond_to_batch(\n model: nn.Module, queries: List[torch.LongTensor], txt_len: int = 20, top_k: int = 0, top_p: float = 1.0\n) -> torch.LongTensor:\n \"\"\"Sample text from language model.\"\"\"\n input_ids = queries\n for _i in range(txt_len):\n # Get Logits\n outputs = model(input_ids)\n next_token_logits = outputs[0][:, -1, :]\n next_token_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p)\n # Sample\n probs = F.softmax(next_token_logits, dim=-1)\n next_token = torch.multinomial(probs, num_samples=1).squeeze(1)\n input_ids = torch.cat([input_ids, next_token.unsqueeze(-1)], dim=-1)\n return input_ids[:, -txt_len:]\n\n\ndef set_seed(seed: int) -> None:\n \"\"\"\n Helper function for reproducible behavior to set the seed in `random`, `numpy`, and `torch`.\n\n Args:\n seed (`int`): The seed to set.\n \"\"\"\n random.seed(seed)\n 
np.random.seed(seed)\n torch.manual_seed(seed)\n if is_xpu_available():\n torch.xpu.manual_seed_all(seed)\n elif is_npu_available():\n torch.npu.manual_seed_all(seed)\n else:\n torch.cuda.manual_seed_all(seed)\n\n\nclass LengthSampler:\n \"\"\"\n Samples a length\n \"\"\"\n\n def __init__(self, min_value: int, max_value: int):\n self.values = list(range(min_value, max_value))\n\n def __call__(self) -> int:\n return np.random.choice(self.values)\n\n\nclass PPODecorators:\n optimize_device_cache = False\n\n @classmethod\n @contextmanager\n def empty_device_cache(cls):\n yield\n if cls.optimize_device_cache:\n if is_xpu_available():\n gc.collect()\n torch.xpu.empty_cache()\n gc.collect()\n elif is_npu_available():\n gc.collect()\n torch.npu.empty_cache()\n gc.collect()\n elif torch.cuda.is_available():\n gc.collect()\n torch.cuda.empty_cache()\n gc.collect()\n\n\ndef randn_tensor(\n shape: Union[Tuple, List],\n generator: Optional[Union[List[torch.Generator], torch.Generator]] = None,\n device: Optional[torch.device] = None,\n dtype: Optional[torch.dtype] = None,\n layout: Optional[torch.layout] = None,\n) -> torch.Tensor:\n \"\"\"A helper function to create random tensors on the desired `device` with the desired `dtype`. When\n passing a list of generators, you can seed each batch size individually. If CPU generators are passed, the tensor\n is always created on the CPU.\n \"\"\"\n # device on which tensor is created defaults to device\n rand_device = device\n batch_size = shape[0]\n\n layout = layout or torch.strided\n device = device or torch.device(\"cpu\")\n\n if generator is not None:\n gen_device_type = generator.device.type if not isinstance(generator, list) else generator[0].device.type\n if gen_device_type != device.type and gen_device_type == \"cpu\":\n rand_device = \"cpu\"\n if device != \"mps\":\n warnings.warn(\n f\"The passed generator was created on 'cpu' even though a tensor on {device} was expected.\"\n f\" Tensors will be created on 'cpu' and then moved to {device}. Note that one can probably\"\n f\" slighly speed up this function by passing a generator that was created on the {device} device.\"\n )\n elif gen_device_type != device.type and gen_device_type == \"cuda\":\n raise ValueError(f\"Cannot generate a {device} tensor from a generator of type {gen_device_type}.\")\n\n # make sure generator list of length 1 is treated like a non-list\n if isinstance(generator, list) and len(generator) == 1:\n generator = generator[0]\n\n if isinstance(generator, list):\n shape = (1,) + shape[1:]\n latents = [\n torch.randn(shape, generator=generator[i], device=rand_device, dtype=dtype, layout=layout)\n for i in range(batch_size)\n ]\n latents = torch.cat(latents, dim=0).to(device)\n else:\n latents = torch.randn(shape, generator=generator, device=rand_device, dtype=dtype, layout=layout).to(device)\n\n return latents\n", "path": "trl/core.py"}], "after_files": [{"content": "# Copyright 2022 The HuggingFace Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport gc\nimport random\nimport warnings\nfrom contextlib import contextmanager\nfrom typing import Dict, List, Optional, Tuple, Union\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.nn.utils.rnn import pad_sequence\nfrom transformers.generation import TopKLogitsWarper, TopPLogitsWarper\n\nfrom .import_utils import is_npu_available, is_xpu_available\n\n\ntry:\n from collections.abc import Mapping\nexcept ImportError:\n from collections.abc import Mapping\n\n\nWANDB_PADDING = -1\n\n\ndef top_k_top_p_filtering(\n logits: torch.FloatTensor,\n top_k: int = 0,\n top_p: float = 1.0,\n filter_value: float = -float(\"Inf\"),\n min_tokens_to_keep: int = 1,\n) -> torch.FloatTensor:\n \"\"\"\n Filter a distribution of logits using top-k and/or nucleus (top-p) filtering.\n\n Args:\n logits: logits distribution shape (batch size, vocabulary size)\n top_k (`int`, *optional*, defaults to 0):\n If > 0, only keep the top k tokens with highest probability (top-k filtering)\n top_p (`float`, *optional*, defaults to 1.0):\n If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus\n filtering is described in Holtzman et al. 
(http://arxiv.org/abs/1904.09751)\n min_tokens_to_keep (`int`, *optional*, defaults to 1):\n Minimumber of tokens we keep per batch example in the output.\n\n From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317\n \"\"\"\n\n if top_k > 0:\n logits = TopKLogitsWarper(top_k=top_k, filter_value=filter_value, min_tokens_to_keep=min_tokens_to_keep)(\n None, logits\n )\n\n if 0 <= top_p <= 1.0:\n logits = TopPLogitsWarper(top_p=top_p, filter_value=filter_value, min_tokens_to_keep=min_tokens_to_keep)(\n None, logits\n )\n\n return logits\n\n\ndef flatten_dict(nested: Dict, sep: str = \"/\") -> Dict:\n \"\"\"Flatten dictionary and concatenate nested keys with separator.\"\"\"\n\n def recurse(nest: Dict, prefix: str, into: Dict) -> None:\n for k, v in nest.items():\n if sep in k:\n raise ValueError(f\"separator '{sep}' not allowed to be in key '{k}'\")\n if isinstance(v, Mapping):\n recurse(v, prefix + k + sep, into)\n else:\n into[prefix + k] = v\n\n flat = {}\n recurse(nested, \"\", flat)\n return flat\n\n\ndef convert_to_scalar(stats: Dict) -> Dict:\n \"\"\"\n Converts the stats from a flattened dict to single scalar dicts\n \"\"\"\n tensorboard_stats = {}\n for k, v in stats.items():\n # for tensorboard compatibility - arrays and tensors are ignored with tensorboard\n # therefore we convert single element tensors to scalars\n if (isinstance(v, torch.Tensor) or isinstance(v, np.ndarray)) and (\n len(v.shape) == 0 or (len(v.shape) == 1 and v.shape[0] == 1)\n ):\n v = v.item()\n tensorboard_stats[k] = v\n return tensorboard_stats\n\n\ndef stack_dicts(stats_dicts: List[Dict]) -> Dict:\n \"\"\"Stack the values of a dict.\"\"\"\n results = dict()\n for k in stats_dicts[0]:\n stats_list = [torch.flatten(d[k]) for d in stats_dicts]\n results[k] = pad_sequence(stats_list, batch_first=True, padding_value=WANDB_PADDING)\n return results\n\n\ndef add_suffix(input_dict: Dict, suffix: str) -> Dict:\n \"\"\"Add suffix to dict keys.\"\"\"\n return {k + suffix: v for k, v in input_dict.items()}\n\n\ndef pad_to_size(tensor: torch.Tensor, size: int, dim: int = 1, padding: int = 50256) -> torch.Tensor:\n \"\"\"Pad tensor to size.\"\"\"\n t_size = tensor.size()[dim]\n if t_size == size:\n return tensor\n else:\n return torch.nn.functional.pad(tensor, (0, size - t_size), \"constant\", padding)\n\n\ndef logprobs_from_logits(logits: torch.Tensor, labels: torch.Tensor, gather: bool = True) -> torch.Tensor:\n \"\"\"\n See: https://github.com/pytorch/pytorch/issues/563#issuecomment-330103591\n \"\"\"\n logp = F.log_softmax(logits, dim=2)\n\n if not gather:\n return logp\n logpy = torch.gather(logp, 2, labels.unsqueeze(2)).squeeze(-1)\n return logpy\n\n\ndef whiten(values: torch.Tensor, shift_mean: bool = True) -> torch.Tensor:\n \"\"\"Whiten values.\"\"\"\n mean, var = torch.mean(values), torch.var(values)\n whitened = (values - mean) * torch.rsqrt(var + 1e-8)\n if not shift_mean:\n whitened += mean\n return whitened\n\n\ndef masked_mean(values: torch.Tensor, mask: torch.Tensor, axis: Optional[bool] = None) -> torch.Tensor:\n \"\"\"Compute mean of tensor with a masked values.\"\"\"\n if axis is not None:\n return (values * mask).sum(axis=axis) / mask.sum(axis=axis)\n else:\n return (values * mask).sum() / mask.sum()\n\n\ndef masked_var(values: torch.Tensor, mask: torch.Tensor, unbiased: bool = True) -> torch.Tensor:\n \"\"\"Compute variance of tensor with masked values.\"\"\"\n mean = masked_mean(values, mask)\n centered_values = values - mean\n variance = masked_mean(centered_values**2, mask)\n if 
unbiased:\n mask_sum = mask.sum()\n if mask_sum == 0:\n raise ValueError(\n \"The sum of the mask is zero, which can happen when `mini_batch_size=1`;\"\n \"try increase the `mini_batch_size` or `gradient_accumulation_steps`\"\n )\n # note that if mask_sum == 1, then there is a division by zero issue\n # to avoid it you just need to use a larger minibatch_size\n bessel_correction = mask_sum / (mask_sum - 1)\n variance = variance * bessel_correction\n return variance\n\n\ndef masked_whiten(values: torch.Tensor, mask: torch.Tensor, shift_mean: bool = True) -> torch.Tensor:\n \"\"\"Whiten values with masked values.\"\"\"\n mean, var = masked_mean(values, mask), masked_var(values, mask)\n whitened = (values - mean) * torch.rsqrt(var + 1e-8)\n if not shift_mean:\n whitened += mean\n return whitened\n\n\ndef clip_by_value(x: torch.Tensor, tensor_min: float, tensor_max: float) -> torch.Tensor:\n \"\"\"\n Tensor extension to torch.clamp\n https://github.com/pytorch/pytorch/issues/2793#issuecomment-428784713\n \"\"\"\n clipped = torch.max(torch.min(x, tensor_max), tensor_min)\n return clipped\n\n\ndef entropy_from_logits(logits: torch.Tensor) -> torch.Tensor:\n \"\"\"Calculate entropy from logits.\"\"\"\n pd = torch.nn.functional.softmax(logits, dim=-1)\n entropy = torch.logsumexp(logits, axis=-1) - torch.sum(pd * logits, axis=-1)\n return entropy\n\n\ndef average_torch_dicts(list_of_dicts: List[Dict]) -> Dict:\n \"\"\"Average values of a list of dicts with torch tensors.\"\"\"\n average_dict = dict()\n for key in list_of_dicts[0].keys():\n average_dict[key] = torch.mean(torch.stack([d[key] for d in list_of_dicts]), axis=0)\n return average_dict\n\n\ndef stats_to_np(stats_dict: Dict) -> Dict:\n \"\"\"Cast all torch.tensors in dict to numpy arrays.\"\"\"\n new_dict = dict()\n for k, v in stats_dict.items():\n if isinstance(v, torch.Tensor):\n new_dict[k] = v.detach().cpu()\n if new_dict[k].dtype == torch.bfloat16:\n new_dict[k] = new_dict[k].float()\n new_dict[k] = new_dict[k].numpy()\n else:\n new_dict[k] = v\n if np.isscalar(new_dict[k]):\n new_dict[k] = float(new_dict[k])\n return new_dict\n\n\ndef respond_to_batch(\n model: nn.Module, queries: List[torch.LongTensor], txt_len: int = 20, top_k: int = 0, top_p: float = 1.0\n) -> torch.LongTensor:\n \"\"\"Sample text from language model.\"\"\"\n input_ids = queries\n for _i in range(txt_len):\n # Get Logits\n outputs = model(input_ids)\n next_token_logits = outputs[0][:, -1, :]\n next_token_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p)\n # Sample\n probs = F.softmax(next_token_logits, dim=-1)\n next_token = torch.multinomial(probs, num_samples=1).squeeze(1)\n input_ids = torch.cat([input_ids, next_token.unsqueeze(-1)], dim=-1)\n return input_ids[:, -txt_len:]\n\n\ndef set_seed(seed: int) -> None:\n \"\"\"\n Helper function for reproducible behavior to set the seed in `random`, `numpy`, and `torch`.\n\n Args:\n seed (`int`): The seed to set.\n \"\"\"\n random.seed(seed)\n np.random.seed(seed)\n torch.manual_seed(seed)\n if is_xpu_available():\n torch.xpu.manual_seed_all(seed)\n elif is_npu_available():\n torch.npu.manual_seed_all(seed)\n else:\n torch.cuda.manual_seed_all(seed)\n\n\nclass LengthSampler:\n \"\"\"\n Samples a length\n \"\"\"\n\n def __init__(self, min_value: int, max_value: int):\n self.values = list(range(min_value, max_value))\n\n def __call__(self) -> int:\n return np.random.choice(self.values)\n\n\nclass PPODecorators:\n optimize_device_cache = False\n\n @classmethod\n @contextmanager\n def 
empty_device_cache(cls):\n yield\n if cls.optimize_device_cache:\n if is_xpu_available():\n gc.collect()\n torch.xpu.empty_cache()\n gc.collect()\n elif is_npu_available():\n gc.collect()\n torch.npu.empty_cache()\n gc.collect()\n elif torch.cuda.is_available():\n gc.collect()\n torch.cuda.empty_cache()\n gc.collect()\n\n\ndef randn_tensor(\n shape: Union[Tuple, List],\n generator: Optional[Union[List[torch.Generator], torch.Generator]] = None,\n device: Optional[torch.device] = None,\n dtype: Optional[torch.dtype] = None,\n layout: Optional[torch.layout] = None,\n) -> torch.Tensor:\n \"\"\"A helper function to create random tensors on the desired `device` with the desired `dtype`. When\n passing a list of generators, you can seed each batch size individually. If CPU generators are passed, the tensor\n is always created on the CPU.\n \"\"\"\n # device on which tensor is created defaults to device\n rand_device = device\n batch_size = shape[0]\n\n layout = layout or torch.strided\n device = device or torch.device(\"cpu\")\n\n if generator is not None:\n gen_device_type = generator.device.type if not isinstance(generator, list) else generator[0].device.type\n if gen_device_type != device.type and gen_device_type == \"cpu\":\n rand_device = \"cpu\"\n if device != \"mps\":\n warnings.warn(\n f\"The passed generator was created on 'cpu' even though a tensor on {device} was expected.\"\n f\" Tensors will be created on 'cpu' and then moved to {device}. Note that one can probably\"\n f\" slighly speed up this function by passing a generator that was created on the {device} device.\"\n )\n elif gen_device_type != device.type and gen_device_type == \"cuda\":\n raise ValueError(f\"Cannot generate a {device} tensor from a generator of type {gen_device_type}.\")\n\n # make sure generator list of length 1 is treated like a non-list\n if isinstance(generator, list) and len(generator) == 1:\n generator = generator[0]\n\n if isinstance(generator, list):\n shape = (1,) + shape[1:]\n latents = [\n torch.randn(shape, generator=generator[i], device=rand_device, dtype=dtype, layout=layout)\n for i in range(batch_size)\n ]\n latents = torch.cat(latents, dim=0).to(device)\n else:\n latents = torch.randn(shape, generator=generator, device=rand_device, dtype=dtype, layout=layout).to(device)\n\n return latents\n", "path": "trl/core.py"}]}
| 3,853 | 569 |
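A side note on the row above: the golden diff rebuilds `top_k_top_p_filtering` on top of the `transformers.generation` warpers. Below is a self-contained sketch of that same helper, usable outside the trl tree; it assumes a transformers release that still exports `TopKLogitsWarper` and `TopPLogitsWarper`, and the filtering logic is copied from the patch rather than invented here.

```python
import torch
from transformers.generation import TopKLogitsWarper, TopPLogitsWarper


def top_k_top_p_filtering(logits, top_k=0, top_p=1.0, filter_value=-float("Inf"), min_tokens_to_keep=1):
    # Keep only the top-k highest-scoring tokens, if top_k > 0.
    if top_k > 0:
        logits = TopKLogitsWarper(
            top_k=top_k, filter_value=filter_value, min_tokens_to_keep=min_tokens_to_keep
        )(None, logits)
    # Keep the smallest set of tokens whose cumulative probability reaches top_p.
    if 0 <= top_p <= 1.0:
        logits = TopPLogitsWarper(
            top_p=top_p, filter_value=filter_value, min_tokens_to_keep=min_tokens_to_keep
        )(None, logits)
    return logits


# Minimal usage check: restrict a random batch of logits to the top 5 tokens.
scores = torch.randn(2, 50)
filtered = top_k_top_p_filtering(scores, top_k=5, top_p=0.9)
print(filtered.shape)  # torch.Size([2, 50]); filtered-out entries are set to -inf
```

Passing `None` as the first argument works because neither warper inspects `input_ids`; the patch itself does the same.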
gh_patches_debug_954
|
rasdani/github-patches
|
git_diff
|
nltk__nltk-2895
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Link to book in python documentation wrong
Not sure if this is a bug in the documentation or in the DNS/web server setup.
The python documentation for nltk says:
```
Steven Bird, Ewan Klein, and Edward Loper (2009).
Natural Language Processing with Python. O'Reilly Media Inc.
http://nltk.org/book
```
but this link does not work; `https://www.nltk.org/book/` does.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nltk/__init__.py`
Content:
```
1 # Natural Language Toolkit (NLTK)
2 #
3 # Copyright (C) 2001-2021 NLTK Project
4 # Authors: Steven Bird <[email protected]>
5 # Edward Loper <[email protected]>
6 # URL: <https://www.nltk.org/>
7 # For license information, see LICENSE.TXT
8
9 """
10 The Natural Language Toolkit (NLTK) is an open source Python library
11 for Natural Language Processing. A free online book is available.
12 (If you use the library for academic research, please cite the book.)
13
14 Steven Bird, Ewan Klein, and Edward Loper (2009).
15 Natural Language Processing with Python. O'Reilly Media Inc.
16 https://www.nltk.org/book
17
18 isort:skip_file
19 """
20
21 import os
22
23 # //////////////////////////////////////////////////////
24 # Metadata
25 # //////////////////////////////////////////////////////
26
27 # Version. For each new release, the version number should be updated
28 # in the file VERSION.
29 try:
30 # If a VERSION file exists, use it!
31 version_file = os.path.join(os.path.dirname(__file__), "VERSION")
32 with open(version_file) as infile:
33 __version__ = infile.read().strip()
34 except NameError:
35 __version__ = "unknown (running code interactively?)"
36 except OSError as ex:
37 __version__ = "unknown (%s)" % ex
38
39 if __doc__ is not None: # fix for the ``python -OO``
40 __doc__ += "\n@version: " + __version__
41
42
43 # Copyright notice
44 __copyright__ = """\
45 Copyright (C) 2001-2021 NLTK Project.
46
47 Distributed and Licensed under the Apache License, Version 2.0,
48 which is included by reference.
49 """
50
51 __license__ = "Apache License, Version 2.0"
52 # Description of the toolkit, keywords, and the project's primary URL.
53 __longdescr__ = """\
54 The Natural Language Toolkit (NLTK) is a Python package for
55 natural language processing. NLTK requires Python 3.6, 3.7, 3.8, or 3.9."""
56 __keywords__ = [
57 "NLP",
58 "CL",
59 "natural language processing",
60 "computational linguistics",
61 "parsing",
62 "tagging",
63 "tokenizing",
64 "syntax",
65 "linguistics",
66 "language",
67 "natural language",
68 "text analytics",
69 ]
70 __url__ = "https://www.nltk.org/"
71
72 # Maintainer, contributors, etc.
73 __maintainer__ = "NLTK Team"
74 __maintainer_email__ = "[email protected]"
75 __author__ = __maintainer__
76 __author_email__ = __maintainer_email__
77
78 # "Trove" classifiers for Python Package Index.
79 __classifiers__ = [
80 "Development Status :: 5 - Production/Stable",
81 "Intended Audience :: Developers",
82 "Intended Audience :: Education",
83 "Intended Audience :: Information Technology",
84 "Intended Audience :: Science/Research",
85 "License :: OSI Approved :: Apache Software License",
86 "Operating System :: OS Independent",
87 "Programming Language :: Python :: 3.6",
88 "Programming Language :: Python :: 3.7",
89 "Programming Language :: Python :: 3.8",
90 "Programming Language :: Python :: 3.9",
91 "Topic :: Scientific/Engineering",
92 "Topic :: Scientific/Engineering :: Artificial Intelligence",
93 "Topic :: Scientific/Engineering :: Human Machine Interfaces",
94 "Topic :: Scientific/Engineering :: Information Analysis",
95 "Topic :: Text Processing",
96 "Topic :: Text Processing :: Filters",
97 "Topic :: Text Processing :: General",
98 "Topic :: Text Processing :: Indexing",
99 "Topic :: Text Processing :: Linguistic",
100 ]
101
102 from nltk.internals import config_java
103
104 # support numpy from pypy
105 try:
106 import numpypy
107 except ImportError:
108 pass
109
110 # Override missing methods on environments where it cannot be used like GAE.
111 import subprocess
112
113 if not hasattr(subprocess, "PIPE"):
114
115 def _fake_PIPE(*args, **kwargs):
116 raise NotImplementedError("subprocess.PIPE is not supported.")
117
118 subprocess.PIPE = _fake_PIPE
119 if not hasattr(subprocess, "Popen"):
120
121 def _fake_Popen(*args, **kwargs):
122 raise NotImplementedError("subprocess.Popen is not supported.")
123
124 subprocess.Popen = _fake_Popen
125
126 ###########################################################
127 # TOP-LEVEL MODULES
128 ###########################################################
129
130 # Import top-level functionality into top-level namespace
131
132 from nltk.collocations import *
133 from nltk.decorators import decorator, memoize
134 from nltk.featstruct import *
135 from nltk.grammar import *
136 from nltk.probability import *
137 from nltk.text import *
138 from nltk.util import *
139 from nltk.jsontags import *
140
141 ###########################################################
142 # PACKAGES
143 ###########################################################
144
145 from nltk.chunk import *
146 from nltk.classify import *
147 from nltk.inference import *
148 from nltk.metrics import *
149 from nltk.parse import *
150 from nltk.tag import *
151 from nltk.tokenize import *
152 from nltk.translate import *
153 from nltk.tree import *
154 from nltk.sem import *
155 from nltk.stem import *
156
157 # Packages which can be lazily imported
158 # (a) we don't import *
159 # (b) they're slow to import or have run-time dependencies
160 # that can safely fail at run time
161
162 from nltk import lazyimport
163
164 app = lazyimport.LazyModule("nltk.app", locals(), globals())
165 chat = lazyimport.LazyModule("nltk.chat", locals(), globals())
166 corpus = lazyimport.LazyModule("nltk.corpus", locals(), globals())
167 draw = lazyimport.LazyModule("nltk.draw", locals(), globals())
168 toolbox = lazyimport.LazyModule("nltk.toolbox", locals(), globals())
169
170 # Optional loading
171
172 try:
173 import numpy
174 except ImportError:
175 pass
176 else:
177 from nltk import cluster
178
179 from nltk.downloader import download, download_shell
180
181 try:
182 import tkinter
183 except ImportError:
184 pass
185 else:
186 try:
187 from nltk.downloader import download_gui
188 except RuntimeError as e:
189 import warnings
190
191 warnings.warn(
192 "Corpus downloader GUI not loaded "
193 "(RuntimeError during import: %s)" % str(e)
194 )
195
196 # explicitly import all top-level modules (ensuring
197 # they override the same names inadvertently imported
198 # from a subpackage)
199
200 from nltk import ccg, chunk, classify, collocations
201 from nltk import data, featstruct, grammar, help, inference, metrics
202 from nltk import misc, parse, probability, sem, stem, wsd
203 from nltk import tag, tbl, text, tokenize, translate, tree, util
204
205
206 # FIXME: override any accidentally imported demo, see https://github.com/nltk/nltk/issues/2116
207 def demo():
208 print("To run the demo code for a module, type nltk.module.demo()")
209
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nltk/__init__.py b/nltk/__init__.py
--- a/nltk/__init__.py
+++ b/nltk/__init__.py
@@ -13,7 +13,7 @@
Steven Bird, Ewan Klein, and Edward Loper (2009).
Natural Language Processing with Python. O'Reilly Media Inc.
-https://www.nltk.org/book
+https://www.nltk.org/book/
isort:skip_file
"""
|
{"golden_diff": "diff --git a/nltk/__init__.py b/nltk/__init__.py\n--- a/nltk/__init__.py\n+++ b/nltk/__init__.py\n@@ -13,7 +13,7 @@\n \n Steven Bird, Ewan Klein, and Edward Loper (2009).\n Natural Language Processing with Python. O'Reilly Media Inc.\n-https://www.nltk.org/book\n+https://www.nltk.org/book/\n \n isort:skip_file\n \"\"\"\n", "issue": "Link to book in python documentation wrong\nNot sure if this is a bug in the documentation or in the DNS/web server setup.\r\nThe python documentation for nltk says:\r\n```\r\n Steven Bird, Ewan Klein, and Edward Loper (2009).\r\n Natural Language Processing with Python. O'Reilly Media Inc.\r\n http://nltk.org/book\r\n```\r\nbut this link does not work, `https://www.nltk.org/book/` does.\n", "before_files": [{"content": "# Natural Language Toolkit (NLTK)\n#\n# Copyright (C) 2001-2021 NLTK Project\n# Authors: Steven Bird <[email protected]>\n# Edward Loper <[email protected]>\n# URL: <https://www.nltk.org/>\n# For license information, see LICENSE.TXT\n\n\"\"\"\nThe Natural Language Toolkit (NLTK) is an open source Python library\nfor Natural Language Processing. A free online book is available.\n(If you use the library for academic research, please cite the book.)\n\nSteven Bird, Ewan Klein, and Edward Loper (2009).\nNatural Language Processing with Python. O'Reilly Media Inc.\nhttps://www.nltk.org/book\n\nisort:skip_file\n\"\"\"\n\nimport os\n\n# //////////////////////////////////////////////////////\n# Metadata\n# //////////////////////////////////////////////////////\n\n# Version. For each new release, the version number should be updated\n# in the file VERSION.\ntry:\n # If a VERSION file exists, use it!\n version_file = os.path.join(os.path.dirname(__file__), \"VERSION\")\n with open(version_file) as infile:\n __version__ = infile.read().strip()\nexcept NameError:\n __version__ = \"unknown (running code interactively?)\"\nexcept OSError as ex:\n __version__ = \"unknown (%s)\" % ex\n\nif __doc__ is not None: # fix for the ``python -OO``\n __doc__ += \"\\n@version: \" + __version__\n\n\n# Copyright notice\n__copyright__ = \"\"\"\\\nCopyright (C) 2001-2021 NLTK Project.\n\nDistributed and Licensed under the Apache License, Version 2.0,\nwhich is included by reference.\n\"\"\"\n\n__license__ = \"Apache License, Version 2.0\"\n# Description of the toolkit, keywords, and the project's primary URL.\n__longdescr__ = \"\"\"\\\nThe Natural Language Toolkit (NLTK) is a Python package for\nnatural language processing. 
NLTK requires Python 3.6, 3.7, 3.8, or 3.9.\"\"\"\n__keywords__ = [\n \"NLP\",\n \"CL\",\n \"natural language processing\",\n \"computational linguistics\",\n \"parsing\",\n \"tagging\",\n \"tokenizing\",\n \"syntax\",\n \"linguistics\",\n \"language\",\n \"natural language\",\n \"text analytics\",\n]\n__url__ = \"https://www.nltk.org/\"\n\n# Maintainer, contributors, etc.\n__maintainer__ = \"NLTK Team\"\n__maintainer_email__ = \"[email protected]\"\n__author__ = __maintainer__\n__author_email__ = __maintainer_email__\n\n# \"Trove\" classifiers for Python Package Index.\n__classifiers__ = [\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Information Technology\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Human Machine Interfaces\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n \"Topic :: Text Processing\",\n \"Topic :: Text Processing :: Filters\",\n \"Topic :: Text Processing :: General\",\n \"Topic :: Text Processing :: Indexing\",\n \"Topic :: Text Processing :: Linguistic\",\n]\n\nfrom nltk.internals import config_java\n\n# support numpy from pypy\ntry:\n import numpypy\nexcept ImportError:\n pass\n\n# Override missing methods on environments where it cannot be used like GAE.\nimport subprocess\n\nif not hasattr(subprocess, \"PIPE\"):\n\n def _fake_PIPE(*args, **kwargs):\n raise NotImplementedError(\"subprocess.PIPE is not supported.\")\n\n subprocess.PIPE = _fake_PIPE\nif not hasattr(subprocess, \"Popen\"):\n\n def _fake_Popen(*args, **kwargs):\n raise NotImplementedError(\"subprocess.Popen is not supported.\")\n\n subprocess.Popen = _fake_Popen\n\n###########################################################\n# TOP-LEVEL MODULES\n###########################################################\n\n# Import top-level functionality into top-level namespace\n\nfrom nltk.collocations import *\nfrom nltk.decorators import decorator, memoize\nfrom nltk.featstruct import *\nfrom nltk.grammar import *\nfrom nltk.probability import *\nfrom nltk.text import *\nfrom nltk.util import *\nfrom nltk.jsontags import *\n\n###########################################################\n# PACKAGES\n###########################################################\n\nfrom nltk.chunk import *\nfrom nltk.classify import *\nfrom nltk.inference import *\nfrom nltk.metrics import *\nfrom nltk.parse import *\nfrom nltk.tag import *\nfrom nltk.tokenize import *\nfrom nltk.translate import *\nfrom nltk.tree import *\nfrom nltk.sem import *\nfrom nltk.stem import *\n\n# Packages which can be lazily imported\n# (a) we don't import *\n# (b) they're slow to import or have run-time dependencies\n# that can safely fail at run time\n\nfrom nltk import lazyimport\n\napp = lazyimport.LazyModule(\"nltk.app\", locals(), globals())\nchat = lazyimport.LazyModule(\"nltk.chat\", locals(), globals())\ncorpus = lazyimport.LazyModule(\"nltk.corpus\", locals(), globals())\ndraw = lazyimport.LazyModule(\"nltk.draw\", locals(), globals())\ntoolbox = lazyimport.LazyModule(\"nltk.toolbox\", locals(), 
globals())\n\n# Optional loading\n\ntry:\n import numpy\nexcept ImportError:\n pass\nelse:\n from nltk import cluster\n\nfrom nltk.downloader import download, download_shell\n\ntry:\n import tkinter\nexcept ImportError:\n pass\nelse:\n try:\n from nltk.downloader import download_gui\n except RuntimeError as e:\n import warnings\n\n warnings.warn(\n \"Corpus downloader GUI not loaded \"\n \"(RuntimeError during import: %s)\" % str(e)\n )\n\n# explicitly import all top-level modules (ensuring\n# they override the same names inadvertently imported\n# from a subpackage)\n\nfrom nltk import ccg, chunk, classify, collocations\nfrom nltk import data, featstruct, grammar, help, inference, metrics\nfrom nltk import misc, parse, probability, sem, stem, wsd\nfrom nltk import tag, tbl, text, tokenize, translate, tree, util\n\n\n# FIXME: override any accidentally imported demo, see https://github.com/nltk/nltk/issues/2116\ndef demo():\n print(\"To run the demo code for a module, type nltk.module.demo()\")\n", "path": "nltk/__init__.py"}], "after_files": [{"content": "# Natural Language Toolkit (NLTK)\n#\n# Copyright (C) 2001-2021 NLTK Project\n# Authors: Steven Bird <[email protected]>\n# Edward Loper <[email protected]>\n# URL: <https://www.nltk.org/>\n# For license information, see LICENSE.TXT\n\n\"\"\"\nThe Natural Language Toolkit (NLTK) is an open source Python library\nfor Natural Language Processing. A free online book is available.\n(If you use the library for academic research, please cite the book.)\n\nSteven Bird, Ewan Klein, and Edward Loper (2009).\nNatural Language Processing with Python. O'Reilly Media Inc.\nhttps://www.nltk.org/book/\n\nisort:skip_file\n\"\"\"\n\nimport os\n\n# //////////////////////////////////////////////////////\n# Metadata\n# //////////////////////////////////////////////////////\n\n# Version. For each new release, the version number should be updated\n# in the file VERSION.\ntry:\n # If a VERSION file exists, use it!\n version_file = os.path.join(os.path.dirname(__file__), \"VERSION\")\n with open(version_file) as infile:\n __version__ = infile.read().strip()\nexcept NameError:\n __version__ = \"unknown (running code interactively?)\"\nexcept OSError as ex:\n __version__ = \"unknown (%s)\" % ex\n\nif __doc__ is not None: # fix for the ``python -OO``\n __doc__ += \"\\n@version: \" + __version__\n\n\n# Copyright notice\n__copyright__ = \"\"\"\\\nCopyright (C) 2001-2021 NLTK Project.\n\nDistributed and Licensed under the Apache License, Version 2.0,\nwhich is included by reference.\n\"\"\"\n\n__license__ = \"Apache License, Version 2.0\"\n# Description of the toolkit, keywords, and the project's primary URL.\n__longdescr__ = \"\"\"\\\nThe Natural Language Toolkit (NLTK) is a Python package for\nnatural language processing. 
NLTK requires Python 3.6, 3.7, 3.8, or 3.9.\"\"\"\n__keywords__ = [\n \"NLP\",\n \"CL\",\n \"natural language processing\",\n \"computational linguistics\",\n \"parsing\",\n \"tagging\",\n \"tokenizing\",\n \"syntax\",\n \"linguistics\",\n \"language\",\n \"natural language\",\n \"text analytics\",\n]\n__url__ = \"https://www.nltk.org/\"\n\n# Maintainer, contributors, etc.\n__maintainer__ = \"NLTK Team\"\n__maintainer_email__ = \"[email protected]\"\n__author__ = __maintainer__\n__author_email__ = __maintainer_email__\n\n# \"Trove\" classifiers for Python Package Index.\n__classifiers__ = [\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Information Technology\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Human Machine Interfaces\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n \"Topic :: Text Processing\",\n \"Topic :: Text Processing :: Filters\",\n \"Topic :: Text Processing :: General\",\n \"Topic :: Text Processing :: Indexing\",\n \"Topic :: Text Processing :: Linguistic\",\n]\n\nfrom nltk.internals import config_java\n\n# support numpy from pypy\ntry:\n import numpypy\nexcept ImportError:\n pass\n\n# Override missing methods on environments where it cannot be used like GAE.\nimport subprocess\n\nif not hasattr(subprocess, \"PIPE\"):\n\n def _fake_PIPE(*args, **kwargs):\n raise NotImplementedError(\"subprocess.PIPE is not supported.\")\n\n subprocess.PIPE = _fake_PIPE\nif not hasattr(subprocess, \"Popen\"):\n\n def _fake_Popen(*args, **kwargs):\n raise NotImplementedError(\"subprocess.Popen is not supported.\")\n\n subprocess.Popen = _fake_Popen\n\n###########################################################\n# TOP-LEVEL MODULES\n###########################################################\n\n# Import top-level functionality into top-level namespace\n\nfrom nltk.collocations import *\nfrom nltk.decorators import decorator, memoize\nfrom nltk.featstruct import *\nfrom nltk.grammar import *\nfrom nltk.probability import *\nfrom nltk.text import *\nfrom nltk.util import *\nfrom nltk.jsontags import *\n\n###########################################################\n# PACKAGES\n###########################################################\n\nfrom nltk.chunk import *\nfrom nltk.classify import *\nfrom nltk.inference import *\nfrom nltk.metrics import *\nfrom nltk.parse import *\nfrom nltk.tag import *\nfrom nltk.tokenize import *\nfrom nltk.translate import *\nfrom nltk.tree import *\nfrom nltk.sem import *\nfrom nltk.stem import *\n\n# Packages which can be lazily imported\n# (a) we don't import *\n# (b) they're slow to import or have run-time dependencies\n# that can safely fail at run time\n\nfrom nltk import lazyimport\n\napp = lazyimport.LazyModule(\"nltk.app\", locals(), globals())\nchat = lazyimport.LazyModule(\"nltk.chat\", locals(), globals())\ncorpus = lazyimport.LazyModule(\"nltk.corpus\", locals(), globals())\ndraw = lazyimport.LazyModule(\"nltk.draw\", locals(), globals())\ntoolbox = lazyimport.LazyModule(\"nltk.toolbox\", locals(), 
globals())\n\n# Optional loading\n\ntry:\n import numpy\nexcept ImportError:\n pass\nelse:\n from nltk import cluster\n\nfrom nltk.downloader import download, download_shell\n\ntry:\n import tkinter\nexcept ImportError:\n pass\nelse:\n try:\n from nltk.downloader import download_gui\n except RuntimeError as e:\n import warnings\n\n warnings.warn(\n \"Corpus downloader GUI not loaded \"\n \"(RuntimeError during import: %s)\" % str(e)\n )\n\n# explicitly import all top-level modules (ensuring\n# they override the same names inadvertently imported\n# from a subpackage)\n\nfrom nltk import ccg, chunk, classify, collocations\nfrom nltk import data, featstruct, grammar, help, inference, metrics\nfrom nltk import misc, parse, probability, sem, stem, wsd\nfrom nltk import tag, tbl, text, tokenize, translate, tree, util\n\n\n# FIXME: override any accidentally imported demo, see https://github.com/nltk/nltk/issues/2116\ndef demo():\n print(\"To run the demo code for a module, type nltk.module.demo()\")\n", "path": "nltk/__init__.py"}]}
| 2,336 | 105 |
gh_patches_debug_30884
|
rasdani/github-patches
|
git_diff
|
nltk__nltk-3042
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sonority sequencing syllable tokenizer performs significantly slower on numbers than on words
The sonority sequencing syllable tokenizer (`nltk.SyllableTokenizer`) performs significantly slower on numbers than on words. It seems that the time complexity for words is O(n), which is okay, but O(n^2) for numbers, which is not so good.
```
>>> timeit.timeit('t.tokenize("99")', setup='import nltk; t = nltk.SyllableTokenizer()', number = 100)
0.03364099999998871
>>> timeit.timeit('t.tokenize("thisisanextremelylongword")', setup='import nltk; t = nltk.SyllableTokenizer()', number = 100)
0.002708099999949809
>>> timeit.timeit('t.tokenize("99")', setup='import nltk; t = nltk.SyllableTokenizer()', number = 1000)
2.5833234000000402
>>> timeit.timeit('t.tokenize("thisisanextremelylongword")', setup='import nltk; t = nltk.SyllableTokenizer()', number = 1000)
0.023796200000106182
>>> timeit.timeit('t.tokenize("99")', setup='import nltk; t = nltk.SyllableTokenizer()', number = 10000)
264.43897390000006
>>> timeit.timeit('t.tokenize("thisisanextremelylongword")', setup='import nltk; t = nltk.SyllableTokenizer()', number = 10000)
0.24109669999984362
```
OS: Windows 10 x64
Python: 3.8.10 x64
NLTK: 3.7
--- END ISSUE ---
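For orientation: a plausible reading of the slowdown is that every character missing from the sonority map, digits included, gets appended to the tokenizer's `vowels` string, so the alternation rebuilt by `re.search("|".join(self.vowels), ...)` keeps growing across calls. The snippet below is a minimal sketch of that effect with hypothetical names; it is not code taken from the issue or from NLTK.
```python
# Minimal sketch: the vowel alternation gains one entry per unmapped character,
# so each later search runs against a longer pattern.
import re
import timeit

vowels = "aeiouy"
for call in range(1, 4):
    vowels += "9"                      # what assign_values() effectively does for an unknown '9'
    pattern = "|".join(vowels)         # rebuilt (and re-searched) on every call
    cost = timeit.timeit(lambda: re.search(pattern, "99"), number=10_000)
    print(f"call {call}: pattern length {len(pattern)}, search time {cost:.4f}s")
```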
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nltk/tokenize/sonority_sequencing.py`
Content:
```
1 # Natural Language Toolkit: Tokenizers
2 #
3 # Copyright (C) 2001-2022 NLTK Project
4 # Author: Christopher Hench <[email protected]>
5 # Alex Estes
6 # URL: <https://www.nltk.org>
7 # For license information, see LICENSE.TXT
8
9 """
10 The Sonority Sequencing Principle (SSP) is a language agnostic algorithm proposed
11 by Otto Jesperson in 1904. The sonorous quality of a phoneme is judged by the
12 openness of the lips. Syllable breaks occur before troughs in sonority. For more
13 on the SSP see Selkirk (1984).
14
15 The default implementation uses the English alphabet, but the `sonority_hiearchy`
16 can be modified to IPA or any other alphabet for the use-case. The SSP is a
17 universal syllabification algorithm, but that does not mean it performs equally
18 across languages. Bartlett et al. (2009) is a good benchmark for English accuracy
19 if utilizing IPA (pg. 311).
20
21 Importantly, if a custom hierarchy is supplied and vowels span across more than
22 one level, they should be given separately to the `vowels` class attribute.
23
24 References:
25
26 - Otto Jespersen. 1904. Lehrbuch der Phonetik.
27 Leipzig, Teubner. Chapter 13, Silbe, pp. 185-203.
28 - Elisabeth Selkirk. 1984. On the major class features and syllable theory.
29 In Aronoff & Oehrle (eds.) Language Sound Structure: Studies in Phonology.
30 Cambridge, MIT Press. pp. 107-136.
31 - Susan Bartlett, et al. 2009. On the Syllabification of Phonemes.
32 In HLT-NAACL. pp. 308-316.
33 """
34
35 import re
36 import warnings
37 from string import punctuation
38
39 from nltk.tokenize.api import TokenizerI
40 from nltk.util import ngrams
41
42
43 class SyllableTokenizer(TokenizerI):
44 """
45 Syllabifies words based on the Sonority Sequencing Principle (SSP).
46
47 >>> from nltk.tokenize import SyllableTokenizer
48 >>> from nltk import word_tokenize
49 >>> SSP = SyllableTokenizer()
50 >>> SSP.tokenize('justification')
51 ['jus', 'ti', 'fi', 'ca', 'tion']
52 >>> text = "This is a foobar-like sentence."
53 >>> [SSP.tokenize(token) for token in word_tokenize(text)]
54 [['This'], ['is'], ['a'], ['foo', 'bar', '-', 'li', 'ke'], ['sen', 'ten', 'ce'], ['.']]
55 """
56
57 def __init__(self, lang="en", sonority_hierarchy=False):
58 """
59 :param lang: Language parameter, default is English, 'en'
60 :type lang: str
61 :param sonority_hierarchy: Sonority hierarchy according to the
62 Sonority Sequencing Principle.
63 :type sonority_hierarchy: list(str)
64 """
65 # Sonority hierarchy should be provided in descending order.
66 # If vowels are spread across multiple levels, they should be
67 # passed assigned self.vowels var together, otherwise should be
68 # placed in first index of hierarchy.
69 if not sonority_hierarchy and lang == "en":
70 sonority_hierarchy = [
71 "aeiouy", # vowels.
72 "lmnrw", # nasals.
73 "zvsf", # fricatives.
74 "bcdgtkpqxhj", # stops.
75 ]
76
77 self.vowels = sonority_hierarchy[0]
78 self.phoneme_map = {}
79 for i, level in enumerate(sonority_hierarchy):
80 for c in level:
81 sonority_level = len(sonority_hierarchy) - i
82 self.phoneme_map[c] = sonority_level
83 self.phoneme_map[c.upper()] = sonority_level
84
85 def assign_values(self, token):
86 """
87 Assigns each phoneme its value from the sonority hierarchy.
88 Note: Sentence/text has to be tokenized first.
89
90 :param token: Single word or token
91 :type token: str
92 :return: List of tuples, first element is character/phoneme and
93 second is the soronity value.
94 :rtype: list(tuple(str, int))
95 """
96 syllables_values = []
97 for c in token:
98 try:
99 syllables_values.append((c, self.phoneme_map[c]))
100 except KeyError:
101 if c not in punctuation:
102 warnings.warn(
103 "Character not defined in sonority_hierarchy,"
104 " assigning as vowel: '{}'".format(c)
105 )
106 syllables_values.append((c, max(self.phoneme_map.values())))
107 self.vowels += c
108 else: # If it's a punctuation, assign -1.
109 syllables_values.append((c, -1))
110 return syllables_values
111
112 def validate_syllables(self, syllable_list):
113 """
114 Ensures each syllable has at least one vowel.
115 If the following syllable doesn't have vowel, add it to the current one.
116
117 :param syllable_list: Single word or token broken up into syllables.
118 :type syllable_list: list(str)
119 :return: Single word or token broken up into syllables
120 (with added syllables if necessary)
121 :rtype: list(str)
122 """
123 valid_syllables = []
124 front = ""
125 for i, syllable in enumerate(syllable_list):
126 if syllable in punctuation:
127 valid_syllables.append(syllable)
128 continue
129 if not re.search("|".join(self.vowels), syllable):
130 if len(valid_syllables) == 0:
131 front += syllable
132 else:
133 valid_syllables = valid_syllables[:-1] + [
134 valid_syllables[-1] + syllable
135 ]
136 else:
137 if len(valid_syllables) == 0:
138 valid_syllables.append(front + syllable)
139 else:
140 valid_syllables.append(syllable)
141
142 return valid_syllables
143
144 def tokenize(self, token):
145 """
146 Apply the SSP to return a list of syllables.
147 Note: Sentence/text has to be tokenized first.
148
149 :param token: Single word or token
150 :type token: str
151 :return syllable_list: Single word or token broken up into syllables.
152 :rtype: list(str)
153 """
154 # assign values from hierarchy
155 syllables_values = self.assign_values(token)
156
157 # if only one vowel return word
158 if sum(token.count(x) for x in self.vowels) <= 1:
159 return [token]
160
161 syllable_list = []
162 syllable = syllables_values[0][0] # start syllable with first phoneme
163 for trigram in ngrams(syllables_values, n=3):
164 phonemes, values = zip(*trigram)
165 # Sonority of previous, focal and following phoneme
166 prev_value, focal_value, next_value = values
167 # Focal phoneme.
168 focal_phoneme = phonemes[1]
169
170 # These cases trigger syllable break.
171 if focal_value == -1: # If it's a punctuation, just break.
172 syllable_list.append(syllable)
173 syllable_list.append(focal_phoneme)
174 syllable = ""
175 elif prev_value >= focal_value == next_value:
176 syllable += focal_phoneme
177 syllable_list.append(syllable)
178 syllable = ""
179
180 elif prev_value > focal_value < next_value:
181 syllable_list.append(syllable)
182 syllable = ""
183 syllable += focal_phoneme
184
185 # no syllable break
186 else:
187 syllable += focal_phoneme
188
189 syllable += syllables_values[-1][0] # append last phoneme
190 syllable_list.append(syllable)
191
192 return self.validate_syllables(syllable_list)
193
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nltk/tokenize/sonority_sequencing.py b/nltk/tokenize/sonority_sequencing.py
--- a/nltk/tokenize/sonority_sequencing.py
+++ b/nltk/tokenize/sonority_sequencing.py
@@ -98,14 +98,15 @@
try:
syllables_values.append((c, self.phoneme_map[c]))
except KeyError:
- if c not in punctuation:
+ if c not in "0123456789" and c not in punctuation:
warnings.warn(
"Character not defined in sonority_hierarchy,"
" assigning as vowel: '{}'".format(c)
)
syllables_values.append((c, max(self.phoneme_map.values())))
- self.vowels += c
- else: # If it's a punctuation, assign -1.
+ if c not in self.vowels:
+ self.vowels += c
+ else: # If it's a punctuation or numbers, assign -1.
syllables_values.append((c, -1))
return syllables_values
@@ -122,11 +123,12 @@
"""
valid_syllables = []
front = ""
+ vowel_pattern = re.compile("|".join(self.vowels))
for i, syllable in enumerate(syllable_list):
if syllable in punctuation:
valid_syllables.append(syllable)
continue
- if not re.search("|".join(self.vowels), syllable):
+ if not vowel_pattern.search(syllable):
if len(valid_syllables) == 0:
front += syllable
else:
|
{"golden_diff": "diff --git a/nltk/tokenize/sonority_sequencing.py b/nltk/tokenize/sonority_sequencing.py\n--- a/nltk/tokenize/sonority_sequencing.py\n+++ b/nltk/tokenize/sonority_sequencing.py\n@@ -98,14 +98,15 @@\n try:\n syllables_values.append((c, self.phoneme_map[c]))\n except KeyError:\n- if c not in punctuation:\n+ if c not in \"0123456789\" and c not in punctuation:\n warnings.warn(\n \"Character not defined in sonority_hierarchy,\"\n \" assigning as vowel: '{}'\".format(c)\n )\n syllables_values.append((c, max(self.phoneme_map.values())))\n- self.vowels += c\n- else: # If it's a punctuation, assign -1.\n+ if c not in self.vowels:\n+ self.vowels += c\n+ else: # If it's a punctuation or numbers, assign -1.\n syllables_values.append((c, -1))\n return syllables_values\n \n@@ -122,11 +123,12 @@\n \"\"\"\n valid_syllables = []\n front = \"\"\n+ vowel_pattern = re.compile(\"|\".join(self.vowels))\n for i, syllable in enumerate(syllable_list):\n if syllable in punctuation:\n valid_syllables.append(syllable)\n continue\n- if not re.search(\"|\".join(self.vowels), syllable):\n+ if not vowel_pattern.search(syllable):\n if len(valid_syllables) == 0:\n front += syllable\n else:\n", "issue": "Sonority sequencing syllable tokenizer performs significantly slower on numbers than on words\nThe sonority sequencing syllable tokenizer (`nltk.SyllableTokenizer`) performs significantly slower on numbers than on words. It seems that the time complexity for words is O(n), which is okay, but O(n^2) for numbers, which is not so good.\r\n\r\n```\r\n>>> timeit.timeit('t.tokenize(\"99\")', setup='import nltk; t = nltk.SyllableTokenizer()', number = 100)\r\n0.03364099999998871\r\n>>> timeit.timeit('t.tokenize(\"thisisanextremelylongword\")', setup='import nltk; t = nltk.SyllableTokenizer()', number = 100)\r\n0.002708099999949809\r\n>>> timeit.timeit('t.tokenize(\"99\")', setup='import nltk; t = nltk.SyllableTokenizer()', number = 1000)\r\n2.5833234000000402\r\n>>> timeit.timeit('t.tokenize(\"thisisanextremelylongword\")', setup='import nltk; t = nltk.SyllableTokenizer()', number = 1000)\r\n0.023796200000106182\r\n>>> timeit.timeit('t.tokenize(\"99\")', setup='import nltk; t = nltk.SyllableTokenizer()', number = 10000)\r\n264.43897390000006\r\n>>> timeit.timeit('t.tokenize(\"thisisanextremelylongword\")', setup='import nltk; t = nltk.SyllableTokenizer()', number = 10000)\r\n0.24109669999984362\r\n```\r\n\r\nOS: Windows 10 x64\r\nPython: 3.8.10 x64\r\nNLTK: 3.7\n", "before_files": [{"content": "# Natural Language Toolkit: Tokenizers\n#\n# Copyright (C) 2001-2022 NLTK Project\n# Author: Christopher Hench <[email protected]>\n# Alex Estes\n# URL: <https://www.nltk.org>\n# For license information, see LICENSE.TXT\n\n\"\"\"\nThe Sonority Sequencing Principle (SSP) is a language agnostic algorithm proposed\nby Otto Jesperson in 1904. The sonorous quality of a phoneme is judged by the\nopenness of the lips. Syllable breaks occur before troughs in sonority. For more\non the SSP see Selkirk (1984).\n\nThe default implementation uses the English alphabet, but the `sonority_hiearchy`\ncan be modified to IPA or any other alphabet for the use-case. The SSP is a\nuniversal syllabification algorithm, but that does not mean it performs equally\nacross languages. Bartlett et al. (2009) is a good benchmark for English accuracy\nif utilizing IPA (pg. 
311).\n\nImportantly, if a custom hierarchy is supplied and vowels span across more than\none level, they should be given separately to the `vowels` class attribute.\n\nReferences:\n\n- Otto Jespersen. 1904. Lehrbuch der Phonetik.\n Leipzig, Teubner. Chapter 13, Silbe, pp. 185-203.\n- Elisabeth Selkirk. 1984. On the major class features and syllable theory.\n In Aronoff & Oehrle (eds.) Language Sound Structure: Studies in Phonology.\n Cambridge, MIT Press. pp. 107-136.\n- Susan Bartlett, et al. 2009. On the Syllabification of Phonemes.\n In HLT-NAACL. pp. 308-316.\n\"\"\"\n\nimport re\nimport warnings\nfrom string import punctuation\n\nfrom nltk.tokenize.api import TokenizerI\nfrom nltk.util import ngrams\n\n\nclass SyllableTokenizer(TokenizerI):\n \"\"\"\n Syllabifies words based on the Sonority Sequencing Principle (SSP).\n\n >>> from nltk.tokenize import SyllableTokenizer\n >>> from nltk import word_tokenize\n >>> SSP = SyllableTokenizer()\n >>> SSP.tokenize('justification')\n ['jus', 'ti', 'fi', 'ca', 'tion']\n >>> text = \"This is a foobar-like sentence.\"\n >>> [SSP.tokenize(token) for token in word_tokenize(text)]\n [['This'], ['is'], ['a'], ['foo', 'bar', '-', 'li', 'ke'], ['sen', 'ten', 'ce'], ['.']]\n \"\"\"\n\n def __init__(self, lang=\"en\", sonority_hierarchy=False):\n \"\"\"\n :param lang: Language parameter, default is English, 'en'\n :type lang: str\n :param sonority_hierarchy: Sonority hierarchy according to the\n Sonority Sequencing Principle.\n :type sonority_hierarchy: list(str)\n \"\"\"\n # Sonority hierarchy should be provided in descending order.\n # If vowels are spread across multiple levels, they should be\n # passed assigned self.vowels var together, otherwise should be\n # placed in first index of hierarchy.\n if not sonority_hierarchy and lang == \"en\":\n sonority_hierarchy = [\n \"aeiouy\", # vowels.\n \"lmnrw\", # nasals.\n \"zvsf\", # fricatives.\n \"bcdgtkpqxhj\", # stops.\n ]\n\n self.vowels = sonority_hierarchy[0]\n self.phoneme_map = {}\n for i, level in enumerate(sonority_hierarchy):\n for c in level:\n sonority_level = len(sonority_hierarchy) - i\n self.phoneme_map[c] = sonority_level\n self.phoneme_map[c.upper()] = sonority_level\n\n def assign_values(self, token):\n \"\"\"\n Assigns each phoneme its value from the sonority hierarchy.\n Note: Sentence/text has to be tokenized first.\n\n :param token: Single word or token\n :type token: str\n :return: List of tuples, first element is character/phoneme and\n second is the soronity value.\n :rtype: list(tuple(str, int))\n \"\"\"\n syllables_values = []\n for c in token:\n try:\n syllables_values.append((c, self.phoneme_map[c]))\n except KeyError:\n if c not in punctuation:\n warnings.warn(\n \"Character not defined in sonority_hierarchy,\"\n \" assigning as vowel: '{}'\".format(c)\n )\n syllables_values.append((c, max(self.phoneme_map.values())))\n self.vowels += c\n else: # If it's a punctuation, assign -1.\n syllables_values.append((c, -1))\n return syllables_values\n\n def validate_syllables(self, syllable_list):\n \"\"\"\n Ensures each syllable has at least one vowel.\n If the following syllable doesn't have vowel, add it to the current one.\n\n :param syllable_list: Single word or token broken up into syllables.\n :type syllable_list: list(str)\n :return: Single word or token broken up into syllables\n (with added syllables if necessary)\n :rtype: list(str)\n \"\"\"\n valid_syllables = []\n front = \"\"\n for i, syllable in enumerate(syllable_list):\n if syllable in punctuation:\n 
valid_syllables.append(syllable)\n continue\n if not re.search(\"|\".join(self.vowels), syllable):\n if len(valid_syllables) == 0:\n front += syllable\n else:\n valid_syllables = valid_syllables[:-1] + [\n valid_syllables[-1] + syllable\n ]\n else:\n if len(valid_syllables) == 0:\n valid_syllables.append(front + syllable)\n else:\n valid_syllables.append(syllable)\n\n return valid_syllables\n\n def tokenize(self, token):\n \"\"\"\n Apply the SSP to return a list of syllables.\n Note: Sentence/text has to be tokenized first.\n\n :param token: Single word or token\n :type token: str\n :return syllable_list: Single word or token broken up into syllables.\n :rtype: list(str)\n \"\"\"\n # assign values from hierarchy\n syllables_values = self.assign_values(token)\n\n # if only one vowel return word\n if sum(token.count(x) for x in self.vowels) <= 1:\n return [token]\n\n syllable_list = []\n syllable = syllables_values[0][0] # start syllable with first phoneme\n for trigram in ngrams(syllables_values, n=3):\n phonemes, values = zip(*trigram)\n # Sonority of previous, focal and following phoneme\n prev_value, focal_value, next_value = values\n # Focal phoneme.\n focal_phoneme = phonemes[1]\n\n # These cases trigger syllable break.\n if focal_value == -1: # If it's a punctuation, just break.\n syllable_list.append(syllable)\n syllable_list.append(focal_phoneme)\n syllable = \"\"\n elif prev_value >= focal_value == next_value:\n syllable += focal_phoneme\n syllable_list.append(syllable)\n syllable = \"\"\n\n elif prev_value > focal_value < next_value:\n syllable_list.append(syllable)\n syllable = \"\"\n syllable += focal_phoneme\n\n # no syllable break\n else:\n syllable += focal_phoneme\n\n syllable += syllables_values[-1][0] # append last phoneme\n syllable_list.append(syllable)\n\n return self.validate_syllables(syllable_list)\n", "path": "nltk/tokenize/sonority_sequencing.py"}], "after_files": [{"content": "# Natural Language Toolkit: Tokenizers\n#\n# Copyright (C) 2001-2022 NLTK Project\n# Author: Christopher Hench <[email protected]>\n# Alex Estes\n# URL: <https://www.nltk.org>\n# For license information, see LICENSE.TXT\n\n\"\"\"\nThe Sonority Sequencing Principle (SSP) is a language agnostic algorithm proposed\nby Otto Jesperson in 1904. The sonorous quality of a phoneme is judged by the\nopenness of the lips. Syllable breaks occur before troughs in sonority. For more\non the SSP see Selkirk (1984).\n\nThe default implementation uses the English alphabet, but the `sonority_hiearchy`\ncan be modified to IPA or any other alphabet for the use-case. The SSP is a\nuniversal syllabification algorithm, but that does not mean it performs equally\nacross languages. Bartlett et al. (2009) is a good benchmark for English accuracy\nif utilizing IPA (pg. 311).\n\nImportantly, if a custom hierarchy is supplied and vowels span across more than\none level, they should be given separately to the `vowels` class attribute.\n\nReferences:\n\n- Otto Jespersen. 1904. Lehrbuch der Phonetik.\n Leipzig, Teubner. Chapter 13, Silbe, pp. 185-203.\n- Elisabeth Selkirk. 1984. On the major class features and syllable theory.\n In Aronoff & Oehrle (eds.) Language Sound Structure: Studies in Phonology.\n Cambridge, MIT Press. pp. 107-136.\n- Susan Bartlett, et al. 2009. On the Syllabification of Phonemes.\n In HLT-NAACL. pp. 
308-316.\n\"\"\"\n\nimport re\nimport warnings\nfrom string import punctuation\n\nfrom nltk.tokenize.api import TokenizerI\nfrom nltk.util import ngrams\n\n\nclass SyllableTokenizer(TokenizerI):\n \"\"\"\n Syllabifies words based on the Sonority Sequencing Principle (SSP).\n\n >>> from nltk.tokenize import SyllableTokenizer\n >>> from nltk import word_tokenize\n >>> SSP = SyllableTokenizer()\n >>> SSP.tokenize('justification')\n ['jus', 'ti', 'fi', 'ca', 'tion']\n >>> text = \"This is a foobar-like sentence.\"\n >>> [SSP.tokenize(token) for token in word_tokenize(text)]\n [['This'], ['is'], ['a'], ['foo', 'bar', '-', 'li', 'ke'], ['sen', 'ten', 'ce'], ['.']]\n \"\"\"\n\n def __init__(self, lang=\"en\", sonority_hierarchy=False):\n \"\"\"\n :param lang: Language parameter, default is English, 'en'\n :type lang: str\n :param sonority_hierarchy: Sonority hierarchy according to the\n Sonority Sequencing Principle.\n :type sonority_hierarchy: list(str)\n \"\"\"\n # Sonority hierarchy should be provided in descending order.\n # If vowels are spread across multiple levels, they should be\n # passed assigned self.vowels var together, otherwise should be\n # placed in first index of hierarchy.\n if not sonority_hierarchy and lang == \"en\":\n sonority_hierarchy = [\n \"aeiouy\", # vowels.\n \"lmnrw\", # nasals.\n \"zvsf\", # fricatives.\n \"bcdgtkpqxhj\", # stops.\n ]\n\n self.vowels = sonority_hierarchy[0]\n self.phoneme_map = {}\n for i, level in enumerate(sonority_hierarchy):\n for c in level:\n sonority_level = len(sonority_hierarchy) - i\n self.phoneme_map[c] = sonority_level\n self.phoneme_map[c.upper()] = sonority_level\n\n def assign_values(self, token):\n \"\"\"\n Assigns each phoneme its value from the sonority hierarchy.\n Note: Sentence/text has to be tokenized first.\n\n :param token: Single word or token\n :type token: str\n :return: List of tuples, first element is character/phoneme and\n second is the soronity value.\n :rtype: list(tuple(str, int))\n \"\"\"\n syllables_values = []\n for c in token:\n try:\n syllables_values.append((c, self.phoneme_map[c]))\n except KeyError:\n if c not in \"0123456789\" and c not in punctuation:\n warnings.warn(\n \"Character not defined in sonority_hierarchy,\"\n \" assigning as vowel: '{}'\".format(c)\n )\n syllables_values.append((c, max(self.phoneme_map.values())))\n if c not in self.vowels:\n self.vowels += c\n else: # If it's a punctuation or numbers, assign -1.\n syllables_values.append((c, -1))\n return syllables_values\n\n def validate_syllables(self, syllable_list):\n \"\"\"\n Ensures each syllable has at least one vowel.\n If the following syllable doesn't have vowel, add it to the current one.\n\n :param syllable_list: Single word or token broken up into syllables.\n :type syllable_list: list(str)\n :return: Single word or token broken up into syllables\n (with added syllables if necessary)\n :rtype: list(str)\n \"\"\"\n valid_syllables = []\n front = \"\"\n vowel_pattern = re.compile(\"|\".join(self.vowels))\n for i, syllable in enumerate(syllable_list):\n if syllable in punctuation:\n valid_syllables.append(syllable)\n continue\n if not vowel_pattern.search(syllable):\n if len(valid_syllables) == 0:\n front += syllable\n else:\n valid_syllables = valid_syllables[:-1] + [\n valid_syllables[-1] + syllable\n ]\n else:\n if len(valid_syllables) == 0:\n valid_syllables.append(front + syllable)\n else:\n valid_syllables.append(syllable)\n\n return valid_syllables\n\n def tokenize(self, token):\n \"\"\"\n Apply the SSP to return a list of 
syllables.\n Note: Sentence/text has to be tokenized first.\n\n :param token: Single word or token\n :type token: str\n :return syllable_list: Single word or token broken up into syllables.\n :rtype: list(str)\n \"\"\"\n # assign values from hierarchy\n syllables_values = self.assign_values(token)\n\n # if only one vowel return word\n if sum(token.count(x) for x in self.vowels) <= 1:\n return [token]\n\n syllable_list = []\n syllable = syllables_values[0][0] # start syllable with first phoneme\n for trigram in ngrams(syllables_values, n=3):\n phonemes, values = zip(*trigram)\n # Sonority of previous, focal and following phoneme\n prev_value, focal_value, next_value = values\n # Focal phoneme.\n focal_phoneme = phonemes[1]\n\n # These cases trigger syllable break.\n if focal_value == -1: # If it's a punctuation, just break.\n syllable_list.append(syllable)\n syllable_list.append(focal_phoneme)\n syllable = \"\"\n elif prev_value >= focal_value == next_value:\n syllable += focal_phoneme\n syllable_list.append(syllable)\n syllable = \"\"\n\n elif prev_value > focal_value < next_value:\n syllable_list.append(syllable)\n syllable = \"\"\n syllable += focal_phoneme\n\n # no syllable break\n else:\n syllable += focal_phoneme\n\n syllable += syllables_values[-1][0] # append last phoneme\n syllable_list.append(syllable)\n\n return self.validate_syllables(syllable_list)\n", "path": "nltk/tokenize/sonority_sequencing.py"}]}
| 2,961 | 378 |
gh_patches_debug_6547
|
rasdani/github-patches
|
git_diff
|
lk-geimfari__mimesis-376
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
I can't compile my project with PyInstaller
I have a script with code:
```
from mimesis import Personal
person = Personal('en')
person.full_name()
```
and it works well, but after compiling this code to .exe via pyinstaller I have an error **FileNotFoundError: [Errno 2] No such file or directory: 'B:\\_MEI131682\\mimesis\\data/es\\personal.json'
[20624] Failed to execute script myproject**
So, I think the problem is in the path (`data/es\\personal`). What ways of solving this problem can you recommend?
--- END ISSUE ---
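For orientation: the mixed separators in the traceback (`...data/es\\personal.json`) point at a path assembled by string concatenation. Below is a minimal sketch of the portable alternative, reusing names that appear in `mimesis/utils.py`; treating this as sufficient for the frozen (PyInstaller) case is an assumption, not something the report confirms.
```python
# Minimal sketch: let os.path.join choose separators instead of
# concatenating DATA_DIR + '/' + locale_name by hand.
from os import path

DATA_DIR = path.abspath(path.join(path.dirname(__file__), 'data'))

def locale_file(locale_name: str, file: str) -> str:
    # locale_file is a hypothetical helper name, used only for illustration
    return path.join(DATA_DIR, locale_name, file)
```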
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mimesis/utils.py`
Content:
```
1 """This module is provide internal util functions."""
2
3 import collections
4 import functools
5 import json
6 import ssl
7 from os import path
8 from typing import Mapping, Optional, Union
9 from urllib import request
10
11 from mimesis import config
12 from mimesis.exceptions import UnsupportedLocale
13 from mimesis.typing import JSON
14
15 __all__ = ['download_image', 'locale_info',
16 'luhn_checksum', 'setup_locale', 'pull']
17
18 DATA_DIR = path.abspath(path.join(path.dirname(__file__), 'data'))
19
20
21 def locale_info(locale: str) -> str:
22 """Check information about locale.
23
24 :param locale: Locale abbreviation.
25 :return: Locale name.
26 :raises UnsupportedLocale: if locale is not supported.
27 """
28 locale = locale.lower()
29 supported = config.SUPPORTED_LOCALES
30
31 if locale not in supported:
32 raise UnsupportedLocale(locale)
33
34 return supported[locale]['name']
35
36
37 def luhn_checksum(num: str) -> str:
38 """Calculate a checksum for num using the Luhn algorithm.
39
40 :param num: The number to calculate a checksum for as a string.
41 :return: Checksum for number.
42 """
43 check = 0
44 for i, s in enumerate(reversed(num)):
45 sx = int(s)
46 sx = sx * 2 if i % 2 == 0 else sx
47 sx = sx - 9 if sx > 9 else sx
48 check += sx
49 return str(check * 9 % 10)
50
51
52 def update_dict(initial: JSON, other: Mapping) -> JSON:
53 """Recursively update a dictionary.
54
55 :param initial: Dict to update.
56 :type initial: dict or list
57 :param other: Dict to update from.
58 :type other: Mapping
59 :return: Updated dict.
60 :rtype: dict
61 """
62 for key, value in other.items():
63 if isinstance(value, collections.Mapping):
64 r = update_dict(initial.get(key, {}), value)
65 initial[key] = r
66 else:
67 initial[key] = other[key]
68 return initial
69
70
71 @functools.lru_cache(maxsize=None)
72 def pull(file: str, locale: str = 'en') -> JSON:
73 """Pull the content from the JSON and memorize one.
74
75 Opens JSON file ``file`` in the folder ``data/locale``
76 and get content from the file and memorize ones using lru_cache.
77
78 :param file: The name of file.
79 :param locale: Locale.
80 :return: The content of the file.
81 :rtype: dict
82 :raises UnsupportedLocale: if locale is not supported.
83
84 :Example:
85
86 >>> from mimesis.utils import pull
87 >>> en = pull(file='datetime.json', locale='en')
88 >>> isinstance(en, dict)
89 True
90 >>> en['day']['abbr'][0]
91 'Mon.'
92 """
93 def get_data(locale_name: str) -> JSON:
94 """Pull JSON data from file.
95
96 :param locale_name: Locale name.
97 :return: Content of JSON file as dict.
98 """
99 file_path = path.join(DATA_DIR + '/' + locale_name, file)
100 # Needs explicit encoding for Windows
101 with open(file_path, 'r', encoding='utf8') as f:
102 return json.load(f)
103
104 locale = locale.lower()
105
106 if locale not in config.SUPPORTED_LOCALES:
107 raise UnsupportedLocale(locale)
108
109 master_locale = locale.split('-')[0]
110 data = get_data(master_locale)
111
112 # Handle sub-locales
113 if '-' in locale:
114 data = update_dict(data, get_data(locale))
115
116 return data
117
118
119 def download_image(url: str = '', save_path: str = '',
120 unverified_ctx: bool = False) -> Union[None, str]:
121 """Download image and save in current directory on local machine.
122
123 :param url: URL to image.
124 :param save_path: Saving path.
125 :param unverified_ctx: Create unverified context.
126 :return: Image name.
127 :rtype: str or None
128 """
129 if unverified_ctx:
130 ssl._create_default_https_context = ssl._create_unverified_context
131
132 if url is not None:
133 image_name = url.rsplit('/')[-1]
134 request.urlretrieve(url, save_path + image_name)
135 return image_name
136 return None
137
138
139 def setup_locale(locale: Optional[str] = None) -> str:
140 """Set up locale after pre-check.
141
142 :param str locale: Locale
143 :return: Locale in lowercase.
144 :raises UnsupportedLocale: if locales is not supported.
145 """
146 if not locale:
147 return config.DEFAULT_LOCALE
148
149 locale = locale.lower()
150 if locale not in config.SUPPORTED_LOCALES:
151 raise UnsupportedLocale(locale)
152
153 return locale
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mimesis/utils.py b/mimesis/utils.py
--- a/mimesis/utils.py
+++ b/mimesis/utils.py
@@ -96,7 +96,7 @@
:param locale_name: Locale name.
:return: Content of JSON file as dict.
"""
- file_path = path.join(DATA_DIR + '/' + locale_name, file)
+ file_path = path.join(DATA_DIR, locale_name, file)
# Needs explicit encoding for Windows
with open(file_path, 'r', encoding='utf8') as f:
return json.load(f)
|
{"golden_diff": "diff --git a/mimesis/utils.py b/mimesis/utils.py\n--- a/mimesis/utils.py\n+++ b/mimesis/utils.py\n@@ -96,7 +96,7 @@\n :param locale_name: Locale name.\n :return: Content of JSON file as dict.\n \"\"\"\n- file_path = path.join(DATA_DIR + '/' + locale_name, file)\n+ file_path = path.join(DATA_DIR, locale_name, file)\n # Needs explicit encoding for Windows\n with open(file_path, 'r', encoding='utf8') as f:\n return json.load(f)\n", "issue": "I can't compile my project by pyinstaller\nI have a script with code:\r\n```\r\nfrom mimesis import Personal\r\nperson = Personal('en')\r\nperson.full_name()\r\n```\r\nand it works well, but after compiling this code to .exe via pyinstaller I have an error **FileNotFoundError: [Errno 2] No such file or directory: 'B:\\\\_MEI131682\\\\mimesis\\\\data/es\\\\personal.json'\r\n[20624] Failed to execute script myproject**\r\nSo, I think that problem in path (`data/es\\\\personal`). What ways of solving this problem can you recommend?\n", "before_files": [{"content": "\"\"\"This module is provide internal util functions.\"\"\"\n\nimport collections\nimport functools\nimport json\nimport ssl\nfrom os import path\nfrom typing import Mapping, Optional, Union\nfrom urllib import request\n\nfrom mimesis import config\nfrom mimesis.exceptions import UnsupportedLocale\nfrom mimesis.typing import JSON\n\n__all__ = ['download_image', 'locale_info',\n 'luhn_checksum', 'setup_locale', 'pull']\n\nDATA_DIR = path.abspath(path.join(path.dirname(__file__), 'data'))\n\n\ndef locale_info(locale: str) -> str:\n \"\"\"Check information about locale.\n\n :param locale: Locale abbreviation.\n :return: Locale name.\n :raises UnsupportedLocale: if locale is not supported.\n \"\"\"\n locale = locale.lower()\n supported = config.SUPPORTED_LOCALES\n\n if locale not in supported:\n raise UnsupportedLocale(locale)\n\n return supported[locale]['name']\n\n\ndef luhn_checksum(num: str) -> str:\n \"\"\"Calculate a checksum for num using the Luhn algorithm.\n\n :param num: The number to calculate a checksum for as a string.\n :return: Checksum for number.\n \"\"\"\n check = 0\n for i, s in enumerate(reversed(num)):\n sx = int(s)\n sx = sx * 2 if i % 2 == 0 else sx\n sx = sx - 9 if sx > 9 else sx\n check += sx\n return str(check * 9 % 10)\n\n\ndef update_dict(initial: JSON, other: Mapping) -> JSON:\n \"\"\"Recursively update a dictionary.\n\n :param initial: Dict to update.\n :type initial: dict or list\n :param other: Dict to update from.\n :type other: Mapping\n :return: Updated dict.\n :rtype: dict\n \"\"\"\n for key, value in other.items():\n if isinstance(value, collections.Mapping):\n r = update_dict(initial.get(key, {}), value)\n initial[key] = r\n else:\n initial[key] = other[key]\n return initial\n\n\[email protected]_cache(maxsize=None)\ndef pull(file: str, locale: str = 'en') -> JSON:\n \"\"\"Pull the content from the JSON and memorize one.\n\n Opens JSON file ``file`` in the folder ``data/locale``\n and get content from the file and memorize ones using lru_cache.\n\n :param file: The name of file.\n :param locale: Locale.\n :return: The content of the file.\n :rtype: dict\n :raises UnsupportedLocale: if locale is not supported.\n\n :Example:\n\n >>> from mimesis.utils import pull\n >>> en = pull(file='datetime.json', locale='en')\n >>> isinstance(en, dict)\n True\n >>> en['day']['abbr'][0]\n 'Mon.'\n \"\"\"\n def get_data(locale_name: str) -> JSON:\n \"\"\"Pull JSON data from file.\n\n :param locale_name: Locale name.\n :return: Content of JSON file as dict.\n 
\"\"\"\n file_path = path.join(DATA_DIR + '/' + locale_name, file)\n # Needs explicit encoding for Windows\n with open(file_path, 'r', encoding='utf8') as f:\n return json.load(f)\n\n locale = locale.lower()\n\n if locale not in config.SUPPORTED_LOCALES:\n raise UnsupportedLocale(locale)\n\n master_locale = locale.split('-')[0]\n data = get_data(master_locale)\n\n # Handle sub-locales\n if '-' in locale:\n data = update_dict(data, get_data(locale))\n\n return data\n\n\ndef download_image(url: str = '', save_path: str = '',\n unverified_ctx: bool = False) -> Union[None, str]:\n \"\"\"Download image and save in current directory on local machine.\n\n :param url: URL to image.\n :param save_path: Saving path.\n :param unverified_ctx: Create unverified context.\n :return: Image name.\n :rtype: str or None\n \"\"\"\n if unverified_ctx:\n ssl._create_default_https_context = ssl._create_unverified_context\n\n if url is not None:\n image_name = url.rsplit('/')[-1]\n request.urlretrieve(url, save_path + image_name)\n return image_name\n return None\n\n\ndef setup_locale(locale: Optional[str] = None) -> str:\n \"\"\"Set up locale after pre-check.\n\n :param str locale: Locale\n :return: Locale in lowercase.\n :raises UnsupportedLocale: if locales is not supported.\n \"\"\"\n if not locale:\n return config.DEFAULT_LOCALE\n\n locale = locale.lower()\n if locale not in config.SUPPORTED_LOCALES:\n raise UnsupportedLocale(locale)\n\n return locale\n", "path": "mimesis/utils.py"}], "after_files": [{"content": "\"\"\"This module is provide internal util functions.\"\"\"\n\nimport collections\nimport functools\nimport json\nimport ssl\nfrom os import path\nfrom typing import Mapping, Optional, Union\nfrom urllib import request\n\nfrom mimesis import config\nfrom mimesis.exceptions import UnsupportedLocale\nfrom mimesis.typing import JSON\n\n__all__ = ['download_image', 'locale_info',\n 'luhn_checksum', 'setup_locale', 'pull']\n\nDATA_DIR = path.abspath(path.join(path.dirname(__file__), 'data'))\n\n\ndef locale_info(locale: str) -> str:\n \"\"\"Check information about locale.\n\n :param locale: Locale abbreviation.\n :return: Locale name.\n :raises UnsupportedLocale: if locale is not supported.\n \"\"\"\n locale = locale.lower()\n supported = config.SUPPORTED_LOCALES\n\n if locale not in supported:\n raise UnsupportedLocale(locale)\n\n return supported[locale]['name']\n\n\ndef luhn_checksum(num: str) -> str:\n \"\"\"Calculate a checksum for num using the Luhn algorithm.\n\n :param num: The number to calculate a checksum for as a string.\n :return: Checksum for number.\n \"\"\"\n check = 0\n for i, s in enumerate(reversed(num)):\n sx = int(s)\n sx = sx * 2 if i % 2 == 0 else sx\n sx = sx - 9 if sx > 9 else sx\n check += sx\n return str(check * 9 % 10)\n\n\ndef update_dict(initial: JSON, other: Mapping) -> JSON:\n \"\"\"Recursively update a dictionary.\n\n :param initial: Dict to update.\n :type initial: dict or list\n :param other: Dict to update from.\n :type other: Mapping\n :return: Updated dict.\n :rtype: dict\n \"\"\"\n for key, value in other.items():\n if isinstance(value, collections.Mapping):\n r = update_dict(initial.get(key, {}), value)\n initial[key] = r\n else:\n initial[key] = other[key]\n return initial\n\n\[email protected]_cache(maxsize=None)\ndef pull(file: str, locale: str = 'en') -> JSON:\n \"\"\"Pull the content from the JSON and memorize one.\n\n Opens JSON file ``file`` in the folder ``data/locale``\n and get content from the file and memorize ones using lru_cache.\n\n :param file: The 
name of file.\n :param locale: Locale.\n :return: The content of the file.\n :rtype: dict\n :raises UnsupportedLocale: if locale is not supported.\n\n :Example:\n\n >>> from mimesis.utils import pull\n >>> en = pull(file='datetime.json', locale='en')\n >>> isinstance(en, dict)\n True\n >>> en['day']['abbr'][0]\n 'Mon.'\n \"\"\"\n def get_data(locale_name: str) -> JSON:\n \"\"\"Pull JSON data from file.\n\n :param locale_name: Locale name.\n :return: Content of JSON file as dict.\n \"\"\"\n file_path = path.join(DATA_DIR, locale_name, file)\n # Needs explicit encoding for Windows\n with open(file_path, 'r', encoding='utf8') as f:\n return json.load(f)\n\n locale = locale.lower()\n\n if locale not in config.SUPPORTED_LOCALES:\n raise UnsupportedLocale(locale)\n\n master_locale = locale.split('-')[0]\n data = get_data(master_locale)\n\n # Handle sub-locales\n if '-' in locale:\n data = update_dict(data, get_data(locale))\n\n return data\n\n\ndef download_image(url: str = '', save_path: str = '',\n unverified_ctx: bool = False) -> Union[None, str]:\n \"\"\"Download image and save in current directory on local machine.\n\n :param url: URL to image.\n :param save_path: Saving path.\n :param unverified_ctx: Create unverified context.\n :return: Image name.\n :rtype: str or None\n \"\"\"\n if unverified_ctx:\n ssl._create_default_https_context = ssl._create_unverified_context\n\n if url is not None:\n image_name = url.rsplit('/')[-1]\n request.urlretrieve(url, save_path + image_name)\n return image_name\n return None\n\n\ndef setup_locale(locale: Optional[str] = None) -> str:\n \"\"\"Set up locale after pre-check.\n\n :param str locale: Locale\n :return: Locale in lowercase.\n :raises UnsupportedLocale: if locales is not supported.\n \"\"\"\n if not locale:\n return config.DEFAULT_LOCALE\n\n locale = locale.lower()\n if locale not in config.SUPPORTED_LOCALES:\n raise UnsupportedLocale(locale)\n\n return locale\n", "path": "mimesis/utils.py"}]}
| 1,790 | 129 |
gh_patches_debug_40879
|
rasdani/github-patches
|
git_diff
|
aimhubio__aim-2422
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GPU utilization is not tracked if querying power usage fails
## 🐛 Bug
I am running experiments on a machine with a GPU, yet no GPU metrics are tracked. It seems like this code is responsible:
https://github.com/aimhubio/aim/blob/480e063cde063897283bcd8adb221e9baa861637/aim/ext/resource/stat.py#L152-L186
When any part of the GPU stats collection fails, we just give up entirely and store no information. In my case querying the power usage seems not supported by nvml, it raises `nvml.NVMLError_NotSupported`. Querying utilization and memory usage works just fine though and it would be nice if we could track those stats anyway.
### To reproduce
I'm not sure how to reproduce this, since it depends on a setup where `nvml` fails to determine the GPU power usage.
### Expected behavior
Aim tracks all the information that it can query without exceptions.
### Environment
- Aim Version (e.g., 3.15.1)
- Python version 3.10.9
- pip version 22.0.3
- OS (e.g., Linux) Linux
- Any other relevant information
### Additional context
--
--- END ISSUE ---
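For orientation: the change being asked for is to guard each NVML query separately, so one unsupported call (power usage here) does not discard utilization, memory, or temperature. A minimal sketch using only py3nvml calls that already appear in `stat.py`; the helper name is hypothetical.
```python
# Minimal sketch: collect whatever the device supports and skip the rest.
from py3nvml import py3nvml as nvml

def safe_gpu_stats(handle):
    stats = {}
    try:
        stats['gpu'] = nvml.nvmlDeviceGetUtilizationRates(handle).gpu
    except nvml.NVMLError_NotSupported:
        pass
    try:
        mem = nvml.nvmlDeviceGetMemoryInfo(handle)
        stats['gpu_memory_percent'] = mem.used * 100 / mem.total
    except nvml.NVMLError_NotSupported:
        pass
    try:
        stats['gpu_temp'] = nvml.nvmlDeviceGetTemperature(handle, nvml.NVML_TEMPERATURE_GPU)
    except nvml.NVMLError_NotSupported:
        pass
    try:
        stats['gpu_power_watts'] = nvml.nvmlDeviceGetPowerUsage(handle) / 1000
    except nvml.NVMLError_NotSupported:
        pass
    return stats
```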
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aim/ext/resource/stat.py`
Content:
```
1 import psutil
2 import json
3 from typing import List
4
5 from aim.ext.resource.utils import round10e5
6
7 try:
8 # Import python wrapper for the NVIDIA Management Library
9 # Initialize it or pass if NVIDIA ML is not initialized
10 from py3nvml import py3nvml as nvml
11 nvml.nvmlInit()
12 except Exception:
13 pass
14
15
16 class StatDict(object):
17 # Available aggregation functions
18 AGG_MODE_AVG = 'average'
19 AGG_MODE_MIN = 'min'
20 AGG_MODE_MAX = 'max'
21 AGG_MODE_DIFF = 'diff'
22 AGG_DEFAULT = AGG_MODE_AVG
23
24 @classmethod
25 def aggregate(cls, items: List, mode: str):
26 """
27 Aggregates array of numbers by a given 'mode'
28 """
29 if mode == cls.AGG_MODE_MAX:
30 return max(items)
31 elif mode == cls.AGG_MODE_MIN:
32 return min(items)
33 elif mode == cls.AGG_MODE_AVG:
34 return round10e5(sum(items) / len(items))
35 elif mode == cls.AGG_MODE_DIFF:
36 return round10e5(max(items) - min(items))
37 else:
38 raise ValueError('unknown aggregation mode: \'{}\''.format(mode))
39
40 @classmethod
41 def aggregate_items(cls,
42 items: 'List[StatDict]',
43 agg_mode: str = AGG_DEFAULT,
44 ):
45 """
46 Aggregates array of `StatDict` items by a given `mode`
47 """
48 aggregated_stat = cls()
49
50 # Return empty item if items array is empty
51 if not items or len(items) == 0:
52 return aggregated_stat
53
54 gpu_stats = []
55 for s in items:
56 # Collect system stats
57 for k in s.system.keys():
58 aggregated_stat.system.setdefault(k, [])
59 aggregated_stat.system[k].append(s.system[k])
60
61 # Collect GPU device stats
62 for stat_item_gpu_idx in range(len(s.gpus)):
63 stat_item_gpu_stat = s.gpus[stat_item_gpu_idx]
64 if len(gpu_stats) == stat_item_gpu_idx:
65 gpu_stats.append({})
66 for gpu_stat_key in stat_item_gpu_stat.keys():
67 gpu_stat = stat_item_gpu_stat[gpu_stat_key]
68 gpu_stats[stat_item_gpu_idx].setdefault(gpu_stat_key, [])
69 gpu_stats[stat_item_gpu_idx][gpu_stat_key].append(gpu_stat)
70
71 # Aggregate system stats
72 for k in aggregated_stat.system.keys():
73 aggregated_stat.system[k] = cls.aggregate(aggregated_stat.system[k],
74 agg_mode)
75
76 # Aggregate GPU device stats
77 for g in range(len(gpu_stats)):
78 for k in gpu_stats[g].keys():
79 gpu_stats[g][k] = cls.aggregate(gpu_stats[g][k], agg_mode)
80 aggregated_stat.gpu = gpu_stats
81
82 return aggregated_stat
83
84 def __init__(self, system: dict = None, gpus: List[dict] = None):
85 self.system = system or {}
86 self.gpus = gpus or []
87
88 def __str__(self):
89 return json.dumps(self.to_dict())
90
91 def to_dict(self):
92 """
93 Returns system and GPU device statistics
94 """
95 return {
96 'system': self.system,
97 'gpus': self.gpus,
98 }
99
100
101 class Stat(object):
102 def __init__(self, process):
103 # Set process
104 self._process = process
105
106 # Get statistics
107 system, gpus = self.get_stats()
108 self._stat = StatDict(system, gpus)
109
110 @property
111 def process(self):
112 return self._process
113
114 @property
115 def stat_item(self):
116 return self._stat
117
118 @property
119 def system(self):
120 return self._stat.system
121
122 @property
123 def gpus(self):
124 return self._stat.gpus
125
126 def get_stats(self):
127 """
128 Get system statistics and assign to `self`
129 """
130 memory_usage = psutil.virtual_memory()
131 disk_usage = psutil.disk_usage('/')
132 # net = psutil.net_io_counters()
133 system = {
134 # CPU utilization percent(can be over 100%)
135 'cpu': round10e5(self._process.cpu_percent(0.0)),
136
137 # Whole system memory usage
138 # 'memory_used': round10e5(memory_usage.used / 1024 / 1024),
139 'memory_percent': round10e5(memory_usage.used * 100 / memory_usage.total),
140
141 # Get the portion of memory occupied by a process
142 # 'p_memory_rss': round10e5(self._process.memory_info().rss
143 # / 1024 / 1024),
144 'p_memory_percent': round10e5(self._process.memory_percent()),
145
146 # Disk usage
147 # 'disk_used': round10e5(disk_usage.used / 1024 / 1024),
148 'disk_percent': round10e5(disk_usage.percent),
149 }
150
151 # Collect GPU statistics
152 gpus = []
153 try:
154 gpu_device_count = nvml.nvmlDeviceGetCount()
155 for i in range(gpu_device_count):
156 handle = nvml.nvmlDeviceGetHandleByIndex(i)
157 nvml_tmp = nvml.NVML_TEMPERATURE_GPU
158
159 # Get device memory and temperature
160 util = nvml.nvmlDeviceGetUtilizationRates(handle)
161 memory = nvml.nvmlDeviceGetMemoryInfo(handle)
162 temp = nvml.nvmlDeviceGetTemperature(handle, nvml_tmp)
163
164 # Compute power usage in watts and percent
165 power_watts = nvml.nvmlDeviceGetPowerUsage(handle) / 1000
166 power_cap = nvml.nvmlDeviceGetEnforcedPowerLimit(handle)
167 power_cap_watts = power_cap / 1000
168 power_watts / power_cap_watts * 100
169
170 gpus.append({
171 # GPU utilization percent
172 'gpu': round10e5(util.gpu),
173
174 # Device memory usage
175 # 'memory_used': round10e5(memory.used / 1024 / 1024),
176 'gpu_memory_percent': round10e5(memory.used * 100 / memory.total),
177
178 # Power usage in watts and percent
179 'gpu_power_watts': round10e5(power_watts),
180 # 'power_percent': round10e5(power_usage),
181
182 # Device temperature
183 'gpu_temp': round10e5(temp),
184 })
185 except Exception:
186 pass
187
188 return system, gpus
189
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/aim/ext/resource/stat.py b/aim/ext/resource/stat.py
--- a/aim/ext/resource/stat.py
+++ b/aim/ext/resource/stat.py
@@ -4,13 +4,7 @@
from aim.ext.resource.utils import round10e5
-try:
- # Import python wrapper for the NVIDIA Management Library
- # Initialize it or pass if NVIDIA ML is not initialized
- from py3nvml import py3nvml as nvml
- nvml.nvmlInit()
-except Exception:
- pass
+from py3nvml import py3nvml as nvml
class StatDict(object):
@@ -151,38 +145,49 @@
# Collect GPU statistics
gpus = []
try:
+ nvml.nvmlInit()
gpu_device_count = nvml.nvmlDeviceGetCount()
for i in range(gpu_device_count):
+ gpu_info = dict()
handle = nvml.nvmlDeviceGetHandleByIndex(i)
- nvml_tmp = nvml.NVML_TEMPERATURE_GPU
-
- # Get device memory and temperature
- util = nvml.nvmlDeviceGetUtilizationRates(handle)
- memory = nvml.nvmlDeviceGetMemoryInfo(handle)
- temp = nvml.nvmlDeviceGetTemperature(handle, nvml_tmp)
-
- # Compute power usage in watts and percent
- power_watts = nvml.nvmlDeviceGetPowerUsage(handle) / 1000
- power_cap = nvml.nvmlDeviceGetEnforcedPowerLimit(handle)
- power_cap_watts = power_cap / 1000
- power_watts / power_cap_watts * 100
-
- gpus.append({
+ try:
+ util = nvml.nvmlDeviceGetUtilizationRates(handle)
# GPU utilization percent
- 'gpu': round10e5(util.gpu),
-
+ gpu_info["gpu"] = round10e5(util.gpu)
+ except nvml.NVMLError_NotSupported:
+ pass
+ try:
+ # Get device memory
+ memory = nvml.nvmlDeviceGetMemoryInfo(handle)
# Device memory usage
# 'memory_used': round10e5(memory.used / 1024 / 1024),
- 'gpu_memory_percent': round10e5(memory.used * 100 / memory.total),
-
- # Power usage in watts and percent
- 'gpu_power_watts': round10e5(power_watts),
- # 'power_percent': round10e5(power_usage),
-
+ gpu_info["gpu_memory_percent"] = (
+ round10e5(memory.used * 100 / memory.total),
+ )
+ except nvml.NVMLError_NotSupported:
+ pass
+ try:
+ # Get device temperature
+ nvml_tmp = nvml.NVML_TEMPERATURE_GPU
+ temp = nvml.nvmlDeviceGetTemperature(handle, nvml_tmp)
# Device temperature
- 'gpu_temp': round10e5(temp),
- })
- except Exception:
+ gpu_info["gpu_temp"] = round10e5(temp)
+ except nvml.NVMLError_NotSupported:
+ pass
+ try:
+ # Compute power usage in watts and percent
+ power_watts = nvml.nvmlDeviceGetPowerUsage(handle) / 1000
+ power_cap = nvml.nvmlDeviceGetEnforcedPowerLimit(handle)
+ power_cap_watts = power_cap / 1000
+ power_watts / power_cap_watts * 100
+ # Power usage in watts and percent
+ gpu_info["gpu_power_watts"]: round10e5(power_watts)
+ # gpu_info["power_percent"] = round10e5(power_usage)
+ except nvml.NVMLError_NotSupported:
+ pass
+ gpus.append(gpu_info)
+ nvml.nvmlShutdown()
+ except (nvml.NVMLError_LibraryNotFound, nvml.NVMLError_NotSupported):
pass
return system, gpus
|
{"golden_diff": "diff --git a/aim/ext/resource/stat.py b/aim/ext/resource/stat.py\n--- a/aim/ext/resource/stat.py\n+++ b/aim/ext/resource/stat.py\n@@ -4,13 +4,7 @@\n \n from aim.ext.resource.utils import round10e5\n \n-try:\n- # Import python wrapper for the NVIDIA Management Library\n- # Initialize it or pass if NVIDIA ML is not initialized\n- from py3nvml import py3nvml as nvml\n- nvml.nvmlInit()\n-except Exception:\n- pass\n+from py3nvml import py3nvml as nvml\n \n \n class StatDict(object):\n@@ -151,38 +145,49 @@\n # Collect GPU statistics\n gpus = []\n try:\n+ nvml.nvmlInit()\n gpu_device_count = nvml.nvmlDeviceGetCount()\n for i in range(gpu_device_count):\n+ gpu_info = dict()\n handle = nvml.nvmlDeviceGetHandleByIndex(i)\n- nvml_tmp = nvml.NVML_TEMPERATURE_GPU\n-\n- # Get device memory and temperature\n- util = nvml.nvmlDeviceGetUtilizationRates(handle)\n- memory = nvml.nvmlDeviceGetMemoryInfo(handle)\n- temp = nvml.nvmlDeviceGetTemperature(handle, nvml_tmp)\n-\n- # Compute power usage in watts and percent\n- power_watts = nvml.nvmlDeviceGetPowerUsage(handle) / 1000\n- power_cap = nvml.nvmlDeviceGetEnforcedPowerLimit(handle)\n- power_cap_watts = power_cap / 1000\n- power_watts / power_cap_watts * 100\n-\n- gpus.append({\n+ try:\n+ util = nvml.nvmlDeviceGetUtilizationRates(handle)\n # GPU utilization percent\n- 'gpu': round10e5(util.gpu),\n-\n+ gpu_info[\"gpu\"] = round10e5(util.gpu)\n+ except nvml.NVMLError_NotSupported:\n+ pass\n+ try:\n+ # Get device memory\n+ memory = nvml.nvmlDeviceGetMemoryInfo(handle)\n # Device memory usage\n # 'memory_used': round10e5(memory.used / 1024 / 1024),\n- 'gpu_memory_percent': round10e5(memory.used * 100 / memory.total),\n-\n- # Power usage in watts and percent\n- 'gpu_power_watts': round10e5(power_watts),\n- # 'power_percent': round10e5(power_usage),\n-\n+ gpu_info[\"gpu_memory_percent\"] = (\n+ round10e5(memory.used * 100 / memory.total),\n+ )\n+ except nvml.NVMLError_NotSupported:\n+ pass\n+ try:\n+ # Get device temperature\n+ nvml_tmp = nvml.NVML_TEMPERATURE_GPU\n+ temp = nvml.nvmlDeviceGetTemperature(handle, nvml_tmp)\n # Device temperature\n- 'gpu_temp': round10e5(temp),\n- })\n- except Exception:\n+ gpu_info[\"gpu_temp\"] = round10e5(temp)\n+ except nvml.NVMLError_NotSupported:\n+ pass\n+ try:\n+ # Compute power usage in watts and percent\n+ power_watts = nvml.nvmlDeviceGetPowerUsage(handle) / 1000\n+ power_cap = nvml.nvmlDeviceGetEnforcedPowerLimit(handle)\n+ power_cap_watts = power_cap / 1000\n+ power_watts / power_cap_watts * 100\n+ # Power usage in watts and percent\n+ gpu_info[\"gpu_power_watts\"]: round10e5(power_watts)\n+ # gpu_info[\"power_percent\"] = round10e5(power_usage)\n+ except nvml.NVMLError_NotSupported:\n+ pass\n+ gpus.append(gpu_info)\n+ nvml.nvmlShutdown()\n+ except (nvml.NVMLError_LibraryNotFound, nvml.NVMLError_NotSupported):\n pass\n \n return system, gpus\n", "issue": "GPU utilization is not tracked if querying power usage fails\n## \ud83d\udc1b Bug\r\n\r\nI am running experiments on a machine with a GPU, yet no GPU metrics are tracked. It seems like this code is responsible:\r\n\r\nhttps://github.com/aimhubio/aim/blob/480e063cde063897283bcd8adb221e9baa861637/aim/ext/resource/stat.py#L152-L186\r\n\r\nWhen any part of the GPU stats collection fails, we just give up entirely and store no information. In my case querying the power usage seems not supported by nvml, it raises `nvml.NVMLError_NotSupported`. 
Querying utilization and memory usage works just fine though and it would be nice if we could track those stats anyway.\r\n\r\n### To reproduce\r\n\r\nI'm not sure how to reproduce this, since it depends on a setup where `nvml` fails to determine the GPU power usage.\r\n\r\n### Expected behavior\r\n\r\nAim tracks all the information that it can query without exceptions.\r\n\r\n### Environment\r\n\r\n- Aim Version (e.g., 3.15.1)\r\n- Python version 3.10.9\r\n- pip version 22.0.3\r\n- OS (e.g., Linux) Linux\r\n- Any other relevant information\r\n\r\n### Additional context\r\n\r\n--\r\n\n", "before_files": [{"content": "import psutil\nimport json\nfrom typing import List\n\nfrom aim.ext.resource.utils import round10e5\n\ntry:\n # Import python wrapper for the NVIDIA Management Library\n # Initialize it or pass if NVIDIA ML is not initialized\n from py3nvml import py3nvml as nvml\n nvml.nvmlInit()\nexcept Exception:\n pass\n\n\nclass StatDict(object):\n # Available aggregation functions\n AGG_MODE_AVG = 'average'\n AGG_MODE_MIN = 'min'\n AGG_MODE_MAX = 'max'\n AGG_MODE_DIFF = 'diff'\n AGG_DEFAULT = AGG_MODE_AVG\n\n @classmethod\n def aggregate(cls, items: List, mode: str):\n \"\"\"\n Aggregates array of numbers by a given 'mode'\n \"\"\"\n if mode == cls.AGG_MODE_MAX:\n return max(items)\n elif mode == cls.AGG_MODE_MIN:\n return min(items)\n elif mode == cls.AGG_MODE_AVG:\n return round10e5(sum(items) / len(items))\n elif mode == cls.AGG_MODE_DIFF:\n return round10e5(max(items) - min(items))\n else:\n raise ValueError('unknown aggregation mode: \\'{}\\''.format(mode))\n\n @classmethod\n def aggregate_items(cls,\n items: 'List[StatDict]',\n agg_mode: str = AGG_DEFAULT,\n ):\n \"\"\"\n Aggregates array of `StatDict` items by a given `mode`\n \"\"\"\n aggregated_stat = cls()\n\n # Return empty item if items array is empty\n if not items or len(items) == 0:\n return aggregated_stat\n\n gpu_stats = []\n for s in items:\n # Collect system stats\n for k in s.system.keys():\n aggregated_stat.system.setdefault(k, [])\n aggregated_stat.system[k].append(s.system[k])\n\n # Collect GPU device stats\n for stat_item_gpu_idx in range(len(s.gpus)):\n stat_item_gpu_stat = s.gpus[stat_item_gpu_idx]\n if len(gpu_stats) == stat_item_gpu_idx:\n gpu_stats.append({})\n for gpu_stat_key in stat_item_gpu_stat.keys():\n gpu_stat = stat_item_gpu_stat[gpu_stat_key]\n gpu_stats[stat_item_gpu_idx].setdefault(gpu_stat_key, [])\n gpu_stats[stat_item_gpu_idx][gpu_stat_key].append(gpu_stat)\n\n # Aggregate system stats\n for k in aggregated_stat.system.keys():\n aggregated_stat.system[k] = cls.aggregate(aggregated_stat.system[k],\n agg_mode)\n\n # Aggregate GPU device stats\n for g in range(len(gpu_stats)):\n for k in gpu_stats[g].keys():\n gpu_stats[g][k] = cls.aggregate(gpu_stats[g][k], agg_mode)\n aggregated_stat.gpu = gpu_stats\n\n return aggregated_stat\n\n def __init__(self, system: dict = None, gpus: List[dict] = None):\n self.system = system or {}\n self.gpus = gpus or []\n\n def __str__(self):\n return json.dumps(self.to_dict())\n\n def to_dict(self):\n \"\"\"\n Returns system and GPU device statistics\n \"\"\"\n return {\n 'system': self.system,\n 'gpus': self.gpus,\n }\n\n\nclass Stat(object):\n def __init__(self, process):\n # Set process\n self._process = process\n\n # Get statistics\n system, gpus = self.get_stats()\n self._stat = StatDict(system, gpus)\n\n @property\n def process(self):\n return self._process\n\n @property\n def stat_item(self):\n return self._stat\n\n @property\n def system(self):\n return 
self._stat.system\n\n @property\n def gpus(self):\n return self._stat.gpus\n\n def get_stats(self):\n \"\"\"\n Get system statistics and assign to `self`\n \"\"\"\n memory_usage = psutil.virtual_memory()\n disk_usage = psutil.disk_usage('/')\n # net = psutil.net_io_counters()\n system = {\n # CPU utilization percent(can be over 100%)\n 'cpu': round10e5(self._process.cpu_percent(0.0)),\n\n # Whole system memory usage\n # 'memory_used': round10e5(memory_usage.used / 1024 / 1024),\n 'memory_percent': round10e5(memory_usage.used * 100 / memory_usage.total),\n\n # Get the portion of memory occupied by a process\n # 'p_memory_rss': round10e5(self._process.memory_info().rss\n # / 1024 / 1024),\n 'p_memory_percent': round10e5(self._process.memory_percent()),\n\n # Disk usage\n # 'disk_used': round10e5(disk_usage.used / 1024 / 1024),\n 'disk_percent': round10e5(disk_usage.percent),\n }\n\n # Collect GPU statistics\n gpus = []\n try:\n gpu_device_count = nvml.nvmlDeviceGetCount()\n for i in range(gpu_device_count):\n handle = nvml.nvmlDeviceGetHandleByIndex(i)\n nvml_tmp = nvml.NVML_TEMPERATURE_GPU\n\n # Get device memory and temperature\n util = nvml.nvmlDeviceGetUtilizationRates(handle)\n memory = nvml.nvmlDeviceGetMemoryInfo(handle)\n temp = nvml.nvmlDeviceGetTemperature(handle, nvml_tmp)\n\n # Compute power usage in watts and percent\n power_watts = nvml.nvmlDeviceGetPowerUsage(handle) / 1000\n power_cap = nvml.nvmlDeviceGetEnforcedPowerLimit(handle)\n power_cap_watts = power_cap / 1000\n power_watts / power_cap_watts * 100\n\n gpus.append({\n # GPU utilization percent\n 'gpu': round10e5(util.gpu),\n\n # Device memory usage\n # 'memory_used': round10e5(memory.used / 1024 / 1024),\n 'gpu_memory_percent': round10e5(memory.used * 100 / memory.total),\n\n # Power usage in watts and percent\n 'gpu_power_watts': round10e5(power_watts),\n # 'power_percent': round10e5(power_usage),\n\n # Device temperature\n 'gpu_temp': round10e5(temp),\n })\n except Exception:\n pass\n\n return system, gpus\n", "path": "aim/ext/resource/stat.py"}], "after_files": [{"content": "import psutil\nimport json\nfrom typing import List\n\nfrom aim.ext.resource.utils import round10e5\n\nfrom py3nvml import py3nvml as nvml\n\n\nclass StatDict(object):\n # Available aggregation functions\n AGG_MODE_AVG = 'average'\n AGG_MODE_MIN = 'min'\n AGG_MODE_MAX = 'max'\n AGG_MODE_DIFF = 'diff'\n AGG_DEFAULT = AGG_MODE_AVG\n\n @classmethod\n def aggregate(cls, items: List, mode: str):\n \"\"\"\n Aggregates array of numbers by a given 'mode'\n \"\"\"\n if mode == cls.AGG_MODE_MAX:\n return max(items)\n elif mode == cls.AGG_MODE_MIN:\n return min(items)\n elif mode == cls.AGG_MODE_AVG:\n return round10e5(sum(items) / len(items))\n elif mode == cls.AGG_MODE_DIFF:\n return round10e5(max(items) - min(items))\n else:\n raise ValueError('unknown aggregation mode: \\'{}\\''.format(mode))\n\n @classmethod\n def aggregate_items(cls,\n items: 'List[StatDict]',\n agg_mode: str = AGG_DEFAULT,\n ):\n \"\"\"\n Aggregates array of `StatDict` items by a given `mode`\n \"\"\"\n aggregated_stat = cls()\n\n # Return empty item if items array is empty\n if not items or len(items) == 0:\n return aggregated_stat\n\n gpu_stats = []\n for s in items:\n # Collect system stats\n for k in s.system.keys():\n aggregated_stat.system.setdefault(k, [])\n aggregated_stat.system[k].append(s.system[k])\n\n # Collect GPU device stats\n for stat_item_gpu_idx in range(len(s.gpus)):\n stat_item_gpu_stat = s.gpus[stat_item_gpu_idx]\n if len(gpu_stats) == stat_item_gpu_idx:\n 
gpu_stats.append({})\n for gpu_stat_key in stat_item_gpu_stat.keys():\n gpu_stat = stat_item_gpu_stat[gpu_stat_key]\n gpu_stats[stat_item_gpu_idx].setdefault(gpu_stat_key, [])\n gpu_stats[stat_item_gpu_idx][gpu_stat_key].append(gpu_stat)\n\n # Aggregate system stats\n for k in aggregated_stat.system.keys():\n aggregated_stat.system[k] = cls.aggregate(aggregated_stat.system[k],\n agg_mode)\n\n # Aggregate GPU device stats\n for g in range(len(gpu_stats)):\n for k in gpu_stats[g].keys():\n gpu_stats[g][k] = cls.aggregate(gpu_stats[g][k], agg_mode)\n aggregated_stat.gpu = gpu_stats\n\n return aggregated_stat\n\n def __init__(self, system: dict = None, gpus: List[dict] = None):\n self.system = system or {}\n self.gpus = gpus or []\n\n def __str__(self):\n return json.dumps(self.to_dict())\n\n def to_dict(self):\n \"\"\"\n Returns system and GPU device statistics\n \"\"\"\n return {\n 'system': self.system,\n 'gpus': self.gpus,\n }\n\n\nclass Stat(object):\n def __init__(self, process):\n # Set process\n self._process = process\n\n # Get statistics\n system, gpus = self.get_stats()\n self._stat = StatDict(system, gpus)\n\n @property\n def process(self):\n return self._process\n\n @property\n def stat_item(self):\n return self._stat\n\n @property\n def system(self):\n return self._stat.system\n\n @property\n def gpus(self):\n return self._stat.gpus\n\n def get_stats(self):\n \"\"\"\n Get system statistics and assign to `self`\n \"\"\"\n memory_usage = psutil.virtual_memory()\n disk_usage = psutil.disk_usage('/')\n # net = psutil.net_io_counters()\n system = {\n # CPU utilization percent(can be over 100%)\n 'cpu': round10e5(self._process.cpu_percent(0.0)),\n\n # Whole system memory usage\n # 'memory_used': round10e5(memory_usage.used / 1024 / 1024),\n 'memory_percent': round10e5(memory_usage.used * 100 / memory_usage.total),\n\n # Get the portion of memory occupied by a process\n # 'p_memory_rss': round10e5(self._process.memory_info().rss\n # / 1024 / 1024),\n 'p_memory_percent': round10e5(self._process.memory_percent()),\n\n # Disk usage\n # 'disk_used': round10e5(disk_usage.used / 1024 / 1024),\n 'disk_percent': round10e5(disk_usage.percent),\n }\n\n # Collect GPU statistics\n gpus = []\n try:\n nvml.nvmlInit()\n gpu_device_count = nvml.nvmlDeviceGetCount()\n for i in range(gpu_device_count):\n gpu_info = dict()\n handle = nvml.nvmlDeviceGetHandleByIndex(i)\n try:\n util = nvml.nvmlDeviceGetUtilizationRates(handle)\n # GPU utilization percent\n gpu_info[\"gpu\"] = round10e5(util.gpu)\n except nvml.NVMLError_NotSupported:\n pass\n try:\n # Get device memory\n memory = nvml.nvmlDeviceGetMemoryInfo(handle)\n # Device memory usage\n # 'memory_used': round10e5(memory.used / 1024 / 1024),\n gpu_info[\"gpu_memory_percent\"] = (\n round10e5(memory.used * 100 / memory.total),\n )\n except nvml.NVMLError_NotSupported:\n pass\n try:\n # Get device temperature\n nvml_tmp = nvml.NVML_TEMPERATURE_GPU\n temp = nvml.nvmlDeviceGetTemperature(handle, nvml_tmp)\n # Device temperature\n gpu_info[\"gpu_temp\"] = round10e5(temp)\n except nvml.NVMLError_NotSupported:\n pass\n try:\n # Compute power usage in watts and percent\n power_watts = nvml.nvmlDeviceGetPowerUsage(handle) / 1000\n power_cap = nvml.nvmlDeviceGetEnforcedPowerLimit(handle)\n power_cap_watts = power_cap / 1000\n power_watts / power_cap_watts * 100\n # Power usage in watts and percent\n gpu_info[\"gpu_power_watts\"]: round10e5(power_watts)\n # gpu_info[\"power_percent\"] = round10e5(power_usage)\n except nvml.NVMLError_NotSupported:\n pass\n 
gpus.append(gpu_info)\n nvml.nvmlShutdown()\n except (nvml.NVMLError_LibraryNotFound, nvml.NVMLError_NotSupported):\n pass\n\n return system, gpus\n", "path": "aim/ext/resource/stat.py"}]}
| 2,460 | 937 |
gh_patches_debug_13779
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-4877
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.atresplayer: error: Unable to validate response text: ValidationError(NoneOrAllSchema)
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest stable release
### Description
Using the latest app image with Tvheadend with command:
pipe:///usr/local/bin/streamlink -O https://www.atresplayer.com/directos/nova best
2022-10-09 23:21:29.885 mpegts: nova HD in Streams - tuning on IPTV #1
2022-10-09 23:21:29.927 subscription: 0121: "scan" subscribing to mux "nova HD", weight: 6, adapter: "IPTV #1", network: "Streams", service: "Raw PID Subscription"
2022-10-09 23:21:29.927 spawn: Executing "/usr/local/bin/streamlink"
2022-10-09 23:21:30.352 spawn: [cli][info] Found matching plugin atresplayer for URL https://www.atresplayer.com/directos/nova/
2022-10-09 23:21:30.621 spawn: [cli][info] Available streams: 360p (worst), 480p, 720p, 1080p (best)
2022-10-09 23:21:30.621 spawn: [cli][info] Opening stream: 1080p (hls)
2022-10-09 23:21:44.927 mpegts: nova HD in Streams - scan no data, failed
2022-10-09 23:21:44.927 subscription: 0121: "scan" unsubscribing
### Debug log
```text
nico@NUC:~/streamlink$ ./streamlink -l debug https://www.atresplayer.com/directos/nova
[cli][debug] OS: Linux-5.15.0-48-generic-x86_64-with-glibc2.31
[cli][debug] Python: 3.10.7
[cli][debug] Streamlink: 5.0.1
[cli][debug] Dependencies:
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.1
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.15.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.1
[cli][debug] websocket-client: 1.4.1
[cli][debug] Arguments:
[cli][debug] url=https://www.atresplayer.com/directos/nova
[cli][debug] --loglevel=debug
[cli][info] Found matching plugin atresplayer for URL https://www.atresplayer.com/directos/nova
error: Unable to validate response text: ValidationError(NoneOrAllSchema):
ValidationError(dict):
Unable to validate value of key 'links'
Context(dict):
Key '/directos/nova' not found in <{'/directos/nova/': {'url': '/directos/nova/', 'redirec...>
nuc@NUC:~/streamlink$ ./streamlink --version-check
[cli][info] Your Streamlink version (5.0.1) is up to date!
nuc@NUC:~/streamlink$ ./streamlink --version
streamlink 5.0.1
nuc@NUC:~/streamlink$ ./streamlink --plugins
Loaded plugins: abematv, adultswim, afreeca, albavision, aloula, app17, ard_live, ard_mediathek, artetv, atpchallenger, atresplayer, bbciplayer, bfmtv, bigo, bilibili, blazetv, bloomberg, booyah, brightcove, btv, cbsnews, cdnbg, ceskatelevize, cinergroup, clubbingtv, cmmedia, cnews, crunchyroll, dailymotion, dash, delfi, deutschewelle, dlive, dogan, dogus, drdk, earthcam, egame, euronews, facebook, filmon, foxtr, funimationnow, galatasaraytv, **goltelevision**, goodgame, googledrive, gulli, hiplayer, hls, http, htv, huajiao, huya, idf1, invintus, kugou, linelive, livestream, lnk, lrt, ltv_lsm_lv, mdstrm, mediaklikk, mediavitrina, mildom, mitele, mjunoon, mrtmk, n13tv, nbcnews, nhkworld, nicolive, nimotv, nos, nownews, nrk, ntv, okru, olympicchannel, oneplusone, onetv, openrectv, orf_tvthek, pandalive, picarto, piczel, pixiv, pluto, pluzz, qq, radiko, radionet, raiplay, reuters, rtbf, rtpa, rtpplay, rtve, rtvs, ruv, sbscokr, schoolism, showroom, sportal, sportschau, ssh101, stadium, steam, streamable, streann, stv, svtplay, swisstxt, telefe, tf1, trovo, turkuvaz, tv360, tv3cat, tv4play, tv5monde, tv8, tv999, tvibo, tviplayer, tvp, tvrby, tvrplus, tvtoya, twitcasting, twitch, useetv, ustreamtv, ustvnow, vidio, vimeo, vinhlongtv, vk, vlive, vtvgo, wasd, webtv, welt, wwenetwork, youtube, yupptv, zattoo, zdf_mediathek, zeenews, zengatv, zhanqi
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/atresplayer.py`
Content:
```
1 """
2 $description Spanish live TV channels from Atresmedia Television, including Antena 3 and laSexta.
3 $url atresplayer.com
4 $type live
5 $region Spain
6 """
7
8 import logging
9 import re
10 from urllib.parse import urlparse
11
12 from streamlink.plugin import Plugin, pluginmatcher
13 from streamlink.plugin.api import validate
14 from streamlink.stream.dash import DASHStream
15 from streamlink.stream.hls import HLSStream
16 from streamlink.utils.url import update_scheme
17
18 log = logging.getLogger(__name__)
19
20
21 @pluginmatcher(re.compile(
22 r"https?://(?:www\.)?atresplayer\.com/"
23 ))
24 class AtresPlayer(Plugin):
25 def _get_streams(self):
26 self.url = update_scheme("https://", self.url)
27 path = urlparse(self.url).path
28
29 api_url = self.session.http.get(self.url, schema=validate.Schema(
30 re.compile(r"""window.__PRELOADED_STATE__\s*=\s*({.*?});""", re.DOTALL),
31 validate.none_or_all(
32 validate.get(1),
33 validate.parse_json(),
34 {"links": {path: {"href": validate.url()}}},
35 validate.get(("links", path, "href")),
36 ),
37 ))
38 if not api_url:
39 return
40 log.debug(f"API URL: {api_url}")
41
42 player_api_url = self.session.http.get(api_url, schema=validate.Schema(
43 validate.parse_json(),
44 {"urlVideo": validate.url()},
45 validate.get("urlVideo"),
46 ))
47
48 log.debug(f"Player API URL: {player_api_url}")
49 sources = self.session.http.get(player_api_url, acceptable_status=(200, 403), schema=validate.Schema(
50 validate.parse_json(),
51 validate.any(
52 {
53 "error": str,
54 "error_description": str,
55 },
56 {
57 "sources": [
58 validate.all(
59 {
60 "src": validate.url(),
61 validate.optional("type"): str,
62 },
63 validate.union_get("type", "src"),
64 ),
65 ],
66 },
67 ),
68 ))
69 if "error" in sources:
70 log.error(f"Player API error: {sources['error']} - {sources['error_description']}")
71 return
72
73 for streamtype, streamsrc in sources.get("sources"):
74 log.debug(f"Stream source: {streamsrc} ({streamtype or 'n/a'})")
75
76 if streamtype == "application/vnd.apple.mpegurl":
77 streams = HLSStream.parse_variant_playlist(self.session, streamsrc)
78 if not streams:
79 yield "live", HLSStream(self.session, streamsrc)
80 else:
81 yield from streams.items()
82 elif streamtype == "application/dash+xml":
83 yield from DASHStream.parse_manifest(self.session, streamsrc).items()
84
85
86 __plugin__ = AtresPlayer
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/atresplayer.py b/src/streamlink/plugins/atresplayer.py
--- a/src/streamlink/plugins/atresplayer.py
+++ b/src/streamlink/plugins/atresplayer.py
@@ -22,10 +22,12 @@
r"https?://(?:www\.)?atresplayer\.com/"
))
class AtresPlayer(Plugin):
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.url = update_scheme("https://", f"{self.url.rstrip('/')}/")
+
def _get_streams(self):
- self.url = update_scheme("https://", self.url)
path = urlparse(self.url).path
-
api_url = self.session.http.get(self.url, schema=validate.Schema(
re.compile(r"""window.__PRELOADED_STATE__\s*=\s*({.*?});""", re.DOTALL),
validate.none_or_all(
|
{"golden_diff": "diff --git a/src/streamlink/plugins/atresplayer.py b/src/streamlink/plugins/atresplayer.py\n--- a/src/streamlink/plugins/atresplayer.py\n+++ b/src/streamlink/plugins/atresplayer.py\n@@ -22,10 +22,12 @@\n r\"https?://(?:www\\.)?atresplayer\\.com/\"\n ))\n class AtresPlayer(Plugin):\n+ def __init__(self, *args, **kwargs):\n+ super().__init__(*args, **kwargs)\n+ self.url = update_scheme(\"https://\", f\"{self.url.rstrip('/')}/\")\n+\n def _get_streams(self):\n- self.url = update_scheme(\"https://\", self.url)\n path = urlparse(self.url).path\n-\n api_url = self.session.http.get(self.url, schema=validate.Schema(\n re.compile(r\"\"\"window.__PRELOADED_STATE__\\s*=\\s*({.*?});\"\"\", re.DOTALL),\n validate.none_or_all(\n", "issue": "plugins.atresplayer: error: Unable to validate response text: ValidationError(NoneOrAllSchema)\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest stable release\n\n### Description\n\nUsing the latest app image with Tvheadend with command:\r\n\r\npipe:///usr/local/bin/streamlink -O https://www.atresplayer.com/directos/nova best\r\n\r\n2022-10-09 23:21:29.885 mpegts: nova HD in Streams - tuning on IPTV #1\r\n2022-10-09 23:21:29.927 subscription: 0121: \"scan\" subscribing to mux \"nova HD\", weight: 6, adapter: \"IPTV #1\", network: \"Streams\", service: \"Raw PID Subscription\"\r\n2022-10-09 23:21:29.927 spawn: Executing \"/usr/local/bin/streamlink\"\r\n2022-10-09 23:21:30.352 spawn: [cli][info] Found matching plugin atresplayer for URL https://www.atresplayer.com/directos/nova/\r\n2022-10-09 23:21:30.621 spawn: [cli][info] Available streams: 360p (worst), 480p, 720p, 1080p (best)\r\n2022-10-09 23:21:30.621 spawn: [cli][info] Opening stream: 1080p (hls)\r\n2022-10-09 23:21:44.927 mpegts: nova HD in Streams - scan no data, failed\r\n2022-10-09 23:21:44.927 subscription: 0121: \"scan\" unsubscribing\n\n### Debug log\n\n```text\nnico@NUC:~/streamlink$ ./streamlink -l debug https://www.atresplayer.com/directos/nova\r\n[cli][debug] OS: Linux-5.15.0-48-generic-x86_64-with-glibc2.31\r\n[cli][debug] Python: 3.10.7\r\n[cli][debug] Streamlink: 5.0.1\r\n[cli][debug] Dependencies:\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.1\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.15.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.1\r\n[cli][debug] websocket-client: 1.4.1\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.atresplayer.com/directos/nova\r\n[cli][debug] --loglevel=debug\r\n[cli][info] Found matching plugin atresplayer for URL https://www.atresplayer.com/directos/nova\r\nerror: Unable to validate response text: ValidationError(NoneOrAllSchema):\r\n ValidationError(dict):\r\n Unable to validate value of key 'links'\r\n Context(dict):\r\n Key '/directos/nova' not found in <{'/directos/nova/': {'url': '/directos/nova/', 'redirec...>\r\n\r\nnuc@NUC:~/streamlink$ ./streamlink --version-check\r\n[cli][info] Your Streamlink version (5.0.1) is up to date!\r\n\r\nnuc@NUC:~/streamlink$ ./streamlink --version\r\nstreamlink 
5.0.1\r\n\r\nnuc@NUC:~/streamlink$ ./streamlink --plugins\r\nLoaded plugins: abematv, adultswim, afreeca, albavision, aloula, app17, ard_live, ard_mediathek, artetv, atpchallenger, atresplayer, bbciplayer, bfmtv, bigo, bilibili, blazetv, bloomberg, booyah, brightcove, btv, cbsnews, cdnbg, ceskatelevize, cinergroup, clubbingtv, cmmedia, cnews, crunchyroll, dailymotion, dash, delfi, deutschewelle, dlive, dogan, dogus, drdk, earthcam, egame, euronews, facebook, filmon, foxtr, funimationnow, galatasaraytv, **goltelevision**, goodgame, googledrive, gulli, hiplayer, hls, http, htv, huajiao, huya, idf1, invintus, kugou, linelive, livestream, lnk, lrt, ltv_lsm_lv, mdstrm, mediaklikk, mediavitrina, mildom, mitele, mjunoon, mrtmk, n13tv, nbcnews, nhkworld, nicolive, nimotv, nos, nownews, nrk, ntv, okru, olympicchannel, oneplusone, onetv, openrectv, orf_tvthek, pandalive, picarto, piczel, pixiv, pluto, pluzz, qq, radiko, radionet, raiplay, reuters, rtbf, rtpa, rtpplay, rtve, rtvs, ruv, sbscokr, schoolism, showroom, sportal, sportschau, ssh101, stadium, steam, streamable, streann, stv, svtplay, swisstxt, telefe, tf1, trovo, turkuvaz, tv360, tv3cat, tv4play, tv5monde, tv8, tv999, tvibo, tviplayer, tvp, tvrby, tvrplus, tvtoya, twitcasting, twitch, useetv, ustreamtv, ustvnow, vidio, vimeo, vinhlongtv, vk, vlive, vtvgo, wasd, webtv, welt, wwenetwork, youtube, yupptv, zattoo, zdf_mediathek, zeenews, zengatv, zhanqi\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Spanish live TV channels from Atresmedia Television, including Antena 3 and laSexta.\n$url atresplayer.com\n$type live\n$region Spain\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import urlparse\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.dash import DASHStream\nfrom streamlink.stream.hls import HLSStream\nfrom streamlink.utils.url import update_scheme\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?atresplayer\\.com/\"\n))\nclass AtresPlayer(Plugin):\n def _get_streams(self):\n self.url = update_scheme(\"https://\", self.url)\n path = urlparse(self.url).path\n\n api_url = self.session.http.get(self.url, schema=validate.Schema(\n re.compile(r\"\"\"window.__PRELOADED_STATE__\\s*=\\s*({.*?});\"\"\", re.DOTALL),\n validate.none_or_all(\n validate.get(1),\n validate.parse_json(),\n {\"links\": {path: {\"href\": validate.url()}}},\n validate.get((\"links\", path, \"href\")),\n ),\n ))\n if not api_url:\n return\n log.debug(f\"API URL: {api_url}\")\n\n player_api_url = self.session.http.get(api_url, schema=validate.Schema(\n validate.parse_json(),\n {\"urlVideo\": validate.url()},\n validate.get(\"urlVideo\"),\n ))\n\n log.debug(f\"Player API URL: {player_api_url}\")\n sources = self.session.http.get(player_api_url, acceptable_status=(200, 403), schema=validate.Schema(\n validate.parse_json(),\n validate.any(\n {\n \"error\": str,\n \"error_description\": str,\n },\n {\n \"sources\": [\n validate.all(\n {\n \"src\": validate.url(),\n validate.optional(\"type\"): str,\n },\n validate.union_get(\"type\", \"src\"),\n ),\n ],\n },\n ),\n ))\n if \"error\" in sources:\n log.error(f\"Player API error: {sources['error']} - {sources['error_description']}\")\n return\n\n for streamtype, streamsrc in sources.get(\"sources\"):\n log.debug(f\"Stream source: {streamsrc} ({streamtype or 'n/a'})\")\n\n if streamtype == \"application/vnd.apple.mpegurl\":\n streams = HLSStream.parse_variant_playlist(self.session, 
streamsrc)\n if not streams:\n yield \"live\", HLSStream(self.session, streamsrc)\n else:\n yield from streams.items()\n elif streamtype == \"application/dash+xml\":\n yield from DASHStream.parse_manifest(self.session, streamsrc).items()\n\n\n__plugin__ = AtresPlayer\n", "path": "src/streamlink/plugins/atresplayer.py"}], "after_files": [{"content": "\"\"\"\n$description Spanish live TV channels from Atresmedia Television, including Antena 3 and laSexta.\n$url atresplayer.com\n$type live\n$region Spain\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import urlparse\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.dash import DASHStream\nfrom streamlink.stream.hls import HLSStream\nfrom streamlink.utils.url import update_scheme\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?atresplayer\\.com/\"\n))\nclass AtresPlayer(Plugin):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.url = update_scheme(\"https://\", f\"{self.url.rstrip('/')}/\")\n\n def _get_streams(self):\n path = urlparse(self.url).path\n api_url = self.session.http.get(self.url, schema=validate.Schema(\n re.compile(r\"\"\"window.__PRELOADED_STATE__\\s*=\\s*({.*?});\"\"\", re.DOTALL),\n validate.none_or_all(\n validate.get(1),\n validate.parse_json(),\n {\"links\": {path: {\"href\": validate.url()}}},\n validate.get((\"links\", path, \"href\")),\n ),\n ))\n if not api_url:\n return\n log.debug(f\"API URL: {api_url}\")\n\n player_api_url = self.session.http.get(api_url, schema=validate.Schema(\n validate.parse_json(),\n {\"urlVideo\": validate.url()},\n validate.get(\"urlVideo\"),\n ))\n\n log.debug(f\"Player API URL: {player_api_url}\")\n sources = self.session.http.get(player_api_url, acceptable_status=(200, 403), schema=validate.Schema(\n validate.parse_json(),\n validate.any(\n {\n \"error\": str,\n \"error_description\": str,\n },\n {\n \"sources\": [\n validate.all(\n {\n \"src\": validate.url(),\n validate.optional(\"type\"): str,\n },\n validate.union_get(\"type\", \"src\"),\n ),\n ],\n },\n ),\n ))\n if \"error\" in sources:\n log.error(f\"Player API error: {sources['error']} - {sources['error_description']}\")\n return\n\n for streamtype, streamsrc in sources.get(\"sources\"):\n log.debug(f\"Stream source: {streamsrc} ({streamtype or 'n/a'})\")\n\n if streamtype == \"application/vnd.apple.mpegurl\":\n streams = HLSStream.parse_variant_playlist(self.session, streamsrc)\n if not streams:\n yield \"live\", HLSStream(self.session, streamsrc)\n else:\n yield from streams.items()\n elif streamtype == \"application/dash+xml\":\n yield from DASHStream.parse_manifest(self.session, streamsrc).items()\n\n\n__plugin__ = AtresPlayer\n", "path": "src/streamlink/plugins/atresplayer.py"}]}
| 2,602 | 209 |
gh_patches_debug_28856
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__torchmetrics-1918
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`ClasswiseWrapper` double prefixes with `self.prefix` when enclosed in a `MetricCollection`
## 🐛 Bug
#1866 double prefixes with `self.prefix` when enclosed in a `MetricCollection`. This is because `MetricCollection` already handles prefixing, here:
https://github.com/Lightning-AI/torchmetrics/blob/a448ad3ff4329682a83fe1036ef21f35a2a8418a/src/torchmetrics/collections.py#L335-L339
but #1866 doesn't account for it.
### To Reproduce
Enclose a `ClasswiseWrapper` with a `prefix` within a `MetricCollection`.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
<details>
<summary>Finds a multiclass accuracy and a classwise f1 score.</summary>
```py
from torchmetrics import *
import torch
category_names = ['Tree', 'Bush']
num_classes = len(category_names)
input_ = torch.rand((5, num_classes, 3, 3))
target = torch.ones((5, num_classes, 3, 3)).long()
val_metrics = MetricCollection(
{
"accuracy": Accuracy(task="multiclass", num_classes=num_classes),
"f1": ClasswiseWrapper(
F1Score(
task="multiclass",
num_classes=num_classes,
average="none",
dist_sync_on_step=True,
),
category_names,
prefix="f_score_",
),
},
prefix="val/",
)
val_metrics["precision"](input_, target)
val_metrics(input_, target)
```
</details>
### Expected behavior
I should get `{'val/acc': tensor(0.), 'val/f1_Tree': tensor(0.), 'val/f1_Bush': tensor(0.)}`. I instead get `{'val/acc': tensor(0.), 'val/f1_f1_Tree': tensor(0.), 'val/f1_f1_Bush': tensor(0.)}`.
### Environment
- `torchmetrics` 1.0.0 via `pip`
- Python `3.10.6` & PyTorch `1.12`:
- OS: Linux
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/torchmetrics/wrappers/classwise.py`
Content:
```
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Any, Callable, Dict, List, Optional, Sequence, Union
15
16 from torch import Tensor
17
18 from torchmetrics.metric import Metric
19 from torchmetrics.utilities.imports import _MATPLOTLIB_AVAILABLE
20 from torchmetrics.utilities.plot import _AX_TYPE, _PLOT_OUT_TYPE
21 from torchmetrics.wrappers.abstract import WrapperMetric
22
23 if not _MATPLOTLIB_AVAILABLE:
24 __doctest_skip__ = ["ClasswiseWrapper.plot"]
25
26
27 class ClasswiseWrapper(WrapperMetric):
28 """Wrapper metric for altering the output of classification metrics.
29
30 This metric works together with classification metrics that returns multiple values (one value per class) such that
31 label information can be automatically included in the output.
32
33 Args:
34 metric: base metric that should be wrapped. It is assumed that the metric outputs a single
35 tensor that is split along the first dimension.
36 labels: list of strings indicating the different classes.
37 prefix: string that is prepended to the metric names.
38 postfix: string that is appended to the metric names.
39
40 Example::
41 Basic example where the ouput of a metric is unwrapped into a dictionary with the class index as keys:
42
43 >>> import torch
44 >>> _ = torch.manual_seed(42)
45 >>> from torchmetrics.wrappers import ClasswiseWrapper
46 >>> from torchmetrics.classification import MulticlassAccuracy
47 >>> metric = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None))
48 >>> preds = torch.randn(10, 3).softmax(dim=-1)
49 >>> target = torch.randint(3, (10,))
50 >>> metric(preds, target) # doctest: +NORMALIZE_WHITESPACE
51 {'multiclassaccuracy_0': tensor(0.5000),
52 'multiclassaccuracy_1': tensor(0.7500),
53 'multiclassaccuracy_2': tensor(0.)}
54
55 Example::
56 Using custom name via prefix and postfix:
57
58 >>> import torch
59 >>> _ = torch.manual_seed(42)
60 >>> from torchmetrics.wrappers import ClasswiseWrapper
61 >>> from torchmetrics.classification import MulticlassAccuracy
62 >>> metric_pre = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None), prefix="acc-")
63 >>> metric_post = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None), postfix="-acc")
64 >>> preds = torch.randn(10, 3).softmax(dim=-1)
65 >>> target = torch.randint(3, (10,))
66 >>> metric_pre(preds, target) # doctest: +NORMALIZE_WHITESPACE
67 {'acc-0': tensor(0.5000),
68 'acc-1': tensor(0.7500),
69 'acc-2': tensor(0.)}
70 >>> metric_post(preds, target) # doctest: +NORMALIZE_WHITESPACE
71 {'0-acc': tensor(0.5000),
72 '1-acc': tensor(0.7500),
73 '2-acc': tensor(0.)}
74
75 Example::
76 Providing labels as a list of strings:
77
78 >>> from torchmetrics.wrappers import ClasswiseWrapper
79 >>> from torchmetrics.classification import MulticlassAccuracy
80 >>> metric = ClasswiseWrapper(
81 ... MulticlassAccuracy(num_classes=3, average=None),
82 ... labels=["horse", "fish", "dog"]
83 ... )
84 >>> preds = torch.randn(10, 3).softmax(dim=-1)
85 >>> target = torch.randint(3, (10,))
86 >>> metric(preds, target) # doctest: +NORMALIZE_WHITESPACE
87 {'multiclassaccuracy_horse': tensor(0.3333),
88 'multiclassaccuracy_fish': tensor(0.6667),
89 'multiclassaccuracy_dog': tensor(0.)}
90
91 Example::
92 Classwise can also be used in combination with :class:`~torchmetrics.MetricCollection`. In this case, everything
93 will be flattened into a single dictionary:
94
95 >>> from torchmetrics import MetricCollection
96 >>> from torchmetrics.wrappers import ClasswiseWrapper
97 >>> from torchmetrics.classification import MulticlassAccuracy, MulticlassRecall
98 >>> labels = ["horse", "fish", "dog"]
99 >>> metric = MetricCollection(
100 ... {'multiclassaccuracy': ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None), labels),
101 ... 'multiclassrecall': ClasswiseWrapper(MulticlassRecall(num_classes=3, average=None), labels)}
102 ... )
103 >>> preds = torch.randn(10, 3).softmax(dim=-1)
104 >>> target = torch.randint(3, (10,))
105 >>> metric(preds, target) # doctest: +NORMALIZE_WHITESPACE
106 {'multiclassaccuracy_horse': tensor(0.),
107 'multiclassaccuracy_fish': tensor(0.3333),
108 'multiclassaccuracy_dog': tensor(0.4000),
109 'multiclassrecall_horse': tensor(0.),
110 'multiclassrecall_fish': tensor(0.3333),
111 'multiclassrecall_dog': tensor(0.4000)}
112
113 """
114
115 def __init__(
116 self,
117 metric: Metric,
118 labels: Optional[List[str]] = None,
119 prefix: Optional[str] = None,
120 postfix: Optional[str] = None,
121 ) -> None:
122 super().__init__()
123 if not isinstance(metric, Metric):
124 raise ValueError(f"Expected argument `metric` to be an instance of `torchmetrics.Metric` but got {metric}")
125 self.metric = metric
126
127 if labels is not None and not (isinstance(labels, list) and all(isinstance(lab, str) for lab in labels)):
128 raise ValueError(f"Expected argument `labels` to either be `None` or a list of strings but got {labels}")
129 self.labels = labels
130
131 if prefix is not None and not isinstance(prefix, str):
132 raise ValueError(f"Expected argument `prefix` to either be `None` or a string but got {prefix}")
133 self.prefix = prefix
134
135 if postfix is not None and not isinstance(postfix, str):
136 raise ValueError(f"Expected argument `postfix` to either be `None` or a string but got {postfix}")
137 self.postfix = postfix
138
139 self._update_count = 1
140
141 def _convert(self, x: Tensor) -> Dict[str, Any]:
142 # Will set the class name as prefix if neither prefix nor postfix is given
143 if not self.prefix and not self.postfix:
144 prefix = f"{self.metric.__class__.__name__.lower()}_"
145 postfix = ""
146 else:
147 prefix = self.prefix or ""
148 postfix = self.postfix or ""
149 if self.labels is None:
150 return {f"{prefix}{i}{postfix}": val for i, val in enumerate(x)}
151 return {f"{prefix}{lab}{postfix}": val for lab, val in zip(self.labels, x)}
152
153 def forward(self, *args: Any, **kwargs: Any) -> Any:
154 """Calculate on batch and accumulate to global state."""
155 return self._convert(self.metric(*args, **kwargs))
156
157 def update(self, *args: Any, **kwargs: Any) -> None:
158 """Update state."""
159 self.metric.update(*args, **kwargs)
160
161 def compute(self) -> Dict[str, Tensor]:
162 """Compute metric."""
163 return self._convert(self.metric.compute())
164
165 def reset(self) -> None:
166 """Reset metric."""
167 self.metric.reset()
168
169 def plot(
170 self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: Optional[_AX_TYPE] = None
171 ) -> _PLOT_OUT_TYPE:
172 """Plot a single or multiple values from the metric.
173
174 Args:
175 val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.
176 If no value is provided, will automatically call `metric.compute` and plot that result.
177 ax: An matplotlib axis object. If provided will add plot to that axis
178
179 Returns:
180 Figure and Axes object
181
182 Raises:
183 ModuleNotFoundError:
184 If `matplotlib` is not installed
185
186 .. plot::
187 :scale: 75
188
189 >>> # Example plotting a single value
190 >>> import torch
191 >>> from torchmetrics.wrappers import ClasswiseWrapper
192 >>> from torchmetrics.classification import MulticlassAccuracy
193 >>> metric = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None))
194 >>> metric.update(torch.randint(3, (20,)), torch.randint(3, (20,)))
195 >>> fig_, ax_ = metric.plot()
196
197 .. plot::
198 :scale: 75
199
200 >>> # Example plotting multiple values
201 >>> import torch
202 >>> from torchmetrics.wrappers import ClasswiseWrapper
203 >>> from torchmetrics.classification import MulticlassAccuracy
204 >>> metric = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None))
205 >>> values = [ ]
206 >>> for _ in range(3):
207 ... values.append(metric(torch.randint(3, (20,)), torch.randint(3, (20,))))
208 >>> fig_, ax_ = metric.plot(values)
209
210 """
211 return self._plot(val, ax)
212
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/torchmetrics/wrappers/classwise.py b/src/torchmetrics/wrappers/classwise.py
--- a/src/torchmetrics/wrappers/classwise.py
+++ b/src/torchmetrics/wrappers/classwise.py
@@ -130,22 +130,22 @@
if prefix is not None and not isinstance(prefix, str):
raise ValueError(f"Expected argument `prefix` to either be `None` or a string but got {prefix}")
- self.prefix = prefix
+ self._prefix = prefix
if postfix is not None and not isinstance(postfix, str):
raise ValueError(f"Expected argument `postfix` to either be `None` or a string but got {postfix}")
- self.postfix = postfix
+ self._postfix = postfix
self._update_count = 1
def _convert(self, x: Tensor) -> Dict[str, Any]:
# Will set the class name as prefix if neither prefix nor postfix is given
- if not self.prefix and not self.postfix:
+ if not self._prefix and not self._postfix:
prefix = f"{self.metric.__class__.__name__.lower()}_"
postfix = ""
else:
- prefix = self.prefix or ""
- postfix = self.postfix or ""
+ prefix = self._prefix or ""
+ postfix = self._postfix or ""
if self.labels is None:
return {f"{prefix}{i}{postfix}": val for i, val in enumerate(x)}
return {f"{prefix}{lab}{postfix}": val for lab, val in zip(self.labels, x)}
|
{"golden_diff": "diff --git a/src/torchmetrics/wrappers/classwise.py b/src/torchmetrics/wrappers/classwise.py\n--- a/src/torchmetrics/wrappers/classwise.py\n+++ b/src/torchmetrics/wrappers/classwise.py\n@@ -130,22 +130,22 @@\n \n if prefix is not None and not isinstance(prefix, str):\n raise ValueError(f\"Expected argument `prefix` to either be `None` or a string but got {prefix}\")\n- self.prefix = prefix\n+ self._prefix = prefix\n \n if postfix is not None and not isinstance(postfix, str):\n raise ValueError(f\"Expected argument `postfix` to either be `None` or a string but got {postfix}\")\n- self.postfix = postfix\n+ self._postfix = postfix\n \n self._update_count = 1\n \n def _convert(self, x: Tensor) -> Dict[str, Any]:\n # Will set the class name as prefix if neither prefix nor postfix is given\n- if not self.prefix and not self.postfix:\n+ if not self._prefix and not self._postfix:\n prefix = f\"{self.metric.__class__.__name__.lower()}_\"\n postfix = \"\"\n else:\n- prefix = self.prefix or \"\"\n- postfix = self.postfix or \"\"\n+ prefix = self._prefix or \"\"\n+ postfix = self._postfix or \"\"\n if self.labels is None:\n return {f\"{prefix}{i}{postfix}\": val for i, val in enumerate(x)}\n return {f\"{prefix}{lab}{postfix}\": val for lab, val in zip(self.labels, x)}\n", "issue": "`ClasswiseWrapper` double prefixes with `self.prefix` when enclosed in a `MetricCollection`\n## \ud83d\udc1b Bug\r\n\r\n#1866 double prefixes with `self.prefix` when enclosed in a `MetricCollection`. This is because `MetricCollection` already handles prefixing, here:\r\nhttps://github.com/Lightning-AI/torchmetrics/blob/a448ad3ff4329682a83fe1036ef21f35a2a8418a/src/torchmetrics/collections.py#L335-L339\r\nbut #1866 doesn't account for it.\r\n\r\n### To Reproduce\r\n\r\nEnclose a `ClasswiseWrapper` with a `prefix` within a `MetricCollection`.\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n<details>\r\n <summary>Finds a multiclass accuracy and a classwise f1 score.</summary>\r\n\r\n\r\n```py\r\nfrom torchmetrics import *\r\nimport torch\r\n\r\ncategory_names = ['Tree', 'Bush']\r\nnum_classes = len(category_names)\r\n\r\ninput_ = torch.rand((5, num_classes, 3, 3))\r\ntarget = torch.ones((5, num_classes, 3, 3)).long()\r\n\r\n\r\nval_metrics = MetricCollection(\r\n {\r\n \"accuracy\": Accuracy(task=\"multiclass\", num_classes=num_classes),\r\n \"f1\": ClasswiseWrapper(\r\n F1Score(\r\n task=\"multiclass\",\r\n num_classes=num_classes,\r\n average=\"none\",\r\n dist_sync_on_step=True,\r\n ),\r\n category_names,\r\n prefix=\"f_score_\",\r\n ),\r\n },\r\n prefix=\"val/\",\r\n)\r\n\r\nval_metrics[\"precision\"](input_, target)\r\nval_metrics(input_, target)\r\n```\r\n\r\n</details>\r\n\r\n### Expected behavior\r\n\r\nI should get `{'val/acc': tensor(0.), 'val/f1_Tree': tensor(0.), 'val/f1_Bush': tensor(0.)}`. 
I instead get `{'val/acc': tensor(0.), 'val/f1_f1_Tree': tensor(0.), 'val/f1_f1_Bush': tensor(0.)}`.\r\n\r\n### Environment\r\n\r\n- `torchmetrics` 1.0.0 via `pip`\r\n- Python `3.10.6` & PyTorch `1.12`:\r\n- OS: Linux\r\n\r\n\n", "before_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any, Callable, Dict, List, Optional, Sequence, Union\n\nfrom torch import Tensor\n\nfrom torchmetrics.metric import Metric\nfrom torchmetrics.utilities.imports import _MATPLOTLIB_AVAILABLE\nfrom torchmetrics.utilities.plot import _AX_TYPE, _PLOT_OUT_TYPE\nfrom torchmetrics.wrappers.abstract import WrapperMetric\n\nif not _MATPLOTLIB_AVAILABLE:\n __doctest_skip__ = [\"ClasswiseWrapper.plot\"]\n\n\nclass ClasswiseWrapper(WrapperMetric):\n \"\"\"Wrapper metric for altering the output of classification metrics.\n\n This metric works together with classification metrics that returns multiple values (one value per class) such that\n label information can be automatically included in the output.\n\n Args:\n metric: base metric that should be wrapped. It is assumed that the metric outputs a single\n tensor that is split along the first dimension.\n labels: list of strings indicating the different classes.\n prefix: string that is prepended to the metric names.\n postfix: string that is appended to the metric names.\n\n Example::\n Basic example where the ouput of a metric is unwrapped into a dictionary with the class index as keys:\n\n >>> import torch\n >>> _ = torch.manual_seed(42)\n >>> from torchmetrics.wrappers import ClasswiseWrapper\n >>> from torchmetrics.classification import MulticlassAccuracy\n >>> metric = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None))\n >>> preds = torch.randn(10, 3).softmax(dim=-1)\n >>> target = torch.randint(3, (10,))\n >>> metric(preds, target) # doctest: +NORMALIZE_WHITESPACE\n {'multiclassaccuracy_0': tensor(0.5000),\n 'multiclassaccuracy_1': tensor(0.7500),\n 'multiclassaccuracy_2': tensor(0.)}\n\n Example::\n Using custom name via prefix and postfix:\n\n >>> import torch\n >>> _ = torch.manual_seed(42)\n >>> from torchmetrics.wrappers import ClasswiseWrapper\n >>> from torchmetrics.classification import MulticlassAccuracy\n >>> metric_pre = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None), prefix=\"acc-\")\n >>> metric_post = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None), postfix=\"-acc\")\n >>> preds = torch.randn(10, 3).softmax(dim=-1)\n >>> target = torch.randint(3, (10,))\n >>> metric_pre(preds, target) # doctest: +NORMALIZE_WHITESPACE\n {'acc-0': tensor(0.5000),\n 'acc-1': tensor(0.7500),\n 'acc-2': tensor(0.)}\n >>> metric_post(preds, target) # doctest: +NORMALIZE_WHITESPACE\n {'0-acc': tensor(0.5000),\n '1-acc': tensor(0.7500),\n '2-acc': tensor(0.)}\n\n Example::\n Providing labels as a list of strings:\n\n >>> from torchmetrics.wrappers import ClasswiseWrapper\n >>> from torchmetrics.classification import MulticlassAccuracy\n >>> metric = 
ClasswiseWrapper(\n ... MulticlassAccuracy(num_classes=3, average=None),\n ... labels=[\"horse\", \"fish\", \"dog\"]\n ... )\n >>> preds = torch.randn(10, 3).softmax(dim=-1)\n >>> target = torch.randint(3, (10,))\n >>> metric(preds, target) # doctest: +NORMALIZE_WHITESPACE\n {'multiclassaccuracy_horse': tensor(0.3333),\n 'multiclassaccuracy_fish': tensor(0.6667),\n 'multiclassaccuracy_dog': tensor(0.)}\n\n Example::\n Classwise can also be used in combination with :class:`~torchmetrics.MetricCollection`. In this case, everything\n will be flattened into a single dictionary:\n\n >>> from torchmetrics import MetricCollection\n >>> from torchmetrics.wrappers import ClasswiseWrapper\n >>> from torchmetrics.classification import MulticlassAccuracy, MulticlassRecall\n >>> labels = [\"horse\", \"fish\", \"dog\"]\n >>> metric = MetricCollection(\n ... {'multiclassaccuracy': ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None), labels),\n ... 'multiclassrecall': ClasswiseWrapper(MulticlassRecall(num_classes=3, average=None), labels)}\n ... )\n >>> preds = torch.randn(10, 3).softmax(dim=-1)\n >>> target = torch.randint(3, (10,))\n >>> metric(preds, target) # doctest: +NORMALIZE_WHITESPACE\n {'multiclassaccuracy_horse': tensor(0.),\n 'multiclassaccuracy_fish': tensor(0.3333),\n 'multiclassaccuracy_dog': tensor(0.4000),\n 'multiclassrecall_horse': tensor(0.),\n 'multiclassrecall_fish': tensor(0.3333),\n 'multiclassrecall_dog': tensor(0.4000)}\n\n \"\"\"\n\n def __init__(\n self,\n metric: Metric,\n labels: Optional[List[str]] = None,\n prefix: Optional[str] = None,\n postfix: Optional[str] = None,\n ) -> None:\n super().__init__()\n if not isinstance(metric, Metric):\n raise ValueError(f\"Expected argument `metric` to be an instance of `torchmetrics.Metric` but got {metric}\")\n self.metric = metric\n\n if labels is not None and not (isinstance(labels, list) and all(isinstance(lab, str) for lab in labels)):\n raise ValueError(f\"Expected argument `labels` to either be `None` or a list of strings but got {labels}\")\n self.labels = labels\n\n if prefix is not None and not isinstance(prefix, str):\n raise ValueError(f\"Expected argument `prefix` to either be `None` or a string but got {prefix}\")\n self.prefix = prefix\n\n if postfix is not None and not isinstance(postfix, str):\n raise ValueError(f\"Expected argument `postfix` to either be `None` or a string but got {postfix}\")\n self.postfix = postfix\n\n self._update_count = 1\n\n def _convert(self, x: Tensor) -> Dict[str, Any]:\n # Will set the class name as prefix if neither prefix nor postfix is given\n if not self.prefix and not self.postfix:\n prefix = f\"{self.metric.__class__.__name__.lower()}_\"\n postfix = \"\"\n else:\n prefix = self.prefix or \"\"\n postfix = self.postfix or \"\"\n if self.labels is None:\n return {f\"{prefix}{i}{postfix}\": val for i, val in enumerate(x)}\n return {f\"{prefix}{lab}{postfix}\": val for lab, val in zip(self.labels, x)}\n\n def forward(self, *args: Any, **kwargs: Any) -> Any:\n \"\"\"Calculate on batch and accumulate to global state.\"\"\"\n return self._convert(self.metric(*args, **kwargs))\n\n def update(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"Update state.\"\"\"\n self.metric.update(*args, **kwargs)\n\n def compute(self) -> Dict[str, Tensor]:\n \"\"\"Compute metric.\"\"\"\n return self._convert(self.metric.compute())\n\n def reset(self) -> None:\n \"\"\"Reset metric.\"\"\"\n self.metric.reset()\n\n def plot(\n self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: 
Optional[_AX_TYPE] = None\n ) -> _PLOT_OUT_TYPE:\n \"\"\"Plot a single or multiple values from the metric.\n\n Args:\n val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.\n If no value is provided, will automatically call `metric.compute` and plot that result.\n ax: An matplotlib axis object. If provided will add plot to that axis\n\n Returns:\n Figure and Axes object\n\n Raises:\n ModuleNotFoundError:\n If `matplotlib` is not installed\n\n .. plot::\n :scale: 75\n\n >>> # Example plotting a single value\n >>> import torch\n >>> from torchmetrics.wrappers import ClasswiseWrapper\n >>> from torchmetrics.classification import MulticlassAccuracy\n >>> metric = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None))\n >>> metric.update(torch.randint(3, (20,)), torch.randint(3, (20,)))\n >>> fig_, ax_ = metric.plot()\n\n .. plot::\n :scale: 75\n\n >>> # Example plotting multiple values\n >>> import torch\n >>> from torchmetrics.wrappers import ClasswiseWrapper\n >>> from torchmetrics.classification import MulticlassAccuracy\n >>> metric = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None))\n >>> values = [ ]\n >>> for _ in range(3):\n ... values.append(metric(torch.randint(3, (20,)), torch.randint(3, (20,))))\n >>> fig_, ax_ = metric.plot(values)\n\n \"\"\"\n return self._plot(val, ax)\n", "path": "src/torchmetrics/wrappers/classwise.py"}], "after_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any, Callable, Dict, List, Optional, Sequence, Union\n\nfrom torch import Tensor\n\nfrom torchmetrics.metric import Metric\nfrom torchmetrics.utilities.imports import _MATPLOTLIB_AVAILABLE\nfrom torchmetrics.utilities.plot import _AX_TYPE, _PLOT_OUT_TYPE\nfrom torchmetrics.wrappers.abstract import WrapperMetric\n\nif not _MATPLOTLIB_AVAILABLE:\n __doctest_skip__ = [\"ClasswiseWrapper.plot\"]\n\n\nclass ClasswiseWrapper(WrapperMetric):\n \"\"\"Wrapper metric for altering the output of classification metrics.\n\n This metric works together with classification metrics that returns multiple values (one value per class) such that\n label information can be automatically included in the output.\n\n Args:\n metric: base metric that should be wrapped. 
It is assumed that the metric outputs a single\n tensor that is split along the first dimension.\n labels: list of strings indicating the different classes.\n prefix: string that is prepended to the metric names.\n postfix: string that is appended to the metric names.\n\n Example::\n Basic example where the ouput of a metric is unwrapped into a dictionary with the class index as keys:\n\n >>> import torch\n >>> _ = torch.manual_seed(42)\n >>> from torchmetrics.wrappers import ClasswiseWrapper\n >>> from torchmetrics.classification import MulticlassAccuracy\n >>> metric = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None))\n >>> preds = torch.randn(10, 3).softmax(dim=-1)\n >>> target = torch.randint(3, (10,))\n >>> metric(preds, target) # doctest: +NORMALIZE_WHITESPACE\n {'multiclassaccuracy_0': tensor(0.5000),\n 'multiclassaccuracy_1': tensor(0.7500),\n 'multiclassaccuracy_2': tensor(0.)}\n\n Example::\n Using custom name via prefix and postfix:\n\n >>> import torch\n >>> _ = torch.manual_seed(42)\n >>> from torchmetrics.wrappers import ClasswiseWrapper\n >>> from torchmetrics.classification import MulticlassAccuracy\n >>> metric_pre = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None), prefix=\"acc-\")\n >>> metric_post = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None), postfix=\"-acc\")\n >>> preds = torch.randn(10, 3).softmax(dim=-1)\n >>> target = torch.randint(3, (10,))\n >>> metric_pre(preds, target) # doctest: +NORMALIZE_WHITESPACE\n {'acc-0': tensor(0.5000),\n 'acc-1': tensor(0.7500),\n 'acc-2': tensor(0.)}\n >>> metric_post(preds, target) # doctest: +NORMALIZE_WHITESPACE\n {'0-acc': tensor(0.5000),\n '1-acc': tensor(0.7500),\n '2-acc': tensor(0.)}\n\n Example::\n Providing labels as a list of strings:\n\n >>> from torchmetrics.wrappers import ClasswiseWrapper\n >>> from torchmetrics.classification import MulticlassAccuracy\n >>> metric = ClasswiseWrapper(\n ... MulticlassAccuracy(num_classes=3, average=None),\n ... labels=[\"horse\", \"fish\", \"dog\"]\n ... )\n >>> preds = torch.randn(10, 3).softmax(dim=-1)\n >>> target = torch.randint(3, (10,))\n >>> metric(preds, target) # doctest: +NORMALIZE_WHITESPACE\n {'multiclassaccuracy_horse': tensor(0.3333),\n 'multiclassaccuracy_fish': tensor(0.6667),\n 'multiclassaccuracy_dog': tensor(0.)}\n\n Example::\n Classwise can also be used in combination with :class:`~torchmetrics.MetricCollection`. In this case, everything\n will be flattened into a single dictionary:\n\n >>> from torchmetrics import MetricCollection\n >>> from torchmetrics.wrappers import ClasswiseWrapper\n >>> from torchmetrics.classification import MulticlassAccuracy, MulticlassRecall\n >>> labels = [\"horse\", \"fish\", \"dog\"]\n >>> metric = MetricCollection(\n ... {'multiclassaccuracy': ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None), labels),\n ... 'multiclassrecall': ClasswiseWrapper(MulticlassRecall(num_classes=3, average=None), labels)}\n ... 
)\n >>> preds = torch.randn(10, 3).softmax(dim=-1)\n >>> target = torch.randint(3, (10,))\n >>> metric(preds, target) # doctest: +NORMALIZE_WHITESPACE\n {'multiclassaccuracy_horse': tensor(0.),\n 'multiclassaccuracy_fish': tensor(0.3333),\n 'multiclassaccuracy_dog': tensor(0.4000),\n 'multiclassrecall_horse': tensor(0.),\n 'multiclassrecall_fish': tensor(0.3333),\n 'multiclassrecall_dog': tensor(0.4000)}\n\n \"\"\"\n\n def __init__(\n self,\n metric: Metric,\n labels: Optional[List[str]] = None,\n prefix: Optional[str] = None,\n postfix: Optional[str] = None,\n ) -> None:\n super().__init__()\n if not isinstance(metric, Metric):\n raise ValueError(f\"Expected argument `metric` to be an instance of `torchmetrics.Metric` but got {metric}\")\n self.metric = metric\n\n if labels is not None and not (isinstance(labels, list) and all(isinstance(lab, str) for lab in labels)):\n raise ValueError(f\"Expected argument `labels` to either be `None` or a list of strings but got {labels}\")\n self.labels = labels\n\n if prefix is not None and not isinstance(prefix, str):\n raise ValueError(f\"Expected argument `prefix` to either be `None` or a string but got {prefix}\")\n self._prefix = prefix\n\n if postfix is not None and not isinstance(postfix, str):\n raise ValueError(f\"Expected argument `postfix` to either be `None` or a string but got {postfix}\")\n self._postfix = postfix\n\n self._update_count = 1\n\n def _convert(self, x: Tensor) -> Dict[str, Any]:\n # Will set the class name as prefix if neither prefix nor postfix is given\n if not self._prefix and not self._postfix:\n prefix = f\"{self.metric.__class__.__name__.lower()}_\"\n postfix = \"\"\n else:\n prefix = self._prefix or \"\"\n postfix = self._postfix or \"\"\n if self.labels is None:\n return {f\"{prefix}{i}{postfix}\": val for i, val in enumerate(x)}\n return {f\"{prefix}{lab}{postfix}\": val for lab, val in zip(self.labels, x)}\n\n def forward(self, *args: Any, **kwargs: Any) -> Any:\n \"\"\"Calculate on batch and accumulate to global state.\"\"\"\n return self._convert(self.metric(*args, **kwargs))\n\n def update(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"Update state.\"\"\"\n self.metric.update(*args, **kwargs)\n\n def compute(self) -> Dict[str, Tensor]:\n \"\"\"Compute metric.\"\"\"\n return self._convert(self.metric.compute())\n\n def reset(self) -> None:\n \"\"\"Reset metric.\"\"\"\n self.metric.reset()\n\n def plot(\n self, val: Optional[Union[Tensor, Sequence[Tensor]]] = None, ax: Optional[_AX_TYPE] = None\n ) -> _PLOT_OUT_TYPE:\n \"\"\"Plot a single or multiple values from the metric.\n\n Args:\n val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.\n If no value is provided, will automatically call `metric.compute` and plot that result.\n ax: An matplotlib axis object. If provided will add plot to that axis\n\n Returns:\n Figure and Axes object\n\n Raises:\n ModuleNotFoundError:\n If `matplotlib` is not installed\n\n .. plot::\n :scale: 75\n\n >>> # Example plotting a single value\n >>> import torch\n >>> from torchmetrics.wrappers import ClasswiseWrapper\n >>> from torchmetrics.classification import MulticlassAccuracy\n >>> metric = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None))\n >>> metric.update(torch.randint(3, (20,)), torch.randint(3, (20,)))\n >>> fig_, ax_ = metric.plot()\n\n .. 
plot::\n :scale: 75\n\n >>> # Example plotting multiple values\n >>> import torch\n >>> from torchmetrics.wrappers import ClasswiseWrapper\n >>> from torchmetrics.classification import MulticlassAccuracy\n >>> metric = ClasswiseWrapper(MulticlassAccuracy(num_classes=3, average=None))\n >>> values = [ ]\n >>> for _ in range(3):\n ... values.append(metric(torch.randint(3, (20,)), torch.randint(3, (20,))))\n >>> fig_, ax_ = metric.plot(values)\n\n \"\"\"\n return self._plot(val, ax)\n", "path": "src/torchmetrics/wrappers/classwise.py"}]}
| 3,454 | 359 |
gh_patches_debug_11035
|
rasdani/github-patches
|
git_diff
|
python-pillow__Pillow-821
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PyPy performance on test_image_point is awful
Hoisted from #476, test_image_point.py takes ~ 2 minutes to run, vs < 1 sec for cpython.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `profile-installed.py`
Content:
```
1 #!/usr/bin/env python
2 import nose
3 import os
4 import sys
5 import glob
6
7 import profile
8
9 # monkey with the path, removing the local directory but adding the Tests/
10 # directory for helper.py and the other local imports there.
11
12 del(sys.path[0])
13 sys.path.insert(0, os.path.abspath('./Tests'))
14
15 # if there's no test selected (mostly) choose a working default.
16 # Something is required, because if we import the tests from the local
17 # directory, once again, we've got the non-installed PIL in the way
18 if len(sys.argv) == 1:
19 sys.argv.extend(glob.glob('Tests/test*.py'))
20
21 # Make sure that nose doesn't muck with our paths.
22 if ('--no-path-adjustment' not in sys.argv) and ('-P' not in sys.argv):
23 sys.argv.insert(1, '--no-path-adjustment')
24
25 if 'NOSE_PROCESSES' not in os.environ:
26 for arg in sys.argv:
27 if '--processes' in arg:
28 break
29 else: # for
30 sys.argv.insert(1, '--processes=-1') # -1 == number of cores
31 sys.argv.insert(1, '--process-timeout=30')
32
33 if __name__ == '__main__':
34 profile.run("nose.main()", sort=2)
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/profile-installed.py b/profile-installed.py
--- a/profile-installed.py
+++ b/profile-installed.py
@@ -21,14 +21,6 @@
# Make sure that nose doesn't muck with our paths.
if ('--no-path-adjustment' not in sys.argv) and ('-P' not in sys.argv):
sys.argv.insert(1, '--no-path-adjustment')
-
-if 'NOSE_PROCESSES' not in os.environ:
- for arg in sys.argv:
- if '--processes' in arg:
- break
- else: # for
- sys.argv.insert(1, '--processes=-1') # -1 == number of cores
- sys.argv.insert(1, '--process-timeout=30')
if __name__ == '__main__':
profile.run("nose.main()", sort=2)
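A plausible reading of why the patch simply drops the multi-process flags: the stdlib `profile` module only records work executed in its own process, so any tests that nose farms out to worker subprocesses would never show up in the collected stats anyway. A minimal stand-alone sketch of that behaviour (illustrative only, not taken from the Pillow repo):

```python
# Illustrative sketch: work done in a child process is invisible to
# profile.run(), which is why profiling together with --processes=-1
# would mostly measure the parent waiting on its workers.
import multiprocessing
import profile


def busy(n):
    # CPU-bound loop so the profiler has something to attribute.
    return sum(i * i for i in range(n))


def run_in_subprocess():
    p = multiprocessing.Process(target=busy, args=(1_000_000,))
    p.start()
    p.join()


if __name__ == "__main__":
    profile.run("run_in_subprocess()", sort=2)  # child's work is not recorded
    profile.run("busy(1_000_000)", sort=2)      # in-process work is recorded
```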
|
{"golden_diff": "diff --git a/profile-installed.py b/profile-installed.py\n--- a/profile-installed.py\n+++ b/profile-installed.py\n@@ -21,14 +21,6 @@\n # Make sure that nose doesn't muck with our paths.\n if ('--no-path-adjustment' not in sys.argv) and ('-P' not in sys.argv):\n sys.argv.insert(1, '--no-path-adjustment')\n-\n-if 'NOSE_PROCESSES' not in os.environ:\n- for arg in sys.argv:\n- if '--processes' in arg:\n- break\n- else: # for\n- sys.argv.insert(1, '--processes=-1') # -1 == number of cores\n- sys.argv.insert(1, '--process-timeout=30') \n \n if __name__ == '__main__':\n profile.run(\"nose.main()\", sort=2)\n", "issue": "PyPy performance on test_image_point is awful\nHoisted from #476, test_image_point.py takes ~ 2 minutes to run, vs < 1 sec for cpython.\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport nose\nimport os\nimport sys\nimport glob\n\nimport profile\n\n# monkey with the path, removing the local directory but adding the Tests/\n# directory for helper.py and the other local imports there.\n\ndel(sys.path[0])\nsys.path.insert(0, os.path.abspath('./Tests'))\n\n# if there's no test selected (mostly) choose a working default.\n# Something is required, because if we import the tests from the local\n# directory, once again, we've got the non-installed PIL in the way\nif len(sys.argv) == 1:\n sys.argv.extend(glob.glob('Tests/test*.py'))\n\n# Make sure that nose doesn't muck with our paths.\nif ('--no-path-adjustment' not in sys.argv) and ('-P' not in sys.argv):\n sys.argv.insert(1, '--no-path-adjustment')\n\nif 'NOSE_PROCESSES' not in os.environ:\n for arg in sys.argv:\n if '--processes' in arg:\n break\n else: # for\n sys.argv.insert(1, '--processes=-1') # -1 == number of cores\n sys.argv.insert(1, '--process-timeout=30') \n \nif __name__ == '__main__':\n profile.run(\"nose.main()\", sort=2)\n", "path": "profile-installed.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport nose\nimport os\nimport sys\nimport glob\n\nimport profile\n\n# monkey with the path, removing the local directory but adding the Tests/\n# directory for helper.py and the other local imports there.\n\ndel(sys.path[0])\nsys.path.insert(0, os.path.abspath('./Tests'))\n\n# if there's no test selected (mostly) choose a working default.\n# Something is required, because if we import the tests from the local\n# directory, once again, we've got the non-installed PIL in the way\nif len(sys.argv) == 1:\n sys.argv.extend(glob.glob('Tests/test*.py'))\n\n# Make sure that nose doesn't muck with our paths.\nif ('--no-path-adjustment' not in sys.argv) and ('-P' not in sys.argv):\n sys.argv.insert(1, '--no-path-adjustment')\n \nif __name__ == '__main__':\n profile.run(\"nose.main()\", sort=2)\n", "path": "profile-installed.py"}]}
| 647 | 194 |
gh_patches_debug_2448
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-3200
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't create config object
Much like https://github.com/docker/docker-py/issues/2025 the config model is failing to create a new object due to 'name' KeyError
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "docker\models\configs.py", line 10, in __repr__
return f"<{self.__class__.__name__}: '{self.name}'>"
File "docker\models\configs.py", line 14, in name
return self.attrs['Spec']['Name']
```
This https://github.com/docker/docker-py/pull/2793 appears to be the fix that was implemented and should likely be implemented for configs as well (if not other models that might have this issue)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/models/configs.py`
Content:
```
1 from ..api import APIClient
2 from .resource import Model, Collection
3
4
5 class Config(Model):
6 """A config."""
7 id_attribute = 'ID'
8
9 def __repr__(self):
10 return f"<{self.__class__.__name__}: '{self.name}'>"
11
12 @property
13 def name(self):
14 return self.attrs['Spec']['Name']
15
16 def remove(self):
17 """
18 Remove this config.
19
20 Raises:
21 :py:class:`docker.errors.APIError`
22 If config failed to remove.
23 """
24 return self.client.api.remove_config(self.id)
25
26
27 class ConfigCollection(Collection):
28 """Configs on the Docker server."""
29 model = Config
30
31 def create(self, **kwargs):
32 obj = self.client.api.create_config(**kwargs)
33 return self.prepare_model(obj)
34 create.__doc__ = APIClient.create_config.__doc__
35
36 def get(self, config_id):
37 """
38 Get a config.
39
40 Args:
41 config_id (str): Config ID.
42
43 Returns:
44 (:py:class:`Config`): The config.
45
46 Raises:
47 :py:class:`docker.errors.NotFound`
48 If the config does not exist.
49 :py:class:`docker.errors.APIError`
50 If the server returns an error.
51 """
52 return self.prepare_model(self.client.api.inspect_config(config_id))
53
54 def list(self, **kwargs):
55 """
56 List configs. Similar to the ``docker config ls`` command.
57
58 Args:
59 filters (dict): Server-side list filtering options.
60
61 Returns:
62 (list of :py:class:`Config`): The configs.
63
64 Raises:
65 :py:class:`docker.errors.APIError`
66 If the server returns an error.
67 """
68 resp = self.client.api.configs(**kwargs)
69 return [self.prepare_model(obj) for obj in resp]
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docker/models/configs.py b/docker/models/configs.py
--- a/docker/models/configs.py
+++ b/docker/models/configs.py
@@ -30,6 +30,7 @@
def create(self, **kwargs):
obj = self.client.api.create_config(**kwargs)
+ obj.setdefault("Spec", {})["Name"] = kwargs.get("name")
return self.prepare_model(obj)
create.__doc__ = APIClient.create_config.__doc__
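In short, the create endpoint only returns the new object's ID, so the prepared model has no `Spec` section and `repr()` fails on `attrs['Spec']['Name']`; the one-line patch backfills it from the keyword arguments. A self-contained sketch of that backfill (plain dicts for illustration, not the Docker SDK itself):

```python
# Stand-alone illustration of the setdefault backfill used above.
def backfill_spec_name(api_response, name):
    # Mirrors: obj.setdefault("Spec", {})["Name"] = kwargs.get("name")
    api_response.setdefault("Spec", {})["Name"] = name
    return api_response


raw = {"ID": "abc123"}                        # roughly what the create call returns
attrs = backfill_spec_name(raw, "my-config")
print(attrs["Spec"]["Name"])                  # 'my-config' instead of a KeyError
```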
|
{"golden_diff": "diff --git a/docker/models/configs.py b/docker/models/configs.py\n--- a/docker/models/configs.py\n+++ b/docker/models/configs.py\n@@ -30,6 +30,7 @@\n \n def create(self, **kwargs):\n obj = self.client.api.create_config(**kwargs)\n+ obj.setdefault(\"Spec\", {})[\"Name\"] = kwargs.get(\"name\")\n return self.prepare_model(obj)\n create.__doc__ = APIClient.create_config.__doc__\n", "issue": "Can't create config object\nMuch like https://github.com/docker/docker-py/issues/2025 the config model is failing to create a new object due to 'name' KeyError\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"docker\\models\\configs.py\", line 10, in __repr__\r\n return f\"<{self.__class__.__name__}: '{self.name}'>\"\r\n File \"docker\\models\\configs.py\", line 14, in name\r\n return self.attrs['Spec']['Name']\r\n```\r\n\r\nThis https://github.com/docker/docker-py/pull/2793 appears to be the fix that was implemented and should likely be implements for configs as well (if not other models that might have this issue)\n", "before_files": [{"content": "from ..api import APIClient\nfrom .resource import Model, Collection\n\n\nclass Config(Model):\n \"\"\"A config.\"\"\"\n id_attribute = 'ID'\n\n def __repr__(self):\n return f\"<{self.__class__.__name__}: '{self.name}'>\"\n\n @property\n def name(self):\n return self.attrs['Spec']['Name']\n\n def remove(self):\n \"\"\"\n Remove this config.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If config failed to remove.\n \"\"\"\n return self.client.api.remove_config(self.id)\n\n\nclass ConfigCollection(Collection):\n \"\"\"Configs on the Docker server.\"\"\"\n model = Config\n\n def create(self, **kwargs):\n obj = self.client.api.create_config(**kwargs)\n return self.prepare_model(obj)\n create.__doc__ = APIClient.create_config.__doc__\n\n def get(self, config_id):\n \"\"\"\n Get a config.\n\n Args:\n config_id (str): Config ID.\n\n Returns:\n (:py:class:`Config`): The config.\n\n Raises:\n :py:class:`docker.errors.NotFound`\n If the config does not exist.\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.prepare_model(self.client.api.inspect_config(config_id))\n\n def list(self, **kwargs):\n \"\"\"\n List configs. 
Similar to the ``docker config ls`` command.\n\n Args:\n filters (dict): Server-side list filtering options.\n\n Returns:\n (list of :py:class:`Config`): The configs.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n resp = self.client.api.configs(**kwargs)\n return [self.prepare_model(obj) for obj in resp]\n", "path": "docker/models/configs.py"}], "after_files": [{"content": "from ..api import APIClient\nfrom .resource import Model, Collection\n\n\nclass Config(Model):\n \"\"\"A config.\"\"\"\n id_attribute = 'ID'\n\n def __repr__(self):\n return f\"<{self.__class__.__name__}: '{self.name}'>\"\n\n @property\n def name(self):\n return self.attrs['Spec']['Name']\n\n def remove(self):\n \"\"\"\n Remove this config.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If config failed to remove.\n \"\"\"\n return self.client.api.remove_config(self.id)\n\n\nclass ConfigCollection(Collection):\n \"\"\"Configs on the Docker server.\"\"\"\n model = Config\n\n def create(self, **kwargs):\n obj = self.client.api.create_config(**kwargs)\n obj.setdefault(\"Spec\", {})[\"Name\"] = kwargs.get(\"name\")\n return self.prepare_model(obj)\n create.__doc__ = APIClient.create_config.__doc__\n\n def get(self, config_id):\n \"\"\"\n Get a config.\n\n Args:\n config_id (str): Config ID.\n\n Returns:\n (:py:class:`Config`): The config.\n\n Raises:\n :py:class:`docker.errors.NotFound`\n If the config does not exist.\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.prepare_model(self.client.api.inspect_config(config_id))\n\n def list(self, **kwargs):\n \"\"\"\n List configs. Similar to the ``docker config ls`` command.\n\n Args:\n filters (dict): Server-side list filtering options.\n\n Returns:\n (list of :py:class:`Config`): The configs.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n resp = self.client.api.configs(**kwargs)\n return [self.prepare_model(obj) for obj in resp]\n", "path": "docker/models/configs.py"}]}
| 948 | 101 |
gh_patches_debug_16039
|
rasdani/github-patches
|
git_diff
|
kserve__kserve-2342
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Kserve defaulting causing duplicates of environment variable
/kind bug
**What steps did you take and what happened:**
Create example xgboost isvc and enable gRPC
```
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
name: "xgboost-iris"
spec:
predictor:
xgboost:
protocolVersion: "v2"
storageUri: "gs://kfserving-examples/models/xgboost/iris"
ports:
- containerPort: 9000
name: h2c
protocol: TCP
```
The pod spec has duplicated environment variable
```
Environment:
MLSERVER_MODEL_NAME: xgboost-iris
MLSERVER_MODEL_URI: /mnt/models
MLSERVER_MODEL_NAME: xgboost-iris
MLSERVER_MODEL_URI: /mnt/models
```
Additionally, attempt to override the defaults leads to duplicated environment variable with different values
```
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
name: "xgboost-iris"
spec:
predictor:
xgboost:
protocolVersion: "v2"
storageUri: "gs://kfserving-examples/models/xgboost/iris"
ports:
- containerPort: 9000
name: h2c
protocol: TCP
env:
- name: MLSERVER_MODEL_NAME
value: my-model
```
The pod spec:
```
Environment:
MLSERVER_MODEL_NAME: my-model
MLSERVER_MODEL_NAME: xgboost-iris
MLSERVER_MODEL_URI: /mnt/models
```
**What did you expect to happen:**
- Defaulting should not duplicate environment variable and should prioritise user's defined environment variable
**Anything else you would like to add:**
I believe it's because the defaulter always appends to `.Env` without checking for the presence of an existing environment variable. (https://github.com/kserve/kserve/blob/a6ed8e4b006e27433de2336e0e8b7cec11137dc1/pkg/apis/serving/v1beta1/inference_service_defaults.go#L264)
**Environment:**
- Kserve: 0.8.0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/custom_transformer/model_grpc.py`
Content:
```
1 # Copyright 2022 The KServe Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import argparse
16 import base64
17 from typing import Dict, Union
18
19 from kserve import Model, ModelServer, model_server
20 from kserve.grpc.grpc_predict_v2_pb2 import ModelInferRequest
21 from kserve.handlers.v2_datamodels import InferenceRequest
22
23
24 class ImageTransformer(Model):
25 def __init__(self, name: str, predictor_host: str, protocol: str):
26 super().__init__(name)
27 self.predictor_host = predictor_host
28 self.protocol = protocol
29 self.model_name = name
30
31 def preprocess(self, request: Union[Dict, ModelInferRequest, InferenceRequest], headers=None) -> ModelInferRequest:
32 if isinstance(request, ModelInferRequest):
33 return request
34 else:
35 payload = [
36 {
37 "name": "input-0",
38 "shape": [],
39 "datatype": "BYTES",
40 "contents": {
41 "bytes_contents": [base64.b64decode(request["inputs"][0]["data"][0])]
42 }
43 }
44 ]
45 return ModelInferRequest(model_name=self.model_name, inputs=payload)
46
47
48 parser = argparse.ArgumentParser(parents=[model_server.parser])
49 parser.add_argument(
50 "--predictor_host", help="The URL for the model predict function", required=True
51 )
52 parser.add_argument(
53 "--protocol", help="The protocol for the predictor", default="v1"
54 )
55 parser.add_argument(
56 "--model_name", help="The name that the model is served under."
57 )
58 args, _ = parser.parse_known_args()
59
60 if __name__ == "__main__":
61 model = ImageTransformer(args.model_name, predictor_host=args.predictor_host,
62 protocol=args.protocol)
63 ModelServer(workers=1).start([model])
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/custom_transformer/model_grpc.py b/python/custom_transformer/model_grpc.py
--- a/python/custom_transformer/model_grpc.py
+++ b/python/custom_transformer/model_grpc.py
@@ -18,7 +18,6 @@
from kserve import Model, ModelServer, model_server
from kserve.grpc.grpc_predict_v2_pb2 import ModelInferRequest
-from kserve.handlers.v2_datamodels import InferenceRequest
class ImageTransformer(Model):
@@ -28,7 +27,7 @@
self.protocol = protocol
self.model_name = name
- def preprocess(self, request: Union[Dict, ModelInferRequest, InferenceRequest], headers=None) -> ModelInferRequest:
+ def preprocess(self, request: Union[Dict, ModelInferRequest], headers=None) -> ModelInferRequest:
if isinstance(request, ModelInferRequest):
return request
else:
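Note that the patch above only touches the gRPC transformer example; the behaviour the reporter describes — defaults appended to `.Env` regardless of what the user already set — lives in the Go defaulter referenced in the issue. A language-agnostic sketch of the merge the reporter expects, written in illustrative Python rather than the actual kserve code:

```python
# Illustrative only: user-supplied variables win, defaults fill the gaps,
# and no name appears twice in the resulting env list.
def merge_env(user_env, default_env):
    merged = {e["name"]: e["value"] for e in default_env}
    merged.update({e["name"]: e["value"] for e in user_env})
    return [{"name": k, "value": v} for k, v in merged.items()]


user = [{"name": "MLSERVER_MODEL_NAME", "value": "my-model"}]
defaults = [
    {"name": "MLSERVER_MODEL_NAME", "value": "xgboost-iris"},
    {"name": "MLSERVER_MODEL_URI", "value": "/mnt/models"},
]
print(merge_env(user, defaults))
# [{'name': 'MLSERVER_MODEL_NAME', 'value': 'my-model'},
#  {'name': 'MLSERVER_MODEL_URI', 'value': '/mnt/models'}]
```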
|
{"golden_diff": "diff --git a/python/custom_transformer/model_grpc.py b/python/custom_transformer/model_grpc.py\n--- a/python/custom_transformer/model_grpc.py\n+++ b/python/custom_transformer/model_grpc.py\n@@ -18,7 +18,6 @@\n \n from kserve import Model, ModelServer, model_server\n from kserve.grpc.grpc_predict_v2_pb2 import ModelInferRequest\n-from kserve.handlers.v2_datamodels import InferenceRequest\n \n \n class ImageTransformer(Model):\n@@ -28,7 +27,7 @@\n self.protocol = protocol\n self.model_name = name\n \n- def preprocess(self, request: Union[Dict, ModelInferRequest, InferenceRequest], headers=None) -> ModelInferRequest:\n+ def preprocess(self, request: Union[Dict, ModelInferRequest], headers=None) -> ModelInferRequest:\n if isinstance(request, ModelInferRequest):\n return request\n else:\n", "issue": "Kserve defaulting causing duplicates of environment variable \n/kind bug\r\n\r\n**What steps did you take and what happened:**\r\nCreate example xgboost isvc and enable gRPC\r\n```\r\napiVersion: \"serving.kserve.io/v1beta1\"\r\nkind: \"InferenceService\"\r\nmetadata:\r\n name: \"xgboost-iris\"\r\nspec:\r\n predictor:\r\n xgboost:\r\n protocolVersion: \"v2\"\r\n storageUri: \"gs://kfserving-examples/models/xgboost/iris\"\r\n ports:\r\n - containerPort: 9000\r\n name: h2c\r\n protocol: TCP\r\n```\r\n\r\nThe pod spec has duplicated environment variable\r\n```\r\n Environment:\r\n MLSERVER_MODEL_NAME: xgboost-iris\r\n MLSERVER_MODEL_URI: /mnt/models\r\n MLSERVER_MODEL_NAME: xgboost-iris\r\n MLSERVER_MODEL_URI: /mnt/models\r\n```\r\n\r\nAdditionally, attempt to override the defaults leads to duplicated environment variable with different values\r\n\r\n```\r\napiVersion: \"serving.kserve.io/v1beta1\"\r\nkind: \"InferenceService\"\r\nmetadata:\r\n name: \"xgboost-iris\"\r\nspec:\r\n predictor:\r\n xgboost:\r\n protocolVersion: \"v2\"\r\n storageUri: \"gs://kfserving-examples/models/xgboost/iris\"\r\n ports:\r\n - containerPort: 9000\r\n name: h2c\r\n protocol: TCP\r\n env:\r\n - name: MLSERVER_MODEL_NAME\r\n value: my-model\r\n```\r\n\r\nThe pod spec:\r\n```\r\n Environment:\r\n MLSERVER_MODEL_NAME: my-model\r\n MLSERVER_MODEL_NAME: xgboost-iris\r\n MLSERVER_MODEL_URI: /mnt/models\r\n```\r\n\r\n**What did you expect to happen:**\r\n- Defaulting should not duplicate environment variable and should prioritise user's defined environment variable\r\n\r\n**Anything else you would like to add:**\r\nI believe it's because the defaulter always append `.Env` without checking the presence of existing environment variable. 
(https://github.com/kserve/kserve/blob/a6ed8e4b006e27433de2336e0e8b7cec11137dc1/pkg/apis/serving/v1beta1/inference_service_defaults.go#L264)\r\n\r\n\r\n**Environment:**\r\n\r\n- Kserve: 0.8.0\n", "before_files": [{"content": "# Copyright 2022 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport base64\nfrom typing import Dict, Union\n\nfrom kserve import Model, ModelServer, model_server\nfrom kserve.grpc.grpc_predict_v2_pb2 import ModelInferRequest\nfrom kserve.handlers.v2_datamodels import InferenceRequest\n\n\nclass ImageTransformer(Model):\n def __init__(self, name: str, predictor_host: str, protocol: str):\n super().__init__(name)\n self.predictor_host = predictor_host\n self.protocol = protocol\n self.model_name = name\n\n def preprocess(self, request: Union[Dict, ModelInferRequest, InferenceRequest], headers=None) -> ModelInferRequest:\n if isinstance(request, ModelInferRequest):\n return request\n else:\n payload = [\n {\n \"name\": \"input-0\",\n \"shape\": [],\n \"datatype\": \"BYTES\",\n \"contents\": {\n \"bytes_contents\": [base64.b64decode(request[\"inputs\"][0][\"data\"][0])]\n }\n }\n ]\n return ModelInferRequest(model_name=self.model_name, inputs=payload)\n\n\nparser = argparse.ArgumentParser(parents=[model_server.parser])\nparser.add_argument(\n \"--predictor_host\", help=\"The URL for the model predict function\", required=True\n)\nparser.add_argument(\n \"--protocol\", help=\"The protocol for the predictor\", default=\"v1\"\n)\nparser.add_argument(\n \"--model_name\", help=\"The name that the model is served under.\"\n)\nargs, _ = parser.parse_known_args()\n\nif __name__ == \"__main__\":\n model = ImageTransformer(args.model_name, predictor_host=args.predictor_host,\n protocol=args.protocol)\n ModelServer(workers=1).start([model])\n", "path": "python/custom_transformer/model_grpc.py"}], "after_files": [{"content": "# Copyright 2022 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport base64\nfrom typing import Dict, Union\n\nfrom kserve import Model, ModelServer, model_server\nfrom kserve.grpc.grpc_predict_v2_pb2 import ModelInferRequest\n\n\nclass ImageTransformer(Model):\n def __init__(self, name: str, predictor_host: str, protocol: str):\n super().__init__(name)\n self.predictor_host = predictor_host\n self.protocol = protocol\n self.model_name = name\n\n def preprocess(self, request: Union[Dict, ModelInferRequest], headers=None) -> ModelInferRequest:\n if isinstance(request, 
ModelInferRequest):\n return request\n else:\n payload = [\n {\n \"name\": \"input-0\",\n \"shape\": [],\n \"datatype\": \"BYTES\",\n \"contents\": {\n \"bytes_contents\": [base64.b64decode(request[\"inputs\"][0][\"data\"][0])]\n }\n }\n ]\n return ModelInferRequest(model_name=self.model_name, inputs=payload)\n\n\nparser = argparse.ArgumentParser(parents=[model_server.parser])\nparser.add_argument(\n \"--predictor_host\", help=\"The URL for the model predict function\", required=True\n)\nparser.add_argument(\n \"--protocol\", help=\"The protocol for the predictor\", default=\"v1\"\n)\nparser.add_argument(\n \"--model_name\", help=\"The name that the model is served under.\"\n)\nargs, _ = parser.parse_known_args()\n\nif __name__ == \"__main__\":\n model = ImageTransformer(args.model_name, predictor_host=args.predictor_host,\n protocol=args.protocol)\n ModelServer(workers=1).start([model])\n", "path": "python/custom_transformer/model_grpc.py"}]}
| 1,401 | 203 |
gh_patches_debug_20425
|
rasdani/github-patches
|
git_diff
|
wenet-e2e__wenet-1221
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DLL load failed while importing _wenet: The specified module could not be found.
I installed wenet with pip install wenet.
The installation reported success.
I used the example program to run recognition.
The program is as follows:
import sys
import wenet
def get_text_from_wav(dir, wav):
model_dir = dir
wav_file = wav
decoder = wenet.Decoder(model_dir)
ans = decoder.decode_wav(wav_file)
print(ans)
if __name__ == '__main__':
dir = "./models"
wav = "./1.wav"
get_text_from_wav(dir,wav)
But running it fails with the following error:
Traceback (most recent call last):
File "D:\codes\speech2word\main.py", line 2, in <module>
import wenet
File "D:\codes\speech2word\venv\lib\site-packages\wenet\__init__.py", line 1, in <module>
from .decoder import Decoder # noqa
File "D:\codes\speech2word\venv\lib\site-packages\wenet\decoder.py", line 17, in <module>
import _wenet
ImportError: DLL load failed while importing _wenet: 找不到指定的模块。
How can I solve this?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `runtime/binding/python/setup.py`
Content:
```
1 #!/usr/bin/env python3
2 # Copyright (c) 2020 Xiaomi Corporation (author: Fangjun Kuang)
3 # 2022 Binbin Zhang([email protected])
4
5 import glob
6 import os
7 import platform
8 import shutil
9 import sys
10
11 import setuptools
12 from setuptools.command.build_ext import build_ext
13
14
15 def is_windows():
16 return platform.system() == "Windows"
17
18
19 def cmake_extension(name, *args, **kwargs) -> setuptools.Extension:
20 kwargs["language"] = "c++"
21 sources = []
22 return setuptools.Extension(name, sources, *args, **kwargs)
23
24
25 class BuildExtension(build_ext):
26 def build_extension(self, ext: setuptools.extension.Extension):
27 os.makedirs(self.build_temp, exist_ok=True)
28 os.makedirs(self.build_lib, exist_ok=True)
29
30 cmake_args = os.environ.get("WENET_CMAKE_ARGS",
31 "-DCMAKE_BUILD_TYPE=Release")
32 if "PYTHON_EXECUTABLE" not in cmake_args:
33 print(f"Setting PYTHON_EXECUTABLE to {sys.executable}")
34 cmake_args += f" -DPYTHON_EXECUTABLE={sys.executable}"
35
36 src_dir = os.path.dirname(os.path.abspath(__file__))
37 os.system(f"cmake {cmake_args} -B {self.build_temp} -S {src_dir}")
38 ret = os.system(f"""
39 cmake --build {self.build_temp} --target _wenet --config Release
40 """)
41 if ret != 0:
42 raise Exception(
43 "\nBuild wenet failed. Please check the error message.\n"
44 "You can ask for help by creating an issue on GitHub.\n"
45 "\nClick:\n https://github.com/wenet-e2e/wenet/issues/new\n"
46 )
47
48 libs = []
49 torch_lib = 'fc_base/libtorch-src/lib'
50 for ext in ['so', 'pyd']:
51 libs.extend(glob.glob(
52 f"{self.build_temp}/**/_wenet*.{ext}", recursive=True))
53 for ext in ['so', 'dylib', 'dll']:
54 libs.extend(glob.glob(
55 f"{self.build_temp}/**/*wenet_api.{ext}", recursive=True))
56 libs.extend(glob.glob(f'{src_dir}/{torch_lib}/*c10.{ext}'))
57 libs.extend(glob.glob(f'{src_dir}/{torch_lib}/*torch_cpu.{ext}'))
58
59 if not is_windows():
60 fst_lib = 'fc_base/openfst-build/src/lib/.libs'
61 for ext in ['so', 'dylib']:
62 libs.extend(glob.glob(f'{src_dir}/{fst_lib}/libfst.{ext}'))
63 libs.extend(glob.glob(f'{src_dir}/{torch_lib}/libgomp*')) # linux
64 libs.extend(glob.glob(f'{src_dir}/{torch_lib}/libiomp*')) # macos
65 else:
66 libs.extend(glob.glob(f'{src_dir}/{torch_lib}/asmjit.dll'))
67 libs.extend(glob.glob(f'{src_dir}/{torch_lib}/fbgemm.dll'))
68 libs.extend(glob.glob(f'{src_dir}/{torch_lib}/uv.dll'))
69
70 for lib in libs:
71 print(f"Copying {lib} to {self.build_lib}/")
72 shutil.copy(f"{lib}", f"{self.build_lib}/")
73
74
75 def read_long_description():
76 with open("README.md", encoding="utf8") as f:
77 readme = f.read()
78 return readme
79
80
81 package_name = "wenet"
82
83 setuptools.setup(
84 name=package_name,
85 version='1.0.4',
86 author="Binbin Zhang",
87 author_email="[email protected]",
88 package_dir={
89 package_name: "py",
90 },
91 packages=[package_name],
92 url="https://github.com/wenet-e2e/wenet",
93 long_description=read_long_description(),
94 long_description_content_type="text/markdown",
95 ext_modules=[cmake_extension("_wenet")],
96 cmdclass={"build_ext": BuildExtension},
97 zip_safe=False,
98 classifiers=[
99 "Programming Language :: C++",
100 "Programming Language :: Python",
101 "Topic :: Scientific/Engineering :: Artificial Intelligence",
102 ],
103 license="Apache licensed, as found in the LICENSE file",
104 python_requires=">=3.6",
105 )
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/runtime/binding/python/setup.py b/runtime/binding/python/setup.py
--- a/runtime/binding/python/setup.py
+++ b/runtime/binding/python/setup.py
@@ -60,12 +60,12 @@
fst_lib = 'fc_base/openfst-build/src/lib/.libs'
for ext in ['so', 'dylib']:
libs.extend(glob.glob(f'{src_dir}/{fst_lib}/libfst.{ext}'))
- libs.extend(glob.glob(f'{src_dir}/{torch_lib}/libgomp*')) # linux
- libs.extend(glob.glob(f'{src_dir}/{torch_lib}/libiomp*')) # macos
else:
libs.extend(glob.glob(f'{src_dir}/{torch_lib}/asmjit.dll'))
libs.extend(glob.glob(f'{src_dir}/{torch_lib}/fbgemm.dll'))
libs.extend(glob.glob(f'{src_dir}/{torch_lib}/uv.dll'))
+ libs.extend(glob.glob(f'{src_dir}/{torch_lib}/libgomp*')) # linux
+ libs.extend(glob.glob(f'{src_dir}/{torch_lib}/libiomp5*')) # macos/win
for lib in libs:
print(f"Copying {lib} to {self.build_lib}/")
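A quick way to check whether an installed Windows wheel actually shipped the runtime libraries this patch starts copying is to list what sits next to the package — without importing it, since the import is exactly what fails. A diagnostic sketch (not part of wenet; it assumes the wheel lays the libraries out the way the setup script above copies them):

```python
# Diagnostic sketch: list shared libraries installed alongside the wenet
# package. If the torch/OpenMP DLLs are missing, importing _wenet on Windows
# fails with "DLL load failed ... The specified module could not be found".
import importlib.util
import pathlib

spec = importlib.util.find_spec("wenet")       # locates the package without executing it
site_dir = pathlib.Path(spec.origin).parent.parent
for f in sorted(site_dir.iterdir()):
    if f.suffix.lower() in {".dll", ".pyd", ".so", ".dylib"}:
        print(f.name)
```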
|
{"golden_diff": "diff --git a/runtime/binding/python/setup.py b/runtime/binding/python/setup.py\n--- a/runtime/binding/python/setup.py\n+++ b/runtime/binding/python/setup.py\n@@ -60,12 +60,12 @@\n fst_lib = 'fc_base/openfst-build/src/lib/.libs'\n for ext in ['so', 'dylib']:\n libs.extend(glob.glob(f'{src_dir}/{fst_lib}/libfst.{ext}'))\n- libs.extend(glob.glob(f'{src_dir}/{torch_lib}/libgomp*')) # linux\n- libs.extend(glob.glob(f'{src_dir}/{torch_lib}/libiomp*')) # macos\n else:\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/asmjit.dll'))\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/fbgemm.dll'))\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/uv.dll'))\n+ libs.extend(glob.glob(f'{src_dir}/{torch_lib}/libgomp*')) # linux\n+ libs.extend(glob.glob(f'{src_dir}/{torch_lib}/libiomp5*')) # macos/win\n \n for lib in libs:\n print(f\"Copying {lib} to {self.build_lib}/\")\n", "issue": "DLL load failed while importing _wenet: \u627e\u4e0d\u5230\u6307\u5b9a\u7684\u6a21\u5757\u3002\n\u6211\u5b89\u88c5\u4e86wenet, pip install wenet.\r\n\u5b89\u88c5\u63d0\u793a\u6210\u529f\u4e86\u3002\r\n\u6211\u7528\u4f8b\u5b50\u7a0b\u5e8f\u505a\u8bc6\u522b\u3002\r\n\u7a0b\u5e8f\u5982\u4e0b\uff1a\r\nimport sys\r\nimport wenet\r\n\r\ndef get_text_from_wav(dir, wav):\r\n model_dir = dir\r\n wav_file = wav\r\n decoder = wenet.Decoder(model_dir)\r\n ans = decoder.decode_wav(wav_file)\r\n print(ans)\r\n\r\nif __name__ == '__main__':\r\n dir = \"./models\"\r\n wav = \"./1.wav\"\r\n get_text_from_wav(dir,wav)\r\n\r\n\u4f46\u662f\u8fd0\u884c\u62a5\u9519\u5982\u4e0b\uff1a\r\nTraceback (most recent call last):\r\n File \"D:\\codes\\speech2word\\main.py\", line 2, in <module>\r\n import wenet\r\n File \"D:\\codes\\speech2word\\venv\\lib\\site-packages\\wenet\\__init__.py\", line 1, in <module>\r\n from .decoder import Decoder # noqa\r\n File \"D:\\codes\\speech2word\\venv\\lib\\site-packages\\wenet\\decoder.py\", line 17, in <module>\r\n import _wenet\r\nImportError: DLL load failed while importing _wenet: \u627e\u4e0d\u5230\u6307\u5b9a\u7684\u6a21\u5757\u3002\r\n\r\n\u8bf7\u95ee\u5982\u4f55\u89e3\u51b3\uff1f\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n# Copyright (c) 2020 Xiaomi Corporation (author: Fangjun Kuang)\n# 2022 Binbin Zhang([email protected])\n\nimport glob\nimport os\nimport platform\nimport shutil\nimport sys\n\nimport setuptools\nfrom setuptools.command.build_ext import build_ext\n\n\ndef is_windows():\n return platform.system() == \"Windows\"\n\n\ndef cmake_extension(name, *args, **kwargs) -> setuptools.Extension:\n kwargs[\"language\"] = \"c++\"\n sources = []\n return setuptools.Extension(name, sources, *args, **kwargs)\n\n\nclass BuildExtension(build_ext):\n def build_extension(self, ext: setuptools.extension.Extension):\n os.makedirs(self.build_temp, exist_ok=True)\n os.makedirs(self.build_lib, exist_ok=True)\n\n cmake_args = os.environ.get(\"WENET_CMAKE_ARGS\",\n \"-DCMAKE_BUILD_TYPE=Release\")\n if \"PYTHON_EXECUTABLE\" not in cmake_args:\n print(f\"Setting PYTHON_EXECUTABLE to {sys.executable}\")\n cmake_args += f\" -DPYTHON_EXECUTABLE={sys.executable}\"\n\n src_dir = os.path.dirname(os.path.abspath(__file__))\n os.system(f\"cmake {cmake_args} -B {self.build_temp} -S {src_dir}\")\n ret = os.system(f\"\"\"\n cmake --build {self.build_temp} --target _wenet --config Release\n \"\"\")\n if ret != 0:\n raise Exception(\n \"\\nBuild wenet failed. 
Please check the error message.\\n\"\n \"You can ask for help by creating an issue on GitHub.\\n\"\n \"\\nClick:\\n https://github.com/wenet-e2e/wenet/issues/new\\n\"\n )\n\n libs = []\n torch_lib = 'fc_base/libtorch-src/lib'\n for ext in ['so', 'pyd']:\n libs.extend(glob.glob(\n f\"{self.build_temp}/**/_wenet*.{ext}\", recursive=True))\n for ext in ['so', 'dylib', 'dll']:\n libs.extend(glob.glob(\n f\"{self.build_temp}/**/*wenet_api.{ext}\", recursive=True))\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/*c10.{ext}'))\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/*torch_cpu.{ext}'))\n\n if not is_windows():\n fst_lib = 'fc_base/openfst-build/src/lib/.libs'\n for ext in ['so', 'dylib']:\n libs.extend(glob.glob(f'{src_dir}/{fst_lib}/libfst.{ext}'))\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/libgomp*')) # linux\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/libiomp*')) # macos\n else:\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/asmjit.dll'))\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/fbgemm.dll'))\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/uv.dll'))\n\n for lib in libs:\n print(f\"Copying {lib} to {self.build_lib}/\")\n shutil.copy(f\"{lib}\", f\"{self.build_lib}/\")\n\n\ndef read_long_description():\n with open(\"README.md\", encoding=\"utf8\") as f:\n readme = f.read()\n return readme\n\n\npackage_name = \"wenet\"\n\nsetuptools.setup(\n name=package_name,\n version='1.0.4',\n author=\"Binbin Zhang\",\n author_email=\"[email protected]\",\n package_dir={\n package_name: \"py\",\n },\n packages=[package_name],\n url=\"https://github.com/wenet-e2e/wenet\",\n long_description=read_long_description(),\n long_description_content_type=\"text/markdown\",\n ext_modules=[cmake_extension(\"_wenet\")],\n cmdclass={\"build_ext\": BuildExtension},\n zip_safe=False,\n classifiers=[\n \"Programming Language :: C++\",\n \"Programming Language :: Python\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n license=\"Apache licensed, as found in the LICENSE file\",\n python_requires=\">=3.6\",\n)\n", "path": "runtime/binding/python/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# Copyright (c) 2020 Xiaomi Corporation (author: Fangjun Kuang)\n# 2022 Binbin Zhang([email protected])\n\nimport glob\nimport os\nimport platform\nimport shutil\nimport sys\n\nimport setuptools\nfrom setuptools.command.build_ext import build_ext\n\n\ndef is_windows():\n return platform.system() == \"Windows\"\n\n\ndef cmake_extension(name, *args, **kwargs) -> setuptools.Extension:\n kwargs[\"language\"] = \"c++\"\n sources = []\n return setuptools.Extension(name, sources, *args, **kwargs)\n\n\nclass BuildExtension(build_ext):\n def build_extension(self, ext: setuptools.extension.Extension):\n os.makedirs(self.build_temp, exist_ok=True)\n os.makedirs(self.build_lib, exist_ok=True)\n\n cmake_args = os.environ.get(\"WENET_CMAKE_ARGS\",\n \"-DCMAKE_BUILD_TYPE=Release\")\n if \"PYTHON_EXECUTABLE\" not in cmake_args:\n print(f\"Setting PYTHON_EXECUTABLE to {sys.executable}\")\n cmake_args += f\" -DPYTHON_EXECUTABLE={sys.executable}\"\n\n src_dir = os.path.dirname(os.path.abspath(__file__))\n os.system(f\"cmake {cmake_args} -B {self.build_temp} -S {src_dir}\")\n ret = os.system(f\"\"\"\n cmake --build {self.build_temp} --target _wenet --config Release\n \"\"\")\n if ret != 0:\n raise Exception(\n \"\\nBuild wenet failed. 
Please check the error message.\\n\"\n \"You can ask for help by creating an issue on GitHub.\\n\"\n \"\\nClick:\\n https://github.com/wenet-e2e/wenet/issues/new\\n\"\n )\n\n libs = []\n torch_lib = 'fc_base/libtorch-src/lib'\n for ext in ['so', 'pyd']:\n libs.extend(glob.glob(\n f\"{self.build_temp}/**/_wenet*.{ext}\", recursive=True))\n for ext in ['so', 'dylib', 'dll']:\n libs.extend(glob.glob(\n f\"{self.build_temp}/**/*wenet_api.{ext}\", recursive=True))\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/*c10.{ext}'))\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/*torch_cpu.{ext}'))\n\n if not is_windows():\n fst_lib = 'fc_base/openfst-build/src/lib/.libs'\n for ext in ['so', 'dylib']:\n libs.extend(glob.glob(f'{src_dir}/{fst_lib}/libfst.{ext}'))\n else:\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/asmjit.dll'))\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/fbgemm.dll'))\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/uv.dll'))\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/libgomp*')) # linux\n libs.extend(glob.glob(f'{src_dir}/{torch_lib}/libiomp5*')) # macos/win\n\n for lib in libs:\n print(f\"Copying {lib} to {self.build_lib}/\")\n shutil.copy(f\"{lib}\", f\"{self.build_lib}/\")\n\n\ndef read_long_description():\n with open(\"README.md\", encoding=\"utf8\") as f:\n readme = f.read()\n return readme\n\n\npackage_name = \"wenet\"\n\nsetuptools.setup(\n name=package_name,\n version='1.0.4',\n author=\"Binbin Zhang\",\n author_email=\"[email protected]\",\n package_dir={\n package_name: \"py\",\n },\n packages=[package_name],\n url=\"https://github.com/wenet-e2e/wenet\",\n long_description=read_long_description(),\n long_description_content_type=\"text/markdown\",\n ext_modules=[cmake_extension(\"_wenet\")],\n cmdclass={\"build_ext\": BuildExtension},\n zip_safe=False,\n classifiers=[\n \"Programming Language :: C++\",\n \"Programming Language :: Python\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n license=\"Apache licensed, as found in the LICENSE file\",\n python_requires=\">=3.6\",\n)\n", "path": "runtime/binding/python/setup.py"}]}
| 1,671 | 274 |
gh_patches_debug_5806
|
rasdani/github-patches
|
git_diff
|
dask__distributed-2870
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SpecCluster error when removing workers
When attempting to scale down a `SpecCluster` I'm getting the following error.
```python-traceback
distributed.utils - ERROR - 'Worker' object has no attribute 'worker_address'
Traceback (most recent call last):
File "/home/nfs/jtomlinson/Projects/dask/distributed/distributed/utils.py", line 714, in log_errors
yield
File "/home/nfs/jtomlinson/Projects/dask/distributed/distributed/deploy/adaptive.py", line 176, in _retire_workers
await f
File "/home/nfs/jtomlinson/Projects/dask/distributed/distributed/deploy/spec.py", line 324, in scale_down
if v.worker_address in workers:
AttributeError: 'Worker' object has no attribute 'worker_address'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `distributed/deploy/spec.py`
Content:
```
1 import asyncio
2 import atexit
3 import weakref
4
5 from tornado import gen
6
7 from .cluster import Cluster
8 from ..core import rpc, CommClosedError
9 from ..utils import LoopRunner, silence_logging, ignoring
10 from ..scheduler import Scheduler
11 from ..security import Security
12
13
14 class SpecCluster(Cluster):
15 """ Cluster that requires a full specification of workers
16
17 The SpecCluster class expects a full specification of the Scheduler and
18 Workers to use. It removes any handling of user inputs (like threads vs
19 processes, number of cores, and so on) and any handling of cluster resource
20 managers (like pods, jobs, and so on). Instead, it expects this
21 information to be passed in scheduler and worker specifications. This
22 class does handle all of the logic around asynchronously cleanly setting up
23 and tearing things down at the right times. Hopefully it can form a base
24 for other more user-centric classes.
25
26 Parameters
27 ----------
28 workers: dict
29 A dictionary mapping names to worker classes and their specifications
30 See example below
31 scheduler: dict, optional
32 A similar mapping for a scheduler
33 worker: dict
34 A specification of a single worker.
35 This is used for any new workers that are created.
36 asynchronous: bool
37 If this is intended to be used directly within an event loop with
38 async/await
39 silence_logs: bool
40 Whether or not we should silence logging when setting up the cluster.
41
42 Examples
43 --------
44 To create a SpecCluster you specify how to set up a Scheduler and Workers
45
46 >>> from dask.distributed import Scheduler, Worker, Nanny
47 >>> scheduler = {'cls': Scheduler, 'options': {"dashboard_address": ':8787'}}
48 >>> workers = {
49 ... 'my-worker': {"cls": Worker, "options": {"nthreads": 1}},
50 ... 'my-nanny': {"cls": Nanny, "options": {"nthreads": 2}},
51 ... }
52 >>> cluster = SpecCluster(scheduler=scheduler, workers=workers)
53
54 The worker spec is stored as the ``.worker_spec`` attribute
55
56 >>> cluster.worker_spec
57 {
58 'my-worker': {"cls": Worker, "options": {"nthreads": 1}},
59 'my-nanny': {"cls": Nanny, "options": {"nthreads": 2}},
60 }
61
62 While the instantiation of this spec is stored in the ``.workers``
63 attribute
64
65 >>> cluster.workers
66 {
67 'my-worker': <Worker ...>
68 'my-nanny': <Nanny ...>
69 }
70
71 Should the spec change, we can await the cluster or call the
72 ``._correct_state`` method to align the actual state to the specified
73 state.
74
75 We can also ``.scale(...)`` the cluster, which adds new workers of a given
76 form.
77
78 >>> worker = {'cls': Worker, 'options': {}}
79 >>> cluster = SpecCluster(scheduler=scheduler, worker=worker)
80 >>> cluster.worker_spec
81 {}
82
83 >>> cluster.scale(3)
84 >>> cluster.worker_spec
85 {
86 0: {'cls': Worker, 'options': {}},
87 1: {'cls': Worker, 'options': {}},
88 2: {'cls': Worker, 'options': {}},
89 }
90
91 Note that above we are using the standard ``Worker`` and ``Nanny`` classes,
92 however in practice other classes could be used that handle resource
93 management like ``KubernetesPod`` or ``SLURMJob``. The spec does not need
94 to conform to the expectations of the standard Dask Worker class. It just
95 needs to be called with the provided options, support ``__await__`` and
96 ``close`` methods and the ``worker_address`` property..
97
98 Also note that uniformity of the specification is not required. Other API
99 could be added externally (in subclasses) that adds workers of different
100 specifications into the same dictionary.
101 """
102
103 _instances = weakref.WeakSet()
104
105 def __init__(
106 self,
107 workers=None,
108 scheduler=None,
109 worker=None,
110 asynchronous=False,
111 loop=None,
112 security=None,
113 silence_logs=False,
114 ):
115 self._created = weakref.WeakSet()
116
117 self.scheduler_spec = scheduler
118 self.worker_spec = workers or {}
119 self.new_spec = worker
120 self.workers = {}
121 self._i = 0
122 self._asynchronous = asynchronous
123 self.security = security or Security()
124 self.scheduler_comm = None
125
126 if silence_logs:
127 self._old_logging_level = silence_logging(level=silence_logs)
128
129 self._loop_runner = LoopRunner(loop=loop, asynchronous=asynchronous)
130 self.loop = self._loop_runner.loop
131
132 self.status = "created"
133 self._instances.add(self)
134 self._correct_state_waiting = None
135
136 if not self.asynchronous:
137 self._loop_runner.start()
138 self.sync(self._start)
139 self.sync(self._correct_state)
140 self.sync(self._wait_for_workers)
141
142 async def _start(self):
143 while self.status == "starting":
144 await asyncio.sleep(0.01)
145 if self.status == "running":
146 return
147 if self.status == "closed":
148 raise ValueError("Cluster is closed")
149
150 if self.scheduler_spec is None:
151 try:
152 from distributed.dashboard import BokehScheduler
153 except ImportError:
154 services = {}
155 else:
156 services = {("dashboard", 8787): BokehScheduler}
157 self.scheduler_spec = {"cls": Scheduler, "options": {"services": services}}
158 self.scheduler = self.scheduler_spec["cls"](
159 loop=self.loop, **self.scheduler_spec.get("options", {})
160 )
161
162 self._lock = asyncio.Lock()
163 self.status = "starting"
164 self.scheduler = await self.scheduler
165 self.scheduler_comm = rpc(
166 self.scheduler.address,
167 connection_args=self.security.get_connection_args("client"),
168 )
169 self.status = "running"
170
171 def _correct_state(self):
172 if self._correct_state_waiting:
173 # If people call this frequently, we only want to run it once
174 return self._correct_state_waiting
175 else:
176 task = asyncio.ensure_future(self._correct_state_internal())
177 self._correct_state_waiting = task
178 return task
179
180 async def _correct_state_internal(self):
181 async with self._lock:
182 self._correct_state_waiting = None
183
184 pre = list(set(self.workers))
185 to_close = set(self.workers) - set(self.worker_spec)
186 if to_close:
187 if self.scheduler.status == "running":
188 await self.scheduler_comm.retire_workers(workers=list(to_close))
189 tasks = [self.workers[w].close() for w in to_close]
190 await asyncio.wait(tasks)
191 for task in tasks: # for tornado gen.coroutine support
192 with ignoring(RuntimeError):
193 await task
194 for name in to_close:
195 del self.workers[name]
196
197 to_open = set(self.worker_spec) - set(self.workers)
198 workers = []
199 for name in to_open:
200 d = self.worker_spec[name]
201 cls, opts = d["cls"], d.get("options", {})
202 if "name" not in opts:
203 opts = opts.copy()
204 opts["name"] = name
205 worker = cls(self.scheduler.address, **opts)
206 self._created.add(worker)
207 workers.append(worker)
208 if workers:
209 await asyncio.wait(workers)
210 for w in workers:
211 w._cluster = weakref.ref(self)
212 await w # for tornado gen.coroutine support
213 self.workers.update(dict(zip(to_open, workers)))
214
215 def __await__(self):
216 async def _():
217 if self.status == "created":
218 await self._start()
219 await self.scheduler
220 await self._correct_state()
221 if self.workers:
222 await asyncio.wait(list(self.workers.values())) # maybe there are more
223 await self._wait_for_workers()
224 return self
225
226 return _().__await__()
227
228 async def _wait_for_workers(self):
229 while {
230 str(d["name"])
231 for d in (await self.scheduler_comm.identity())["workers"].values()
232 } != set(map(str, self.workers)):
233 if (
234 any(w.status == "closed" for w in self.workers.values())
235 and self.scheduler.status == "running"
236 ):
237 raise gen.TimeoutError("Worker unexpectedly closed")
238 await asyncio.sleep(0.1)
239
240 async def __aenter__(self):
241 await self
242 return self
243
244 async def __aexit__(self, typ, value, traceback):
245 await self.close()
246
247 async def _close(self):
248 while self.status == "closing":
249 await asyncio.sleep(0.1)
250 if self.status == "closed":
251 return
252 self.status = "closing"
253
254 self.scale(0)
255 await self._correct_state()
256 async with self._lock:
257 with ignoring(CommClosedError):
258 await self.scheduler_comm.close(close_workers=True)
259 await self.scheduler.close()
260 for w in self._created:
261 assert w.status == "closed"
262 self.scheduler_comm.close_rpc()
263
264 if hasattr(self, "_old_logging_level"):
265 silence_logging(self._old_logging_level)
266
267 self.status = "closed"
268
269 def close(self, timeout=None):
270 with ignoring(RuntimeError): # loop closed during process shutdown
271 return self.sync(self._close, callback_timeout=timeout)
272
273 def __del__(self):
274 if self.status != "closed":
275 self.close()
276
277 def __enter__(self):
278 self.sync(self._correct_state)
279 self.sync(self._wait_for_workers)
280 assert self.status == "running"
281 return self
282
283 def __exit__(self, typ, value, traceback):
284 self.close()
285 self._loop_runner.stop()
286
287 def scale(self, n):
288 while len(self.worker_spec) > n:
289 self.worker_spec.popitem()
290
291 if self.status in ("closing", "closed"):
292 self.loop.add_callback(self._correct_state)
293 return
294
295 while len(self.worker_spec) < n:
296 k, spec = self.new_worker_spec()
297 self.worker_spec[k] = spec
298
299 self.loop.add_callback(self._correct_state)
300
301 def new_worker_spec(self):
302 """ Return name and spec for the next worker
303
304 Returns
305 -------
306 name: identifier for worker
307 spec: dict
308
309 See Also
310 --------
311 scale
312 """
313 while self._i in self.worker_spec:
314 self._i += 1
315
316 return self._i, self.new_spec
317
318 async def scale_down(self, workers):
319 workers = set(workers)
320
321 # TODO: this is linear cost. We should be indexing by name or something
322 to_close = [w for w in self.workers.values() if w.address in workers]
323 for k, v in self.workers.items():
324 if v.worker_address in workers:
325 del self.worker_spec[k]
326
327 await self
328
329 scale_up = scale # backwards compatibility
330
331 def __repr__(self):
332 return "%s(%r, workers=%d)" % (
333 type(self).__name__,
334 self.scheduler_address,
335 len(self.workers),
336 )
337
338
339 @atexit.register
340 def close_clusters():
341 for cluster in list(SpecCluster._instances):
342 with ignoring(gen.TimeoutError):
343 if cluster.status != "closed":
344 cluster.close(timeout=10)
345
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/distributed/deploy/spec.py b/distributed/deploy/spec.py
--- a/distributed/deploy/spec.py
+++ b/distributed/deploy/spec.py
@@ -318,8 +318,6 @@
async def scale_down(self, workers):
workers = set(workers)
- # TODO: this is linear cost. We should be indexing by name or something
- to_close = [w for w in self.workers.values() if w.address in workers]
for k, v in self.workers.items():
if v.worker_address in workers:
del self.worker_spec[k]
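The traceback in the issue comes from plain `Worker` objects, which expose `.address`, while `.worker_address` is what a `Nanny` provides for the worker it manages. A hedged helper sketch — illustrative only, not the change shown above — for matching either kind of object by address:

```python
# Illustrative helper, not the upstream patch: resolve an address whether the
# spec produced Nanny objects (worker_address) or plain Workers (address).
def resolve_address(worker_obj):
    return getattr(worker_obj, "worker_address", None) or getattr(worker_obj, "address", None)


def names_to_retire(workers_by_name, addresses):
    addresses = set(addresses)
    return [name for name, w in workers_by_name.items()
            if resolve_address(w) in addresses]


class _Fake:  # stand-ins purely for the demo
    def __init__(self, **kw):
        self.__dict__.update(kw)


workers = {
    "w0": _Fake(address="tcp://10.0.0.1:4000"),
    "n0": _Fake(worker_address="tcp://10.0.0.2:4001", address="tcp://10.0.0.2:3999"),
}
print(names_to_retire(workers, {"tcp://10.0.0.1:4000"}))  # ['w0']
```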
|
{"golden_diff": "diff --git a/distributed/deploy/spec.py b/distributed/deploy/spec.py\n--- a/distributed/deploy/spec.py\n+++ b/distributed/deploy/spec.py\n@@ -318,8 +318,6 @@\n async def scale_down(self, workers):\n workers = set(workers)\n \n- # TODO: this is linear cost. We should be indexing by name or something\n- to_close = [w for w in self.workers.values() if w.address in workers]\n for k, v in self.workers.items():\n if v.worker_address in workers:\n del self.worker_spec[k]\n", "issue": "SpecCluster error when removing workers\nWhen attempting to scale down a `SpecCluster` I'm getting the following error.\r\n\r\n```python-traceback\r\ndistributed.utils - ERROR - 'Worker' object has no attribute 'worker_address'\r\nTraceback (most recent call last):\r\n File \"/home/nfs/jtomlinson/Projects/dask/distributed/distributed/utils.py\", line 714, in log_errors\r\n yield\r\n File \"/home/nfs/jtomlinson/Projects/dask/distributed/distributed/deploy/adaptive.py\", line 176, in _retire_workers\r\n await f\r\n File \"/home/nfs/jtomlinson/Projects/dask/distributed/distributed/deploy/spec.py\", line 324, in scale_down\r\n if v.worker_address in workers:\r\nAttributeError: 'Worker' object has no attribute 'worker_address'\r\n```\n", "before_files": [{"content": "import asyncio\nimport atexit\nimport weakref\n\nfrom tornado import gen\n\nfrom .cluster import Cluster\nfrom ..core import rpc, CommClosedError\nfrom ..utils import LoopRunner, silence_logging, ignoring\nfrom ..scheduler import Scheduler\nfrom ..security import Security\n\n\nclass SpecCluster(Cluster):\n \"\"\" Cluster that requires a full specification of workers\n\n The SpecCluster class expects a full specification of the Scheduler and\n Workers to use. It removes any handling of user inputs (like threads vs\n processes, number of cores, and so on) and any handling of cluster resource\n managers (like pods, jobs, and so on). Instead, it expects this\n information to be passed in scheduler and worker specifications. This\n class does handle all of the logic around asynchronously cleanly setting up\n and tearing things down at the right times. Hopefully it can form a base\n for other more user-centric classes.\n\n Parameters\n ----------\n workers: dict\n A dictionary mapping names to worker classes and their specifications\n See example below\n scheduler: dict, optional\n A similar mapping for a scheduler\n worker: dict\n A specification of a single worker.\n This is used for any new workers that are created.\n asynchronous: bool\n If this is intended to be used directly within an event loop with\n async/await\n silence_logs: bool\n Whether or not we should silence logging when setting up the cluster.\n\n Examples\n --------\n To create a SpecCluster you specify how to set up a Scheduler and Workers\n\n >>> from dask.distributed import Scheduler, Worker, Nanny\n >>> scheduler = {'cls': Scheduler, 'options': {\"dashboard_address\": ':8787'}}\n >>> workers = {\n ... 'my-worker': {\"cls\": Worker, \"options\": {\"nthreads\": 1}},\n ... 'my-nanny': {\"cls\": Nanny, \"options\": {\"nthreads\": 2}},\n ... 
}\n >>> cluster = SpecCluster(scheduler=scheduler, workers=workers)\n\n The worker spec is stored as the ``.worker_spec`` attribute\n\n >>> cluster.worker_spec\n {\n 'my-worker': {\"cls\": Worker, \"options\": {\"nthreads\": 1}},\n 'my-nanny': {\"cls\": Nanny, \"options\": {\"nthreads\": 2}},\n }\n\n While the instantiation of this spec is stored in the ``.workers``\n attribute\n\n >>> cluster.workers\n {\n 'my-worker': <Worker ...>\n 'my-nanny': <Nanny ...>\n }\n\n Should the spec change, we can await the cluster or call the\n ``._correct_state`` method to align the actual state to the specified\n state.\n\n We can also ``.scale(...)`` the cluster, which adds new workers of a given\n form.\n\n >>> worker = {'cls': Worker, 'options': {}}\n >>> cluster = SpecCluster(scheduler=scheduler, worker=worker)\n >>> cluster.worker_spec\n {}\n\n >>> cluster.scale(3)\n >>> cluster.worker_spec\n {\n 0: {'cls': Worker, 'options': {}},\n 1: {'cls': Worker, 'options': {}},\n 2: {'cls': Worker, 'options': {}},\n }\n\n Note that above we are using the standard ``Worker`` and ``Nanny`` classes,\n however in practice other classes could be used that handle resource\n management like ``KubernetesPod`` or ``SLURMJob``. The spec does not need\n to conform to the expectations of the standard Dask Worker class. It just\n needs to be called with the provided options, support ``__await__`` and\n ``close`` methods and the ``worker_address`` property..\n\n Also note that uniformity of the specification is not required. Other API\n could be added externally (in subclasses) that adds workers of different\n specifications into the same dictionary.\n \"\"\"\n\n _instances = weakref.WeakSet()\n\n def __init__(\n self,\n workers=None,\n scheduler=None,\n worker=None,\n asynchronous=False,\n loop=None,\n security=None,\n silence_logs=False,\n ):\n self._created = weakref.WeakSet()\n\n self.scheduler_spec = scheduler\n self.worker_spec = workers or {}\n self.new_spec = worker\n self.workers = {}\n self._i = 0\n self._asynchronous = asynchronous\n self.security = security or Security()\n self.scheduler_comm = None\n\n if silence_logs:\n self._old_logging_level = silence_logging(level=silence_logs)\n\n self._loop_runner = LoopRunner(loop=loop, asynchronous=asynchronous)\n self.loop = self._loop_runner.loop\n\n self.status = \"created\"\n self._instances.add(self)\n self._correct_state_waiting = None\n\n if not self.asynchronous:\n self._loop_runner.start()\n self.sync(self._start)\n self.sync(self._correct_state)\n self.sync(self._wait_for_workers)\n\n async def _start(self):\n while self.status == \"starting\":\n await asyncio.sleep(0.01)\n if self.status == \"running\":\n return\n if self.status == \"closed\":\n raise ValueError(\"Cluster is closed\")\n\n if self.scheduler_spec is None:\n try:\n from distributed.dashboard import BokehScheduler\n except ImportError:\n services = {}\n else:\n services = {(\"dashboard\", 8787): BokehScheduler}\n self.scheduler_spec = {\"cls\": Scheduler, \"options\": {\"services\": services}}\n self.scheduler = self.scheduler_spec[\"cls\"](\n loop=self.loop, **self.scheduler_spec.get(\"options\", {})\n )\n\n self._lock = asyncio.Lock()\n self.status = \"starting\"\n self.scheduler = await self.scheduler\n self.scheduler_comm = rpc(\n self.scheduler.address,\n connection_args=self.security.get_connection_args(\"client\"),\n )\n self.status = \"running\"\n\n def _correct_state(self):\n if self._correct_state_waiting:\n # If people call this frequently, we only want to run it once\n return 
self._correct_state_waiting\n else:\n task = asyncio.ensure_future(self._correct_state_internal())\n self._correct_state_waiting = task\n return task\n\n async def _correct_state_internal(self):\n async with self._lock:\n self._correct_state_waiting = None\n\n pre = list(set(self.workers))\n to_close = set(self.workers) - set(self.worker_spec)\n if to_close:\n if self.scheduler.status == \"running\":\n await self.scheduler_comm.retire_workers(workers=list(to_close))\n tasks = [self.workers[w].close() for w in to_close]\n await asyncio.wait(tasks)\n for task in tasks: # for tornado gen.coroutine support\n with ignoring(RuntimeError):\n await task\n for name in to_close:\n del self.workers[name]\n\n to_open = set(self.worker_spec) - set(self.workers)\n workers = []\n for name in to_open:\n d = self.worker_spec[name]\n cls, opts = d[\"cls\"], d.get(\"options\", {})\n if \"name\" not in opts:\n opts = opts.copy()\n opts[\"name\"] = name\n worker = cls(self.scheduler.address, **opts)\n self._created.add(worker)\n workers.append(worker)\n if workers:\n await asyncio.wait(workers)\n for w in workers:\n w._cluster = weakref.ref(self)\n await w # for tornado gen.coroutine support\n self.workers.update(dict(zip(to_open, workers)))\n\n def __await__(self):\n async def _():\n if self.status == \"created\":\n await self._start()\n await self.scheduler\n await self._correct_state()\n if self.workers:\n await asyncio.wait(list(self.workers.values())) # maybe there are more\n await self._wait_for_workers()\n return self\n\n return _().__await__()\n\n async def _wait_for_workers(self):\n while {\n str(d[\"name\"])\n for d in (await self.scheduler_comm.identity())[\"workers\"].values()\n } != set(map(str, self.workers)):\n if (\n any(w.status == \"closed\" for w in self.workers.values())\n and self.scheduler.status == \"running\"\n ):\n raise gen.TimeoutError(\"Worker unexpectedly closed\")\n await asyncio.sleep(0.1)\n\n async def __aenter__(self):\n await self\n return self\n\n async def __aexit__(self, typ, value, traceback):\n await self.close()\n\n async def _close(self):\n while self.status == \"closing\":\n await asyncio.sleep(0.1)\n if self.status == \"closed\":\n return\n self.status = \"closing\"\n\n self.scale(0)\n await self._correct_state()\n async with self._lock:\n with ignoring(CommClosedError):\n await self.scheduler_comm.close(close_workers=True)\n await self.scheduler.close()\n for w in self._created:\n assert w.status == \"closed\"\n self.scheduler_comm.close_rpc()\n\n if hasattr(self, \"_old_logging_level\"):\n silence_logging(self._old_logging_level)\n\n self.status = \"closed\"\n\n def close(self, timeout=None):\n with ignoring(RuntimeError): # loop closed during process shutdown\n return self.sync(self._close, callback_timeout=timeout)\n\n def __del__(self):\n if self.status != \"closed\":\n self.close()\n\n def __enter__(self):\n self.sync(self._correct_state)\n self.sync(self._wait_for_workers)\n assert self.status == \"running\"\n return self\n\n def __exit__(self, typ, value, traceback):\n self.close()\n self._loop_runner.stop()\n\n def scale(self, n):\n while len(self.worker_spec) > n:\n self.worker_spec.popitem()\n\n if self.status in (\"closing\", \"closed\"):\n self.loop.add_callback(self._correct_state)\n return\n\n while len(self.worker_spec) < n:\n k, spec = self.new_worker_spec()\n self.worker_spec[k] = spec\n\n self.loop.add_callback(self._correct_state)\n\n def new_worker_spec(self):\n \"\"\" Return name and spec for the next worker\n\n Returns\n -------\n name: 
identifier for worker\n spec: dict\n\n See Also\n --------\n scale\n \"\"\"\n while self._i in self.worker_spec:\n self._i += 1\n\n return self._i, self.new_spec\n\n async def scale_down(self, workers):\n workers = set(workers)\n\n # TODO: this is linear cost. We should be indexing by name or something\n to_close = [w for w in self.workers.values() if w.address in workers]\n for k, v in self.workers.items():\n if v.worker_address in workers:\n del self.worker_spec[k]\n\n await self\n\n scale_up = scale # backwards compatibility\n\n def __repr__(self):\n return \"%s(%r, workers=%d)\" % (\n type(self).__name__,\n self.scheduler_address,\n len(self.workers),\n )\n\n\[email protected]\ndef close_clusters():\n for cluster in list(SpecCluster._instances):\n with ignoring(gen.TimeoutError):\n if cluster.status != \"closed\":\n cluster.close(timeout=10)\n", "path": "distributed/deploy/spec.py"}], "after_files": [{"content": "import asyncio\nimport atexit\nimport weakref\n\nfrom tornado import gen\n\nfrom .cluster import Cluster\nfrom ..core import rpc, CommClosedError\nfrom ..utils import LoopRunner, silence_logging, ignoring\nfrom ..scheduler import Scheduler\nfrom ..security import Security\n\n\nclass SpecCluster(Cluster):\n \"\"\" Cluster that requires a full specification of workers\n\n The SpecCluster class expects a full specification of the Scheduler and\n Workers to use. It removes any handling of user inputs (like threads vs\n processes, number of cores, and so on) and any handling of cluster resource\n managers (like pods, jobs, and so on). Instead, it expects this\n information to be passed in scheduler and worker specifications. This\n class does handle all of the logic around asynchronously cleanly setting up\n and tearing things down at the right times. Hopefully it can form a base\n for other more user-centric classes.\n\n Parameters\n ----------\n workers: dict\n A dictionary mapping names to worker classes and their specifications\n See example below\n scheduler: dict, optional\n A similar mapping for a scheduler\n worker: dict\n A specification of a single worker.\n This is used for any new workers that are created.\n asynchronous: bool\n If this is intended to be used directly within an event loop with\n async/await\n silence_logs: bool\n Whether or not we should silence logging when setting up the cluster.\n\n Examples\n --------\n To create a SpecCluster you specify how to set up a Scheduler and Workers\n\n >>> from dask.distributed import Scheduler, Worker, Nanny\n >>> scheduler = {'cls': Scheduler, 'options': {\"dashboard_address\": ':8787'}}\n >>> workers = {\n ... 'my-worker': {\"cls\": Worker, \"options\": {\"nthreads\": 1}},\n ... 'my-nanny': {\"cls\": Nanny, \"options\": {\"nthreads\": 2}},\n ... 
}\n >>> cluster = SpecCluster(scheduler=scheduler, workers=workers)\n\n The worker spec is stored as the ``.worker_spec`` attribute\n\n >>> cluster.worker_spec\n {\n 'my-worker': {\"cls\": Worker, \"options\": {\"nthreads\": 1}},\n 'my-nanny': {\"cls\": Nanny, \"options\": {\"nthreads\": 2}},\n }\n\n While the instantiation of this spec is stored in the ``.workers``\n attribute\n\n >>> cluster.workers\n {\n 'my-worker': <Worker ...>\n 'my-nanny': <Nanny ...>\n }\n\n Should the spec change, we can await the cluster or call the\n ``._correct_state`` method to align the actual state to the specified\n state.\n\n We can also ``.scale(...)`` the cluster, which adds new workers of a given\n form.\n\n >>> worker = {'cls': Worker, 'options': {}}\n >>> cluster = SpecCluster(scheduler=scheduler, worker=worker)\n >>> cluster.worker_spec\n {}\n\n >>> cluster.scale(3)\n >>> cluster.worker_spec\n {\n 0: {'cls': Worker, 'options': {}},\n 1: {'cls': Worker, 'options': {}},\n 2: {'cls': Worker, 'options': {}},\n }\n\n Note that above we are using the standard ``Worker`` and ``Nanny`` classes,\n however in practice other classes could be used that handle resource\n management like ``KubernetesPod`` or ``SLURMJob``. The spec does not need\n to conform to the expectations of the standard Dask Worker class. It just\n needs to be called with the provided options, support ``__await__`` and\n ``close`` methods and the ``worker_address`` property..\n\n Also note that uniformity of the specification is not required. Other API\n could be added externally (in subclasses) that adds workers of different\n specifications into the same dictionary.\n \"\"\"\n\n _instances = weakref.WeakSet()\n\n def __init__(\n self,\n workers=None,\n scheduler=None,\n worker=None,\n asynchronous=False,\n loop=None,\n security=None,\n silence_logs=False,\n ):\n self._created = weakref.WeakSet()\n\n self.scheduler_spec = scheduler\n self.worker_spec = workers or {}\n self.new_spec = worker\n self.workers = {}\n self._i = 0\n self._asynchronous = asynchronous\n self.security = security or Security()\n self.scheduler_comm = None\n\n if silence_logs:\n self._old_logging_level = silence_logging(level=silence_logs)\n\n self._loop_runner = LoopRunner(loop=loop, asynchronous=asynchronous)\n self.loop = self._loop_runner.loop\n\n self.status = \"created\"\n self._instances.add(self)\n self._correct_state_waiting = None\n\n if not self.asynchronous:\n self._loop_runner.start()\n self.sync(self._start)\n self.sync(self._correct_state)\n self.sync(self._wait_for_workers)\n\n async def _start(self):\n while self.status == \"starting\":\n await asyncio.sleep(0.01)\n if self.status == \"running\":\n return\n if self.status == \"closed\":\n raise ValueError(\"Cluster is closed\")\n\n if self.scheduler_spec is None:\n try:\n from distributed.dashboard import BokehScheduler\n except ImportError:\n services = {}\n else:\n services = {(\"dashboard\", 8787): BokehScheduler}\n self.scheduler_spec = {\"cls\": Scheduler, \"options\": {\"services\": services}}\n self.scheduler = self.scheduler_spec[\"cls\"](\n loop=self.loop, **self.scheduler_spec.get(\"options\", {})\n )\n\n self._lock = asyncio.Lock()\n self.status = \"starting\"\n self.scheduler = await self.scheduler\n self.scheduler_comm = rpc(\n self.scheduler.address,\n connection_args=self.security.get_connection_args(\"client\"),\n )\n self.status = \"running\"\n\n def _correct_state(self):\n if self._correct_state_waiting:\n # If people call this frequently, we only want to run it once\n return 
self._correct_state_waiting\n else:\n task = asyncio.ensure_future(self._correct_state_internal())\n self._correct_state_waiting = task\n return task\n\n async def _correct_state_internal(self):\n async with self._lock:\n self._correct_state_waiting = None\n\n pre = list(set(self.workers))\n to_close = set(self.workers) - set(self.worker_spec)\n if to_close:\n if self.scheduler.status == \"running\":\n await self.scheduler_comm.retire_workers(workers=list(to_close))\n tasks = [self.workers[w].close() for w in to_close]\n await asyncio.wait(tasks)\n for task in tasks: # for tornado gen.coroutine support\n with ignoring(RuntimeError):\n await task\n for name in to_close:\n del self.workers[name]\n\n to_open = set(self.worker_spec) - set(self.workers)\n workers = []\n for name in to_open:\n d = self.worker_spec[name]\n cls, opts = d[\"cls\"], d.get(\"options\", {})\n if \"name\" not in opts:\n opts = opts.copy()\n opts[\"name\"] = name\n worker = cls(self.scheduler.address, **opts)\n self._created.add(worker)\n workers.append(worker)\n if workers:\n await asyncio.wait(workers)\n for w in workers:\n w._cluster = weakref.ref(self)\n await w # for tornado gen.coroutine support\n self.workers.update(dict(zip(to_open, workers)))\n\n def __await__(self):\n async def _():\n if self.status == \"created\":\n await self._start()\n await self.scheduler\n await self._correct_state()\n if self.workers:\n await asyncio.wait(list(self.workers.values())) # maybe there are more\n await self._wait_for_workers()\n return self\n\n return _().__await__()\n\n async def _wait_for_workers(self):\n while {\n str(d[\"name\"])\n for d in (await self.scheduler_comm.identity())[\"workers\"].values()\n } != set(map(str, self.workers)):\n if (\n any(w.status == \"closed\" for w in self.workers.values())\n and self.scheduler.status == \"running\"\n ):\n raise gen.TimeoutError(\"Worker unexpectedly closed\")\n await asyncio.sleep(0.1)\n\n async def __aenter__(self):\n await self\n return self\n\n async def __aexit__(self, typ, value, traceback):\n await self.close()\n\n async def _close(self):\n while self.status == \"closing\":\n await asyncio.sleep(0.1)\n if self.status == \"closed\":\n return\n self.status = \"closing\"\n\n self.scale(0)\n await self._correct_state()\n async with self._lock:\n with ignoring(CommClosedError):\n await self.scheduler_comm.close(close_workers=True)\n await self.scheduler.close()\n for w in self._created:\n assert w.status == \"closed\"\n self.scheduler_comm.close_rpc()\n\n if hasattr(self, \"_old_logging_level\"):\n silence_logging(self._old_logging_level)\n\n self.status = \"closed\"\n\n def close(self, timeout=None):\n with ignoring(RuntimeError): # loop closed during process shutdown\n return self.sync(self._close, callback_timeout=timeout)\n\n def __del__(self):\n if self.status != \"closed\":\n self.close()\n\n def __enter__(self):\n self.sync(self._correct_state)\n self.sync(self._wait_for_workers)\n assert self.status == \"running\"\n return self\n\n def __exit__(self, typ, value, traceback):\n self.close()\n self._loop_runner.stop()\n\n def scale(self, n):\n while len(self.worker_spec) > n:\n self.worker_spec.popitem()\n\n if self.status in (\"closing\", \"closed\"):\n self.loop.add_callback(self._correct_state)\n return\n\n while len(self.worker_spec) < n:\n k, spec = self.new_worker_spec()\n self.worker_spec[k] = spec\n\n self.loop.add_callback(self._correct_state)\n\n def new_worker_spec(self):\n \"\"\" Return name and spec for the next worker\n\n Returns\n -------\n name: 
identifier for worker\n spec: dict\n\n See Also\n --------\n scale\n \"\"\"\n while self._i in self.worker_spec:\n self._i += 1\n\n return self._i, self.new_spec\n\n async def scale_down(self, workers):\n workers = set(workers)\n\n for k, v in self.workers.items():\n if v.worker_address in workers:\n del self.worker_spec[k]\n\n await self\n\n scale_up = scale # backwards compatibility\n\n def __repr__(self):\n return \"%s(%r, workers=%d)\" % (\n type(self).__name__,\n self.scheduler_address,\n len(self.workers),\n )\n\n\[email protected]\ndef close_clusters():\n for cluster in list(SpecCluster._instances):\n with ignoring(gen.TimeoutError):\n if cluster.status != \"closed\":\n cluster.close(timeout=10)\n", "path": "distributed/deploy/spec.py"}]}
| 3,856 | 134 |
gh_patches_debug_42539
|
rasdani/github-patches
|
git_diff
|
translate__pootle-6705
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecate update/sync stores
Wondering what is best with these commands.
on the one hand they are quite useful for grouping common operations
on the other, it would be better for users to learn the more powerful fs api, and grouping can be done in other ways
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_app/management/commands/update_stores.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import logging
10 import os
11 os.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'
12
13 from pootle_app.management.commands import PootleCommand
14 from pootle_language.models import Language
15 from pootle_fs.utils import FSPlugin
16 from pootle_project.models import Project
17
18
19 logger = logging.getLogger(__name__)
20
21
22 class Command(PootleCommand):
23 help = "Update database stores from files."
24 process_disabled_projects = True
25 log_name = "update"
26
27 def add_arguments(self, parser):
28 super(Command, self).add_arguments(parser)
29 parser.add_argument(
30 '--overwrite',
31 action='store_true',
32 dest='overwrite',
33 default=False,
34 help="Don't just update untranslated units "
35 "and add new units, but overwrite database "
36 "translations to reflect state in files.",
37 )
38 parser.add_argument(
39 '--force',
40 action='store_true',
41 dest='force',
42 default=False,
43 help="Unconditionally process all files (even if they "
44 "appear unchanged).",
45 )
46
47 def handle_translation_project(self, translation_project, **options):
48 """
49 """
50 path_glob = "%s*" % translation_project.pootle_path
51 plugin = FSPlugin(translation_project.project)
52 plugin.add(pootle_path=path_glob, update="pootle")
53 plugin.rm(pootle_path=path_glob, update="pootle")
54 plugin.resolve(pootle_path=path_glob)
55 plugin.sync(pootle_path=path_glob, update="pootle")
56
57 def _parse_tps_to_create(self, project):
58 plugin = FSPlugin(project)
59 plugin.fetch()
60 untracked_languages = set(
61 fs.pootle_path.split("/")[1]
62 for fs
63 in plugin.state()["fs_untracked"])
64 new_langs = (
65 [lang for lang
66 in untracked_languages
67 if lang in self.languages]
68 if self.languages
69 else untracked_languages)
70 return Language.objects.filter(
71 code__in=new_langs).exclude(
72 code__in=project.translationproject_set.values_list(
73 "language__code", flat=True))
74
75 def _create_tps_for_project(self, project):
76 for language in self._parse_tps_to_create(project):
77 project.translationproject_set.create(
78 language=language,
79 project=project)
80
81 def handle_all(self, **options):
82 projects = (
83 Project.objects.filter(code__in=self.projects)
84 if self.projects
85 else Project.objects.all())
86 for project in projects.iterator():
87 self._create_tps_for_project(project)
88 super(Command, self).handle_all(**options)
89
```
Path: `pootle/apps/pootle_app/management/commands/sync_stores.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import logging
10 import os
11 os.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'
12
13 from pootle_app.management.commands import PootleCommand
14 from pootle_fs.utils import FSPlugin
15
16
17 logger = logging.getLogger(__name__)
18
19
20 class Command(PootleCommand):
21 help = "Save new translations to disk manually."
22 process_disabled_projects = True
23
24 def add_arguments(self, parser):
25 super(Command, self).add_arguments(parser)
26 parser.add_argument(
27 '--overwrite',
28 action='store_true',
29 dest='overwrite',
30 default=False,
31 help="Don't just save translations, but "
32 "overwrite files to reflect state in database",
33 )
34 parser.add_argument(
35 '--skip-missing',
36 action='store_true',
37 dest='skip_missing',
38 default=False,
39 help="Ignore missing files on disk",
40 )
41 parser.add_argument(
42 '--force',
43 action='store_true',
44 dest='force',
45 default=False,
46 help="Don't ignore stores synced after last change",
47 )
48
49 warn_on_conflict = []
50
51 def handle_all_stores(self, translation_project, **options):
52 path_glob = "%s*" % translation_project.pootle_path
53 plugin = FSPlugin(translation_project.project)
54 plugin.fetch()
55 if translation_project.project.pk not in self.warn_on_conflict:
56 state = plugin.state()
57 if any(k in state for k in ["conflict", "conflict_untracked"]):
58 logger.warn(
59 "The project '%s' has conflicting changes in the database "
60 "and translation files. Use `pootle fs resolve` to tell "
61 "pootle how to merge",
62 translation_project.project.code)
63 self.warn_on_conflict.append(
64 translation_project.project.pk)
65 if not options["skip_missing"]:
66 plugin.add(pootle_path=path_glob, update="fs")
67 if options["overwrite"]:
68 plugin.resolve(
69 pootle_path=path_glob,
70 pootle_wins=True)
71 plugin.sync(pootle_path=path_glob, update="fs")
72 if options["force"]:
73 # touch the timestamps on disk for files that
74 # werent updated
75 pass
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pootle/apps/pootle_app/management/commands/sync_stores.py b/pootle/apps/pootle_app/management/commands/sync_stores.py
--- a/pootle/apps/pootle_app/management/commands/sync_stores.py
+++ b/pootle/apps/pootle_app/management/commands/sync_stores.py
@@ -28,9 +28,7 @@
action='store_true',
dest='overwrite',
default=False,
- help="Don't just save translations, but "
- "overwrite files to reflect state in database",
- )
+ help="This option has been removed.")
parser.add_argument(
'--skip-missing',
action='store_true',
@@ -43,11 +41,21 @@
action='store_true',
dest='force',
default=False,
- help="Don't ignore stores synced after last change",
- )
+ help="This option has been removed.")
warn_on_conflict = []
+ def handle(self, **options):
+ logger.warn(
+ "The sync_stores command is deprecated, use pootle fs instead")
+ if options["force"]:
+ logger.warn(
+ "The force option no longer has any affect on this command")
+ if options["overwrite"]:
+ logger.warn(
+ "The overwrite option no longer has any affect on this command")
+ super(Command, self).handle(**options)
+
def handle_all_stores(self, translation_project, **options):
path_glob = "%s*" % translation_project.pootle_path
plugin = FSPlugin(translation_project.project)
@@ -64,12 +72,4 @@
translation_project.project.pk)
if not options["skip_missing"]:
plugin.add(pootle_path=path_glob, update="fs")
- if options["overwrite"]:
- plugin.resolve(
- pootle_path=path_glob,
- pootle_wins=True)
plugin.sync(pootle_path=path_glob, update="fs")
- if options["force"]:
- # touch the timestamps on disk for files that
- # werent updated
- pass
diff --git a/pootle/apps/pootle_app/management/commands/update_stores.py b/pootle/apps/pootle_app/management/commands/update_stores.py
--- a/pootle/apps/pootle_app/management/commands/update_stores.py
+++ b/pootle/apps/pootle_app/management/commands/update_stores.py
@@ -40,9 +40,7 @@
action='store_true',
dest='force',
default=False,
- help="Unconditionally process all files (even if they "
- "appear unchanged).",
- )
+ help="This option has been removed.")
def handle_translation_project(self, translation_project, **options):
"""
@@ -51,7 +49,9 @@
plugin = FSPlugin(translation_project.project)
plugin.add(pootle_path=path_glob, update="pootle")
plugin.rm(pootle_path=path_glob, update="pootle")
- plugin.resolve(pootle_path=path_glob)
+ plugin.resolve(
+ pootle_path=path_glob,
+ merge=not options["overwrite"])
plugin.sync(pootle_path=path_glob, update="pootle")
def _parse_tps_to_create(self, project):
@@ -79,6 +79,11 @@
project=project)
def handle_all(self, **options):
+ logger.warn(
+ "The update_stores command is deprecated, use pootle fs instead")
+ if options["force"]:
+ logger.warn(
+ "The force option no longer has any affect on this command")
projects = (
Project.objects.filter(code__in=self.projects)
if self.projects
|
{"golden_diff": "diff --git a/pootle/apps/pootle_app/management/commands/sync_stores.py b/pootle/apps/pootle_app/management/commands/sync_stores.py\n--- a/pootle/apps/pootle_app/management/commands/sync_stores.py\n+++ b/pootle/apps/pootle_app/management/commands/sync_stores.py\n@@ -28,9 +28,7 @@\n action='store_true',\n dest='overwrite',\n default=False,\n- help=\"Don't just save translations, but \"\n- \"overwrite files to reflect state in database\",\n- )\n+ help=\"This option has been removed.\")\n parser.add_argument(\n '--skip-missing',\n action='store_true',\n@@ -43,11 +41,21 @@\n action='store_true',\n dest='force',\n default=False,\n- help=\"Don't ignore stores synced after last change\",\n- )\n+ help=\"This option has been removed.\")\n \n warn_on_conflict = []\n \n+ def handle(self, **options):\n+ logger.warn(\n+ \"The sync_stores command is deprecated, use pootle fs instead\")\n+ if options[\"force\"]:\n+ logger.warn(\n+ \"The force option no longer has any affect on this command\")\n+ if options[\"overwrite\"]:\n+ logger.warn(\n+ \"The overwrite option no longer has any affect on this command\")\n+ super(Command, self).handle(**options)\n+\n def handle_all_stores(self, translation_project, **options):\n path_glob = \"%s*\" % translation_project.pootle_path\n plugin = FSPlugin(translation_project.project)\n@@ -64,12 +72,4 @@\n translation_project.project.pk)\n if not options[\"skip_missing\"]:\n plugin.add(pootle_path=path_glob, update=\"fs\")\n- if options[\"overwrite\"]:\n- plugin.resolve(\n- pootle_path=path_glob,\n- pootle_wins=True)\n plugin.sync(pootle_path=path_glob, update=\"fs\")\n- if options[\"force\"]:\n- # touch the timestamps on disk for files that\n- # werent updated\n- pass\ndiff --git a/pootle/apps/pootle_app/management/commands/update_stores.py b/pootle/apps/pootle_app/management/commands/update_stores.py\n--- a/pootle/apps/pootle_app/management/commands/update_stores.py\n+++ b/pootle/apps/pootle_app/management/commands/update_stores.py\n@@ -40,9 +40,7 @@\n action='store_true',\n dest='force',\n default=False,\n- help=\"Unconditionally process all files (even if they \"\n- \"appear unchanged).\",\n- )\n+ help=\"This option has been removed.\")\n \n def handle_translation_project(self, translation_project, **options):\n \"\"\"\n@@ -51,7 +49,9 @@\n plugin = FSPlugin(translation_project.project)\n plugin.add(pootle_path=path_glob, update=\"pootle\")\n plugin.rm(pootle_path=path_glob, update=\"pootle\")\n- plugin.resolve(pootle_path=path_glob)\n+ plugin.resolve(\n+ pootle_path=path_glob,\n+ merge=not options[\"overwrite\"])\n plugin.sync(pootle_path=path_glob, update=\"pootle\")\n \n def _parse_tps_to_create(self, project):\n@@ -79,6 +79,11 @@\n project=project)\n \n def handle_all(self, **options):\n+ logger.warn(\n+ \"The update_stores command is deprecated, use pootle fs instead\")\n+ if options[\"force\"]:\n+ logger.warn(\n+ \"The force option no longer has any affect on this command\")\n projects = (\n Project.objects.filter(code__in=self.projects)\n if self.projects\n", "issue": "Deprecate update/sync stores\nWondering what is best with these commands.\r\n\r\non the one hand they are quite useful for grouping common operations\r\n\r\non the other, it would be better for users to learn the more powerful fs api, and grouping can be done in other ways\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport logging\nimport os\nos.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'\n\nfrom pootle_app.management.commands import PootleCommand\nfrom pootle_language.models import Language\nfrom pootle_fs.utils import FSPlugin\nfrom pootle_project.models import Project\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Command(PootleCommand):\n help = \"Update database stores from files.\"\n process_disabled_projects = True\n log_name = \"update\"\n\n def add_arguments(self, parser):\n super(Command, self).add_arguments(parser)\n parser.add_argument(\n '--overwrite',\n action='store_true',\n dest='overwrite',\n default=False,\n help=\"Don't just update untranslated units \"\n \"and add new units, but overwrite database \"\n \"translations to reflect state in files.\",\n )\n parser.add_argument(\n '--force',\n action='store_true',\n dest='force',\n default=False,\n help=\"Unconditionally process all files (even if they \"\n \"appear unchanged).\",\n )\n\n def handle_translation_project(self, translation_project, **options):\n \"\"\"\n \"\"\"\n path_glob = \"%s*\" % translation_project.pootle_path\n plugin = FSPlugin(translation_project.project)\n plugin.add(pootle_path=path_glob, update=\"pootle\")\n plugin.rm(pootle_path=path_glob, update=\"pootle\")\n plugin.resolve(pootle_path=path_glob)\n plugin.sync(pootle_path=path_glob, update=\"pootle\")\n\n def _parse_tps_to_create(self, project):\n plugin = FSPlugin(project)\n plugin.fetch()\n untracked_languages = set(\n fs.pootle_path.split(\"/\")[1]\n for fs\n in plugin.state()[\"fs_untracked\"])\n new_langs = (\n [lang for lang\n in untracked_languages\n if lang in self.languages]\n if self.languages\n else untracked_languages)\n return Language.objects.filter(\n code__in=new_langs).exclude(\n code__in=project.translationproject_set.values_list(\n \"language__code\", flat=True))\n\n def _create_tps_for_project(self, project):\n for language in self._parse_tps_to_create(project):\n project.translationproject_set.create(\n language=language,\n project=project)\n\n def handle_all(self, **options):\n projects = (\n Project.objects.filter(code__in=self.projects)\n if self.projects\n else Project.objects.all())\n for project in projects.iterator():\n self._create_tps_for_project(project)\n super(Command, self).handle_all(**options)\n", "path": "pootle/apps/pootle_app/management/commands/update_stores.py"}, {"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport logging\nimport os\nos.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'\n\nfrom pootle_app.management.commands import PootleCommand\nfrom pootle_fs.utils import FSPlugin\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Command(PootleCommand):\n help = \"Save new translations to disk manually.\"\n process_disabled_projects = True\n\n def add_arguments(self, parser):\n super(Command, self).add_arguments(parser)\n parser.add_argument(\n '--overwrite',\n action='store_true',\n dest='overwrite',\n default=False,\n help=\"Don't just save translations, but \"\n \"overwrite files to reflect state in database\",\n )\n parser.add_argument(\n '--skip-missing',\n action='store_true',\n dest='skip_missing',\n default=False,\n help=\"Ignore missing files on disk\",\n )\n parser.add_argument(\n '--force',\n action='store_true',\n dest='force',\n default=False,\n help=\"Don't ignore stores synced after last change\",\n )\n\n warn_on_conflict = []\n\n def handle_all_stores(self, translation_project, **options):\n path_glob = \"%s*\" % translation_project.pootle_path\n plugin = FSPlugin(translation_project.project)\n plugin.fetch()\n if translation_project.project.pk not in self.warn_on_conflict:\n state = plugin.state()\n if any(k in state for k in [\"conflict\", \"conflict_untracked\"]):\n logger.warn(\n \"The project '%s' has conflicting changes in the database \"\n \"and translation files. Use `pootle fs resolve` to tell \"\n \"pootle how to merge\",\n translation_project.project.code)\n self.warn_on_conflict.append(\n translation_project.project.pk)\n if not options[\"skip_missing\"]:\n plugin.add(pootle_path=path_glob, update=\"fs\")\n if options[\"overwrite\"]:\n plugin.resolve(\n pootle_path=path_glob,\n pootle_wins=True)\n plugin.sync(pootle_path=path_glob, update=\"fs\")\n if options[\"force\"]:\n # touch the timestamps on disk for files that\n # werent updated\n pass\n", "path": "pootle/apps/pootle_app/management/commands/sync_stores.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport logging\nimport os\nos.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'\n\nfrom pootle_app.management.commands import PootleCommand\nfrom pootle_language.models import Language\nfrom pootle_fs.utils import FSPlugin\nfrom pootle_project.models import Project\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Command(PootleCommand):\n help = \"Update database stores from files.\"\n process_disabled_projects = True\n log_name = \"update\"\n\n def add_arguments(self, parser):\n super(Command, self).add_arguments(parser)\n parser.add_argument(\n '--overwrite',\n action='store_true',\n dest='overwrite',\n default=False,\n help=\"Don't just update untranslated units \"\n \"and add new units, but overwrite database \"\n \"translations to reflect state in files.\",\n )\n parser.add_argument(\n '--force',\n action='store_true',\n dest='force',\n default=False,\n help=\"This option has been removed.\")\n\n def handle_translation_project(self, translation_project, **options):\n \"\"\"\n \"\"\"\n path_glob = \"%s*\" % translation_project.pootle_path\n plugin = FSPlugin(translation_project.project)\n plugin.add(pootle_path=path_glob, update=\"pootle\")\n plugin.rm(pootle_path=path_glob, update=\"pootle\")\n plugin.resolve(\n pootle_path=path_glob,\n merge=not options[\"overwrite\"])\n plugin.sync(pootle_path=path_glob, update=\"pootle\")\n\n def _parse_tps_to_create(self, project):\n plugin = FSPlugin(project)\n plugin.fetch()\n untracked_languages = set(\n fs.pootle_path.split(\"/\")[1]\n for fs\n in plugin.state()[\"fs_untracked\"])\n new_langs = (\n [lang for lang\n in untracked_languages\n if lang in self.languages]\n if self.languages\n else untracked_languages)\n return Language.objects.filter(\n code__in=new_langs).exclude(\n code__in=project.translationproject_set.values_list(\n \"language__code\", flat=True))\n\n def _create_tps_for_project(self, project):\n for language in self._parse_tps_to_create(project):\n project.translationproject_set.create(\n language=language,\n project=project)\n\n def handle_all(self, **options):\n logger.warn(\n \"The update_stores command is deprecated, use pootle fs instead\")\n if options[\"force\"]:\n logger.warn(\n \"The force option no longer has any affect on this command\")\n projects = (\n Project.objects.filter(code__in=self.projects)\n if self.projects\n else Project.objects.all())\n for project in projects.iterator():\n self._create_tps_for_project(project)\n super(Command, self).handle_all(**options)\n", "path": "pootle/apps/pootle_app/management/commands/update_stores.py"}, {"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport logging\nimport os\nos.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'\n\nfrom pootle_app.management.commands import PootleCommand\nfrom pootle_fs.utils import FSPlugin\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Command(PootleCommand):\n help = \"Save new translations to disk manually.\"\n process_disabled_projects = True\n\n def add_arguments(self, parser):\n super(Command, self).add_arguments(parser)\n parser.add_argument(\n '--overwrite',\n action='store_true',\n dest='overwrite',\n default=False,\n help=\"This option has been removed.\")\n parser.add_argument(\n '--skip-missing',\n action='store_true',\n dest='skip_missing',\n default=False,\n help=\"Ignore missing files on disk\",\n )\n parser.add_argument(\n '--force',\n action='store_true',\n dest='force',\n default=False,\n help=\"This option has been removed.\")\n\n warn_on_conflict = []\n\n def handle(self, **options):\n logger.warn(\n \"The sync_stores command is deprecated, use pootle fs instead\")\n if options[\"force\"]:\n logger.warn(\n \"The force option no longer has any affect on this command\")\n if options[\"overwrite\"]:\n logger.warn(\n \"The overwrite option no longer has any affect on this command\")\n super(Command, self).handle(**options)\n\n def handle_all_stores(self, translation_project, **options):\n path_glob = \"%s*\" % translation_project.pootle_path\n plugin = FSPlugin(translation_project.project)\n plugin.fetch()\n if translation_project.project.pk not in self.warn_on_conflict:\n state = plugin.state()\n if any(k in state for k in [\"conflict\", \"conflict_untracked\"]):\n logger.warn(\n \"The project '%s' has conflicting changes in the database \"\n \"and translation files. Use `pootle fs resolve` to tell \"\n \"pootle how to merge\",\n translation_project.project.code)\n self.warn_on_conflict.append(\n translation_project.project.pk)\n if not options[\"skip_missing\"]:\n plugin.add(pootle_path=path_glob, update=\"fs\")\n plugin.sync(pootle_path=path_glob, update=\"fs\")\n", "path": "pootle/apps/pootle_app/management/commands/sync_stores.py"}]}
| 1,836 | 843 |
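Editorial aside on the Pootle record above (an added illustration, not part of the dataset row): the golden diff deprecates `update_stores`/`sync_stores` by keeping the old flags but turning them into warnings inside `handle()`. A generic sketch of that deprecation-shim pattern for a plain Django management command, assuming `BaseCommand` rather than Pootle's actual `PootleCommand` base class:

```python
# Generic deprecation shim (assumed names; not Pootle's real code).
import logging

from django.core.management.base import BaseCommand

logger = logging.getLogger(__name__)


class Command(BaseCommand):
    help = "Deprecated wrapper kept so existing scripts keep working."

    def add_arguments(self, parser):
        parser.add_argument(
            "--force", action="store_true", dest="force", default=False,
            help="This option has been removed.")

    def handle(self, **options):
        logger.warning("This command is deprecated, use the replacement API instead")
        if options["force"]:
            logger.warning("The force option no longer has any effect")
        # ...delegate to the replacement implementation here...
```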
gh_patches_debug_58086
|
rasdani/github-patches
|
git_diff
|
secondmind-labs__trieste-730
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot install trieste from pypi on MacOS
**Describe the bug**
`pip install trieste` fails on MacOS
**To reproduce**
Steps to reproduce the behaviour:
```
$ pip install trieste
Collecting trieste
Downloading trieste-1.1.2-py3-none-any.whl (246 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 246.6/246.6 kB 3.4 MB/s eta 0:00:00
Downloading trieste-1.1.1-py3-none-any.whl (246 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 246.5/246.5 kB 10.5 MB/s eta 0:00:00
Downloading trieste-1.1.0-py3-none-any.whl (246 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 246.5/246.5 kB 10.5 MB/s eta 0:00:00
Downloading trieste-1.0.0-py3-none-any.whl (240 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 240.4/240.4 kB 16.6 MB/s eta 0:00:00
Using cached trieste-0.13.3-py3-none-any.whl (233 kB)
Using cached trieste-0.13.2-py3-none-any.whl (218 kB)
Using cached trieste-0.13.1-py3-none-any.whl (220 kB)
Collecting dill==0.3.4
Using cached dill-0.3.4-py2.py3-none-any.whl (86 kB)
Collecting gpflow==2.5.2
Using cached gpflow-2.5.2-py3-none-any.whl (383 kB)
Collecting trieste
Using cached trieste-0.13.0-py3-none-any.whl (215 kB)
Using cached trieste-0.12.0-py3-none-any.whl (208 kB)
Using cached trieste-0.11.3-py3-none-any.whl (196 kB)
Using cached trieste-0.11.2-py3-none-any.whl (196 kB)
Using cached trieste-0.11.1-py3-none-any.whl (195 kB)
Using cached trieste-0.11.0-py3-none-any.whl (195 kB)
Using cached trieste-0.10.0-py3-none-any.whl (168 kB)
Using cached trieste-0.9.1-py3-none-any.whl (139 kB)
Using cached trieste-0.9.0-py3-none-any.whl (136 kB)
Using cached trieste-0.8.0-py3-none-any.whl (150 kB)
Using cached trieste-0.7.0-py3-none-any.whl (110 kB)
Using cached trieste-0.6.1-py3-none-any.whl (77 kB)
Using cached trieste-0.6.0-py3-none-any.whl (77 kB)
Using cached trieste-0.5.1-py3-none-any.whl (63 kB)
Collecting gpflow==2.2.*
Using cached gpflow-2.2.1-py3-none-any.whl (271 kB)
Collecting numpy
Downloading numpy-1.24.3-cp39-cp39-macosx_11_0_arm64.whl (13.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.9/13.9 MB 16.5 MB/s eta 0:00:00
Collecting trieste
Using cached trieste-0.5.0-py3-none-any.whl (63 kB)
Collecting gpflow==2.1.*
Using cached gpflow-2.1.5-py3-none-any.whl (260 kB)
Collecting trieste
Using cached trieste-0.4.0-py3-none-any.whl (43 kB)
Using cached trieste-0.3.1-py3-none-any.whl (38 kB)
Using cached trieste-0.3.0-py3-none-any.whl (38 kB)
Using cached trieste-0.2.0-py3-none-any.whl (35 kB)
ERROR: Cannot install trieste==0.10.0, trieste==0.11.0, trieste==0.11.1, trieste==0.11.2, trieste==0.11.3, trieste==0.12.0, trieste==0.13.0, trieste==0.13.1, trieste==0.13.2, trieste==0.13.3, trieste==0.2.0, trieste==0.3.0, trieste==0.3.1, trieste==0.4.0, trieste==0.5.0, trieste==0.5.1, trieste==0.6.0, trieste==0.6.1, trieste==0.7.0, trieste==0.8.0, trieste==0.9.0, trieste==0.9.1, trieste==1.0.0, trieste==1.1.0, trieste==1.1.1 and trieste==1.1.2 because these package versions have conflicting dependencies.
The conflict is caused by:
trieste 1.1.2 depends on tensorflow>=2.5
trieste 1.1.1 depends on tensorflow>=2.5
trieste 1.1.0 depends on tensorflow>=2.5
trieste 1.0.0 depends on tensorflow>=2.5
trieste 0.13.3 depends on tensorflow>=2.5
trieste 0.13.2 depends on tensorflow>=2.4
trieste 0.13.1 depends on tensorflow>=2.4
trieste 0.13.0 depends on tensorflow>=2.4
trieste 0.12.0 depends on tensorflow>=2.4
trieste 0.11.3 depends on tensorflow>=2.4
trieste 0.11.2 depends on tensorflow>=2.4
trieste 0.11.1 depends on tensorflow>=2.4
trieste 0.11.0 depends on tensorflow>=2.4
trieste 0.10.0 depends on tensorflow>=2.4
trieste 0.9.1 depends on tensorflow>=2.4
trieste 0.9.0 depends on tensorflow>=2.4
trieste 0.8.0 depends on tensorflow>=2.4
trieste 0.7.0 depends on tensorflow>=2.4
trieste 0.6.1 depends on tensorflow>=2.4
trieste 0.6.0 depends on tensorflow>=2.4
trieste 0.5.1 depends on tensorflow!=2.2.0, !=2.3.0 and >=2.1
trieste 0.5.0 depends on tensorflow!=2.2.0, !=2.3.0 and >=2.1
trieste 0.4.0 depends on tensorflow!=2.2.0, !=2.3.0 and >=2.1
trieste 0.3.1 depends on tensorflow!=2.2.0, !=2.3.0 and >=2.1
trieste 0.3.0 depends on tensorflow!=2.2.0, !=2.3.0 and >=2.1
trieste 0.2.0 depends on tensorflow!=2.2.0, !=2.3.0 and >=2.1
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
```
**Expected behaviour**
It should be possible to install trieste from pypi on MacOS
**System information**
- OS: MacOS Ventura 13.2
- Python version: 3.8.13
- Trieste version: 0.2.0 - 1.1.2
- TensorFlow version: 2.11.0
- GPflow version: 2.8.0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2020 The Trieste Contributors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from pathlib import Path
15
16 from setuptools import find_packages, setup
17
18 with open("README.md", "r") as file:
19 long_description = file.read()
20
21 setup(
22 name="trieste",
23 version=Path("trieste/VERSION").read_text().strip(),
24 author="The Trieste contributors",
25 author_email="[email protected]",
26 description="A Bayesian optimization research toolbox built on TensorFlow",
27 long_description=long_description,
28 long_description_content_type="text/markdown",
29 url="https://github.com/secondmind-labs/trieste",
30 packages=find_packages(include=("trieste*",)),
31 package_data={
32 "trieste": ["py.typed", "VERSION"],
33 },
34 classifiers=[
35 "Programming Language :: Python :: 3.7",
36 "License :: OSI Approved :: Apache Software License",
37 "Operating System :: OS Independent",
38 ],
39 python_requires="~=3.7",
40 install_requires=[
41 "absl-py",
42 "dill!=0.3.6",
43 "gpflow>=2.7.0",
44 "gpflux>=0.4.0",
45 "numpy",
46 "tensorflow>=2.5",
47 "tensorflow-probability>=0.13",
48 "greenlet>=1.1.0",
49 ],
50 extras_require={
51 "plotting": ["seaborn", "plotly"],
52 "qhsri": ["pymoo", "cvxpy"],
53 },
54 )
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -43,7 +43,8 @@
"gpflow>=2.7.0",
"gpflux>=0.4.0",
"numpy",
- "tensorflow>=2.5",
+ "tensorflow>=2.5; platform_system!='Darwin' or platform_machine!='arm64'",
+ "tensorflow-macos>=2.5; platform_system=='Darwin' and platform_machine=='arm64'",
"tensorflow-probability>=0.13",
"greenlet>=1.1.0",
],
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -43,7 +43,8 @@\n \"gpflow>=2.7.0\",\n \"gpflux>=0.4.0\",\n \"numpy\",\n- \"tensorflow>=2.5\",\n+ \"tensorflow>=2.5; platform_system!='Darwin' or platform_machine!='arm64'\",\n+ \"tensorflow-macos>=2.5; platform_system=='Darwin' and platform_machine=='arm64'\",\n \"tensorflow-probability>=0.13\",\n \"greenlet>=1.1.0\",\n ],\n", "issue": "Cannot install trieste from pypi on MacOS\n**Describe the bug**\r\n`pip install trieste` fails on MacOS\r\n\r\n**To reproduce**\r\nSteps to reproduce the behaviour:\r\n```\r\n$ pip install trieste\r\nCollecting trieste\r\n Downloading trieste-1.1.2-py3-none-any.whl (246 kB)\r\n \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 246.6/246.6 kB 3.4 MB/s eta 0:00:00\r\n Downloading trieste-1.1.1-py3-none-any.whl (246 kB)\r\n \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 246.5/246.5 kB 10.5 MB/s eta 0:00:00\r\n Downloading trieste-1.1.0-py3-none-any.whl (246 kB)\r\n \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 246.5/246.5 kB 10.5 MB/s eta 0:00:00\r\n Downloading trieste-1.0.0-py3-none-any.whl (240 kB)\r\n \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 240.4/240.4 kB 16.6 MB/s eta 0:00:00\r\n Using cached trieste-0.13.3-py3-none-any.whl (233 kB)\r\n Using cached trieste-0.13.2-py3-none-any.whl (218 kB)\r\n Using cached trieste-0.13.1-py3-none-any.whl (220 kB)\r\nCollecting dill==0.3.4\r\n Using cached dill-0.3.4-py2.py3-none-any.whl (86 kB)\r\nCollecting gpflow==2.5.2\r\n Using cached gpflow-2.5.2-py3-none-any.whl (383 kB)\r\nCollecting trieste\r\n Using cached trieste-0.13.0-py3-none-any.whl (215 kB)\r\n Using cached trieste-0.12.0-py3-none-any.whl (208 kB)\r\n Using cached trieste-0.11.3-py3-none-any.whl (196 kB)\r\n Using cached trieste-0.11.2-py3-none-any.whl (196 kB)\r\n Using cached trieste-0.11.1-py3-none-any.whl (195 kB)\r\n Using cached trieste-0.11.0-py3-none-any.whl (195 kB)\r\n Using cached trieste-0.10.0-py3-none-any.whl (168 kB)\r\n Using cached trieste-0.9.1-py3-none-any.whl (139 kB)\r\n Using cached trieste-0.9.0-py3-none-any.whl (136 kB)\r\n Using cached trieste-0.8.0-py3-none-any.whl (150 kB)\r\n Using cached trieste-0.7.0-py3-none-any.whl (110 kB)\r\n Using cached trieste-0.6.1-py3-none-any.whl (77 kB)\r\n Using cached trieste-0.6.0-py3-none-any.whl (77 kB)\r\n Using cached trieste-0.5.1-py3-none-any.whl (63 kB)\r\nCollecting gpflow==2.2.*\r\n Using cached gpflow-2.2.1-py3-none-any.whl (271 kB)\r\nCollecting numpy\r\n Downloading numpy-1.24.3-cp39-cp39-macosx_11_0_arm64.whl (13.9 MB)\r\n \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 13.9/13.9 MB 
16.5 MB/s eta 0:00:00\r\nCollecting trieste\r\n Using cached trieste-0.5.0-py3-none-any.whl (63 kB)\r\nCollecting gpflow==2.1.*\r\n Using cached gpflow-2.1.5-py3-none-any.whl (260 kB)\r\nCollecting trieste\r\n Using cached trieste-0.4.0-py3-none-any.whl (43 kB)\r\n Using cached trieste-0.3.1-py3-none-any.whl (38 kB)\r\n Using cached trieste-0.3.0-py3-none-any.whl (38 kB)\r\n Using cached trieste-0.2.0-py3-none-any.whl (35 kB)\r\nERROR: Cannot install trieste==0.10.0, trieste==0.11.0, trieste==0.11.1, trieste==0.11.2, trieste==0.11.3, trieste==0.12.0, trieste==0.13.0, trieste==0.13.1, trieste==0.13.2, trieste==0.13.3, trieste==0.2.0, trieste==0.3.0, trieste==0.3.1, trieste==0.4.0, trieste==0.5.0, trieste==0.5.1, trieste==0.6.0, trieste==0.6.1, trieste==0.7.0, trieste==0.8.0, trieste==0.9.0, trieste==0.9.1, trieste==1.0.0, trieste==1.1.0, trieste==1.1.1 and trieste==1.1.2 because these package versions have conflicting dependencies.\r\n\r\nThe conflict is caused by:\r\n trieste 1.1.2 depends on tensorflow>=2.5\r\n trieste 1.1.1 depends on tensorflow>=2.5\r\n trieste 1.1.0 depends on tensorflow>=2.5\r\n trieste 1.0.0 depends on tensorflow>=2.5\r\n trieste 0.13.3 depends on tensorflow>=2.5\r\n trieste 0.13.2 depends on tensorflow>=2.4\r\n trieste 0.13.1 depends on tensorflow>=2.4\r\n trieste 0.13.0 depends on tensorflow>=2.4\r\n trieste 0.12.0 depends on tensorflow>=2.4\r\n trieste 0.11.3 depends on tensorflow>=2.4\r\n trieste 0.11.2 depends on tensorflow>=2.4\r\n trieste 0.11.1 depends on tensorflow>=2.4\r\n trieste 0.11.0 depends on tensorflow>=2.4\r\n trieste 0.10.0 depends on tensorflow>=2.4\r\n trieste 0.9.1 depends on tensorflow>=2.4\r\n trieste 0.9.0 depends on tensorflow>=2.4\r\n trieste 0.8.0 depends on tensorflow>=2.4\r\n trieste 0.7.0 depends on tensorflow>=2.4\r\n trieste 0.6.1 depends on tensorflow>=2.4\r\n trieste 0.6.0 depends on tensorflow>=2.4\r\n trieste 0.5.1 depends on tensorflow!=2.2.0, !=2.3.0 and >=2.1\r\n trieste 0.5.0 depends on tensorflow!=2.2.0, !=2.3.0 and >=2.1\r\n trieste 0.4.0 depends on tensorflow!=2.2.0, !=2.3.0 and >=2.1\r\n trieste 0.3.1 depends on tensorflow!=2.2.0, !=2.3.0 and >=2.1\r\n trieste 0.3.0 depends on tensorflow!=2.2.0, !=2.3.0 and >=2.1\r\n trieste 0.2.0 depends on tensorflow!=2.2.0, !=2.3.0 and >=2.1\r\n\r\nTo fix this you could try to:\r\n1. loosen the range of package versions you've specified\r\n2. 
remove package versions to allow pip attempt to solve the dependency conflict\r\n\r\nERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts\r\n```\r\n\r\n**Expected behaviour**\r\nIt should be possible to install trieste from pypi on MacOS\r\n\r\n**System information**\r\n - OS: MacOS Ventura 13.2\r\n - Python version: 3.8.13\r\n - Trieste version: 0.2.0 - 1.1.2\r\n - TensorFlow version: 2.11.0\r\n - GPflow version: 2.8.0\r\n\r\n\n", "before_files": [{"content": "# Copyright 2020 The Trieste Contributors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom pathlib import Path\n\nfrom setuptools import find_packages, setup\n\nwith open(\"README.md\", \"r\") as file:\n long_description = file.read()\n\nsetup(\n name=\"trieste\",\n version=Path(\"trieste/VERSION\").read_text().strip(),\n author=\"The Trieste contributors\",\n author_email=\"[email protected]\",\n description=\"A Bayesian optimization research toolbox built on TensorFlow\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/secondmind-labs/trieste\",\n packages=find_packages(include=(\"trieste*\",)),\n package_data={\n \"trieste\": [\"py.typed\", \"VERSION\"],\n },\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n ],\n python_requires=\"~=3.7\",\n install_requires=[\n \"absl-py\",\n \"dill!=0.3.6\",\n \"gpflow>=2.7.0\",\n \"gpflux>=0.4.0\",\n \"numpy\",\n \"tensorflow>=2.5\",\n \"tensorflow-probability>=0.13\",\n \"greenlet>=1.1.0\",\n ],\n extras_require={\n \"plotting\": [\"seaborn\", \"plotly\"],\n \"qhsri\": [\"pymoo\", \"cvxpy\"],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2020 The Trieste Contributors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom pathlib import Path\n\nfrom setuptools import find_packages, setup\n\nwith open(\"README.md\", \"r\") as file:\n long_description = file.read()\n\nsetup(\n name=\"trieste\",\n version=Path(\"trieste/VERSION\").read_text().strip(),\n author=\"The Trieste contributors\",\n author_email=\"[email protected]\",\n description=\"A Bayesian optimization research toolbox built on TensorFlow\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/secondmind-labs/trieste\",\n packages=find_packages(include=(\"trieste*\",)),\n 
package_data={\n \"trieste\": [\"py.typed\", \"VERSION\"],\n },\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n ],\n python_requires=\"~=3.7\",\n install_requires=[\n \"absl-py\",\n \"dill!=0.3.6\",\n \"gpflow>=2.7.0\",\n \"gpflux>=0.4.0\",\n \"numpy\",\n \"tensorflow>=2.5; platform_system!='Darwin' or platform_machine!='arm64'\",\n \"tensorflow-macos>=2.5; platform_system=='Darwin' and platform_machine=='arm64'\",\n \"tensorflow-probability>=0.13\",\n \"greenlet>=1.1.0\",\n ],\n extras_require={\n \"plotting\": [\"seaborn\", \"plotly\"],\n \"qhsri\": [\"pymoo\", \"cvxpy\"],\n },\n)\n", "path": "setup.py"}]}
| 2,889 | 140 |
gh_patches_debug_63270
|
rasdani/github-patches
|
git_diff
|
google__turbinia-1086
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
sphinx docs build broken
Getting an error when trying to build the docs:
```
$ sphinx-build -b html -d build/doctrees docs dist/docs
Running Sphinx v4.5.0
WARNING: html_static_path entry '_static' does not exist
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 19 source files that are out of date
updating environment: [new config] 19 added, 0 changed, 0 removed
reading sources... [ 5%] developer/contributing
Extension error (sphinx_markdown_tables):
Handler <function process_tables at 0x7fb9b1b0a700> for event 'source-read' threw an exception (exception: __init__() missing 1 required positional argument: 'config')
```
Trying an earlier version of sphinx and an earlier version of the repo does not resolve the issue. It seems to be something in the sphinx-markdown-tables module, but that doesn't seem to have changed that recently either (more than a month ago: https://pypi.org/project/sphinx-markdown-tables/0.0.15/#history).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12 #
13 # import os
14 # import sys
15 # sys.path.insert(0, os.path.abspath('.'))
16
17 from __future__ import unicode_literals
18 import re
19
20 from recommonmark.parser import CommonMarkParser
21 from recommonmark.transform import AutoStructify
22 from docutils import nodes, transforms
23
24 # -- Project information -----------------------------------------------------
25
26 project = 'Turbinia'
27 copyright = '2020, Google Inc'
28 author = 'Turbinia maintainers'
29
30 # -- General configuration ---------------------------------------------------
31
32 # Add any Sphinx extension module names here, as strings. They can be
33 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
34 # ones.
35 extensions = [
36 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.coverage',
37 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'sphinx_markdown_tables',
38 'recommonmark'
39 ]
40
41 # Add any paths that contain templates here, relative to this directory.
42 templates_path = ['_templates']
43
44 # List of patterns, relative to source directory, that match files and
45 # directories to ignore when looking for source files.
46 # This pattern also affects html_static_path and html_extra_path.
47 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'design/*']
48
49 # -- Options for HTML output -------------------------------------------------
50
51 # The theme to use for HTML and HTML Help pages. See the documentation for
52 # a list of builtin themes.
53 #
54 html_theme = 'sphinx_rtd_theme'
55
56 # The master toctree document.
57 master_doc = 'index'
58
59 # The name of the Pygments (syntax highlighting) style to use.
60 pygments_style = 'sphinx'
61
62 # Add any paths that contain custom static files (such as style sheets) here,
63 # relative to this directory. They are copied after the builtin static files,
64 # so a file named "default.css" will overwrite the builtin "default.css".
65 html_static_path = ['_static']
66
67 # The default sidebars (for documents that don't match any pattern) are
68 # defined by theme itself. Builtin themes are using these templates by
69 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
70 # 'searchbox.html']``.
71 #
72 html_sidebars = {
73 '**': [
74 'sidebar.html', 'localtoc.html', 'relations.html', 'sourcelink.html',
75 'searchbox.html'
76 ]
77 }
78
79 # Adding retries to linkchecks before declaring a link broken
80 linkcheck_retries = 3
81
82 # Output file base name for HTML help builder.
83 htmlhelp_basename = 'turbiniadoc'
84
85 html_logo = "images/turbinia-logo.jpg"
86
87
88 class ProcessLink(transforms.Transform):
89 """Transform definition to parse .md references to internal pages."""
90
91 default_priority = 1000
92
93 def find_replace(self, node):
94 """Parses URIs containing .md and replaces them with their HTML page."""
95 if isinstance(node, nodes.reference) and 'refuri' in node:
96 r = node['refuri']
97 if r.endswith('.md'):
98 r = r[:-3] + '.html'
99 node['refuri'] = r
100
101 return node
102
103 def traverse(self, node):
104 """Traverse the document tree rooted at node.
105 node : docutil node
106 current root node to traverse
107 """
108 self.find_replace(node)
109
110 for c in node.children:
111 self.traverse(c)
112
113 # pylint: disable=arguments-differ,attribute-defined-outside-init
114 # this was taken from GRR's config file for documentation
115 def apply(self):
116 self.current_level = 0
117 self.traverse(self.document)
118
119
120 def setup(app):
121 """Add custom parsers to Sphinx generation."""
122 app.add_config_value(
123 'recommonmark_config', {
124 'enable_auto_doc_ref': False,
125 }, True)
126 app.add_transform(AutoStructify)
127 app.add_transform(ProcessLink)
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -34,8 +34,7 @@
# ones.
extensions = [
'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.coverage',
- 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'sphinx_markdown_tables',
- 'recommonmark'
+ 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'recommonmark'
]
# Add any paths that contain templates here, relative to this directory.
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -34,8 +34,7 @@\n # ones.\n extensions = [\n 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.coverage',\n- 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'sphinx_markdown_tables',\n- 'recommonmark'\n+ 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'recommonmark'\n ]\n \n # Add any paths that contain templates here, relative to this directory.\n", "issue": "sphinx docs build broken\nGetting an error when trying to build the docs:\r\n```\r\n$ sphinx-build -b html -d build/doctrees docs dist/docs\r\nRunning Sphinx v4.5.0\r\nWARNING: html_static_path entry '_static' does not exist\r\nbuilding [mo]: targets for 0 po files that are out of date\r\nbuilding [html]: targets for 19 source files that are out of date\r\nupdating environment: [new config] 19 added, 0 changed, 0 removed\r\nreading sources... [ 5%] developer/contributing \r\nExtension error (sphinx_markdown_tables):\r\nHandler <function process_tables at 0x7fb9b1b0a700> for event 'source-read' threw an exception (exception: __init__() missing 1 required positional argument: 'config')\r\n```\r\n\r\nTrying an earlier version of sphinx and an earlier version of the repo does not resolve the issue. It seems to be something in the sphinx-markdown-tables module, but that doesn't seem to have changed that recently either (more than a month ago: https://pypi.org/project/sphinx-markdown-tables/0.0.15/#history).\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nfrom __future__ import unicode_literals\nimport re\n\nfrom recommonmark.parser import CommonMarkParser\nfrom recommonmark.transform import AutoStructify\nfrom docutils import nodes, transforms\n\n# -- Project information -----------------------------------------------------\n\nproject = 'Turbinia'\ncopyright = '2020, Google Inc'\nauthor = 'Turbinia maintainers'\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.coverage',\n 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'sphinx_markdown_tables',\n 'recommonmark'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'design/*']\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\nhtml_sidebars = {\n '**': [\n 'sidebar.html', 'localtoc.html', 'relations.html', 'sourcelink.html',\n 'searchbox.html'\n ]\n}\n\n# Adding retries to linkchecks before declaring a link broken\nlinkcheck_retries = 3\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'turbiniadoc'\n\nhtml_logo = \"images/turbinia-logo.jpg\"\n\n\nclass ProcessLink(transforms.Transform):\n \"\"\"Transform definition to parse .md references to internal pages.\"\"\"\n\n default_priority = 1000\n\n def find_replace(self, node):\n \"\"\"Parses URIs containing .md and replaces them with their HTML page.\"\"\"\n if isinstance(node, nodes.reference) and 'refuri' in node:\n r = node['refuri']\n if r.endswith('.md'):\n r = r[:-3] + '.html'\n node['refuri'] = r\n\n return node\n\n def traverse(self, node):\n \"\"\"Traverse the document tree rooted at node.\n node : docutil node\n current root node to traverse\n \"\"\"\n self.find_replace(node)\n\n for c in node.children:\n self.traverse(c)\n\n # pylint: disable=arguments-differ,attribute-defined-outside-init\n # this was taken from GRR's config file for documentation\n def apply(self):\n self.current_level = 0\n self.traverse(self.document)\n\n\ndef setup(app):\n \"\"\"Add custom parsers to Sphinx generation.\"\"\"\n app.add_config_value(\n 'recommonmark_config', {\n 'enable_auto_doc_ref': False,\n }, True)\n app.add_transform(AutoStructify)\n app.add_transform(ProcessLink)\n", "path": "docs/conf.py"}], "after_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nfrom __future__ import unicode_literals\nimport re\n\nfrom recommonmark.parser import CommonMarkParser\nfrom recommonmark.transform import AutoStructify\nfrom docutils import nodes, transforms\n\n# -- Project information -----------------------------------------------------\n\nproject = 'Turbinia'\ncopyright = '2020, Google Inc'\nauthor = 'Turbinia maintainers'\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.coverage',\n 'sphinx.ext.viewcode', 'sphinx.ext.napoleon', 'recommonmark'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'design/*']\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\nhtml_sidebars = {\n '**': [\n 'sidebar.html', 'localtoc.html', 'relations.html', 'sourcelink.html',\n 'searchbox.html'\n ]\n}\n\n# Adding retries to linkchecks before declaring a link broken\nlinkcheck_retries = 3\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'turbiniadoc'\n\nhtml_logo = \"images/turbinia-logo.jpg\"\n\n\nclass ProcessLink(transforms.Transform):\n \"\"\"Transform definition to parse .md references to internal pages.\"\"\"\n\n default_priority = 1000\n\n def find_replace(self, node):\n \"\"\"Parses URIs containing .md and replaces them with their HTML page.\"\"\"\n if isinstance(node, nodes.reference) and 'refuri' in node:\n r = node['refuri']\n if r.endswith('.md'):\n r = r[:-3] + '.html'\n node['refuri'] = r\n\n return node\n\n def traverse(self, node):\n \"\"\"Traverse the document tree rooted at node.\n node : docutil node\n current root node to traverse\n \"\"\"\n self.find_replace(node)\n\n for c in node.children:\n self.traverse(c)\n\n # pylint: disable=arguments-differ,attribute-defined-outside-init\n # this was taken from GRR's config file for documentation\n def apply(self):\n self.current_level = 0\n self.traverse(self.document)\n\n\ndef setup(app):\n \"\"\"Add custom parsers to Sphinx generation.\"\"\"\n app.add_config_value(\n 'recommonmark_config', {\n 'enable_auto_doc_ref': False,\n }, True)\n app.add_transform(AutoStructify)\n app.add_transform(ProcessLink)\n", "path": "docs/conf.py"}]}
| 1,761 | 134 |
gh_patches_debug_19235
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-6837
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ImportError in every `dvc command` after `pip install -e` in develop environment
# Bug Report
```
ImportError: cannot import name '__version__' from '_version' (/Users/gao/Code/dvc/dvc/_version.py)
```
It appears after every `dvc` command; it doesn't affect the command running, but it is quite annoying.
## Description
Meet this error message after every command.
```
Traceback (most recent call last):
File "/Users/gao/Code/dvc/dvc/__main__.py", line 7, in <module>
sys.exit(main(sys.argv[1:]))
File "/Users/gao/Code/dvc/dvc/main.py", line 97, in main
if analytics.is_enabled():
File "/Users/gao/Code/dvc/dvc/analytics.py", line 50, in is_enabled
Config(validate=False).get("core", {}).get("analytics", "true")
File "/Users/gao/Code/dvc/dvc/config.py", line 107, in __init__
self.load(validate=validate, config=config)
File "/Users/gao/Code/dvc/dvc/config.py", line 153, in load
conf = self.load_config_to_level()
File "/Users/gao/Code/dvc/dvc/config.py", line 278, in load_config_to_level
merge(merged_conf, self.load_one(merge_level))
File "/Users/gao/Code/dvc/dvc/config.py", line 202, in load_one
conf = self._load_config(level)
File "/Users/gao/Code/dvc/dvc/config.py", line 174, in _load_config
from configobj import ConfigObj
File "/Users/gao/anaconda3/envs/dvc/lib/python3.8/site-packages/configobj.py", line 23, in <module>
from _version import __version__
ImportError: cannot import name '__version__' from '_version' (/Users/gao/Code/dvc/dvc/_version.py)
```
### Reproduce
<!--
Step list of how to reproduce the bug
-->
1. git clone [email protected]:iterative/dvc.git
2. cd dvc
3. pip3 install -e ".[all,tests]"
4. dvc doctor
### Expected
No error message
### Environment information
<!--
This is required to ensure that we can reproduce the bug.
-->
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 2.8.2.dev35+geb0b4ce6
---------------------------------
Platform: Python 3.8.8 on macOS-10.16-x86_64-i386-64bit
Supports:
azure (adlfs = 2021.9.1, knack = 0.8.0, azure-identity = 1.5.0),
gdrive (pydrive2 = 1.9.4),
gs (gcsfs = 2021.10.1),
hdfs (fsspec = 2021.10.1, pyarrow = 3.0.0),
webhdfs (fsspec = 2021.10.1),
http (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),
https (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),
s3 (s3fs = 2021.10.1, boto3 = 1.17.106),
ssh (sshfs = 2021.9.0),
oss (ossfs = 2021.8.0),
webdav (webdav4 = 0.9.3),
webdavs (webdav4 = 0.9.3)
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: oss, https, s3
Workspace directory: apfs on /dev/disk3s1s1
Repo: dvc, git
```
**Additional Information (if any):**
The error comes from `configobj`. It wants to read the contents of https://github.com/DiffSK/configobj/blob/master/src/configobj/_version.py, but this file was not installed properly.
And, since a `_version.py` is generated automatically after `pip install -e` in our `dvc` repo, `configobj` wrongly reads that one instead.
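A minimal sketch of that shadowing (paths taken from the traceback above; purely illustrative):
```python
# Illustrative only: why configobj's `from _version import __version__` fails here.
# When dvc/__main__.py is run directly (as in the traceback), Python puts its directory
# first on sys.path, so the bare name `_version` resolves to dvc's generated file
# (which defines `version`/`version_tuple`, not `__version__`) instead of configobj's own.
import sys

sys.path.insert(0, "/Users/gao/Code/dvc/dvc")   # effectively what running __main__.py does

from _version import __version__   # picks up dvc/_version.py -> ImportError as above
```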
[](https://asciinema.org/a/Y0IhkBONNStMVjWC0fu8uXqJ9)
ImportError in every `dvc command` after `pip install -e` in develop environment
# Bug Report
```
ImportError: cannot import name '__version__' from '_version' (/Users/gao/Code/dvc/dvc/_version.py)
```
It appears after every `dvc` command; it doesn't affect the command running, but it is quite annoying.
## Description
Meet this error message after every command.
```
Traceback (most recent call last):
File "/Users/gao/Code/dvc/dvc/__main__.py", line 7, in <module>
sys.exit(main(sys.argv[1:]))
File "/Users/gao/Code/dvc/dvc/main.py", line 97, in main
if analytics.is_enabled():
File "/Users/gao/Code/dvc/dvc/analytics.py", line 50, in is_enabled
Config(validate=False).get("core", {}).get("analytics", "true")
File "/Users/gao/Code/dvc/dvc/config.py", line 107, in __init__
self.load(validate=validate, config=config)
File "/Users/gao/Code/dvc/dvc/config.py", line 153, in load
conf = self.load_config_to_level()
File "/Users/gao/Code/dvc/dvc/config.py", line 278, in load_config_to_level
merge(merged_conf, self.load_one(merge_level))
File "/Users/gao/Code/dvc/dvc/config.py", line 202, in load_one
conf = self._load_config(level)
File "/Users/gao/Code/dvc/dvc/config.py", line 174, in _load_config
from configobj import ConfigObj
File "/Users/gao/anaconda3/envs/dvc/lib/python3.8/site-packages/configobj.py", line 23, in <module>
from _version import __version__
ImportError: cannot import name '__version__' from '_version' (/Users/gao/Code/dvc/dvc/_version.py)
```
### Reproduce
<!--
Step list of how to reproduce the bug
-->
1. git clone [email protected]:iterative/dvc.git
2. cd dvc
3. pip3 install -e ".[all,tests]"
4. dvc doctor
### Expected
No error message
### Environment information
<!--
This is required to ensure that we can reproduce the bug.
-->
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 2.8.2.dev35+geb0b4ce6
---------------------------------
Platform: Python 3.8.8 on macOS-10.16-x86_64-i386-64bit
Supports:
azure (adlfs = 2021.9.1, knack = 0.8.0, azure-identity = 1.5.0),
gdrive (pydrive2 = 1.9.4),
gs (gcsfs = 2021.10.1),
hdfs (fsspec = 2021.10.1, pyarrow = 3.0.0),
webhdfs (fsspec = 2021.10.1),
http (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),
https (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),
s3 (s3fs = 2021.10.1, boto3 = 1.17.106),
ssh (sshfs = 2021.9.0),
oss (ossfs = 2021.8.0),
webdav (webdav4 = 0.9.3),
webdavs (webdav4 = 0.9.3)
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: oss, https, s3
Workspace directory: apfs on /dev/disk3s1s1
Repo: dvc, git
```
**Additional Information (if any):**
The error comes from `configobj`. It wants to read the contents of https://github.com/DiffSK/configobj/blob/master/src/configobj/_version.py, but this file was not installed properly.
And, since a `_version.py` is generated automatically after `pip install -e` in our `dvc` repo, `configobj` wrongly reads that one instead.
[](https://asciinema.org/a/Y0IhkBONNStMVjWC0fu8uXqJ9)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/version.py`
Content:
```
1 # pylint: disable=unused-import
2 try:
3 from ._version import version as __version__
4 from ._version import version_tuple # noqa: F401
5 except ImportError:
6 __version__ = "UNKNOWN"
7
```
Path: `dvc/info.py`
Content:
```
1 import itertools
2 import os
3 import pathlib
4 import platform
5 import uuid
6
7 import psutil
8
9 from dvc.exceptions import DvcException, NotDvcRepoError
10 from dvc.fs import FS_MAP, get_fs_cls, get_fs_config
11 from dvc.repo import Repo
12 from dvc.scm.base import SCMError
13 from dvc.system import System
14 from dvc.utils import error_link
15 from dvc.utils.pkg import PKG
16 from dvc.version import __version__
17
18 try:
19 import importlib.metadata as importlib_metadata
20 except ImportError: # < 3.8
21 import importlib_metadata # type: ignore[no-redef]
22
23
24 package = "" if PKG is None else f"({PKG})"
25
26
27 def get_dvc_info():
28 info = [
29 f"DVC version: {__version__} {package}",
30 "---------------------------------",
31 f"Platform: Python {platform.python_version()} on "
32 f"{platform.platform()}",
33 f"Supports:{_get_supported_remotes()}",
34 ]
35
36 try:
37 with Repo() as repo:
38 # cache_dir might not exist yet (e.g. after `dvc init`), and we
39 # can't auto-create it, as it might cause issues if the user
40 # later decides to enable shared cache mode with
41 # `dvc config cache.shared group`.
42 if os.path.exists(repo.odb.local.cache_dir):
43 info.append(f"Cache types: {_get_linktype_support_info(repo)}")
44 fs_type = get_fs_type(repo.odb.local.cache_dir)
45 info.append(f"Cache directory: {fs_type}")
46 else:
47 info.append("Cache types: " + error_link("no-dvc-cache"))
48
49 info.append(f"Caches: {_get_caches(repo.odb)}")
50 info.append(f"Remotes: {_get_remotes(repo.config)}")
51
52 root_directory = repo.root_dir
53 fs_root = get_fs_type(os.path.abspath(root_directory))
54 info.append(f"Workspace directory: {fs_root}")
55 info.append(f"Repo: {_get_dvc_repo_info(repo)}")
56 except NotDvcRepoError:
57 pass
58 except SCMError:
59 info.append("Repo: dvc, git (broken)")
60
61 return "\n".join(info)
62
63
64 def _get_caches(cache):
65 caches = (
66 cache_type
67 for cache_type, cache_instance in cache.by_scheme()
68 if cache_instance
69 )
70
71 # Caches will be always non-empty including the local cache
72 return ", ".join(caches)
73
74
75 def _get_remotes(config):
76 schemes = (
77 get_fs_cls(get_fs_config(config, name=remote)).scheme
78 for remote in config["remote"]
79 )
80
81 return ", ".join(schemes) or "None"
82
83
84 def _get_linktype_support_info(repo):
85
86 links = {
87 "reflink": (System.reflink, None),
88 "hardlink": (System.hardlink, System.is_hardlink),
89 "symlink": (System.symlink, System.is_symlink),
90 }
91
92 fname = "." + str(uuid.uuid4())
93 src = os.path.join(repo.odb.local.cache_dir, fname)
94 open(src, "w").close()
95 dst = os.path.join(repo.root_dir, fname)
96
97 cache = []
98
99 for name, (link, is_link) in links.items():
100 try:
101 link(src, dst)
102 status = "supported"
103 if is_link and not is_link(dst):
104 status = "broken"
105 os.unlink(dst)
106 except DvcException:
107 status = "not supported"
108
109 if status == "supported":
110 cache.append(name)
111 os.remove(src)
112
113 return ", ".join(cache)
114
115
116 def _get_supported_remotes():
117 supported_remotes = []
118 for scheme, fs_cls in FS_MAP.items():
119 if not fs_cls.get_missing_deps():
120 dependencies = []
121 for requirement in fs_cls.REQUIRES:
122 dependencies.append(
123 f"{requirement} = "
124 f"{importlib_metadata.version(requirement)}"
125 )
126
127 remote_info = scheme
128 if dependencies:
129 remote_info += " (" + ", ".join(dependencies) + ")"
130 supported_remotes.append(remote_info)
131
132 assert len(supported_remotes) >= 1
133 return "\n\t" + ",\n\t".join(supported_remotes)
134
135
136 def get_fs_type(path):
137 partition = {}
138 for part in psutil.disk_partitions(all=True):
139 if part.fstype != "":
140 try:
141 mountpoint = pathlib.Path(part.mountpoint).resolve()
142 partition[mountpoint] = part.fstype + " on " + part.device
143 except PermissionError:
144 pass
145
146 # need to follow the symlink: https://github.com/iterative/dvc/issues/5065
147 path = pathlib.Path(path).resolve()
148
149 for parent in itertools.chain([path], path.parents):
150 if parent in partition:
151 return partition[parent]
152 return ("unknown", "none")
153
154
155 def _get_dvc_repo_info(self):
156 if self.config.get("core", {}).get("no_scm", False):
157 return "dvc (no_scm)"
158
159 if self.root_dir != self.scm.root_dir:
160 return "dvc (subdir), git"
161
162 return "dvc, git"
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dvc/info.py b/dvc/info.py
--- a/dvc/info.py
+++ b/dvc/info.py
@@ -6,6 +6,7 @@
import psutil
+from dvc import __version__
from dvc.exceptions import DvcException, NotDvcRepoError
from dvc.fs import FS_MAP, get_fs_cls, get_fs_config
from dvc.repo import Repo
@@ -13,7 +14,6 @@
from dvc.system import System
from dvc.utils import error_link
from dvc.utils.pkg import PKG
-from dvc.version import __version__
try:
import importlib.metadata as importlib_metadata
diff --git a/dvc/version.py b/dvc/version.py
--- a/dvc/version.py
+++ b/dvc/version.py
@@ -1,6 +1,6 @@
# pylint: disable=unused-import
try:
- from ._version import version as __version__
- from ._version import version_tuple # noqa: F401
+ from ._dvc_version import version as __version__
+ from ._dvc_version import version_tuple # noqa: F401
except ImportError:
__version__ = "UNKNOWN"
|
{"golden_diff": "diff --git a/dvc/info.py b/dvc/info.py\n--- a/dvc/info.py\n+++ b/dvc/info.py\n@@ -6,6 +6,7 @@\n \n import psutil\n \n+from dvc import __version__\n from dvc.exceptions import DvcException, NotDvcRepoError\n from dvc.fs import FS_MAP, get_fs_cls, get_fs_config\n from dvc.repo import Repo\n@@ -13,7 +14,6 @@\n from dvc.system import System\n from dvc.utils import error_link\n from dvc.utils.pkg import PKG\n-from dvc.version import __version__\n \n try:\n import importlib.metadata as importlib_metadata\ndiff --git a/dvc/version.py b/dvc/version.py\n--- a/dvc/version.py\n+++ b/dvc/version.py\n@@ -1,6 +1,6 @@\n # pylint: disable=unused-import\n try:\n- from ._version import version as __version__\n- from ._version import version_tuple # noqa: F401\n+ from ._dvc_version import version as __version__\n+ from ._dvc_version import version_tuple # noqa: F401\n except ImportError:\n __version__ = \"UNKNOWN\"\n", "issue": "ImportError in every `dvc command` after `pip install -e` in develop environment\n# Bug Report\r\n\r\n```\r\nImportError: cannot import name '__version__' from '_version' (/Users/gao/Code/dvc/dvc/_version.py)\r\n```\r\nAfter every `dvc command` didn't affect the command running but quite anonying.\r\n\r\n\r\n## Description\r\nMeet this error message after every command.\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/gao/Code/dvc/dvc/__main__.py\", line 7, in <module>\r\n sys.exit(main(sys.argv[1:]))\r\n File \"/Users/gao/Code/dvc/dvc/main.py\", line 97, in main\r\n if analytics.is_enabled():\r\n File \"/Users/gao/Code/dvc/dvc/analytics.py\", line 50, in is_enabled\r\n Config(validate=False).get(\"core\", {}).get(\"analytics\", \"true\")\r\n File \"/Users/gao/Code/dvc/dvc/config.py\", line 107, in __init__\r\n self.load(validate=validate, config=config)\r\n File \"/Users/gao/Code/dvc/dvc/config.py\", line 153, in load\r\n conf = self.load_config_to_level()\r\n File \"/Users/gao/Code/dvc/dvc/config.py\", line 278, in load_config_to_level\r\n merge(merged_conf, self.load_one(merge_level))\r\n File \"/Users/gao/Code/dvc/dvc/config.py\", line 202, in load_one\r\n conf = self._load_config(level)\r\n File \"/Users/gao/Code/dvc/dvc/config.py\", line 174, in _load_config\r\n from configobj import ConfigObj\r\n File \"/Users/gao/anaconda3/envs/dvc/lib/python3.8/site-packages/configobj.py\", line 23, in <module>\r\n from _version import __version__\r\nImportError: cannot import name '__version__' from '_version' (/Users/gao/Code/dvc/dvc/_version.py)\r\n```\r\n\r\n### Reproduce\r\n\r\n\r\n\r\n<!--\r\nStep list of how to reproduce the bug\r\n-->\r\n\r\n1. git clone [email protected]:iterative/dvc.git\r\n2. cd dvc\r\n2. pip3 install -e \".[all,tests]\"\r\n3. 
dvc doctor.\r\n\r\n### Expected\r\n\r\nNo error message \r\n\r\n### Environment information\r\n\r\n<!--\r\nThis is required to ensure that we can reproduce the bug.\r\n-->\r\n\r\n**Output of `dvc doctor`:**\r\n\r\n```console\r\n$ dvc doctor\r\nDVC version: 2.8.2.dev35+geb0b4ce6\r\n---------------------------------\r\nPlatform: Python 3.8.8 on macOS-10.16-x86_64-i386-64bit\r\nSupports:\r\n\tazure (adlfs = 2021.9.1, knack = 0.8.0, azure-identity = 1.5.0),\r\n\tgdrive (pydrive2 = 1.9.4),\r\n\tgs (gcsfs = 2021.10.1),\r\n\thdfs (fsspec = 2021.10.1, pyarrow = 3.0.0),\r\n\twebhdfs (fsspec = 2021.10.1),\r\n\thttp (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),\r\n\thttps (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),\r\n\ts3 (s3fs = 2021.10.1, boto3 = 1.17.106),\r\n\tssh (sshfs = 2021.9.0),\r\n\toss (ossfs = 2021.8.0),\r\n\twebdav (webdav4 = 0.9.3),\r\n\twebdavs (webdav4 = 0.9.3)\r\nCache types: <https://error.dvc.org/no-dvc-cache>\r\nCaches: local\r\nRemotes: oss, https, s3\r\nWorkspace directory: apfs on /dev/disk3s1s1\r\nRepo: dvc, git\r\n```\r\n\r\n**Additional Information (if any):**\r\n\r\nError go out from `configobj`. It wants to read the contents from https://github.com/DiffSK/configobj/blob/master/src/configobj/_version.py but this file did not install properly.\r\n\r\nAnd, as there is a `_version.py` generated automatically after `pip install -e` in our `dvc` repo. `configobj` wrongly reads it. \r\n\r\n[](https://asciinema.org/a/Y0IhkBONNStMVjWC0fu8uXqJ9)\nImportError in every `dvc command` after `pip install -e` in develop environment\n# Bug Report\r\n\r\n```\r\nImportError: cannot import name '__version__' from '_version' (/Users/gao/Code/dvc/dvc/_version.py)\r\n```\r\nAfter every `dvc command` didn't affect the command running but quite anonying.\r\n\r\n\r\n## Description\r\nMeet this error message after every command.\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/gao/Code/dvc/dvc/__main__.py\", line 7, in <module>\r\n sys.exit(main(sys.argv[1:]))\r\n File \"/Users/gao/Code/dvc/dvc/main.py\", line 97, in main\r\n if analytics.is_enabled():\r\n File \"/Users/gao/Code/dvc/dvc/analytics.py\", line 50, in is_enabled\r\n Config(validate=False).get(\"core\", {}).get(\"analytics\", \"true\")\r\n File \"/Users/gao/Code/dvc/dvc/config.py\", line 107, in __init__\r\n self.load(validate=validate, config=config)\r\n File \"/Users/gao/Code/dvc/dvc/config.py\", line 153, in load\r\n conf = self.load_config_to_level()\r\n File \"/Users/gao/Code/dvc/dvc/config.py\", line 278, in load_config_to_level\r\n merge(merged_conf, self.load_one(merge_level))\r\n File \"/Users/gao/Code/dvc/dvc/config.py\", line 202, in load_one\r\n conf = self._load_config(level)\r\n File \"/Users/gao/Code/dvc/dvc/config.py\", line 174, in _load_config\r\n from configobj import ConfigObj\r\n File \"/Users/gao/anaconda3/envs/dvc/lib/python3.8/site-packages/configobj.py\", line 23, in <module>\r\n from _version import __version__\r\nImportError: cannot import name '__version__' from '_version' (/Users/gao/Code/dvc/dvc/_version.py)\r\n```\r\n\r\n### Reproduce\r\n\r\n\r\n\r\n<!--\r\nStep list of how to reproduce the bug\r\n-->\r\n\r\n1. git clone [email protected]:iterative/dvc.git\r\n2. cd dvc\r\n2. pip3 install -e \".[all,tests]\"\r\n3. 
dvc doctor.\r\n\r\n### Expected\r\n\r\nNo error message \r\n\r\n### Environment information\r\n\r\n<!--\r\nThis is required to ensure that we can reproduce the bug.\r\n-->\r\n\r\n**Output of `dvc doctor`:**\r\n\r\n```console\r\n$ dvc doctor\r\nDVC version: 2.8.2.dev35+geb0b4ce6\r\n---------------------------------\r\nPlatform: Python 3.8.8 on macOS-10.16-x86_64-i386-64bit\r\nSupports:\r\n\tazure (adlfs = 2021.9.1, knack = 0.8.0, azure-identity = 1.5.0),\r\n\tgdrive (pydrive2 = 1.9.4),\r\n\tgs (gcsfs = 2021.10.1),\r\n\thdfs (fsspec = 2021.10.1, pyarrow = 3.0.0),\r\n\twebhdfs (fsspec = 2021.10.1),\r\n\thttp (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),\r\n\thttps (aiohttp = 3.7.4.post0, aiohttp-retry = 2.4.5),\r\n\ts3 (s3fs = 2021.10.1, boto3 = 1.17.106),\r\n\tssh (sshfs = 2021.9.0),\r\n\toss (ossfs = 2021.8.0),\r\n\twebdav (webdav4 = 0.9.3),\r\n\twebdavs (webdav4 = 0.9.3)\r\nCache types: <https://error.dvc.org/no-dvc-cache>\r\nCaches: local\r\nRemotes: oss, https, s3\r\nWorkspace directory: apfs on /dev/disk3s1s1\r\nRepo: dvc, git\r\n```\r\n\r\n**Additional Information (if any):**\r\n\r\nError go out from `configobj`. It wants to read the contents from https://github.com/DiffSK/configobj/blob/master/src/configobj/_version.py but this file did not install properly.\r\n\r\nAnd, as there is a `_version.py` generated automatically after `pip install -e` in our `dvc` repo. `configobj` wrongly reads it. \r\n\r\n[](https://asciinema.org/a/Y0IhkBONNStMVjWC0fu8uXqJ9)\n", "before_files": [{"content": "# pylint: disable=unused-import\ntry:\n from ._version import version as __version__\n from ._version import version_tuple # noqa: F401\nexcept ImportError:\n __version__ = \"UNKNOWN\"\n", "path": "dvc/version.py"}, {"content": "import itertools\nimport os\nimport pathlib\nimport platform\nimport uuid\n\nimport psutil\n\nfrom dvc.exceptions import DvcException, NotDvcRepoError\nfrom dvc.fs import FS_MAP, get_fs_cls, get_fs_config\nfrom dvc.repo import Repo\nfrom dvc.scm.base import SCMError\nfrom dvc.system import System\nfrom dvc.utils import error_link\nfrom dvc.utils.pkg import PKG\nfrom dvc.version import __version__\n\ntry:\n import importlib.metadata as importlib_metadata\nexcept ImportError: # < 3.8\n import importlib_metadata # type: ignore[no-redef]\n\n\npackage = \"\" if PKG is None else f\"({PKG})\"\n\n\ndef get_dvc_info():\n info = [\n f\"DVC version: {__version__} {package}\",\n \"---------------------------------\",\n f\"Platform: Python {platform.python_version()} on \"\n f\"{platform.platform()}\",\n f\"Supports:{_get_supported_remotes()}\",\n ]\n\n try:\n with Repo() as repo:\n # cache_dir might not exist yet (e.g. 
after `dvc init`), and we\n # can't auto-create it, as it might cause issues if the user\n # later decides to enable shared cache mode with\n # `dvc config cache.shared group`.\n if os.path.exists(repo.odb.local.cache_dir):\n info.append(f\"Cache types: {_get_linktype_support_info(repo)}\")\n fs_type = get_fs_type(repo.odb.local.cache_dir)\n info.append(f\"Cache directory: {fs_type}\")\n else:\n info.append(\"Cache types: \" + error_link(\"no-dvc-cache\"))\n\n info.append(f\"Caches: {_get_caches(repo.odb)}\")\n info.append(f\"Remotes: {_get_remotes(repo.config)}\")\n\n root_directory = repo.root_dir\n fs_root = get_fs_type(os.path.abspath(root_directory))\n info.append(f\"Workspace directory: {fs_root}\")\n info.append(f\"Repo: {_get_dvc_repo_info(repo)}\")\n except NotDvcRepoError:\n pass\n except SCMError:\n info.append(\"Repo: dvc, git (broken)\")\n\n return \"\\n\".join(info)\n\n\ndef _get_caches(cache):\n caches = (\n cache_type\n for cache_type, cache_instance in cache.by_scheme()\n if cache_instance\n )\n\n # Caches will be always non-empty including the local cache\n return \", \".join(caches)\n\n\ndef _get_remotes(config):\n schemes = (\n get_fs_cls(get_fs_config(config, name=remote)).scheme\n for remote in config[\"remote\"]\n )\n\n return \", \".join(schemes) or \"None\"\n\n\ndef _get_linktype_support_info(repo):\n\n links = {\n \"reflink\": (System.reflink, None),\n \"hardlink\": (System.hardlink, System.is_hardlink),\n \"symlink\": (System.symlink, System.is_symlink),\n }\n\n fname = \".\" + str(uuid.uuid4())\n src = os.path.join(repo.odb.local.cache_dir, fname)\n open(src, \"w\").close()\n dst = os.path.join(repo.root_dir, fname)\n\n cache = []\n\n for name, (link, is_link) in links.items():\n try:\n link(src, dst)\n status = \"supported\"\n if is_link and not is_link(dst):\n status = \"broken\"\n os.unlink(dst)\n except DvcException:\n status = \"not supported\"\n\n if status == \"supported\":\n cache.append(name)\n os.remove(src)\n\n return \", \".join(cache)\n\n\ndef _get_supported_remotes():\n supported_remotes = []\n for scheme, fs_cls in FS_MAP.items():\n if not fs_cls.get_missing_deps():\n dependencies = []\n for requirement in fs_cls.REQUIRES:\n dependencies.append(\n f\"{requirement} = \"\n f\"{importlib_metadata.version(requirement)}\"\n )\n\n remote_info = scheme\n if dependencies:\n remote_info += \" (\" + \", \".join(dependencies) + \")\"\n supported_remotes.append(remote_info)\n\n assert len(supported_remotes) >= 1\n return \"\\n\\t\" + \",\\n\\t\".join(supported_remotes)\n\n\ndef get_fs_type(path):\n partition = {}\n for part in psutil.disk_partitions(all=True):\n if part.fstype != \"\":\n try:\n mountpoint = pathlib.Path(part.mountpoint).resolve()\n partition[mountpoint] = part.fstype + \" on \" + part.device\n except PermissionError:\n pass\n\n # need to follow the symlink: https://github.com/iterative/dvc/issues/5065\n path = pathlib.Path(path).resolve()\n\n for parent in itertools.chain([path], path.parents):\n if parent in partition:\n return partition[parent]\n return (\"unknown\", \"none\")\n\n\ndef _get_dvc_repo_info(self):\n if self.config.get(\"core\", {}).get(\"no_scm\", False):\n return \"dvc (no_scm)\"\n\n if self.root_dir != self.scm.root_dir:\n return \"dvc (subdir), git\"\n\n return \"dvc, git\"\n", "path": "dvc/info.py"}], "after_files": [{"content": "# pylint: disable=unused-import\ntry:\n from ._dvc_version import version as __version__\n from ._dvc_version import version_tuple # noqa: F401\nexcept ImportError:\n __version__ = \"UNKNOWN\"\n", 
"path": "dvc/version.py"}, {"content": "import itertools\nimport os\nimport pathlib\nimport platform\nimport uuid\n\nimport psutil\n\nfrom dvc import __version__\nfrom dvc.exceptions import DvcException, NotDvcRepoError\nfrom dvc.fs import FS_MAP, get_fs_cls, get_fs_config\nfrom dvc.repo import Repo\nfrom dvc.scm.base import SCMError\nfrom dvc.system import System\nfrom dvc.utils import error_link\nfrom dvc.utils.pkg import PKG\n\ntry:\n import importlib.metadata as importlib_metadata\nexcept ImportError: # < 3.8\n import importlib_metadata # type: ignore[no-redef]\n\n\npackage = \"\" if PKG is None else f\"({PKG})\"\n\n\ndef get_dvc_info():\n info = [\n f\"DVC version: {__version__} {package}\",\n \"---------------------------------\",\n f\"Platform: Python {platform.python_version()} on \"\n f\"{platform.platform()}\",\n f\"Supports:{_get_supported_remotes()}\",\n ]\n\n try:\n with Repo() as repo:\n # cache_dir might not exist yet (e.g. after `dvc init`), and we\n # can't auto-create it, as it might cause issues if the user\n # later decides to enable shared cache mode with\n # `dvc config cache.shared group`.\n if os.path.exists(repo.odb.local.cache_dir):\n info.append(f\"Cache types: {_get_linktype_support_info(repo)}\")\n fs_type = get_fs_type(repo.odb.local.cache_dir)\n info.append(f\"Cache directory: {fs_type}\")\n else:\n info.append(\"Cache types: \" + error_link(\"no-dvc-cache\"))\n\n info.append(f\"Caches: {_get_caches(repo.odb)}\")\n info.append(f\"Remotes: {_get_remotes(repo.config)}\")\n\n root_directory = repo.root_dir\n fs_root = get_fs_type(os.path.abspath(root_directory))\n info.append(f\"Workspace directory: {fs_root}\")\n info.append(f\"Repo: {_get_dvc_repo_info(repo)}\")\n except NotDvcRepoError:\n pass\n except SCMError:\n info.append(\"Repo: dvc, git (broken)\")\n\n return \"\\n\".join(info)\n\n\ndef _get_caches(cache):\n caches = (\n cache_type\n for cache_type, cache_instance in cache.by_scheme()\n if cache_instance\n )\n\n # Caches will be always non-empty including the local cache\n return \", \".join(caches)\n\n\ndef _get_remotes(config):\n schemes = (\n get_fs_cls(get_fs_config(config, name=remote)).scheme\n for remote in config[\"remote\"]\n )\n\n return \", \".join(schemes) or \"None\"\n\n\ndef _get_linktype_support_info(repo):\n\n links = {\n \"reflink\": (System.reflink, None),\n \"hardlink\": (System.hardlink, System.is_hardlink),\n \"symlink\": (System.symlink, System.is_symlink),\n }\n\n fname = \".\" + str(uuid.uuid4())\n src = os.path.join(repo.odb.local.cache_dir, fname)\n open(src, \"w\").close()\n dst = os.path.join(repo.root_dir, fname)\n\n cache = []\n\n for name, (link, is_link) in links.items():\n try:\n link(src, dst)\n status = \"supported\"\n if is_link and not is_link(dst):\n status = \"broken\"\n os.unlink(dst)\n except DvcException:\n status = \"not supported\"\n\n if status == \"supported\":\n cache.append(name)\n os.remove(src)\n\n return \", \".join(cache)\n\n\ndef _get_supported_remotes():\n supported_remotes = []\n for scheme, fs_cls in FS_MAP.items():\n if not fs_cls.get_missing_deps():\n dependencies = []\n for requirement in fs_cls.REQUIRES:\n dependencies.append(\n f\"{requirement} = \"\n f\"{importlib_metadata.version(requirement)}\"\n )\n\n remote_info = scheme\n if dependencies:\n remote_info += \" (\" + \", \".join(dependencies) + \")\"\n supported_remotes.append(remote_info)\n\n assert len(supported_remotes) >= 1\n return \"\\n\\t\" + \",\\n\\t\".join(supported_remotes)\n\n\ndef get_fs_type(path):\n partition = {}\n for part 
in psutil.disk_partitions(all=True):\n if part.fstype != \"\":\n try:\n mountpoint = pathlib.Path(part.mountpoint).resolve()\n partition[mountpoint] = part.fstype + \" on \" + part.device\n except PermissionError:\n pass\n\n # need to follow the symlink: https://github.com/iterative/dvc/issues/5065\n path = pathlib.Path(path).resolve()\n\n for parent in itertools.chain([path], path.parents):\n if parent in partition:\n return partition[parent]\n return (\"unknown\", \"none\")\n\n\ndef _get_dvc_repo_info(self):\n if self.config.get(\"core\", {}).get(\"no_scm\", False):\n return \"dvc (no_scm)\"\n\n if self.root_dir != self.scm.root_dir:\n return \"dvc (subdir), git\"\n\n return \"dvc, git\"\n", "path": "dvc/info.py"}]}
| 4,015 | 265 |
gh_patches_debug_3111
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-128
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Lists being marked as Optional
When defining a list, the resulting schema marks the list as optional (or nullable in GraphQL terms) even if it wasn't wrapped in `typing.Optional`; we should fix that :)
--- END ISSUE ---
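For context, the nullability difference at stake can be sketched with the `graphql` core types used in the converter below (illustrative only):
```python
# Sketch: shapes produced for a field annotated as typing.List[str] (no Optional wrapper).
from graphql import GraphQLList, GraphQLNonNull, GraphQLString

current = GraphQLList(GraphQLNonNull(GraphQLString))                    # [String!]  -- list itself is nullable
expected = GraphQLNonNull(GraphQLList(GraphQLNonNull(GraphQLString)))   # [String!]! -- non-null list
print(current, expected)
```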
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/type_converter.py`
Content:
```
1 from collections.abc import AsyncGenerator
2
3 from graphql import (
4 GraphQLBoolean,
5 GraphQLFloat,
6 GraphQLID,
7 GraphQLInt,
8 GraphQLList,
9 GraphQLNonNull,
10 GraphQLString,
11 GraphQLUnionType,
12 )
13
14 from .exceptions import UnallowedReturnTypeForUnion, WrongReturnTypeForUnion
15 from .scalars import ID
16 from .utils.typing import is_union
17
18
19 REGISTRY = {
20 str: GraphQLString,
21 int: GraphQLInt,
22 float: GraphQLFloat,
23 bool: GraphQLBoolean,
24 ID: GraphQLID,
25 }
26
27
28 # TODO: make so that we don't pass force optional
29 # we use that when trying to get the type for a
30 # option field (which can either be a scalar or an object type)
31 def get_graphql_type_for_annotation(
32 annotation, field_name: str, force_optional: bool = False
33 ):
34 # TODO: this might lead to issues with types that have a field value
35 is_field_optional = force_optional
36
37 if hasattr(annotation, "field"):
38 graphql_type = annotation.field
39 else:
40 annotation_name = getattr(annotation, "_name", None)
41
42 if annotation_name == "List":
43 list_of_type = get_graphql_type_for_annotation(
44 annotation.__args__[0], field_name
45 )
46
47 return GraphQLList(list_of_type)
48
49 annotation_origin = getattr(annotation, "__origin__", None)
50
51 if annotation_origin == AsyncGenerator:
52 # async generators are used in subscription, we only need the yield type
53 # https://docs.python.org/3/library/typing.html#typing.AsyncGenerator
54 return get_graphql_type_for_annotation(annotation.__args__[0], field_name)
55
56 elif is_union(annotation):
57 types = annotation.__args__
58 non_none_types = [x for x in types if x != None.__class__] # noqa:E721
59
60 # optionals are represented as Union[type, None]
61 if len(non_none_types) == 1:
62 is_field_optional = True
63 graphql_type = get_graphql_type_for_annotation(
64 non_none_types[0], field_name, force_optional=True
65 )
66 else:
67 is_field_optional = None.__class__ in types
68
69 def _resolve_type(self, value, _type):
70 if not hasattr(self, "field"):
71 raise WrongReturnTypeForUnion(value.field_name, str(type(self)))
72
73 if self.field not in _type.types:
74 raise UnallowedReturnTypeForUnion(
75 value.field_name, str(type(self)), _type.types
76 )
77
78 return self.field
79
80 # TODO: union types don't work with scalar types
81 # so we want to return a nice error
82 # also we want to make sure we have been passed
83 # strawberry types
84 graphql_type = GraphQLUnionType(
85 field_name, [type.field for type in types]
86 )
87 graphql_type.resolve_type = _resolve_type
88 else:
89 graphql_type = REGISTRY.get(annotation)
90
91 if not graphql_type:
92 raise ValueError(f"Unable to get GraphQL type for {annotation}")
93
94 if is_field_optional:
95 return graphql_type
96
97 return GraphQLNonNull(graphql_type)
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/strawberry/type_converter.py b/strawberry/type_converter.py
--- a/strawberry/type_converter.py
+++ b/strawberry/type_converter.py
@@ -44,7 +44,9 @@
annotation.__args__[0], field_name
)
- return GraphQLList(list_of_type)
+ list_type = GraphQLList(list_of_type)
+
+ return list_type if is_field_optional else GraphQLNonNull(list_type)
annotation_origin = getattr(annotation, "__origin__", None)
|
{"golden_diff": "diff --git a/strawberry/type_converter.py b/strawberry/type_converter.py\n--- a/strawberry/type_converter.py\n+++ b/strawberry/type_converter.py\n@@ -44,7 +44,9 @@\n annotation.__args__[0], field_name\n )\n \n- return GraphQLList(list_of_type)\n+ list_type = GraphQLList(list_of_type)\n+\n+ return list_type if is_field_optional else GraphQLNonNull(list_type)\n \n annotation_origin = getattr(annotation, \"__origin__\", None)\n", "issue": "Lists being marked as Optional\nWhen defining a list the resulting schema marks the list as optional (or nullable in GraphQL terms) even if it wasn't wrapped in `typing.Optional`, we should fix that :)\n", "before_files": [{"content": "from collections.abc import AsyncGenerator\n\nfrom graphql import (\n GraphQLBoolean,\n GraphQLFloat,\n GraphQLID,\n GraphQLInt,\n GraphQLList,\n GraphQLNonNull,\n GraphQLString,\n GraphQLUnionType,\n)\n\nfrom .exceptions import UnallowedReturnTypeForUnion, WrongReturnTypeForUnion\nfrom .scalars import ID\nfrom .utils.typing import is_union\n\n\nREGISTRY = {\n str: GraphQLString,\n int: GraphQLInt,\n float: GraphQLFloat,\n bool: GraphQLBoolean,\n ID: GraphQLID,\n}\n\n\n# TODO: make so that we don't pass force optional\n# we use that when trying to get the type for a\n# option field (which can either be a scalar or an object type)\ndef get_graphql_type_for_annotation(\n annotation, field_name: str, force_optional: bool = False\n):\n # TODO: this might lead to issues with types that have a field value\n is_field_optional = force_optional\n\n if hasattr(annotation, \"field\"):\n graphql_type = annotation.field\n else:\n annotation_name = getattr(annotation, \"_name\", None)\n\n if annotation_name == \"List\":\n list_of_type = get_graphql_type_for_annotation(\n annotation.__args__[0], field_name\n )\n\n return GraphQLList(list_of_type)\n\n annotation_origin = getattr(annotation, \"__origin__\", None)\n\n if annotation_origin == AsyncGenerator:\n # async generators are used in subscription, we only need the yield type\n # https://docs.python.org/3/library/typing.html#typing.AsyncGenerator\n return get_graphql_type_for_annotation(annotation.__args__[0], field_name)\n\n elif is_union(annotation):\n types = annotation.__args__\n non_none_types = [x for x in types if x != None.__class__] # noqa:E721\n\n # optionals are represented as Union[type, None]\n if len(non_none_types) == 1:\n is_field_optional = True\n graphql_type = get_graphql_type_for_annotation(\n non_none_types[0], field_name, force_optional=True\n )\n else:\n is_field_optional = None.__class__ in types\n\n def _resolve_type(self, value, _type):\n if not hasattr(self, \"field\"):\n raise WrongReturnTypeForUnion(value.field_name, str(type(self)))\n\n if self.field not in _type.types:\n raise UnallowedReturnTypeForUnion(\n value.field_name, str(type(self)), _type.types\n )\n\n return self.field\n\n # TODO: union types don't work with scalar types\n # so we want to return a nice error\n # also we want to make sure we have been passed\n # strawberry types\n graphql_type = GraphQLUnionType(\n field_name, [type.field for type in types]\n )\n graphql_type.resolve_type = _resolve_type\n else:\n graphql_type = REGISTRY.get(annotation)\n\n if not graphql_type:\n raise ValueError(f\"Unable to get GraphQL type for {annotation}\")\n\n if is_field_optional:\n return graphql_type\n\n return GraphQLNonNull(graphql_type)\n", "path": "strawberry/type_converter.py"}], "after_files": [{"content": "from collections.abc import AsyncGenerator\n\nfrom graphql import (\n 
GraphQLBoolean,\n GraphQLFloat,\n GraphQLID,\n GraphQLInt,\n GraphQLList,\n GraphQLNonNull,\n GraphQLString,\n GraphQLUnionType,\n)\n\nfrom .exceptions import UnallowedReturnTypeForUnion, WrongReturnTypeForUnion\nfrom .scalars import ID\nfrom .utils.typing import is_union\n\n\nREGISTRY = {\n str: GraphQLString,\n int: GraphQLInt,\n float: GraphQLFloat,\n bool: GraphQLBoolean,\n ID: GraphQLID,\n}\n\n\n# TODO: make so that we don't pass force optional\n# we use that when trying to get the type for a\n# option field (which can either be a scalar or an object type)\ndef get_graphql_type_for_annotation(\n annotation, field_name: str, force_optional: bool = False\n):\n # TODO: this might lead to issues with types that have a field value\n is_field_optional = force_optional\n\n if hasattr(annotation, \"field\"):\n graphql_type = annotation.field\n else:\n annotation_name = getattr(annotation, \"_name\", None)\n\n if annotation_name == \"List\":\n list_of_type = get_graphql_type_for_annotation(\n annotation.__args__[0], field_name\n )\n\n list_type = GraphQLList(list_of_type)\n\n return list_type if is_field_optional else GraphQLNonNull(list_type)\n\n annotation_origin = getattr(annotation, \"__origin__\", None)\n\n if annotation_origin == AsyncGenerator:\n # async generators are used in subscription, we only need the yield type\n # https://docs.python.org/3/library/typing.html#typing.AsyncGenerator\n return get_graphql_type_for_annotation(annotation.__args__[0], field_name)\n\n elif is_union(annotation):\n types = annotation.__args__\n non_none_types = [x for x in types if x != None.__class__] # noqa:E721\n\n # optionals are represented as Union[type, None]\n if len(non_none_types) == 1:\n is_field_optional = True\n graphql_type = get_graphql_type_for_annotation(\n non_none_types[0], field_name, force_optional=True\n )\n else:\n is_field_optional = None.__class__ in types\n\n def _resolve_type(self, value, _type):\n if not hasattr(self, \"field\"):\n raise WrongReturnTypeForUnion(value.field_name, str(type(self)))\n\n if self.field not in _type.types:\n raise UnallowedReturnTypeForUnion(\n value.field_name, str(type(self)), _type.types\n )\n\n return self.field\n\n # TODO: union types don't work with scalar types\n # so we want to return a nice error\n # also we want to make sure we have been passed\n # strawberry types\n graphql_type = GraphQLUnionType(\n field_name, [type.field for type in types]\n )\n graphql_type.resolve_type = _resolve_type\n else:\n graphql_type = REGISTRY.get(annotation)\n\n if not graphql_type:\n raise ValueError(f\"Unable to get GraphQL type for {annotation}\")\n\n if is_field_optional:\n return graphql_type\n\n return GraphQLNonNull(graphql_type)\n", "path": "strawberry/type_converter.py"}]}
| 1,162 | 115 |
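For reference, a minimal standalone sketch of the list-nullability rule encoded by the strawberry patch above, using graphql-core (assumes graphql-core 3 is installed; variable names are illustrative):

```python
from graphql import GraphQLList, GraphQLNonNull, GraphQLString

# A plain `List[str]` annotation should become a non-null list type,
# while `Optional[List[str]]` keeps the bare (nullable) list type.
list_type = GraphQLList(GraphQLString)
required_list = GraphQLNonNull(list_type)

print(required_list)  # [String]!
print(list_type)      # [String]
```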
gh_patches_debug_847 | rasdani/github-patches | git_diff | vyperlang__vyper-3202 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`pc_pos_map` for small methods is empty
### Version Information
* vyper Version (output of `vyper --version`): 0.3.7
* OS: osx
* Python Version (output of `python --version`): 3.10.4
### Bug
```
(vyper) ~/vyper $ cat tmp/baz.vy
@external
def foo():
pass
(vyper) ~/vyper $ vyc -f source_map tmp/baz.vy
{"breakpoints": [], "error_map": {"51": "fallback function"}, "pc_breakpoints": [], "pc_jump_map": {"0": "-", "7": "-", "11": "-", "12": "-", "23": "-", "34": "-", "42": "-", "44": "-", "46": "-", "52": "-"}, "pc_pos_map": {}, "pc_pos_map_compressed": "-1:-1:0:-;;;;:::-;;:::-;:::-;;;;;;;:::-;;;;;:::-;;;;;:::-;;:::-;;:::-;;;;:::-;;;"}
```
pc_pos_map should not be empty.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vyper/codegen/function_definitions/external_function.py`
Content:
```
1 from typing import Any, List
2
3 import vyper.utils as util
4 from vyper.address_space import CALLDATA, DATA, MEMORY
5 from vyper.ast.signatures.function_signature import FunctionSignature, VariableRecord
6 from vyper.codegen.abi_encoder import abi_encoding_matches_vyper
7 from vyper.codegen.context import Context
8 from vyper.codegen.core import get_element_ptr, getpos, make_setter, needs_clamp
9 from vyper.codegen.expr import Expr
10 from vyper.codegen.function_definitions.utils import get_nonreentrant_lock
11 from vyper.codegen.ir_node import Encoding, IRnode
12 from vyper.codegen.stmt import parse_body
13 from vyper.codegen.types.types import TupleType
14
15
16 # register function args with the local calling context.
17 # also allocate the ones that live in memory (i.e. kwargs)
18 def _register_function_args(context: Context, sig: FunctionSignature) -> List[IRnode]:
19 ret = []
20
21 # the type of the calldata
22 base_args_t = TupleType([arg.typ for arg in sig.base_args])
23
24 # tuple with the abi_encoded args
25 if sig.is_init_func:
26 base_args_ofst = IRnode(0, location=DATA, typ=base_args_t, encoding=Encoding.ABI)
27 else:
28 base_args_ofst = IRnode(4, location=CALLDATA, typ=base_args_t, encoding=Encoding.ABI)
29
30 for i, arg in enumerate(sig.base_args):
31
32 arg_ir = get_element_ptr(base_args_ofst, i)
33
34 if needs_clamp(arg.typ, Encoding.ABI):
35 # allocate a memory slot for it and copy
36 p = context.new_variable(arg.name, arg.typ, is_mutable=False)
37 dst = IRnode(p, typ=arg.typ, location=MEMORY)
38
39 copy_arg = make_setter(dst, arg_ir)
40 copy_arg.source_pos = getpos(arg.ast_source)
41 ret.append(copy_arg)
42 else:
43 assert abi_encoding_matches_vyper(arg.typ)
44 # leave it in place
45 context.vars[arg.name] = VariableRecord(
46 name=arg.name,
47 pos=arg_ir,
48 typ=arg.typ,
49 mutable=False,
50 location=arg_ir.location,
51 encoding=Encoding.ABI,
52 )
53
54 return ret
55
56
57 def _annotated_method_id(abi_sig):
58 method_id = util.method_id_int(abi_sig)
59 annotation = f"{hex(method_id)}: {abi_sig}"
60 return IRnode(method_id, annotation=annotation)
61
62
63 def _generate_kwarg_handlers(context: Context, sig: FunctionSignature) -> List[Any]:
64 # generate kwarg handlers.
65 # since they might come in thru calldata or be default,
66 # allocate them in memory and then fill it in based on calldata or default,
67 # depending on the signature
68 # a kwarg handler looks like
69 # (if (eq _method_id <method_id>)
70 # copy calldata args to memory
71 # write default args to memory
72 # goto external_function_common_ir
73
74 def handler_for(calldata_kwargs, default_kwargs):
75 calldata_args = sig.base_args + calldata_kwargs
76 # create a fake type so that get_element_ptr works
77 calldata_args_t = TupleType(list(arg.typ for arg in calldata_args))
78
79 abi_sig = sig.abi_signature_for_kwargs(calldata_kwargs)
80 method_id = _annotated_method_id(abi_sig)
81
82 calldata_kwargs_ofst = IRnode(
83 4, location=CALLDATA, typ=calldata_args_t, encoding=Encoding.ABI
84 )
85
86 # a sequence of statements to strictify kwargs into memory
87 ret = ["seq"]
88
89 # ensure calldata is at least of minimum length
90 args_abi_t = calldata_args_t.abi_type
91 calldata_min_size = args_abi_t.min_size() + 4
92 ret.append(["assert", ["ge", "calldatasize", calldata_min_size]])
93
94 # TODO optimize make_setter by using
95 # TupleType(list(arg.typ for arg in calldata_kwargs + default_kwargs))
96 # (must ensure memory area is contiguous)
97
98 n_base_args = len(sig.base_args)
99
100 for i, arg_meta in enumerate(calldata_kwargs):
101 k = n_base_args + i
102
103 dst = context.lookup_var(arg_meta.name).pos
104
105 lhs = IRnode(dst, location=MEMORY, typ=arg_meta.typ)
106
107 rhs = get_element_ptr(calldata_kwargs_ofst, k, array_bounds_check=False)
108
109 copy_arg = make_setter(lhs, rhs)
110 copy_arg.source_pos = getpos(arg_meta.ast_source)
111 ret.append(copy_arg)
112
113 for x in default_kwargs:
114 dst = context.lookup_var(x.name).pos
115 lhs = IRnode(dst, location=MEMORY, typ=x.typ)
116 lhs.source_pos = getpos(x.ast_source)
117 kw_ast_val = sig.default_values[x.name] # e.g. `3` in x: int = 3
118 rhs = Expr(kw_ast_val, context).ir_node
119
120 copy_arg = make_setter(lhs, rhs)
121 copy_arg.source_pos = getpos(x.ast_source)
122 ret.append(copy_arg)
123
124 ret.append(["goto", sig.external_function_base_entry_label])
125
126 ret = ["if", ["eq", "_calldata_method_id", method_id], ret]
127 return ret
128
129 ret = ["seq"]
130
131 keyword_args = sig.default_args
132
133 # allocate variable slots in memory
134 for arg in keyword_args:
135 context.new_variable(arg.name, arg.typ, is_mutable=False)
136
137 for i, _ in enumerate(keyword_args):
138 calldata_kwargs = keyword_args[:i]
139 default_kwargs = keyword_args[i:]
140
141 ret.append(handler_for(calldata_kwargs, default_kwargs))
142
143 ret.append(handler_for(keyword_args, []))
144
145 return ret
146
147
148 # TODO it would be nice if this returned a data structure which were
149 # amenable to generating a jump table instead of the linear search for
150 # method_id we have now.
151 def generate_ir_for_external_function(code, sig, context, skip_nonpayable_check):
152 # TODO type hints:
153 # def generate_ir_for_external_function(
154 # code: vy_ast.FunctionDef, sig: FunctionSignature, context: Context, check_nonpayable: bool,
155 # ) -> IRnode:
156 """Return the IR for an external function. Includes code to inspect the method_id,
157 enter the function (nonpayable and reentrancy checks), handle kwargs and exit
158 the function (clean up reentrancy storage variables)
159 """
160 func_type = code._metadata["type"]
161
162 nonreentrant_pre, nonreentrant_post = get_nonreentrant_lock(func_type)
163
164 # generate handlers for base args and register the variable records
165 handle_base_args = _register_function_args(context, sig)
166
167 # generate handlers for kwargs and register the variable records
168 kwarg_handlers = _generate_kwarg_handlers(context, sig)
169
170 body = ["seq"]
171 # once optional args have been handled,
172 # generate the main body of the function
173 body += handle_base_args
174
175 if sig.mutability != "payable" and not skip_nonpayable_check:
176 # if the contract contains payable functions, but this is not one of them
177 # add an assertion that the value of the call is zero
178 body += [["assert", ["iszero", "callvalue"]]]
179
180 body += nonreentrant_pre
181
182 body += [parse_body(code.body, context, ensure_terminated=True)]
183
184 # wrap the body in labeled block
185 body = ["label", sig.external_function_base_entry_label, ["var_list"], body]
186
187 exit_sequence = ["seq"] + nonreentrant_post
188 if sig.is_init_func:
189 pass # init func has special exit sequence generated by module.py
190 elif context.return_type is None:
191 exit_sequence += [["stop"]]
192 else:
193 exit_sequence += [["return", "ret_ofst", "ret_len"]]
194
195 exit_sequence_args = ["var_list"]
196 if context.return_type is not None:
197 exit_sequence_args += ["ret_ofst", "ret_len"]
198 # wrap the exit in a labeled block
199 exit = ["label", sig.exit_sequence_label, exit_sequence_args, exit_sequence]
200
201 # the ir which comprises the main body of the function,
202 # besides any kwarg handling
203 func_common_ir = ["seq", body, exit]
204
205 if sig.is_default_func or sig.is_init_func:
206 ret = ["seq"]
207 # add a goto to make the function entry look like other functions
208 # (for zksync interpreter)
209 ret.append(["goto", sig.external_function_base_entry_label])
210 ret.append(func_common_ir)
211 else:
212 ret = kwarg_handlers
213 # sneak the base code into the kwarg handler
214 # TODO rethink this / make it clearer
215 ret[-1][-1].append(func_common_ir)
216
217 return IRnode.from_list(ret)
218
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/vyper/codegen/function_definitions/external_function.py b/vyper/codegen/function_definitions/external_function.py
--- a/vyper/codegen/function_definitions/external_function.py
+++ b/vyper/codegen/function_definitions/external_function.py
@@ -214,4 +214,4 @@
# TODO rethink this / make it clearer
ret[-1][-1].append(func_common_ir)
- return IRnode.from_list(ret)
+ return IRnode.from_list(ret, source_pos=getpos(sig.func_ast_code))
|
{"golden_diff": "diff --git a/vyper/codegen/function_definitions/external_function.py b/vyper/codegen/function_definitions/external_function.py\n--- a/vyper/codegen/function_definitions/external_function.py\n+++ b/vyper/codegen/function_definitions/external_function.py\n@@ -214,4 +214,4 @@\n # TODO rethink this / make it clearer\n ret[-1][-1].append(func_common_ir)\n \n- return IRnode.from_list(ret)\n+ return IRnode.from_list(ret, source_pos=getpos(sig.func_ast_code))\n", "issue": "`pc_pos_map` for small methods is empty\n### Version Information\r\n\r\n* vyper Version (output of `vyper --version`): 0.3.7\r\n* OS: osx\r\n* Python Version (output of `python --version`): 3.10.4\r\n\r\n### Bug\r\n\r\n```\r\n(vyper) ~/vyper $ cat tmp/baz.vy \r\n\r\n@external\r\ndef foo():\r\n pass\r\n\r\n(vyper) ~/vyper $ vyc -f source_map tmp/baz.vy \r\n\r\n{\"breakpoints\": [], \"error_map\": {\"51\": \"fallback function\"}, \"pc_breakpoints\": [], \"pc_jump_map\": {\"0\": \"-\", \"7\": \"-\", \"11\": \"-\", \"12\": \"-\", \"23\": \"-\", \"34\": \"-\", \"42\": \"-\", \"44\": \"-\", \"46\": \"-\", \"52\": \"-\"}, \"pc_pos_map\": {}, \"pc_pos_map_compressed\": \"-1:-1:0:-;;;;:::-;;:::-;:::-;;;;;;;:::-;;;;;:::-;;;;;:::-;;:::-;;:::-;;;;:::-;;;\"}\r\n\r\n```\r\npc_pos_map should not be empty.\r\n\r\n\n", "before_files": [{"content": "from typing import Any, List\n\nimport vyper.utils as util\nfrom vyper.address_space import CALLDATA, DATA, MEMORY\nfrom vyper.ast.signatures.function_signature import FunctionSignature, VariableRecord\nfrom vyper.codegen.abi_encoder import abi_encoding_matches_vyper\nfrom vyper.codegen.context import Context\nfrom vyper.codegen.core import get_element_ptr, getpos, make_setter, needs_clamp\nfrom vyper.codegen.expr import Expr\nfrom vyper.codegen.function_definitions.utils import get_nonreentrant_lock\nfrom vyper.codegen.ir_node import Encoding, IRnode\nfrom vyper.codegen.stmt import parse_body\nfrom vyper.codegen.types.types import TupleType\n\n\n# register function args with the local calling context.\n# also allocate the ones that live in memory (i.e. 
kwargs)\ndef _register_function_args(context: Context, sig: FunctionSignature) -> List[IRnode]:\n ret = []\n\n # the type of the calldata\n base_args_t = TupleType([arg.typ for arg in sig.base_args])\n\n # tuple with the abi_encoded args\n if sig.is_init_func:\n base_args_ofst = IRnode(0, location=DATA, typ=base_args_t, encoding=Encoding.ABI)\n else:\n base_args_ofst = IRnode(4, location=CALLDATA, typ=base_args_t, encoding=Encoding.ABI)\n\n for i, arg in enumerate(sig.base_args):\n\n arg_ir = get_element_ptr(base_args_ofst, i)\n\n if needs_clamp(arg.typ, Encoding.ABI):\n # allocate a memory slot for it and copy\n p = context.new_variable(arg.name, arg.typ, is_mutable=False)\n dst = IRnode(p, typ=arg.typ, location=MEMORY)\n\n copy_arg = make_setter(dst, arg_ir)\n copy_arg.source_pos = getpos(arg.ast_source)\n ret.append(copy_arg)\n else:\n assert abi_encoding_matches_vyper(arg.typ)\n # leave it in place\n context.vars[arg.name] = VariableRecord(\n name=arg.name,\n pos=arg_ir,\n typ=arg.typ,\n mutable=False,\n location=arg_ir.location,\n encoding=Encoding.ABI,\n )\n\n return ret\n\n\ndef _annotated_method_id(abi_sig):\n method_id = util.method_id_int(abi_sig)\n annotation = f\"{hex(method_id)}: {abi_sig}\"\n return IRnode(method_id, annotation=annotation)\n\n\ndef _generate_kwarg_handlers(context: Context, sig: FunctionSignature) -> List[Any]:\n # generate kwarg handlers.\n # since they might come in thru calldata or be default,\n # allocate them in memory and then fill it in based on calldata or default,\n # depending on the signature\n # a kwarg handler looks like\n # (if (eq _method_id <method_id>)\n # copy calldata args to memory\n # write default args to memory\n # goto external_function_common_ir\n\n def handler_for(calldata_kwargs, default_kwargs):\n calldata_args = sig.base_args + calldata_kwargs\n # create a fake type so that get_element_ptr works\n calldata_args_t = TupleType(list(arg.typ for arg in calldata_args))\n\n abi_sig = sig.abi_signature_for_kwargs(calldata_kwargs)\n method_id = _annotated_method_id(abi_sig)\n\n calldata_kwargs_ofst = IRnode(\n 4, location=CALLDATA, typ=calldata_args_t, encoding=Encoding.ABI\n )\n\n # a sequence of statements to strictify kwargs into memory\n ret = [\"seq\"]\n\n # ensure calldata is at least of minimum length\n args_abi_t = calldata_args_t.abi_type\n calldata_min_size = args_abi_t.min_size() + 4\n ret.append([\"assert\", [\"ge\", \"calldatasize\", calldata_min_size]])\n\n # TODO optimize make_setter by using\n # TupleType(list(arg.typ for arg in calldata_kwargs + default_kwargs))\n # (must ensure memory area is contiguous)\n\n n_base_args = len(sig.base_args)\n\n for i, arg_meta in enumerate(calldata_kwargs):\n k = n_base_args + i\n\n dst = context.lookup_var(arg_meta.name).pos\n\n lhs = IRnode(dst, location=MEMORY, typ=arg_meta.typ)\n\n rhs = get_element_ptr(calldata_kwargs_ofst, k, array_bounds_check=False)\n\n copy_arg = make_setter(lhs, rhs)\n copy_arg.source_pos = getpos(arg_meta.ast_source)\n ret.append(copy_arg)\n\n for x in default_kwargs:\n dst = context.lookup_var(x.name).pos\n lhs = IRnode(dst, location=MEMORY, typ=x.typ)\n lhs.source_pos = getpos(x.ast_source)\n kw_ast_val = sig.default_values[x.name] # e.g. 
`3` in x: int = 3\n rhs = Expr(kw_ast_val, context).ir_node\n\n copy_arg = make_setter(lhs, rhs)\n copy_arg.source_pos = getpos(x.ast_source)\n ret.append(copy_arg)\n\n ret.append([\"goto\", sig.external_function_base_entry_label])\n\n ret = [\"if\", [\"eq\", \"_calldata_method_id\", method_id], ret]\n return ret\n\n ret = [\"seq\"]\n\n keyword_args = sig.default_args\n\n # allocate variable slots in memory\n for arg in keyword_args:\n context.new_variable(arg.name, arg.typ, is_mutable=False)\n\n for i, _ in enumerate(keyword_args):\n calldata_kwargs = keyword_args[:i]\n default_kwargs = keyword_args[i:]\n\n ret.append(handler_for(calldata_kwargs, default_kwargs))\n\n ret.append(handler_for(keyword_args, []))\n\n return ret\n\n\n# TODO it would be nice if this returned a data structure which were\n# amenable to generating a jump table instead of the linear search for\n# method_id we have now.\ndef generate_ir_for_external_function(code, sig, context, skip_nonpayable_check):\n # TODO type hints:\n # def generate_ir_for_external_function(\n # code: vy_ast.FunctionDef, sig: FunctionSignature, context: Context, check_nonpayable: bool,\n # ) -> IRnode:\n \"\"\"Return the IR for an external function. Includes code to inspect the method_id,\n enter the function (nonpayable and reentrancy checks), handle kwargs and exit\n the function (clean up reentrancy storage variables)\n \"\"\"\n func_type = code._metadata[\"type\"]\n\n nonreentrant_pre, nonreentrant_post = get_nonreentrant_lock(func_type)\n\n # generate handlers for base args and register the variable records\n handle_base_args = _register_function_args(context, sig)\n\n # generate handlers for kwargs and register the variable records\n kwarg_handlers = _generate_kwarg_handlers(context, sig)\n\n body = [\"seq\"]\n # once optional args have been handled,\n # generate the main body of the function\n body += handle_base_args\n\n if sig.mutability != \"payable\" and not skip_nonpayable_check:\n # if the contract contains payable functions, but this is not one of them\n # add an assertion that the value of the call is zero\n body += [[\"assert\", [\"iszero\", \"callvalue\"]]]\n\n body += nonreentrant_pre\n\n body += [parse_body(code.body, context, ensure_terminated=True)]\n\n # wrap the body in labeled block\n body = [\"label\", sig.external_function_base_entry_label, [\"var_list\"], body]\n\n exit_sequence = [\"seq\"] + nonreentrant_post\n if sig.is_init_func:\n pass # init func has special exit sequence generated by module.py\n elif context.return_type is None:\n exit_sequence += [[\"stop\"]]\n else:\n exit_sequence += [[\"return\", \"ret_ofst\", \"ret_len\"]]\n\n exit_sequence_args = [\"var_list\"]\n if context.return_type is not None:\n exit_sequence_args += [\"ret_ofst\", \"ret_len\"]\n # wrap the exit in a labeled block\n exit = [\"label\", sig.exit_sequence_label, exit_sequence_args, exit_sequence]\n\n # the ir which comprises the main body of the function,\n # besides any kwarg handling\n func_common_ir = [\"seq\", body, exit]\n\n if sig.is_default_func or sig.is_init_func:\n ret = [\"seq\"]\n # add a goto to make the function entry look like other functions\n # (for zksync interpreter)\n ret.append([\"goto\", sig.external_function_base_entry_label])\n ret.append(func_common_ir)\n else:\n ret = kwarg_handlers\n # sneak the base code into the kwarg handler\n # TODO rethink this / make it clearer\n ret[-1][-1].append(func_common_ir)\n\n return IRnode.from_list(ret)\n", "path": "vyper/codegen/function_definitions/external_function.py"}], 
"after_files": [{"content": "from typing import Any, List\n\nimport vyper.utils as util\nfrom vyper.address_space import CALLDATA, DATA, MEMORY\nfrom vyper.ast.signatures.function_signature import FunctionSignature, VariableRecord\nfrom vyper.codegen.abi_encoder import abi_encoding_matches_vyper\nfrom vyper.codegen.context import Context\nfrom vyper.codegen.core import get_element_ptr, getpos, make_setter, needs_clamp\nfrom vyper.codegen.expr import Expr\nfrom vyper.codegen.function_definitions.utils import get_nonreentrant_lock\nfrom vyper.codegen.ir_node import Encoding, IRnode\nfrom vyper.codegen.stmt import parse_body\nfrom vyper.codegen.types.types import TupleType\n\n\n# register function args with the local calling context.\n# also allocate the ones that live in memory (i.e. kwargs)\ndef _register_function_args(context: Context, sig: FunctionSignature) -> List[IRnode]:\n ret = []\n\n # the type of the calldata\n base_args_t = TupleType([arg.typ for arg in sig.base_args])\n\n # tuple with the abi_encoded args\n if sig.is_init_func:\n base_args_ofst = IRnode(0, location=DATA, typ=base_args_t, encoding=Encoding.ABI)\n else:\n base_args_ofst = IRnode(4, location=CALLDATA, typ=base_args_t, encoding=Encoding.ABI)\n\n for i, arg in enumerate(sig.base_args):\n\n arg_ir = get_element_ptr(base_args_ofst, i)\n\n if needs_clamp(arg.typ, Encoding.ABI):\n # allocate a memory slot for it and copy\n p = context.new_variable(arg.name, arg.typ, is_mutable=False)\n dst = IRnode(p, typ=arg.typ, location=MEMORY)\n\n copy_arg = make_setter(dst, arg_ir)\n copy_arg.source_pos = getpos(arg.ast_source)\n ret.append(copy_arg)\n else:\n assert abi_encoding_matches_vyper(arg.typ)\n # leave it in place\n context.vars[arg.name] = VariableRecord(\n name=arg.name,\n pos=arg_ir,\n typ=arg.typ,\n mutable=False,\n location=arg_ir.location,\n encoding=Encoding.ABI,\n )\n\n return ret\n\n\ndef _annotated_method_id(abi_sig):\n method_id = util.method_id_int(abi_sig)\n annotation = f\"{hex(method_id)}: {abi_sig}\"\n return IRnode(method_id, annotation=annotation)\n\n\ndef _generate_kwarg_handlers(context: Context, sig: FunctionSignature) -> List[Any]:\n # generate kwarg handlers.\n # since they might come in thru calldata or be default,\n # allocate them in memory and then fill it in based on calldata or default,\n # depending on the signature\n # a kwarg handler looks like\n # (if (eq _method_id <method_id>)\n # copy calldata args to memory\n # write default args to memory\n # goto external_function_common_ir\n\n def handler_for(calldata_kwargs, default_kwargs):\n calldata_args = sig.base_args + calldata_kwargs\n # create a fake type so that get_element_ptr works\n calldata_args_t = TupleType(list(arg.typ for arg in calldata_args))\n\n abi_sig = sig.abi_signature_for_kwargs(calldata_kwargs)\n method_id = _annotated_method_id(abi_sig)\n\n calldata_kwargs_ofst = IRnode(\n 4, location=CALLDATA, typ=calldata_args_t, encoding=Encoding.ABI\n )\n\n # a sequence of statements to strictify kwargs into memory\n ret = [\"seq\"]\n\n # ensure calldata is at least of minimum length\n args_abi_t = calldata_args_t.abi_type\n calldata_min_size = args_abi_t.min_size() + 4\n ret.append([\"assert\", [\"ge\", \"calldatasize\", calldata_min_size]])\n\n # TODO optimize make_setter by using\n # TupleType(list(arg.typ for arg in calldata_kwargs + default_kwargs))\n # (must ensure memory area is contiguous)\n\n n_base_args = len(sig.base_args)\n\n for i, arg_meta in enumerate(calldata_kwargs):\n k = n_base_args + i\n\n dst = 
context.lookup_var(arg_meta.name).pos\n\n lhs = IRnode(dst, location=MEMORY, typ=arg_meta.typ)\n\n rhs = get_element_ptr(calldata_kwargs_ofst, k, array_bounds_check=False)\n\n copy_arg = make_setter(lhs, rhs)\n copy_arg.source_pos = getpos(arg_meta.ast_source)\n ret.append(copy_arg)\n\n for x in default_kwargs:\n dst = context.lookup_var(x.name).pos\n lhs = IRnode(dst, location=MEMORY, typ=x.typ)\n lhs.source_pos = getpos(x.ast_source)\n kw_ast_val = sig.default_values[x.name] # e.g. `3` in x: int = 3\n rhs = Expr(kw_ast_val, context).ir_node\n\n copy_arg = make_setter(lhs, rhs)\n copy_arg.source_pos = getpos(x.ast_source)\n ret.append(copy_arg)\n\n ret.append([\"goto\", sig.external_function_base_entry_label])\n\n ret = [\"if\", [\"eq\", \"_calldata_method_id\", method_id], ret]\n return ret\n\n ret = [\"seq\"]\n\n keyword_args = sig.default_args\n\n # allocate variable slots in memory\n for arg in keyword_args:\n context.new_variable(arg.name, arg.typ, is_mutable=False)\n\n for i, _ in enumerate(keyword_args):\n calldata_kwargs = keyword_args[:i]\n default_kwargs = keyword_args[i:]\n\n ret.append(handler_for(calldata_kwargs, default_kwargs))\n\n ret.append(handler_for(keyword_args, []))\n\n return ret\n\n\n# TODO it would be nice if this returned a data structure which were\n# amenable to generating a jump table instead of the linear search for\n# method_id we have now.\ndef generate_ir_for_external_function(code, sig, context, skip_nonpayable_check):\n # TODO type hints:\n # def generate_ir_for_external_function(\n # code: vy_ast.FunctionDef, sig: FunctionSignature, context: Context, check_nonpayable: bool,\n # ) -> IRnode:\n \"\"\"Return the IR for an external function. Includes code to inspect the method_id,\n enter the function (nonpayable and reentrancy checks), handle kwargs and exit\n the function (clean up reentrancy storage variables)\n \"\"\"\n func_type = code._metadata[\"type\"]\n\n nonreentrant_pre, nonreentrant_post = get_nonreentrant_lock(func_type)\n\n # generate handlers for base args and register the variable records\n handle_base_args = _register_function_args(context, sig)\n\n # generate handlers for kwargs and register the variable records\n kwarg_handlers = _generate_kwarg_handlers(context, sig)\n\n body = [\"seq\"]\n # once optional args have been handled,\n # generate the main body of the function\n body += handle_base_args\n\n if sig.mutability != \"payable\" and not skip_nonpayable_check:\n # if the contract contains payable functions, but this is not one of them\n # add an assertion that the value of the call is zero\n body += [[\"assert\", [\"iszero\", \"callvalue\"]]]\n\n body += nonreentrant_pre\n\n body += [parse_body(code.body, context, ensure_terminated=True)]\n\n # wrap the body in labeled block\n body = [\"label\", sig.external_function_base_entry_label, [\"var_list\"], body]\n\n exit_sequence = [\"seq\"] + nonreentrant_post\n if sig.is_init_func:\n pass # init func has special exit sequence generated by module.py\n elif context.return_type is None:\n exit_sequence += [[\"stop\"]]\n else:\n exit_sequence += [[\"return\", \"ret_ofst\", \"ret_len\"]]\n\n exit_sequence_args = [\"var_list\"]\n if context.return_type is not None:\n exit_sequence_args += [\"ret_ofst\", \"ret_len\"]\n # wrap the exit in a labeled block\n exit = [\"label\", sig.exit_sequence_label, exit_sequence_args, exit_sequence]\n\n # the ir which comprises the main body of the function,\n # besides any kwarg handling\n func_common_ir = [\"seq\", body, exit]\n\n if sig.is_default_func or 
sig.is_init_func:\n ret = [\"seq\"]\n # add a goto to make the function entry look like other functions\n # (for zksync interpreter)\n ret.append([\"goto\", sig.external_function_base_entry_label])\n ret.append(func_common_ir)\n else:\n ret = kwarg_handlers\n # sneak the base code into the kwarg handler\n # TODO rethink this / make it clearer\n ret[-1][-1].append(func_common_ir)\n\n return IRnode.from_list(ret, source_pos=getpos(sig.func_ast_code))\n", "path": "vyper/codegen/function_definitions/external_function.py"}]}
| 3,007 | 116 |
gh_patches_debug_29154 | rasdani/github-patches | git_diff | Kinto__kinto-1036 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Setting value in cache should never fail
The cache is a key value store. The transaction isolation and integrity constraints are details of implementation. Setting a value in the cache should just never fail.
Setting value in cache should never fail
The cache is a key value store. The transaction isolation and integrity constraints are details of implementation. Setting a value in the cache should just never fail.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/core/cache/postgresql/__init__.py`
Content:
```
1 from __future__ import absolute_import
2
3 import os
4
5 from kinto.core import logger
6 from kinto.core.cache import CacheBase
7 from kinto.core.storage.postgresql.client import create_from_config
8 from kinto.core.utils import json
9
10
11 class Cache(CacheBase):
12 """Cache backend using PostgreSQL.
13
14 Enable in configuration::
15
16 kinto.cache_backend = kinto.core.cache.postgresql
17
18 Database location URI can be customized::
19
20 kinto.cache_url = postgres://user:[email protected]:5432/dbname
21
22 Alternatively, username and password could also rely on system user ident
23 or even specified in :file:`~/.pgpass` (*see PostgreSQL documentation*).
24
25 .. note::
26
27 Some tables and indices are created when ``kinto migrate`` is run.
28 This requires some privileges on the database, or some error will
29 be raised.
30
31 **Alternatively**, the schema can be initialized outside the
32 python application, using the SQL file located in
33 :file:`kinto/core/cache/postgresql/schema.sql`. This allows to
34 distinguish schema manipulation privileges from schema usage.
35
36
37 A connection pool is enabled by default::
38
39 kinto.cache_pool_size = 10
40 kinto.cache_maxoverflow = 10
41 kinto.cache_max_backlog = -1
42 kinto.cache_pool_recycle = -1
43 kinto.cache_pool_timeout = 30
44 kinto.cache_poolclass =
45 kinto.core.storage.postgresql.pool.QueuePoolWithMaxBacklog
46
47 The ``max_backlog`` limits the number of threads that can be in the queue
48 waiting for a connection. Once this limit has been reached, any further
49 attempts to acquire a connection will be rejected immediately, instead of
50 locking up all threads by keeping them waiting in the queue.
51
52 See `dedicated section in SQLAlchemy documentation
53 <http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html>`_
54 for default values and behaviour.
55
56 .. note::
57
58 Using a `dedicated connection pool <http://pgpool.net>`_ is still
59 recommended to allow load balancing, replication or limit the number
60 of connections used in a multi-process deployment.
61
62 :noindex:
63 """ # NOQA
64 def __init__(self, client, *args, **kwargs):
65 super(Cache, self).__init__(*args, **kwargs)
66 self.client = client
67
68 def initialize_schema(self, dry_run=False):
69 # Check if cache table exists.
70 query = """
71 SELECT 1
72 FROM information_schema.tables
73 WHERE table_name = 'cache';
74 """
75 with self.client.connect(readonly=True) as conn:
76 result = conn.execute(query)
77 if result.rowcount > 0:
78 logger.info("PostgreSQL cache schema is up-to-date.")
79 return
80
81 # Create schema
82 here = os.path.abspath(os.path.dirname(__file__))
83 sql_file = os.path.join(here, 'schema.sql')
84
85 if dry_run:
86 logger.info("Create cache schema from %s" % sql_file)
87 return
88
89 # Since called outside request, force commit.
90 schema = open(sql_file).read()
91 with self.client.connect(force_commit=True) as conn:
92 conn.execute(schema)
93 logger.info('Created PostgreSQL cache tables')
94
95 def flush(self):
96 query = """
97 DELETE FROM cache;
98 """
99 # Since called outside request (e.g. tests), force commit.
100 with self.client.connect(force_commit=True) as conn:
101 conn.execute(query)
102 logger.debug('Flushed PostgreSQL cache tables')
103
104 def ttl(self, key):
105 query = """
106 SELECT EXTRACT(SECOND FROM (ttl - now())) AS ttl
107 FROM cache
108 WHERE key = :key
109 AND ttl IS NOT NULL;
110 """
111 with self.client.connect(readonly=True) as conn:
112 result = conn.execute(query, dict(key=self.prefix + key))
113 if result.rowcount > 0:
114 return result.fetchone()['ttl']
115 return -1
116
117 def expire(self, key, ttl):
118 query = """
119 UPDATE cache SET ttl = sec2ttl(:ttl) WHERE key = :key;
120 """
121 with self.client.connect() as conn:
122 conn.execute(query, dict(ttl=ttl, key=self.prefix + key))
123
124 def set(self, key, value, ttl=None):
125 if ttl is None:
126 logger.warning("No TTL for cache key %r" % key)
127 query = """
128 WITH upsert AS (
129 UPDATE cache SET value = :value, ttl = sec2ttl(:ttl)
130 WHERE key=:key
131 RETURNING *)
132 INSERT INTO cache (key, value, ttl)
133 SELECT :key, :value, sec2ttl(:ttl)
134 WHERE NOT EXISTS (SELECT * FROM upsert)
135 """
136 value = json.dumps(value)
137 with self.client.connect() as conn:
138 conn.execute(query, dict(key=self.prefix + key,
139 value=value, ttl=ttl))
140
141 def get(self, key):
142 purge = "DELETE FROM cache WHERE ttl IS NOT NULL AND now() > ttl;"
143 query = "SELECT value FROM cache WHERE key = :key;"
144 with self.client.connect() as conn:
145 conn.execute(purge)
146 result = conn.execute(query, dict(key=self.prefix + key))
147 if result.rowcount > 0:
148 value = result.fetchone()['value']
149 return json.loads(value)
150
151 def delete(self, key):
152 query = "DELETE FROM cache WHERE key = :key"
153 with self.client.connect() as conn:
154 conn.execute(query, dict(key=self.prefix + key))
155
156
157 def load_from_config(config):
158 settings = config.get_settings()
159 client = create_from_config(config, prefix='cache_', with_transaction=False)
160 return Cache(client=client, cache_prefix=settings['cache_prefix'])
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kinto/core/cache/postgresql/__init__.py b/kinto/core/cache/postgresql/__init__.py
--- a/kinto/core/cache/postgresql/__init__.py
+++ b/kinto/core/cache/postgresql/__init__.py
@@ -1,13 +1,41 @@
from __future__ import absolute_import
+from functools import wraps
import os
+import time
from kinto.core import logger
from kinto.core.cache import CacheBase
from kinto.core.storage.postgresql.client import create_from_config
+from kinto.core.storage.exceptions import BackendError
from kinto.core.utils import json
+DELAY_BETWEEN_RETRIES_IN_SECONDS = 0.005
+MAX_RETRIES = 10
+
+
+def retry_on_failure(func):
+ try:
+ import psycopg2
+ except ImportError: # pragma: no cover
+ pass # Do not break (but will fail nicely later anyway)
+
+ @wraps(func)
+ def wraps_func(self, *args, **kwargs):
+ tries = kwargs.pop('tries', 0)
+ try:
+ return func(self, *args, **kwargs)
+ except psycopg2.IntegrityError as e:
+ if tries < MAX_RETRIES:
+ # Skip delay the 2 first times.
+ delay = max(0, tries - 1) * DELAY_BETWEEN_RETRIES_IN_SECONDS
+ time.sleep(delay)
+ return wraps_func(self, tries=(tries + 1), *args, **kwargs)
+ raise BackendError(original=e)
+ return wraps_func
+
+
class Cache(CacheBase):
"""Cache backend using PostgreSQL.
@@ -121,6 +149,7 @@
with self.client.connect() as conn:
conn.execute(query, dict(ttl=ttl, key=self.prefix + key))
+ @retry_on_failure
def set(self, key, value, ttl=None):
if ttl is None:
logger.warning("No TTL for cache key %r" % key)
|
{"golden_diff": "diff --git a/kinto/core/cache/postgresql/__init__.py b/kinto/core/cache/postgresql/__init__.py\n--- a/kinto/core/cache/postgresql/__init__.py\n+++ b/kinto/core/cache/postgresql/__init__.py\n@@ -1,13 +1,41 @@\n from __future__ import absolute_import\n+from functools import wraps\n \n import os\n+import time\n \n from kinto.core import logger\n from kinto.core.cache import CacheBase\n from kinto.core.storage.postgresql.client import create_from_config\n+from kinto.core.storage.exceptions import BackendError\n from kinto.core.utils import json\n \n \n+DELAY_BETWEEN_RETRIES_IN_SECONDS = 0.005\n+MAX_RETRIES = 10\n+\n+\n+def retry_on_failure(func):\n+ try:\n+ import psycopg2\n+ except ImportError: # pragma: no cover\n+ pass # Do not break (but will fail nicely later anyway)\n+\n+ @wraps(func)\n+ def wraps_func(self, *args, **kwargs):\n+ tries = kwargs.pop('tries', 0)\n+ try:\n+ return func(self, *args, **kwargs)\n+ except psycopg2.IntegrityError as e:\n+ if tries < MAX_RETRIES:\n+ # Skip delay the 2 first times.\n+ delay = max(0, tries - 1) * DELAY_BETWEEN_RETRIES_IN_SECONDS\n+ time.sleep(delay)\n+ return wraps_func(self, tries=(tries + 1), *args, **kwargs)\n+ raise BackendError(original=e)\n+ return wraps_func\n+\n+\n class Cache(CacheBase):\n \"\"\"Cache backend using PostgreSQL.\n \n@@ -121,6 +149,7 @@\n with self.client.connect() as conn:\n conn.execute(query, dict(ttl=ttl, key=self.prefix + key))\n \n+ @retry_on_failure\n def set(self, key, value, ttl=None):\n if ttl is None:\n logger.warning(\"No TTL for cache key %r\" % key)\n", "issue": "Setting value in cache should never fail\nThe cache is a key value store. The transaction isolation and integrity constraints are details of implementation. Setting a value in the cache should just never fail.\nSetting value in cache should never fail\nThe cache is a key value store. The transaction isolation and integrity constraints are details of implementation. Setting a value in the cache should just never fail.\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport os\n\nfrom kinto.core import logger\nfrom kinto.core.cache import CacheBase\nfrom kinto.core.storage.postgresql.client import create_from_config\nfrom kinto.core.utils import json\n\n\nclass Cache(CacheBase):\n \"\"\"Cache backend using PostgreSQL.\n\n Enable in configuration::\n\n kinto.cache_backend = kinto.core.cache.postgresql\n\n Database location URI can be customized::\n\n kinto.cache_url = postgres://user:[email protected]:5432/dbname\n\n Alternatively, username and password could also rely on system user ident\n or even specified in :file:`~/.pgpass` (*see PostgreSQL documentation*).\n\n .. note::\n\n Some tables and indices are created when ``kinto migrate`` is run.\n This requires some privileges on the database, or some error will\n be raised.\n\n **Alternatively**, the schema can be initialized outside the\n python application, using the SQL file located in\n :file:`kinto/core/cache/postgresql/schema.sql`. This allows to\n distinguish schema manipulation privileges from schema usage.\n\n\n A connection pool is enabled by default::\n\n kinto.cache_pool_size = 10\n kinto.cache_maxoverflow = 10\n kinto.cache_max_backlog = -1\n kinto.cache_pool_recycle = -1\n kinto.cache_pool_timeout = 30\n kinto.cache_poolclass =\n kinto.core.storage.postgresql.pool.QueuePoolWithMaxBacklog\n\n The ``max_backlog`` limits the number of threads that can be in the queue\n waiting for a connection. 
Once this limit has been reached, any further\n attempts to acquire a connection will be rejected immediately, instead of\n locking up all threads by keeping them waiting in the queue.\n\n See `dedicated section in SQLAlchemy documentation\n <http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html>`_\n for default values and behaviour.\n\n .. note::\n\n Using a `dedicated connection pool <http://pgpool.net>`_ is still\n recommended to allow load balancing, replication or limit the number\n of connections used in a multi-process deployment.\n\n :noindex:\n \"\"\" # NOQA\n def __init__(self, client, *args, **kwargs):\n super(Cache, self).__init__(*args, **kwargs)\n self.client = client\n\n def initialize_schema(self, dry_run=False):\n # Check if cache table exists.\n query = \"\"\"\n SELECT 1\n FROM information_schema.tables\n WHERE table_name = 'cache';\n \"\"\"\n with self.client.connect(readonly=True) as conn:\n result = conn.execute(query)\n if result.rowcount > 0:\n logger.info(\"PostgreSQL cache schema is up-to-date.\")\n return\n\n # Create schema\n here = os.path.abspath(os.path.dirname(__file__))\n sql_file = os.path.join(here, 'schema.sql')\n\n if dry_run:\n logger.info(\"Create cache schema from %s\" % sql_file)\n return\n\n # Since called outside request, force commit.\n schema = open(sql_file).read()\n with self.client.connect(force_commit=True) as conn:\n conn.execute(schema)\n logger.info('Created PostgreSQL cache tables')\n\n def flush(self):\n query = \"\"\"\n DELETE FROM cache;\n \"\"\"\n # Since called outside request (e.g. tests), force commit.\n with self.client.connect(force_commit=True) as conn:\n conn.execute(query)\n logger.debug('Flushed PostgreSQL cache tables')\n\n def ttl(self, key):\n query = \"\"\"\n SELECT EXTRACT(SECOND FROM (ttl - now())) AS ttl\n FROM cache\n WHERE key = :key\n AND ttl IS NOT NULL;\n \"\"\"\n with self.client.connect(readonly=True) as conn:\n result = conn.execute(query, dict(key=self.prefix + key))\n if result.rowcount > 0:\n return result.fetchone()['ttl']\n return -1\n\n def expire(self, key, ttl):\n query = \"\"\"\n UPDATE cache SET ttl = sec2ttl(:ttl) WHERE key = :key;\n \"\"\"\n with self.client.connect() as conn:\n conn.execute(query, dict(ttl=ttl, key=self.prefix + key))\n\n def set(self, key, value, ttl=None):\n if ttl is None:\n logger.warning(\"No TTL for cache key %r\" % key)\n query = \"\"\"\n WITH upsert AS (\n UPDATE cache SET value = :value, ttl = sec2ttl(:ttl)\n WHERE key=:key\n RETURNING *)\n INSERT INTO cache (key, value, ttl)\n SELECT :key, :value, sec2ttl(:ttl)\n WHERE NOT EXISTS (SELECT * FROM upsert)\n \"\"\"\n value = json.dumps(value)\n with self.client.connect() as conn:\n conn.execute(query, dict(key=self.prefix + key,\n value=value, ttl=ttl))\n\n def get(self, key):\n purge = \"DELETE FROM cache WHERE ttl IS NOT NULL AND now() > ttl;\"\n query = \"SELECT value FROM cache WHERE key = :key;\"\n with self.client.connect() as conn:\n conn.execute(purge)\n result = conn.execute(query, dict(key=self.prefix + key))\n if result.rowcount > 0:\n value = result.fetchone()['value']\n return json.loads(value)\n\n def delete(self, key):\n query = \"DELETE FROM cache WHERE key = :key\"\n with self.client.connect() as conn:\n conn.execute(query, dict(key=self.prefix + key))\n\n\ndef load_from_config(config):\n settings = config.get_settings()\n client = create_from_config(config, prefix='cache_', with_transaction=False)\n return Cache(client=client, cache_prefix=settings['cache_prefix'])\n", "path": 
"kinto/core/cache/postgresql/__init__.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom functools import wraps\n\nimport os\nimport time\n\nfrom kinto.core import logger\nfrom kinto.core.cache import CacheBase\nfrom kinto.core.storage.postgresql.client import create_from_config\nfrom kinto.core.storage.exceptions import BackendError\nfrom kinto.core.utils import json\n\n\nDELAY_BETWEEN_RETRIES_IN_SECONDS = 0.005\nMAX_RETRIES = 10\n\n\ndef retry_on_failure(func):\n try:\n import psycopg2\n except ImportError: # pragma: no cover\n pass # Do not break (but will fail nicely later anyway)\n\n @wraps(func)\n def wraps_func(self, *args, **kwargs):\n tries = kwargs.pop('tries', 0)\n try:\n return func(self, *args, **kwargs)\n except psycopg2.IntegrityError as e:\n if tries < MAX_RETRIES:\n # Skip delay the 2 first times.\n delay = max(0, tries - 1) * DELAY_BETWEEN_RETRIES_IN_SECONDS\n time.sleep(delay)\n return wraps_func(self, tries=(tries + 1), *args, **kwargs)\n raise BackendError(original=e)\n return wraps_func\n\n\nclass Cache(CacheBase):\n \"\"\"Cache backend using PostgreSQL.\n\n Enable in configuration::\n\n kinto.cache_backend = kinto.core.cache.postgresql\n\n Database location URI can be customized::\n\n kinto.cache_url = postgres://user:[email protected]:5432/dbname\n\n Alternatively, username and password could also rely on system user ident\n or even specified in :file:`~/.pgpass` (*see PostgreSQL documentation*).\n\n .. note::\n\n Some tables and indices are created when ``kinto migrate`` is run.\n This requires some privileges on the database, or some error will\n be raised.\n\n **Alternatively**, the schema can be initialized outside the\n python application, using the SQL file located in\n :file:`kinto/core/cache/postgresql/schema.sql`. This allows to\n distinguish schema manipulation privileges from schema usage.\n\n\n A connection pool is enabled by default::\n\n kinto.cache_pool_size = 10\n kinto.cache_maxoverflow = 10\n kinto.cache_max_backlog = -1\n kinto.cache_pool_recycle = -1\n kinto.cache_pool_timeout = 30\n kinto.cache_poolclass =\n kinto.core.storage.postgresql.pool.QueuePoolWithMaxBacklog\n\n The ``max_backlog`` limits the number of threads that can be in the queue\n waiting for a connection. Once this limit has been reached, any further\n attempts to acquire a connection will be rejected immediately, instead of\n locking up all threads by keeping them waiting in the queue.\n\n See `dedicated section in SQLAlchemy documentation\n <http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html>`_\n for default values and behaviour.\n\n .. 
note::\n\n Using a `dedicated connection pool <http://pgpool.net>`_ is still\n recommended to allow load balancing, replication or limit the number\n of connections used in a multi-process deployment.\n\n :noindex:\n \"\"\" # NOQA\n def __init__(self, client, *args, **kwargs):\n super(Cache, self).__init__(*args, **kwargs)\n self.client = client\n\n def initialize_schema(self, dry_run=False):\n # Check if cache table exists.\n query = \"\"\"\n SELECT 1\n FROM information_schema.tables\n WHERE table_name = 'cache';\n \"\"\"\n with self.client.connect(readonly=True) as conn:\n result = conn.execute(query)\n if result.rowcount > 0:\n logger.info(\"PostgreSQL cache schema is up-to-date.\")\n return\n\n # Create schema\n here = os.path.abspath(os.path.dirname(__file__))\n sql_file = os.path.join(here, 'schema.sql')\n\n if dry_run:\n logger.info(\"Create cache schema from %s\" % sql_file)\n return\n\n # Since called outside request, force commit.\n schema = open(sql_file).read()\n with self.client.connect(force_commit=True) as conn:\n conn.execute(schema)\n logger.info('Created PostgreSQL cache tables')\n\n def flush(self):\n query = \"\"\"\n DELETE FROM cache;\n \"\"\"\n # Since called outside request (e.g. tests), force commit.\n with self.client.connect(force_commit=True) as conn:\n conn.execute(query)\n logger.debug('Flushed PostgreSQL cache tables')\n\n def ttl(self, key):\n query = \"\"\"\n SELECT EXTRACT(SECOND FROM (ttl - now())) AS ttl\n FROM cache\n WHERE key = :key\n AND ttl IS NOT NULL;\n \"\"\"\n with self.client.connect(readonly=True) as conn:\n result = conn.execute(query, dict(key=self.prefix + key))\n if result.rowcount > 0:\n return result.fetchone()['ttl']\n return -1\n\n def expire(self, key, ttl):\n query = \"\"\"\n UPDATE cache SET ttl = sec2ttl(:ttl) WHERE key = :key;\n \"\"\"\n with self.client.connect() as conn:\n conn.execute(query, dict(ttl=ttl, key=self.prefix + key))\n\n @retry_on_failure\n def set(self, key, value, ttl=None):\n if ttl is None:\n logger.warning(\"No TTL for cache key %r\" % key)\n query = \"\"\"\n WITH upsert AS (\n UPDATE cache SET value = :value, ttl = sec2ttl(:ttl)\n WHERE key=:key\n RETURNING *)\n INSERT INTO cache (key, value, ttl)\n SELECT :key, :value, sec2ttl(:ttl)\n WHERE NOT EXISTS (SELECT * FROM upsert)\n \"\"\"\n value = json.dumps(value)\n with self.client.connect() as conn:\n conn.execute(query, dict(key=self.prefix + key,\n value=value, ttl=ttl))\n\n def get(self, key):\n purge = \"DELETE FROM cache WHERE ttl IS NOT NULL AND now() > ttl;\"\n query = \"SELECT value FROM cache WHERE key = :key;\"\n with self.client.connect() as conn:\n conn.execute(purge)\n result = conn.execute(query, dict(key=self.prefix + key))\n if result.rowcount > 0:\n value = result.fetchone()['value']\n return json.loads(value)\n\n def delete(self, key):\n query = \"DELETE FROM cache WHERE key = :key\"\n with self.client.connect() as conn:\n conn.execute(query, dict(key=self.prefix + key))\n\n\ndef load_from_config(config):\n settings = config.get_settings()\n client = create_from_config(config, prefix='cache_', with_transaction=False)\n return Cache(client=client, cache_prefix=settings['cache_prefix'])\n", "path": "kinto/core/cache/postgresql/__init__.py"}]}
| 1,974 | 448 |
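A generic, self-contained sketch of the retry pattern the kinto patch above introduces (the patched code hard-codes `psycopg2.IntegrityError` and re-raises `BackendError`; here the exception type is a parameter so the snippet runs without PostgreSQL):

```python
import time
from functools import wraps

MAX_RETRIES = 10
DELAY_BETWEEN_RETRIES_IN_SECONDS = 0.005


def retry_on_failure(exc_type):
    """Retry the wrapped call on exc_type, skipping the delay for the first two attempts."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for tries in range(MAX_RETRIES + 1):
                try:
                    return func(*args, **kwargs)
                except exc_type:
                    if tries >= MAX_RETRIES:
                        raise
                    time.sleep(max(0, tries - 1) * DELAY_BETWEEN_RETRIES_IN_SECONDS)
        return wrapper
    return decorator
```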
gh_patches_debug_1075 | rasdani/github-patches | git_diff | e2nIEE__pandapower-563 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
from_mpc failed to load the case generated by to_mpc
After checking the source code, I found the to_mpc function saves the fields in a loose format. According to the from_mpc function, all the fields should be under a variable called "mpc" (default), however the to_mpc function does not follow this, which leads to a situation that the from_mpc function cannot load the case generated by the to_mpc function.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pandapower/converter/matpower/to_mpc.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright (c) 2016-2019 by University of Kassel and Fraunhofer Institute for Energy Economics
4 # and Energy System Technology (IEE), Kassel. All rights reserved.
5
6
7 import copy
8
9 import numpy as np
10 from scipy.io import savemat
11
12 from pandapower.converter.pypower import to_ppc
13
14 try:
15 import pplog as logging
16 except ImportError:
17 import logging
18
19 logger = logging.getLogger(__name__)
20
21
22 def to_mpc(net, filename=None, **kwargs):
23 """
24 This function converts a pandapower net to a matpower case files (.mat) version 2.
25 Note: python is 0-based while Matlab is 1-based.
26
27 INPUT:
28 **net** - The pandapower net.
29
30 OPTIONAL:
31 **filename** (str, None) - File path + name of the mat file which will be created. If None
32 the mpc will only be returned
33
34 ****kwargs** - please look at to_ppc() documentation
35
36 EXAMPLE:
37 import pandapower.converter as pc
38 import pandapower.networks as pn
39 net = pn.case9()
40 pc.to_mpc(net, "case9.mat")
41
42 """
43 ppc = to_ppc(net, **kwargs)
44
45 mpc = _ppc2mpc(ppc)
46 if filename is not None:
47 # savemat
48 savemat(filename, mpc)
49
50 return mpc
51
52
53 def _ppc2mpc(ppc):
54 """
55 Convert network in Pypower/Matpower format
56 Convert 0-based python to 1-based Matlab
57
58 **INPUT**:
59 * net - The pandapower format network
60 * filename - File path + name of the mat file which is created
61 """
62
63 # convert to matpower
64 # Matlab is one-based, so all entries (buses, lines, gens) have to start with 1 instead of 0
65 mpc = copy.deepcopy(ppc)
66 if len(np.where(mpc["bus"][:, 0] == 0)[0]):
67 mpc["bus"][:, 0] = mpc["bus"][:, 0] + 1
68 mpc["gen"][:, 0] = mpc["gen"][:, 0] + 1
69 mpc["branch"][:, 0:2] = mpc["branch"][:, 0:2] + 1
70 # adjust for the matpower converter -> taps should be 0 when there is no transformer, but are 1
71 mpc["branch"][np.where(mpc["branch"][:, 8] == 1), 8] = 0
72 # version is a string
73 mpc["version"] = str(mpc["version"])
74 # baseMVA has to be a float instead of int
75 mpc["baseMVA"] = mpc["baseMVA"] * 1.0
76 return mpc
77
78
79 if "__main__" == __name__:
80 pass
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pandapower/converter/matpower/to_mpc.py b/pandapower/converter/matpower/to_mpc.py
--- a/pandapower/converter/matpower/to_mpc.py
+++ b/pandapower/converter/matpower/to_mpc.py
@@ -42,7 +42,8 @@
"""
ppc = to_ppc(net, **kwargs)
- mpc = _ppc2mpc(ppc)
+ mpc = dict()
+ mpc["mpc"] = _ppc2mpc(ppc)
if filename is not None:
# savemat
savemat(filename, mpc)
|
{"golden_diff": "diff --git a/pandapower/converter/matpower/to_mpc.py b/pandapower/converter/matpower/to_mpc.py\n--- a/pandapower/converter/matpower/to_mpc.py\n+++ b/pandapower/converter/matpower/to_mpc.py\n@@ -42,7 +42,8 @@\n \"\"\"\n ppc = to_ppc(net, **kwargs)\n \n- mpc = _ppc2mpc(ppc)\n+ mpc = dict()\n+ mpc[\"mpc\"] = _ppc2mpc(ppc)\n if filename is not None:\n # savemat\n savemat(filename, mpc)\n", "issue": "from_mpc failed to load the case generated by to_mpc\nAfter checking the source code, I found the to_mpc function saves the fields in a loose format. According to the from_mpc function, all the fields should be under a variable called \"mpc\" (default), however the to_mpc function does not follow this, which leads to a situation that the from_mpc function cannot load the case generated by the to_mpc function.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2016-2019 by University of Kassel and Fraunhofer Institute for Energy Economics\n# and Energy System Technology (IEE), Kassel. All rights reserved.\n\n\nimport copy\n\nimport numpy as np\nfrom scipy.io import savemat\n\nfrom pandapower.converter.pypower import to_ppc\n\ntry:\n import pplog as logging\nexcept ImportError:\n import logging\n\nlogger = logging.getLogger(__name__)\n\n\ndef to_mpc(net, filename=None, **kwargs):\n \"\"\"\n This function converts a pandapower net to a matpower case files (.mat) version 2.\n Note: python is 0-based while Matlab is 1-based.\n\n INPUT:\n **net** - The pandapower net.\n\n OPTIONAL:\n **filename** (str, None) - File path + name of the mat file which will be created. If None\n the mpc will only be returned\n\n ****kwargs** - please look at to_ppc() documentation\n\n EXAMPLE:\n import pandapower.converter as pc\n import pandapower.networks as pn\n net = pn.case9()\n pc.to_mpc(net, \"case9.mat\")\n\n \"\"\"\n ppc = to_ppc(net, **kwargs)\n\n mpc = _ppc2mpc(ppc)\n if filename is not None:\n # savemat\n savemat(filename, mpc)\n\n return mpc\n\n\ndef _ppc2mpc(ppc):\n \"\"\"\n Convert network in Pypower/Matpower format\n Convert 0-based python to 1-based Matlab\n\n **INPUT**:\n * net - The pandapower format network\n * filename - File path + name of the mat file which is created\n \"\"\"\n\n # convert to matpower\n # Matlab is one-based, so all entries (buses, lines, gens) have to start with 1 instead of 0\n mpc = copy.deepcopy(ppc)\n if len(np.where(mpc[\"bus\"][:, 0] == 0)[0]):\n mpc[\"bus\"][:, 0] = mpc[\"bus\"][:, 0] + 1\n mpc[\"gen\"][:, 0] = mpc[\"gen\"][:, 0] + 1\n mpc[\"branch\"][:, 0:2] = mpc[\"branch\"][:, 0:2] + 1\n # adjust for the matpower converter -> taps should be 0 when there is no transformer, but are 1\n mpc[\"branch\"][np.where(mpc[\"branch\"][:, 8] == 1), 8] = 0\n # version is a string\n mpc[\"version\"] = str(mpc[\"version\"])\n # baseMVA has to be a float instead of int\n mpc[\"baseMVA\"] = mpc[\"baseMVA\"] * 1.0\n return mpc\n\n\nif \"__main__\" == __name__:\n pass\n", "path": "pandapower/converter/matpower/to_mpc.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2016-2019 by University of Kassel and Fraunhofer Institute for Energy Economics\n# and Energy System Technology (IEE), Kassel. 
All rights reserved.\n\n\nimport copy\n\nimport numpy as np\nfrom scipy.io import savemat\n\nfrom pandapower.converter.pypower import to_ppc\n\ntry:\n import pplog as logging\nexcept ImportError:\n import logging\n\nlogger = logging.getLogger(__name__)\n\n\ndef to_mpc(net, filename=None, **kwargs):\n \"\"\"\n This function converts a pandapower net to a matpower case files (.mat) version 2.\n Note: python is 0-based while Matlab is 1-based.\n\n INPUT:\n **net** - The pandapower net.\n\n OPTIONAL:\n **filename** (str, None) - File path + name of the mat file which will be created. If None\n the mpc will only be returned\n\n ****kwargs** - please look at to_ppc() documentation\n\n EXAMPLE:\n import pandapower.converter as pc\n import pandapower.networks as pn\n net = pn.case9()\n pc.to_mpc(net, \"case9.mat\")\n\n \"\"\"\n ppc = to_ppc(net, **kwargs)\n\n mpc = dict()\n mpc[\"mpc\"] = _ppc2mpc(ppc)\n if filename is not None:\n # savemat\n savemat(filename, mpc)\n\n return mpc\n\n\ndef _ppc2mpc(ppc):\n \"\"\"\n Convert network in Pypower/Matpower format\n Convert 0-based python to 1-based Matlab\n\n **INPUT**:\n * net - The pandapower format network\n * filename - File path + name of the mat file which is created\n \"\"\"\n\n # convert to matpower\n # Matlab is one-based, so all entries (buses, lines, gens) have to start with 1 instead of 0\n mpc = copy.deepcopy(ppc)\n if len(np.where(mpc[\"bus\"][:, 0] == 0)[0]):\n mpc[\"bus\"][:, 0] = mpc[\"bus\"][:, 0] + 1\n mpc[\"gen\"][:, 0] = mpc[\"gen\"][:, 0] + 1\n mpc[\"branch\"][:, 0:2] = mpc[\"branch\"][:, 0:2] + 1\n # adjust for the matpower converter -> taps should be 0 when there is no transformer, but are 1\n mpc[\"branch\"][np.where(mpc[\"branch\"][:, 8] == 1), 8] = 0\n # version is a string\n mpc[\"version\"] = str(mpc[\"version\"])\n # baseMVA has to be a float instead of int\n mpc[\"baseMVA\"] = mpc[\"baseMVA\"] * 1.0\n return mpc\n\n\nif \"__main__\" == __name__:\n pass\n", "path": "pandapower/converter/matpower/to_mpc.py"}]}
| 1,187 | 146 |
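The pandapower fix above nests the converted case under a single top-level "mpc" variable before calling `savemat`, which is the key `from_mpc` reads by default. A small round-trip sketch with SciPy (file name and field values are made up for illustration):

```python
import numpy as np
from scipy.io import loadmat, savemat

case = {"version": "2", "baseMVA": 100.0, "bus": np.zeros((1, 13))}

# to_mpc now writes {"mpc": case}, so the struct is stored under the "mpc" name.
savemat("case_demo.mat", {"mpc": case})

loaded = loadmat("case_demo.mat", squeeze_me=True, struct_as_record=False)
print(loaded["mpc"].baseMVA)  # 100.0
```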
gh_patches_debug_22402 | rasdani/github-patches | git_diff | PrefectHQ__prefect-6607 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Logger from get_logger does not log to backend - prefect 2.0.4
### First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the Prefect documentation for this issue.
- [X] I checked that this issue is related to Prefect and not one of its dependencies.
### Bug summary
The logger I get from get_logger does log in the local shell session, but the logs do not show up in the backend.
Logs made by get_run_logger are captured as expected!
### Reproduction
```python
from prefect import flow, task
from prefect.logging import get_logger, get_run_logger
@flow
def world():
get_logger().info("get_logger World")
get_run_logger().info("get_run_logger World")
@task
def hello():
get_logger().info(" get_logger Hello")
get_run_logger().info("get_run_logger Hello")
@flow
def test_flow():
get_logger().info("get_logger test")
get_run_logger().info("get_run_logger test")
hello()
world()
test_flow()
```
### Error
Local logs
```
19:22:45.427 | INFO | prefect.engine - Created flow run 'unyielding-wolverine' for flow 'test-flow'
19:22:46.433 | INFO | prefect - get_logger test
19:22:46.433 | INFO | Flow run 'unyielding-wolverine' - get_run_logger test
19:22:46.604 | INFO | Flow run 'unyielding-wolverine' - Created task run 'hello-b3a437c7-0' for task 'hello'
19:22:46.605 | INFO | Flow run 'unyielding-wolverine' - Executing 'hello-b3a437c7-0' immediately...
19:22:46.902 | INFO | prefect - get_logger Hello
19:22:46.903 | INFO | Task run 'hello-b3a437c7-0' - get_run_logger Hello
19:22:47.170 | INFO | Task run 'hello-b3a437c7-0' - Finished in state Completed()
19:22:47.732 | INFO | Flow run 'unyielding-wolverine' - Created subflow run 'watchful-puffin' for flow 'world'
19:22:48.065 | INFO | prefect - get_logger World
19:22:48.065 | INFO | Flow run 'watchful-puffin' - get_run_logger World
19:22:48.273 | INFO | Flow run 'watchful-puffin' - Finished in state Completed()
19:22:48.456 | INFO | Flow run 'unyielding-wolverine' - Finished in state Completed('All states completed.')
```
Remote logs
<img width="943" alt="image" src="https://user-images.githubusercontent.com/24698503/187261871-9d89681e-03fe-4557-b942-b24fafb71be5.png">
Subflow logs
<img width="961" alt="image" src="https://user-images.githubusercontent.com/24698503/187261992-8d029968-434e-43f6-9d5b-cd405e250a9e.png">
### Versions
```
Version: 2.0.4
API version: 0.8.0
Python version: 3.8.10
Git commit: 39db6fb1
Built: Wed, Aug 10, 2022 1:19 PM
OS/Arch: linux/x86_64
Profile: ci
Server type: hosted
```
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/logging/loggers.py`
Content:
```
1 import logging
2 from functools import lru_cache
3 from typing import TYPE_CHECKING
4
5 import prefect
6
7 if TYPE_CHECKING:
8 from prefect.context import RunContext
9 from prefect.flows import Flow
10 from prefect.orion.schemas.core import FlowRun, TaskRun
11 from prefect.tasks import Task
12
13
14 class PrefectLogAdapter(logging.LoggerAdapter):
15 """
16 Adapter that ensures extra kwargs are passed through correctly; without this
17 the `extra` fields set on the adapter would overshadow any provided on a
18 log-by-log basis.
19
20 See https://bugs.python.org/issue32732 — the Python team has declared that this is
21 not a bug in the LoggingAdapter and subclassing is the intended workaround.
22 """
23
24 def process(self, msg, kwargs):
25 kwargs["extra"] = {**self.extra, **(kwargs.get("extra") or {})}
26 return (msg, kwargs)
27
28
29 @lru_cache()
30 def get_logger(name: str = None) -> logging.Logger:
31 """
32 Get a `prefect` logger. For use within Prefect.
33 """
34
35 parent_logger = logging.getLogger("prefect")
36
37 if name:
38 # Append the name if given but allow explicit full names e.g. "prefect.test"
39 # should not become "prefect.prefect.test"
40 if not name.startswith(parent_logger.name + "."):
41 logger = parent_logger.getChild(name)
42 else:
43 logger = logging.getLogger(name)
44 else:
45 logger = parent_logger
46
47 return logger
48
49
50 def get_run_logger(context: "RunContext" = None, **kwargs: str) -> logging.Logger:
51 """
52 Get a Prefect logger for the current task run or flow run.
53
54 The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.
55 Contextual data about the run will be attached to the log records.
56
57 Arguments:
58 context: A specific context may be provided as an override. By default, the
59 context is inferred from global state and this should not be needed.
60 **kwargs: Additional keyword arguments will be attached to the log records in
61 addition to the run metadata
62
63 Raises:
64 RuntimeError: If no context can be found
65 """
66 # Check for existing contexts
67 task_run_context = prefect.context.TaskRunContext.get()
68 flow_run_context = prefect.context.FlowRunContext.get()
69
70 # Apply the context override
71 if context:
72 if isinstance(context, prefect.context.FlowRunContext):
73 flow_run_context = context
74 elif isinstance(context, prefect.context.TaskRunContext):
75 task_run_context = context
76 else:
77 raise TypeError(
78 f"Received unexpected type {type(context).__name__!r} for context. "
79 "Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'."
80 )
81
82 # Determine if this is a task or flow run logger
83 if task_run_context:
84 logger = task_run_logger(
85 task_run=task_run_context.task_run,
86 task=task_run_context.task,
87 flow_run=flow_run_context.flow_run if flow_run_context else None,
88 flow=flow_run_context.flow if flow_run_context else None,
89 **kwargs,
90 )
91 elif flow_run_context:
92 logger = flow_run_logger(
93 flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs
94 )
95 else:
96 raise RuntimeError("There is no active flow or task run context.")
97
98 return logger
99
100
101 def flow_run_logger(flow_run: "FlowRun", flow: "Flow" = None, **kwargs: str):
102 """
103 Create a flow run logger with the run's metadata attached.
104
105 Additional keyword arguments can be provided to attach custom data to the log
106 records.
107
108 If the context is available, see `run_logger` instead.
109 """
110 return PrefectLogAdapter(
111 get_logger("prefect.flow_runs"),
112 extra={
113 **{
114 "flow_run_name": flow_run.name,
115 "flow_run_id": str(flow_run.id),
116 "flow_name": flow.name if flow else "<unknown>",
117 },
118 **kwargs,
119 },
120 )
121
122
123 def task_run_logger(
124 task_run: "TaskRun",
125 task: "Task" = None,
126 flow_run: "FlowRun" = None,
127 flow: "Flow" = None,
128 **kwargs: str,
129 ):
130 """
131 Create a task run logger with the run's metadata attached.
132
133 Additional keyword arguments can be provided to attach custom data to the log
134 records.
135
136 If the context is available, see `run_logger` instead.
137 """
138 return PrefectLogAdapter(
139 get_logger("prefect.task_runs"),
140 extra={
141 **{
142 "task_run_id": str(task_run.id),
143 "flow_run_id": str(task_run.flow_run_id),
144 "task_run_name": task_run.name,
145 "task_name": task.name if task else "<unknown>",
146 "flow_run_name": flow_run.name if flow_run else "<unknown>",
147 "flow_name": flow.name if flow else "<unknown>",
148 },
149 **kwargs,
150 },
151 )
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/prefect/logging/loggers.py b/src/prefect/logging/loggers.py
--- a/src/prefect/logging/loggers.py
+++ b/src/prefect/logging/loggers.py
@@ -29,7 +29,11 @@
@lru_cache()
def get_logger(name: str = None) -> logging.Logger:
"""
- Get a `prefect` logger. For use within Prefect.
+ Get a `prefect` logger. These loggers are intended for internal use within the
+ `prefect` package.
+
+ See `get_run_logger` for retrieving loggers for use within task or flow runs.
+ By default, only run-related loggers are connected to the `OrionHandler`.
"""
parent_logger = logging.getLogger("prefect")
@@ -54,6 +58,9 @@
The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.
Contextual data about the run will be attached to the log records.
+ These loggers are connected to the `OrionHandler` by default to send log records to
+ the API.
+
Arguments:
context: A specific context may be provided as an override. By default, the
context is inferred from global state and this should not be needed.
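
To make the documented behaviour concrete, here is a minimal sketch (not part of the patch; the flow name, logger name, and messages are made up) of how the two logger helpers differ for a user:

```python
from prefect import flow
from prefect.logging import get_logger, get_run_logger

module_logger = get_logger(__name__)  # child of the "prefect" logger; not sent to the API by default

@flow
def my_flow():
    # run loggers ("prefect.flow_runs" / "prefect.task_runs") are the ones wired to the
    # OrionHandler, so these records reach the backend UI
    get_run_logger().info("visible in the backend")
    module_logger.info("printed locally only")

my_flow()
```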
|
{"golden_diff": "diff --git a/src/prefect/logging/loggers.py b/src/prefect/logging/loggers.py\n--- a/src/prefect/logging/loggers.py\n+++ b/src/prefect/logging/loggers.py\n@@ -29,7 +29,11 @@\n @lru_cache()\n def get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n- Get a `prefect` logger. For use within Prefect.\n+ Get a `prefect` logger. These loggers are intended for internal use within the\n+ `prefect` package.\n+\n+ See `get_run_logger` for retrieving loggers for use within task or flow runs.\n+ By default, only run-related loggers are connected to the `OrionHandler`.\n \"\"\"\n \n parent_logger = logging.getLogger(\"prefect\")\n@@ -54,6 +58,9 @@\n The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n Contextual data about the run will be attached to the log records.\n \n+ These loggers are connected to the `OrionHandler` by default to send log records to\n+ the API.\n+\n Arguments:\n context: A specific context may be provided as an override. By default, the\n context is inferred from global state and this should not be needed.\n", "issue": "Logger from get_logger does not log to backend - prefect 2.0.4\n### First check\n\n- [X] I added a descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the Prefect documentation for this issue.\n- [X] I checked that this issue is related to Prefect and not one of its dependencies.\n\n### Bug summary\n\nThe logger i get from get_logger does log in the local shell session but the logs do not show up in backend.\r\n\r\nLogs with made by get_run_logger are captured as expected!\n\n### Reproduction\n\n```python\nfrom prefect import flow, task\r\nfrom prefect.logging import get_logger, get_run_logger\r\n\r\n\r\n@flow\r\ndef world():\r\n get_logger().info(\"get_logger World\")\r\n get_run_logger().info(\"get_run_logger World\")\r\n\r\n\r\n@task\r\ndef hello():\r\n get_logger().info(\" get_logger Hello\")\r\n get_run_logger().info(\"get_run_logger Hello\")\r\n\r\n\r\n@flow\r\ndef test_flow():\r\n get_logger().info(\"get_logger test\")\r\n get_run_logger().info(\"get_run_logger test\")\r\n hello()\r\n world()\r\n\r\n\r\ntest_flow()\n```\n\n\n### Error\n\nLocal logs\r\n```\r\n19:22:45.427 | INFO | prefect.engine - Created flow run 'unyielding-wolverine' for flow 'test-flow'\r\n19:22:46.433 | INFO | prefect - get_logger test\r\n19:22:46.433 | INFO | Flow run 'unyielding-wolverine' - get_run_logger test\r\n19:22:46.604 | INFO | Flow run 'unyielding-wolverine' - Created task run 'hello-b3a437c7-0' for task 'hello'\r\n19:22:46.605 | INFO | Flow run 'unyielding-wolverine' - Executing 'hello-b3a437c7-0' immediately...\r\n19:22:46.902 | INFO | prefect - get_logger Hello\r\n19:22:46.903 | INFO | Task run 'hello-b3a437c7-0' - get_run_logger Hello\r\n19:22:47.170 | INFO | Task run 'hello-b3a437c7-0' - Finished in state Completed()\r\n19:22:47.732 | INFO | Flow run 'unyielding-wolverine' - Created subflow run 'watchful-puffin' for flow 'world'\r\n19:22:48.065 | INFO | prefect - get_logger World\r\n19:22:48.065 | INFO | Flow run 'watchful-puffin' - get_run_logger World\r\n19:22:48.273 | INFO | Flow run 'watchful-puffin' - Finished in state Completed()\r\n19:22:48.456 | INFO | Flow run 'unyielding-wolverine' - Finished in state Completed('All states completed.')\r\n```\r\nRemote logs\r\n<img width=\"943\" alt=\"image\" src=\"https://user-images.githubusercontent.com/24698503/187261871-9d89681e-03fe-4557-b942-b24fafb71be5.png\">\r\n\r\nSubflow logs\r\n<img width=\"961\" 
alt=\"image\" src=\"https://user-images.githubusercontent.com/24698503/187261992-8d029968-434e-43f6-9d5b-cd405e250a9e.png\">\r\n\n\n### Versions\n\n```\r\n\r\nVersion: 2.0.4\r\nAPI version: 0.8.0\r\nPython version: 3.8.10\r\nGit commit: 39db6fb1\r\nBuilt: Wed, Aug 10, 2022 1:19 PM\r\nOS/Arch: linux/x86_64\r\nProfile: ci\r\nServer type: hosted\r\n\r\n```\r\n\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "import logging\nfrom functools import lru_cache\nfrom typing import TYPE_CHECKING\n\nimport prefect\n\nif TYPE_CHECKING:\n from prefect.context import RunContext\n from prefect.flows import Flow\n from prefect.orion.schemas.core import FlowRun, TaskRun\n from prefect.tasks import Task\n\n\nclass PrefectLogAdapter(logging.LoggerAdapter):\n \"\"\"\n Adapter that ensures extra kwargs are passed through correctly; without this\n the `extra` fields set on the adapter would overshadow any provided on a\n log-by-log basis.\n\n See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is\n not a bug in the LoggingAdapter and subclassing is the intended workaround.\n \"\"\"\n\n def process(self, msg, kwargs):\n kwargs[\"extra\"] = {**self.extra, **(kwargs.get(\"extra\") or {})}\n return (msg, kwargs)\n\n\n@lru_cache()\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Get a `prefect` logger. For use within Prefect.\n \"\"\"\n\n parent_logger = logging.getLogger(\"prefect\")\n\n if name:\n # Append the name if given but allow explicit full names e.g. \"prefect.test\"\n # should not become \"prefect.prefect.test\"\n if not name.startswith(parent_logger.name + \".\"):\n logger = parent_logger.getChild(name)\n else:\n logger = logging.getLogger(name)\n else:\n logger = parent_logger\n\n return logger\n\n\ndef get_run_logger(context: \"RunContext\" = None, **kwargs: str) -> logging.Logger:\n \"\"\"\n Get a Prefect logger for the current task run or flow run.\n\n The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n Contextual data about the run will be attached to the log records.\n\n Arguments:\n context: A specific context may be provided as an override. By default, the\n context is inferred from global state and this should not be needed.\n **kwargs: Additional keyword arguments will be attached to the log records in\n addition to the run metadata\n\n Raises:\n RuntimeError: If no context can be found\n \"\"\"\n # Check for existing contexts\n task_run_context = prefect.context.TaskRunContext.get()\n flow_run_context = prefect.context.FlowRunContext.get()\n\n # Apply the context override\n if context:\n if isinstance(context, prefect.context.FlowRunContext):\n flow_run_context = context\n elif isinstance(context, prefect.context.TaskRunContext):\n task_run_context = context\n else:\n raise TypeError(\n f\"Received unexpected type {type(context).__name__!r} for context. 
\"\n \"Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'.\"\n )\n\n # Determine if this is a task or flow run logger\n if task_run_context:\n logger = task_run_logger(\n task_run=task_run_context.task_run,\n task=task_run_context.task,\n flow_run=flow_run_context.flow_run if flow_run_context else None,\n flow=flow_run_context.flow if flow_run_context else None,\n **kwargs,\n )\n elif flow_run_context:\n logger = flow_run_logger(\n flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs\n )\n else:\n raise RuntimeError(\"There is no active flow or task run context.\")\n\n return logger\n\n\ndef flow_run_logger(flow_run: \"FlowRun\", flow: \"Flow\" = None, **kwargs: str):\n \"\"\"\n Create a flow run logger with the run's metadata attached.\n\n Additional keyword arguments can be provided to attach custom data to the log\n records.\n\n If the context is available, see `run_logger` instead.\n \"\"\"\n return PrefectLogAdapter(\n get_logger(\"prefect.flow_runs\"),\n extra={\n **{\n \"flow_run_name\": flow_run.name,\n \"flow_run_id\": str(flow_run.id),\n \"flow_name\": flow.name if flow else \"<unknown>\",\n },\n **kwargs,\n },\n )\n\n\ndef task_run_logger(\n task_run: \"TaskRun\",\n task: \"Task\" = None,\n flow_run: \"FlowRun\" = None,\n flow: \"Flow\" = None,\n **kwargs: str,\n):\n \"\"\"\n Create a task run logger with the run's metadata attached.\n\n Additional keyword arguments can be provided to attach custom data to the log\n records.\n\n If the context is available, see `run_logger` instead.\n \"\"\"\n return PrefectLogAdapter(\n get_logger(\"prefect.task_runs\"),\n extra={\n **{\n \"task_run_id\": str(task_run.id),\n \"flow_run_id\": str(task_run.flow_run_id),\n \"task_run_name\": task_run.name,\n \"task_name\": task.name if task else \"<unknown>\",\n \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n \"flow_name\": flow.name if flow else \"<unknown>\",\n },\n **kwargs,\n },\n )\n", "path": "src/prefect/logging/loggers.py"}], "after_files": [{"content": "import logging\nfrom functools import lru_cache\nfrom typing import TYPE_CHECKING\n\nimport prefect\n\nif TYPE_CHECKING:\n from prefect.context import RunContext\n from prefect.flows import Flow\n from prefect.orion.schemas.core import FlowRun, TaskRun\n from prefect.tasks import Task\n\n\nclass PrefectLogAdapter(logging.LoggerAdapter):\n \"\"\"\n Adapter that ensures extra kwargs are passed through correctly; without this\n the `extra` fields set on the adapter would overshadow any provided on a\n log-by-log basis.\n\n See https://bugs.python.org/issue32732 \u2014 the Python team has declared that this is\n not a bug in the LoggingAdapter and subclassing is the intended workaround.\n \"\"\"\n\n def process(self, msg, kwargs):\n kwargs[\"extra\"] = {**self.extra, **(kwargs.get(\"extra\") or {})}\n return (msg, kwargs)\n\n\n@lru_cache()\ndef get_logger(name: str = None) -> logging.Logger:\n \"\"\"\n Get a `prefect` logger. These loggers are intended for internal use within the\n `prefect` package.\n\n See `get_run_logger` for retrieving loggers for use within task or flow runs.\n By default, only run-related loggers are connected to the `OrionHandler`.\n \"\"\"\n\n parent_logger = logging.getLogger(\"prefect\")\n\n if name:\n # Append the name if given but allow explicit full names e.g. 
\"prefect.test\"\n # should not become \"prefect.prefect.test\"\n if not name.startswith(parent_logger.name + \".\"):\n logger = parent_logger.getChild(name)\n else:\n logger = logging.getLogger(name)\n else:\n logger = parent_logger\n\n return logger\n\n\ndef get_run_logger(context: \"RunContext\" = None, **kwargs: str) -> logging.Logger:\n \"\"\"\n Get a Prefect logger for the current task run or flow run.\n\n The logger will be named either `prefect.task_runs` or `prefect.flow_runs`.\n Contextual data about the run will be attached to the log records.\n\n These loggers are connected to the `OrionHandler` by default to send log records to\n the API.\n\n Arguments:\n context: A specific context may be provided as an override. By default, the\n context is inferred from global state and this should not be needed.\n **kwargs: Additional keyword arguments will be attached to the log records in\n addition to the run metadata\n\n Raises:\n RuntimeError: If no context can be found\n \"\"\"\n # Check for existing contexts\n task_run_context = prefect.context.TaskRunContext.get()\n flow_run_context = prefect.context.FlowRunContext.get()\n\n # Apply the context override\n if context:\n if isinstance(context, prefect.context.FlowRunContext):\n flow_run_context = context\n elif isinstance(context, prefect.context.TaskRunContext):\n task_run_context = context\n else:\n raise TypeError(\n f\"Received unexpected type {type(context).__name__!r} for context. \"\n \"Expected one of 'None', 'FlowRunContext', or 'TaskRunContext'.\"\n )\n\n # Determine if this is a task or flow run logger\n if task_run_context:\n logger = task_run_logger(\n task_run=task_run_context.task_run,\n task=task_run_context.task,\n flow_run=flow_run_context.flow_run if flow_run_context else None,\n flow=flow_run_context.flow if flow_run_context else None,\n **kwargs,\n )\n elif flow_run_context:\n logger = flow_run_logger(\n flow_run=flow_run_context.flow_run, flow=flow_run_context.flow, **kwargs\n )\n else:\n raise RuntimeError(\"There is no active flow or task run context.\")\n\n return logger\n\n\ndef flow_run_logger(flow_run: \"FlowRun\", flow: \"Flow\" = None, **kwargs: str):\n \"\"\"\n Create a flow run logger with the run's metadata attached.\n\n Additional keyword arguments can be provided to attach custom data to the log\n records.\n\n If the context is available, see `run_logger` instead.\n \"\"\"\n return PrefectLogAdapter(\n get_logger(\"prefect.flow_runs\"),\n extra={\n **{\n \"flow_run_name\": flow_run.name,\n \"flow_run_id\": str(flow_run.id),\n \"flow_name\": flow.name if flow else \"<unknown>\",\n },\n **kwargs,\n },\n )\n\n\ndef task_run_logger(\n task_run: \"TaskRun\",\n task: \"Task\" = None,\n flow_run: \"FlowRun\" = None,\n flow: \"Flow\" = None,\n **kwargs: str,\n):\n \"\"\"\n Create a task run logger with the run's metadata attached.\n\n Additional keyword arguments can be provided to attach custom data to the log\n records.\n\n If the context is available, see `run_logger` instead.\n \"\"\"\n return PrefectLogAdapter(\n get_logger(\"prefect.task_runs\"),\n extra={\n **{\n \"task_run_id\": str(task_run.id),\n \"flow_run_id\": str(task_run.flow_run_id),\n \"task_run_name\": task_run.name,\n \"task_name\": task.name if task else \"<unknown>\",\n \"flow_run_name\": flow_run.name if flow_run else \"<unknown>\",\n \"flow_name\": flow.name if flow else \"<unknown>\",\n },\n **kwargs,\n },\n )\n", "path": "src/prefect/logging/loggers.py"}]}
| 2,674 | 286 |
gh_patches_debug_43407 | rasdani/github-patches | git_diff | deepset-ai__haystack-5083 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add FileClassifier media support
**Is your feature request related to a problem? Please describe.**
As a user I want to add WhisperTranscriber to my pipeline. I would like to use FileClassifier to classify my documents/media and direct them to the correct node.
**Describe the solution you'd like**
- Add support for the media files that Whisper accepts to the FileClassifier
**Describe alternatives you've considered**
Keep it as it is and don't integrate it into the current pipelines
**Additional context**
This feature request is supposed to be considered after the merge of the current Whisper PR #4335.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `haystack/nodes/file_classifier/file_type.py`
Content:
```
1 import mimetypes
2 from typing import Any, Dict, List, Union, Optional
3
4 import logging
5 from pathlib import Path
6
7 from haystack.nodes.base import BaseComponent
8 from haystack.lazy_imports import LazyImport
9
10
11 logger = logging.getLogger(__name__)
12
13 with LazyImport() as magic_import:
14 import magic
15
16
17 DEFAULT_TYPES = ["txt", "pdf", "md", "docx", "html"]
18
19
20 class FileTypeClassifier(BaseComponent):
21 """
22 Route files in an Indexing Pipeline to corresponding file converters.
23 """
24
25 outgoing_edges = len(DEFAULT_TYPES)
26
27 def __init__(self, supported_types: Optional[List[str]] = None):
28 """
29 Node that sends out files on a different output edge depending on their extension.
30
31 :param supported_types: The file types that this node can distinguish between.
32 If no value is provided, the value created by default comprises: `txt`, `pdf`, `md`, `docx`, and `html`.
33 Lists with duplicate elements are not allowed.
34 """
35 if supported_types is None:
36 supported_types = DEFAULT_TYPES
37 if len(set(supported_types)) != len(supported_types):
38 duplicates = supported_types
39 for item in set(supported_types):
40 duplicates.remove(item)
41 raise ValueError(f"supported_types can't contain duplicate values ({duplicates}).")
42
43 super().__init__()
44
45 self.supported_types = supported_types
46
47 @classmethod
48 def _calculate_outgoing_edges(cls, component_params: Dict[str, Any]) -> int:
49 supported_types = component_params.get("supported_types", DEFAULT_TYPES)
50 return len(supported_types)
51
52 def _estimate_extension(self, file_path: Path) -> str:
53 """
54 Return the extension found based on the contents of the given file
55
56 :param file_path: the path to extract the extension from
57 """
58 try:
59 magic_import.check()
60 extension = magic.from_file(str(file_path), mime=True)
61 return mimetypes.guess_extension(extension) or ""
62 except (NameError, ImportError):
63 logger.error(
64 "The type of '%s' could not be guessed, probably because 'python-magic' is not installed. Ignoring this error."
65 "Please make sure the necessary OS libraries are installed if you need this functionality ('python-magic' or 'python-magic-bin' on Windows).",
66 file_path,
67 )
68 return ""
69
70 def _get_extension(self, file_paths: List[Path]) -> str:
71 """
72 Return the extension found in the given list of files.
73 Also makes sure that all files have the same extension.
74 If this is not true, it throws an exception.
75
76 :param file_paths: the paths to extract the extension from
77 :return: a set of strings with all the extensions (without duplicates), the extension will be guessed if the file has none
78 """
79 extension = file_paths[0].suffix.lower()
80 if extension == "":
81 extension = self._estimate_extension(file_paths[0])
82
83 for path in file_paths:
84 path_suffix = path.suffix.lower()
85 if path_suffix == "":
86 path_suffix = self._estimate_extension(path)
87 if path_suffix != extension:
88 raise ValueError("Multiple file types are not allowed at once.")
89
90 return extension.lstrip(".")
91
92 def run(self, file_paths: Union[Path, List[Path], str, List[str], List[Union[Path, str]]]): # type: ignore
93 """
94 Sends out files on a different output edge depending on their extension.
95
96 :param file_paths: paths to route on different edges.
97 """
98 if not isinstance(file_paths, list):
99 file_paths = [file_paths]
100
101 paths = [Path(path) for path in file_paths]
102
103 output = {"file_paths": paths}
104 extension = self._get_extension(paths)
105 try:
106 index = self.supported_types.index(extension) + 1
107 except ValueError:
108 raise ValueError(
109 f"Files of type '{extension}' ({paths[0]}) are not supported. "
110 f"The supported types are: {self.supported_types}. "
111 "Consider using the 'supported_types' parameter to "
112 "change the types accepted by this node."
113 )
114 return output, f"output_{index}"
115
116 def run_batch(self, file_paths: Union[Path, List[Path], str, List[str], List[Union[Path, str]]]): # type: ignore
117 return self.run(file_paths=file_paths)
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/haystack/nodes/file_classifier/file_type.py b/haystack/nodes/file_classifier/file_type.py
--- a/haystack/nodes/file_classifier/file_type.py
+++ b/haystack/nodes/file_classifier/file_type.py
@@ -14,7 +14,9 @@
import magic
-DEFAULT_TYPES = ["txt", "pdf", "md", "docx", "html"]
+DEFAULT_TYPES = ["txt", "pdf", "md", "docx", "html", "media"]
+
+DEFAULT_MEDIA_TYPES = ["mp3", "mp4", "mpeg", "m4a", "wav", "webm"]
class FileTypeClassifier(BaseComponent):
@@ -24,15 +26,20 @@
outgoing_edges = len(DEFAULT_TYPES)
- def __init__(self, supported_types: Optional[List[str]] = None):
+ def __init__(self, supported_types: Optional[List[str]] = None, full_analysis: bool = False):
"""
Node that sends out files on a different output edge depending on their extension.
- :param supported_types: The file types that this node can distinguish between.
- If no value is provided, the value created by default comprises: `txt`, `pdf`, `md`, `docx`, and `html`.
- Lists with duplicate elements are not allowed.
+ :param supported_types: The file types this node distinguishes. Optional.
+ If you don't provide any value, the default is: `txt`, `pdf`, `md`, `docx`, and `html`.
+ You can't use lists with duplicate elements.
+ :param full_analysis: If True, the whole file is analyzed to determine the file type.
+ If False, only the first 2049 bytes are analyzed.
"""
+ self.full_analysis = full_analysis
+ self._default_types = False
if supported_types is None:
+ self._default_types = True
supported_types = DEFAULT_TYPES
if len(set(supported_types)) != len(supported_types):
duplicates = supported_types
@@ -56,9 +63,17 @@
:param file_path: the path to extract the extension from
"""
try:
- magic_import.check()
- extension = magic.from_file(str(file_path), mime=True)
- return mimetypes.guess_extension(extension) or ""
+ with open(file_path, "rb") as f:
+ if self.full_analysis:
+ buffer = f.read()
+ else:
+ buffer = f.read(2049)
+ extension = magic.from_buffer(buffer, mime=True)
+ real_extension = mimetypes.guess_extension(extension) or ""
+ real_extension = real_extension.lstrip(".")
+ if self._default_types and real_extension in DEFAULT_MEDIA_TYPES:
+ return "media"
+ return real_extension or ""
except (NameError, ImportError):
logger.error(
"The type of '%s' could not be guessed, probably because 'python-magic' is not installed. Ignoring this error."
@@ -76,18 +91,19 @@
:param file_paths: the paths to extract the extension from
:return: a set of strings with all the extensions (without duplicates), the extension will be guessed if the file has none
"""
- extension = file_paths[0].suffix.lower()
- if extension == "":
+ extension = file_paths[0].suffix.lower().lstrip(".")
+
+ if extension == "" or (self._default_types and extension in DEFAULT_MEDIA_TYPES):
extension = self._estimate_extension(file_paths[0])
for path in file_paths:
- path_suffix = path.suffix.lower()
- if path_suffix == "":
+ path_suffix = path.suffix.lower().lstrip(".")
+ if path_suffix == "" or (self._default_types and path_suffix in DEFAULT_MEDIA_TYPES):
path_suffix = self._estimate_extension(path)
if path_suffix != extension:
- raise ValueError("Multiple file types are not allowed at once.")
+ raise ValueError("Multiple non-default file types are not allowed at once.")
- return extension.lstrip(".")
+ return extension
def run(self, file_paths: Union[Path, List[Path], str, List[str], List[Union[Path, str]]]): # type: ignore
"""
|
{"golden_diff": "diff --git a/haystack/nodes/file_classifier/file_type.py b/haystack/nodes/file_classifier/file_type.py\n--- a/haystack/nodes/file_classifier/file_type.py\n+++ b/haystack/nodes/file_classifier/file_type.py\n@@ -14,7 +14,9 @@\n import magic\n \n \n-DEFAULT_TYPES = [\"txt\", \"pdf\", \"md\", \"docx\", \"html\"]\n+DEFAULT_TYPES = [\"txt\", \"pdf\", \"md\", \"docx\", \"html\", \"media\"]\n+\n+DEFAULT_MEDIA_TYPES = [\"mp3\", \"mp4\", \"mpeg\", \"m4a\", \"wav\", \"webm\"]\n \n \n class FileTypeClassifier(BaseComponent):\n@@ -24,15 +26,20 @@\n \n outgoing_edges = len(DEFAULT_TYPES)\n \n- def __init__(self, supported_types: Optional[List[str]] = None):\n+ def __init__(self, supported_types: Optional[List[str]] = None, full_analysis: bool = False):\n \"\"\"\n Node that sends out files on a different output edge depending on their extension.\n \n- :param supported_types: The file types that this node can distinguish between.\n- If no value is provided, the value created by default comprises: `txt`, `pdf`, `md`, `docx`, and `html`.\n- Lists with duplicate elements are not allowed.\n+ :param supported_types: The file types this node distinguishes. Optional.\n+ If you don't provide any value, the default is: `txt`, `pdf`, `md`, `docx`, and `html`.\n+ You can't use lists with duplicate elements.\n+ :param full_analysis: If True, the whole file is analyzed to determine the file type.\n+ If False, only the first 2049 bytes are analyzed.\n \"\"\"\n+ self.full_analysis = full_analysis\n+ self._default_types = False\n if supported_types is None:\n+ self._default_types = True\n supported_types = DEFAULT_TYPES\n if len(set(supported_types)) != len(supported_types):\n duplicates = supported_types\n@@ -56,9 +63,17 @@\n :param file_path: the path to extract the extension from\n \"\"\"\n try:\n- magic_import.check()\n- extension = magic.from_file(str(file_path), mime=True)\n- return mimetypes.guess_extension(extension) or \"\"\n+ with open(file_path, \"rb\") as f:\n+ if self.full_analysis:\n+ buffer = f.read()\n+ else:\n+ buffer = f.read(2049)\n+ extension = magic.from_buffer(buffer, mime=True)\n+ real_extension = mimetypes.guess_extension(extension) or \"\"\n+ real_extension = real_extension.lstrip(\".\")\n+ if self._default_types and real_extension in DEFAULT_MEDIA_TYPES:\n+ return \"media\"\n+ return real_extension or \"\"\n except (NameError, ImportError):\n logger.error(\n \"The type of '%s' could not be guessed, probably because 'python-magic' is not installed. 
Ignoring this error.\"\n@@ -76,18 +91,19 @@\n :param file_paths: the paths to extract the extension from\n :return: a set of strings with all the extensions (without duplicates), the extension will be guessed if the file has none\n \"\"\"\n- extension = file_paths[0].suffix.lower()\n- if extension == \"\":\n+ extension = file_paths[0].suffix.lower().lstrip(\".\")\n+\n+ if extension == \"\" or (self._default_types and extension in DEFAULT_MEDIA_TYPES):\n extension = self._estimate_extension(file_paths[0])\n \n for path in file_paths:\n- path_suffix = path.suffix.lower()\n- if path_suffix == \"\":\n+ path_suffix = path.suffix.lower().lstrip(\".\")\n+ if path_suffix == \"\" or (self._default_types and path_suffix in DEFAULT_MEDIA_TYPES):\n path_suffix = self._estimate_extension(path)\n if path_suffix != extension:\n- raise ValueError(\"Multiple file types are not allowed at once.\")\n+ raise ValueError(\"Multiple non-default file types are not allowed at once.\")\n \n- return extension.lstrip(\".\")\n+ return extension\n \n def run(self, file_paths: Union[Path, List[Path], str, List[str], List[Union[Path, str]]]): # type: ignore\n \"\"\"\n", "issue": "Add FileClassifier media support\n**Is your feature request related to a problem? Please describe.**\r\nAs a user I want to add WhisperTranscriber in my pipeline. I would like to use FileClassifier to classify my documents/media and direct to the correct node. \r\n\r\n**Describe the solution you'd like**\r\n- Add support to media files (that Whisper allows) into the FileClassifier\r\n\r\n**Describe alternatives you've considered**\r\nKeep as it's and don't integrate into the current pipelines\r\n\r\n**Additional context**\r\nThis feature request is supposed to be considered after the merge of the current Whisper PR #4335.\r\n\n", "before_files": [{"content": "import mimetypes\nfrom typing import Any, Dict, List, Union, Optional\n\nimport logging\nfrom pathlib import Path\n\nfrom haystack.nodes.base import BaseComponent\nfrom haystack.lazy_imports import LazyImport\n\n\nlogger = logging.getLogger(__name__)\n\nwith LazyImport() as magic_import:\n import magic\n\n\nDEFAULT_TYPES = [\"txt\", \"pdf\", \"md\", \"docx\", \"html\"]\n\n\nclass FileTypeClassifier(BaseComponent):\n \"\"\"\n Route files in an Indexing Pipeline to corresponding file converters.\n \"\"\"\n\n outgoing_edges = len(DEFAULT_TYPES)\n\n def __init__(self, supported_types: Optional[List[str]] = None):\n \"\"\"\n Node that sends out files on a different output edge depending on their extension.\n\n :param supported_types: The file types that this node can distinguish between.\n If no value is provided, the value created by default comprises: `txt`, `pdf`, `md`, `docx`, and `html`.\n Lists with duplicate elements are not allowed.\n \"\"\"\n if supported_types is None:\n supported_types = DEFAULT_TYPES\n if len(set(supported_types)) != len(supported_types):\n duplicates = supported_types\n for item in set(supported_types):\n duplicates.remove(item)\n raise ValueError(f\"supported_types can't contain duplicate values ({duplicates}).\")\n\n super().__init__()\n\n self.supported_types = supported_types\n\n @classmethod\n def _calculate_outgoing_edges(cls, component_params: Dict[str, Any]) -> int:\n supported_types = component_params.get(\"supported_types\", DEFAULT_TYPES)\n return len(supported_types)\n\n def _estimate_extension(self, file_path: Path) -> str:\n \"\"\"\n Return the extension found based on the contents of the given file\n\n :param file_path: the path to extract the 
extension from\n \"\"\"\n try:\n magic_import.check()\n extension = magic.from_file(str(file_path), mime=True)\n return mimetypes.guess_extension(extension) or \"\"\n except (NameError, ImportError):\n logger.error(\n \"The type of '%s' could not be guessed, probably because 'python-magic' is not installed. Ignoring this error.\"\n \"Please make sure the necessary OS libraries are installed if you need this functionality ('python-magic' or 'python-magic-bin' on Windows).\",\n file_path,\n )\n return \"\"\n\n def _get_extension(self, file_paths: List[Path]) -> str:\n \"\"\"\n Return the extension found in the given list of files.\n Also makes sure that all files have the same extension.\n If this is not true, it throws an exception.\n\n :param file_paths: the paths to extract the extension from\n :return: a set of strings with all the extensions (without duplicates), the extension will be guessed if the file has none\n \"\"\"\n extension = file_paths[0].suffix.lower()\n if extension == \"\":\n extension = self._estimate_extension(file_paths[0])\n\n for path in file_paths:\n path_suffix = path.suffix.lower()\n if path_suffix == \"\":\n path_suffix = self._estimate_extension(path)\n if path_suffix != extension:\n raise ValueError(\"Multiple file types are not allowed at once.\")\n\n return extension.lstrip(\".\")\n\n def run(self, file_paths: Union[Path, List[Path], str, List[str], List[Union[Path, str]]]): # type: ignore\n \"\"\"\n Sends out files on a different output edge depending on their extension.\n\n :param file_paths: paths to route on different edges.\n \"\"\"\n if not isinstance(file_paths, list):\n file_paths = [file_paths]\n\n paths = [Path(path) for path in file_paths]\n\n output = {\"file_paths\": paths}\n extension = self._get_extension(paths)\n try:\n index = self.supported_types.index(extension) + 1\n except ValueError:\n raise ValueError(\n f\"Files of type '{extension}' ({paths[0]}) are not supported. \"\n f\"The supported types are: {self.supported_types}. \"\n \"Consider using the 'supported_types' parameter to \"\n \"change the types accepted by this node.\"\n )\n return output, f\"output_{index}\"\n\n def run_batch(self, file_paths: Union[Path, List[Path], str, List[str], List[Union[Path, str]]]): # type: ignore\n return self.run(file_paths=file_paths)\n", "path": "haystack/nodes/file_classifier/file_type.py"}], "after_files": [{"content": "import mimetypes\nfrom typing import Any, Dict, List, Union, Optional\n\nimport logging\nfrom pathlib import Path\n\nfrom haystack.nodes.base import BaseComponent\nfrom haystack.lazy_imports import LazyImport\n\n\nlogger = logging.getLogger(__name__)\n\nwith LazyImport() as magic_import:\n import magic\n\n\nDEFAULT_TYPES = [\"txt\", \"pdf\", \"md\", \"docx\", \"html\", \"media\"]\n\nDEFAULT_MEDIA_TYPES = [\"mp3\", \"mp4\", \"mpeg\", \"m4a\", \"wav\", \"webm\"]\n\n\nclass FileTypeClassifier(BaseComponent):\n \"\"\"\n Route files in an Indexing Pipeline to corresponding file converters.\n \"\"\"\n\n outgoing_edges = len(DEFAULT_TYPES)\n\n def __init__(self, supported_types: Optional[List[str]] = None, full_analysis: bool = False):\n \"\"\"\n Node that sends out files on a different output edge depending on their extension.\n\n :param supported_types: The file types this node distinguishes. 
Optional.\n If you don't provide any value, the default is: `txt`, `pdf`, `md`, `docx`, and `html`.\n You can't use lists with duplicate elements.\n :param full_analysis: If True, the whole file is analyzed to determine the file type.\n If False, only the first 2049 bytes are analyzed.\n \"\"\"\n self.full_analysis = full_analysis\n self._default_types = False\n if supported_types is None:\n self._default_types = True\n supported_types = DEFAULT_TYPES\n if len(set(supported_types)) != len(supported_types):\n duplicates = supported_types\n for item in set(supported_types):\n duplicates.remove(item)\n raise ValueError(f\"supported_types can't contain duplicate values ({duplicates}).\")\n\n super().__init__()\n\n self.supported_types = supported_types\n\n @classmethod\n def _calculate_outgoing_edges(cls, component_params: Dict[str, Any]) -> int:\n supported_types = component_params.get(\"supported_types\", DEFAULT_TYPES)\n return len(supported_types)\n\n def _estimate_extension(self, file_path: Path) -> str:\n \"\"\"\n Return the extension found based on the contents of the given file\n\n :param file_path: the path to extract the extension from\n \"\"\"\n try:\n with open(file_path, \"rb\") as f:\n if self.full_analysis:\n buffer = f.read()\n else:\n buffer = f.read(2049)\n extension = magic.from_buffer(buffer, mime=True)\n real_extension = mimetypes.guess_extension(extension) or \"\"\n real_extension = real_extension.lstrip(\".\")\n if self._default_types and real_extension in DEFAULT_MEDIA_TYPES:\n return \"media\"\n return real_extension or \"\"\n except (NameError, ImportError):\n logger.error(\n \"The type of '%s' could not be guessed, probably because 'python-magic' is not installed. Ignoring this error.\"\n \"Please make sure the necessary OS libraries are installed if you need this functionality ('python-magic' or 'python-magic-bin' on Windows).\",\n file_path,\n )\n return \"\"\n\n def _get_extension(self, file_paths: List[Path]) -> str:\n \"\"\"\n Return the extension found in the given list of files.\n Also makes sure that all files have the same extension.\n If this is not true, it throws an exception.\n\n :param file_paths: the paths to extract the extension from\n :return: a set of strings with all the extensions (without duplicates), the extension will be guessed if the file has none\n \"\"\"\n extension = file_paths[0].suffix.lower().lstrip(\".\")\n\n if extension == \"\" or (self._default_types and extension in DEFAULT_MEDIA_TYPES):\n extension = self._estimate_extension(file_paths[0])\n\n for path in file_paths:\n path_suffix = path.suffix.lower().lstrip(\".\")\n if path_suffix == \"\" or (self._default_types and path_suffix in DEFAULT_MEDIA_TYPES):\n path_suffix = self._estimate_extension(path)\n if path_suffix != extension:\n raise ValueError(\"Multiple non-default file types are not allowed at once.\")\n\n return extension\n\n def run(self, file_paths: Union[Path, List[Path], str, List[str], List[Union[Path, str]]]): # type: ignore\n \"\"\"\n Sends out files on a different output edge depending on their extension.\n\n :param file_paths: paths to route on different edges.\n \"\"\"\n if not isinstance(file_paths, list):\n file_paths = [file_paths]\n\n paths = [Path(path) for path in file_paths]\n\n output = {\"file_paths\": paths}\n extension = self._get_extension(paths)\n try:\n index = self.supported_types.index(extension) + 1\n except ValueError:\n raise ValueError(\n f\"Files of type '{extension}' ({paths[0]}) are not supported. 
\"\n f\"The supported types are: {self.supported_types}. \"\n \"Consider using the 'supported_types' parameter to \"\n \"change the types accepted by this node.\"\n )\n return output, f\"output_{index}\"\n\n def run_batch(self, file_paths: Union[Path, List[Path], str, List[str], List[Union[Path, str]]]): # type: ignore\n return self.run(file_paths=file_paths)\n", "path": "haystack/nodes/file_classifier/file_type.py"}]}
| 1,590 | 948 |
gh_patches_debug_32538 | rasdani/github-patches | git_diff | dask__distributed-3104 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wait on single connection or try multiple connections
When we connect to a remote endpoint we currently wait for our full timeout, often ten seconds, something like this:
```python
comm = await connect(address, timeout="10s")
```
However, @jacobtomlinson and I just ran into a situation with Kubernetes where the address that we were connecting to was created at just about the same time, so when we first tried to connect we were sent somewhere that would never receive the connection, but if we try again a second later, things are fine.
```python
comm = await connect(address, timeout="10s") # this hangs for 10s
```
```python
for i in range(10):  # this connects after 1s
    with ignoring(TimeoutError):
        comm = await connect(address, timeout="1s")
```
This seems to work because, presumably, after the first connection fails and we try reconnecting the network now routes us to the correct location.
In general this second approach seems more robust to networks that might be fiddled with on-the-fly, which is presumably more common in cloud and Kubernetes situations. However, it also means that we need to become better about cleaning up missed connections.
cc @jcrist @jacobtomlinson and @mmccarty
The actual code for this is here: https://github.com/dask/distributed/blob/549660e07c0c70fdb17e07c6a18ca438933bd8ba/distributed/comm/core.py#L205-L228
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `distributed/comm/core.py`
Content:
```
1 from abc import ABC, abstractmethod, abstractproperty
2 from datetime import timedelta
3 import logging
4 import weakref
5
6 import dask
7 from tornado import gen
8
9 from ..metrics import time
10 from ..utils import parse_timedelta
11 from . import registry
12 from .addressing import parse_address
13
14
15 logger = logging.getLogger(__name__)
16
17
18 class CommClosedError(IOError):
19 pass
20
21
22 class FatalCommClosedError(CommClosedError):
23 pass
24
25
26 class Comm(ABC):
27 """
28 A message-oriented communication object, representing an established
29 communication channel. There should be only one reader and one
30 writer at a time: to manage current communications, even with a
31 single peer, you must create distinct ``Comm`` objects.
32
33 Messages are arbitrary Python objects. Concrete implementations
34 of this class can implement different serialization mechanisms
35 depending on the underlying transport's characteristics.
36 """
37
38 _instances = weakref.WeakSet()
39
40 def __init__(self):
41 self._instances.add(self)
42 self.name = None
43
44 # XXX add set_close_callback()?
45
46 @abstractmethod
47 def read(self, deserializers=None):
48 """
49 Read and return a message (a Python object).
50
51 This method is a coroutine.
52
53 Parameters
54 ----------
55 deserializers : Optional[Dict[str, Tuple[Callable, Callable, bool]]]
56 An optional dict appropriate for distributed.protocol.deserialize.
57 See :ref:`serialization` for more.
58 """
59
60 @abstractmethod
61 def write(self, msg, on_error=None):
62 """
63 Write a message (a Python object).
64
65 This method is a coroutine.
66
67 Parameters
68 ----------
69 msg :
70 on_error : Optional[str]
71 The behavior when serialization fails. See
72 ``distributed.protocol.core.dumps`` for valid values.
73 """
74
75 @abstractmethod
76 def close(self):
77 """
78 Close the communication cleanly. This will attempt to flush
79 outgoing buffers before actually closing the underlying transport.
80
81 This method is a coroutine.
82 """
83
84 @abstractmethod
85 def abort(self):
86 """
87 Close the communication immediately and abruptly.
88 Useful in destructors or generators' ``finally`` blocks.
89 """
90
91 @abstractmethod
92 def closed(self):
93 """
94 Return whether the stream is closed.
95 """
96
97 @abstractproperty
98 def local_address(self):
99 """
100 The local address. For logging and debugging purposes only.
101 """
102
103 @abstractproperty
104 def peer_address(self):
105 """
106 The peer's address. For logging and debugging purposes only.
107 """
108
109 @property
110 def extra_info(self):
111 """
112 Return backend-specific information about the communication,
113 as a dict. Typically, this is information which is initialized
114 when the communication is established and doesn't vary afterwards.
115 """
116 return {}
117
118 def __repr__(self):
119 clsname = self.__class__.__name__
120 if self.closed():
121 return "<closed %s>" % (clsname,)
122 else:
123 return "<%s %s local=%s remote=%s>" % (
124 clsname,
125 self.name or "",
126 self.local_address,
127 self.peer_address,
128 )
129
130
131 class Listener(ABC):
132 @abstractmethod
133 def start(self):
134 """
135 Start listening for incoming connections.
136 """
137
138 @abstractmethod
139 def stop(self):
140 """
141 Stop listening. This does not shutdown already established
142 communications, but prevents accepting new ones.
143 """
144
145 @abstractproperty
146 def listen_address(self):
147 """
148 The listening address as a URI string.
149 """
150
151 @abstractproperty
152 def contact_address(self):
153 """
154 An address this listener can be contacted on. This can be
155 different from `listen_address` if the latter is some wildcard
156 address such as 'tcp://0.0.0.0:123'.
157 """
158
159 def __enter__(self):
160 self.start()
161 return self
162
163 def __exit__(self, *exc):
164 self.stop()
165
166
167 class Connector(ABC):
168 @abstractmethod
169 def connect(self, address, deserialize=True):
170 """
171 Connect to the given address and return a Comm object.
172 This function is a coroutine. It may raise EnvironmentError
173 if the other endpoint is unreachable or unavailable. It
174 may raise ValueError if the address is malformed.
175 """
176
177
178 async def connect(addr, timeout=None, deserialize=True, connection_args=None):
179 """
180 Connect to the given address (a URI such as ``tcp://127.0.0.1:1234``)
181 and yield a ``Comm`` object. If the connection attempt fails, it is
182 retried until the *timeout* is expired.
183 """
184 if timeout is None:
185 timeout = dask.config.get("distributed.comm.timeouts.connect")
186 timeout = parse_timedelta(timeout, default="seconds")
187
188 scheme, loc = parse_address(addr)
189 backend = registry.get_backend(scheme)
190 connector = backend.get_connector()
191
192 start = time()
193 deadline = start + timeout
194 error = None
195
196 def _raise(error):
197 error = error or "connect() didn't finish in time"
198 msg = "Timed out trying to connect to %r after %s s: %s" % (
199 addr,
200 timeout,
201 error,
202 )
203 raise IOError(msg)
204
205 # This starts a thread
206 while True:
207 try:
208 future = connector.connect(
209 loc, deserialize=deserialize, **(connection_args or {})
210 )
211 comm = await gen.with_timeout(
212 timedelta(seconds=deadline - time()),
213 future,
214 quiet_exceptions=EnvironmentError,
215 )
216 except FatalCommClosedError:
217 raise
218 except EnvironmentError as e:
219 error = str(e)
220 if time() < deadline:
221 await gen.sleep(0.01)
222 logger.debug("sleeping on connect")
223 else:
224 _raise(error)
225 except gen.TimeoutError:
226 _raise(error)
227 else:
228 break
229
230 return comm
231
232
233 def listen(addr, handle_comm, deserialize=True, connection_args=None):
234 """
235 Create a listener object with the given parameters. When its ``start()``
236 method is called, the listener will listen on the given address
237 (a URI such as ``tcp://0.0.0.0``) and call *handle_comm* with a
238 ``Comm`` object for each incoming connection.
239
240 *handle_comm* can be a regular function or a coroutine.
241 """
242 try:
243 scheme, loc = parse_address(addr, strict=True)
244 except ValueError:
245 if connection_args and connection_args.get("ssl_context"):
246 addr = "tls://" + addr
247 else:
248 addr = "tcp://" + addr
249 scheme, loc = parse_address(addr, strict=True)
250
251 backend = registry.get_backend(scheme)
252
253 return backend.get_listener(
254 loc, handle_comm, deserialize, **(connection_args or {})
255 )
256
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/distributed/comm/core.py b/distributed/comm/core.py
--- a/distributed/comm/core.py
+++ b/distributed/comm/core.py
@@ -7,7 +7,7 @@
from tornado import gen
from ..metrics import time
-from ..utils import parse_timedelta
+from ..utils import parse_timedelta, ignoring
from . import registry
from .addressing import parse_address
@@ -188,6 +188,7 @@
scheme, loc = parse_address(addr)
backend = registry.get_backend(scheme)
connector = backend.get_connector()
+ comm = None
start = time()
deadline = start + timeout
@@ -205,14 +206,19 @@
# This starts a thread
while True:
try:
- future = connector.connect(
- loc, deserialize=deserialize, **(connection_args or {})
- )
- comm = await gen.with_timeout(
- timedelta(seconds=deadline - time()),
- future,
- quiet_exceptions=EnvironmentError,
- )
+ while deadline - time() > 0:
+ future = connector.connect(
+ loc, deserialize=deserialize, **(connection_args or {})
+ )
+ with ignoring(gen.TimeoutError):
+ comm = await gen.with_timeout(
+ timedelta(seconds=min(deadline - time(), 1)),
+ future,
+ quiet_exceptions=EnvironmentError,
+ )
+ break
+ if not comm:
+ _raise(error)
except FatalCommClosedError:
raise
except EnvironmentError as e:
@@ -222,8 +228,6 @@
logger.debug("sleeping on connect")
else:
_raise(error)
- except gen.TimeoutError:
- _raise(error)
else:
break
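
As an illustrative sketch (the address below is a placeholder, not from the issue), the caller-facing API is unchanged; the difference is that one call now re-attempts internally about once per second until the overall deadline instead of blocking on a single attempt:

```python
from distributed.comm import connect

async def get_comm(address="tcp://10.0.0.5:8786"):
    # an endpoint that only becomes routable a few seconds after creation is now
    # reached on a later internal attempt within the same 10 s budget
    return await connect(address, timeout="10s")
```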
|
{"golden_diff": "diff --git a/distributed/comm/core.py b/distributed/comm/core.py\n--- a/distributed/comm/core.py\n+++ b/distributed/comm/core.py\n@@ -7,7 +7,7 @@\n from tornado import gen\n \n from ..metrics import time\n-from ..utils import parse_timedelta\n+from ..utils import parse_timedelta, ignoring\n from . import registry\n from .addressing import parse_address\n \n@@ -188,6 +188,7 @@\n scheme, loc = parse_address(addr)\n backend = registry.get_backend(scheme)\n connector = backend.get_connector()\n+ comm = None\n \n start = time()\n deadline = start + timeout\n@@ -205,14 +206,19 @@\n # This starts a thread\n while True:\n try:\n- future = connector.connect(\n- loc, deserialize=deserialize, **(connection_args or {})\n- )\n- comm = await gen.with_timeout(\n- timedelta(seconds=deadline - time()),\n- future,\n- quiet_exceptions=EnvironmentError,\n- )\n+ while deadline - time() > 0:\n+ future = connector.connect(\n+ loc, deserialize=deserialize, **(connection_args or {})\n+ )\n+ with ignoring(gen.TimeoutError):\n+ comm = await gen.with_timeout(\n+ timedelta(seconds=min(deadline - time(), 1)),\n+ future,\n+ quiet_exceptions=EnvironmentError,\n+ )\n+ break\n+ if not comm:\n+ _raise(error)\n except FatalCommClosedError:\n raise\n except EnvironmentError as e:\n@@ -222,8 +228,6 @@\n logger.debug(\"sleeping on connect\")\n else:\n _raise(error)\n- except gen.TimeoutError:\n- _raise(error)\n else:\n break\n", "issue": "Wait on single connection or try multiple connections\nWhen we connect to a remote connection we currently wait on for our full timeout, often ten seconds, something like this:\r\n\r\n```python\r\ncomm = await connect(address, timeout=\"10s\")\r\n```\r\n\r\nHowever, @jacobtomlinson and I just ran into a situation with Kubernetes where the address that we were connecting to was created at just about the same time, so when we first tried to connect we were sent somewhere that would never receive the connection, but if we try again a second later, things are fine.\r\n\r\n```python\r\ncomm = await connect(address, timeout=\"10s\") # this hangs for 10s\r\n```\r\n```python\r\nfor i in range(10): # this connects after 1s\r\n with ignoring(TimeoutError):\r\n comm = await comm(address, timeout=\"1s\")\r\n```\r\n\r\nThis seems to work because, presumably, after the first connection fails and we try reconnecting the network now routes us to the correct location.\r\n\r\nIn general this second approach seems more robust to networks that might be fiddled with on-the-fly, which is presumably more common in cloud and Kubernetes situations. However, it also means that we need to become better about cleaning up missed connections.\r\n\r\ncc @jcrist @jacobtomlinson and @mmccarty \r\n\r\nThe actual code for this is here: https://github.com/dask/distributed/blob/549660e07c0c70fdb17e07c6a18ca438933bd8ba/distributed/comm/core.py#L205-L228\n", "before_files": [{"content": "from abc import ABC, abstractmethod, abstractproperty\nfrom datetime import timedelta\nimport logging\nimport weakref\n\nimport dask\nfrom tornado import gen\n\nfrom ..metrics import time\nfrom ..utils import parse_timedelta\nfrom . import registry\nfrom .addressing import parse_address\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CommClosedError(IOError):\n pass\n\n\nclass FatalCommClosedError(CommClosedError):\n pass\n\n\nclass Comm(ABC):\n \"\"\"\n A message-oriented communication object, representing an established\n communication channel. 
There should be only one reader and one\n writer at a time: to manage current communications, even with a\n single peer, you must create distinct ``Comm`` objects.\n\n Messages are arbitrary Python objects. Concrete implementations\n of this class can implement different serialization mechanisms\n depending on the underlying transport's characteristics.\n \"\"\"\n\n _instances = weakref.WeakSet()\n\n def __init__(self):\n self._instances.add(self)\n self.name = None\n\n # XXX add set_close_callback()?\n\n @abstractmethod\n def read(self, deserializers=None):\n \"\"\"\n Read and return a message (a Python object).\n\n This method is a coroutine.\n\n Parameters\n ----------\n deserializers : Optional[Dict[str, Tuple[Callable, Callable, bool]]]\n An optional dict appropriate for distributed.protocol.deserialize.\n See :ref:`serialization` for more.\n \"\"\"\n\n @abstractmethod\n def write(self, msg, on_error=None):\n \"\"\"\n Write a message (a Python object).\n\n This method is a coroutine.\n\n Parameters\n ----------\n msg :\n on_error : Optional[str]\n The behavior when serialization fails. See\n ``distributed.protocol.core.dumps`` for valid values.\n \"\"\"\n\n @abstractmethod\n def close(self):\n \"\"\"\n Close the communication cleanly. This will attempt to flush\n outgoing buffers before actually closing the underlying transport.\n\n This method is a coroutine.\n \"\"\"\n\n @abstractmethod\n def abort(self):\n \"\"\"\n Close the communication immediately and abruptly.\n Useful in destructors or generators' ``finally`` blocks.\n \"\"\"\n\n @abstractmethod\n def closed(self):\n \"\"\"\n Return whether the stream is closed.\n \"\"\"\n\n @abstractproperty\n def local_address(self):\n \"\"\"\n The local address. For logging and debugging purposes only.\n \"\"\"\n\n @abstractproperty\n def peer_address(self):\n \"\"\"\n The peer's address. For logging and debugging purposes only.\n \"\"\"\n\n @property\n def extra_info(self):\n \"\"\"\n Return backend-specific information about the communication,\n as a dict. Typically, this is information which is initialized\n when the communication is established and doesn't vary afterwards.\n \"\"\"\n return {}\n\n def __repr__(self):\n clsname = self.__class__.__name__\n if self.closed():\n return \"<closed %s>\" % (clsname,)\n else:\n return \"<%s %s local=%s remote=%s>\" % (\n clsname,\n self.name or \"\",\n self.local_address,\n self.peer_address,\n )\n\n\nclass Listener(ABC):\n @abstractmethod\n def start(self):\n \"\"\"\n Start listening for incoming connections.\n \"\"\"\n\n @abstractmethod\n def stop(self):\n \"\"\"\n Stop listening. This does not shutdown already established\n communications, but prevents accepting new ones.\n \"\"\"\n\n @abstractproperty\n def listen_address(self):\n \"\"\"\n The listening address as a URI string.\n \"\"\"\n\n @abstractproperty\n def contact_address(self):\n \"\"\"\n An address this listener can be contacted on. This can be\n different from `listen_address` if the latter is some wildcard\n address such as 'tcp://0.0.0.0:123'.\n \"\"\"\n\n def __enter__(self):\n self.start()\n return self\n\n def __exit__(self, *exc):\n self.stop()\n\n\nclass Connector(ABC):\n @abstractmethod\n def connect(self, address, deserialize=True):\n \"\"\"\n Connect to the given address and return a Comm object.\n This function is a coroutine. It may raise EnvironmentError\n if the other endpoint is unreachable or unavailable. 
It\n may raise ValueError if the address is malformed.\n \"\"\"\n\n\nasync def connect(addr, timeout=None, deserialize=True, connection_args=None):\n \"\"\"\n Connect to the given address (a URI such as ``tcp://127.0.0.1:1234``)\n and yield a ``Comm`` object. If the connection attempt fails, it is\n retried until the *timeout* is expired.\n \"\"\"\n if timeout is None:\n timeout = dask.config.get(\"distributed.comm.timeouts.connect\")\n timeout = parse_timedelta(timeout, default=\"seconds\")\n\n scheme, loc = parse_address(addr)\n backend = registry.get_backend(scheme)\n connector = backend.get_connector()\n\n start = time()\n deadline = start + timeout\n error = None\n\n def _raise(error):\n error = error or \"connect() didn't finish in time\"\n msg = \"Timed out trying to connect to %r after %s s: %s\" % (\n addr,\n timeout,\n error,\n )\n raise IOError(msg)\n\n # This starts a thread\n while True:\n try:\n future = connector.connect(\n loc, deserialize=deserialize, **(connection_args or {})\n )\n comm = await gen.with_timeout(\n timedelta(seconds=deadline - time()),\n future,\n quiet_exceptions=EnvironmentError,\n )\n except FatalCommClosedError:\n raise\n except EnvironmentError as e:\n error = str(e)\n if time() < deadline:\n await gen.sleep(0.01)\n logger.debug(\"sleeping on connect\")\n else:\n _raise(error)\n except gen.TimeoutError:\n _raise(error)\n else:\n break\n\n return comm\n\n\ndef listen(addr, handle_comm, deserialize=True, connection_args=None):\n \"\"\"\n Create a listener object with the given parameters. When its ``start()``\n method is called, the listener will listen on the given address\n (a URI such as ``tcp://0.0.0.0``) and call *handle_comm* with a\n ``Comm`` object for each incoming connection.\n\n *handle_comm* can be a regular function or a coroutine.\n \"\"\"\n try:\n scheme, loc = parse_address(addr, strict=True)\n except ValueError:\n if connection_args and connection_args.get(\"ssl_context\"):\n addr = \"tls://\" + addr\n else:\n addr = \"tcp://\" + addr\n scheme, loc = parse_address(addr, strict=True)\n\n backend = registry.get_backend(scheme)\n\n return backend.get_listener(\n loc, handle_comm, deserialize, **(connection_args or {})\n )\n", "path": "distributed/comm/core.py"}], "after_files": [{"content": "from abc import ABC, abstractmethod, abstractproperty\nfrom datetime import timedelta\nimport logging\nimport weakref\n\nimport dask\nfrom tornado import gen\n\nfrom ..metrics import time\nfrom ..utils import parse_timedelta, ignoring\nfrom . import registry\nfrom .addressing import parse_address\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CommClosedError(IOError):\n pass\n\n\nclass FatalCommClosedError(CommClosedError):\n pass\n\n\nclass Comm(ABC):\n \"\"\"\n A message-oriented communication object, representing an established\n communication channel. There should be only one reader and one\n writer at a time: to manage current communications, even with a\n single peer, you must create distinct ``Comm`` objects.\n\n Messages are arbitrary Python objects. 
Concrete implementations\n of this class can implement different serialization mechanisms\n depending on the underlying transport's characteristics.\n \"\"\"\n\n _instances = weakref.WeakSet()\n\n def __init__(self):\n self._instances.add(self)\n self.name = None\n\n # XXX add set_close_callback()?\n\n @abstractmethod\n def read(self, deserializers=None):\n \"\"\"\n Read and return a message (a Python object).\n\n This method is a coroutine.\n\n Parameters\n ----------\n deserializers : Optional[Dict[str, Tuple[Callable, Callable, bool]]]\n An optional dict appropriate for distributed.protocol.deserialize.\n See :ref:`serialization` for more.\n \"\"\"\n\n @abstractmethod\n def write(self, msg, on_error=None):\n \"\"\"\n Write a message (a Python object).\n\n This method is a coroutine.\n\n Parameters\n ----------\n msg :\n on_error : Optional[str]\n The behavior when serialization fails. See\n ``distributed.protocol.core.dumps`` for valid values.\n \"\"\"\n\n @abstractmethod\n def close(self):\n \"\"\"\n Close the communication cleanly. This will attempt to flush\n outgoing buffers before actually closing the underlying transport.\n\n This method is a coroutine.\n \"\"\"\n\n @abstractmethod\n def abort(self):\n \"\"\"\n Close the communication immediately and abruptly.\n Useful in destructors or generators' ``finally`` blocks.\n \"\"\"\n\n @abstractmethod\n def closed(self):\n \"\"\"\n Return whether the stream is closed.\n \"\"\"\n\n @abstractproperty\n def local_address(self):\n \"\"\"\n The local address. For logging and debugging purposes only.\n \"\"\"\n\n @abstractproperty\n def peer_address(self):\n \"\"\"\n The peer's address. For logging and debugging purposes only.\n \"\"\"\n\n @property\n def extra_info(self):\n \"\"\"\n Return backend-specific information about the communication,\n as a dict. Typically, this is information which is initialized\n when the communication is established and doesn't vary afterwards.\n \"\"\"\n return {}\n\n def __repr__(self):\n clsname = self.__class__.__name__\n if self.closed():\n return \"<closed %s>\" % (clsname,)\n else:\n return \"<%s %s local=%s remote=%s>\" % (\n clsname,\n self.name or \"\",\n self.local_address,\n self.peer_address,\n )\n\n\nclass Listener(ABC):\n @abstractmethod\n def start(self):\n \"\"\"\n Start listening for incoming connections.\n \"\"\"\n\n @abstractmethod\n def stop(self):\n \"\"\"\n Stop listening. This does not shutdown already established\n communications, but prevents accepting new ones.\n \"\"\"\n\n @abstractproperty\n def listen_address(self):\n \"\"\"\n The listening address as a URI string.\n \"\"\"\n\n @abstractproperty\n def contact_address(self):\n \"\"\"\n An address this listener can be contacted on. This can be\n different from `listen_address` if the latter is some wildcard\n address such as 'tcp://0.0.0.0:123'.\n \"\"\"\n\n def __enter__(self):\n self.start()\n return self\n\n def __exit__(self, *exc):\n self.stop()\n\n\nclass Connector(ABC):\n @abstractmethod\n def connect(self, address, deserialize=True):\n \"\"\"\n Connect to the given address and return a Comm object.\n This function is a coroutine. It may raise EnvironmentError\n if the other endpoint is unreachable or unavailable. It\n may raise ValueError if the address is malformed.\n \"\"\"\n\n\nasync def connect(addr, timeout=None, deserialize=True, connection_args=None):\n \"\"\"\n Connect to the given address (a URI such as ``tcp://127.0.0.1:1234``)\n and yield a ``Comm`` object. 
If the connection attempt fails, it is\n retried until the *timeout* is expired.\n \"\"\"\n if timeout is None:\n timeout = dask.config.get(\"distributed.comm.timeouts.connect\")\n timeout = parse_timedelta(timeout, default=\"seconds\")\n\n scheme, loc = parse_address(addr)\n backend = registry.get_backend(scheme)\n connector = backend.get_connector()\n comm = None\n\n start = time()\n deadline = start + timeout\n error = None\n\n def _raise(error):\n error = error or \"connect() didn't finish in time\"\n msg = \"Timed out trying to connect to %r after %s s: %s\" % (\n addr,\n timeout,\n error,\n )\n raise IOError(msg)\n\n # This starts a thread\n while True:\n try:\n while deadline - time() > 0:\n future = connector.connect(\n loc, deserialize=deserialize, **(connection_args or {})\n )\n with ignoring(gen.TimeoutError):\n comm = await gen.with_timeout(\n timedelta(seconds=min(deadline - time(), 1)),\n future,\n quiet_exceptions=EnvironmentError,\n )\n break\n if not comm:\n _raise(error)\n except FatalCommClosedError:\n raise\n except EnvironmentError as e:\n error = str(e)\n if time() < deadline:\n await gen.sleep(0.01)\n logger.debug(\"sleeping on connect\")\n else:\n _raise(error)\n else:\n break\n\n return comm\n\n\ndef listen(addr, handle_comm, deserialize=True, connection_args=None):\n \"\"\"\n Create a listener object with the given parameters. When its ``start()``\n method is called, the listener will listen on the given address\n (a URI such as ``tcp://0.0.0.0``) and call *handle_comm* with a\n ``Comm`` object for each incoming connection.\n\n *handle_comm* can be a regular function or a coroutine.\n \"\"\"\n try:\n scheme, loc = parse_address(addr, strict=True)\n except ValueError:\n if connection_args and connection_args.get(\"ssl_context\"):\n addr = \"tls://\" + addr\n else:\n addr = \"tcp://\" + addr\n scheme, loc = parse_address(addr, strict=True)\n\n backend = registry.get_backend(scheme)\n\n return backend.get_listener(\n loc, handle_comm, deserialize, **(connection_args or {})\n )\n", "path": "distributed/comm/core.py"}]}
| 2,760 | 396 |
gh_patches_debug_64926
|
rasdani/github-patches
|
git_diff
|
biopython__biopython-3922
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
KEGG.Compound.parse not returning mass
### Setup
I am reporting a problem with Biopython version, Python version, and operating
system as follows:
1.78
3.9.12
Windows 10 Pro
### Expected behaviour
Calling KEGG.Compound.parse on a KEGG record should return a KEGG record object containing the mass. For example, compound C00120 should have a mass attribute containing 244.0882.
### Actual behaviour
However, no mass attribute is returned.
### Steps to reproduce
```
from Bio.KEGG.Compound import parse
from Bio.KEGG.REST import kegg_get
c00120 = next(parse(kegg_get('C00120')))
print(c00120.mass)
```
### Fix
This is because the KEGG record now uses separate EXACT_MASS and MOL_WEIGHT fields (can be seen by running kegg_get('C00120').read()). Fixed by replacing line 156 in KEGG.Compound.__init__.py with:
`elif keyword == "EXACT_MASS ":`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Bio/KEGG/Compound/__init__.py`
Content:
```
1 # Copyright 2001 by Tarjei Mikkelsen. All rights reserved.
2 # Copyright 2007 by Michiel de Hoon. All rights reserved.
3 #
4 # This file is part of the Biopython distribution and governed by your
5 # choice of the "Biopython License Agreement" or the "BSD 3-Clause License".
6 # Please see the LICENSE file that should have been included as part of this
7 # package.
8
9 """Code to work with the KEGG Ligand/Compound database.
10
11 Functions:
12 - parse - Returns an iterator giving Record objects.
13
14 Classes:
15 - Record - A representation of a KEGG Ligand/Compound.
16 """
17
18
19 from Bio.KEGG import _default_wrap, _struct_wrap, _wrap_kegg, _write_kegg
20
21
22 # Set up line wrapping rules (see Bio.KEGG._wrap_kegg)
23 name_wrap = [0, "", (" ", "$", 1, 1), ("-", "$", 1, 1)]
24 id_wrap = _default_wrap
25 struct_wrap = _struct_wrap
26
27
28 class Record:
29 """Holds info from a KEGG Ligand/Compound record.
30
31 Attributes:
32 - entry The entry identifier.
33 - name A list of the compound names.
34 - formula The chemical formula for the compound
35 - mass The molecular weight for the compound
36 - pathway A list of 3-tuples: ('PATH', pathway id, pathway)
37 - enzyme A list of the EC numbers.
38 - structures A list of 2-tuples: (database, list of struct ids)
39 - dblinks A list of 2-tuples: (database, list of link ids)
40
41 """
42
43 def __init__(self):
44 """Initialize as new record."""
45 self.entry = ""
46 self.name = []
47 self.formula = ""
48 self.mass = ""
49 self.pathway = []
50 self.enzyme = []
51 self.structures = []
52 self.dblinks = []
53
54 def __str__(self):
55 """Return a string representation of this Record."""
56 return (
57 self._entry()
58 + self._name()
59 + self._formula()
60 + self._mass()
61 + self._pathway()
62 + self._enzyme()
63 + self._structures()
64 + self._dblinks()
65 + "///"
66 )
67
68 def _entry(self):
69 return _write_kegg("ENTRY", [self.entry])
70
71 def _name(self):
72 return _write_kegg(
73 "NAME", [_wrap_kegg(l, wrap_rule=name_wrap) for l in self.name]
74 )
75
76 def _formula(self):
77 return _write_kegg("FORMULA", [self.formula])
78
79 def _mass(self):
80 return _write_kegg("MASS", [self.mass])
81
82 def _pathway(self):
83 s = []
84 for entry in self.pathway:
85 s.append(entry[0] + " " + entry[1])
86 return _write_kegg("PATHWAY", [_wrap_kegg(l, wrap_rule=id_wrap(16)) for l in s])
87
88 def _enzyme(self):
89 return _write_kegg(
90 "ENZYME", [_wrap_kegg(l, wrap_rule=name_wrap) for l in self.enzyme]
91 )
92
93 def _structures(self):
94 s = []
95 for entry in self.structures:
96 s.append(entry[0] + ": " + " ".join(entry[1]) + " ")
97 return _write_kegg(
98 "STRUCTURES", [_wrap_kegg(l, wrap_rule=struct_wrap(5)) for l in s]
99 )
100
101 def _dblinks(self):
102 s = []
103 for entry in self.dblinks:
104 s.append(entry[0] + ": " + " ".join(entry[1]))
105 return _write_kegg("DBLINKS", [_wrap_kegg(l, wrap_rule=id_wrap(9)) for l in s])
106
107
108 def parse(handle):
109 """Parse a KEGG Ligan/Compound file, returning Record objects.
110
111 This is an iterator function, typically used in a for loop. For
112 example, using one of the example KEGG files in the Biopython
113 test suite,
114
115 >>> with open("KEGG/compound.sample") as handle:
116 ... for record in parse(handle):
117 ... print("%s %s" % (record.entry, record.name[0]))
118 ...
119 C00023 Iron
120 C00017 Protein
121 C00099 beta-Alanine
122 C00294 Inosine
123 C00298 Trypsin
124 C00348 all-trans-Undecaprenyl phosphate
125 C00349 2-Methyl-3-oxopropanoate
126 C01386 NH2Mec
127
128 """
129 record = Record()
130 for line in handle:
131 if line[:3] == "///":
132 yield record
133 record = Record()
134 continue
135 if line[:12] != " ":
136 keyword = line[:12]
137 data = line[12:].strip()
138 if keyword == "ENTRY ":
139 words = data.split()
140 record.entry = words[0]
141 elif keyword == "NAME ":
142 data = data.strip(";")
143 record.name.append(data)
144 elif keyword == "ENZYME ":
145 while data:
146 column = data[:16]
147 data = data[16:]
148 enzyme = column.strip()
149 record.enzyme.append(enzyme)
150 elif keyword == "PATHWAY ":
151 map, name = data.split(" ")
152 pathway = ("PATH", map, name)
153 record.pathway.append(pathway)
154 elif keyword == "FORMULA ":
155 record.formula = data
156 elif keyword == "MASS ":
157 record.mass = data
158 elif keyword == "DBLINKS ":
159 if ":" in data:
160 key, values = data.split(":")
161 values = values.split()
162 row = (key, values)
163 record.dblinks.append(row)
164 else:
165 row = record.dblinks[-1]
166 key, values = row
167 values.extend(data.split())
168 row = key, values
169 record.dblinks[-1] = row
170
171
172 if __name__ == "__main__":
173 from Bio._utils import run_doctest
174
175 run_doctest()
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/Bio/KEGG/Compound/__init__.py b/Bio/KEGG/Compound/__init__.py
--- a/Bio/KEGG/Compound/__init__.py
+++ b/Bio/KEGG/Compound/__init__.py
@@ -153,7 +153,7 @@
record.pathway.append(pathway)
elif keyword == "FORMULA ":
record.formula = data
- elif keyword == "MASS ":
+ elif keyword in ("MASS ", "EXACT_MASS "):
record.mass = data
elif keyword == "DBLINKS ":
if ":" in data:
|
{"golden_diff": "diff --git a/Bio/KEGG/Compound/__init__.py b/Bio/KEGG/Compound/__init__.py\n--- a/Bio/KEGG/Compound/__init__.py\n+++ b/Bio/KEGG/Compound/__init__.py\n@@ -153,7 +153,7 @@\n record.pathway.append(pathway)\n elif keyword == \"FORMULA \":\n record.formula = data\n- elif keyword == \"MASS \":\n+ elif keyword in (\"MASS \", \"EXACT_MASS \"):\n record.mass = data\n elif keyword == \"DBLINKS \":\n if \":\" in data:\n", "issue": "KEGG.Compound.parse not returning mass \n### Setup\r\n\r\nI am reporting a problem with Biopython version, Python version, and operating\r\nsystem as follows:\r\n\r\n1.78\r\n3.9.12\r\nWindows 10 Pro\r\n\r\n### Expected behaviour\r\n\r\nCalling KEGG.Compound.parse on a KEGG record should return a KEGG record object containing the mass. For example, compound C00120 should have a mass attribute containing 244.0882.\r\n\r\n### Actual behaviour\r\n\r\nHowever, no mass attribute is returned. \r\n\r\n### Steps to reproduce\r\n\r\n```\r\nfrom Bio.KEGG.Compound import parse\r\nfrom Bio.KEGG.REST import kegg_get\r\nc00120 = next(parse(kegg_get('C00120')))\r\nprint(c00120.mass)\r\n```\r\n### Fix\r\nThis is because the KEGG record now uses separate EXACT_MASS and MOL_WEIGHT fields (can be seen by running kegg_get('C00120').read()). Fixed by replacing line 156 in KEGG.Compound.__init__.py with:\r\n`elif keyword == \"EXACT_MASS \":`\r\n\r\n\n", "before_files": [{"content": "# Copyright 2001 by Tarjei Mikkelsen. All rights reserved.\n# Copyright 2007 by Michiel de Hoon. All rights reserved.\n#\n# This file is part of the Biopython distribution and governed by your\n# choice of the \"Biopython License Agreement\" or the \"BSD 3-Clause License\".\n# Please see the LICENSE file that should have been included as part of this\n# package.\n\n\"\"\"Code to work with the KEGG Ligand/Compound database.\n\nFunctions:\n - parse - Returns an iterator giving Record objects.\n\nClasses:\n - Record - A representation of a KEGG Ligand/Compound.\n\"\"\"\n\n\nfrom Bio.KEGG import _default_wrap, _struct_wrap, _wrap_kegg, _write_kegg\n\n\n# Set up line wrapping rules (see Bio.KEGG._wrap_kegg)\nname_wrap = [0, \"\", (\" \", \"$\", 1, 1), (\"-\", \"$\", 1, 1)]\nid_wrap = _default_wrap\nstruct_wrap = _struct_wrap\n\n\nclass Record:\n \"\"\"Holds info from a KEGG Ligand/Compound record.\n\n Attributes:\n - entry The entry identifier.\n - name A list of the compound names.\n - formula The chemical formula for the compound\n - mass The molecular weight for the compound\n - pathway A list of 3-tuples: ('PATH', pathway id, pathway)\n - enzyme A list of the EC numbers.\n - structures A list of 2-tuples: (database, list of struct ids)\n - dblinks A list of 2-tuples: (database, list of link ids)\n\n \"\"\"\n\n def __init__(self):\n \"\"\"Initialize as new record.\"\"\"\n self.entry = \"\"\n self.name = []\n self.formula = \"\"\n self.mass = \"\"\n self.pathway = []\n self.enzyme = []\n self.structures = []\n self.dblinks = []\n\n def __str__(self):\n \"\"\"Return a string representation of this Record.\"\"\"\n return (\n self._entry()\n + self._name()\n + self._formula()\n + self._mass()\n + self._pathway()\n + self._enzyme()\n + self._structures()\n + self._dblinks()\n + \"///\"\n )\n\n def _entry(self):\n return _write_kegg(\"ENTRY\", [self.entry])\n\n def _name(self):\n return _write_kegg(\n \"NAME\", [_wrap_kegg(l, wrap_rule=name_wrap) for l in self.name]\n )\n\n def _formula(self):\n return _write_kegg(\"FORMULA\", [self.formula])\n\n def _mass(self):\n return _write_kegg(\"MASS\", 
[self.mass])\n\n def _pathway(self):\n s = []\n for entry in self.pathway:\n s.append(entry[0] + \" \" + entry[1])\n return _write_kegg(\"PATHWAY\", [_wrap_kegg(l, wrap_rule=id_wrap(16)) for l in s])\n\n def _enzyme(self):\n return _write_kegg(\n \"ENZYME\", [_wrap_kegg(l, wrap_rule=name_wrap) for l in self.enzyme]\n )\n\n def _structures(self):\n s = []\n for entry in self.structures:\n s.append(entry[0] + \": \" + \" \".join(entry[1]) + \" \")\n return _write_kegg(\n \"STRUCTURES\", [_wrap_kegg(l, wrap_rule=struct_wrap(5)) for l in s]\n )\n\n def _dblinks(self):\n s = []\n for entry in self.dblinks:\n s.append(entry[0] + \": \" + \" \".join(entry[1]))\n return _write_kegg(\"DBLINKS\", [_wrap_kegg(l, wrap_rule=id_wrap(9)) for l in s])\n\n\ndef parse(handle):\n \"\"\"Parse a KEGG Ligan/Compound file, returning Record objects.\n\n This is an iterator function, typically used in a for loop. For\n example, using one of the example KEGG files in the Biopython\n test suite,\n\n >>> with open(\"KEGG/compound.sample\") as handle:\n ... for record in parse(handle):\n ... print(\"%s %s\" % (record.entry, record.name[0]))\n ...\n C00023 Iron\n C00017 Protein\n C00099 beta-Alanine\n C00294 Inosine\n C00298 Trypsin\n C00348 all-trans-Undecaprenyl phosphate\n C00349 2-Methyl-3-oxopropanoate\n C01386 NH2Mec\n\n \"\"\"\n record = Record()\n for line in handle:\n if line[:3] == \"///\":\n yield record\n record = Record()\n continue\n if line[:12] != \" \":\n keyword = line[:12]\n data = line[12:].strip()\n if keyword == \"ENTRY \":\n words = data.split()\n record.entry = words[0]\n elif keyword == \"NAME \":\n data = data.strip(\";\")\n record.name.append(data)\n elif keyword == \"ENZYME \":\n while data:\n column = data[:16]\n data = data[16:]\n enzyme = column.strip()\n record.enzyme.append(enzyme)\n elif keyword == \"PATHWAY \":\n map, name = data.split(\" \")\n pathway = (\"PATH\", map, name)\n record.pathway.append(pathway)\n elif keyword == \"FORMULA \":\n record.formula = data\n elif keyword == \"MASS \":\n record.mass = data\n elif keyword == \"DBLINKS \":\n if \":\" in data:\n key, values = data.split(\":\")\n values = values.split()\n row = (key, values)\n record.dblinks.append(row)\n else:\n row = record.dblinks[-1]\n key, values = row\n values.extend(data.split())\n row = key, values\n record.dblinks[-1] = row\n\n\nif __name__ == \"__main__\":\n from Bio._utils import run_doctest\n\n run_doctest()\n", "path": "Bio/KEGG/Compound/__init__.py"}], "after_files": [{"content": "# Copyright 2001 by Tarjei Mikkelsen. All rights reserved.\n# Copyright 2007 by Michiel de Hoon. 
All rights reserved.\n#\n# This file is part of the Biopython distribution and governed by your\n# choice of the \"Biopython License Agreement\" or the \"BSD 3-Clause License\".\n# Please see the LICENSE file that should have been included as part of this\n# package.\n\n\"\"\"Code to work with the KEGG Ligand/Compound database.\n\nFunctions:\n - parse - Returns an iterator giving Record objects.\n\nClasses:\n - Record - A representation of a KEGG Ligand/Compound.\n\"\"\"\n\n\nfrom Bio.KEGG import _default_wrap, _struct_wrap, _wrap_kegg, _write_kegg\n\n\n# Set up line wrapping rules (see Bio.KEGG._wrap_kegg)\nname_wrap = [0, \"\", (\" \", \"$\", 1, 1), (\"-\", \"$\", 1, 1)]\nid_wrap = _default_wrap\nstruct_wrap = _struct_wrap\n\n\nclass Record:\n \"\"\"Holds info from a KEGG Ligand/Compound record.\n\n Attributes:\n - entry The entry identifier.\n - name A list of the compound names.\n - formula The chemical formula for the compound\n - mass The molecular weight for the compound\n - pathway A list of 3-tuples: ('PATH', pathway id, pathway)\n - enzyme A list of the EC numbers.\n - structures A list of 2-tuples: (database, list of struct ids)\n - dblinks A list of 2-tuples: (database, list of link ids)\n\n \"\"\"\n\n def __init__(self):\n \"\"\"Initialize as new record.\"\"\"\n self.entry = \"\"\n self.name = []\n self.formula = \"\"\n self.mass = \"\"\n self.pathway = []\n self.enzyme = []\n self.structures = []\n self.dblinks = []\n\n def __str__(self):\n \"\"\"Return a string representation of this Record.\"\"\"\n return (\n self._entry()\n + self._name()\n + self._formula()\n + self._mass()\n + self._pathway()\n + self._enzyme()\n + self._structures()\n + self._dblinks()\n + \"///\"\n )\n\n def _entry(self):\n return _write_kegg(\"ENTRY\", [self.entry])\n\n def _name(self):\n return _write_kegg(\n \"NAME\", [_wrap_kegg(l, wrap_rule=name_wrap) for l in self.name]\n )\n\n def _formula(self):\n return _write_kegg(\"FORMULA\", [self.formula])\n\n def _mass(self):\n return _write_kegg(\"MASS\", [self.mass])\n\n def _pathway(self):\n s = []\n for entry in self.pathway:\n s.append(entry[0] + \" \" + entry[1])\n return _write_kegg(\"PATHWAY\", [_wrap_kegg(l, wrap_rule=id_wrap(16)) for l in s])\n\n def _enzyme(self):\n return _write_kegg(\n \"ENZYME\", [_wrap_kegg(l, wrap_rule=name_wrap) for l in self.enzyme]\n )\n\n def _structures(self):\n s = []\n for entry in self.structures:\n s.append(entry[0] + \": \" + \" \".join(entry[1]) + \" \")\n return _write_kegg(\n \"STRUCTURES\", [_wrap_kegg(l, wrap_rule=struct_wrap(5)) for l in s]\n )\n\n def _dblinks(self):\n s = []\n for entry in self.dblinks:\n s.append(entry[0] + \": \" + \" \".join(entry[1]))\n return _write_kegg(\"DBLINKS\", [_wrap_kegg(l, wrap_rule=id_wrap(9)) for l in s])\n\n\ndef parse(handle):\n \"\"\"Parse a KEGG Ligan/Compound file, returning Record objects.\n\n This is an iterator function, typically used in a for loop. For\n example, using one of the example KEGG files in the Biopython\n test suite,\n\n >>> with open(\"KEGG/compound.sample\") as handle:\n ... for record in parse(handle):\n ... 
print(\"%s %s\" % (record.entry, record.name[0]))\n ...\n C00023 Iron\n C00017 Protein\n C00099 beta-Alanine\n C00294 Inosine\n C00298 Trypsin\n C00348 all-trans-Undecaprenyl phosphate\n C00349 2-Methyl-3-oxopropanoate\n C01386 NH2Mec\n\n \"\"\"\n record = Record()\n for line in handle:\n if line[:3] == \"///\":\n yield record\n record = Record()\n continue\n if line[:12] != \" \":\n keyword = line[:12]\n data = line[12:].strip()\n if keyword == \"ENTRY \":\n words = data.split()\n record.entry = words[0]\n elif keyword == \"NAME \":\n data = data.strip(\";\")\n record.name.append(data)\n elif keyword == \"ENZYME \":\n while data:\n column = data[:16]\n data = data[16:]\n enzyme = column.strip()\n record.enzyme.append(enzyme)\n elif keyword == \"PATHWAY \":\n map, name = data.split(\" \")\n pathway = (\"PATH\", map, name)\n record.pathway.append(pathway)\n elif keyword == \"FORMULA \":\n record.formula = data\n elif keyword in (\"MASS \", \"EXACT_MASS \"):\n record.mass = data\n elif keyword == \"DBLINKS \":\n if \":\" in data:\n key, values = data.split(\":\")\n values = values.split()\n row = (key, values)\n record.dblinks.append(row)\n else:\n row = record.dblinks[-1]\n key, values = row\n values.extend(data.split())\n row = key, values\n record.dblinks[-1] = row\n\n\nif __name__ == \"__main__\":\n from Bio._utils import run_doctest\n\n run_doctest()\n", "path": "Bio/KEGG/Compound/__init__.py"}]}
| 2,355 | 147 |
gh_patches_debug_47663
|
rasdani/github-patches
|
git_diff
|
python-discord__bot-875
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tag cog crashes bot on systems with non utf8 default encoding.
The tag cog leaves out the encoding when opening files, assuming the default is UTF-8, but that is not the case on some OSs, so it fails with a `UnicodeDecodeError`.
The offending block of code can be found here:
https://github.com/python-discord/bot/blob/7571cabe65e39d231523e713923cd23b927225bc/bot/cogs/tags.py#L43-L48
paging @kmonteith25 here as they mentioned the issue in #dev-contrib
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/cogs/tags.py`
Content:
```
1 import logging
2 import re
3 import time
4 from pathlib import Path
5 from typing import Callable, Dict, Iterable, List, Optional
6
7 from discord import Colour, Embed
8 from discord.ext.commands import Cog, Context, group
9
10 from bot import constants
11 from bot.bot import Bot
12 from bot.converters import TagNameConverter
13 from bot.pagination import LinePaginator
14 from bot.utils.messages import wait_for_deletion
15
16 log = logging.getLogger(__name__)
17
18 TEST_CHANNELS = (
19 constants.Channels.bot_commands,
20 constants.Channels.helpers
21 )
22
23 REGEX_NON_ALPHABET = re.compile(r"[^a-z]", re.MULTILINE & re.IGNORECASE)
24 FOOTER_TEXT = f"To show a tag, type {constants.Bot.prefix}tags <tagname>."
25
26
27 class Tags(Cog):
28 """Save new tags and fetch existing tags."""
29
30 def __init__(self, bot: Bot):
31 self.bot = bot
32 self.tag_cooldowns = {}
33 self._cache = self.get_tags()
34
35 @staticmethod
36 def get_tags() -> dict:
37 """Get all tags."""
38 # Save all tags in memory.
39 cache = {}
40 tag_files = Path("bot", "resources", "tags").iterdir()
41 for file in tag_files:
42 tag_title = file.stem
43 tag = {
44 "title": tag_title,
45 "embed": {
46 "description": file.read_text()
47 }
48 }
49 cache[tag_title] = tag
50 return cache
51
52 @staticmethod
53 def _fuzzy_search(search: str, target: str) -> float:
54 """A simple scoring algorithm based on how many letters are found / total, with order in mind."""
55 current, index = 0, 0
56 _search = REGEX_NON_ALPHABET.sub('', search.lower())
57 _targets = iter(REGEX_NON_ALPHABET.split(target.lower()))
58 _target = next(_targets)
59 try:
60 while True:
61 while index < len(_target) and _search[current] == _target[index]:
62 current += 1
63 index += 1
64 index, _target = 0, next(_targets)
65 except (StopIteration, IndexError):
66 pass
67 return current / len(_search) * 100
68
69 def _get_suggestions(self, tag_name: str, thresholds: Optional[List[int]] = None) -> List[str]:
70 """Return a list of suggested tags."""
71 scores: Dict[str, int] = {
72 tag_title: Tags._fuzzy_search(tag_name, tag['title'])
73 for tag_title, tag in self._cache.items()
74 }
75
76 thresholds = thresholds or [100, 90, 80, 70, 60]
77
78 for threshold in thresholds:
79 suggestions = [
80 self._cache[tag_title]
81 for tag_title, matching_score in scores.items()
82 if matching_score >= threshold
83 ]
84 if suggestions:
85 return suggestions
86
87 return []
88
89 def _get_tag(self, tag_name: str) -> list:
90 """Get a specific tag."""
91 found = [self._cache.get(tag_name.lower(), None)]
92 if not found[0]:
93 return self._get_suggestions(tag_name)
94 return found
95
96 def _get_tags_via_content(self, check: Callable[[Iterable], bool], keywords: str) -> list:
97 """
98 Search for tags via contents.
99
100 `predicate` will be the built-in any, all, or a custom callable. Must return a bool.
101 """
102 keywords_processed: List[str] = []
103 for keyword in keywords.split(','):
104 keyword_sanitized = keyword.strip().casefold()
105 if not keyword_sanitized:
106 # this happens when there are leading / trailing / consecutive comma.
107 continue
108 keywords_processed.append(keyword_sanitized)
109
110 if not keywords_processed:
111 # after sanitizing, we can end up with an empty list, for example when keywords is ','
112 # in that case, we simply want to search for such keywords directly instead.
113 keywords_processed = [keywords]
114
115 matching_tags = []
116 for tag in self._cache.values():
117 if check(query in tag['embed']['description'].casefold() for query in keywords_processed):
118 matching_tags.append(tag)
119
120 return matching_tags
121
122 async def _send_matching_tags(self, ctx: Context, keywords: str, matching_tags: list) -> None:
123 """Send the result of matching tags to user."""
124 if not matching_tags:
125 pass
126 elif len(matching_tags) == 1:
127 await ctx.send(embed=Embed().from_dict(matching_tags[0]['embed']))
128 else:
129 is_plural = keywords.strip().count(' ') > 0 or keywords.strip().count(',') > 0
130 embed = Embed(
131 title=f"Here are the tags containing the given keyword{'s' * is_plural}:",
132 description='\n'.join(tag['title'] for tag in matching_tags[:10])
133 )
134 await LinePaginator.paginate(
135 sorted(f"**»** {tag['title']}" for tag in matching_tags),
136 ctx,
137 embed,
138 footer_text=FOOTER_TEXT,
139 empty=False,
140 max_lines=15
141 )
142
143 @group(name='tags', aliases=('tag', 't'), invoke_without_command=True)
144 async def tags_group(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:
145 """Show all known tags, a single tag, or run a subcommand."""
146 await ctx.invoke(self.get_command, tag_name=tag_name)
147
148 @tags_group.group(name='search', invoke_without_command=True)
149 async def search_tag_content(self, ctx: Context, *, keywords: str) -> None:
150 """
151 Search inside tags' contents for tags. Allow searching for multiple keywords separated by comma.
152
153 Only search for tags that has ALL the keywords.
154 """
155 matching_tags = self._get_tags_via_content(all, keywords)
156 await self._send_matching_tags(ctx, keywords, matching_tags)
157
158 @search_tag_content.command(name='any')
159 async def search_tag_content_any_keyword(self, ctx: Context, *, keywords: Optional[str] = 'any') -> None:
160 """
161 Search inside tags' contents for tags. Allow searching for multiple keywords separated by comma.
162
163 Search for tags that has ANY of the keywords.
164 """
165 matching_tags = self._get_tags_via_content(any, keywords or 'any')
166 await self._send_matching_tags(ctx, keywords, matching_tags)
167
168 @tags_group.command(name='get', aliases=('show', 'g'))
169 async def get_command(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:
170 """Get a specified tag, or a list of all tags if no tag is specified."""
171
172 def _command_on_cooldown(tag_name: str) -> bool:
173 """
174 Check if the command is currently on cooldown, on a per-tag, per-channel basis.
175
176 The cooldown duration is set in constants.py.
177 """
178 now = time.time()
179
180 cooldown_conditions = (
181 tag_name
182 and tag_name in self.tag_cooldowns
183 and (now - self.tag_cooldowns[tag_name]["time"]) < constants.Cooldowns.tags
184 and self.tag_cooldowns[tag_name]["channel"] == ctx.channel.id
185 )
186
187 if cooldown_conditions:
188 return True
189 return False
190
191 if _command_on_cooldown(tag_name):
192 time_elapsed = time.time() - self.tag_cooldowns[tag_name]["time"]
193 time_left = constants.Cooldowns.tags - time_elapsed
194 log.info(
195 f"{ctx.author} tried to get the '{tag_name}' tag, but the tag is on cooldown. "
196 f"Cooldown ends in {time_left:.1f} seconds."
197 )
198 return
199
200 if tag_name is not None:
201 founds = self._get_tag(tag_name)
202
203 if len(founds) == 1:
204 tag = founds[0]
205 if ctx.channel.id not in TEST_CHANNELS:
206 self.tag_cooldowns[tag_name] = {
207 "time": time.time(),
208 "channel": ctx.channel.id
209 }
210
211 self.bot.stats.incr(f"tags.usages.{tag['title'].replace('-', '_')}")
212
213 await wait_for_deletion(
214 await ctx.send(embed=Embed.from_dict(tag['embed'])),
215 [ctx.author.id],
216 client=self.bot
217 )
218 elif founds and len(tag_name) >= 3:
219 await wait_for_deletion(
220 await ctx.send(
221 embed=Embed(
222 title='Did you mean ...',
223 description='\n'.join(tag['title'] for tag in founds[:10])
224 )
225 ),
226 [ctx.author.id],
227 client=self.bot
228 )
229
230 else:
231 tags = self._cache.values()
232 if not tags:
233 await ctx.send(embed=Embed(
234 description="**There are no tags in the database!**",
235 colour=Colour.red()
236 ))
237 else:
238 embed: Embed = Embed(title="**Current tags**")
239 await LinePaginator.paginate(
240 sorted(f"**»** {tag['title']}" for tag in tags),
241 ctx,
242 embed,
243 footer_text=FOOTER_TEXT,
244 empty=False,
245 max_lines=15
246 )
247
248
249 def setup(bot: Bot) -> None:
250 """Load the Tags cog."""
251 bot.add_cog(Tags(bot))
252
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bot/cogs/tags.py b/bot/cogs/tags.py
--- a/bot/cogs/tags.py
+++ b/bot/cogs/tags.py
@@ -43,7 +43,7 @@
tag = {
"title": tag_title,
"embed": {
- "description": file.read_text()
+ "description": file.read_text(encoding="utf-8")
}
}
cache[tag_title] = tag
|
{"golden_diff": "diff --git a/bot/cogs/tags.py b/bot/cogs/tags.py\n--- a/bot/cogs/tags.py\n+++ b/bot/cogs/tags.py\n@@ -43,7 +43,7 @@\n tag = {\n \"title\": tag_title,\n \"embed\": {\n- \"description\": file.read_text()\n+ \"description\": file.read_text(encoding=\"utf-8\")\n }\n }\n cache[tag_title] = tag\n", "issue": "Tag cog crashes bot on systems with non utf8 default encoding.\nThe tag cog leaves out encoding when opening files, assuming the default is UTF8 but that is not the case on some OSs and fails with the `UnicodeDecodeError`.\r\n\r\nThe offending block of code can be found here:\r\nhttps://github.com/python-discord/bot/blob/7571cabe65e39d231523e713923cd23b927225bc/bot/cogs/tags.py#L43-L48\r\n\r\npaging @kmonteith25 here as they mentioned the issue in #dev-contrib\n", "before_files": [{"content": "import logging\nimport re\nimport time\nfrom pathlib import Path\nfrom typing import Callable, Dict, Iterable, List, Optional\n\nfrom discord import Colour, Embed\nfrom discord.ext.commands import Cog, Context, group\n\nfrom bot import constants\nfrom bot.bot import Bot\nfrom bot.converters import TagNameConverter\nfrom bot.pagination import LinePaginator\nfrom bot.utils.messages import wait_for_deletion\n\nlog = logging.getLogger(__name__)\n\nTEST_CHANNELS = (\n constants.Channels.bot_commands,\n constants.Channels.helpers\n)\n\nREGEX_NON_ALPHABET = re.compile(r\"[^a-z]\", re.MULTILINE & re.IGNORECASE)\nFOOTER_TEXT = f\"To show a tag, type {constants.Bot.prefix}tags <tagname>.\"\n\n\nclass Tags(Cog):\n \"\"\"Save new tags and fetch existing tags.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.tag_cooldowns = {}\n self._cache = self.get_tags()\n\n @staticmethod\n def get_tags() -> dict:\n \"\"\"Get all tags.\"\"\"\n # Save all tags in memory.\n cache = {}\n tag_files = Path(\"bot\", \"resources\", \"tags\").iterdir()\n for file in tag_files:\n tag_title = file.stem\n tag = {\n \"title\": tag_title,\n \"embed\": {\n \"description\": file.read_text()\n }\n }\n cache[tag_title] = tag\n return cache\n\n @staticmethod\n def _fuzzy_search(search: str, target: str) -> float:\n \"\"\"A simple scoring algorithm based on how many letters are found / total, with order in mind.\"\"\"\n current, index = 0, 0\n _search = REGEX_NON_ALPHABET.sub('', search.lower())\n _targets = iter(REGEX_NON_ALPHABET.split(target.lower()))\n _target = next(_targets)\n try:\n while True:\n while index < len(_target) and _search[current] == _target[index]:\n current += 1\n index += 1\n index, _target = 0, next(_targets)\n except (StopIteration, IndexError):\n pass\n return current / len(_search) * 100\n\n def _get_suggestions(self, tag_name: str, thresholds: Optional[List[int]] = None) -> List[str]:\n \"\"\"Return a list of suggested tags.\"\"\"\n scores: Dict[str, int] = {\n tag_title: Tags._fuzzy_search(tag_name, tag['title'])\n for tag_title, tag in self._cache.items()\n }\n\n thresholds = thresholds or [100, 90, 80, 70, 60]\n\n for threshold in thresholds:\n suggestions = [\n self._cache[tag_title]\n for tag_title, matching_score in scores.items()\n if matching_score >= threshold\n ]\n if suggestions:\n return suggestions\n\n return []\n\n def _get_tag(self, tag_name: str) -> list:\n \"\"\"Get a specific tag.\"\"\"\n found = [self._cache.get(tag_name.lower(), None)]\n if not found[0]:\n return self._get_suggestions(tag_name)\n return found\n\n def _get_tags_via_content(self, check: Callable[[Iterable], bool], keywords: str) -> list:\n \"\"\"\n Search for tags via contents.\n\n `predicate` will be 
the built-in any, all, or a custom callable. Must return a bool.\n \"\"\"\n keywords_processed: List[str] = []\n for keyword in keywords.split(','):\n keyword_sanitized = keyword.strip().casefold()\n if not keyword_sanitized:\n # this happens when there are leading / trailing / consecutive comma.\n continue\n keywords_processed.append(keyword_sanitized)\n\n if not keywords_processed:\n # after sanitizing, we can end up with an empty list, for example when keywords is ','\n # in that case, we simply want to search for such keywords directly instead.\n keywords_processed = [keywords]\n\n matching_tags = []\n for tag in self._cache.values():\n if check(query in tag['embed']['description'].casefold() for query in keywords_processed):\n matching_tags.append(tag)\n\n return matching_tags\n\n async def _send_matching_tags(self, ctx: Context, keywords: str, matching_tags: list) -> None:\n \"\"\"Send the result of matching tags to user.\"\"\"\n if not matching_tags:\n pass\n elif len(matching_tags) == 1:\n await ctx.send(embed=Embed().from_dict(matching_tags[0]['embed']))\n else:\n is_plural = keywords.strip().count(' ') > 0 or keywords.strip().count(',') > 0\n embed = Embed(\n title=f\"Here are the tags containing the given keyword{'s' * is_plural}:\",\n description='\\n'.join(tag['title'] for tag in matching_tags[:10])\n )\n await LinePaginator.paginate(\n sorted(f\"**\u00bb** {tag['title']}\" for tag in matching_tags),\n ctx,\n embed,\n footer_text=FOOTER_TEXT,\n empty=False,\n max_lines=15\n )\n\n @group(name='tags', aliases=('tag', 't'), invoke_without_command=True)\n async def tags_group(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:\n \"\"\"Show all known tags, a single tag, or run a subcommand.\"\"\"\n await ctx.invoke(self.get_command, tag_name=tag_name)\n\n @tags_group.group(name='search', invoke_without_command=True)\n async def search_tag_content(self, ctx: Context, *, keywords: str) -> None:\n \"\"\"\n Search inside tags' contents for tags. Allow searching for multiple keywords separated by comma.\n\n Only search for tags that has ALL the keywords.\n \"\"\"\n matching_tags = self._get_tags_via_content(all, keywords)\n await self._send_matching_tags(ctx, keywords, matching_tags)\n\n @search_tag_content.command(name='any')\n async def search_tag_content_any_keyword(self, ctx: Context, *, keywords: Optional[str] = 'any') -> None:\n \"\"\"\n Search inside tags' contents for tags. 
Allow searching for multiple keywords separated by comma.\n\n Search for tags that has ANY of the keywords.\n \"\"\"\n matching_tags = self._get_tags_via_content(any, keywords or 'any')\n await self._send_matching_tags(ctx, keywords, matching_tags)\n\n @tags_group.command(name='get', aliases=('show', 'g'))\n async def get_command(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:\n \"\"\"Get a specified tag, or a list of all tags if no tag is specified.\"\"\"\n\n def _command_on_cooldown(tag_name: str) -> bool:\n \"\"\"\n Check if the command is currently on cooldown, on a per-tag, per-channel basis.\n\n The cooldown duration is set in constants.py.\n \"\"\"\n now = time.time()\n\n cooldown_conditions = (\n tag_name\n and tag_name in self.tag_cooldowns\n and (now - self.tag_cooldowns[tag_name][\"time\"]) < constants.Cooldowns.tags\n and self.tag_cooldowns[tag_name][\"channel\"] == ctx.channel.id\n )\n\n if cooldown_conditions:\n return True\n return False\n\n if _command_on_cooldown(tag_name):\n time_elapsed = time.time() - self.tag_cooldowns[tag_name][\"time\"]\n time_left = constants.Cooldowns.tags - time_elapsed\n log.info(\n f\"{ctx.author} tried to get the '{tag_name}' tag, but the tag is on cooldown. \"\n f\"Cooldown ends in {time_left:.1f} seconds.\"\n )\n return\n\n if tag_name is not None:\n founds = self._get_tag(tag_name)\n\n if len(founds) == 1:\n tag = founds[0]\n if ctx.channel.id not in TEST_CHANNELS:\n self.tag_cooldowns[tag_name] = {\n \"time\": time.time(),\n \"channel\": ctx.channel.id\n }\n\n self.bot.stats.incr(f\"tags.usages.{tag['title'].replace('-', '_')}\")\n\n await wait_for_deletion(\n await ctx.send(embed=Embed.from_dict(tag['embed'])),\n [ctx.author.id],\n client=self.bot\n )\n elif founds and len(tag_name) >= 3:\n await wait_for_deletion(\n await ctx.send(\n embed=Embed(\n title='Did you mean ...',\n description='\\n'.join(tag['title'] for tag in founds[:10])\n )\n ),\n [ctx.author.id],\n client=self.bot\n )\n\n else:\n tags = self._cache.values()\n if not tags:\n await ctx.send(embed=Embed(\n description=\"**There are no tags in the database!**\",\n colour=Colour.red()\n ))\n else:\n embed: Embed = Embed(title=\"**Current tags**\")\n await LinePaginator.paginate(\n sorted(f\"**\u00bb** {tag['title']}\" for tag in tags),\n ctx,\n embed,\n footer_text=FOOTER_TEXT,\n empty=False,\n max_lines=15\n )\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Tags cog.\"\"\"\n bot.add_cog(Tags(bot))\n", "path": "bot/cogs/tags.py"}], "after_files": [{"content": "import logging\nimport re\nimport time\nfrom pathlib import Path\nfrom typing import Callable, Dict, Iterable, List, Optional\n\nfrom discord import Colour, Embed\nfrom discord.ext.commands import Cog, Context, group\n\nfrom bot import constants\nfrom bot.bot import Bot\nfrom bot.converters import TagNameConverter\nfrom bot.pagination import LinePaginator\nfrom bot.utils.messages import wait_for_deletion\n\nlog = logging.getLogger(__name__)\n\nTEST_CHANNELS = (\n constants.Channels.bot_commands,\n constants.Channels.helpers\n)\n\nREGEX_NON_ALPHABET = re.compile(r\"[^a-z]\", re.MULTILINE & re.IGNORECASE)\nFOOTER_TEXT = f\"To show a tag, type {constants.Bot.prefix}tags <tagname>.\"\n\n\nclass Tags(Cog):\n \"\"\"Save new tags and fetch existing tags.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.tag_cooldowns = {}\n self._cache = self.get_tags()\n\n @staticmethod\n def get_tags() -> dict:\n \"\"\"Get all tags.\"\"\"\n # Save all tags in memory.\n cache = {}\n tag_files = Path(\"bot\", 
\"resources\", \"tags\").iterdir()\n for file in tag_files:\n tag_title = file.stem\n tag = {\n \"title\": tag_title,\n \"embed\": {\n \"description\": file.read_text(encoding=\"utf-8\")\n }\n }\n cache[tag_title] = tag\n return cache\n\n @staticmethod\n def _fuzzy_search(search: str, target: str) -> float:\n \"\"\"A simple scoring algorithm based on how many letters are found / total, with order in mind.\"\"\"\n current, index = 0, 0\n _search = REGEX_NON_ALPHABET.sub('', search.lower())\n _targets = iter(REGEX_NON_ALPHABET.split(target.lower()))\n _target = next(_targets)\n try:\n while True:\n while index < len(_target) and _search[current] == _target[index]:\n current += 1\n index += 1\n index, _target = 0, next(_targets)\n except (StopIteration, IndexError):\n pass\n return current / len(_search) * 100\n\n def _get_suggestions(self, tag_name: str, thresholds: Optional[List[int]] = None) -> List[str]:\n \"\"\"Return a list of suggested tags.\"\"\"\n scores: Dict[str, int] = {\n tag_title: Tags._fuzzy_search(tag_name, tag['title'])\n for tag_title, tag in self._cache.items()\n }\n\n thresholds = thresholds or [100, 90, 80, 70, 60]\n\n for threshold in thresholds:\n suggestions = [\n self._cache[tag_title]\n for tag_title, matching_score in scores.items()\n if matching_score >= threshold\n ]\n if suggestions:\n return suggestions\n\n return []\n\n def _get_tag(self, tag_name: str) -> list:\n \"\"\"Get a specific tag.\"\"\"\n found = [self._cache.get(tag_name.lower(), None)]\n if not found[0]:\n return self._get_suggestions(tag_name)\n return found\n\n def _get_tags_via_content(self, check: Callable[[Iterable], bool], keywords: str) -> list:\n \"\"\"\n Search for tags via contents.\n\n `predicate` will be the built-in any, all, or a custom callable. 
Must return a bool.\n \"\"\"\n keywords_processed: List[str] = []\n for keyword in keywords.split(','):\n keyword_sanitized = keyword.strip().casefold()\n if not keyword_sanitized:\n # this happens when there are leading / trailing / consecutive comma.\n continue\n keywords_processed.append(keyword_sanitized)\n\n if not keywords_processed:\n # after sanitizing, we can end up with an empty list, for example when keywords is ','\n # in that case, we simply want to search for such keywords directly instead.\n keywords_processed = [keywords]\n\n matching_tags = []\n for tag in self._cache.values():\n if check(query in tag['embed']['description'].casefold() for query in keywords_processed):\n matching_tags.append(tag)\n\n return matching_tags\n\n async def _send_matching_tags(self, ctx: Context, keywords: str, matching_tags: list) -> None:\n \"\"\"Send the result of matching tags to user.\"\"\"\n if not matching_tags:\n pass\n elif len(matching_tags) == 1:\n await ctx.send(embed=Embed().from_dict(matching_tags[0]['embed']))\n else:\n is_plural = keywords.strip().count(' ') > 0 or keywords.strip().count(',') > 0\n embed = Embed(\n title=f\"Here are the tags containing the given keyword{'s' * is_plural}:\",\n description='\\n'.join(tag['title'] for tag in matching_tags[:10])\n )\n await LinePaginator.paginate(\n sorted(f\"**\u00bb** {tag['title']}\" for tag in matching_tags),\n ctx,\n embed,\n footer_text=FOOTER_TEXT,\n empty=False,\n max_lines=15\n )\n\n @group(name='tags', aliases=('tag', 't'), invoke_without_command=True)\n async def tags_group(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:\n \"\"\"Show all known tags, a single tag, or run a subcommand.\"\"\"\n await ctx.invoke(self.get_command, tag_name=tag_name)\n\n @tags_group.group(name='search', invoke_without_command=True)\n async def search_tag_content(self, ctx: Context, *, keywords: str) -> None:\n \"\"\"\n Search inside tags' contents for tags. Allow searching for multiple keywords separated by comma.\n\n Only search for tags that has ALL the keywords.\n \"\"\"\n matching_tags = self._get_tags_via_content(all, keywords)\n await self._send_matching_tags(ctx, keywords, matching_tags)\n\n @search_tag_content.command(name='any')\n async def search_tag_content_any_keyword(self, ctx: Context, *, keywords: Optional[str] = 'any') -> None:\n \"\"\"\n Search inside tags' contents for tags. 
Allow searching for multiple keywords separated by comma.\n\n Search for tags that has ANY of the keywords.\n \"\"\"\n matching_tags = self._get_tags_via_content(any, keywords or 'any')\n await self._send_matching_tags(ctx, keywords, matching_tags)\n\n @tags_group.command(name='get', aliases=('show', 'g'))\n async def get_command(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:\n \"\"\"Get a specified tag, or a list of all tags if no tag is specified.\"\"\"\n\n def _command_on_cooldown(tag_name: str) -> bool:\n \"\"\"\n Check if the command is currently on cooldown, on a per-tag, per-channel basis.\n\n The cooldown duration is set in constants.py.\n \"\"\"\n now = time.time()\n\n cooldown_conditions = (\n tag_name\n and tag_name in self.tag_cooldowns\n and (now - self.tag_cooldowns[tag_name][\"time\"]) < constants.Cooldowns.tags\n and self.tag_cooldowns[tag_name][\"channel\"] == ctx.channel.id\n )\n\n if cooldown_conditions:\n return True\n return False\n\n if _command_on_cooldown(tag_name):\n time_elapsed = time.time() - self.tag_cooldowns[tag_name][\"time\"]\n time_left = constants.Cooldowns.tags - time_elapsed\n log.info(\n f\"{ctx.author} tried to get the '{tag_name}' tag, but the tag is on cooldown. \"\n f\"Cooldown ends in {time_left:.1f} seconds.\"\n )\n return\n\n if tag_name is not None:\n founds = self._get_tag(tag_name)\n\n if len(founds) == 1:\n tag = founds[0]\n if ctx.channel.id not in TEST_CHANNELS:\n self.tag_cooldowns[tag_name] = {\n \"time\": time.time(),\n \"channel\": ctx.channel.id\n }\n\n self.bot.stats.incr(f\"tags.usages.{tag['title'].replace('-', '_')}\")\n\n await wait_for_deletion(\n await ctx.send(embed=Embed.from_dict(tag['embed'])),\n [ctx.author.id],\n client=self.bot\n )\n elif founds and len(tag_name) >= 3:\n await wait_for_deletion(\n await ctx.send(\n embed=Embed(\n title='Did you mean ...',\n description='\\n'.join(tag['title'] for tag in founds[:10])\n )\n ),\n [ctx.author.id],\n client=self.bot\n )\n\n else:\n tags = self._cache.values()\n if not tags:\n await ctx.send(embed=Embed(\n description=\"**There are no tags in the database!**\",\n colour=Colour.red()\n ))\n else:\n embed: Embed = Embed(title=\"**Current tags**\")\n await LinePaginator.paginate(\n sorted(f\"**\u00bb** {tag['title']}\" for tag in tags),\n ctx,\n embed,\n footer_text=FOOTER_TEXT,\n empty=False,\n max_lines=15\n )\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Tags cog.\"\"\"\n bot.add_cog(Tags(bot))\n", "path": "bot/cogs/tags.py"}]}
| 3,070 | 99 |
gh_patches_debug_27324
|
rasdani/github-patches
|
git_diff
|
pretalx__pretalx-217
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
When redirecting to login view, urlquote path
Paths need to be urlquoted and GET params need to be passed as well.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pretalx/common/middleware.py`
Content:
```
1 from contextlib import suppress
2
3 import pytz
4 from django.conf import settings
5 from django.core.exceptions import PermissionDenied
6 from django.db.models import Q
7 from django.shortcuts import redirect, reverse
8 from django.urls import resolve
9 from django.utils import timezone, translation
10 from django.utils.translation.trans_real import (
11 get_supported_language_variant, language_code_re, parse_accept_lang_header,
12 )
13
14 from pretalx.event.models import Event
15 from pretalx.person.models import EventPermission
16
17
18 class EventPermissionMiddleware:
19 UNAUTHENTICATED_ORGA_URLS = (
20 'invitation.view',
21 'login',
22 )
23 REVIEWER_URLS = (
24 'submissions.list',
25 'submissions.content.view',
26 'submissions.questions.view'
27 )
28
29 def __init__(self, get_response):
30 self.get_response = get_response
31
32 def _set_orga_events(self, request):
33 if not request.user.is_anonymous:
34 if request.user.is_superuser:
35 request.orga_events = Event.objects.all()
36 else:
37 request.orga_events = Event.objects.filter(
38 Q(permissions__is_orga=True) | Q(permissions__is_reviewer=True),
39 permissions__user=request.user,
40 )
41
42 def _is_reviewer_url(self, url):
43 if url.url_name.startswith('reviews'):
44 return True
45 if url.url_name.endswith('dashboard'):
46 return True
47 if url.url_name in self.REVIEWER_URLS:
48 return True
49 return False
50
51 def _handle_orga_url(self, request, url):
52 if request.user.is_anonymous and url.url_name not in self.UNAUTHENTICATED_ORGA_URLS:
53 return reverse('orga:login') + f'?next={request.path}'
54 if hasattr(request, 'event') and request.event:
55 if not (request.is_orga or request.is_reviewer):
56 raise PermissionDenied()
57 if (request.is_orga and not request.user.is_superuser) and url.url_name.startswith('reviews'):
58 raise PermissionDenied()
59 if (request.is_reviewer and not request.user.is_superuser) and not self._is_reviewer_url(url):
60 raise PermissionDenied()
61 elif hasattr(request, 'event') and not request.user.is_superuser:
62 raise PermissionDenied()
63 self._select_locale(request)
64
65 def __call__(self, request):
66 url = resolve(request.path_info)
67
68 event_slug = url.kwargs.get('event')
69 if event_slug:
70 try:
71 request.event = Event.objects.get(slug__iexact=event_slug)
72 except Event.DoesNotExist:
73 request.event = None
74
75 if hasattr(request, 'event') and request.event:
76 if not request.user.is_anonymous:
77 request.is_orga = request.user.is_superuser or EventPermission.objects.filter(
78 user=request.user,
79 event=request.event,
80 is_orga=True
81 ).exists()
82 request.is_reviewer = request.user.is_superuser or EventPermission.objects.filter(
83 user=request.user,
84 event=request.event,
85 is_reviewer=True
86 ).exists()
87 else:
88 request.is_orga = False
89 request.is_reviewer = False
90 timezone.activate(pytz.timezone(request.event.timezone))
91
92 self._set_orga_events(request)
93
94 if 'orga' in url.namespaces:
95 url = self._handle_orga_url(request, url)
96 if url:
97 return redirect(url)
98 return self.get_response(request)
99
100 def _select_locale(self, request):
101 supported = request.event.locales if (hasattr(request, 'event') and request.event) else settings.LANGUAGES
102 language = (
103 self._language_from_user(request, supported)
104 or self._language_from_cookie(request, supported)
105 or self._language_from_browser(request, supported)
106 )
107 if hasattr(request, 'event') and request.event:
108 language = language or request.event.locale
109
110 translation.activate(language)
111 request.LANGUAGE_CODE = translation.get_language()
112
113 with suppress(pytz.UnknownTimeZoneError):
114 if request.user.is_authenticated:
115 tzname = request.user.timezone
116 elif hasattr(request, 'event') and request.event:
117 tzname = request.event.timezone
118 else:
119 tzname = settings.TIME_ZONE
120 timezone.activate(pytz.timezone(tzname))
121 request.timezone = tzname
122
123 def _language_from_browser(self, request, supported):
124 accept_value = request.META.get('HTTP_ACCEPT_LANGUAGE', '')
125 for accept_lang, unused in parse_accept_lang_header(accept_value):
126 if accept_lang == '*':
127 break
128
129 if not language_code_re.search(accept_lang):
130 continue
131
132 try:
133 val = get_supported_language_variant(accept_lang)
134 if val and val in supported:
135 return val
136 except LookupError:
137 continue
138
139 def _language_from_cookie(self, request, supported):
140 cookie_value = request.COOKIES.get(settings.LANGUAGE_COOKIE_NAME)
141 with suppress(LookupError):
142 cookie_value = get_supported_language_variant(cookie_value)
143 if cookie_value and cookie_value in supported:
144 return cookie_value
145
146 def _language_from_user(self, request, supported):
147 if request.user.is_authenticated:
148 with suppress(LookupError):
149 value = get_supported_language_variant(request.user.locale)
150 if value and value in supported:
151 return value
152
```
Path: `src/pretalx/orga/views/auth.py`
Content:
```
1 import random
2 import urllib
3
4 from django.contrib import messages
5 from django.contrib.auth import authenticate, login, logout
6 from django.http import HttpRequest, HttpResponseRedirect
7 from django.shortcuts import redirect
8 from django.urls import reverse
9 from django.utils.http import is_safe_url
10 from django.utils.translation import ugettext as _
11 from django.views.generic import TemplateView
12
13
14 class LoginView(TemplateView):
15 template_name = 'orga/auth/login.html'
16
17 def post(self, request: HttpRequest, *args, **kwargs) -> HttpResponseRedirect:
18 username = request.POST.get('username')
19 password = request.POST.get('password')
20 user = authenticate(username=username, password=password)
21
22 if user is None:
23 messages.error(request, _('No user account matches the entered credentials.'))
24 return redirect('orga:login')
25
26 if not user.is_active:
27 messages.error(request, _('User account is deactivated.'))
28 return redirect('orga:login')
29
30 login(request, user)
31 url = urllib.parse.unquote(request.GET.get('next', ''))
32 if url and is_safe_url(url, request.get_host()):
33 return redirect(url)
34
35 # check where to reasonably redirect:
36 # orga of a running event? go to that event.
37 # speaker of a running event? go to that event.
38 # neither? go to (a) current cfp
39 # no current cfp? dummy page
40
41 messages.success(request, random.choice([
42 _('Hi, nice to see you!'),
43 _('Welcome!'),
44 _('I hope you are having a good day :)'),
45 _('Remember: organizing events is lots of work, but it pays off.'),
46 _('If you are waiting for feedback from your speakers, try sending a mail to a subset of them.'),
47 _('Remember to provide your speakers with all information they need ahead of time.'),
48 _('Even the busiest event organizers should make time to see at least one talk ;)'),
49 ]))
50 return redirect(reverse('orga:dashboard'))
51
52
53 def logout_view(request: HttpRequest) -> HttpResponseRedirect:
54 logout(request)
55 return redirect(reverse('orga:login'))
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pretalx/common/middleware.py b/src/pretalx/common/middleware.py
--- a/src/pretalx/common/middleware.py
+++ b/src/pretalx/common/middleware.py
@@ -1,3 +1,4 @@
+import urllib
from contextlib import suppress
import pytz
@@ -50,7 +51,8 @@
def _handle_orga_url(self, request, url):
if request.user.is_anonymous and url.url_name not in self.UNAUTHENTICATED_ORGA_URLS:
- return reverse('orga:login') + f'?next={request.path}'
+ params = '&' + request.GET.urlencode() if request.GET else ''
+ return reverse('orga:login') + f'?next={urllib.parse.quote(request.path)}' + params
if hasattr(request, 'event') and request.event:
if not (request.is_orga or request.is_reviewer):
raise PermissionDenied()
diff --git a/src/pretalx/orga/views/auth.py b/src/pretalx/orga/views/auth.py
--- a/src/pretalx/orga/views/auth.py
+++ b/src/pretalx/orga/views/auth.py
@@ -28,9 +28,10 @@
return redirect('orga:login')
login(request, user)
- url = urllib.parse.unquote(request.GET.get('next', ''))
+ params = request.GET.copy()
+ url = urllib.parse.unquote(params.pop('next', [''])[0])
if url and is_safe_url(url, request.get_host()):
- return redirect(url)
+ return redirect(url + ('?' + params.urlencode() if params else ''))
# check where to reasonably redirect:
# orga of a running event? go to that event.
|
{"golden_diff": "diff --git a/src/pretalx/common/middleware.py b/src/pretalx/common/middleware.py\n--- a/src/pretalx/common/middleware.py\n+++ b/src/pretalx/common/middleware.py\n@@ -1,3 +1,4 @@\n+import urllib\n from contextlib import suppress\n \n import pytz\n@@ -50,7 +51,8 @@\n \n def _handle_orga_url(self, request, url):\n if request.user.is_anonymous and url.url_name not in self.UNAUTHENTICATED_ORGA_URLS:\n- return reverse('orga:login') + f'?next={request.path}'\n+ params = '&' + request.GET.urlencode() if request.GET else ''\n+ return reverse('orga:login') + f'?next={urllib.parse.quote(request.path)}' + params\n if hasattr(request, 'event') and request.event:\n if not (request.is_orga or request.is_reviewer):\n raise PermissionDenied()\ndiff --git a/src/pretalx/orga/views/auth.py b/src/pretalx/orga/views/auth.py\n--- a/src/pretalx/orga/views/auth.py\n+++ b/src/pretalx/orga/views/auth.py\n@@ -28,9 +28,10 @@\n return redirect('orga:login')\n \n login(request, user)\n- url = urllib.parse.unquote(request.GET.get('next', ''))\n+ params = request.GET.copy()\n+ url = urllib.parse.unquote(params.pop('next', [''])[0])\n if url and is_safe_url(url, request.get_host()):\n- return redirect(url)\n+ return redirect(url + ('?' + params.urlencode() if params else ''))\n \n # check where to reasonably redirect:\n # orga of a running event? go to that event.\n", "issue": "When redirecting to login view, urlquote path\nPaths need to be urlquoted and get params need to be passed aswell.\n", "before_files": [{"content": "from contextlib import suppress\n\nimport pytz\nfrom django.conf import settings\nfrom django.core.exceptions import PermissionDenied\nfrom django.db.models import Q\nfrom django.shortcuts import redirect, reverse\nfrom django.urls import resolve\nfrom django.utils import timezone, translation\nfrom django.utils.translation.trans_real import (\n get_supported_language_variant, language_code_re, parse_accept_lang_header,\n)\n\nfrom pretalx.event.models import Event\nfrom pretalx.person.models import EventPermission\n\n\nclass EventPermissionMiddleware:\n UNAUTHENTICATED_ORGA_URLS = (\n 'invitation.view',\n 'login',\n )\n REVIEWER_URLS = (\n 'submissions.list',\n 'submissions.content.view',\n 'submissions.questions.view'\n )\n\n def __init__(self, get_response):\n self.get_response = get_response\n\n def _set_orga_events(self, request):\n if not request.user.is_anonymous:\n if request.user.is_superuser:\n request.orga_events = Event.objects.all()\n else:\n request.orga_events = Event.objects.filter(\n Q(permissions__is_orga=True) | Q(permissions__is_reviewer=True),\n permissions__user=request.user,\n )\n\n def _is_reviewer_url(self, url):\n if url.url_name.startswith('reviews'):\n return True\n if url.url_name.endswith('dashboard'):\n return True\n if url.url_name in self.REVIEWER_URLS:\n return True\n return False\n\n def _handle_orga_url(self, request, url):\n if request.user.is_anonymous and url.url_name not in self.UNAUTHENTICATED_ORGA_URLS:\n return reverse('orga:login') + f'?next={request.path}'\n if hasattr(request, 'event') and request.event:\n if not (request.is_orga or request.is_reviewer):\n raise PermissionDenied()\n if (request.is_orga and not request.user.is_superuser) and url.url_name.startswith('reviews'):\n raise PermissionDenied()\n if (request.is_reviewer and not request.user.is_superuser) and not self._is_reviewer_url(url):\n raise PermissionDenied()\n elif hasattr(request, 'event') and not request.user.is_superuser:\n raise PermissionDenied()\n 
self._select_locale(request)\n\n def __call__(self, request):\n url = resolve(request.path_info)\n\n event_slug = url.kwargs.get('event')\n if event_slug:\n try:\n request.event = Event.objects.get(slug__iexact=event_slug)\n except Event.DoesNotExist:\n request.event = None\n\n if hasattr(request, 'event') and request.event:\n if not request.user.is_anonymous:\n request.is_orga = request.user.is_superuser or EventPermission.objects.filter(\n user=request.user,\n event=request.event,\n is_orga=True\n ).exists()\n request.is_reviewer = request.user.is_superuser or EventPermission.objects.filter(\n user=request.user,\n event=request.event,\n is_reviewer=True\n ).exists()\n else:\n request.is_orga = False\n request.is_reviewer = False\n timezone.activate(pytz.timezone(request.event.timezone))\n\n self._set_orga_events(request)\n\n if 'orga' in url.namespaces:\n url = self._handle_orga_url(request, url)\n if url:\n return redirect(url)\n return self.get_response(request)\n\n def _select_locale(self, request):\n supported = request.event.locales if (hasattr(request, 'event') and request.event) else settings.LANGUAGES\n language = (\n self._language_from_user(request, supported)\n or self._language_from_cookie(request, supported)\n or self._language_from_browser(request, supported)\n )\n if hasattr(request, 'event') and request.event:\n language = language or request.event.locale\n\n translation.activate(language)\n request.LANGUAGE_CODE = translation.get_language()\n\n with suppress(pytz.UnknownTimeZoneError):\n if request.user.is_authenticated:\n tzname = request.user.timezone\n elif hasattr(request, 'event') and request.event:\n tzname = request.event.timezone\n else:\n tzname = settings.TIME_ZONE\n timezone.activate(pytz.timezone(tzname))\n request.timezone = tzname\n\n def _language_from_browser(self, request, supported):\n accept_value = request.META.get('HTTP_ACCEPT_LANGUAGE', '')\n for accept_lang, unused in parse_accept_lang_header(accept_value):\n if accept_lang == '*':\n break\n\n if not language_code_re.search(accept_lang):\n continue\n\n try:\n val = get_supported_language_variant(accept_lang)\n if val and val in supported:\n return val\n except LookupError:\n continue\n\n def _language_from_cookie(self, request, supported):\n cookie_value = request.COOKIES.get(settings.LANGUAGE_COOKIE_NAME)\n with suppress(LookupError):\n cookie_value = get_supported_language_variant(cookie_value)\n if cookie_value and cookie_value in supported:\n return cookie_value\n\n def _language_from_user(self, request, supported):\n if request.user.is_authenticated:\n with suppress(LookupError):\n value = get_supported_language_variant(request.user.locale)\n if value and value in supported:\n return value\n", "path": "src/pretalx/common/middleware.py"}, {"content": "import random\nimport urllib\n\nfrom django.contrib import messages\nfrom django.contrib.auth import authenticate, login, logout\nfrom django.http import HttpRequest, HttpResponseRedirect\nfrom django.shortcuts import redirect\nfrom django.urls import reverse\nfrom django.utils.http import is_safe_url\nfrom django.utils.translation import ugettext as _\nfrom django.views.generic import TemplateView\n\n\nclass LoginView(TemplateView):\n template_name = 'orga/auth/login.html'\n\n def post(self, request: HttpRequest, *args, **kwargs) -> HttpResponseRedirect:\n username = request.POST.get('username')\n password = request.POST.get('password')\n user = authenticate(username=username, password=password)\n\n if user is None:\n messages.error(request, 
_('No user account matches the entered credentials.'))\n return redirect('orga:login')\n\n if not user.is_active:\n messages.error(request, _('User account is deactivated.'))\n return redirect('orga:login')\n\n login(request, user)\n url = urllib.parse.unquote(request.GET.get('next', ''))\n if url and is_safe_url(url, request.get_host()):\n return redirect(url)\n\n # check where to reasonably redirect:\n # orga of a running event? go to that event.\n # speaker of a running event? go to that event.\n # neither? go to (a) current cfp\n # no current cfp? dummy page\n\n messages.success(request, random.choice([\n _('Hi, nice to see you!'),\n _('Welcome!'),\n _('I hope you are having a good day :)'),\n _('Remember: organizing events is lots of work, but it pays off.'),\n _('If you are waiting for feedback from your speakers, try sending a mail to a subset of them.'),\n _('Remember to provide your speakers with all information they need ahead of time.'),\n _('Even the busiest event organizers should make time to see at least one talk ;)'),\n ]))\n return redirect(reverse('orga:dashboard'))\n\n\ndef logout_view(request: HttpRequest) -> HttpResponseRedirect:\n logout(request)\n return redirect(reverse('orga:login'))\n", "path": "src/pretalx/orga/views/auth.py"}], "after_files": [{"content": "import urllib\nfrom contextlib import suppress\n\nimport pytz\nfrom django.conf import settings\nfrom django.core.exceptions import PermissionDenied\nfrom django.db.models import Q\nfrom django.shortcuts import redirect, reverse\nfrom django.urls import resolve\nfrom django.utils import timezone, translation\nfrom django.utils.translation.trans_real import (\n get_supported_language_variant, language_code_re, parse_accept_lang_header,\n)\n\nfrom pretalx.event.models import Event\nfrom pretalx.person.models import EventPermission\n\n\nclass EventPermissionMiddleware:\n UNAUTHENTICATED_ORGA_URLS = (\n 'invitation.view',\n 'login',\n )\n REVIEWER_URLS = (\n 'submissions.list',\n 'submissions.content.view',\n 'submissions.questions.view'\n )\n\n def __init__(self, get_response):\n self.get_response = get_response\n\n def _set_orga_events(self, request):\n if not request.user.is_anonymous:\n if request.user.is_superuser:\n request.orga_events = Event.objects.all()\n else:\n request.orga_events = Event.objects.filter(\n Q(permissions__is_orga=True) | Q(permissions__is_reviewer=True),\n permissions__user=request.user,\n )\n\n def _is_reviewer_url(self, url):\n if url.url_name.startswith('reviews'):\n return True\n if url.url_name.endswith('dashboard'):\n return True\n if url.url_name in self.REVIEWER_URLS:\n return True\n return False\n\n def _handle_orga_url(self, request, url):\n if request.user.is_anonymous and url.url_name not in self.UNAUTHENTICATED_ORGA_URLS:\n params = '&' + request.GET.urlencode() if request.GET else ''\n return reverse('orga:login') + f'?next={urllib.parse.quote(request.path)}' + params\n if hasattr(request, 'event') and request.event:\n if not (request.is_orga or request.is_reviewer):\n raise PermissionDenied()\n if (request.is_orga and not request.user.is_superuser) and url.url_name.startswith('reviews'):\n raise PermissionDenied()\n if (request.is_reviewer and not request.user.is_superuser) and not self._is_reviewer_url(url):\n raise PermissionDenied()\n elif hasattr(request, 'event') and not request.user.is_superuser:\n raise PermissionDenied()\n self._select_locale(request)\n\n def __call__(self, request):\n url = resolve(request.path_info)\n\n event_slug = url.kwargs.get('event')\n if 
event_slug:\n try:\n request.event = Event.objects.get(slug__iexact=event_slug)\n except Event.DoesNotExist:\n request.event = None\n\n if hasattr(request, 'event') and request.event:\n if not request.user.is_anonymous:\n request.is_orga = request.user.is_superuser or EventPermission.objects.filter(\n user=request.user,\n event=request.event,\n is_orga=True\n ).exists()\n request.is_reviewer = request.user.is_superuser or EventPermission.objects.filter(\n user=request.user,\n event=request.event,\n is_reviewer=True\n ).exists()\n else:\n request.is_orga = False\n request.is_reviewer = False\n timezone.activate(pytz.timezone(request.event.timezone))\n\n self._set_orga_events(request)\n\n if 'orga' in url.namespaces:\n url = self._handle_orga_url(request, url)\n if url:\n return redirect(url)\n return self.get_response(request)\n\n def _select_locale(self, request):\n supported = request.event.locales if (hasattr(request, 'event') and request.event) else settings.LANGUAGES\n language = (\n self._language_from_user(request, supported)\n or self._language_from_cookie(request, supported)\n or self._language_from_browser(request, supported)\n )\n if hasattr(request, 'event') and request.event:\n language = language or request.event.locale\n\n translation.activate(language)\n request.LANGUAGE_CODE = translation.get_language()\n\n with suppress(pytz.UnknownTimeZoneError):\n if request.user.is_authenticated:\n tzname = request.user.timezone\n elif hasattr(request, 'event') and request.event:\n tzname = request.event.timezone\n else:\n tzname = settings.TIME_ZONE\n timezone.activate(pytz.timezone(tzname))\n request.timezone = tzname\n\n def _language_from_browser(self, request, supported):\n accept_value = request.META.get('HTTP_ACCEPT_LANGUAGE', '')\n for accept_lang, unused in parse_accept_lang_header(accept_value):\n if accept_lang == '*':\n break\n\n if not language_code_re.search(accept_lang):\n continue\n\n try:\n val = get_supported_language_variant(accept_lang)\n if val and val in supported:\n return val\n except LookupError:\n continue\n\n def _language_from_cookie(self, request, supported):\n cookie_value = request.COOKIES.get(settings.LANGUAGE_COOKIE_NAME)\n with suppress(LookupError):\n cookie_value = get_supported_language_variant(cookie_value)\n if cookie_value and cookie_value in supported:\n return cookie_value\n\n def _language_from_user(self, request, supported):\n if request.user.is_authenticated:\n with suppress(LookupError):\n value = get_supported_language_variant(request.user.locale)\n if value and value in supported:\n return value\n", "path": "src/pretalx/common/middleware.py"}, {"content": "import random\nimport urllib\n\nfrom django.contrib import messages\nfrom django.contrib.auth import authenticate, login, logout\nfrom django.http import HttpRequest, HttpResponseRedirect\nfrom django.shortcuts import redirect\nfrom django.urls import reverse\nfrom django.utils.http import is_safe_url\nfrom django.utils.translation import ugettext as _\nfrom django.views.generic import TemplateView\n\n\nclass LoginView(TemplateView):\n template_name = 'orga/auth/login.html'\n\n def post(self, request: HttpRequest, *args, **kwargs) -> HttpResponseRedirect:\n username = request.POST.get('username')\n password = request.POST.get('password')\n user = authenticate(username=username, password=password)\n\n if user is None:\n messages.error(request, _('No user account matches the entered credentials.'))\n return redirect('orga:login')\n\n if not user.is_active:\n messages.error(request, _('User 
account is deactivated.'))\n return redirect('orga:login')\n\n login(request, user)\n params = request.GET.copy()\n url = urllib.parse.unquote(params.pop('next', [''])[0])\n if url and is_safe_url(url, request.get_host()):\n return redirect(url + ('?' + params.urlencode() if params else ''))\n\n # check where to reasonably redirect:\n # orga of a running event? go to that event.\n # speaker of a running event? go to that event.\n # neither? go to (a) current cfp\n # no current cfp? dummy page\n\n messages.success(request, random.choice([\n _('Hi, nice to see you!'),\n _('Welcome!'),\n _('I hope you are having a good day :)'),\n _('Remember: organizing events is lots of work, but it pays off.'),\n _('If you are waiting for feedback from your speakers, try sending a mail to a subset of them.'),\n _('Remember to provide your speakers with all information they need ahead of time.'),\n _('Even the busiest event organizers should make time to see at least one talk ;)'),\n ]))\n return redirect(reverse('orga:dashboard'))\n\n\ndef logout_view(request: HttpRequest) -> HttpResponseRedirect:\n logout(request)\n return redirect(reverse('orga:login'))\n", "path": "src/pretalx/orga/views/auth.py"}]}
| 2,326 | 394 |
gh_patches_debug_20596
|
rasdani/github-patches
|
git_diff
|
googleapis__google-auth-library-python-262
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AuthorizedSession attempts to refresh token when no refresh token was provided
Using google-auth 1.4.1
If `google.auth.transport.requests.AuthorizedSession` is used with an expired token, it will automatically try to refresh the token even if no refresh token was provided to the credentials object.
This causes the unreadable exception
```TransportError: Invalid URL 'None': No schema supplied. Perhaps you meant http://None?```
There should be a sanity check for a non-existing refresh token before any refresh attempt is made. A proper exception should be raised if the token is expired.
Sample code:
```python
import google.oauth2.credentials
from google.auth.transport.requests import AuthorizedSession
credentials = google.oauth2.credentials.Credentials('an_expired_token')
authed_session = AuthorizedSession(credentials)
response = authed_session.get('some_url_requiring_authentication')
```
Traceback:
```
File "/usr/lib/python3.6/site-packages/requests/sessions.py", line 521, in get
return self.request('GET', url, **kwargs)
File "/usr/lib/python3.6/site-packages/google/auth/transport/requests.py", line 218, in request
self.credentials.refresh(auth_request_with_timeout)
File "/usr/lib/python3.6/site-packages/google/oauth2/credentials.py", line 126, in refresh
self._client_secret))
File "/usr/lib/python3.6/site-packages/google/oauth2/_client.py", line 237, in refresh_grant
response_data = _token_endpoint_request(request, token_uri, body)
File "/usr/lib/python3.6/site-packages/google/oauth2/_client.py", line 106, in _token_endpoint_request
method='POST', url=token_uri, headers=headers, body=body)
File "/usr/lib/python3.6/site-packages/google/auth/transport/requests.py", line 124, in __call__
six.raise_from(new_exc, caught_exc)
File "<string>", line 3, in raise_from
google.auth.exceptions.TransportError: Invalid URL 'None': No schema supplied. Perhaps you meant http://None?
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `google/oauth2/credentials.py`
Content:
```
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """OAuth 2.0 Credentials.
16
17 This module provides credentials based on OAuth 2.0 access and refresh tokens.
18 These credentials usually access resources on behalf of a user (resource
19 owner).
20
21 Specifically, this is intended to use access tokens acquired using the
22 `Authorization Code grant`_ and can refresh those tokens using a
23 optional `refresh token`_.
24
25 Obtaining the initial access and refresh token is outside of the scope of this
26 module. Consult `rfc6749 section 4.1`_ for complete details on the
27 Authorization Code grant flow.
28
29 .. _Authorization Code grant: https://tools.ietf.org/html/rfc6749#section-1.3.1
30 .. _refresh token: https://tools.ietf.org/html/rfc6749#section-6
31 .. _rfc6749 section 4.1: https://tools.ietf.org/html/rfc6749#section-4.1
32 """
33
34 import io
35 import json
36
37 import six
38
39 from google.auth import _helpers
40 from google.auth import credentials
41 from google.oauth2 import _client
42
43
44 # The Google OAuth 2.0 token endpoint. Used for authorized user credentials.
45 _GOOGLE_OAUTH2_TOKEN_ENDPOINT = 'https://accounts.google.com/o/oauth2/token'
46
47
48 class Credentials(credentials.ReadOnlyScoped, credentials.Credentials):
49 """Credentials using OAuth 2.0 access and refresh tokens."""
50
51 def __init__(self, token, refresh_token=None, id_token=None,
52 token_uri=None, client_id=None, client_secret=None,
53 scopes=None):
54 """
55 Args:
56 token (Optional(str)): The OAuth 2.0 access token. Can be None
57 if refresh information is provided.
58 refresh_token (str): The OAuth 2.0 refresh token. If specified,
59 credentials can be refreshed.
60 id_token (str): The Open ID Connect ID Token.
61 token_uri (str): The OAuth 2.0 authorization server's token
62 endpoint URI. Must be specified for refresh, can be left as
63 None if the token can not be refreshed.
64 client_id (str): The OAuth 2.0 client ID. Must be specified for
65 refresh, can be left as None if the token can not be refreshed.
66 client_secret(str): The OAuth 2.0 client secret. Must be specified
67 for refresh, can be left as None if the token can not be
68 refreshed.
69 scopes (Sequence[str]): The scopes that were originally used
70 to obtain authorization. This is a purely informative parameter
71 that can be used by :meth:`has_scopes`. OAuth 2.0 credentials
72 can not request additional scopes after authorization.
73 """
74 super(Credentials, self).__init__()
75 self.token = token
76 self._refresh_token = refresh_token
77 self._id_token = id_token
78 self._scopes = scopes
79 self._token_uri = token_uri
80 self._client_id = client_id
81 self._client_secret = client_secret
82
83 @property
84 def refresh_token(self):
85 """Optional[str]: The OAuth 2.0 refresh token."""
86 return self._refresh_token
87
88 @property
89 def token_uri(self):
90 """Optional[str]: The OAuth 2.0 authorization server's token endpoint
91 URI."""
92 return self._token_uri
93
94 @property
95 def id_token(self):
96 """Optional[str]: The Open ID Connect ID Token.
97
98 Depending on the authorization server and the scopes requested, this
99 may be populated when credentials are obtained and updated when
100 :meth:`refresh` is called. This token is a JWT. It can be verified
101 and decoded using :func:`google.oauth2.id_token.verify_oauth2_token`.
102 """
103 return self._id_token
104
105 @property
106 def client_id(self):
107 """Optional[str]: The OAuth 2.0 client ID."""
108 return self._client_id
109
110 @property
111 def client_secret(self):
112 """Optional[str]: The OAuth 2.0 client secret."""
113 return self._client_secret
114
115 @property
116 def requires_scopes(self):
117 """False: OAuth 2.0 credentials have their scopes set when
118 the initial token is requested and can not be changed."""
119 return False
120
121 @_helpers.copy_docstring(credentials.Credentials)
122 def refresh(self, request):
123 access_token, refresh_token, expiry, grant_response = (
124 _client.refresh_grant(
125 request, self._token_uri, self._refresh_token, self._client_id,
126 self._client_secret))
127
128 self.token = access_token
129 self.expiry = expiry
130 self._refresh_token = refresh_token
131 self._id_token = grant_response.get('id_token')
132
133 @classmethod
134 def from_authorized_user_info(cls, info, scopes=None):
135 """Creates a Credentials instance from parsed authorized user info.
136
137 Args:
138 info (Mapping[str, str]): The authorized user info in Google
139 format.
140 scopes (Sequence[str]): Optional list of scopes to include in the
141 credentials.
142
143 Returns:
144 google.oauth2.credentials.Credentials: The constructed
145 credentials.
146
147 Raises:
148 ValueError: If the info is not in the expected format.
149 """
150 keys_needed = set(('refresh_token', 'client_id', 'client_secret'))
151 missing = keys_needed.difference(six.iterkeys(info))
152
153 if missing:
154 raise ValueError(
155 'Authorized user info was not in the expected format, missing '
156 'fields {}.'.format(', '.join(missing)))
157
158 return Credentials(
159 None, # No access token, must be refreshed.
160 refresh_token=info['refresh_token'],
161 token_uri=_GOOGLE_OAUTH2_TOKEN_ENDPOINT,
162 scopes=scopes,
163 client_id=info['client_id'],
164 client_secret=info['client_secret'])
165
166 @classmethod
167 def from_authorized_user_file(cls, filename, scopes=None):
168 """Creates a Credentials instance from an authorized user json file.
169
170 Args:
171 filename (str): The path to the authorized user json file.
172 scopes (Sequence[str]): Optional list of scopes to include in the
173 credentials.
174
175 Returns:
176 google.oauth2.credentials.Credentials: The constructed
177 credentials.
178
179 Raises:
180 ValueError: If the file is not in the expected format.
181 """
182 with io.open(filename, 'r', encoding='utf-8') as json_file:
183 data = json.load(json_file)
184 return cls.from_authorized_user_info(data, scopes)
185
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/google/oauth2/credentials.py b/google/oauth2/credentials.py
--- a/google/oauth2/credentials.py
+++ b/google/oauth2/credentials.py
@@ -38,6 +38,7 @@
from google.auth import _helpers
from google.auth import credentials
+from google.auth import exceptions
from google.oauth2 import _client
@@ -120,6 +121,15 @@
@_helpers.copy_docstring(credentials.Credentials)
def refresh(self, request):
+ if (self._refresh_token is None or
+ self._token_uri is None or
+ self._client_id is None or
+ self._client_secret is None):
+ raise exceptions.RefreshError(
+ 'The credentials do not contain the necessary fields need to '
+ 'refresh the access token. You must specify refresh_token, '
+ 'token_uri, client_id, and client_secret.')
+
access_token, refresh_token, expiry, grant_response = (
_client.refresh_grant(
request, self._token_uri, self._refresh_token, self._client_id,
|
{"golden_diff": "diff --git a/google/oauth2/credentials.py b/google/oauth2/credentials.py\n--- a/google/oauth2/credentials.py\n+++ b/google/oauth2/credentials.py\n@@ -38,6 +38,7 @@\n \n from google.auth import _helpers\n from google.auth import credentials\n+from google.auth import exceptions\n from google.oauth2 import _client\n \n \n@@ -120,6 +121,15 @@\n \n @_helpers.copy_docstring(credentials.Credentials)\n def refresh(self, request):\n+ if (self._refresh_token is None or\n+ self._token_uri is None or\n+ self._client_id is None or\n+ self._client_secret is None):\n+ raise exceptions.RefreshError(\n+ 'The credentials do not contain the necessary fields need to '\n+ 'refresh the access token. You must specify refresh_token, '\n+ 'token_uri, client_id, and client_secret.')\n+\n access_token, refresh_token, expiry, grant_response = (\n _client.refresh_grant(\n request, self._token_uri, self._refresh_token, self._client_id,\n", "issue": "AuthorizedSession attempts to refresh token when no refresh token was provided\nUsing google-auth 1.4.1\r\n\r\nIf `google.auth.transport.requests.AuthorizedSession` is used with an expired token, it will automatically try to refresh the token even if no refresh token was provided to the credentials object.\r\nThis causes the unreadable exception\r\n```TransportError: Invalid URL 'None': No schema supplied. Perhaps you meant http://None?```\r\n\r\nThere should be a sanity check for a non-existing refresh token before any refresh attempt is made. A proper exception should be raised if the token is expired.\r\n\r\nSample code:\r\n```python\r\nimport google.oauth2.credentials\r\nfrom google.auth.transport.requests import AuthorizedSession\r\ncredentials = google.oauth2.credentials.Credentials('an_expired_token')\r\nauthed_session = AuthorizedSession(credentials)\r\nresponse = authed_session.get('some_url_requiring_authentication')\r\n```\r\n\r\nTraceback:\r\n```\r\n File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 521, in get\r\n return self.request('GET', url, **kwargs)\r\n File \"/usr/lib/python3.6/site-packages/google/auth/transport/requests.py\", line 218, in request\r\n self.credentials.refresh(auth_request_with_timeout)\r\n File \"/usr/lib/python3.6/site-packages/google/oauth2/credentials.py\", line 126, in refresh\r\n self._client_secret))\r\n File \"/usr/lib/python3.6/site-packages/google/oauth2/_client.py\", line 237, in refresh_grant\r\n response_data = _token_endpoint_request(request, token_uri, body)\r\n File \"/usr/lib/python3.6/site-packages/google/oauth2/_client.py\", line 106, in _token_endpoint_request\r\n method='POST', url=token_uri, headers=headers, body=body)\r\n File \"/usr/lib/python3.6/site-packages/google/auth/transport/requests.py\", line 124, in __call__\r\n six.raise_from(new_exc, caught_exc)\r\n File \"<string>\", line 3, in raise_from\r\ngoogle.auth.exceptions.TransportError: Invalid URL 'None': No schema supplied. 
Perhaps you meant http://None?\r\n```\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"OAuth 2.0 Credentials.\n\nThis module provides credentials based on OAuth 2.0 access and refresh tokens.\nThese credentials usually access resources on behalf of a user (resource\nowner).\n\nSpecifically, this is intended to use access tokens acquired using the\n`Authorization Code grant`_ and can refresh those tokens using a\noptional `refresh token`_.\n\nObtaining the initial access and refresh token is outside of the scope of this\nmodule. Consult `rfc6749 section 4.1`_ for complete details on the\nAuthorization Code grant flow.\n\n.. _Authorization Code grant: https://tools.ietf.org/html/rfc6749#section-1.3.1\n.. _refresh token: https://tools.ietf.org/html/rfc6749#section-6\n.. _rfc6749 section 4.1: https://tools.ietf.org/html/rfc6749#section-4.1\n\"\"\"\n\nimport io\nimport json\n\nimport six\n\nfrom google.auth import _helpers\nfrom google.auth import credentials\nfrom google.oauth2 import _client\n\n\n# The Google OAuth 2.0 token endpoint. Used for authorized user credentials.\n_GOOGLE_OAUTH2_TOKEN_ENDPOINT = 'https://accounts.google.com/o/oauth2/token'\n\n\nclass Credentials(credentials.ReadOnlyScoped, credentials.Credentials):\n \"\"\"Credentials using OAuth 2.0 access and refresh tokens.\"\"\"\n\n def __init__(self, token, refresh_token=None, id_token=None,\n token_uri=None, client_id=None, client_secret=None,\n scopes=None):\n \"\"\"\n Args:\n token (Optional(str)): The OAuth 2.0 access token. Can be None\n if refresh information is provided.\n refresh_token (str): The OAuth 2.0 refresh token. If specified,\n credentials can be refreshed.\n id_token (str): The Open ID Connect ID Token.\n token_uri (str): The OAuth 2.0 authorization server's token\n endpoint URI. Must be specified for refresh, can be left as\n None if the token can not be refreshed.\n client_id (str): The OAuth 2.0 client ID. Must be specified for\n refresh, can be left as None if the token can not be refreshed.\n client_secret(str): The OAuth 2.0 client secret. Must be specified\n for refresh, can be left as None if the token can not be\n refreshed.\n scopes (Sequence[str]): The scopes that were originally used\n to obtain authorization. This is a purely informative parameter\n that can be used by :meth:`has_scopes`. 
OAuth 2.0 credentials\n can not request additional scopes after authorization.\n \"\"\"\n super(Credentials, self).__init__()\n self.token = token\n self._refresh_token = refresh_token\n self._id_token = id_token\n self._scopes = scopes\n self._token_uri = token_uri\n self._client_id = client_id\n self._client_secret = client_secret\n\n @property\n def refresh_token(self):\n \"\"\"Optional[str]: The OAuth 2.0 refresh token.\"\"\"\n return self._refresh_token\n\n @property\n def token_uri(self):\n \"\"\"Optional[str]: The OAuth 2.0 authorization server's token endpoint\n URI.\"\"\"\n return self._token_uri\n\n @property\n def id_token(self):\n \"\"\"Optional[str]: The Open ID Connect ID Token.\n\n Depending on the authorization server and the scopes requested, this\n may be populated when credentials are obtained and updated when\n :meth:`refresh` is called. This token is a JWT. It can be verified\n and decoded using :func:`google.oauth2.id_token.verify_oauth2_token`.\n \"\"\"\n return self._id_token\n\n @property\n def client_id(self):\n \"\"\"Optional[str]: The OAuth 2.0 client ID.\"\"\"\n return self._client_id\n\n @property\n def client_secret(self):\n \"\"\"Optional[str]: The OAuth 2.0 client secret.\"\"\"\n return self._client_secret\n\n @property\n def requires_scopes(self):\n \"\"\"False: OAuth 2.0 credentials have their scopes set when\n the initial token is requested and can not be changed.\"\"\"\n return False\n\n @_helpers.copy_docstring(credentials.Credentials)\n def refresh(self, request):\n access_token, refresh_token, expiry, grant_response = (\n _client.refresh_grant(\n request, self._token_uri, self._refresh_token, self._client_id,\n self._client_secret))\n\n self.token = access_token\n self.expiry = expiry\n self._refresh_token = refresh_token\n self._id_token = grant_response.get('id_token')\n\n @classmethod\n def from_authorized_user_info(cls, info, scopes=None):\n \"\"\"Creates a Credentials instance from parsed authorized user info.\n\n Args:\n info (Mapping[str, str]): The authorized user info in Google\n format.\n scopes (Sequence[str]): Optional list of scopes to include in the\n credentials.\n\n Returns:\n google.oauth2.credentials.Credentials: The constructed\n credentials.\n\n Raises:\n ValueError: If the info is not in the expected format.\n \"\"\"\n keys_needed = set(('refresh_token', 'client_id', 'client_secret'))\n missing = keys_needed.difference(six.iterkeys(info))\n\n if missing:\n raise ValueError(\n 'Authorized user info was not in the expected format, missing '\n 'fields {}.'.format(', '.join(missing)))\n\n return Credentials(\n None, # No access token, must be refreshed.\n refresh_token=info['refresh_token'],\n token_uri=_GOOGLE_OAUTH2_TOKEN_ENDPOINT,\n scopes=scopes,\n client_id=info['client_id'],\n client_secret=info['client_secret'])\n\n @classmethod\n def from_authorized_user_file(cls, filename, scopes=None):\n \"\"\"Creates a Credentials instance from an authorized user json file.\n\n Args:\n filename (str): The path to the authorized user json file.\n scopes (Sequence[str]): Optional list of scopes to include in the\n credentials.\n\n Returns:\n google.oauth2.credentials.Credentials: The constructed\n credentials.\n\n Raises:\n ValueError: If the file is not in the expected format.\n \"\"\"\n with io.open(filename, 'r', encoding='utf-8') as json_file:\n data = json.load(json_file)\n return cls.from_authorized_user_info(data, scopes)\n", "path": "google/oauth2/credentials.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# 
Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"OAuth 2.0 Credentials.\n\nThis module provides credentials based on OAuth 2.0 access and refresh tokens.\nThese credentials usually access resources on behalf of a user (resource\nowner).\n\nSpecifically, this is intended to use access tokens acquired using the\n`Authorization Code grant`_ and can refresh those tokens using a\noptional `refresh token`_.\n\nObtaining the initial access and refresh token is outside of the scope of this\nmodule. Consult `rfc6749 section 4.1`_ for complete details on the\nAuthorization Code grant flow.\n\n.. _Authorization Code grant: https://tools.ietf.org/html/rfc6749#section-1.3.1\n.. _refresh token: https://tools.ietf.org/html/rfc6749#section-6\n.. _rfc6749 section 4.1: https://tools.ietf.org/html/rfc6749#section-4.1\n\"\"\"\n\nimport io\nimport json\n\nimport six\n\nfrom google.auth import _helpers\nfrom google.auth import credentials\nfrom google.auth import exceptions\nfrom google.oauth2 import _client\n\n\n# The Google OAuth 2.0 token endpoint. Used for authorized user credentials.\n_GOOGLE_OAUTH2_TOKEN_ENDPOINT = 'https://accounts.google.com/o/oauth2/token'\n\n\nclass Credentials(credentials.ReadOnlyScoped, credentials.Credentials):\n \"\"\"Credentials using OAuth 2.0 access and refresh tokens.\"\"\"\n\n def __init__(self, token, refresh_token=None, id_token=None,\n token_uri=None, client_id=None, client_secret=None,\n scopes=None):\n \"\"\"\n Args:\n token (Optional(str)): The OAuth 2.0 access token. Can be None\n if refresh information is provided.\n refresh_token (str): The OAuth 2.0 refresh token. If specified,\n credentials can be refreshed.\n id_token (str): The Open ID Connect ID Token.\n token_uri (str): The OAuth 2.0 authorization server's token\n endpoint URI. Must be specified for refresh, can be left as\n None if the token can not be refreshed.\n client_id (str): The OAuth 2.0 client ID. Must be specified for\n refresh, can be left as None if the token can not be refreshed.\n client_secret(str): The OAuth 2.0 client secret. Must be specified\n for refresh, can be left as None if the token can not be\n refreshed.\n scopes (Sequence[str]): The scopes that were originally used\n to obtain authorization. This is a purely informative parameter\n that can be used by :meth:`has_scopes`. 
OAuth 2.0 credentials\n can not request additional scopes after authorization.\n \"\"\"\n super(Credentials, self).__init__()\n self.token = token\n self._refresh_token = refresh_token\n self._id_token = id_token\n self._scopes = scopes\n self._token_uri = token_uri\n self._client_id = client_id\n self._client_secret = client_secret\n\n @property\n def refresh_token(self):\n \"\"\"Optional[str]: The OAuth 2.0 refresh token.\"\"\"\n return self._refresh_token\n\n @property\n def token_uri(self):\n \"\"\"Optional[str]: The OAuth 2.0 authorization server's token endpoint\n URI.\"\"\"\n return self._token_uri\n\n @property\n def id_token(self):\n \"\"\"Optional[str]: The Open ID Connect ID Token.\n\n Depending on the authorization server and the scopes requested, this\n may be populated when credentials are obtained and updated when\n :meth:`refresh` is called. This token is a JWT. It can be verified\n and decoded using :func:`google.oauth2.id_token.verify_oauth2_token`.\n \"\"\"\n return self._id_token\n\n @property\n def client_id(self):\n \"\"\"Optional[str]: The OAuth 2.0 client ID.\"\"\"\n return self._client_id\n\n @property\n def client_secret(self):\n \"\"\"Optional[str]: The OAuth 2.0 client secret.\"\"\"\n return self._client_secret\n\n @property\n def requires_scopes(self):\n \"\"\"False: OAuth 2.0 credentials have their scopes set when\n the initial token is requested and can not be changed.\"\"\"\n return False\n\n @_helpers.copy_docstring(credentials.Credentials)\n def refresh(self, request):\n if (self._refresh_token is None or\n self._token_uri is None or\n self._client_id is None or\n self._client_secret is None):\n raise exceptions.RefreshError(\n 'The credentials do not contain the necessary fields need to '\n 'refresh the access token. 
You must specify refresh_token, '\n 'token_uri, client_id, and client_secret.')\n\n access_token, refresh_token, expiry, grant_response = (\n _client.refresh_grant(\n request, self._token_uri, self._refresh_token, self._client_id,\n self._client_secret))\n\n self.token = access_token\n self.expiry = expiry\n self._refresh_token = refresh_token\n self._id_token = grant_response.get('id_token')\n\n @classmethod\n def from_authorized_user_info(cls, info, scopes=None):\n \"\"\"Creates a Credentials instance from parsed authorized user info.\n\n Args:\n info (Mapping[str, str]): The authorized user info in Google\n format.\n scopes (Sequence[str]): Optional list of scopes to include in the\n credentials.\n\n Returns:\n google.oauth2.credentials.Credentials: The constructed\n credentials.\n\n Raises:\n ValueError: If the info is not in the expected format.\n \"\"\"\n keys_needed = set(('refresh_token', 'client_id', 'client_secret'))\n missing = keys_needed.difference(six.iterkeys(info))\n\n if missing:\n raise ValueError(\n 'Authorized user info was not in the expected format, missing '\n 'fields {}.'.format(', '.join(missing)))\n\n return Credentials(\n None, # No access token, must be refreshed.\n refresh_token=info['refresh_token'],\n token_uri=_GOOGLE_OAUTH2_TOKEN_ENDPOINT,\n scopes=scopes,\n client_id=info['client_id'],\n client_secret=info['client_secret'])\n\n @classmethod\n def from_authorized_user_file(cls, filename, scopes=None):\n \"\"\"Creates a Credentials instance from an authorized user json file.\n\n Args:\n filename (str): The path to the authorized user json file.\n scopes (Sequence[str]): Optional list of scopes to include in the\n credentials.\n\n Returns:\n google.oauth2.credentials.Credentials: The constructed\n credentials.\n\n Raises:\n ValueError: If the file is not in the expected format.\n \"\"\"\n with io.open(filename, 'r', encoding='utf-8') as json_file:\n data = json.load(json_file)\n return cls.from_authorized_user_info(data, scopes)\n", "path": "google/oauth2/credentials.py"}]}
| 2,689 | 240 |
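A reduced sketch of the refresh guard added by the golden diff above; the `Credentials` and `RefreshError` classes here are minimal stand-ins so the snippet runs without google-auth installed, not the library's actual types:

```python
class RefreshError(Exception):
    """Stand-in for google.auth.exceptions.RefreshError."""

class Credentials:
    def __init__(self, token, refresh_token=None, token_uri=None,
                 client_id=None, client_secret=None):
        self.token = token
        self._refresh_token = refresh_token
        self._token_uri = token_uri
        self._client_id = client_id
        self._client_secret = client_secret

    def refresh(self, request):
        # Fail fast with a readable error instead of POSTing to a None token URI.
        if (self._refresh_token is None or self._token_uri is None
                or self._client_id is None or self._client_secret is None):
            raise RefreshError(
                "The credentials do not contain the necessary fields to refresh the "
                "access token: refresh_token, token_uri, client_id, and client_secret.")
        # ... a real implementation would call the token endpoint here ...

try:
    Credentials("an_expired_token").refresh(request=None)
except RefreshError as exc:
    print(exc)
```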
gh_patches_debug_39047
|
rasdani/github-patches
|
git_diff
|
optuna__optuna-2343
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tensorboard integration with integers for parameter boundaries
When using the `optuna.integration.tensorboard.TensorBoardCallback` with integer parameters for `suggest_uniform`, a `TypeError` is raised.
## Expected behavior
No TypeError should be raised; instead, the integer should be cast to a float.
## Environment
- Optuna version: 2.5.0
- Python version: 3.7.5
- OS: Debian Testing
- (Optional) Other libraries and their versions: Tensorboard 2.4.1
## Error messages, stack traces, or logs
```
Traceback (most recent call last):
File "tensorboard_test.py", line 13, in <module>
study.optimize(objective, n_trials=10, timeout=600, callbacks=[tensorboard_callback])
File "/home/timon/.pyenv/versions/3.7.5/lib/python3.7/site-packages/optuna/study.py", line 385, in optimize
show_progress_bar=show_progress_bar,
File "/home/timon/.pyenv/versions/3.7.5/lib/python3.7/site-packages/optuna/_optimize.py", line 73, in _optimize
progress_bar=progress_bar,
File "/home/timon/.pyenv/versions/3.7.5/lib/python3.7/site-packages/optuna/_optimize.py", line 178, in _optimize_sequential
callback(study, frozen_trial)
File "/home/timon/.pyenv/versions/3.7.5/lib/python3.7/site-packages/optuna/integration/tensorboard.py", line 41, in __call__
self._initialization(study)
File "/home/timon/.pyenv/versions/3.7.5/lib/python3.7/site-packages/optuna/integration/tensorboard.py", line 102, in _initialization
self._add_distributions(trial.distributions)
File "/home/timon/.pyenv/versions/3.7.5/lib/python3.7/site-packages/optuna/integration/tensorboard.py", line 62, in _add_distributions
param_name, hp.RealInterval(param_distribution.low, param_distribution.high)
File "/home/timon/.pyenv/versions/3.7.5/lib/python3.7/site-packages/tensorboard/plugins/hparams/summary_v2.py", line 444, in __init__
raise TypeError("min_value must be a float: %r" % (min_value,))
TypeError: min_value must be a float: 0
```
## Steps to reproduce
1. Execute the example below
## Reproducible examples (optional)
```python
import optuna
from optuna.integration.tensorboard import TensorBoardCallback
def objective(trial: optuna.trial.Trial) -> float:
param = trial.suggest_uniform("param", 0, 1)
return param**2
tensorboard_callback = TensorBoardCallback("logs/", metric_name="value")
study = optuna.create_study()
study.optimize(objective, n_trials=10, callbacks=[tensorboard_callback])
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optuna/integration/tensorboard.py`
Content:
```
1 import os
2 from typing import Dict
3
4 import optuna
5 from optuna._experimental import experimental
6 from optuna._imports import try_import
7
8
9 with try_import() as _imports:
10 from tensorboard.plugins.hparams import api as hp
11 import tensorflow as tf
12
13
14 @experimental("2.0.0")
15 class TensorBoardCallback(object):
16 """Callback to track Optuna trials with TensorBoard.
17
18 This callback adds relevant information that is tracked by Optuna to TensorBoard.
19
20 See `the example <https://github.com/optuna/optuna/blob/master/
21 examples/tensorboard_simple.py>`_.
22
23 Args:
24 dirname:
25 Directory to store TensorBoard logs.
26 metric_name:
27 Name of the metric. Since the metric itself is just a number,
28 `metric_name` can be used to give it a name. So you know later
29 if it was roc-auc or accuracy.
30
31 """
32
33 def __init__(self, dirname: str, metric_name: str) -> None:
34 _imports.check()
35 self._dirname = dirname
36 self._metric_name = metric_name
37 self._hp_params: Dict[str, hp.HParam] = {}
38
39 def __call__(self, study: optuna.study.Study, trial: optuna.trial.FrozenTrial) -> None:
40 if len(self._hp_params) == 0:
41 self._initialization(study)
42 if trial.state != optuna.trial.TrialState.COMPLETE:
43 return
44 trial_value = trial.value if trial.value is not None else float("nan")
45 hparams = {}
46 for param_name, param_value in trial.params.items():
47 if param_name not in self._hp_params:
48 self._add_distributions(trial.distributions)
49 hparams[self._hp_params[param_name]] = param_value
50 run_name = "trial-%d" % trial.number
51 run_dir = os.path.join(self._dirname, run_name)
52 with tf.summary.create_file_writer(run_dir).as_default():
53 hp.hparams(hparams, trial_id=run_name) # record the values used in this trial
54 tf.summary.scalar(self._metric_name, trial_value, step=trial.number)
55
56 def _add_distributions(
57 self, distributions: Dict[str, optuna.distributions.BaseDistribution]
58 ) -> None:
59 for param_name, param_distribution in distributions.items():
60 if isinstance(param_distribution, optuna.distributions.UniformDistribution):
61 self._hp_params[param_name] = hp.HParam(
62 param_name, hp.RealInterval(param_distribution.low, param_distribution.high)
63 )
64 elif isinstance(param_distribution, optuna.distributions.LogUniformDistribution):
65 self._hp_params[param_name] = hp.HParam(
66 param_name, hp.RealInterval(param_distribution.low, param_distribution.high)
67 )
68 elif isinstance(param_distribution, optuna.distributions.DiscreteUniformDistribution):
69 self._hp_params[param_name] = hp.HParam(
70 param_name, hp.RealInterval(param_distribution.low, param_distribution.high)
71 )
72 elif isinstance(param_distribution, optuna.distributions.IntUniformDistribution):
73 self._hp_params[param_name] = hp.HParam(
74 param_name, hp.IntInterval(param_distribution.low, param_distribution.high)
75 )
76 elif isinstance(param_distribution, optuna.distributions.CategoricalDistribution):
77 self._hp_params[param_name] = hp.HParam(
78 param_name, hp.Discrete(param_distribution.choices)
79 )
80 else:
81 distribution_list = [
82 optuna.distributions.UniformDistribution.__name__,
83 optuna.distributions.LogUniformDistribution.__name__,
84 optuna.distributions.DiscreteUniformDistribution.__name__,
85 optuna.distributions.IntUniformDistribution.__name__,
86 optuna.distributions.CategoricalDistribution.__name__,
87 ]
88 raise NotImplementedError(
89 "The distribution {} is not implemented. "
90 "The parameter distribution should be one of the {}".format(
91 param_distribution, distribution_list
92 )
93 )
94
95 def _initialization(self, study: optuna.Study) -> None:
96 completed_trials = [
97 trial
98 for trial in study.get_trials(deepcopy=False)
99 if trial.state == optuna.trial.TrialState.COMPLETE
100 ]
101 for trial in completed_trials:
102 self._add_distributions(trial.distributions)
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/optuna/integration/tensorboard.py b/optuna/integration/tensorboard.py
--- a/optuna/integration/tensorboard.py
+++ b/optuna/integration/tensorboard.py
@@ -56,34 +56,36 @@
def _add_distributions(
self, distributions: Dict[str, optuna.distributions.BaseDistribution]
) -> None:
+ real_distributions = (
+ optuna.distributions.UniformDistribution,
+ optuna.distributions.LogUniformDistribution,
+ optuna.distributions.DiscreteUniformDistribution,
+ )
+ int_distributions = (optuna.distributions.IntUniformDistribution,)
+ categorical_distributions = (optuna.distributions.CategoricalDistribution,)
+ supported_distributions = (
+ real_distributions + int_distributions + categorical_distributions
+ )
+
for param_name, param_distribution in distributions.items():
- if isinstance(param_distribution, optuna.distributions.UniformDistribution):
- self._hp_params[param_name] = hp.HParam(
- param_name, hp.RealInterval(param_distribution.low, param_distribution.high)
- )
- elif isinstance(param_distribution, optuna.distributions.LogUniformDistribution):
- self._hp_params[param_name] = hp.HParam(
- param_name, hp.RealInterval(param_distribution.low, param_distribution.high)
- )
- elif isinstance(param_distribution, optuna.distributions.DiscreteUniformDistribution):
+ if isinstance(param_distribution, real_distributions):
self._hp_params[param_name] = hp.HParam(
- param_name, hp.RealInterval(param_distribution.low, param_distribution.high)
+ param_name,
+ hp.RealInterval(float(param_distribution.low), float(param_distribution.high)),
)
- elif isinstance(param_distribution, optuna.distributions.IntUniformDistribution):
+ elif isinstance(param_distribution, int_distributions):
self._hp_params[param_name] = hp.HParam(
- param_name, hp.IntInterval(param_distribution.low, param_distribution.high)
+ param_name,
+ hp.IntInterval(param_distribution.low, param_distribution.high),
)
- elif isinstance(param_distribution, optuna.distributions.CategoricalDistribution):
+ elif isinstance(param_distribution, categorical_distributions):
self._hp_params[param_name] = hp.HParam(
- param_name, hp.Discrete(param_distribution.choices)
+ param_name,
+ hp.Discrete(param_distribution.choices),
)
else:
distribution_list = [
- optuna.distributions.UniformDistribution.__name__,
- optuna.distributions.LogUniformDistribution.__name__,
- optuna.distributions.DiscreteUniformDistribution.__name__,
- optuna.distributions.IntUniformDistribution.__name__,
- optuna.distributions.CategoricalDistribution.__name__,
+ distribution.__name__ for distribution in supported_distributions
]
raise NotImplementedError(
"The distribution {} is not implemented. "
|
{"golden_diff": "diff --git a/optuna/integration/tensorboard.py b/optuna/integration/tensorboard.py\n--- a/optuna/integration/tensorboard.py\n+++ b/optuna/integration/tensorboard.py\n@@ -56,34 +56,36 @@\n def _add_distributions(\n self, distributions: Dict[str, optuna.distributions.BaseDistribution]\n ) -> None:\n+ real_distributions = (\n+ optuna.distributions.UniformDistribution,\n+ optuna.distributions.LogUniformDistribution,\n+ optuna.distributions.DiscreteUniformDistribution,\n+ )\n+ int_distributions = (optuna.distributions.IntUniformDistribution,)\n+ categorical_distributions = (optuna.distributions.CategoricalDistribution,)\n+ supported_distributions = (\n+ real_distributions + int_distributions + categorical_distributions\n+ )\n+\n for param_name, param_distribution in distributions.items():\n- if isinstance(param_distribution, optuna.distributions.UniformDistribution):\n- self._hp_params[param_name] = hp.HParam(\n- param_name, hp.RealInterval(param_distribution.low, param_distribution.high)\n- )\n- elif isinstance(param_distribution, optuna.distributions.LogUniformDistribution):\n- self._hp_params[param_name] = hp.HParam(\n- param_name, hp.RealInterval(param_distribution.low, param_distribution.high)\n- )\n- elif isinstance(param_distribution, optuna.distributions.DiscreteUniformDistribution):\n+ if isinstance(param_distribution, real_distributions):\n self._hp_params[param_name] = hp.HParam(\n- param_name, hp.RealInterval(param_distribution.low, param_distribution.high)\n+ param_name,\n+ hp.RealInterval(float(param_distribution.low), float(param_distribution.high)),\n )\n- elif isinstance(param_distribution, optuna.distributions.IntUniformDistribution):\n+ elif isinstance(param_distribution, int_distributions):\n self._hp_params[param_name] = hp.HParam(\n- param_name, hp.IntInterval(param_distribution.low, param_distribution.high)\n+ param_name,\n+ hp.IntInterval(param_distribution.low, param_distribution.high),\n )\n- elif isinstance(param_distribution, optuna.distributions.CategoricalDistribution):\n+ elif isinstance(param_distribution, categorical_distributions):\n self._hp_params[param_name] = hp.HParam(\n- param_name, hp.Discrete(param_distribution.choices)\n+ param_name,\n+ hp.Discrete(param_distribution.choices),\n )\n else:\n distribution_list = [\n- optuna.distributions.UniformDistribution.__name__,\n- optuna.distributions.LogUniformDistribution.__name__,\n- optuna.distributions.DiscreteUniformDistribution.__name__,\n- optuna.distributions.IntUniformDistribution.__name__,\n- optuna.distributions.CategoricalDistribution.__name__,\n+ distribution.__name__ for distribution in supported_distributions\n ]\n raise NotImplementedError(\n \"The distribution {} is not implemented. 
\"\n", "issue": "Tensorboard integration with integers for parameter boundaries\nWhen using the `optuna.integration.tensorboard.TensorBoardCallback` with integer parameters for `suggest_uniform`, a `TypeError` is raised.\r\n\r\n## Expected behavior\r\nNo TypeError should be raised, instead the integer should be casted to a float.\r\n\r\n## Environment\r\n\r\n- Optuna version: 2.5.0\r\n- Python version: 3.7.5\r\n- OS: Debian Testing\r\n- (Optional) Other libraries and their versions: Tensorboard 2.4.1\r\n\r\n## Error messages, stack traces, or logs\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"tensorboard_test.py\", line 13, in <module>\r\n study.optimize(objective, n_trials=10, timeout=600, callbacks=[tensorboard_callback])\r\n File \"/home/timon/.pyenv/versions/3.7.5/lib/python3.7/site-packages/optuna/study.py\", line 385, in optimize\r\n show_progress_bar=show_progress_bar,\r\n File \"/home/timon/.pyenv/versions/3.7.5/lib/python3.7/site-packages/optuna/_optimize.py\", line 73, in _optimize\r\n progress_bar=progress_bar,\r\n File \"/home/timon/.pyenv/versions/3.7.5/lib/python3.7/site-packages/optuna/_optimize.py\", line 178, in _optimize_sequential\r\n callback(study, frozen_trial)\r\n File \"/home/timon/.pyenv/versions/3.7.5/lib/python3.7/site-packages/optuna/integration/tensorboard.py\", line 41, in __call__\r\n self._initialization(study)\r\n File \"/home/timon/.pyenv/versions/3.7.5/lib/python3.7/site-packages/optuna/integration/tensorboard.py\", line 102, in _initialization\r\n self._add_distributions(trial.distributions)\r\n File \"/home/timon/.pyenv/versions/3.7.5/lib/python3.7/site-packages/optuna/integration/tensorboard.py\", line 62, in _add_distributions\r\n param_name, hp.RealInterval(param_distribution.low, param_distribution.high)\r\n File \"/home/timon/.pyenv/versions/3.7.5/lib/python3.7/site-packages/tensorboard/plugins/hparams/summary_v2.py\", line 444, in __init__\r\n raise TypeError(\"min_value must be a float: %r\" % (min_value,))\r\nTypeError: min_value must be a float: 0\r\n```\r\n\r\n## Steps to reproduce\r\n\r\n1. Execute the example below\r\n\r\n## Reproducible examples (optional)\r\n\r\n```python\r\nimport optuna\r\nfrom optuna.integration.tensorboard import TensorBoardCallback\r\n\r\ndef objective(trial: optuna.trial.Trial) -> float:\r\n param = trial.suggest_uniform(\"param\", 0, 1)\r\n return param**2\r\n\r\ntensorboard_callback = TensorBoardCallback(\"logs/\", metric_name=\"value\")\r\n\r\nstudy = optuna.create_study()\r\nstudy.optimize(objective, n_trials=10, callbacks=[tensorboard_callback])\r\n```\n", "before_files": [{"content": "import os\nfrom typing import Dict\n\nimport optuna\nfrom optuna._experimental import experimental\nfrom optuna._imports import try_import\n\n\nwith try_import() as _imports:\n from tensorboard.plugins.hparams import api as hp\n import tensorflow as tf\n\n\n@experimental(\"2.0.0\")\nclass TensorBoardCallback(object):\n \"\"\"Callback to track Optuna trials with TensorBoard.\n\n This callback adds relevant information that is tracked by Optuna to TensorBoard.\n\n See `the example <https://github.com/optuna/optuna/blob/master/\n examples/tensorboard_simple.py>`_.\n\n Args:\n dirname:\n Directory to store TensorBoard logs.\n metric_name:\n Name of the metric. Since the metric itself is just a number,\n `metric_name` can be used to give it a name. 
So you know later\n if it was roc-auc or accuracy.\n\n \"\"\"\n\n def __init__(self, dirname: str, metric_name: str) -> None:\n _imports.check()\n self._dirname = dirname\n self._metric_name = metric_name\n self._hp_params: Dict[str, hp.HParam] = {}\n\n def __call__(self, study: optuna.study.Study, trial: optuna.trial.FrozenTrial) -> None:\n if len(self._hp_params) == 0:\n self._initialization(study)\n if trial.state != optuna.trial.TrialState.COMPLETE:\n return\n trial_value = trial.value if trial.value is not None else float(\"nan\")\n hparams = {}\n for param_name, param_value in trial.params.items():\n if param_name not in self._hp_params:\n self._add_distributions(trial.distributions)\n hparams[self._hp_params[param_name]] = param_value\n run_name = \"trial-%d\" % trial.number\n run_dir = os.path.join(self._dirname, run_name)\n with tf.summary.create_file_writer(run_dir).as_default():\n hp.hparams(hparams, trial_id=run_name) # record the values used in this trial\n tf.summary.scalar(self._metric_name, trial_value, step=trial.number)\n\n def _add_distributions(\n self, distributions: Dict[str, optuna.distributions.BaseDistribution]\n ) -> None:\n for param_name, param_distribution in distributions.items():\n if isinstance(param_distribution, optuna.distributions.UniformDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.RealInterval(param_distribution.low, param_distribution.high)\n )\n elif isinstance(param_distribution, optuna.distributions.LogUniformDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.RealInterval(param_distribution.low, param_distribution.high)\n )\n elif isinstance(param_distribution, optuna.distributions.DiscreteUniformDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.RealInterval(param_distribution.low, param_distribution.high)\n )\n elif isinstance(param_distribution, optuna.distributions.IntUniformDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.IntInterval(param_distribution.low, param_distribution.high)\n )\n elif isinstance(param_distribution, optuna.distributions.CategoricalDistribution):\n self._hp_params[param_name] = hp.HParam(\n param_name, hp.Discrete(param_distribution.choices)\n )\n else:\n distribution_list = [\n optuna.distributions.UniformDistribution.__name__,\n optuna.distributions.LogUniformDistribution.__name__,\n optuna.distributions.DiscreteUniformDistribution.__name__,\n optuna.distributions.IntUniformDistribution.__name__,\n optuna.distributions.CategoricalDistribution.__name__,\n ]\n raise NotImplementedError(\n \"The distribution {} is not implemented. 
\"\n \"The parameter distribution should be one of the {}\".format(\n param_distribution, distribution_list\n )\n )\n\n def _initialization(self, study: optuna.Study) -> None:\n completed_trials = [\n trial\n for trial in study.get_trials(deepcopy=False)\n if trial.state == optuna.trial.TrialState.COMPLETE\n ]\n for trial in completed_trials:\n self._add_distributions(trial.distributions)\n", "path": "optuna/integration/tensorboard.py"}], "after_files": [{"content": "import os\nfrom typing import Dict\n\nimport optuna\nfrom optuna._experimental import experimental\nfrom optuna._imports import try_import\n\n\nwith try_import() as _imports:\n from tensorboard.plugins.hparams import api as hp\n import tensorflow as tf\n\n\n@experimental(\"2.0.0\")\nclass TensorBoardCallback(object):\n \"\"\"Callback to track Optuna trials with TensorBoard.\n\n This callback adds relevant information that is tracked by Optuna to TensorBoard.\n\n See `the example <https://github.com/optuna/optuna/blob/master/\n examples/tensorboard_simple.py>`_.\n\n Args:\n dirname:\n Directory to store TensorBoard logs.\n metric_name:\n Name of the metric. Since the metric itself is just a number,\n `metric_name` can be used to give it a name. So you know later\n if it was roc-auc or accuracy.\n\n \"\"\"\n\n def __init__(self, dirname: str, metric_name: str) -> None:\n _imports.check()\n self._dirname = dirname\n self._metric_name = metric_name\n self._hp_params: Dict[str, hp.HParam] = {}\n\n def __call__(self, study: optuna.study.Study, trial: optuna.trial.FrozenTrial) -> None:\n if len(self._hp_params) == 0:\n self._initialization(study)\n if trial.state != optuna.trial.TrialState.COMPLETE:\n return\n trial_value = trial.value if trial.value is not None else float(\"nan\")\n hparams = {}\n for param_name, param_value in trial.params.items():\n if param_name not in self._hp_params:\n self._add_distributions(trial.distributions)\n hparams[self._hp_params[param_name]] = param_value\n run_name = \"trial-%d\" % trial.number\n run_dir = os.path.join(self._dirname, run_name)\n with tf.summary.create_file_writer(run_dir).as_default():\n hp.hparams(hparams, trial_id=run_name) # record the values used in this trial\n tf.summary.scalar(self._metric_name, trial_value, step=trial.number)\n\n def _add_distributions(\n self, distributions: Dict[str, optuna.distributions.BaseDistribution]\n ) -> None:\n real_distributions = (\n optuna.distributions.UniformDistribution,\n optuna.distributions.LogUniformDistribution,\n optuna.distributions.DiscreteUniformDistribution,\n )\n int_distributions = (optuna.distributions.IntUniformDistribution,)\n categorical_distributions = (optuna.distributions.CategoricalDistribution,)\n supported_distributions = (\n real_distributions + int_distributions + categorical_distributions\n )\n\n for param_name, param_distribution in distributions.items():\n if isinstance(param_distribution, real_distributions):\n self._hp_params[param_name] = hp.HParam(\n param_name,\n hp.RealInterval(float(param_distribution.low), float(param_distribution.high)),\n )\n elif isinstance(param_distribution, int_distributions):\n self._hp_params[param_name] = hp.HParam(\n param_name,\n hp.IntInterval(param_distribution.low, param_distribution.high),\n )\n elif isinstance(param_distribution, categorical_distributions):\n self._hp_params[param_name] = hp.HParam(\n param_name,\n hp.Discrete(param_distribution.choices),\n )\n else:\n distribution_list = [\n distribution.__name__ for distribution in supported_distributions\n ]\n raise 
NotImplementedError(\n \"The distribution {} is not implemented. \"\n \"The parameter distribution should be one of the {}\".format(\n param_distribution, distribution_list\n )\n )\n\n def _initialization(self, study: optuna.Study) -> None:\n completed_trials = [\n trial\n for trial in study.get_trials(deepcopy=False)\n if trial.state == optuna.trial.TrialState.COMPLETE\n ]\n for trial in completed_trials:\n self._add_distributions(trial.distributions)\n", "path": "optuna/integration/tensorboard.py"}]}
| 2,042 | 606 |
gh_patches_debug_6477
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-2483
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`clean_stale_db --force` kills demo databases newer than 3 days
## Description
`clean_stale_db --force` is meant to only kill demo databases older than 3 days (by default), but that doesn't seem to be the case.
## Additional context
https://github.com/centerofci/mathesar/blob/master/demo/management/commands/clean_stale_db.py
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `demo/management/commands/clean_stale_db.py`
Content:
```
1 from datetime import timedelta
2
3 from django.conf import settings
4 from django.core.management import BaseCommand
5 from django.utils.timezone import now
6 from sqlalchemy import text
7 from sqlalchemy.exc import OperationalError
8
9 from db import engine
10 from db.metadata import get_empty_metadata
11 from mathesar.models.base import Database
12 from mathesar.state.django import reflect_db_objects
13
14
15 class Command(BaseCommand):
16 help = 'Cleans up the stale database created during live demo'
17
18 def add_arguments(self, parser):
19 parser.add_argument(
20 '--force',
21 action='store_true',
22 help='Force delete a database even if it in use'
23 )
24 parser.add_argument(
25 '--max-days',
26 action='store',
27 type=int,
28 default=3,
29 help='A database is considered for deletion if it has existed for more than --max-days',
30 )
31
32 def handle(self, *args, **options):
33 drop_all_stale_databases(*args, **options)
34
35
36 def drop_all_stale_databases(force=False, max_days=3, *args, **kwargs):
37 excluded_databases = [
38 settings.DATABASES["default"]["NAME"],
39 settings.DATABASES["mathesar_tables"]["NAME"],
40 getattr(settings, "MATHESAR_DEMO_TEMPLATE", None),
41 # Exclude Postgres default databases
42 'postgres',
43 'template0',
44 'template1'
45 ]
46 stale_databases = Database.objects.filter(created_at__lt=now() - timedelta(minutes=max_days))
47 deleted_databases = []
48 for database in stale_databases:
49 if database.name not in excluded_databases and database.deleted is False:
50 dropped = drop_mathesar_database(
51 database.name,
52 username=settings.DATABASES["default"]["USER"],
53 password=settings.DATABASES["default"]["PASSWORD"],
54 hostname=settings.DATABASES["default"]["HOST"],
55 root_database=settings.DATABASES["default"]["NAME"],
56 port=settings.DATABASES["default"]["PORT"],
57 force=force
58 )
59 if dropped:
60 deleted_databases.append(database.name)
61 database.delete()
62 reflect_db_objects(get_empty_metadata())
63 return deleted_databases
64
65
66 def drop_mathesar_database(
67 user_database, username, password, hostname, root_database, port, force=False
68 ):
69 user_db_engine = engine.create_future_engine(
70 username, password, hostname, user_database, port
71 )
72 try:
73 user_db_engine.connect()
74 except OperationalError:
75 # Non existent db object
76 user_db_engine.dispose()
77 return True
78 else:
79 try:
80 root_db_engine = engine.create_future_engine(
81 username, password, hostname, root_database, port,
82 )
83 with root_db_engine.connect() as conn:
84 conn.execution_options(isolation_level="AUTOCOMMIT")
85 delete_stmt = f"DROP DATABASE {user_database} {'WITH (FORCE)' if force else ''}"
86 conn.execute(text(delete_stmt))
87 # This database is not created using a config file,
88 # so their objects can be safety deleted
89 # as they won't be created again during reflection
90 return True
91 except OperationalError:
92 # Database is in use, ignore
93 pass
94 return False
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/demo/management/commands/clean_stale_db.py b/demo/management/commands/clean_stale_db.py
--- a/demo/management/commands/clean_stale_db.py
+++ b/demo/management/commands/clean_stale_db.py
@@ -43,7 +43,7 @@
'template0',
'template1'
]
- stale_databases = Database.objects.filter(created_at__lt=now() - timedelta(minutes=max_days))
+ stale_databases = Database.objects.filter(created_at__lt=now() - timedelta(days=max_days))
deleted_databases = []
for database in stale_databases:
if database.name not in excluded_databases and database.deleted is False:
|
{"golden_diff": "diff --git a/demo/management/commands/clean_stale_db.py b/demo/management/commands/clean_stale_db.py\n--- a/demo/management/commands/clean_stale_db.py\n+++ b/demo/management/commands/clean_stale_db.py\n@@ -43,7 +43,7 @@\n 'template0',\n 'template1'\n ]\n- stale_databases = Database.objects.filter(created_at__lt=now() - timedelta(minutes=max_days))\n+ stale_databases = Database.objects.filter(created_at__lt=now() - timedelta(days=max_days))\n deleted_databases = []\n for database in stale_databases:\n if database.name not in excluded_databases and database.deleted is False:\n", "issue": "`clean_stale_db --force` kills demo databases newer than 3 days\n## Description\r\n`clean_stale_db --force` is meant to only kill demo databases older than 3 days (by default), but that doesn't seem to be the case.\r\n\r\n## Additional context\r\n\r\nhttps://github.com/centerofci/mathesar/blob/master/demo/management/commands/clean_stale_db.py\n", "before_files": [{"content": "from datetime import timedelta\n\nfrom django.conf import settings\nfrom django.core.management import BaseCommand\nfrom django.utils.timezone import now\nfrom sqlalchemy import text\nfrom sqlalchemy.exc import OperationalError\n\nfrom db import engine\nfrom db.metadata import get_empty_metadata\nfrom mathesar.models.base import Database\nfrom mathesar.state.django import reflect_db_objects\n\n\nclass Command(BaseCommand):\n help = 'Cleans up the stale database created during live demo'\n\n def add_arguments(self, parser):\n parser.add_argument(\n '--force',\n action='store_true',\n help='Force delete a database even if it in use'\n )\n parser.add_argument(\n '--max-days',\n action='store',\n type=int,\n default=3,\n help='A database is considered for deletion if it has existed for more than --max-days',\n )\n\n def handle(self, *args, **options):\n drop_all_stale_databases(*args, **options)\n\n\ndef drop_all_stale_databases(force=False, max_days=3, *args, **kwargs):\n excluded_databases = [\n settings.DATABASES[\"default\"][\"NAME\"],\n settings.DATABASES[\"mathesar_tables\"][\"NAME\"],\n getattr(settings, \"MATHESAR_DEMO_TEMPLATE\", None),\n # Exclude Postgres default databases\n 'postgres',\n 'template0',\n 'template1'\n ]\n stale_databases = Database.objects.filter(created_at__lt=now() - timedelta(minutes=max_days))\n deleted_databases = []\n for database in stale_databases:\n if database.name not in excluded_databases and database.deleted is False:\n dropped = drop_mathesar_database(\n database.name,\n username=settings.DATABASES[\"default\"][\"USER\"],\n password=settings.DATABASES[\"default\"][\"PASSWORD\"],\n hostname=settings.DATABASES[\"default\"][\"HOST\"],\n root_database=settings.DATABASES[\"default\"][\"NAME\"],\n port=settings.DATABASES[\"default\"][\"PORT\"],\n force=force\n )\n if dropped:\n deleted_databases.append(database.name)\n database.delete()\n reflect_db_objects(get_empty_metadata())\n return deleted_databases\n\n\ndef drop_mathesar_database(\n user_database, username, password, hostname, root_database, port, force=False\n):\n user_db_engine = engine.create_future_engine(\n username, password, hostname, user_database, port\n )\n try:\n user_db_engine.connect()\n except OperationalError:\n # Non existent db object\n user_db_engine.dispose()\n return True\n else:\n try:\n root_db_engine = engine.create_future_engine(\n username, password, hostname, root_database, port,\n )\n with root_db_engine.connect() as conn:\n conn.execution_options(isolation_level=\"AUTOCOMMIT\")\n delete_stmt = f\"DROP 
DATABASE {user_database} {'WITH (FORCE)' if force else ''}\"\n conn.execute(text(delete_stmt))\n # This database is not created using a config file,\n # so their objects can be safety deleted\n # as they won't be created again during reflection\n return True\n except OperationalError:\n # Database is in use, ignore\n pass\n return False\n", "path": "demo/management/commands/clean_stale_db.py"}], "after_files": [{"content": "from datetime import timedelta\n\nfrom django.conf import settings\nfrom django.core.management import BaseCommand\nfrom django.utils.timezone import now\nfrom sqlalchemy import text\nfrom sqlalchemy.exc import OperationalError\n\nfrom db import engine\nfrom db.metadata import get_empty_metadata\nfrom mathesar.models.base import Database\nfrom mathesar.state.django import reflect_db_objects\n\n\nclass Command(BaseCommand):\n help = 'Cleans up the stale database created during live demo'\n\n def add_arguments(self, parser):\n parser.add_argument(\n '--force',\n action='store_true',\n help='Force delete a database even if it in use'\n )\n parser.add_argument(\n '--max-days',\n action='store',\n type=int,\n default=3,\n help='A database is considered for deletion if it has existed for more than --max-days',\n )\n\n def handle(self, *args, **options):\n drop_all_stale_databases(*args, **options)\n\n\ndef drop_all_stale_databases(force=False, max_days=3, *args, **kwargs):\n excluded_databases = [\n settings.DATABASES[\"default\"][\"NAME\"],\n settings.DATABASES[\"mathesar_tables\"][\"NAME\"],\n getattr(settings, \"MATHESAR_DEMO_TEMPLATE\", None),\n # Exclude Postgres default databases\n 'postgres',\n 'template0',\n 'template1'\n ]\n stale_databases = Database.objects.filter(created_at__lt=now() - timedelta(days=max_days))\n deleted_databases = []\n for database in stale_databases:\n if database.name not in excluded_databases and database.deleted is False:\n dropped = drop_mathesar_database(\n database.name,\n username=settings.DATABASES[\"default\"][\"USER\"],\n password=settings.DATABASES[\"default\"][\"PASSWORD\"],\n hostname=settings.DATABASES[\"default\"][\"HOST\"],\n root_database=settings.DATABASES[\"default\"][\"NAME\"],\n port=settings.DATABASES[\"default\"][\"PORT\"],\n force=force\n )\n if dropped:\n deleted_databases.append(database.name)\n database.delete()\n reflect_db_objects(get_empty_metadata())\n return deleted_databases\n\n\ndef drop_mathesar_database(\n user_database, username, password, hostname, root_database, port, force=False\n):\n user_db_engine = engine.create_future_engine(\n username, password, hostname, user_database, port\n )\n try:\n user_db_engine.connect()\n except OperationalError:\n # Non existent db object\n user_db_engine.dispose()\n return True\n else:\n try:\n root_db_engine = engine.create_future_engine(\n username, password, hostname, root_database, port,\n )\n with root_db_engine.connect() as conn:\n conn.execution_options(isolation_level=\"AUTOCOMMIT\")\n delete_stmt = f\"DROP DATABASE {user_database} {'WITH (FORCE)' if force else ''}\"\n conn.execute(text(delete_stmt))\n # This database is not created using a config file,\n # so their objects can be safety deleted\n # as they won't be created again during reflection\n return True\n except OperationalError:\n # Database is in use, ignore\n pass\n return False\n", "path": "demo/management/commands/clean_stale_db.py"}]}
| 1,201 | 153 |
gh_patches_debug_7289
|
rasdani/github-patches
|
git_diff
|
beetbox__beets-1492
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
importfeeds: name of m3u_multi playlist gets messed up when both m3u* options are on
Activate both m3u output formats:
```
importfeeds:
formats: m3u m3u_multi
```
Result: the m3u_multi filename is not `<date> <track/album name>` as expected
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `beetsplug/importfeeds.py`
Content:
```
1 # This file is part of beets.
2 # Copyright 2015, Fabrice Laporte.
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining
5 # a copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish,
8 # distribute, sublicense, and/or sell copies of the Software, and to
9 # permit persons to whom the Software is furnished to do so, subject to
10 # the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be
13 # included in all copies or substantial portions of the Software.
14
15 from __future__ import (division, absolute_import, print_function,
16 unicode_literals)
17
18 """Write paths of imported files in various formats to ease later import in a
19 music player. Also allow printing the new file locations to stdout in case
20 one wants to manually add music to a player by its path.
21 """
22 import datetime
23 import os
24 import re
25
26 from beets.plugins import BeetsPlugin
27 from beets.util import mkdirall, normpath, syspath, bytestring_path
28 from beets import config
29
30 M3U_DEFAULT_NAME = 'imported.m3u'
31
32
33 def _get_feeds_dir(lib):
34 """Given a Library object, return the path to the feeds directory to be
35 used (either in the library directory or an explicitly configured
36 path). Ensures that the directory exists.
37 """
38 # Inside library directory.
39 dirpath = lib.directory
40
41 # Ensure directory exists.
42 if not os.path.exists(syspath(dirpath)):
43 os.makedirs(syspath(dirpath))
44 return dirpath
45
46
47 def _build_m3u_filename(basename):
48 """Builds unique m3u filename by appending given basename to current
49 date."""
50
51 basename = re.sub(r"[\s,/\\'\"]", '_', basename)
52 date = datetime.datetime.now().strftime("%Y%m%d_%Hh%M")
53 path = normpath(os.path.join(
54 config['importfeeds']['dir'].as_filename(),
55 date + '_' + basename + '.m3u'
56 ))
57 return path
58
59
60 def _write_m3u(m3u_path, items_paths):
61 """Append relative paths to items into m3u file.
62 """
63 mkdirall(m3u_path)
64 with open(syspath(m3u_path), 'a') as f:
65 for path in items_paths:
66 f.write(path + b'\n')
67
68
69 class ImportFeedsPlugin(BeetsPlugin):
70 def __init__(self):
71 super(ImportFeedsPlugin, self).__init__()
72
73 self.config.add({
74 'formats': [],
75 'm3u_name': u'imported.m3u',
76 'dir': None,
77 'relative_to': None,
78 'absolute_path': False,
79 })
80
81 feeds_dir = self.config['dir'].get()
82 if feeds_dir:
83 feeds_dir = os.path.expanduser(bytestring_path(feeds_dir))
84 self.config['dir'] = feeds_dir
85 if not os.path.exists(syspath(feeds_dir)):
86 os.makedirs(syspath(feeds_dir))
87
88 relative_to = self.config['relative_to'].get()
89 if relative_to:
90 self.config['relative_to'] = normpath(relative_to)
91 else:
92 self.config['relative_to'] = feeds_dir
93
94 self.register_listener('library_opened', self.library_opened)
95 self.register_listener('album_imported', self.album_imported)
96 self.register_listener('item_imported', self.item_imported)
97
98 def _record_items(self, lib, basename, items):
99 """Records relative paths to the given items for each feed format
100 """
101 feedsdir = bytestring_path(self.config['dir'].as_filename())
102 formats = self.config['formats'].as_str_seq()
103 relative_to = self.config['relative_to'].get() \
104 or self.config['dir'].as_filename()
105 relative_to = bytestring_path(relative_to)
106
107 paths = []
108 for item in items:
109 if self.config['absolute_path']:
110 paths.append(item.path)
111 else:
112 try:
113 relpath = os.path.relpath(item.path, relative_to)
114 except ValueError:
115 # On Windows, it is sometimes not possible to construct a
116 # relative path (if the files are on different disks).
117 relpath = item.path
118 paths.append(relpath)
119
120 if 'm3u' in formats:
121 basename = bytestring_path(
122 self.config['m3u_name'].get(unicode)
123 )
124 m3u_path = os.path.join(feedsdir, basename)
125 _write_m3u(m3u_path, paths)
126
127 if 'm3u_multi' in formats:
128 m3u_path = _build_m3u_filename(basename)
129 _write_m3u(m3u_path, paths)
130
131 if 'link' in formats:
132 for path in paths:
133 dest = os.path.join(feedsdir, os.path.basename(path))
134 if not os.path.exists(syspath(dest)):
135 os.symlink(syspath(path), syspath(dest))
136
137 if 'echo' in formats:
138 self._log.info("Location of imported music:")
139 for path in paths:
140 self._log.info(" {0}", path)
141
142 def library_opened(self, lib):
143 if self.config['dir'].get() is None:
144 self.config['dir'] = _get_feeds_dir(lib)
145
146 def album_imported(self, lib, album):
147 self._record_items(lib, album.album, album.items())
148
149 def item_imported(self, lib, item):
150 self._record_items(lib, item.title, [item])
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/beetsplug/importfeeds.py b/beetsplug/importfeeds.py
--- a/beetsplug/importfeeds.py
+++ b/beetsplug/importfeeds.py
@@ -118,10 +118,9 @@
paths.append(relpath)
if 'm3u' in formats:
- basename = bytestring_path(
- self.config['m3u_name'].get(unicode)
- )
- m3u_path = os.path.join(feedsdir, basename)
+ m3u_basename = bytestring_path(
+ self.config['m3u_name'].get(unicode))
+ m3u_path = os.path.join(feedsdir, m3u_basename)
_write_m3u(m3u_path, paths)
if 'm3u_multi' in formats:
|
{"golden_diff": "diff --git a/beetsplug/importfeeds.py b/beetsplug/importfeeds.py\n--- a/beetsplug/importfeeds.py\n+++ b/beetsplug/importfeeds.py\n@@ -118,10 +118,9 @@\n paths.append(relpath)\n \n if 'm3u' in formats:\n- basename = bytestring_path(\n- self.config['m3u_name'].get(unicode)\n- )\n- m3u_path = os.path.join(feedsdir, basename)\n+ m3u_basename = bytestring_path(\n+ self.config['m3u_name'].get(unicode))\n+ m3u_path = os.path.join(feedsdir, m3u_basename)\n _write_m3u(m3u_path, paths)\n \n if 'm3u_multi' in formats:\n", "issue": "importfeeds: name of m3u_multi playlist get messed up when both m3u* options are on\nactivate both m3u output formats \n\n```\n importfeeds:\n formats: m3u m3u_multi\n```\n\nResult : m3u_multi filename is not `<date> <track/album name>` as expected\n\n", "before_files": [{"content": "# This file is part of beets.\n# Copyright 2015, Fabrice Laporte.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\nfrom __future__ import (division, absolute_import, print_function,\n unicode_literals)\n\n\"\"\"Write paths of imported files in various formats to ease later import in a\nmusic player. Also allow printing the new file locations to stdout in case\none wants to manually add music to a player by its path.\n\"\"\"\nimport datetime\nimport os\nimport re\n\nfrom beets.plugins import BeetsPlugin\nfrom beets.util import mkdirall, normpath, syspath, bytestring_path\nfrom beets import config\n\nM3U_DEFAULT_NAME = 'imported.m3u'\n\n\ndef _get_feeds_dir(lib):\n \"\"\"Given a Library object, return the path to the feeds directory to be\n used (either in the library directory or an explicitly configured\n path). 
Ensures that the directory exists.\n \"\"\"\n # Inside library directory.\n dirpath = lib.directory\n\n # Ensure directory exists.\n if not os.path.exists(syspath(dirpath)):\n os.makedirs(syspath(dirpath))\n return dirpath\n\n\ndef _build_m3u_filename(basename):\n \"\"\"Builds unique m3u filename by appending given basename to current\n date.\"\"\"\n\n basename = re.sub(r\"[\\s,/\\\\'\\\"]\", '_', basename)\n date = datetime.datetime.now().strftime(\"%Y%m%d_%Hh%M\")\n path = normpath(os.path.join(\n config['importfeeds']['dir'].as_filename(),\n date + '_' + basename + '.m3u'\n ))\n return path\n\n\ndef _write_m3u(m3u_path, items_paths):\n \"\"\"Append relative paths to items into m3u file.\n \"\"\"\n mkdirall(m3u_path)\n with open(syspath(m3u_path), 'a') as f:\n for path in items_paths:\n f.write(path + b'\\n')\n\n\nclass ImportFeedsPlugin(BeetsPlugin):\n def __init__(self):\n super(ImportFeedsPlugin, self).__init__()\n\n self.config.add({\n 'formats': [],\n 'm3u_name': u'imported.m3u',\n 'dir': None,\n 'relative_to': None,\n 'absolute_path': False,\n })\n\n feeds_dir = self.config['dir'].get()\n if feeds_dir:\n feeds_dir = os.path.expanduser(bytestring_path(feeds_dir))\n self.config['dir'] = feeds_dir\n if not os.path.exists(syspath(feeds_dir)):\n os.makedirs(syspath(feeds_dir))\n\n relative_to = self.config['relative_to'].get()\n if relative_to:\n self.config['relative_to'] = normpath(relative_to)\n else:\n self.config['relative_to'] = feeds_dir\n\n self.register_listener('library_opened', self.library_opened)\n self.register_listener('album_imported', self.album_imported)\n self.register_listener('item_imported', self.item_imported)\n\n def _record_items(self, lib, basename, items):\n \"\"\"Records relative paths to the given items for each feed format\n \"\"\"\n feedsdir = bytestring_path(self.config['dir'].as_filename())\n formats = self.config['formats'].as_str_seq()\n relative_to = self.config['relative_to'].get() \\\n or self.config['dir'].as_filename()\n relative_to = bytestring_path(relative_to)\n\n paths = []\n for item in items:\n if self.config['absolute_path']:\n paths.append(item.path)\n else:\n try:\n relpath = os.path.relpath(item.path, relative_to)\n except ValueError:\n # On Windows, it is sometimes not possible to construct a\n # relative path (if the files are on different disks).\n relpath = item.path\n paths.append(relpath)\n\n if 'm3u' in formats:\n basename = bytestring_path(\n self.config['m3u_name'].get(unicode)\n )\n m3u_path = os.path.join(feedsdir, basename)\n _write_m3u(m3u_path, paths)\n\n if 'm3u_multi' in formats:\n m3u_path = _build_m3u_filename(basename)\n _write_m3u(m3u_path, paths)\n\n if 'link' in formats:\n for path in paths:\n dest = os.path.join(feedsdir, os.path.basename(path))\n if not os.path.exists(syspath(dest)):\n os.symlink(syspath(path), syspath(dest))\n\n if 'echo' in formats:\n self._log.info(\"Location of imported music:\")\n for path in paths:\n self._log.info(\" {0}\", path)\n\n def library_opened(self, lib):\n if self.config['dir'].get() is None:\n self.config['dir'] = _get_feeds_dir(lib)\n\n def album_imported(self, lib, album):\n self._record_items(lib, album.album, album.items())\n\n def item_imported(self, lib, item):\n self._record_items(lib, item.title, [item])\n", "path": "beetsplug/importfeeds.py"}], "after_files": [{"content": "# This file is part of beets.\n# Copyright 2015, Fabrice Laporte.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation 
files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\nfrom __future__ import (division, absolute_import, print_function,\n unicode_literals)\n\n\"\"\"Write paths of imported files in various formats to ease later import in a\nmusic player. Also allow printing the new file locations to stdout in case\none wants to manually add music to a player by its path.\n\"\"\"\nimport datetime\nimport os\nimport re\n\nfrom beets.plugins import BeetsPlugin\nfrom beets.util import mkdirall, normpath, syspath, bytestring_path\nfrom beets import config\n\nM3U_DEFAULT_NAME = 'imported.m3u'\n\n\ndef _get_feeds_dir(lib):\n \"\"\"Given a Library object, return the path to the feeds directory to be\n used (either in the library directory or an explicitly configured\n path). Ensures that the directory exists.\n \"\"\"\n # Inside library directory.\n dirpath = lib.directory\n\n # Ensure directory exists.\n if not os.path.exists(syspath(dirpath)):\n os.makedirs(syspath(dirpath))\n return dirpath\n\n\ndef _build_m3u_filename(basename):\n \"\"\"Builds unique m3u filename by appending given basename to current\n date.\"\"\"\n\n basename = re.sub(r\"[\\s,/\\\\'\\\"]\", '_', basename)\n date = datetime.datetime.now().strftime(\"%Y%m%d_%Hh%M\")\n path = normpath(os.path.join(\n config['importfeeds']['dir'].as_filename(),\n date + '_' + basename + '.m3u'\n ))\n return path\n\n\ndef _write_m3u(m3u_path, items_paths):\n \"\"\"Append relative paths to items into m3u file.\n \"\"\"\n mkdirall(m3u_path)\n with open(syspath(m3u_path), 'a') as f:\n for path in items_paths:\n f.write(path + b'\\n')\n\n\nclass ImportFeedsPlugin(BeetsPlugin):\n def __init__(self):\n super(ImportFeedsPlugin, self).__init__()\n\n self.config.add({\n 'formats': [],\n 'm3u_name': u'imported.m3u',\n 'dir': None,\n 'relative_to': None,\n 'absolute_path': False,\n })\n\n feeds_dir = self.config['dir'].get()\n if feeds_dir:\n feeds_dir = os.path.expanduser(bytestring_path(feeds_dir))\n self.config['dir'] = feeds_dir\n if not os.path.exists(syspath(feeds_dir)):\n os.makedirs(syspath(feeds_dir))\n\n relative_to = self.config['relative_to'].get()\n if relative_to:\n self.config['relative_to'] = normpath(relative_to)\n else:\n self.config['relative_to'] = feeds_dir\n\n self.register_listener('library_opened', self.library_opened)\n self.register_listener('album_imported', self.album_imported)\n self.register_listener('item_imported', self.item_imported)\n\n def _record_items(self, lib, basename, items):\n \"\"\"Records relative paths to the given items for each feed format\n \"\"\"\n feedsdir = bytestring_path(self.config['dir'].as_filename())\n formats = self.config['formats'].as_str_seq()\n relative_to = self.config['relative_to'].get() \\\n or self.config['dir'].as_filename()\n relative_to = bytestring_path(relative_to)\n\n paths = []\n for item in items:\n if self.config['absolute_path']:\n paths.append(item.path)\n else:\n try:\n relpath = os.path.relpath(item.path, relative_to)\n except ValueError:\n # On Windows, it is sometimes not possible to construct a\n # relative path (if the files are on different disks).\n relpath = item.path\n 
paths.append(relpath)\n\n if 'm3u' in formats:\n m3u_basename = bytestring_path(\n self.config['m3u_name'].get(unicode))\n m3u_path = os.path.join(feedsdir, m3u_basename)\n _write_m3u(m3u_path, paths)\n\n if 'm3u_multi' in formats:\n m3u_path = _build_m3u_filename(basename)\n _write_m3u(m3u_path, paths)\n\n if 'link' in formats:\n for path in paths:\n dest = os.path.join(feedsdir, os.path.basename(path))\n if not os.path.exists(syspath(dest)):\n os.symlink(syspath(path), syspath(dest))\n\n if 'echo' in formats:\n self._log.info(\"Location of imported music:\")\n for path in paths:\n self._log.info(\" {0}\", path)\n\n def library_opened(self, lib):\n if self.config['dir'].get() is None:\n self.config['dir'] = _get_feeds_dir(lib)\n\n def album_imported(self, lib, album):\n self._record_items(lib, album.album, album.items())\n\n def item_imported(self, lib, item):\n self._record_items(lib, item.title, [item])\n", "path": "beetsplug/importfeeds.py"}]}
| 1,901 | 180 |
gh_patches_debug_66681
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-16793
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Please add Brand24 to the public list of Pants Users
### Company name
Brand24
### Company website
https://brand24.com
### Company logo

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `build-support/bin/generate_user_list.py`
Content:
```
1 #!/usr/bin/env python3
2 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
3 # Licensed under the Apache License, Version 2.0 (see LICENSE).
4
5 from __future__ import annotations
6
7 import pkgutil
8 from dataclasses import dataclass
9
10 import chevron
11
12 """Generates the custom HTML/CSS block in https://www.pantsbuild.org/docs/who-uses-pants .
13
14 To add new companies or make other changes, edit and run this script, then paste the output
15 into that block instead of its current content. Be sure to check that the page renders properly
16 and be prepared to revert (via the "Page history" link) if necessary.
17
18 On MacOS it's useful to pipe the output of this script into pbcopy, so it's in the clipboard
19 ready to be pasted:
20
21 ./pants run build-support/bin/generate_user_list.py | pbcopy
22
23 NOTE: Please consider adding your company/organization to this list! If you wish to do so then
24 thank you, and please follow the guidance at https://pantsbuild.org/register.
25 """
26
27 # Note: To create an image URL, temporarily add an image block to some page on readme.com (such
28 # as the user list page itself), and upload the logo image (after appropriate resizing in GIMP
29 # or your tool of choice). Do NOT save the page. Instead, right-click to capture the image URL
30 # from the preview in the edit page, and then remove the image block.
31
32
33 @dataclass
34 class Org:
35 name: str
36 website: str
37 image: str | None
38
39
40 # Orgs will be displayed in case-insensitive alphabetical order, but it's useful for human readers
41 # to keep this list in that order too.
42 _orgs = (
43 Org(
44 "Chartbeat", "https://chartbeat.com/", "https://files.readme.io/861ace7-chartbeat-small.png"
45 ),
46 Org(
47 "Coinbase",
48 "https://www.coinbase.com/",
49 "https://files.readme.io/a213f0f-coinbase-small.png",
50 ),
51 Org(
52 "ESL Gaming",
53 "https://about.eslgaming.com/",
54 "https://files.readme.io/b63d33d-esl-small.png",
55 ),
56 Org(
57 "Foursquare",
58 "https://foursquare.com/",
59 "https://files.readme.io/aa53b52-foursquare-small.png",
60 ),
61 Org(
62 "Geminus",
63 "https://www.geminus.ai/",
64 "https://files.readme.io/0da3c3f-geminus-small.png",
65 ),
66 Org("Grapl", "https://www.graplsecurity.com/", "https://files.readme.io/341b9cd-grapl.png"),
67 Org(
68 "HousingAnywhere",
69 "https://housinganywhere.com/",
70 "https://files.readme.io/dd2a703-housinganywhere-small.png",
71 ),
72 Org("IBM", "https://www.ibm.com/", None),
73 Org("iManage", "https://imanage.com/", "https://files.readme.io/0f7b5f6-imanage-small.png"),
74 Org("Lablup", "https://lablup.com/", "https://files.readme.io/a94d375-lablup-small.png"),
75 Org("Myst AI", "https://www.myst.ai/", "https://files.readme.io/802d8fa-myst_ai_small.png"),
76 Org("Ocrolus", "https://www.ocrolus.com/", "https://files.readme.io/ff166fa-ocrolus-small.png"),
77 Org(
78 "Orca Security",
79 "https://orca.security/",
80 "https://files.readme.io/e87f6c5-Orca_Security-small.png",
81 ),
82 Org("Pave", "https://www.pave.dev/", "https://files.readme.io/924aa3e-pave-small.png"),
83 Org(
84 "People Data Labs",
85 "https://www.peopledatalabs.com/",
86 "https://files.readme.io/8c4f5cd-peopledatalabs-small.png",
87 ),
88 Org(
89 "Rippling",
90 "https://www.rippling.com/",
91 "https://files.readme.io/c8be3a1-rippling-small.png",
92 ),
93 Org(
94 "Snowfall",
95 "https://snowfalltravel.com/",
96 "https://files.readme.io/245f03e-snowfall-small.png",
97 ),
98 Org(
99 "Tessian",
100 "https://www.tessian.com",
101 "https://files.readme.io/6ef9d57-tessian-small.png",
102 ),
103 Org(
104 "Toolchain",
105 "https://www.toolchain.com/",
106 "https://files.readme.io/43d674d-toolchain_logo_small.png",
107 ),
108 Org("Valon", "https://valon.com/", "https://files.readme.io/df5216a-valon-small.png"),
109 Org(
110 "Vicara Solutions",
111 "https://vicarasolutions.com/",
112 "https://files.readme.io/1748a22-vicara-solutions.png",
113 ),
114 )
115
116
117 @dataclass
118 class OrgPair:
119 a: Org
120 b: Org
121
122
123 def main():
124 orgs = sorted(_orgs, key=lambda x: x.name.lower())
125 # Ensure an even number of cells, leaving one to render blankly if necessary.
126 if len(orgs) % 2 == 1:
127 orgs.append(Org("", "", ""))
128 org_pairs = tuple(OrgPair(orgs[i], orgs[i + 1]) for i in range(0, len(orgs), 2))
129 buf = pkgutil.get_data("generate_user_list", "user_list_templates/table.html.mustache")
130 print(chevron.render(buf.decode(), data={"org_pairs": org_pairs}))
131
132
133 if __name__ == "__main__":
134 main()
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/build-support/bin/generate_user_list.py b/build-support/bin/generate_user_list.py
--- a/build-support/bin/generate_user_list.py
+++ b/build-support/bin/generate_user_list.py
@@ -40,6 +40,7 @@
# Orgs will be displayed in case-insensitive alphabetical order, but it's useful for human readers
# to keep this list in that order too.
_orgs = (
+ Org("Brand24", "https://brand24.com/", "https://files.readme.io/e3203d1-brand24-small.png"),
Org(
"Chartbeat", "https://chartbeat.com/", "https://files.readme.io/861ace7-chartbeat-small.png"
),
|
{"golden_diff": "diff --git a/build-support/bin/generate_user_list.py b/build-support/bin/generate_user_list.py\n--- a/build-support/bin/generate_user_list.py\n+++ b/build-support/bin/generate_user_list.py\n@@ -40,6 +40,7 @@\n # Orgs will be displayed in case-insensitive alphabetical order, but it's useful for human readers\n # to keep this list in that order too.\n _orgs = (\n+ Org(\"Brand24\", \"https://brand24.com/\", \"https://files.readme.io/e3203d1-brand24-small.png\"),\n Org(\n \"Chartbeat\", \"https://chartbeat.com/\", \"https://files.readme.io/861ace7-chartbeat-small.png\"\n ),\n", "issue": "Please add Brand24 to the public list of Pants Users\n### Company name\n\nBrand24\n\n### Company website\n\nhttps://brand24.com\n\n### Company logo\n\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport pkgutil\nfrom dataclasses import dataclass\n\nimport chevron\n\n\"\"\"Generates the custom HTML/CSS block in https://www.pantsbuild.org/docs/who-uses-pants .\n\nTo add new companies or make other changes, edit and run this script, then paste the output\ninto that block instead of its current content. Be sure to check that the page renders properly\nand be prepared to revert (via the \"Page history\" link) if necessary.\n\nOn MacOS it's useful to pipe the output of this script into pbcopy, so it's in the clipboard\nready to be pasted:\n\n./pants run build-support/bin/generate_user_list.py | pbcopy\n\nNOTE: Please consider adding your company/organization to this list! If you wish to do so then\n thank you, and please follow the guidance at https://pantsbuild.org/register.\n\"\"\"\n\n# Note: To create an image URL, temporarily add an image block to some page on readme.com (such\n# as the user list page itself), and upload the logo image (after appropriate resizing in GIMP\n# or your tool of choice). Do NOT save the page. 
Instead, right-click to capture the image URL\n# from the preview in the edit page, and then remove the image block.\n\n\n@dataclass\nclass Org:\n name: str\n website: str\n image: str | None\n\n\n# Orgs will be displayed in case-insensitive alphabetical order, but it's useful for human readers\n# to keep this list in that order too.\n_orgs = (\n Org(\n \"Chartbeat\", \"https://chartbeat.com/\", \"https://files.readme.io/861ace7-chartbeat-small.png\"\n ),\n Org(\n \"Coinbase\",\n \"https://www.coinbase.com/\",\n \"https://files.readme.io/a213f0f-coinbase-small.png\",\n ),\n Org(\n \"ESL Gaming\",\n \"https://about.eslgaming.com/\",\n \"https://files.readme.io/b63d33d-esl-small.png\",\n ),\n Org(\n \"Foursquare\",\n \"https://foursquare.com/\",\n \"https://files.readme.io/aa53b52-foursquare-small.png\",\n ),\n Org(\n \"Geminus\",\n \"https://www.geminus.ai/\",\n \"https://files.readme.io/0da3c3f-geminus-small.png\",\n ),\n Org(\"Grapl\", \"https://www.graplsecurity.com/\", \"https://files.readme.io/341b9cd-grapl.png\"),\n Org(\n \"HousingAnywhere\",\n \"https://housinganywhere.com/\",\n \"https://files.readme.io/dd2a703-housinganywhere-small.png\",\n ),\n Org(\"IBM\", \"https://www.ibm.com/\", None),\n Org(\"iManage\", \"https://imanage.com/\", \"https://files.readme.io/0f7b5f6-imanage-small.png\"),\n Org(\"Lablup\", \"https://lablup.com/\", \"https://files.readme.io/a94d375-lablup-small.png\"),\n Org(\"Myst AI\", \"https://www.myst.ai/\", \"https://files.readme.io/802d8fa-myst_ai_small.png\"),\n Org(\"Ocrolus\", \"https://www.ocrolus.com/\", \"https://files.readme.io/ff166fa-ocrolus-small.png\"),\n Org(\n \"Orca Security\",\n \"https://orca.security/\",\n \"https://files.readme.io/e87f6c5-Orca_Security-small.png\",\n ),\n Org(\"Pave\", \"https://www.pave.dev/\", \"https://files.readme.io/924aa3e-pave-small.png\"),\n Org(\n \"People Data Labs\",\n \"https://www.peopledatalabs.com/\",\n \"https://files.readme.io/8c4f5cd-peopledatalabs-small.png\",\n ),\n Org(\n \"Rippling\",\n \"https://www.rippling.com/\",\n \"https://files.readme.io/c8be3a1-rippling-small.png\",\n ),\n Org(\n \"Snowfall\",\n \"https://snowfalltravel.com/\",\n \"https://files.readme.io/245f03e-snowfall-small.png\",\n ),\n Org(\n \"Tessian\",\n \"https://www.tessian.com\",\n \"https://files.readme.io/6ef9d57-tessian-small.png\",\n ),\n Org(\n \"Toolchain\",\n \"https://www.toolchain.com/\",\n \"https://files.readme.io/43d674d-toolchain_logo_small.png\",\n ),\n Org(\"Valon\", \"https://valon.com/\", \"https://files.readme.io/df5216a-valon-small.png\"),\n Org(\n \"Vicara Solutions\",\n \"https://vicarasolutions.com/\",\n \"https://files.readme.io/1748a22-vicara-solutions.png\",\n ),\n)\n\n\n@dataclass\nclass OrgPair:\n a: Org\n b: Org\n\n\ndef main():\n orgs = sorted(_orgs, key=lambda x: x.name.lower())\n # Ensure an even number of cells, leaving one to render blankly if necessary.\n if len(orgs) % 2 == 1:\n orgs.append(Org(\"\", \"\", \"\"))\n org_pairs = tuple(OrgPair(orgs[i], orgs[i + 1]) for i in range(0, len(orgs), 2))\n buf = pkgutil.get_data(\"generate_user_list\", \"user_list_templates/table.html.mustache\")\n print(chevron.render(buf.decode(), data={\"org_pairs\": org_pairs}))\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "build-support/bin/generate_user_list.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport pkgutil\nfrom 
dataclasses import dataclass\n\nimport chevron\n\n\"\"\"Generates the custom HTML/CSS block in https://www.pantsbuild.org/docs/who-uses-pants .\n\nTo add new companies or make other changes, edit and run this script, then paste the output\ninto that block instead of its current content. Be sure to check that the page renders properly\nand be prepared to revert (via the \"Page history\" link) if necessary.\n\nOn MacOS it's useful to pipe the output of this script into pbcopy, so it's in the clipboard\nready to be pasted:\n\n./pants run build-support/bin/generate_user_list.py | pbcopy\n\nNOTE: Please consider adding your company/organization to this list! If you wish to do so then\n thank you, and please follow the guidance at https://pantsbuild.org/register.\n\"\"\"\n\n# Note: To create an image URL, temporarily add an image block to some page on readme.com (such\n# as the user list page itself), and upload the logo image (after appropriate resizing in GIMP\n# or your tool of choice). Do NOT save the page. Instead, right-click to capture the image URL\n# from the preview in the edit page, and then remove the image block.\n\n\n@dataclass\nclass Org:\n name: str\n website: str\n image: str | None\n\n\n# Orgs will be displayed in case-insensitive alphabetical order, but it's useful for human readers\n# to keep this list in that order too.\n_orgs = (\n Org(\"Brand24\", \"https://brand24.com/\", \"https://files.readme.io/e3203d1-brand24-small.png\"),\n Org(\n \"Chartbeat\", \"https://chartbeat.com/\", \"https://files.readme.io/861ace7-chartbeat-small.png\"\n ),\n Org(\n \"Coinbase\",\n \"https://www.coinbase.com/\",\n \"https://files.readme.io/a213f0f-coinbase-small.png\",\n ),\n Org(\n \"ESL Gaming\",\n \"https://about.eslgaming.com/\",\n \"https://files.readme.io/b63d33d-esl-small.png\",\n ),\n Org(\n \"Foursquare\",\n \"https://foursquare.com/\",\n \"https://files.readme.io/aa53b52-foursquare-small.png\",\n ),\n Org(\n \"Geminus\",\n \"https://www.geminus.ai/\",\n \"https://files.readme.io/0da3c3f-geminus-small.png\",\n ),\n Org(\"Grapl\", \"https://www.graplsecurity.com/\", \"https://files.readme.io/341b9cd-grapl.png\"),\n Org(\n \"HousingAnywhere\",\n \"https://housinganywhere.com/\",\n \"https://files.readme.io/dd2a703-housinganywhere-small.png\",\n ),\n Org(\"IBM\", \"https://www.ibm.com/\", None),\n Org(\"iManage\", \"https://imanage.com/\", \"https://files.readme.io/0f7b5f6-imanage-small.png\"),\n Org(\"Lablup\", \"https://lablup.com/\", \"https://files.readme.io/a94d375-lablup-small.png\"),\n Org(\"Myst AI\", \"https://www.myst.ai/\", \"https://files.readme.io/802d8fa-myst_ai_small.png\"),\n Org(\"Ocrolus\", \"https://www.ocrolus.com/\", \"https://files.readme.io/ff166fa-ocrolus-small.png\"),\n Org(\n \"Orca Security\",\n \"https://orca.security/\",\n \"https://files.readme.io/e87f6c5-Orca_Security-small.png\",\n ),\n Org(\"Pave\", \"https://www.pave.dev/\", \"https://files.readme.io/924aa3e-pave-small.png\"),\n Org(\n \"People Data Labs\",\n \"https://www.peopledatalabs.com/\",\n \"https://files.readme.io/8c4f5cd-peopledatalabs-small.png\",\n ),\n Org(\n \"Rippling\",\n \"https://www.rippling.com/\",\n \"https://files.readme.io/c8be3a1-rippling-small.png\",\n ),\n Org(\n \"Snowfall\",\n \"https://snowfalltravel.com/\",\n \"https://files.readme.io/245f03e-snowfall-small.png\",\n ),\n Org(\n \"Tessian\",\n \"https://www.tessian.com\",\n \"https://files.readme.io/6ef9d57-tessian-small.png\",\n ),\n Org(\n \"Toolchain\",\n \"https://www.toolchain.com/\",\n 
\"https://files.readme.io/43d674d-toolchain_logo_small.png\",\n ),\n Org(\"Valon\", \"https://valon.com/\", \"https://files.readme.io/df5216a-valon-small.png\"),\n Org(\n \"Vicara Solutions\",\n \"https://vicarasolutions.com/\",\n \"https://files.readme.io/1748a22-vicara-solutions.png\",\n ),\n)\n\n\n@dataclass\nclass OrgPair:\n a: Org\n b: Org\n\n\ndef main():\n orgs = sorted(_orgs, key=lambda x: x.name.lower())\n # Ensure an even number of cells, leaving one to render blankly if necessary.\n if len(orgs) % 2 == 1:\n orgs.append(Org(\"\", \"\", \"\"))\n org_pairs = tuple(OrgPair(orgs[i], orgs[i + 1]) for i in range(0, len(orgs), 2))\n buf = pkgutil.get_data(\"generate_user_list\", \"user_list_templates/table.html.mustache\")\n print(chevron.render(buf.decode(), data={\"org_pairs\": org_pairs}))\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "build-support/bin/generate_user_list.py"}]}
| 2,003 | 163 |
gh_patches_debug_50802
|
rasdani/github-patches
|
git_diff
|
googleapis__google-cloud-python-1481
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pubsub fails if data key is not present
If a message is published with a string of 0 length (`topic.publish( '', url=url, title=title)`), then when the message is received there is no data field in the message and a `KeyError` is thrown when trying to transform the message from the PubSub API representation.
https://github.com/GoogleCloudPlatform/gcloud-python/blob/master/gcloud/pubsub/message.py#L74
```
Traceback (most recent call last):
File "/en_notifications/en_notifications.py", line 51, in <module>
received = PS_SUBSCRIPTION.pull(max_messages=PULL_COUNT)
File "/usr/local/lib/python2.7/dist-packages/gcloud/pubsub/subscription.py", line 212, in pull
File "/usr/local/lib/python2.7/dist-packages/gcloud/pubsub/message.py", line 74, in from_api_repr
for info in response.get('receivedMessages', ())]
data = base64.b64decode(api_repr['data'])
KeyError: 'data'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gcloud/pubsub/message.py`
Content:
```
1 # Copyright 2015 Google Inc. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Define API Topics."""
16
17 import base64
18
19 from gcloud._helpers import _rfc3339_to_datetime
20
21
22 class Message(object):
23 """Messages can be published to a topic and received by subscribers.
24
25 See:
26 https://cloud.google.com/pubsub/reference/rest/v1/PubsubMessage
27
28 :type data: bytes
29 :param data: the payload of the message
30
31 :type message_id: string
32 :param message_id: An ID assigned to the message by the API.
33
34 :type attributes: dict or None
35 :param attributes: Extra metadata associated by the publisher with the
36 message.
37 """
38 def __init__(self, data, message_id, attributes=None):
39 self.data = data
40 self.message_id = message_id
41 self._attributes = attributes
42
43 @property
44 def attributes(self):
45 """Lazily-constructed attribute dictionary"""
46 if self._attributes is None:
47 self._attributes = {}
48 return self._attributes
49
50 @property
51 def timestamp(self):
52 """Return sortable timestamp from attributes, if passed.
53
54 Allows sorting messages in publication order (assuming consistent
55 clocks across all publishers).
56
57 :rtype: :class:`datetime.datetime`
58 :returns: timestamp (in UTC timezone) parsed from RFC 3339 timestamp
59 :raises: ValueError if timestamp not in ``attributes``, or if it does
60 not match the RFC 3339 format.
61 """
62 stamp = self.attributes.get('timestamp')
63 if stamp is None:
64 raise ValueError('No timestamp')
65 return _rfc3339_to_datetime(stamp)
66
67 @classmethod
68 def from_api_repr(cls, api_repr):
69 """Factory: construct message from API representation.
70
71 :type api_repr: dict or None
72 :param api_repr: The API representation of the message
73 """
74 data = base64.b64decode(api_repr['data'])
75 return cls(data=data, message_id=api_repr['messageId'],
76 attributes=api_repr.get('attributes'))
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gcloud/pubsub/message.py b/gcloud/pubsub/message.py
--- a/gcloud/pubsub/message.py
+++ b/gcloud/pubsub/message.py
@@ -71,6 +71,6 @@
:type api_repr: dict or None
:param api_repr: The API representation of the message
"""
- data = base64.b64decode(api_repr['data'])
+ data = base64.b64decode(api_repr.get('data', b''))
return cls(data=data, message_id=api_repr['messageId'],
attributes=api_repr.get('attributes'))
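A standalone sketch (not from the library; the message payloads and IDs below are invented) of why the patched line works: with a `b''` default, `base64.b64decode` simply yields an empty payload when the API representation has no `data` key, instead of raising `KeyError`.

```python
import base64

# Two simulated API representations: one normal message, and one published with an
# empty string, which comes back from the Pub/Sub API with no 'data' key at all.
with_data = {"data": base64.b64encode(b"hello"), "messageId": "1"}
without_data = {"messageId": "2", "attributes": {"title": "t"}}

for api_repr in (with_data, without_data):
    # Patched behavior: fall back to an empty payload instead of raising KeyError.
    data = base64.b64decode(api_repr.get("data", b""))
    print(api_repr["messageId"], repr(data))
# 1 b'hello'
# 2 b''
```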
|
{"golden_diff": "diff --git a/gcloud/pubsub/message.py b/gcloud/pubsub/message.py\n--- a/gcloud/pubsub/message.py\n+++ b/gcloud/pubsub/message.py\n@@ -71,6 +71,6 @@\n :type api_repr: dict or None\n :param api_repr: The API representation of the message\n \"\"\"\n- data = base64.b64decode(api_repr['data'])\n+ data = base64.b64decode(api_repr.get('data', b''))\n return cls(data=data, message_id=api_repr['messageId'],\n attributes=api_repr.get('attributes'))\n", "issue": "pubsub fails if data key is not present\nIf a message is published with a string of 0 length (`topic.publish( '', url=url, title=title)`) when the message is received there is no data field in the message and a key error is thrown when trying to transform the message from the PubSub API representation.\n\nhttps://github.com/GoogleCloudPlatform/gcloud-python/blob/master/gcloud/pubsub/message.py#L74\n\n```\nTraceback (most recent call last):\nFile \"/en_notifications/en_notifications.py\", line 51, in <module>\nreceived = PS_SUBSCRIPTION.pull(max_messages=PULL_COUNT)\nFile \"/usr/local/lib/python2.7/dist-packages/gcloud/pubsub/subscription.py\", line 212, in pull\nFile \"/usr/local/lib/python2.7/dist-packages/gcloud/pubsub/message.py\", line 74, in from_api_repr\nfor info in response.get('receivedMessages', ())]\ndata = base64.b64decode(api_repr['data'])\nKeyError: 'data'\n```\n\n", "before_files": [{"content": "# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Define API Topics.\"\"\"\n\nimport base64\n\nfrom gcloud._helpers import _rfc3339_to_datetime\n\n\nclass Message(object):\n \"\"\"Messages can be published to a topic and received by subscribers.\n\n See:\n https://cloud.google.com/pubsub/reference/rest/v1/PubsubMessage\n\n :type data: bytes\n :param data: the payload of the message\n\n :type message_id: string\n :param message_id: An ID assigned to the message by the API.\n\n :type attributes: dict or None\n :param attributes: Extra metadata associated by the publisher with the\n message.\n \"\"\"\n def __init__(self, data, message_id, attributes=None):\n self.data = data\n self.message_id = message_id\n self._attributes = attributes\n\n @property\n def attributes(self):\n \"\"\"Lazily-constructed attribute dictionary\"\"\"\n if self._attributes is None:\n self._attributes = {}\n return self._attributes\n\n @property\n def timestamp(self):\n \"\"\"Return sortable timestamp from attributes, if passed.\n\n Allows sorting messages in publication order (assuming consistent\n clocks across all publishers).\n\n :rtype: :class:`datetime.datetime`\n :returns: timestamp (in UTC timezone) parsed from RFC 3339 timestamp\n :raises: ValueError if timestamp not in ``attributes``, or if it does\n not match the RFC 3339 format.\n \"\"\"\n stamp = self.attributes.get('timestamp')\n if stamp is None:\n raise ValueError('No timestamp')\n return _rfc3339_to_datetime(stamp)\n\n @classmethod\n def from_api_repr(cls, api_repr):\n \"\"\"Factory: construct message from API 
representation.\n\n :type api_repr: dict or None\n :param api_repr: The API representation of the message\n \"\"\"\n data = base64.b64decode(api_repr['data'])\n return cls(data=data, message_id=api_repr['messageId'],\n attributes=api_repr.get('attributes'))\n", "path": "gcloud/pubsub/message.py"}], "after_files": [{"content": "# Copyright 2015 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Define API Topics.\"\"\"\n\nimport base64\n\nfrom gcloud._helpers import _rfc3339_to_datetime\n\n\nclass Message(object):\n \"\"\"Messages can be published to a topic and received by subscribers.\n\n See:\n https://cloud.google.com/pubsub/reference/rest/v1/PubsubMessage\n\n :type data: bytes\n :param data: the payload of the message\n\n :type message_id: string\n :param message_id: An ID assigned to the message by the API.\n\n :type attributes: dict or None\n :param attributes: Extra metadata associated by the publisher with the\n message.\n \"\"\"\n def __init__(self, data, message_id, attributes=None):\n self.data = data\n self.message_id = message_id\n self._attributes = attributes\n\n @property\n def attributes(self):\n \"\"\"Lazily-constructed attribute dictionary\"\"\"\n if self._attributes is None:\n self._attributes = {}\n return self._attributes\n\n @property\n def timestamp(self):\n \"\"\"Return sortable timestamp from attributes, if passed.\n\n Allows sorting messages in publication order (assuming consistent\n clocks across all publishers).\n\n :rtype: :class:`datetime.datetime`\n :returns: timestamp (in UTC timezone) parsed from RFC 3339 timestamp\n :raises: ValueError if timestamp not in ``attributes``, or if it does\n not match the RFC 3339 format.\n \"\"\"\n stamp = self.attributes.get('timestamp')\n if stamp is None:\n raise ValueError('No timestamp')\n return _rfc3339_to_datetime(stamp)\n\n @classmethod\n def from_api_repr(cls, api_repr):\n \"\"\"Factory: construct message from API representation.\n\n :type api_repr: dict or None\n :param api_repr: The API representation of the message\n \"\"\"\n data = base64.b64decode(api_repr.get('data', b''))\n return cls(data=data, message_id=api_repr['messageId'],\n attributes=api_repr.get('attributes'))\n", "path": "gcloud/pubsub/message.py"}]}
| 1,204 | 133 |
gh_patches_debug_1643
|
rasdani/github-patches
|
git_diff
|
dask__distributed-367
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OverflowError when sending large sparse arrays
I don't yet have a small reproducible example, but I can make this happen every time I try to collect many large sparse arrays. I do have a notebook that will produce it though, and can make that available. The traceback:
```
Traceback (most recent call last):
File "/home/jcrist/miniconda/envs/dask_learn/lib/python2.7/site-packages/distributed/core.py", line 266, in write
frames = protocol.dumps(msg)
File "/home/jcrist/miniconda/envs/dask_learn/lib/python2.7/site-packages/distributed/protocol.py", line 81, in dumps
frames = dumps_msgpack(small)
File "/home/jcrist/miniconda/envs/dask_learn/lib/python2.7/site-packages/distributed/protocol.py", line 155, in dumps_msgpack
fmt, payload = maybe_compress(payload)
File "/home/jcrist/miniconda/envs/dask_learn/lib/python2.7/site-packages/distributed/protocol.py", line 137, in maybe_compress
compressed = compress(payload)
OverflowError: size does not fit in an int
```
A few notes:
- Each array is roughly `675000 x 745`, and ~1% dense. The total bytes for indices + indptr + data is ~40MB each.
- I can get each array individually, so it's not a problem with a chunk being too large
- The error appears only when I'm collecting enough at once (for my size, 39 and lower works fine).
- At 41 arrays I get the above error; 40 arrays gives me a different (but probably related) error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-55-7b87709b6c67> in <module>()
----> 1 res = t.compute()
/home/jcrist/dask/dask/base.pyc in compute(self, **kwargs)
84 Extra keywords to forward to the scheduler ``get`` function.
85 """
---> 86 return compute(self, **kwargs)[0]
87
88 @classmethod
/home/jcrist/dask/dask/base.pyc in compute(*args, **kwargs)
177 dsk = merge(var.dask for var in variables)
178 keys = [var._keys() for var in variables]
--> 179 results = get(dsk, keys, **kwargs)
180
181 results_iter = iter(results)
/home/jcrist/miniconda/envs/dask_learn/lib/python2.7/site-packages/distributed/executor.pyc in get(self, dsk, keys, **kwargs)
1008
1009 if status == 'error':
-> 1010 raise result
1011 else:
1012 return result
ValueError: corrupt input at byte 2
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `distributed/protocol.py`
Content:
```
1 """
2 The distributed message protocol consists of the following parts:
3
4 1. The length of the header, stored as a uint32
5 2. The header, stored as msgpack.
6 If there are no fields in the header then we skip it entirely.
7 3. The payload, stored as possibly compressed msgpack
8 4. A sentinel value
9
10 **Header**
11
12 The Header contains the following fields:
13
14 * **compression**: string, optional
15 One of the following: ``'snappy', 'lz4', 'zlib'`` or missing for None
16
17 **Payload**
18
19 The payload is any msgpack serializable value. It may be compressed based
20 on the header.
21
22 **Sentinel**
23
24 We often terminate each message with a sentinel value. This happens
25 outside of this module though and is not baked in.
26 """
27 from __future__ import print_function, division, absolute_import
28
29 import random
30 import struct
31
32 try:
33 import pandas.msgpack as msgpack
34 except ImportError:
35 import msgpack
36
37 from toolz import first, keymap, identity, merge
38
39 from .utils import ignoring
40 from .compatibility import unicode
41
42
43 compressions = {None: {'compress': identity,
44 'decompress': identity}}
45
46 default_compression = None
47
48
49 with ignoring(ImportError):
50 import zlib
51 compressions['zlib'] = {'compress': zlib.compress,
52 'decompress': zlib.decompress}
53
54 with ignoring(ImportError):
55 import snappy
56 compressions['snappy'] = {'compress': snappy.compress,
57 'decompress': snappy.decompress}
58 default_compression = 'snappy'
59
60 with ignoring(ImportError):
61 import lz4
62 compressions['lz4'] = {'compress': lz4.LZ4_compress,
63 'decompress': lz4.LZ4_uncompress}
64 default_compression = 'lz4'
65
66
67 def dumps(msg):
68 """ Transform Python value to bytestream suitable for communication """
69 small_header = {}
70
71 if isinstance(msg, dict):
72 big = {k: v for k, v in msg.items()
73 if isinstance(v, bytes) and len(v) > 1e6}
74 else:
75 big = False
76 if big:
77 small = {k: v for k, v in msg.items() if k not in big}
78 else:
79 small = msg
80
81 frames = dumps_msgpack(small)
82 if big:
83 frames += dumps_big_byte_dict(big)
84
85 return frames
86
87
88 def loads(frames):
89 """ Transform bytestream back into Python value """
90 header, payload, frames = frames[0], frames[1], frames[2:]
91 msg = loads_msgpack(header, payload)
92
93 if frames:
94 big = loads_big_byte_dict(*frames)
95 msg.update(big)
96
97 return msg
98
99
100 def byte_sample(b, size, n):
101 """ Sample a bytestring from many locations """
102 starts = [random.randint(0, len(b) - size) for j in range(n)]
103 ends = []
104 for i, start in enumerate(starts[:-1]):
105 ends.append(min(start + size, starts[i + 1]))
106 ends.append(starts[-1] + size)
107
108 return b''.join([b[start:end] for start, end in zip(starts, ends)])
109
110
111 def maybe_compress(payload, compression=default_compression, min_size=1e4,
112 sample_size=1e4, nsamples=5):
113 """ Maybe compress payload
114
115 1. We don't compress small messages
116 2. We sample the payload in a few spots, compress that, and if it doesn't
117 do any good we return the original
118 3. We then compress the full original, it it doesn't compress well then we
119 return the original
120 4. We return the compressed result
121 """
122 if not compression:
123 return None, payload
124 if len(payload) < min_size:
125 return None, payload
126
127 min_size = int(min_size)
128 sample_size = int(sample_size)
129
130 compress = compressions[compression]['compress']
131
132 # Compress a sample, return original if not very compressed
133 sample = byte_sample(payload, sample_size, nsamples)
134 if len(compress(sample)) > 0.9 * len(sample): # not very compressible
135 return None, payload
136
137 compressed = compress(payload)
138 if len(compressed) > 0.9 * len(payload): # not very compressible
139 return None, payload
140
141 return compression, compress(payload)
142
143
144 def dumps_msgpack(msg):
145 """ Dump msg into header and payload, both bytestrings
146
147 All of the message must be msgpack encodable
148
149 See Also:
150 loads_msgpack
151 """
152 header = {}
153 payload = msgpack.dumps(msg, use_bin_type=True)
154
155 fmt, payload = maybe_compress(payload)
156 if fmt:
157 header['compression'] = fmt
158
159 if header:
160 header_bytes = msgpack.dumps(header, use_bin_type=True)
161 else:
162 header_bytes = b''
163
164 return [header_bytes, payload]
165
166
167 def loads_msgpack(header, payload):
168 """ Read msgpack header and payload back to Python object
169
170 See Also:
171 dumps_msgpack
172 """
173 if header:
174 header = msgpack.loads(header, encoding='utf8')
175 else:
176 header = {}
177
178 if header.get('compression'):
179 try:
180 decompress = compressions[header['compression']]['decompress']
181 payload = decompress(payload)
182 except KeyError:
183 raise ValueError("Data is compressed as %s but we don't have this"
184 " installed" % header['compression'].decode())
185
186 return msgpack.loads(payload, encoding='utf8')
187
188
189 def dumps_big_byte_dict(d):
190 """ Serialize large byte dictionary to sequence of frames
191
192 The input must be a dictionary and all values of that dictionary must be
193 bytestrings. These should probably be large.
194
195 Returns a sequence of frames, one header followed by each of the values
196
197 See Also:
198 loads_big_byte_dict
199 """
200 assert isinstance(d, dict) and all(isinstance(v, bytes) for v in d.values())
201 shards = {}
202 for k, v in list(d.items()):
203 if len(v) >= 2**31:
204 L = []
205 for i, j in enumerate(range(0, len(v), 2**30)):
206 key = '.shard-%d-%s' % (i, k)
207 d[key] = v[j: j + 2**30]
208 L.append(key)
209 del d[k]
210 shards[k] = L
211
212 keys, values = zip(*d.items())
213
214 compress = compressions[default_compression]['compress']
215 compression = []
216 values2 = []
217 for v in values:
218 fmt, vv = maybe_compress(v)
219 compression.append(fmt)
220 values2.append(vv)
221
222 header = {'encoding': 'big-byte-dict',
223 'keys': keys,
224 'compression': compression}
225 if shards:
226 header['shards'] = shards
227
228 return [msgpack.dumps(header, use_bin_type=True)] + values2
229
230
231 def loads_big_byte_dict(header, *values):
232 """ Deserialize big-byte frames to large byte dictionary
233
234 See Also:
235 dumps_big_byte_dict
236 """
237 header = msgpack.loads(header, encoding='utf8')
238
239 values2 = [compressions[c]['decompress'](v)
240 for c, v in zip(header['compression'], values)]
241 result = dict(zip(header['keys'], values2))
242
243 for k, keys in header.get('shards', {}).items():
244 result[k] = b''.join(result.pop(kk) for kk in keys)
245 return result
246
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/distributed/protocol.py b/distributed/protocol.py
--- a/distributed/protocol.py
+++ b/distributed/protocol.py
@@ -123,6 +123,8 @@
return None, payload
if len(payload) < min_size:
return None, payload
+ if len(payload) > 2**31:
+ return None, payload
min_size = int(min_size)
sample_size = int(sample_size)
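A simplified stand-in for `maybe_compress` (zlib only, sizes shrunk so it actually runs, and the constant name is invented here) sketching the guard the patch adds: frames larger than the compressor bindings' C-int limit are passed through uncompressed rather than risking the `OverflowError`.

```python
import zlib

MAX_COMPRESSIBLE = 2 ** 31  # compressors whose bindings use C-int sizes overflow beyond ~2 GiB

def maybe_compress(payload: bytes, min_size: int = 10_000):
    """Return (compression, payload), skipping compression for tiny or huge frames."""
    if len(payload) < min_size:
        return None, payload
    if len(payload) > MAX_COMPRESSIBLE:
        return None, payload
    return "zlib", zlib.compress(payload)

print(maybe_compress(b"x" * 100)[0])     # None: too small to bother
print(maybe_compress(b"x" * 50_000)[0])  # 'zlib'
```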
|
{"golden_diff": "diff --git a/distributed/protocol.py b/distributed/protocol.py\n--- a/distributed/protocol.py\n+++ b/distributed/protocol.py\n@@ -123,6 +123,8 @@\n return None, payload\n if len(payload) < min_size:\n return None, payload\n+ if len(payload) > 2**31:\n+ return None, payload\n \n min_size = int(min_size)\n sample_size = int(sample_size)\n", "issue": "OverflowError when sending large sparse arrays\nI don't yet have a small reproducible example, but I can make this happen every time I try to collect many large sparse arrays. I do have a notebook that will produce it though, and can make that available. The traceback:\n\n```\nTraceback (most recent call last):\n File \"/home/jcrist/miniconda/envs/dask_learn/lib/python2.7/site-packages/distributed/core.py\", line 266, in write\n frames = protocol.dumps(msg)\n File \"/home/jcrist/miniconda/envs/dask_learn/lib/python2.7/site-packages/distributed/protocol.py\", line 81, in dumps\n frames = dumps_msgpack(small)\n File \"/home/jcrist/miniconda/envs/dask_learn/lib/python2.7/site-packages/distributed/protocol.py\", line 155, in dumps_msgpack\n fmt, payload = maybe_compress(payload)\n File \"/home/jcrist/miniconda/envs/dask_learn/lib/python2.7/site-packages/distributed/protocol.py\", line 137, in maybe_compress\n compressed = compress(payload)\nOverflowError: size does not fit in an int\n```\n\nA few notes:\n- Each array is roughly `675000 x 745`, and ~1% dense. The total bytes for indices + indptr + data is ~40MB each.\n- I can get each array individually, so it's not a problem with a chunk being too large\n- The error appears only when I'm collecting enough at once (for my size, 39 and and lower works fine).\n- At 41 arrays I get the above error, 40 arrays gives me a different (but probably related) error:\n\n```\n---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\n<ipython-input-55-7b87709b6c67> in <module>()\n----> 1 res = t.compute()\n\n/home/jcrist/dask/dask/base.pyc in compute(self, **kwargs)\n 84 Extra keywords to forward to the scheduler ``get`` function.\n 85 \"\"\"\n---> 86 return compute(self, **kwargs)[0]\n 87 \n 88 @classmethod\n\n/home/jcrist/dask/dask/base.pyc in compute(*args, **kwargs)\n 177 dsk = merge(var.dask for var in variables)\n 178 keys = [var._keys() for var in variables]\n--> 179 results = get(dsk, keys, **kwargs)\n 180 \n 181 results_iter = iter(results)\n\n/home/jcrist/miniconda/envs/dask_learn/lib/python2.7/site-packages/distributed/executor.pyc in get(self, dsk, keys, **kwargs)\n 1008 \n 1009 if status == 'error':\n-> 1010 raise result\n 1011 else:\n 1012 return result\n\nValueError: corrupt input at byte 2\n```\n\n", "before_files": [{"content": "\"\"\"\nThe distributed message protocol consists of the following parts:\n\n1. The length of the header, stored as a uint32\n2. The header, stored as msgpack.\n If there are no fields in the header then we skip it entirely.\n3. The payload, stored as possibly compressed msgpack\n4. A sentinel value\n\n**Header**\n\nThe Header contains the following fields:\n\n* **compression**: string, optional\n One of the following: ``'snappy', 'lz4', 'zlib'`` or missing for None\n\n**Payload**\n\nThe payload is any msgpack serializable value. It may be compressed based\non the header.\n\n**Sentinel**\n\nWe often terminate each message with a sentinel value. 
This happens\noutside of this module though and is not baked in.\n\"\"\"\nfrom __future__ import print_function, division, absolute_import\n\nimport random\nimport struct\n\ntry:\n import pandas.msgpack as msgpack\nexcept ImportError:\n import msgpack\n\nfrom toolz import first, keymap, identity, merge\n\nfrom .utils import ignoring\nfrom .compatibility import unicode\n\n\ncompressions = {None: {'compress': identity,\n 'decompress': identity}}\n\ndefault_compression = None\n\n\nwith ignoring(ImportError):\n import zlib\n compressions['zlib'] = {'compress': zlib.compress,\n 'decompress': zlib.decompress}\n\nwith ignoring(ImportError):\n import snappy\n compressions['snappy'] = {'compress': snappy.compress,\n 'decompress': snappy.decompress}\n default_compression = 'snappy'\n\nwith ignoring(ImportError):\n import lz4\n compressions['lz4'] = {'compress': lz4.LZ4_compress,\n 'decompress': lz4.LZ4_uncompress}\n default_compression = 'lz4'\n\n\ndef dumps(msg):\n \"\"\" Transform Python value to bytestream suitable for communication \"\"\"\n small_header = {}\n\n if isinstance(msg, dict):\n big = {k: v for k, v in msg.items()\n if isinstance(v, bytes) and len(v) > 1e6}\n else:\n big = False\n if big:\n small = {k: v for k, v in msg.items() if k not in big}\n else:\n small = msg\n\n frames = dumps_msgpack(small)\n if big:\n frames += dumps_big_byte_dict(big)\n\n return frames\n\n\ndef loads(frames):\n \"\"\" Transform bytestream back into Python value \"\"\"\n header, payload, frames = frames[0], frames[1], frames[2:]\n msg = loads_msgpack(header, payload)\n\n if frames:\n big = loads_big_byte_dict(*frames)\n msg.update(big)\n\n return msg\n\n\ndef byte_sample(b, size, n):\n \"\"\" Sample a bytestring from many locations \"\"\"\n starts = [random.randint(0, len(b) - size) for j in range(n)]\n ends = []\n for i, start in enumerate(starts[:-1]):\n ends.append(min(start + size, starts[i + 1]))\n ends.append(starts[-1] + size)\n\n return b''.join([b[start:end] for start, end in zip(starts, ends)])\n\n\ndef maybe_compress(payload, compression=default_compression, min_size=1e4,\n sample_size=1e4, nsamples=5):\n \"\"\" Maybe compress payload\n\n 1. We don't compress small messages\n 2. We sample the payload in a few spots, compress that, and if it doesn't\n do any good we return the original\n 3. We then compress the full original, it it doesn't compress well then we\n return the original\n 4. 
We return the compressed result\n \"\"\"\n if not compression:\n return None, payload\n if len(payload) < min_size:\n return None, payload\n\n min_size = int(min_size)\n sample_size = int(sample_size)\n\n compress = compressions[compression]['compress']\n\n # Compress a sample, return original if not very compressed\n sample = byte_sample(payload, sample_size, nsamples)\n if len(compress(sample)) > 0.9 * len(sample): # not very compressible\n return None, payload\n\n compressed = compress(payload)\n if len(compressed) > 0.9 * len(payload): # not very compressible\n return None, payload\n\n return compression, compress(payload)\n\n\ndef dumps_msgpack(msg):\n \"\"\" Dump msg into header and payload, both bytestrings\n\n All of the message must be msgpack encodable\n\n See Also:\n loads_msgpack\n \"\"\"\n header = {}\n payload = msgpack.dumps(msg, use_bin_type=True)\n\n fmt, payload = maybe_compress(payload)\n if fmt:\n header['compression'] = fmt\n\n if header:\n header_bytes = msgpack.dumps(header, use_bin_type=True)\n else:\n header_bytes = b''\n\n return [header_bytes, payload]\n\n\ndef loads_msgpack(header, payload):\n \"\"\" Read msgpack header and payload back to Python object\n\n See Also:\n dumps_msgpack\n \"\"\"\n if header:\n header = msgpack.loads(header, encoding='utf8')\n else:\n header = {}\n\n if header.get('compression'):\n try:\n decompress = compressions[header['compression']]['decompress']\n payload = decompress(payload)\n except KeyError:\n raise ValueError(\"Data is compressed as %s but we don't have this\"\n \" installed\" % header['compression'].decode())\n\n return msgpack.loads(payload, encoding='utf8')\n\n\ndef dumps_big_byte_dict(d):\n \"\"\" Serialize large byte dictionary to sequence of frames\n\n The input must be a dictionary and all values of that dictionary must be\n bytestrings. These should probably be large.\n\n Returns a sequence of frames, one header followed by each of the values\n\n See Also:\n loads_big_byte_dict\n \"\"\"\n assert isinstance(d, dict) and all(isinstance(v, bytes) for v in d.values())\n shards = {}\n for k, v in list(d.items()):\n if len(v) >= 2**31:\n L = []\n for i, j in enumerate(range(0, len(v), 2**30)):\n key = '.shard-%d-%s' % (i, k)\n d[key] = v[j: j + 2**30]\n L.append(key)\n del d[k]\n shards[k] = L\n\n keys, values = zip(*d.items())\n\n compress = compressions[default_compression]['compress']\n compression = []\n values2 = []\n for v in values:\n fmt, vv = maybe_compress(v)\n compression.append(fmt)\n values2.append(vv)\n\n header = {'encoding': 'big-byte-dict',\n 'keys': keys,\n 'compression': compression}\n if shards:\n header['shards'] = shards\n\n return [msgpack.dumps(header, use_bin_type=True)] + values2\n\n\ndef loads_big_byte_dict(header, *values):\n \"\"\" Deserialize big-byte frames to large byte dictionary\n\n See Also:\n dumps_big_byte_dict\n \"\"\"\n header = msgpack.loads(header, encoding='utf8')\n\n values2 = [compressions[c]['decompress'](v)\n for c, v in zip(header['compression'], values)]\n result = dict(zip(header['keys'], values2))\n\n for k, keys in header.get('shards', {}).items():\n result[k] = b''.join(result.pop(kk) for kk in keys)\n return result\n", "path": "distributed/protocol.py"}], "after_files": [{"content": "\"\"\"\nThe distributed message protocol consists of the following parts:\n\n1. The length of the header, stored as a uint32\n2. The header, stored as msgpack.\n If there are no fields in the header then we skip it entirely.\n3. The payload, stored as possibly compressed msgpack\n4. 
A sentinel value\n\n**Header**\n\nThe Header contains the following fields:\n\n* **compression**: string, optional\n One of the following: ``'snappy', 'lz4', 'zlib'`` or missing for None\n\n**Payload**\n\nThe payload is any msgpack serializable value. It may be compressed based\non the header.\n\n**Sentinel**\n\nWe often terminate each message with a sentinel value. This happens\noutside of this module though and is not baked in.\n\"\"\"\nfrom __future__ import print_function, division, absolute_import\n\nimport random\nimport struct\n\ntry:\n import pandas.msgpack as msgpack\nexcept ImportError:\n import msgpack\n\nfrom toolz import first, keymap, identity, merge\n\nfrom .utils import ignoring\nfrom .compatibility import unicode\n\n\ncompressions = {None: {'compress': identity,\n 'decompress': identity}}\n\ndefault_compression = None\n\n\nwith ignoring(ImportError):\n import zlib\n compressions['zlib'] = {'compress': zlib.compress,\n 'decompress': zlib.decompress}\n\nwith ignoring(ImportError):\n import snappy\n compressions['snappy'] = {'compress': snappy.compress,\n 'decompress': snappy.decompress}\n default_compression = 'snappy'\n\nwith ignoring(ImportError):\n import lz4\n compressions['lz4'] = {'compress': lz4.LZ4_compress,\n 'decompress': lz4.LZ4_uncompress}\n default_compression = 'lz4'\n\n\ndef dumps(msg):\n \"\"\" Transform Python value to bytestream suitable for communication \"\"\"\n small_header = {}\n\n if isinstance(msg, dict):\n big = {k: v for k, v in msg.items()\n if isinstance(v, bytes) and len(v) > 1e6}\n else:\n big = False\n if big:\n small = {k: v for k, v in msg.items() if k not in big}\n else:\n small = msg\n\n frames = dumps_msgpack(small)\n if big:\n frames += dumps_big_byte_dict(big)\n\n return frames\n\n\ndef loads(frames):\n \"\"\" Transform bytestream back into Python value \"\"\"\n header, payload, frames = frames[0], frames[1], frames[2:]\n msg = loads_msgpack(header, payload)\n\n if frames:\n big = loads_big_byte_dict(*frames)\n msg.update(big)\n\n return msg\n\n\ndef byte_sample(b, size, n):\n \"\"\" Sample a bytestring from many locations \"\"\"\n starts = [random.randint(0, len(b) - size) for j in range(n)]\n ends = []\n for i, start in enumerate(starts[:-1]):\n ends.append(min(start + size, starts[i + 1]))\n ends.append(starts[-1] + size)\n\n return b''.join([b[start:end] for start, end in zip(starts, ends)])\n\n\ndef maybe_compress(payload, compression=default_compression, min_size=1e4,\n sample_size=1e4, nsamples=5):\n \"\"\" Maybe compress payload\n\n 1. We don't compress small messages\n 2. We sample the payload in a few spots, compress that, and if it doesn't\n do any good we return the original\n 3. We then compress the full original, it it doesn't compress well then we\n return the original\n 4. 
We return the compressed result\n \"\"\"\n if not compression:\n return None, payload\n if len(payload) < min_size:\n return None, payload\n if len(payload) > 2**31:\n return None, payload\n\n min_size = int(min_size)\n sample_size = int(sample_size)\n\n compress = compressions[compression]['compress']\n\n # Compress a sample, return original if not very compressed\n sample = byte_sample(payload, sample_size, nsamples)\n if len(compress(sample)) > 0.9 * len(sample): # not very compressible\n return None, payload\n\n compressed = compress(payload)\n if len(compressed) > 0.9 * len(payload): # not very compressible\n return None, payload\n\n return compression, compress(payload)\n\n\ndef dumps_msgpack(msg):\n \"\"\" Dump msg into header and payload, both bytestrings\n\n All of the message must be msgpack encodable\n\n See Also:\n loads_msgpack\n \"\"\"\n header = {}\n payload = msgpack.dumps(msg, use_bin_type=True)\n\n fmt, payload = maybe_compress(payload)\n if fmt:\n header['compression'] = fmt\n\n if header:\n header_bytes = msgpack.dumps(header, use_bin_type=True)\n else:\n header_bytes = b''\n\n return [header_bytes, payload]\n\n\ndef loads_msgpack(header, payload):\n \"\"\" Read msgpack header and payload back to Python object\n\n See Also:\n dumps_msgpack\n \"\"\"\n if header:\n header = msgpack.loads(header, encoding='utf8')\n else:\n header = {}\n\n if header.get('compression'):\n try:\n decompress = compressions[header['compression']]['decompress']\n payload = decompress(payload)\n except KeyError:\n raise ValueError(\"Data is compressed as %s but we don't have this\"\n \" installed\" % header['compression'].decode())\n\n return msgpack.loads(payload, encoding='utf8')\n\n\ndef dumps_big_byte_dict(d):\n \"\"\" Serialize large byte dictionary to sequence of frames\n\n The input must be a dictionary and all values of that dictionary must be\n bytestrings. These should probably be large.\n\n Returns a sequence of frames, one header followed by each of the values\n\n See Also:\n loads_big_byte_dict\n \"\"\"\n assert isinstance(d, dict) and all(isinstance(v, bytes) for v in d.values())\n shards = {}\n for k, v in list(d.items()):\n if len(v) >= 2**31:\n L = []\n for i, j in enumerate(range(0, len(v), 2**30)):\n key = '.shard-%d-%s' % (i, k)\n d[key] = v[j: j + 2**30]\n L.append(key)\n del d[k]\n shards[k] = L\n\n keys, values = zip(*d.items())\n\n compress = compressions[default_compression]['compress']\n compression = []\n values2 = []\n for v in values:\n fmt, vv = maybe_compress(v)\n compression.append(fmt)\n values2.append(vv)\n\n header = {'encoding': 'big-byte-dict',\n 'keys': keys,\n 'compression': compression}\n if shards:\n header['shards'] = shards\n\n return [msgpack.dumps(header, use_bin_type=True)] + values2\n\n\ndef loads_big_byte_dict(header, *values):\n \"\"\" Deserialize big-byte frames to large byte dictionary\n\n See Also:\n dumps_big_byte_dict\n \"\"\"\n header = msgpack.loads(header, encoding='utf8')\n\n values2 = [compressions[c]['decompress'](v)\n for c, v in zip(header['compression'], values)]\n result = dict(zip(header['keys'], values2))\n\n for k, keys in header.get('shards', {}).items():\n result[k] = b''.join(result.pop(kk) for kk in keys)\n return result\n", "path": "distributed/protocol.py"}]}
| 3,271 | 103 |
gh_patches_debug_16958
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-2914
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Minor Bug | metrics.ssim, produces empty tensors in update function
## 🐛 Bug description
In line 165
```
output_list = [outputs[x * y_pred.size(0) : (x + 1) * y_pred.size(0)] for x in range(len(outputs))]
```
the list comprehension produces a list with `len = (Batch_size * 5)`, where only the first 5 elements are valid and correspond to stacked `[y_pred, y, y_pred * y_pred, y * y, y_pred * y]` of all the batches; in cases where the batch size is greater than one, the elements with index > 4 are empty `torch.Tensors` with shape `(0, C, H, W)`.
### Solution
This bug neither affects the output nor consumes a lot of RAM, but I thought I should point it out.
The fix for this is pretty simple: you only need to divide the length of `outputs` by the batch size.
```
output_list = [outputs[x * y_pred.size(0) : (x + 1) * y_pred.size(0)] for x in range(int(len(outputs)/y_pred.size(0)))] # len(outputs) is B*5 so we need to divide it by B so it's only 5 -> [y_pred, y, y_pred * y_pred, y * y, y_pred * y]
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/metrics/ssim.py`
Content:
```
1 from typing import Callable, Sequence, Union
2
3 import torch
4 import torch.nn.functional as F
5
6 from ignite.exceptions import NotComputableError
7 from ignite.metrics.metric import Metric, reinit__is_reduced, sync_all_reduce
8
9 __all__ = ["SSIM"]
10
11
12 class SSIM(Metric):
13 """
14 Computes Structual Similarity Index Measure
15
16 - ``update`` must receive output of the form ``(y_pred, y)``.
17
18 Args:
19 data_range: Range of the image. Typically, ``1.0`` or ``255``.
20 kernel_size: Size of the kernel. Default: (11, 11)
21 sigma: Standard deviation of the gaussian kernel.
22 Argument is used if ``gaussian=True``. Default: (1.5, 1.5)
23 k1: Parameter of SSIM. Default: 0.01
24 k2: Parameter of SSIM. Default: 0.03
25 gaussian: ``True`` to use gaussian kernel, ``False`` to use uniform kernel
26 output_transform: A callable that is used to transform the
27 :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the
28 form expected by the metric.
29 device: specifies which device updates are accumulated on. Setting the metric's
30 device to be the same as your ``update`` arguments ensures the ``update`` method is non-blocking. By
31 default, CPU.
32
33 Examples:
34 To use with ``Engine`` and ``process_function``, simply attach the metric instance to the engine.
35 The output of the engine's ``process_function`` needs to be in the format of
36 ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``. If not, ``output_tranform`` can be added
37 to the metric to transform the output into the form expected by the metric.
38
39 ``y_pred`` and ``y`` can be un-normalized or normalized image tensors. Depending on that, the user might need
40 to adjust ``data_range``. ``y_pred`` and ``y`` should have the same shape.
41
42 For more information on how metric works with :class:`~ignite.engine.engine.Engine`, visit :ref:`attach-engine`.
43
44 .. include:: defaults.rst
45 :start-after: :orphan:
46
47 .. testcode::
48
49 metric = SSIM(data_range=1.0)
50 metric.attach(default_evaluator, 'ssim')
51 preds = torch.rand([4, 3, 16, 16])
52 target = preds * 0.75
53 state = default_evaluator.run([[preds, target]])
54 print(state.metrics['ssim'])
55
56 .. testoutput::
57
58 0.9218971...
59
60 .. versionadded:: 0.4.2
61 """
62
63 def __init__(
64 self,
65 data_range: Union[int, float],
66 kernel_size: Union[int, Sequence[int]] = (11, 11),
67 sigma: Union[float, Sequence[float]] = (1.5, 1.5),
68 k1: float = 0.01,
69 k2: float = 0.03,
70 gaussian: bool = True,
71 output_transform: Callable = lambda x: x,
72 device: Union[str, torch.device] = torch.device("cpu"),
73 ):
74 if isinstance(kernel_size, int):
75 self.kernel_size: Sequence[int] = [kernel_size, kernel_size]
76 elif isinstance(kernel_size, Sequence):
77 self.kernel_size = kernel_size
78 else:
79 raise ValueError("Argument kernel_size should be either int or a sequence of int.")
80
81 if isinstance(sigma, float):
82 self.sigma: Sequence[float] = [sigma, sigma]
83 elif isinstance(sigma, Sequence):
84 self.sigma = sigma
85 else:
86 raise ValueError("Argument sigma should be either float or a sequence of float.")
87
88 if any(x % 2 == 0 or x <= 0 for x in self.kernel_size):
89 raise ValueError(f"Expected kernel_size to have odd positive number. Got {kernel_size}.")
90
91 if any(y <= 0 for y in self.sigma):
92 raise ValueError(f"Expected sigma to have positive number. Got {sigma}.")
93
94 super(SSIM, self).__init__(output_transform=output_transform, device=device)
95 self.gaussian = gaussian
96 self.c1 = (k1 * data_range) ** 2
97 self.c2 = (k2 * data_range) ** 2
98 self.pad_h = (self.kernel_size[0] - 1) // 2
99 self.pad_w = (self.kernel_size[1] - 1) // 2
100 self._kernel = self._gaussian_or_uniform_kernel(kernel_size=self.kernel_size, sigma=self.sigma)
101
102 @reinit__is_reduced
103 def reset(self) -> None:
104 self._sum_of_ssim = torch.tensor(0.0, dtype=torch.float64, device=self._device)
105 self._num_examples = 0
106 self._kernel = self._gaussian_or_uniform_kernel(kernel_size=self.kernel_size, sigma=self.sigma)
107
108 def _uniform(self, kernel_size: int) -> torch.Tensor:
109 max, min = 2.5, -2.5
110 ksize_half = (kernel_size - 1) * 0.5
111 kernel = torch.linspace(-ksize_half, ksize_half, steps=kernel_size, device=self._device)
112 for i, j in enumerate(kernel):
113 if min <= j <= max:
114 kernel[i] = 1 / (max - min)
115 else:
116 kernel[i] = 0
117
118 return kernel.unsqueeze(dim=0) # (1, kernel_size)
119
120 def _gaussian(self, kernel_size: int, sigma: float) -> torch.Tensor:
121 ksize_half = (kernel_size - 1) * 0.5
122 kernel = torch.linspace(-ksize_half, ksize_half, steps=kernel_size, device=self._device)
123 gauss = torch.exp(-0.5 * (kernel / sigma).pow(2))
124 return (gauss / gauss.sum()).unsqueeze(dim=0) # (1, kernel_size)
125
126 def _gaussian_or_uniform_kernel(self, kernel_size: Sequence[int], sigma: Sequence[float]) -> torch.Tensor:
127 if self.gaussian:
128 kernel_x = self._gaussian(kernel_size[0], sigma[0])
129 kernel_y = self._gaussian(kernel_size[1], sigma[1])
130 else:
131 kernel_x = self._uniform(kernel_size[0])
132 kernel_y = self._uniform(kernel_size[1])
133
134 return torch.matmul(kernel_x.t(), kernel_y) # (kernel_size, 1) * (1, kernel_size)
135
136 @reinit__is_reduced
137 def update(self, output: Sequence[torch.Tensor]) -> None:
138 y_pred, y = output[0].detach(), output[1].detach()
139
140 if y_pred.dtype != y.dtype:
141 raise TypeError(
142 f"Expected y_pred and y to have the same data type. Got y_pred: {y_pred.dtype} and y: {y.dtype}."
143 )
144
145 if y_pred.shape != y.shape:
146 raise ValueError(
147 f"Expected y_pred and y to have the same shape. Got y_pred: {y_pred.shape} and y: {y.shape}."
148 )
149
150 if len(y_pred.shape) != 4 or len(y.shape) != 4:
151 raise ValueError(
152 f"Expected y_pred and y to have BxCxHxW shape. Got y_pred: {y_pred.shape} and y: {y.shape}."
153 )
154
155 channel = y_pred.size(1)
156 if len(self._kernel.shape) < 4:
157 self._kernel = self._kernel.expand(channel, 1, -1, -1).to(device=y_pred.device)
158
159 y_pred = F.pad(y_pred, [self.pad_w, self.pad_w, self.pad_h, self.pad_h], mode="reflect")
160 y = F.pad(y, [self.pad_w, self.pad_w, self.pad_h, self.pad_h], mode="reflect")
161
162 input_list = torch.cat([y_pred, y, y_pred * y_pred, y * y, y_pred * y])
163 outputs = F.conv2d(input_list, self._kernel, groups=channel)
164
165 output_list = [outputs[x * y_pred.size(0) : (x + 1) * y_pred.size(0)] for x in range(len(outputs))]
166
167 mu_pred_sq = output_list[0].pow(2)
168 mu_target_sq = output_list[1].pow(2)
169 mu_pred_target = output_list[0] * output_list[1]
170
171 sigma_pred_sq = output_list[2] - mu_pred_sq
172 sigma_target_sq = output_list[3] - mu_target_sq
173 sigma_pred_target = output_list[4] - mu_pred_target
174
175 a1 = 2 * mu_pred_target + self.c1
176 a2 = 2 * sigma_pred_target + self.c2
177 b1 = mu_pred_sq + mu_target_sq + self.c1
178 b2 = sigma_pred_sq + sigma_target_sq + self.c2
179
180 ssim_idx = (a1 * a2) / (b1 * b2)
181 self._sum_of_ssim += torch.mean(ssim_idx, (1, 2, 3), dtype=torch.float64).sum().to(self._device)
182
183 self._num_examples += y.shape[0]
184
185 @sync_all_reduce("_sum_of_ssim", "_num_examples")
186 def compute(self) -> float:
187 if self._num_examples == 0:
188 raise NotComputableError("SSIM must have at least one example before it can be computed.")
189 return (self._sum_of_ssim / self._num_examples).item()
190
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/metrics/ssim.py b/ignite/metrics/ssim.py
--- a/ignite/metrics/ssim.py
+++ b/ignite/metrics/ssim.py
@@ -159,10 +159,10 @@
y_pred = F.pad(y_pred, [self.pad_w, self.pad_w, self.pad_h, self.pad_h], mode="reflect")
y = F.pad(y, [self.pad_w, self.pad_w, self.pad_h, self.pad_h], mode="reflect")
- input_list = torch.cat([y_pred, y, y_pred * y_pred, y * y, y_pred * y])
- outputs = F.conv2d(input_list, self._kernel, groups=channel)
-
- output_list = [outputs[x * y_pred.size(0) : (x + 1) * y_pred.size(0)] for x in range(len(outputs))]
+ input_list = [y_pred, y, y_pred * y_pred, y * y, y_pred * y]
+ outputs = F.conv2d(torch.cat(input_list), self._kernel, groups=channel)
+ batch_size = y_pred.size(0)
+ output_list = [outputs[x * batch_size : (x + 1) * batch_size] for x in range(len(input_list))]
mu_pred_sq = output_list[0].pow(2)
mu_target_sq = output_list[1].pow(2)
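A toy check of the indexing change, with random tensors standing in for the real convolution outputs (the sizes below are arbitrary): the old comprehension iterates over `len(outputs)` slices and produces mostly empty tensors, while iterating over the five stacked terms yields exactly one slice per term.

```python
import torch

batch_size, channels, height, width = 4, 3, 8, 8
# Stand-in for the conv2d output on torch.cat([y_pred, y, y_pred*y_pred, y*y, y_pred*y]):
# five batches stacked along dim 0.
outputs = torch.randn(5 * batch_size, channels, height, width)

old = [outputs[x * batch_size : (x + 1) * batch_size] for x in range(len(outputs))]
print(len(old), old[5].shape)   # 20 torch.Size([0, 3, 8, 8]) -> mostly empty tensors

new = [outputs[x * batch_size : (x + 1) * batch_size] for x in range(5)]
print(len(new), new[4].shape)   # 5 torch.Size([4, 3, 8, 8])
```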
|
{"golden_diff": "diff --git a/ignite/metrics/ssim.py b/ignite/metrics/ssim.py\n--- a/ignite/metrics/ssim.py\n+++ b/ignite/metrics/ssim.py\n@@ -159,10 +159,10 @@\n y_pred = F.pad(y_pred, [self.pad_w, self.pad_w, self.pad_h, self.pad_h], mode=\"reflect\")\n y = F.pad(y, [self.pad_w, self.pad_w, self.pad_h, self.pad_h], mode=\"reflect\")\n \n- input_list = torch.cat([y_pred, y, y_pred * y_pred, y * y, y_pred * y])\n- outputs = F.conv2d(input_list, self._kernel, groups=channel)\n-\n- output_list = [outputs[x * y_pred.size(0) : (x + 1) * y_pred.size(0)] for x in range(len(outputs))]\n+ input_list = [y_pred, y, y_pred * y_pred, y * y, y_pred * y]\n+ outputs = F.conv2d(torch.cat(input_list), self._kernel, groups=channel)\n+ batch_size = y_pred.size(0)\n+ output_list = [outputs[x * batch_size : (x + 1) * batch_size] for x in range(len(input_list))]\n \n mu_pred_sq = output_list[0].pow(2)\n mu_target_sq = output_list[1].pow(2)\n", "issue": "Minor Bug | metrics.ssim, produces empty tensors in update function\n## \ud83d\udc1b Bug description\r\n\r\nIn line 165\r\n```\r\noutput_list = [outputs[x * y_pred.size(0) : (x + 1) * y_pred.size(0)] for x in range(len(outputs))]\r\n```\r\nthe list comprehension produces a list with `len = (Batch_size * 5)`, where only the first 5 elements are valid and correspond to stacked `[y_pred, y, y_pred * y_pred, y * y, y_pred * y]` of all the batches, in cases where the batch size is greater than one the elements with index>4 are empty `torch.Tensors` with shape `(0, C, H, W)` .\r\n\r\n### Solution\r\nThis bug neither affects the output, nor consumes a lot of RAM, but I thought I should point it out.\r\nThe fix for this is pretty simple and you only need to divide the len of outputs by the batch size.\r\n```\r\noutput_list = [outputs[x * y_pred.size(0) : (x + 1) * y_pred.size(0)] for x in range(int(len(outputs)/y_pred.size(0)))] # len(outputs) is B*5 so we need to divide it by B so it's only 5 -> [y_pred, y, y_pred * y_pred, y * y, y_pred * y]\r\n```\r\n\n", "before_files": [{"content": "from typing import Callable, Sequence, Union\n\nimport torch\nimport torch.nn.functional as F\n\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import Metric, reinit__is_reduced, sync_all_reduce\n\n__all__ = [\"SSIM\"]\n\n\nclass SSIM(Metric):\n \"\"\"\n Computes Structual Similarity Index Measure\n\n - ``update`` must receive output of the form ``(y_pred, y)``.\n\n Args:\n data_range: Range of the image. Typically, ``1.0`` or ``255``.\n kernel_size: Size of the kernel. Default: (11, 11)\n sigma: Standard deviation of the gaussian kernel.\n Argument is used if ``gaussian=True``. Default: (1.5, 1.5)\n k1: Parameter of SSIM. Default: 0.01\n k2: Parameter of SSIM. Default: 0.03\n gaussian: ``True`` to use gaussian kernel, ``False`` to use uniform kernel\n output_transform: A callable that is used to transform the\n :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the\n form expected by the metric.\n device: specifies which device updates are accumulated on. Setting the metric's\n device to be the same as your ``update`` arguments ensures the ``update`` method is non-blocking. By\n default, CPU.\n\n Examples:\n To use with ``Engine`` and ``process_function``, simply attach the metric instance to the engine.\n The output of the engine's ``process_function`` needs to be in the format of\n ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``. 
If not, ``output_tranform`` can be added\n to the metric to transform the output into the form expected by the metric.\n\n ``y_pred`` and ``y`` can be un-normalized or normalized image tensors. Depending on that, the user might need\n to adjust ``data_range``. ``y_pred`` and ``y`` should have the same shape.\n\n For more information on how metric works with :class:`~ignite.engine.engine.Engine`, visit :ref:`attach-engine`.\n\n .. include:: defaults.rst\n :start-after: :orphan:\n\n .. testcode::\n\n metric = SSIM(data_range=1.0)\n metric.attach(default_evaluator, 'ssim')\n preds = torch.rand([4, 3, 16, 16])\n target = preds * 0.75\n state = default_evaluator.run([[preds, target]])\n print(state.metrics['ssim'])\n\n .. testoutput::\n\n 0.9218971...\n\n .. versionadded:: 0.4.2\n \"\"\"\n\n def __init__(\n self,\n data_range: Union[int, float],\n kernel_size: Union[int, Sequence[int]] = (11, 11),\n sigma: Union[float, Sequence[float]] = (1.5, 1.5),\n k1: float = 0.01,\n k2: float = 0.03,\n gaussian: bool = True,\n output_transform: Callable = lambda x: x,\n device: Union[str, torch.device] = torch.device(\"cpu\"),\n ):\n if isinstance(kernel_size, int):\n self.kernel_size: Sequence[int] = [kernel_size, kernel_size]\n elif isinstance(kernel_size, Sequence):\n self.kernel_size = kernel_size\n else:\n raise ValueError(\"Argument kernel_size should be either int or a sequence of int.\")\n\n if isinstance(sigma, float):\n self.sigma: Sequence[float] = [sigma, sigma]\n elif isinstance(sigma, Sequence):\n self.sigma = sigma\n else:\n raise ValueError(\"Argument sigma should be either float or a sequence of float.\")\n\n if any(x % 2 == 0 or x <= 0 for x in self.kernel_size):\n raise ValueError(f\"Expected kernel_size to have odd positive number. Got {kernel_size}.\")\n\n if any(y <= 0 for y in self.sigma):\n raise ValueError(f\"Expected sigma to have positive number. 
Got {sigma}.\")\n\n super(SSIM, self).__init__(output_transform=output_transform, device=device)\n self.gaussian = gaussian\n self.c1 = (k1 * data_range) ** 2\n self.c2 = (k2 * data_range) ** 2\n self.pad_h = (self.kernel_size[0] - 1) // 2\n self.pad_w = (self.kernel_size[1] - 1) // 2\n self._kernel = self._gaussian_or_uniform_kernel(kernel_size=self.kernel_size, sigma=self.sigma)\n\n @reinit__is_reduced\n def reset(self) -> None:\n self._sum_of_ssim = torch.tensor(0.0, dtype=torch.float64, device=self._device)\n self._num_examples = 0\n self._kernel = self._gaussian_or_uniform_kernel(kernel_size=self.kernel_size, sigma=self.sigma)\n\n def _uniform(self, kernel_size: int) -> torch.Tensor:\n max, min = 2.5, -2.5\n ksize_half = (kernel_size - 1) * 0.5\n kernel = torch.linspace(-ksize_half, ksize_half, steps=kernel_size, device=self._device)\n for i, j in enumerate(kernel):\n if min <= j <= max:\n kernel[i] = 1 / (max - min)\n else:\n kernel[i] = 0\n\n return kernel.unsqueeze(dim=0) # (1, kernel_size)\n\n def _gaussian(self, kernel_size: int, sigma: float) -> torch.Tensor:\n ksize_half = (kernel_size - 1) * 0.5\n kernel = torch.linspace(-ksize_half, ksize_half, steps=kernel_size, device=self._device)\n gauss = torch.exp(-0.5 * (kernel / sigma).pow(2))\n return (gauss / gauss.sum()).unsqueeze(dim=0) # (1, kernel_size)\n\n def _gaussian_or_uniform_kernel(self, kernel_size: Sequence[int], sigma: Sequence[float]) -> torch.Tensor:\n if self.gaussian:\n kernel_x = self._gaussian(kernel_size[0], sigma[0])\n kernel_y = self._gaussian(kernel_size[1], sigma[1])\n else:\n kernel_x = self._uniform(kernel_size[0])\n kernel_y = self._uniform(kernel_size[1])\n\n return torch.matmul(kernel_x.t(), kernel_y) # (kernel_size, 1) * (1, kernel_size)\n\n @reinit__is_reduced\n def update(self, output: Sequence[torch.Tensor]) -> None:\n y_pred, y = output[0].detach(), output[1].detach()\n\n if y_pred.dtype != y.dtype:\n raise TypeError(\n f\"Expected y_pred and y to have the same data type. Got y_pred: {y_pred.dtype} and y: {y.dtype}.\"\n )\n\n if y_pred.shape != y.shape:\n raise ValueError(\n f\"Expected y_pred and y to have the same shape. Got y_pred: {y_pred.shape} and y: {y.shape}.\"\n )\n\n if len(y_pred.shape) != 4 or len(y.shape) != 4:\n raise ValueError(\n f\"Expected y_pred and y to have BxCxHxW shape. 
Got y_pred: {y_pred.shape} and y: {y.shape}.\"\n )\n\n channel = y_pred.size(1)\n if len(self._kernel.shape) < 4:\n self._kernel = self._kernel.expand(channel, 1, -1, -1).to(device=y_pred.device)\n\n y_pred = F.pad(y_pred, [self.pad_w, self.pad_w, self.pad_h, self.pad_h], mode=\"reflect\")\n y = F.pad(y, [self.pad_w, self.pad_w, self.pad_h, self.pad_h], mode=\"reflect\")\n\n input_list = torch.cat([y_pred, y, y_pred * y_pred, y * y, y_pred * y])\n outputs = F.conv2d(input_list, self._kernel, groups=channel)\n\n output_list = [outputs[x * y_pred.size(0) : (x + 1) * y_pred.size(0)] for x in range(len(outputs))]\n\n mu_pred_sq = output_list[0].pow(2)\n mu_target_sq = output_list[1].pow(2)\n mu_pred_target = output_list[0] * output_list[1]\n\n sigma_pred_sq = output_list[2] - mu_pred_sq\n sigma_target_sq = output_list[3] - mu_target_sq\n sigma_pred_target = output_list[4] - mu_pred_target\n\n a1 = 2 * mu_pred_target + self.c1\n a2 = 2 * sigma_pred_target + self.c2\n b1 = mu_pred_sq + mu_target_sq + self.c1\n b2 = sigma_pred_sq + sigma_target_sq + self.c2\n\n ssim_idx = (a1 * a2) / (b1 * b2)\n self._sum_of_ssim += torch.mean(ssim_idx, (1, 2, 3), dtype=torch.float64).sum().to(self._device)\n\n self._num_examples += y.shape[0]\n\n @sync_all_reduce(\"_sum_of_ssim\", \"_num_examples\")\n def compute(self) -> float:\n if self._num_examples == 0:\n raise NotComputableError(\"SSIM must have at least one example before it can be computed.\")\n return (self._sum_of_ssim / self._num_examples).item()\n", "path": "ignite/metrics/ssim.py"}], "after_files": [{"content": "from typing import Callable, Sequence, Union\n\nimport torch\nimport torch.nn.functional as F\n\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import Metric, reinit__is_reduced, sync_all_reduce\n\n__all__ = [\"SSIM\"]\n\n\nclass SSIM(Metric):\n \"\"\"\n Computes Structual Similarity Index Measure\n\n - ``update`` must receive output of the form ``(y_pred, y)``.\n\n Args:\n data_range: Range of the image. Typically, ``1.0`` or ``255``.\n kernel_size: Size of the kernel. Default: (11, 11)\n sigma: Standard deviation of the gaussian kernel.\n Argument is used if ``gaussian=True``. Default: (1.5, 1.5)\n k1: Parameter of SSIM. Default: 0.01\n k2: Parameter of SSIM. Default: 0.03\n gaussian: ``True`` to use gaussian kernel, ``False`` to use uniform kernel\n output_transform: A callable that is used to transform the\n :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the\n form expected by the metric.\n device: specifies which device updates are accumulated on. Setting the metric's\n device to be the same as your ``update`` arguments ensures the ``update`` method is non-blocking. By\n default, CPU.\n\n Examples:\n To use with ``Engine`` and ``process_function``, simply attach the metric instance to the engine.\n The output of the engine's ``process_function`` needs to be in the format of\n ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y, ...}``. If not, ``output_tranform`` can be added\n to the metric to transform the output into the form expected by the metric.\n\n ``y_pred`` and ``y`` can be un-normalized or normalized image tensors. Depending on that, the user might need\n to adjust ``data_range``. ``y_pred`` and ``y`` should have the same shape.\n\n For more information on how metric works with :class:`~ignite.engine.engine.Engine`, visit :ref:`attach-engine`.\n\n .. include:: defaults.rst\n :start-after: :orphan:\n\n .. 
testcode::\n\n metric = SSIM(data_range=1.0)\n metric.attach(default_evaluator, 'ssim')\n preds = torch.rand([4, 3, 16, 16])\n target = preds * 0.75\n state = default_evaluator.run([[preds, target]])\n print(state.metrics['ssim'])\n\n .. testoutput::\n\n 0.9218971...\n\n .. versionadded:: 0.4.2\n \"\"\"\n\n def __init__(\n self,\n data_range: Union[int, float],\n kernel_size: Union[int, Sequence[int]] = (11, 11),\n sigma: Union[float, Sequence[float]] = (1.5, 1.5),\n k1: float = 0.01,\n k2: float = 0.03,\n gaussian: bool = True,\n output_transform: Callable = lambda x: x,\n device: Union[str, torch.device] = torch.device(\"cpu\"),\n ):\n if isinstance(kernel_size, int):\n self.kernel_size: Sequence[int] = [kernel_size, kernel_size]\n elif isinstance(kernel_size, Sequence):\n self.kernel_size = kernel_size\n else:\n raise ValueError(\"Argument kernel_size should be either int or a sequence of int.\")\n\n if isinstance(sigma, float):\n self.sigma: Sequence[float] = [sigma, sigma]\n elif isinstance(sigma, Sequence):\n self.sigma = sigma\n else:\n raise ValueError(\"Argument sigma should be either float or a sequence of float.\")\n\n if any(x % 2 == 0 or x <= 0 for x in self.kernel_size):\n raise ValueError(f\"Expected kernel_size to have odd positive number. Got {kernel_size}.\")\n\n if any(y <= 0 for y in self.sigma):\n raise ValueError(f\"Expected sigma to have positive number. Got {sigma}.\")\n\n super(SSIM, self).__init__(output_transform=output_transform, device=device)\n self.gaussian = gaussian\n self.c1 = (k1 * data_range) ** 2\n self.c2 = (k2 * data_range) ** 2\n self.pad_h = (self.kernel_size[0] - 1) // 2\n self.pad_w = (self.kernel_size[1] - 1) // 2\n self._kernel = self._gaussian_or_uniform_kernel(kernel_size=self.kernel_size, sigma=self.sigma)\n\n @reinit__is_reduced\n def reset(self) -> None:\n self._sum_of_ssim = torch.tensor(0.0, dtype=torch.float64, device=self._device)\n self._num_examples = 0\n self._kernel = self._gaussian_or_uniform_kernel(kernel_size=self.kernel_size, sigma=self.sigma)\n\n def _uniform(self, kernel_size: int) -> torch.Tensor:\n max, min = 2.5, -2.5\n ksize_half = (kernel_size - 1) * 0.5\n kernel = torch.linspace(-ksize_half, ksize_half, steps=kernel_size, device=self._device)\n for i, j in enumerate(kernel):\n if min <= j <= max:\n kernel[i] = 1 / (max - min)\n else:\n kernel[i] = 0\n\n return kernel.unsqueeze(dim=0) # (1, kernel_size)\n\n def _gaussian(self, kernel_size: int, sigma: float) -> torch.Tensor:\n ksize_half = (kernel_size - 1) * 0.5\n kernel = torch.linspace(-ksize_half, ksize_half, steps=kernel_size, device=self._device)\n gauss = torch.exp(-0.5 * (kernel / sigma).pow(2))\n return (gauss / gauss.sum()).unsqueeze(dim=0) # (1, kernel_size)\n\n def _gaussian_or_uniform_kernel(self, kernel_size: Sequence[int], sigma: Sequence[float]) -> torch.Tensor:\n if self.gaussian:\n kernel_x = self._gaussian(kernel_size[0], sigma[0])\n kernel_y = self._gaussian(kernel_size[1], sigma[1])\n else:\n kernel_x = self._uniform(kernel_size[0])\n kernel_y = self._uniform(kernel_size[1])\n\n return torch.matmul(kernel_x.t(), kernel_y) # (kernel_size, 1) * (1, kernel_size)\n\n @reinit__is_reduced\n def update(self, output: Sequence[torch.Tensor]) -> None:\n y_pred, y = output[0].detach(), output[1].detach()\n\n if y_pred.dtype != y.dtype:\n raise TypeError(\n f\"Expected y_pred and y to have the same data type. Got y_pred: {y_pred.dtype} and y: {y.dtype}.\"\n )\n\n if y_pred.shape != y.shape:\n raise ValueError(\n f\"Expected y_pred and y to have the same shape. 
Got y_pred: {y_pred.shape} and y: {y.shape}.\"\n )\n\n if len(y_pred.shape) != 4 or len(y.shape) != 4:\n raise ValueError(\n f\"Expected y_pred and y to have BxCxHxW shape. Got y_pred: {y_pred.shape} and y: {y.shape}.\"\n )\n\n channel = y_pred.size(1)\n if len(self._kernel.shape) < 4:\n self._kernel = self._kernel.expand(channel, 1, -1, -1).to(device=y_pred.device)\n\n y_pred = F.pad(y_pred, [self.pad_w, self.pad_w, self.pad_h, self.pad_h], mode=\"reflect\")\n y = F.pad(y, [self.pad_w, self.pad_w, self.pad_h, self.pad_h], mode=\"reflect\")\n\n input_list = [y_pred, y, y_pred * y_pred, y * y, y_pred * y]\n outputs = F.conv2d(torch.cat(input_list), self._kernel, groups=channel)\n batch_size = y_pred.size(0)\n output_list = [outputs[x * batch_size : (x + 1) * batch_size] for x in range(len(input_list))]\n\n mu_pred_sq = output_list[0].pow(2)\n mu_target_sq = output_list[1].pow(2)\n mu_pred_target = output_list[0] * output_list[1]\n\n sigma_pred_sq = output_list[2] - mu_pred_sq\n sigma_target_sq = output_list[3] - mu_target_sq\n sigma_pred_target = output_list[4] - mu_pred_target\n\n a1 = 2 * mu_pred_target + self.c1\n a2 = 2 * sigma_pred_target + self.c2\n b1 = mu_pred_sq + mu_target_sq + self.c1\n b2 = sigma_pred_sq + sigma_target_sq + self.c2\n\n ssim_idx = (a1 * a2) / (b1 * b2)\n self._sum_of_ssim += torch.mean(ssim_idx, (1, 2, 3), dtype=torch.float64).sum().to(self._device)\n\n self._num_examples += y.shape[0]\n\n @sync_all_reduce(\"_sum_of_ssim\", \"_num_examples\")\n def compute(self) -> float:\n if self._num_examples == 0:\n raise NotComputableError(\"SSIM must have at least one example before it can be computed.\")\n return (self._sum_of_ssim / self._num_examples).item()\n", "path": "ignite/metrics/ssim.py"}]}
| 3,179 | 316 |
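The golden diff in the record above replaces indexing over the concatenated batch with indexing over the five statistics maps. A minimal sketch of that corrected slicing, with toy tensor shapes and a uniform stand-in kernel assumed only to make it runnable:

```python
import torch
import torch.nn.functional as F

# Toy shapes, assumed for illustration: batch of 2 RGB 16x16 images and an
# 11x11 uniform stand-in for SSIM._kernel.
batch_size, channel, kernel_size = 2, 3, 11
y_pred = torch.rand(batch_size, channel, 16, 16)
y = y_pred * 0.75
kernel = torch.ones(channel, 1, kernel_size, kernel_size) / kernel_size ** 2

# Pad the same way the metric does before the grouped convolution.
pad = (kernel_size - 1) // 2
y_pred = F.pad(y_pred, [pad, pad, pad, pad], mode="reflect")
y = F.pad(y, [pad, pad, pad, pad], mode="reflect")

# Corrected logic from the diff: slice once per statistic, not once per row of
# the concatenated batch, so no empty (0, C, H, W) tensors are created.
input_list = [y_pred, y, y_pred * y_pred, y * y, y_pred * y]
outputs = F.conv2d(torch.cat(input_list), kernel, groups=channel)
output_list = [
    outputs[x * batch_size : (x + 1) * batch_size] for x in range(len(input_list))
]

assert len(output_list) == 5
assert all(block.shape == (batch_size, channel, 16, 16) for block in output_list)
```

With the original `range(len(outputs))`, the comprehension would instead yield `5 * batch_size` entries whose tail is empty tensors, which is the waste the record's issue text describes.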
gh_patches_debug_7350
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-1292
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support for Sanic v21.12.0
Raising custom exceptions in Sanic's latest version (which shouldn't be logged to Sentry), getting **IndexError: pop from empty list**
```
File "/app/.heroku/python/lib/python3.9/site-packages/sentry_sdk/integrations/sanic.py", line 184, in _hub_exit
request.ctx._sentry_hub.__exit__(None, None, None)
File "/app/.heroku/python/lib/python3.9/site-packages/sentry_sdk/hub.py", line 247, in __exit__
old = self._old_hubs.pop()
IndexError: pop from empty list
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/sanic.py`
Content:
```
1 import sys
2 import weakref
3 from inspect import isawaitable
4
5 from sentry_sdk._compat import urlparse, reraise
6 from sentry_sdk.hub import Hub
7 from sentry_sdk.utils import (
8 capture_internal_exceptions,
9 event_from_exception,
10 HAS_REAL_CONTEXTVARS,
11 CONTEXTVARS_ERROR_MESSAGE,
12 )
13 from sentry_sdk.integrations import Integration, DidNotEnable
14 from sentry_sdk.integrations._wsgi_common import RequestExtractor, _filter_headers
15 from sentry_sdk.integrations.logging import ignore_logger
16
17 from sentry_sdk._types import MYPY
18
19 if MYPY:
20 from typing import Any
21 from typing import Callable
22 from typing import Optional
23 from typing import Union
24 from typing import Tuple
25 from typing import Dict
26
27 from sanic.request import Request, RequestParameters
28
29 from sentry_sdk._types import Event, EventProcessor, Hint
30 from sanic.router import Route
31
32 try:
33 from sanic import Sanic, __version__ as SANIC_VERSION
34 from sanic.exceptions import SanicException
35 from sanic.router import Router
36 from sanic.handlers import ErrorHandler
37 except ImportError:
38 raise DidNotEnable("Sanic not installed")
39
40 old_error_handler_lookup = ErrorHandler.lookup
41 old_handle_request = Sanic.handle_request
42 old_router_get = Router.get
43
44 try:
45 # This method was introduced in Sanic v21.9
46 old_startup = Sanic._startup
47 except AttributeError:
48 pass
49
50
51 class SanicIntegration(Integration):
52 identifier = "sanic"
53 version = (0, 0) # type: Tuple[int, ...]
54
55 @staticmethod
56 def setup_once():
57 # type: () -> None
58
59 try:
60 SanicIntegration.version = tuple(map(int, SANIC_VERSION.split(".")))
61 except (TypeError, ValueError):
62 raise DidNotEnable("Unparsable Sanic version: {}".format(SANIC_VERSION))
63
64 if SanicIntegration.version < (0, 8):
65 raise DidNotEnable("Sanic 0.8 or newer required.")
66
67 if not HAS_REAL_CONTEXTVARS:
68 # We better have contextvars or we're going to leak state between
69 # requests.
70 raise DidNotEnable(
71 "The sanic integration for Sentry requires Python 3.7+ "
72 " or the aiocontextvars package." + CONTEXTVARS_ERROR_MESSAGE
73 )
74
75 if SANIC_VERSION.startswith("0.8."):
76 # Sanic 0.8 and older creates a logger named "root" and puts a
77 # stringified version of every exception in there (without exc_info),
78 # which our error deduplication can't detect.
79 #
80 # We explicitly check the version here because it is a very
81 # invasive step to ignore this logger and not necessary in newer
82 # versions at all.
83 #
84 # https://github.com/huge-success/sanic/issues/1332
85 ignore_logger("root")
86
87 if SanicIntegration.version < (21, 9):
88 _setup_legacy_sanic()
89 return
90
91 _setup_sanic()
92
93
94 class SanicRequestExtractor(RequestExtractor):
95 def content_length(self):
96 # type: () -> int
97 if self.request.body is None:
98 return 0
99 return len(self.request.body)
100
101 def cookies(self):
102 # type: () -> Dict[str, str]
103 return dict(self.request.cookies)
104
105 def raw_data(self):
106 # type: () -> bytes
107 return self.request.body
108
109 def form(self):
110 # type: () -> RequestParameters
111 return self.request.form
112
113 def is_json(self):
114 # type: () -> bool
115 raise NotImplementedError()
116
117 def json(self):
118 # type: () -> Optional[Any]
119 return self.request.json
120
121 def files(self):
122 # type: () -> RequestParameters
123 return self.request.files
124
125 def size_of_file(self, file):
126 # type: (Any) -> int
127 return len(file.body or ())
128
129
130 def _setup_sanic():
131 # type: () -> None
132 Sanic._startup = _startup
133 ErrorHandler.lookup = _sentry_error_handler_lookup
134
135
136 def _setup_legacy_sanic():
137 # type: () -> None
138 Sanic.handle_request = _legacy_handle_request
139 Router.get = _legacy_router_get
140 ErrorHandler.lookup = _sentry_error_handler_lookup
141
142
143 async def _startup(self):
144 # type: (Sanic) -> None
145 # This happens about as early in the lifecycle as possible, just after the
146 # Request object is created. The body has not yet been consumed.
147 self.signal("http.lifecycle.request")(_hub_enter)
148
149 # This happens after the handler is complete. In v21.9 this signal is not
150 # dispatched when there is an exception. Therefore we need to close out
151 # and call _hub_exit from the custom exception handler as well.
152 # See https://github.com/sanic-org/sanic/issues/2297
153 self.signal("http.lifecycle.response")(_hub_exit)
154
155 # This happens inside of request handling immediately after the route
156 # has been identified by the router.
157 self.signal("http.routing.after")(_set_transaction)
158
159 # The above signals need to be declared before this can be called.
160 await old_startup(self)
161
162
163 async def _hub_enter(request):
164 # type: (Request) -> None
165 hub = Hub.current
166 request.ctx._sentry_do_integration = (
167 hub.get_integration(SanicIntegration) is not None
168 )
169
170 if not request.ctx._sentry_do_integration:
171 return
172
173 weak_request = weakref.ref(request)
174 request.ctx._sentry_hub = Hub(hub)
175 request.ctx._sentry_hub.__enter__()
176
177 with request.ctx._sentry_hub.configure_scope() as scope:
178 scope.clear_breadcrumbs()
179 scope.add_event_processor(_make_request_processor(weak_request))
180
181
182 async def _hub_exit(request, **_):
183 # type: (Request, **Any) -> None
184 request.ctx._sentry_hub.__exit__(None, None, None)
185
186
187 async def _set_transaction(request, route, **kwargs):
188 # type: (Request, Route, **Any) -> None
189 hub = Hub.current
190 if hub.get_integration(SanicIntegration) is not None:
191 with capture_internal_exceptions():
192 with hub.configure_scope() as scope:
193 route_name = route.name.replace(request.app.name, "").strip(".")
194 scope.transaction = route_name
195
196
197 def _sentry_error_handler_lookup(self, exception, *args, **kwargs):
198 # type: (Any, Exception, *Any, **Any) -> Optional[object]
199 _capture_exception(exception)
200 old_error_handler = old_error_handler_lookup(self, exception, *args, **kwargs)
201
202 if old_error_handler is None:
203 return None
204
205 if Hub.current.get_integration(SanicIntegration) is None:
206 return old_error_handler
207
208 async def sentry_wrapped_error_handler(request, exception):
209 # type: (Request, Exception) -> Any
210 try:
211 response = old_error_handler(request, exception)
212 if isawaitable(response):
213 response = await response
214 return response
215 except Exception:
216 # Report errors that occur in Sanic error handler. These
217 # exceptions will not even show up in Sanic's
218 # `sanic.exceptions` logger.
219 exc_info = sys.exc_info()
220 _capture_exception(exc_info)
221 reraise(*exc_info)
222 finally:
223 # As mentioned in previous comment in _startup, this can be removed
224 # after https://github.com/sanic-org/sanic/issues/2297 is resolved
225 if SanicIntegration.version >= (21, 9):
226 await _hub_exit(request)
227
228 return sentry_wrapped_error_handler
229
230
231 async def _legacy_handle_request(self, request, *args, **kwargs):
232 # type: (Any, Request, *Any, **Any) -> Any
233 hub = Hub.current
234 if hub.get_integration(SanicIntegration) is None:
235 return old_handle_request(self, request, *args, **kwargs)
236
237 weak_request = weakref.ref(request)
238
239 with Hub(hub) as hub:
240 with hub.configure_scope() as scope:
241 scope.clear_breadcrumbs()
242 scope.add_event_processor(_make_request_processor(weak_request))
243
244 response = old_handle_request(self, request, *args, **kwargs)
245 if isawaitable(response):
246 response = await response
247
248 return response
249
250
251 def _legacy_router_get(self, *args):
252 # type: (Any, Union[Any, Request]) -> Any
253 rv = old_router_get(self, *args)
254 hub = Hub.current
255 if hub.get_integration(SanicIntegration) is not None:
256 with capture_internal_exceptions():
257 with hub.configure_scope() as scope:
258 if SanicIntegration.version and SanicIntegration.version >= (21, 3):
259 # Sanic versions above and including 21.3 append the app name to the
260 # route name, and so we need to remove it from Route name so the
261 # transaction name is consistent across all versions
262 sanic_app_name = self.ctx.app.name
263 sanic_route = rv[0].name
264
265 if sanic_route.startswith("%s." % sanic_app_name):
266 # We add a 1 to the len of the sanic_app_name because there is a dot
267 # that joins app name and the route name
268 # Format: app_name.route_name
269 sanic_route = sanic_route[len(sanic_app_name) + 1 :]
270
271 scope.transaction = sanic_route
272 else:
273 scope.transaction = rv[0].__name__
274 return rv
275
276
277 def _capture_exception(exception):
278 # type: (Union[Tuple[Optional[type], Optional[BaseException], Any], BaseException]) -> None
279 hub = Hub.current
280 integration = hub.get_integration(SanicIntegration)
281 if integration is None:
282 return
283
284 # If an integration is there, a client has to be there.
285 client = hub.client # type: Any
286
287 with capture_internal_exceptions():
288 event, hint = event_from_exception(
289 exception,
290 client_options=client.options,
291 mechanism={"type": "sanic", "handled": False},
292 )
293 hub.capture_event(event, hint=hint)
294
295
296 def _make_request_processor(weak_request):
297 # type: (Callable[[], Request]) -> EventProcessor
298 def sanic_processor(event, hint):
299 # type: (Event, Optional[Hint]) -> Optional[Event]
300
301 try:
302 if hint and issubclass(hint["exc_info"][0], SanicException):
303 return None
304 except KeyError:
305 pass
306
307 request = weak_request()
308 if request is None:
309 return event
310
311 with capture_internal_exceptions():
312 extractor = SanicRequestExtractor(request)
313 extractor.extract_into_event(event)
314
315 request_info = event["request"]
316 urlparts = urlparse.urlsplit(request.url)
317
318 request_info["url"] = "%s://%s%s" % (
319 urlparts.scheme,
320 urlparts.netloc,
321 urlparts.path,
322 )
323
324 request_info["query_string"] = urlparts.query
325 request_info["method"] = request.method
326 request_info["env"] = {"REMOTE_ADDR": request.remote_addr}
327 request_info["headers"] = _filter_headers(dict(request.headers))
328
329 return event
330
331 return sanic_processor
332
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sentry_sdk/integrations/sanic.py b/sentry_sdk/integrations/sanic.py
--- a/sentry_sdk/integrations/sanic.py
+++ b/sentry_sdk/integrations/sanic.py
@@ -222,7 +222,7 @@
finally:
# As mentioned in previous comment in _startup, this can be removed
# after https://github.com/sanic-org/sanic/issues/2297 is resolved
- if SanicIntegration.version >= (21, 9):
+ if SanicIntegration.version == (21, 9):
await _hub_exit(request)
return sentry_wrapped_error_handler
|
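The one-line change above restricts the manual `_hub_exit` call to Sanic 21.9 exactly, presumably because later versions dispatch the `http.lifecycle.response` signal even when the handler raises, so the error-handler path would exit the hub a second time. A dependency-free stand-in for the `_old_hubs` bookkeeping seen in the traceback, showing why an unmatched second `__exit__` ends in `IndexError: pop from empty list`:

```python
class FakeHub:
    """Dependency-free stand-in for sentry_sdk.Hub's enter/exit bookkeeping."""

    _current = "outer-hub"

    def __init__(self):
        self._old_hubs = []

    def __enter__(self):
        self._old_hubs.append(FakeHub._current)   # remember the previous hub
        FakeHub._current = self
        return self

    def __exit__(self, exc_type, exc_value, tb):
        old = self._old_hubs.pop()                # raises once the stack is empty
        FakeHub._current = old


hub = FakeHub()
hub.__enter__()
hub.__exit__(None, None, None)        # matched exit, e.g. via the response signal
try:
    hub.__exit__(None, None, None)    # unmatched second exit from the error handler
except IndexError as exc:
    print(f"reproduced: {exc}")       # -> reproduced: pop from empty list
```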
{"golden_diff": "diff --git a/sentry_sdk/integrations/sanic.py b/sentry_sdk/integrations/sanic.py\n--- a/sentry_sdk/integrations/sanic.py\n+++ b/sentry_sdk/integrations/sanic.py\n@@ -222,7 +222,7 @@\n finally:\n # As mentioned in previous comment in _startup, this can be removed\n # after https://github.com/sanic-org/sanic/issues/2297 is resolved\n- if SanicIntegration.version >= (21, 9):\n+ if SanicIntegration.version == (21, 9):\n await _hub_exit(request)\n \n return sentry_wrapped_error_handler\n", "issue": "Support for Sanic v21.12.0\nRaising custom exceptions in Sanic's latest version (which shouldn't be logged to Sentry), getting **IndexError: pop from empty list**\r\n\r\n```\r\nFile \"/app/.heroku/python/lib/python3.9/site-packages/sentry_sdk/integrations/sanic.py\", line 184, in _hub_exit\r\n request.ctx._sentry_hub.__exit__(None, None, None)\r\nFile \"/app/.heroku/python/lib/python3.9/site-packages/sentry_sdk/hub.py\", line 247, in __exit__\r\n old = self._old_hubs.pop()\r\nIndexError: pop from empty list\r\n```\r\n\r\n\n", "before_files": [{"content": "import sys\nimport weakref\nfrom inspect import isawaitable\n\nfrom sentry_sdk._compat import urlparse, reraise\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.utils import (\n capture_internal_exceptions,\n event_from_exception,\n HAS_REAL_CONTEXTVARS,\n CONTEXTVARS_ERROR_MESSAGE,\n)\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk.integrations._wsgi_common import RequestExtractor, _filter_headers\nfrom sentry_sdk.integrations.logging import ignore_logger\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any\n from typing import Callable\n from typing import Optional\n from typing import Union\n from typing import Tuple\n from typing import Dict\n\n from sanic.request import Request, RequestParameters\n\n from sentry_sdk._types import Event, EventProcessor, Hint\n from sanic.router import Route\n\ntry:\n from sanic import Sanic, __version__ as SANIC_VERSION\n from sanic.exceptions import SanicException\n from sanic.router import Router\n from sanic.handlers import ErrorHandler\nexcept ImportError:\n raise DidNotEnable(\"Sanic not installed\")\n\nold_error_handler_lookup = ErrorHandler.lookup\nold_handle_request = Sanic.handle_request\nold_router_get = Router.get\n\ntry:\n # This method was introduced in Sanic v21.9\n old_startup = Sanic._startup\nexcept AttributeError:\n pass\n\n\nclass SanicIntegration(Integration):\n identifier = \"sanic\"\n version = (0, 0) # type: Tuple[int, ...]\n\n @staticmethod\n def setup_once():\n # type: () -> None\n\n try:\n SanicIntegration.version = tuple(map(int, SANIC_VERSION.split(\".\")))\n except (TypeError, ValueError):\n raise DidNotEnable(\"Unparsable Sanic version: {}\".format(SANIC_VERSION))\n\n if SanicIntegration.version < (0, 8):\n raise DidNotEnable(\"Sanic 0.8 or newer required.\")\n\n if not HAS_REAL_CONTEXTVARS:\n # We better have contextvars or we're going to leak state between\n # requests.\n raise DidNotEnable(\n \"The sanic integration for Sentry requires Python 3.7+ \"\n \" or the aiocontextvars package.\" + CONTEXTVARS_ERROR_MESSAGE\n )\n\n if SANIC_VERSION.startswith(\"0.8.\"):\n # Sanic 0.8 and older creates a logger named \"root\" and puts a\n # stringified version of every exception in there (without exc_info),\n # which our error deduplication can't detect.\n #\n # We explicitly check the version here because it is a very\n # invasive step to ignore this logger and not necessary in newer\n # versions at all.\n #\n # 
https://github.com/huge-success/sanic/issues/1332\n ignore_logger(\"root\")\n\n if SanicIntegration.version < (21, 9):\n _setup_legacy_sanic()\n return\n\n _setup_sanic()\n\n\nclass SanicRequestExtractor(RequestExtractor):\n def content_length(self):\n # type: () -> int\n if self.request.body is None:\n return 0\n return len(self.request.body)\n\n def cookies(self):\n # type: () -> Dict[str, str]\n return dict(self.request.cookies)\n\n def raw_data(self):\n # type: () -> bytes\n return self.request.body\n\n def form(self):\n # type: () -> RequestParameters\n return self.request.form\n\n def is_json(self):\n # type: () -> bool\n raise NotImplementedError()\n\n def json(self):\n # type: () -> Optional[Any]\n return self.request.json\n\n def files(self):\n # type: () -> RequestParameters\n return self.request.files\n\n def size_of_file(self, file):\n # type: (Any) -> int\n return len(file.body or ())\n\n\ndef _setup_sanic():\n # type: () -> None\n Sanic._startup = _startup\n ErrorHandler.lookup = _sentry_error_handler_lookup\n\n\ndef _setup_legacy_sanic():\n # type: () -> None\n Sanic.handle_request = _legacy_handle_request\n Router.get = _legacy_router_get\n ErrorHandler.lookup = _sentry_error_handler_lookup\n\n\nasync def _startup(self):\n # type: (Sanic) -> None\n # This happens about as early in the lifecycle as possible, just after the\n # Request object is created. The body has not yet been consumed.\n self.signal(\"http.lifecycle.request\")(_hub_enter)\n\n # This happens after the handler is complete. In v21.9 this signal is not\n # dispatched when there is an exception. Therefore we need to close out\n # and call _hub_exit from the custom exception handler as well.\n # See https://github.com/sanic-org/sanic/issues/2297\n self.signal(\"http.lifecycle.response\")(_hub_exit)\n\n # This happens inside of request handling immediately after the route\n # has been identified by the router.\n self.signal(\"http.routing.after\")(_set_transaction)\n\n # The above signals need to be declared before this can be called.\n await old_startup(self)\n\n\nasync def _hub_enter(request):\n # type: (Request) -> None\n hub = Hub.current\n request.ctx._sentry_do_integration = (\n hub.get_integration(SanicIntegration) is not None\n )\n\n if not request.ctx._sentry_do_integration:\n return\n\n weak_request = weakref.ref(request)\n request.ctx._sentry_hub = Hub(hub)\n request.ctx._sentry_hub.__enter__()\n\n with request.ctx._sentry_hub.configure_scope() as scope:\n scope.clear_breadcrumbs()\n scope.add_event_processor(_make_request_processor(weak_request))\n\n\nasync def _hub_exit(request, **_):\n # type: (Request, **Any) -> None\n request.ctx._sentry_hub.__exit__(None, None, None)\n\n\nasync def _set_transaction(request, route, **kwargs):\n # type: (Request, Route, **Any) -> None\n hub = Hub.current\n if hub.get_integration(SanicIntegration) is not None:\n with capture_internal_exceptions():\n with hub.configure_scope() as scope:\n route_name = route.name.replace(request.app.name, \"\").strip(\".\")\n scope.transaction = route_name\n\n\ndef _sentry_error_handler_lookup(self, exception, *args, **kwargs):\n # type: (Any, Exception, *Any, **Any) -> Optional[object]\n _capture_exception(exception)\n old_error_handler = old_error_handler_lookup(self, exception, *args, **kwargs)\n\n if old_error_handler is None:\n return None\n\n if Hub.current.get_integration(SanicIntegration) is None:\n return old_error_handler\n\n async def sentry_wrapped_error_handler(request, exception):\n # type: (Request, Exception) -> Any\n 
try:\n response = old_error_handler(request, exception)\n if isawaitable(response):\n response = await response\n return response\n except Exception:\n # Report errors that occur in Sanic error handler. These\n # exceptions will not even show up in Sanic's\n # `sanic.exceptions` logger.\n exc_info = sys.exc_info()\n _capture_exception(exc_info)\n reraise(*exc_info)\n finally:\n # As mentioned in previous comment in _startup, this can be removed\n # after https://github.com/sanic-org/sanic/issues/2297 is resolved\n if SanicIntegration.version >= (21, 9):\n await _hub_exit(request)\n\n return sentry_wrapped_error_handler\n\n\nasync def _legacy_handle_request(self, request, *args, **kwargs):\n # type: (Any, Request, *Any, **Any) -> Any\n hub = Hub.current\n if hub.get_integration(SanicIntegration) is None:\n return old_handle_request(self, request, *args, **kwargs)\n\n weak_request = weakref.ref(request)\n\n with Hub(hub) as hub:\n with hub.configure_scope() as scope:\n scope.clear_breadcrumbs()\n scope.add_event_processor(_make_request_processor(weak_request))\n\n response = old_handle_request(self, request, *args, **kwargs)\n if isawaitable(response):\n response = await response\n\n return response\n\n\ndef _legacy_router_get(self, *args):\n # type: (Any, Union[Any, Request]) -> Any\n rv = old_router_get(self, *args)\n hub = Hub.current\n if hub.get_integration(SanicIntegration) is not None:\n with capture_internal_exceptions():\n with hub.configure_scope() as scope:\n if SanicIntegration.version and SanicIntegration.version >= (21, 3):\n # Sanic versions above and including 21.3 append the app name to the\n # route name, and so we need to remove it from Route name so the\n # transaction name is consistent across all versions\n sanic_app_name = self.ctx.app.name\n sanic_route = rv[0].name\n\n if sanic_route.startswith(\"%s.\" % sanic_app_name):\n # We add a 1 to the len of the sanic_app_name because there is a dot\n # that joins app name and the route name\n # Format: app_name.route_name\n sanic_route = sanic_route[len(sanic_app_name) + 1 :]\n\n scope.transaction = sanic_route\n else:\n scope.transaction = rv[0].__name__\n return rv\n\n\ndef _capture_exception(exception):\n # type: (Union[Tuple[Optional[type], Optional[BaseException], Any], BaseException]) -> None\n hub = Hub.current\n integration = hub.get_integration(SanicIntegration)\n if integration is None:\n return\n\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n with capture_internal_exceptions():\n event, hint = event_from_exception(\n exception,\n client_options=client.options,\n mechanism={\"type\": \"sanic\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n\ndef _make_request_processor(weak_request):\n # type: (Callable[[], Request]) -> EventProcessor\n def sanic_processor(event, hint):\n # type: (Event, Optional[Hint]) -> Optional[Event]\n\n try:\n if hint and issubclass(hint[\"exc_info\"][0], SanicException):\n return None\n except KeyError:\n pass\n\n request = weak_request()\n if request is None:\n return event\n\n with capture_internal_exceptions():\n extractor = SanicRequestExtractor(request)\n extractor.extract_into_event(event)\n\n request_info = event[\"request\"]\n urlparts = urlparse.urlsplit(request.url)\n\n request_info[\"url\"] = \"%s://%s%s\" % (\n urlparts.scheme,\n urlparts.netloc,\n urlparts.path,\n )\n\n request_info[\"query_string\"] = urlparts.query\n request_info[\"method\"] = request.method\n request_info[\"env\"] = {\"REMOTE_ADDR\": 
request.remote_addr}\n request_info[\"headers\"] = _filter_headers(dict(request.headers))\n\n return event\n\n return sanic_processor\n", "path": "sentry_sdk/integrations/sanic.py"}], "after_files": [{"content": "import sys\nimport weakref\nfrom inspect import isawaitable\n\nfrom sentry_sdk._compat import urlparse, reraise\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.utils import (\n capture_internal_exceptions,\n event_from_exception,\n HAS_REAL_CONTEXTVARS,\n CONTEXTVARS_ERROR_MESSAGE,\n)\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk.integrations._wsgi_common import RequestExtractor, _filter_headers\nfrom sentry_sdk.integrations.logging import ignore_logger\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any\n from typing import Callable\n from typing import Optional\n from typing import Union\n from typing import Tuple\n from typing import Dict\n\n from sanic.request import Request, RequestParameters\n\n from sentry_sdk._types import Event, EventProcessor, Hint\n from sanic.router import Route\n\ntry:\n from sanic import Sanic, __version__ as SANIC_VERSION\n from sanic.exceptions import SanicException\n from sanic.router import Router\n from sanic.handlers import ErrorHandler\nexcept ImportError:\n raise DidNotEnable(\"Sanic not installed\")\n\nold_error_handler_lookup = ErrorHandler.lookup\nold_handle_request = Sanic.handle_request\nold_router_get = Router.get\n\ntry:\n # This method was introduced in Sanic v21.9\n old_startup = Sanic._startup\nexcept AttributeError:\n pass\n\n\nclass SanicIntegration(Integration):\n identifier = \"sanic\"\n version = (0, 0) # type: Tuple[int, ...]\n\n @staticmethod\n def setup_once():\n # type: () -> None\n\n try:\n SanicIntegration.version = tuple(map(int, SANIC_VERSION.split(\".\")))\n except (TypeError, ValueError):\n raise DidNotEnable(\"Unparsable Sanic version: {}\".format(SANIC_VERSION))\n\n if SanicIntegration.version < (0, 8):\n raise DidNotEnable(\"Sanic 0.8 or newer required.\")\n\n if not HAS_REAL_CONTEXTVARS:\n # We better have contextvars or we're going to leak state between\n # requests.\n raise DidNotEnable(\n \"The sanic integration for Sentry requires Python 3.7+ \"\n \" or the aiocontextvars package.\" + CONTEXTVARS_ERROR_MESSAGE\n )\n\n if SANIC_VERSION.startswith(\"0.8.\"):\n # Sanic 0.8 and older creates a logger named \"root\" and puts a\n # stringified version of every exception in there (without exc_info),\n # which our error deduplication can't detect.\n #\n # We explicitly check the version here because it is a very\n # invasive step to ignore this logger and not necessary in newer\n # versions at all.\n #\n # https://github.com/huge-success/sanic/issues/1332\n ignore_logger(\"root\")\n\n if SanicIntegration.version < (21, 9):\n _setup_legacy_sanic()\n return\n\n _setup_sanic()\n\n\nclass SanicRequestExtractor(RequestExtractor):\n def content_length(self):\n # type: () -> int\n if self.request.body is None:\n return 0\n return len(self.request.body)\n\n def cookies(self):\n # type: () -> Dict[str, str]\n return dict(self.request.cookies)\n\n def raw_data(self):\n # type: () -> bytes\n return self.request.body\n\n def form(self):\n # type: () -> RequestParameters\n return self.request.form\n\n def is_json(self):\n # type: () -> bool\n raise NotImplementedError()\n\n def json(self):\n # type: () -> Optional[Any]\n return self.request.json\n\n def files(self):\n # type: () -> RequestParameters\n return self.request.files\n\n def size_of_file(self, file):\n # type: 
(Any) -> int\n return len(file.body or ())\n\n\ndef _setup_sanic():\n # type: () -> None\n Sanic._startup = _startup\n ErrorHandler.lookup = _sentry_error_handler_lookup\n\n\ndef _setup_legacy_sanic():\n # type: () -> None\n Sanic.handle_request = _legacy_handle_request\n Router.get = _legacy_router_get\n ErrorHandler.lookup = _sentry_error_handler_lookup\n\n\nasync def _startup(self):\n # type: (Sanic) -> None\n # This happens about as early in the lifecycle as possible, just after the\n # Request object is created. The body has not yet been consumed.\n self.signal(\"http.lifecycle.request\")(_hub_enter)\n\n # This happens after the handler is complete. In v21.9 this signal is not\n # dispatched when there is an exception. Therefore we need to close out\n # and call _hub_exit from the custom exception handler as well.\n # See https://github.com/sanic-org/sanic/issues/2297\n self.signal(\"http.lifecycle.response\")(_hub_exit)\n\n # This happens inside of request handling immediately after the route\n # has been identified by the router.\n self.signal(\"http.routing.after\")(_set_transaction)\n\n # The above signals need to be declared before this can be called.\n await old_startup(self)\n\n\nasync def _hub_enter(request):\n # type: (Request) -> None\n hub = Hub.current\n request.ctx._sentry_do_integration = (\n hub.get_integration(SanicIntegration) is not None\n )\n\n if not request.ctx._sentry_do_integration:\n return\n\n weak_request = weakref.ref(request)\n request.ctx._sentry_hub = Hub(hub)\n request.ctx._sentry_hub.__enter__()\n\n with request.ctx._sentry_hub.configure_scope() as scope:\n scope.clear_breadcrumbs()\n scope.add_event_processor(_make_request_processor(weak_request))\n\n\nasync def _hub_exit(request, **_):\n # type: (Request, **Any) -> None\n request.ctx._sentry_hub.__exit__(None, None, None)\n\n\nasync def _set_transaction(request, route, **kwargs):\n # type: (Request, Route, **Any) -> None\n hub = Hub.current\n if hub.get_integration(SanicIntegration) is not None:\n with capture_internal_exceptions():\n with hub.configure_scope() as scope:\n route_name = route.name.replace(request.app.name, \"\").strip(\".\")\n scope.transaction = route_name\n\n\ndef _sentry_error_handler_lookup(self, exception, *args, **kwargs):\n # type: (Any, Exception, *Any, **Any) -> Optional[object]\n _capture_exception(exception)\n old_error_handler = old_error_handler_lookup(self, exception, *args, **kwargs)\n\n if old_error_handler is None:\n return None\n\n if Hub.current.get_integration(SanicIntegration) is None:\n return old_error_handler\n\n async def sentry_wrapped_error_handler(request, exception):\n # type: (Request, Exception) -> Any\n try:\n response = old_error_handler(request, exception)\n if isawaitable(response):\n response = await response\n return response\n except Exception:\n # Report errors that occur in Sanic error handler. 
These\n # exceptions will not even show up in Sanic's\n # `sanic.exceptions` logger.\n exc_info = sys.exc_info()\n _capture_exception(exc_info)\n reraise(*exc_info)\n finally:\n # As mentioned in previous comment in _startup, this can be removed\n # after https://github.com/sanic-org/sanic/issues/2297 is resolved\n if SanicIntegration.version == (21, 9):\n await _hub_exit(request)\n\n return sentry_wrapped_error_handler\n\n\nasync def _legacy_handle_request(self, request, *args, **kwargs):\n # type: (Any, Request, *Any, **Any) -> Any\n hub = Hub.current\n if hub.get_integration(SanicIntegration) is None:\n return old_handle_request(self, request, *args, **kwargs)\n\n weak_request = weakref.ref(request)\n\n with Hub(hub) as hub:\n with hub.configure_scope() as scope:\n scope.clear_breadcrumbs()\n scope.add_event_processor(_make_request_processor(weak_request))\n\n response = old_handle_request(self, request, *args, **kwargs)\n if isawaitable(response):\n response = await response\n\n return response\n\n\ndef _legacy_router_get(self, *args):\n # type: (Any, Union[Any, Request]) -> Any\n rv = old_router_get(self, *args)\n hub = Hub.current\n if hub.get_integration(SanicIntegration) is not None:\n with capture_internal_exceptions():\n with hub.configure_scope() as scope:\n if SanicIntegration.version and SanicIntegration.version >= (21, 3):\n # Sanic versions above and including 21.3 append the app name to the\n # route name, and so we need to remove it from Route name so the\n # transaction name is consistent across all versions\n sanic_app_name = self.ctx.app.name\n sanic_route = rv[0].name\n\n if sanic_route.startswith(\"%s.\" % sanic_app_name):\n # We add a 1 to the len of the sanic_app_name because there is a dot\n # that joins app name and the route name\n # Format: app_name.route_name\n sanic_route = sanic_route[len(sanic_app_name) + 1 :]\n\n scope.transaction = sanic_route\n else:\n scope.transaction = rv[0].__name__\n return rv\n\n\ndef _capture_exception(exception):\n # type: (Union[Tuple[Optional[type], Optional[BaseException], Any], BaseException]) -> None\n hub = Hub.current\n integration = hub.get_integration(SanicIntegration)\n if integration is None:\n return\n\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n with capture_internal_exceptions():\n event, hint = event_from_exception(\n exception,\n client_options=client.options,\n mechanism={\"type\": \"sanic\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n\ndef _make_request_processor(weak_request):\n # type: (Callable[[], Request]) -> EventProcessor\n def sanic_processor(event, hint):\n # type: (Event, Optional[Hint]) -> Optional[Event]\n\n try:\n if hint and issubclass(hint[\"exc_info\"][0], SanicException):\n return None\n except KeyError:\n pass\n\n request = weak_request()\n if request is None:\n return event\n\n with capture_internal_exceptions():\n extractor = SanicRequestExtractor(request)\n extractor.extract_into_event(event)\n\n request_info = event[\"request\"]\n urlparts = urlparse.urlsplit(request.url)\n\n request_info[\"url\"] = \"%s://%s%s\" % (\n urlparts.scheme,\n urlparts.netloc,\n urlparts.path,\n )\n\n request_info[\"query_string\"] = urlparts.query\n request_info[\"method\"] = request.method\n request_info[\"env\"] = {\"REMOTE_ADDR\": request.remote_addr}\n request_info[\"headers\"] = _filter_headers(dict(request.headers))\n\n return event\n\n return sanic_processor\n", "path": "sentry_sdk/integrations/sanic.py"}]}
| 3,797 | 149 |
gh_patches_debug_31403
|
rasdani/github-patches
|
git_diff
|
PyGithub__PyGithub-718
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Comments do not include reply info
assert github.Github().get_user().get_repo("PyGithub").get_pull(664).get_comment(166456140).in_reply_to_id == "166453895"
Currently, in_reply_to_id is undefined. This makes it impossible to understand the comment threading.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `github/PullRequestComment.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # ########################## Copyrights and license ############################
4 # #
5 # Copyright 2012 Vincent Jacques <[email protected]> #
6 # Copyright 2012 Zearin <[email protected]> #
7 # Copyright 2013 AKFish <[email protected]> #
8 # Copyright 2013 Michael Stead <[email protected]> #
9 # Copyright 2013 Vincent Jacques <[email protected]> #
10 # Copyright 2013 martinqt <[email protected]> #
11 # #
12 # This file is part of PyGithub. #
13 # http://pygithub.github.io/PyGithub/v1/index.html #
14 # #
15 # PyGithub is free software: you can redistribute it and/or modify it under #
16 # the terms of the GNU Lesser General Public License as published by the Free #
17 # Software Foundation, either version 3 of the License, or (at your option) #
18 # any later version. #
19 # #
20 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
21 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
22 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
23 # details. #
24 # #
25 # You should have received a copy of the GNU Lesser General Public License #
26 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
27 # #
28 # ##############################################################################
29
30 import github.GithubObject
31
32 import github.NamedUser
33
34
35 class PullRequestComment(github.GithubObject.CompletableGithubObject):
36 """
37 This class represents PullRequestComments. The reference can be found here http://developer.github.com/v3/pulls/comments/
38 """
39
40 def __repr__(self):
41 return self.get__repr__({"id": self._id.value, "user": self._user.value})
42
43 @property
44 def body(self):
45 """
46 :type: string
47 """
48 self._completeIfNotSet(self._body)
49 return self._body.value
50
51 @property
52 def commit_id(self):
53 """
54 :type: string
55 """
56 self._completeIfNotSet(self._commit_id)
57 return self._commit_id.value
58
59 @property
60 def created_at(self):
61 """
62 :type: datetime.datetime
63 """
64 self._completeIfNotSet(self._created_at)
65 return self._created_at.value
66
67 @property
68 def diff_hunk(self):
69 """
70 :type: string
71 """
72 self._completeIfNotSet(self._diff_hunk)
73 return self._diff_hunk.value
74
75 @property
76 def id(self):
77 """
78 :type: integer
79 """
80 self._completeIfNotSet(self._id)
81 return self._id.value
82
83 @property
84 def original_commit_id(self):
85 """
86 :type: string
87 """
88 self._completeIfNotSet(self._original_commit_id)
89 return self._original_commit_id.value
90
91 @property
92 def original_position(self):
93 """
94 :type: integer
95 """
96 self._completeIfNotSet(self._original_position)
97 return self._original_position.value
98
99 @property
100 def path(self):
101 """
102 :type: string
103 """
104 self._completeIfNotSet(self._path)
105 return self._path.value
106
107 @property
108 def position(self):
109 """
110 :type: integer
111 """
112 self._completeIfNotSet(self._position)
113 return self._position.value
114
115 @property
116 def pull_request_url(self):
117 """
118 :type: string
119 """
120 self._completeIfNotSet(self._pull_request_url)
121 return self._pull_request_url.value
122
123 @property
124 def updated_at(self):
125 """
126 :type: datetime.datetime
127 """
128 self._completeIfNotSet(self._updated_at)
129 return self._updated_at.value
130
131 @property
132 def url(self):
133 """
134 :type: string
135 """
136 self._completeIfNotSet(self._url)
137 return self._url.value
138
139 @property
140 def html_url(self):
141 """
142 :type: string
143 """
144 self._completeIfNotSet(self._html_url)
145 return self._html_url.value
146
147 @property
148 def user(self):
149 """
150 :type: :class:`github.NamedUser.NamedUser`
151 """
152 self._completeIfNotSet(self._user)
153 return self._user.value
154
155 def delete(self):
156 """
157 :calls: `DELETE /repos/:owner/:repo/pulls/comments/:number <http://developer.github.com/v3/pulls/comments>`_
158 :rtype: None
159 """
160 headers, data = self._requester.requestJsonAndCheck(
161 "DELETE",
162 self.url
163 )
164
165 def edit(self, body):
166 """
167 :calls: `PATCH /repos/:owner/:repo/pulls/comments/:number <http://developer.github.com/v3/pulls/comments>`_
168 :param body: string
169 :rtype: None
170 """
171 assert isinstance(body, (str, unicode)), body
172 post_parameters = {
173 "body": body,
174 }
175 headers, data = self._requester.requestJsonAndCheck(
176 "PATCH",
177 self.url,
178 input=post_parameters
179 )
180 self._useAttributes(data)
181
182 def get_reactions(self):
183 """
184 :calls: `GET /repos/:owner/:repo/pulls/comments/:number/reactions
185 <https://developer.github.com/v3/reactions/#list-reactions-for-a-pull-request-review-comment>`
186 :return: :class: :class:`github.PaginatedList.PaginatedList` of :class:`github.Reaction.Reaction`
187 """
188 return github.PaginatedList.PaginatedList(
189 github.Reaction.Reaction,
190 self._requester,
191 self.url + "/reactions",
192 None,
193 headers={'Accept': 'application/vnd.github.squirrel-girl-preview'}
194 )
195
196 def create_reaction(self, reaction_type):
197 """
198 :calls: `POST /repos/:owner/:repo/pulls/comments/:number/reactions
199 <https://developer.github.com/v3/reactions/#create-reaction-for-a-pull-request-review-comment>`_
200 :param reaction_type: string
201 :rtype: :class:`github.Reaction.Reaction`
202 """
203 assert isinstance(reaction_type, (str, unicode)), "reaction type should be a string"
204 assert reaction_type in ["+1", "-1", "laugh", "confused", "heart", "hooray"], \
205 "Invalid reaction type (https://developer.github.com/v3/reactions/#reaction-types)"
206
207 post_parameters = {
208 "content": reaction_type,
209 }
210 headers, data = self._requester.requestJsonAndCheck(
211 "POST",
212 self.url + "/reactions",
213 input=post_parameters,
214 headers={'Accept': 'application/vnd.github.squirrel-girl-preview'}
215 )
216 return github.Reaction.Reaction(self._requester, headers, data, completed=True)
217
218 def _initAttributes(self):
219 self._body = github.GithubObject.NotSet
220 self._commit_id = github.GithubObject.NotSet
221 self._created_at = github.GithubObject.NotSet
222 self._diff_hunk = github.GithubObject.NotSet
223 self._id = github.GithubObject.NotSet
224 self._original_commit_id = github.GithubObject.NotSet
225 self._original_position = github.GithubObject.NotSet
226 self._path = github.GithubObject.NotSet
227 self._position = github.GithubObject.NotSet
228 self._pull_request_url = github.GithubObject.NotSet
229 self._updated_at = github.GithubObject.NotSet
230 self._url = github.GithubObject.NotSet
231 self._html_url = github.GithubObject.NotSet
232 self._user = github.GithubObject.NotSet
233
234 def _useAttributes(self, attributes):
235 if "body" in attributes: # pragma no branch
236 self._body = self._makeStringAttribute(attributes["body"])
237 if "commit_id" in attributes: # pragma no branch
238 self._commit_id = self._makeStringAttribute(attributes["commit_id"])
239 if "created_at" in attributes: # pragma no branch
240 self._created_at = self._makeDatetimeAttribute(attributes["created_at"])
241 if "diff_hunk" in attributes: # pragma no branch
242 self._diff_hunk = self._makeStringAttribute(attributes["diff_hunk"])
243 if "id" in attributes: # pragma no branch
244 self._id = self._makeIntAttribute(attributes["id"])
245 if "original_commit_id" in attributes: # pragma no branch
246 self._original_commit_id = self._makeStringAttribute(attributes["original_commit_id"])
247 if "original_position" in attributes: # pragma no branch
248 self._original_position = self._makeIntAttribute(attributes["original_position"])
249 if "path" in attributes: # pragma no branch
250 self._path = self._makeStringAttribute(attributes["path"])
251 if "position" in attributes: # pragma no branch
252 self._position = self._makeIntAttribute(attributes["position"])
253 if "pull_request_url" in attributes: # pragma no branch
254 self._pull_request_url = self._makeStringAttribute(attributes["pull_request_url"])
255 if "updated_at" in attributes: # pragma no branch
256 self._updated_at = self._makeDatetimeAttribute(attributes["updated_at"])
257 if "url" in attributes: # pragma no branch
258 self._url = self._makeStringAttribute(attributes["url"])
259 if "html_url" in attributes: # pragma no branch
260 self._html_url = self._makeStringAttribute(attributes["html_url"])
261 if "user" in attributes: # pragma no branch
262 self._user = self._makeClassAttribute(github.NamedUser.NamedUser, attributes["user"])
263
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/github/PullRequestComment.py b/github/PullRequestComment.py
--- a/github/PullRequestComment.py
+++ b/github/PullRequestComment.py
@@ -80,6 +80,14 @@
self._completeIfNotSet(self._id)
return self._id.value
+ @property
+ def in_reply_to_id(self):
+ """
+ :type: integer
+ """
+ self._completeIfNotSet(self._in_reply_to_id)
+ return self._in_reply_to_id.value
+
@property
def original_commit_id(self):
"""
@@ -221,6 +229,7 @@
self._created_at = github.GithubObject.NotSet
self._diff_hunk = github.GithubObject.NotSet
self._id = github.GithubObject.NotSet
+ self._in_reply_to_id = github.GithubObject.NotSet
self._original_commit_id = github.GithubObject.NotSet
self._original_position = github.GithubObject.NotSet
self._path = github.GithubObject.NotSet
@@ -242,6 +251,8 @@
self._diff_hunk = self._makeStringAttribute(attributes["diff_hunk"])
if "id" in attributes: # pragma no branch
self._id = self._makeIntAttribute(attributes["id"])
+ if "in_reply_to_id" in attributes: # pragma no branch
+ self._in_reply_to_id = self._makeIntAttribute(attributes["in_reply_to_id"])
if "original_commit_id" in attributes: # pragma no branch
self._original_commit_id = self._makeStringAttribute(attributes["original_commit_id"])
if "original_position" in attributes: # pragma no branch
|
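A possible usage sketch for the `in_reply_to_id` property added above, assuming a PyGithub build with this patch applied, a valid token, and placeholder repository/PR values; `raw_data` is read for top-level comments, which carry no `in_reply_to_id` in the API payload:

```python
from collections import defaultdict

from github import Github

gh = Github("YOUR_TOKEN")                                # placeholder token
pull = gh.get_repo("PyGithub/PyGithub").get_pull(664)    # placeholder repo / PR

threads = defaultdict(list)
for comment in pull.get_comments():                      # review comments on the diff
    # Replies carry in_reply_to_id; top-level comments omit it, so read the raw
    # payload instead of touching an unset attribute.
    parent = comment.raw_data.get("in_reply_to_id")
    threads[parent or comment.id].append(comment.id)

for root, ids in threads.items():
    print(f"thread rooted at {root}: {ids}")
```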
{"golden_diff": "diff --git a/github/PullRequestComment.py b/github/PullRequestComment.py\n--- a/github/PullRequestComment.py\n+++ b/github/PullRequestComment.py\n@@ -80,6 +80,14 @@\n self._completeIfNotSet(self._id)\n return self._id.value\n \n+ @property\n+ def in_reply_to_id(self):\n+ \"\"\"\n+ :type: integer\n+ \"\"\"\n+ self._completeIfNotSet(self._in_reply_to_id)\n+ return self._in_reply_to_id.value\n+\n @property\n def original_commit_id(self):\n \"\"\"\n@@ -221,6 +229,7 @@\n self._created_at = github.GithubObject.NotSet\n self._diff_hunk = github.GithubObject.NotSet\n self._id = github.GithubObject.NotSet\n+ self._in_reply_to_id = github.GithubObject.NotSet\n self._original_commit_id = github.GithubObject.NotSet\n self._original_position = github.GithubObject.NotSet\n self._path = github.GithubObject.NotSet\n@@ -242,6 +251,8 @@\n self._diff_hunk = self._makeStringAttribute(attributes[\"diff_hunk\"])\n if \"id\" in attributes: # pragma no branch\n self._id = self._makeIntAttribute(attributes[\"id\"])\n+ if \"in_reply_to_id\" in attributes: # pragma no branch\n+ self._in_reply_to_id = self._makeIntAttribute(attributes[\"in_reply_to_id\"])\n if \"original_commit_id\" in attributes: # pragma no branch\n self._original_commit_id = self._makeStringAttribute(attributes[\"original_commit_id\"])\n if \"original_position\" in attributes: # pragma no branch\n", "issue": "Comments do not include reply info\nassert github.Github().get_user().get_repo(\"PyGithub\").get_pull(664).get_comment(166456140).in_reply_to_id == \"166453895\"\r\n\r\nCurrently, in_reply_to_id is undefined. This makes it impossible to understand the comment threading.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# ########################## Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Michael Stead <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2013 martinqt <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.github.io/PyGithub/v1/index.html #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #\n# #\n# ##############################################################################\n\nimport github.GithubObject\n\nimport github.NamedUser\n\n\nclass PullRequestComment(github.GithubObject.CompletableGithubObject):\n \"\"\"\n This class represents PullRequestComments. 
The reference can be found here http://developer.github.com/v3/pulls/comments/\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__({\"id\": self._id.value, \"user\": self._user.value})\n\n @property\n def body(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._body)\n return self._body.value\n\n @property\n def commit_id(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._commit_id)\n return self._commit_id.value\n\n @property\n def created_at(self):\n \"\"\"\n :type: datetime.datetime\n \"\"\"\n self._completeIfNotSet(self._created_at)\n return self._created_at.value\n\n @property\n def diff_hunk(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._diff_hunk)\n return self._diff_hunk.value\n\n @property\n def id(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._id)\n return self._id.value\n\n @property\n def original_commit_id(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._original_commit_id)\n return self._original_commit_id.value\n\n @property\n def original_position(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._original_position)\n return self._original_position.value\n\n @property\n def path(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._path)\n return self._path.value\n\n @property\n def position(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._position)\n return self._position.value\n\n @property\n def pull_request_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._pull_request_url)\n return self._pull_request_url.value\n\n @property\n def updated_at(self):\n \"\"\"\n :type: datetime.datetime\n \"\"\"\n self._completeIfNotSet(self._updated_at)\n return self._updated_at.value\n\n @property\n def url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._url)\n return self._url.value\n\n @property\n def html_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._html_url)\n return self._html_url.value\n\n @property\n def user(self):\n \"\"\"\n :type: :class:`github.NamedUser.NamedUser`\n \"\"\"\n self._completeIfNotSet(self._user)\n return self._user.value\n\n def delete(self):\n \"\"\"\n :calls: `DELETE /repos/:owner/:repo/pulls/comments/:number <http://developer.github.com/v3/pulls/comments>`_\n :rtype: None\n \"\"\"\n headers, data = self._requester.requestJsonAndCheck(\n \"DELETE\",\n self.url\n )\n\n def edit(self, body):\n \"\"\"\n :calls: `PATCH /repos/:owner/:repo/pulls/comments/:number <http://developer.github.com/v3/pulls/comments>`_\n :param body: string\n :rtype: None\n \"\"\"\n assert isinstance(body, (str, unicode)), body\n post_parameters = {\n \"body\": body,\n }\n headers, data = self._requester.requestJsonAndCheck(\n \"PATCH\",\n self.url,\n input=post_parameters\n )\n self._useAttributes(data)\n\n def get_reactions(self):\n \"\"\"\n :calls: `GET /repos/:owner/:repo/pulls/comments/:number/reactions\n <https://developer.github.com/v3/reactions/#list-reactions-for-a-pull-request-review-comment>`\n :return: :class: :class:`github.PaginatedList.PaginatedList` of :class:`github.Reaction.Reaction`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.Reaction.Reaction,\n self._requester,\n self.url + \"/reactions\",\n None,\n headers={'Accept': 'application/vnd.github.squirrel-girl-preview'}\n )\n\n def create_reaction(self, reaction_type):\n \"\"\"\n :calls: `POST 
/repos/:owner/:repo/pulls/comments/:number/reactions\n <https://developer.github.com/v3/reactions/#create-reaction-for-a-pull-request-review-comment>`_\n :param reaction_type: string\n :rtype: :class:`github.Reaction.Reaction`\n \"\"\"\n assert isinstance(reaction_type, (str, unicode)), \"reaction type should be a string\"\n assert reaction_type in [\"+1\", \"-1\", \"laugh\", \"confused\", \"heart\", \"hooray\"], \\\n \"Invalid reaction type (https://developer.github.com/v3/reactions/#reaction-types)\"\n\n post_parameters = {\n \"content\": reaction_type,\n }\n headers, data = self._requester.requestJsonAndCheck(\n \"POST\",\n self.url + \"/reactions\",\n input=post_parameters,\n headers={'Accept': 'application/vnd.github.squirrel-girl-preview'}\n )\n return github.Reaction.Reaction(self._requester, headers, data, completed=True)\n\n def _initAttributes(self):\n self._body = github.GithubObject.NotSet\n self._commit_id = github.GithubObject.NotSet\n self._created_at = github.GithubObject.NotSet\n self._diff_hunk = github.GithubObject.NotSet\n self._id = github.GithubObject.NotSet\n self._original_commit_id = github.GithubObject.NotSet\n self._original_position = github.GithubObject.NotSet\n self._path = github.GithubObject.NotSet\n self._position = github.GithubObject.NotSet\n self._pull_request_url = github.GithubObject.NotSet\n self._updated_at = github.GithubObject.NotSet\n self._url = github.GithubObject.NotSet\n self._html_url = github.GithubObject.NotSet\n self._user = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"body\" in attributes: # pragma no branch\n self._body = self._makeStringAttribute(attributes[\"body\"])\n if \"commit_id\" in attributes: # pragma no branch\n self._commit_id = self._makeStringAttribute(attributes[\"commit_id\"])\n if \"created_at\" in attributes: # pragma no branch\n self._created_at = self._makeDatetimeAttribute(attributes[\"created_at\"])\n if \"diff_hunk\" in attributes: # pragma no branch\n self._diff_hunk = self._makeStringAttribute(attributes[\"diff_hunk\"])\n if \"id\" in attributes: # pragma no branch\n self._id = self._makeIntAttribute(attributes[\"id\"])\n if \"original_commit_id\" in attributes: # pragma no branch\n self._original_commit_id = self._makeStringAttribute(attributes[\"original_commit_id\"])\n if \"original_position\" in attributes: # pragma no branch\n self._original_position = self._makeIntAttribute(attributes[\"original_position\"])\n if \"path\" in attributes: # pragma no branch\n self._path = self._makeStringAttribute(attributes[\"path\"])\n if \"position\" in attributes: # pragma no branch\n self._position = self._makeIntAttribute(attributes[\"position\"])\n if \"pull_request_url\" in attributes: # pragma no branch\n self._pull_request_url = self._makeStringAttribute(attributes[\"pull_request_url\"])\n if \"updated_at\" in attributes: # pragma no branch\n self._updated_at = self._makeDatetimeAttribute(attributes[\"updated_at\"])\n if \"url\" in attributes: # pragma no branch\n self._url = self._makeStringAttribute(attributes[\"url\"])\n if \"html_url\" in attributes: # pragma no branch\n self._html_url = self._makeStringAttribute(attributes[\"html_url\"])\n if \"user\" in attributes: # pragma no branch\n self._user = self._makeClassAttribute(github.NamedUser.NamedUser, attributes[\"user\"])\n", "path": "github/PullRequestComment.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# ########################## Copyrights and license ############################\n# #\n# Copyright 2012 
Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Michael Stead <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2013 martinqt <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.github.io/PyGithub/v1/index.html #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #\n# #\n# ##############################################################################\n\nimport github.GithubObject\n\nimport github.NamedUser\n\n\nclass PullRequestComment(github.GithubObject.CompletableGithubObject):\n \"\"\"\n This class represents PullRequestComments. The reference can be found here http://developer.github.com/v3/pulls/comments/\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__({\"id\": self._id.value, \"user\": self._user.value})\n\n @property\n def body(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._body)\n return self._body.value\n\n @property\n def commit_id(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._commit_id)\n return self._commit_id.value\n\n @property\n def created_at(self):\n \"\"\"\n :type: datetime.datetime\n \"\"\"\n self._completeIfNotSet(self._created_at)\n return self._created_at.value\n\n @property\n def diff_hunk(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._diff_hunk)\n return self._diff_hunk.value\n\n @property\n def id(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._id)\n return self._id.value\n\n @property\n def in_reply_to_id(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._in_reply_to_id)\n return self._in_reply_to_id.value\n\n @property\n def original_commit_id(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._original_commit_id)\n return self._original_commit_id.value\n\n @property\n def original_position(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._original_position)\n return self._original_position.value\n\n @property\n def path(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._path)\n return self._path.value\n\n @property\n def position(self):\n \"\"\"\n :type: integer\n \"\"\"\n self._completeIfNotSet(self._position)\n return self._position.value\n\n @property\n def pull_request_url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._pull_request_url)\n return self._pull_request_url.value\n\n @property\n def updated_at(self):\n \"\"\"\n :type: datetime.datetime\n \"\"\"\n self._completeIfNotSet(self._updated_at)\n return self._updated_at.value\n\n @property\n def url(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._url)\n return self._url.value\n\n @property\n def html_url(self):\n \"\"\"\n :type: string\n \"\"\"\n 
self._completeIfNotSet(self._html_url)\n return self._html_url.value\n\n @property\n def user(self):\n \"\"\"\n :type: :class:`github.NamedUser.NamedUser`\n \"\"\"\n self._completeIfNotSet(self._user)\n return self._user.value\n\n def delete(self):\n \"\"\"\n :calls: `DELETE /repos/:owner/:repo/pulls/comments/:number <http://developer.github.com/v3/pulls/comments>`_\n :rtype: None\n \"\"\"\n headers, data = self._requester.requestJsonAndCheck(\n \"DELETE\",\n self.url\n )\n\n def edit(self, body):\n \"\"\"\n :calls: `PATCH /repos/:owner/:repo/pulls/comments/:number <http://developer.github.com/v3/pulls/comments>`_\n :param body: string\n :rtype: None\n \"\"\"\n assert isinstance(body, (str, unicode)), body\n post_parameters = {\n \"body\": body,\n }\n headers, data = self._requester.requestJsonAndCheck(\n \"PATCH\",\n self.url,\n input=post_parameters\n )\n self._useAttributes(data)\n\n def get_reactions(self):\n \"\"\"\n :calls: `GET /repos/:owner/:repo/pulls/comments/:number/reactions\n <https://developer.github.com/v3/reactions/#list-reactions-for-a-pull-request-review-comment>`\n :return: :class: :class:`github.PaginatedList.PaginatedList` of :class:`github.Reaction.Reaction`\n \"\"\"\n return github.PaginatedList.PaginatedList(\n github.Reaction.Reaction,\n self._requester,\n self.url + \"/reactions\",\n None,\n headers={'Accept': 'application/vnd.github.squirrel-girl-preview'}\n )\n\n def create_reaction(self, reaction_type):\n \"\"\"\n :calls: `POST /repos/:owner/:repo/pulls/comments/:number/reactions\n <https://developer.github.com/v3/reactions/#create-reaction-for-a-pull-request-review-comment>`_\n :param reaction_type: string\n :rtype: :class:`github.Reaction.Reaction`\n \"\"\"\n assert isinstance(reaction_type, (str, unicode)), \"reaction type should be a string\"\n assert reaction_type in [\"+1\", \"-1\", \"laugh\", \"confused\", \"heart\", \"hooray\"], \\\n \"Invalid reaction type (https://developer.github.com/v3/reactions/#reaction-types)\"\n\n post_parameters = {\n \"content\": reaction_type,\n }\n headers, data = self._requester.requestJsonAndCheck(\n \"POST\",\n self.url + \"/reactions\",\n input=post_parameters,\n headers={'Accept': 'application/vnd.github.squirrel-girl-preview'}\n )\n return github.Reaction.Reaction(self._requester, headers, data, completed=True)\n\n def _initAttributes(self):\n self._body = github.GithubObject.NotSet\n self._commit_id = github.GithubObject.NotSet\n self._created_at = github.GithubObject.NotSet\n self._diff_hunk = github.GithubObject.NotSet\n self._id = github.GithubObject.NotSet\n self._in_reply_to_id = github.GithubObject.NotSet\n self._original_commit_id = github.GithubObject.NotSet\n self._original_position = github.GithubObject.NotSet\n self._path = github.GithubObject.NotSet\n self._position = github.GithubObject.NotSet\n self._pull_request_url = github.GithubObject.NotSet\n self._updated_at = github.GithubObject.NotSet\n self._url = github.GithubObject.NotSet\n self._html_url = github.GithubObject.NotSet\n self._user = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"body\" in attributes: # pragma no branch\n self._body = self._makeStringAttribute(attributes[\"body\"])\n if \"commit_id\" in attributes: # pragma no branch\n self._commit_id = self._makeStringAttribute(attributes[\"commit_id\"])\n if \"created_at\" in attributes: # pragma no branch\n self._created_at = self._makeDatetimeAttribute(attributes[\"created_at\"])\n if \"diff_hunk\" in attributes: # pragma no branch\n self._diff_hunk = 
self._makeStringAttribute(attributes[\"diff_hunk\"])\n if \"id\" in attributes: # pragma no branch\n self._id = self._makeIntAttribute(attributes[\"id\"])\n if \"in_reply_to_id\" in attributes: # pragma no branch\n self._in_reply_to_id = self._makeIntAttribute(attributes[\"in_reply_to_id\"])\n if \"original_commit_id\" in attributes: # pragma no branch\n self._original_commit_id = self._makeStringAttribute(attributes[\"original_commit_id\"])\n if \"original_position\" in attributes: # pragma no branch\n self._original_position = self._makeIntAttribute(attributes[\"original_position\"])\n if \"path\" in attributes: # pragma no branch\n self._path = self._makeStringAttribute(attributes[\"path\"])\n if \"position\" in attributes: # pragma no branch\n self._position = self._makeIntAttribute(attributes[\"position\"])\n if \"pull_request_url\" in attributes: # pragma no branch\n self._pull_request_url = self._makeStringAttribute(attributes[\"pull_request_url\"])\n if \"updated_at\" in attributes: # pragma no branch\n self._updated_at = self._makeDatetimeAttribute(attributes[\"updated_at\"])\n if \"url\" in attributes: # pragma no branch\n self._url = self._makeStringAttribute(attributes[\"url\"])\n if \"html_url\" in attributes: # pragma no branch\n self._html_url = self._makeStringAttribute(attributes[\"html_url\"])\n if \"user\" in attributes: # pragma no branch\n self._user = self._makeClassAttribute(github.NamedUser.NamedUser, attributes[\"user\"])\n", "path": "github/PullRequestComment.py"}]}
| 3,205 | 390 |
gh_patches_debug_23315
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-5328
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Internal Server Error thrown for patching a nonexistent session
**Describe the bug**
HTTP 500 error is thrown when a patch request is sent for a session which doesn't exist.
**To Reproduce**
Steps to reproduce the behavior:
1. Send a patch request for a session which doesn't exist
**Expected behavior**
A proper error should be sent.
Working on fixing this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/schema/sessions.py`
Content:
```
1 from marshmallow import validates_schema, validate
2 from marshmallow_jsonapi import fields
3 from marshmallow_jsonapi.flask import Relationship
4
5 from app.api.helpers.exceptions import UnprocessableEntity, ForbiddenException
6 from app.api.helpers.permission_manager import has_access
7 from app.api.helpers.utilities import dasherize
8 from app.api.schema.base import SoftDeletionSchema
9 from app.models.session import Session
10 from utils.common import use_defaults
11
12
13 @use_defaults()
14 class SessionSchema(SoftDeletionSchema):
15 """
16 Api schema for Session Model
17 """
18
19 class Meta:
20 """
21 Meta class for Session Api Schema
22 """
23 type_ = 'session'
24 self_view = 'v1.session_detail'
25 self_view_kwargs = {'id': '<id>'}
26 inflect = dasherize
27
28 @validates_schema(pass_original=True)
29 def validate_date(self, data, original_data):
30 if 'id' in original_data['data']:
31 session = Session.query.filter_by(id=original_data['data']['id']).one()
32
33 if 'starts_at' not in data:
34 data['starts_at'] = session.starts_at
35
36 if 'ends_at' not in data:
37 data['ends_at'] = session.ends_at
38
39 if 'event' not in data:
40 data['event'] = session.event_id
41
42 if data['starts_at'] and data['ends_at']:
43 if data['starts_at'] >= data['ends_at']:
44 raise UnprocessableEntity(
45 {'pointer': '/data/attributes/ends-at'}, "ends-at should be after starts-at")
46
47 if 'state' in data:
48 if data['state'] is not 'draft' or not 'pending':
49 if not has_access('is_coorganizer', event_id=data['event']):
50 return ForbiddenException({'source': ''}, 'Co-organizer access is required.')
51
52 if 'track' in data:
53 if not has_access('is_coorganizer', event_id=data['event']):
54 return ForbiddenException({'source': ''}, 'Co-organizer access is required.')
55
56 if 'microlocation' in data:
57 if not has_access('is_coorganizer', event_id=data['event']):
58 return ForbiddenException({'source': ''}, 'Co-organizer access is required.')
59
60 id = fields.Str(dump_only=True)
61 title = fields.Str(required=True)
62 subtitle = fields.Str(allow_none=True)
63 level = fields.Int(allow_none=True)
64 short_abstract = fields.Str(allow_none=True)
65 long_abstract = fields.Str(allow_none=True)
66 comments = fields.Str(allow_none=True)
67 starts_at = fields.DateTime(allow_none=True)
68 ends_at = fields.DateTime(allow_none=True)
69 language = fields.Str(allow_none=True)
70 slides_url = fields.Url(allow_none=True)
71 video_url = fields.Url(allow_none=True)
72 audio_url = fields.Url(allow_none=True)
73 signup_url = fields.Url(allow_none=True)
74 state = fields.Str(validate=validate.OneOf(choices=["pending", "accepted", "confirmed", "rejected", "draft"]),
75 allow_none=True, default='draft')
76 created_at = fields.DateTime(dump_only=True)
77 deleted_at = fields.DateTime(dump_only=True)
78 submitted_at = fields.DateTime(allow_none=True)
79 is_mail_sent = fields.Boolean()
80 last_modified_at = fields.DateTime(dump_only=True)
81 send_email = fields.Boolean(load_only=True, allow_none=True)
82 average_rating = fields.Float(dump_only=True)
83 microlocation = Relationship(attribute='microlocation',
84 self_view='v1.session_microlocation',
85 self_view_kwargs={'id': '<id>'},
86 related_view='v1.microlocation_detail',
87 related_view_kwargs={'session_id': '<id>'},
88 schema='MicrolocationSchema',
89 type_='microlocation')
90 track = Relationship(attribute='track',
91 self_view='v1.session_track',
92 self_view_kwargs={'id': '<id>'},
93 related_view='v1.track_detail',
94 related_view_kwargs={'session_id': '<id>'},
95 schema='TrackSchema',
96 type_='track')
97 session_type = Relationship(attribute='session_type',
98 self_view='v1.session_session_type',
99 self_view_kwargs={'id': '<id>'},
100 related_view='v1.session_type_detail',
101 related_view_kwargs={'session_id': '<id>'},
102 schema='SessionTypeSchema',
103 type_='session-type')
104 event = Relationship(attribute='event',
105 self_view='v1.session_event',
106 self_view_kwargs={'id': '<id>'},
107 related_view='v1.event_detail',
108 related_view_kwargs={'session_id': '<id>'},
109 schema='EventSchemaPublic',
110 type_='event')
111 feedbacks = Relationship(attribute='feedbacks',
112 self_view='v1.session_feedbacks',
113 self_view_kwargs={'id': '<id>'},
114 related_view='v1.feedback_list',
115 related_view_kwargs={'session_id': '<id>'},
116 schema='FeedbackSchema',
117 many=True,
118 type_='feedback')
119 speakers = Relationship(attribute='speakers',
120 many=True,
121 self_view='v1.session_speaker',
122 self_view_kwargs={'id': '<id>'},
123 related_view='v1.speaker_list',
124 related_view_kwargs={'session_id': '<id>'},
125 schema='SpeakerSchema',
126 type_='speaker')
127 creator = Relationship(attribute='user',
128 self_view='v1.session_user',
129 self_view_kwargs={'id': '<id>'},
130 related_view='v1.user_detail',
131 related_view_kwargs={'session_id': '<id>'},
132 schema='UserSchemaPublic',
133 type_='user')
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/api/schema/sessions.py b/app/api/schema/sessions.py
--- a/app/api/schema/sessions.py
+++ b/app/api/schema/sessions.py
@@ -1,6 +1,8 @@
+from flask_rest_jsonapi.exceptions import ObjectNotFound
from marshmallow import validates_schema, validate
from marshmallow_jsonapi import fields
from marshmallow_jsonapi.flask import Relationship
+from sqlalchemy.orm.exc import NoResultFound
from app.api.helpers.exceptions import UnprocessableEntity, ForbiddenException
from app.api.helpers.permission_manager import has_access
@@ -28,7 +30,10 @@
@validates_schema(pass_original=True)
def validate_date(self, data, original_data):
if 'id' in original_data['data']:
- session = Session.query.filter_by(id=original_data['data']['id']).one()
+ try:
+ session = Session.query.filter_by(id=original_data['data']['id']).one()
+ except NoResultFound:
+ raise ObjectNotFound({'parameter': '{id}'}, "Session: not found")
if 'starts_at' not in data:
data['starts_at'] = session.starts_at
|
{"golden_diff": "diff --git a/app/api/schema/sessions.py b/app/api/schema/sessions.py\n--- a/app/api/schema/sessions.py\n+++ b/app/api/schema/sessions.py\n@@ -1,6 +1,8 @@\n+from flask_rest_jsonapi.exceptions import ObjectNotFound\n from marshmallow import validates_schema, validate\n from marshmallow_jsonapi import fields\n from marshmallow_jsonapi.flask import Relationship\n+from sqlalchemy.orm.exc import NoResultFound\n \n from app.api.helpers.exceptions import UnprocessableEntity, ForbiddenException\n from app.api.helpers.permission_manager import has_access\n@@ -28,7 +30,10 @@\n @validates_schema(pass_original=True)\n def validate_date(self, data, original_data):\n if 'id' in original_data['data']:\n- session = Session.query.filter_by(id=original_data['data']['id']).one()\n+ try:\n+ session = Session.query.filter_by(id=original_data['data']['id']).one()\n+ except NoResultFound:\n+ raise ObjectNotFound({'parameter': '{id}'}, \"Session: not found\")\n \n if 'starts_at' not in data:\n data['starts_at'] = session.starts_at\n", "issue": "Internal Server Error thrown for patching a nonexistent session\n**Describe the bug**\r\n<!-- A clear and concise description of what the bug is. -->\r\nHTTP 500 error is thrown when a patch request is sent for a session which doesn't exist.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Send a patch request for a session which doesn't exist\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nA proper error should be sent.\r\n\r\nWorking on fixing this.\n", "before_files": [{"content": "from marshmallow import validates_schema, validate\nfrom marshmallow_jsonapi import fields\nfrom marshmallow_jsonapi.flask import Relationship\n\nfrom app.api.helpers.exceptions import UnprocessableEntity, ForbiddenException\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.utilities import dasherize\nfrom app.api.schema.base import SoftDeletionSchema\nfrom app.models.session import Session\nfrom utils.common import use_defaults\n\n\n@use_defaults()\nclass SessionSchema(SoftDeletionSchema):\n \"\"\"\n Api schema for Session Model\n \"\"\"\n\n class Meta:\n \"\"\"\n Meta class for Session Api Schema\n \"\"\"\n type_ = 'session'\n self_view = 'v1.session_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n @validates_schema(pass_original=True)\n def validate_date(self, data, original_data):\n if 'id' in original_data['data']:\n session = Session.query.filter_by(id=original_data['data']['id']).one()\n\n if 'starts_at' not in data:\n data['starts_at'] = session.starts_at\n\n if 'ends_at' not in data:\n data['ends_at'] = session.ends_at\n\n if 'event' not in data:\n data['event'] = session.event_id\n\n if data['starts_at'] and data['ends_at']:\n if data['starts_at'] >= data['ends_at']:\n raise UnprocessableEntity(\n {'pointer': '/data/attributes/ends-at'}, \"ends-at should be after starts-at\")\n\n if 'state' in data:\n if data['state'] is not 'draft' or not 'pending':\n if not has_access('is_coorganizer', event_id=data['event']):\n return ForbiddenException({'source': ''}, 'Co-organizer access is required.')\n\n if 'track' in data:\n if not has_access('is_coorganizer', event_id=data['event']):\n return ForbiddenException({'source': ''}, 'Co-organizer access is required.')\n\n if 'microlocation' in data:\n if not has_access('is_coorganizer', event_id=data['event']):\n return ForbiddenException({'source': ''}, 'Co-organizer access is required.')\n\n id = 
fields.Str(dump_only=True)\n title = fields.Str(required=True)\n subtitle = fields.Str(allow_none=True)\n level = fields.Int(allow_none=True)\n short_abstract = fields.Str(allow_none=True)\n long_abstract = fields.Str(allow_none=True)\n comments = fields.Str(allow_none=True)\n starts_at = fields.DateTime(allow_none=True)\n ends_at = fields.DateTime(allow_none=True)\n language = fields.Str(allow_none=True)\n slides_url = fields.Url(allow_none=True)\n video_url = fields.Url(allow_none=True)\n audio_url = fields.Url(allow_none=True)\n signup_url = fields.Url(allow_none=True)\n state = fields.Str(validate=validate.OneOf(choices=[\"pending\", \"accepted\", \"confirmed\", \"rejected\", \"draft\"]),\n allow_none=True, default='draft')\n created_at = fields.DateTime(dump_only=True)\n deleted_at = fields.DateTime(dump_only=True)\n submitted_at = fields.DateTime(allow_none=True)\n is_mail_sent = fields.Boolean()\n last_modified_at = fields.DateTime(dump_only=True)\n send_email = fields.Boolean(load_only=True, allow_none=True)\n average_rating = fields.Float(dump_only=True)\n microlocation = Relationship(attribute='microlocation',\n self_view='v1.session_microlocation',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.microlocation_detail',\n related_view_kwargs={'session_id': '<id>'},\n schema='MicrolocationSchema',\n type_='microlocation')\n track = Relationship(attribute='track',\n self_view='v1.session_track',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.track_detail',\n related_view_kwargs={'session_id': '<id>'},\n schema='TrackSchema',\n type_='track')\n session_type = Relationship(attribute='session_type',\n self_view='v1.session_session_type',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.session_type_detail',\n related_view_kwargs={'session_id': '<id>'},\n schema='SessionTypeSchema',\n type_='session-type')\n event = Relationship(attribute='event',\n self_view='v1.session_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'session_id': '<id>'},\n schema='EventSchemaPublic',\n type_='event')\n feedbacks = Relationship(attribute='feedbacks',\n self_view='v1.session_feedbacks',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.feedback_list',\n related_view_kwargs={'session_id': '<id>'},\n schema='FeedbackSchema',\n many=True,\n type_='feedback')\n speakers = Relationship(attribute='speakers',\n many=True,\n self_view='v1.session_speaker',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.speaker_list',\n related_view_kwargs={'session_id': '<id>'},\n schema='SpeakerSchema',\n type_='speaker')\n creator = Relationship(attribute='user',\n self_view='v1.session_user',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.user_detail',\n related_view_kwargs={'session_id': '<id>'},\n schema='UserSchemaPublic',\n type_='user')\n", "path": "app/api/schema/sessions.py"}], "after_files": [{"content": "from flask_rest_jsonapi.exceptions import ObjectNotFound\nfrom marshmallow import validates_schema, validate\nfrom marshmallow_jsonapi import fields\nfrom marshmallow_jsonapi.flask import Relationship\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom app.api.helpers.exceptions import UnprocessableEntity, ForbiddenException\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.utilities import dasherize\nfrom app.api.schema.base import SoftDeletionSchema\nfrom app.models.session import Session\nfrom utils.common import use_defaults\n\n\n@use_defaults()\nclass SessionSchema(SoftDeletionSchema):\n 
\"\"\"\n Api schema for Session Model\n \"\"\"\n\n class Meta:\n \"\"\"\n Meta class for Session Api Schema\n \"\"\"\n type_ = 'session'\n self_view = 'v1.session_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n @validates_schema(pass_original=True)\n def validate_date(self, data, original_data):\n if 'id' in original_data['data']:\n try:\n session = Session.query.filter_by(id=original_data['data']['id']).one()\n except NoResultFound:\n raise ObjectNotFound({'parameter': '{id}'}, \"Session: not found\")\n\n if 'starts_at' not in data:\n data['starts_at'] = session.starts_at\n\n if 'ends_at' not in data:\n data['ends_at'] = session.ends_at\n\n if 'event' not in data:\n data['event'] = session.event_id\n\n if data['starts_at'] and data['ends_at']:\n if data['starts_at'] >= data['ends_at']:\n raise UnprocessableEntity(\n {'pointer': '/data/attributes/ends-at'}, \"ends-at should be after starts-at\")\n\n if 'state' in data:\n if data['state'] is not 'draft' or not 'pending':\n if not has_access('is_coorganizer', event_id=data['event']):\n return ForbiddenException({'source': ''}, 'Co-organizer access is required.')\n\n if 'track' in data:\n if not has_access('is_coorganizer', event_id=data['event']):\n return ForbiddenException({'source': ''}, 'Co-organizer access is required.')\n\n if 'microlocation' in data:\n if not has_access('is_coorganizer', event_id=data['event']):\n return ForbiddenException({'source': ''}, 'Co-organizer access is required.')\n\n id = fields.Str(dump_only=True)\n title = fields.Str(required=True)\n subtitle = fields.Str(allow_none=True)\n level = fields.Int(allow_none=True)\n short_abstract = fields.Str(allow_none=True)\n long_abstract = fields.Str(allow_none=True)\n comments = fields.Str(allow_none=True)\n starts_at = fields.DateTime(allow_none=True)\n ends_at = fields.DateTime(allow_none=True)\n language = fields.Str(allow_none=True)\n slides_url = fields.Url(allow_none=True)\n video_url = fields.Url(allow_none=True)\n audio_url = fields.Url(allow_none=True)\n signup_url = fields.Url(allow_none=True)\n state = fields.Str(validate=validate.OneOf(choices=[\"pending\", \"accepted\", \"confirmed\", \"rejected\", \"draft\"]),\n allow_none=True, default='draft')\n created_at = fields.DateTime(dump_only=True)\n deleted_at = fields.DateTime(dump_only=True)\n submitted_at = fields.DateTime(allow_none=True)\n is_mail_sent = fields.Boolean()\n last_modified_at = fields.DateTime(dump_only=True)\n send_email = fields.Boolean(load_only=True, allow_none=True)\n average_rating = fields.Float(dump_only=True)\n microlocation = Relationship(attribute='microlocation',\n self_view='v1.session_microlocation',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.microlocation_detail',\n related_view_kwargs={'session_id': '<id>'},\n schema='MicrolocationSchema',\n type_='microlocation')\n track = Relationship(attribute='track',\n self_view='v1.session_track',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.track_detail',\n related_view_kwargs={'session_id': '<id>'},\n schema='TrackSchema',\n type_='track')\n session_type = Relationship(attribute='session_type',\n self_view='v1.session_session_type',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.session_type_detail',\n related_view_kwargs={'session_id': '<id>'},\n schema='SessionTypeSchema',\n type_='session-type')\n event = Relationship(attribute='event',\n self_view='v1.session_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'session_id': '<id>'},\n 
schema='EventSchemaPublic',\n type_='event')\n feedbacks = Relationship(attribute='feedbacks',\n self_view='v1.session_feedbacks',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.feedback_list',\n related_view_kwargs={'session_id': '<id>'},\n schema='FeedbackSchema',\n many=True,\n type_='feedback')\n speakers = Relationship(attribute='speakers',\n many=True,\n self_view='v1.session_speaker',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.speaker_list',\n related_view_kwargs={'session_id': '<id>'},\n schema='SpeakerSchema',\n type_='speaker')\n creator = Relationship(attribute='user',\n self_view='v1.session_user',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.user_detail',\n related_view_kwargs={'session_id': '<id>'},\n schema='UserSchemaPublic',\n type_='user')\n", "path": "app/api/schema/sessions.py"}]}
| 1,852 | 251 |
gh_patches_debug_17825
|
rasdani/github-patches
|
git_diff
|
zestedesavoir__zds-site-6443
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Previous and next opinion posts: only the ones picked by the Staff
[Discussion on the forum](https://zestedesavoir.com/forums/sujet/14090/bug-dans-les-billets-recents/)
> Opinion posts are not validated by the Staff before publication; any member can publish their post whenever they want. However, the posts displayed on the home page are the ones selected by the Staff, which has not been the case for "L’hygiènisme une bombe à retardement" so far.
>
> In my opinion, it would be desirable and consistent for the Previous and Next links at the bottom of opinion posts to only offer posts selected by the Staff.
Something like the following needs to be added in the file `zds/tutorialv2/views/published.py`:
```py
if self.current_content_type == 'OPINION':
    queryset_pagination = queryset_pagination.filter(content__sha_picked=F('sha_public'))
```
around these lines:
https://github.com/zestedesavoir/zds-site/blob/c4d3dd39c6a780054113d5185fb83bc82f6753be/zds/tutorialv2/views/published.py#L108-L117
`from django.db.models import F` must also be added at the very top of the file.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zds/tutorialv2/views/display.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from django.http import Http404
5 from django.utils.translation import gettext_lazy as _
6
7 from zds.featured.mixins import FeatureableMixin
8 from zds.tutorialv2 import signals
9 from zds.notification.models import ContentReactionAnswerSubscription
10 from zds.tutorialv2.forms import (
11 RevokeValidationForm,
12 UnpublicationForm,
13 WarnTypoForm,
14 PickOpinionForm,
15 UnpickOpinionForm,
16 PromoteOpinionToArticleForm,
17 SearchSuggestionForm,
18 EditContentTagsForm,
19 )
20 from zds.tutorialv2.mixins import SingleOnlineContentDetailViewMixin
21
22 from zds.tutorialv2.models.database import (
23 PublishableContent,
24 PublishedContent,
25 ContentReaction,
26 ContentSuggestion,
27 ContentContribution,
28 )
29 from zds.tutorialv2.utils import search_container_or_404, last_participation_is_old, mark_read
30 from zds.tutorialv2.views.containers_extracts import DisplayContainer
31 from zds.tutorialv2.views.contents import DisplayContent
32 from zds.tutorialv2.views.goals import EditGoalsForm
33 from zds.utils.models import CommentVote
34 from zds.utils.paginator import make_pagination
35
36 logger = logging.getLogger(__name__)
37
38
39 class DisplayOnlineContent(FeatureableMixin, SingleOnlineContentDetailViewMixin):
40 """Base class that can show any online content"""
41
42 model = PublishedContent
43 template_name = "tutorialv2/view/content_online.html"
44
45 current_content_type = ""
46 verbose_type_name = _("contenu")
47 verbose_type_name_plural = _("contenus")
48
49 def featured_request_allowed(self):
50 """Featured request is not allowed on obsolete content and opinions"""
51 return self.object.type != "OPINION" and not self.object.is_obsolete
52
53 def get_context_data(self, **kwargs):
54 """Show the given tutorial if exists."""
55 context = super().get_context_data(**kwargs)
56
57 if context["is_staff"]:
58 if self.current_content_type == "OPINION":
59 context["alerts"] = self.object.alerts_on_this_content.all()
60 context["formRevokeValidation"] = RevokeValidationForm(
61 self.versioned_object, initial={"version": self.versioned_object.sha_public}
62 )
63 context["formUnpublication"] = UnpublicationForm(
64 self.versioned_object, initial={"version": self.versioned_object.sha_public}
65 )
66
67 context["formWarnTypo"] = WarnTypoForm(self.versioned_object, self.versioned_object)
68
69 reactions = list(
70 ContentReaction.objects.select_related("author")
71 .select_related("author__profile")
72 .select_related("hat")
73 .select_related("editor")
74 .prefetch_related("alerts_on_this_comment")
75 .prefetch_related("alerts_on_this_comment__author")
76 .filter(related_content__pk=self.object.pk)
77 .order_by("pubdate")
78 )
79
80 # pagination of articles and opinions
81 context["previous_content"] = None
82 context["next_content"] = None
83
84 if self.current_content_type in ("ARTICLE", "OPINION"):
85 queryset_pagination = PublishedContent.objects.filter(
86 content_type=self.current_content_type, must_redirect=False
87 )
88
89 context["previous_content"] = (
90 queryset_pagination.filter(publication_date__lt=self.public_content_object.publication_date)
91 .order_by("-publication_date")
92 .first()
93 )
94 context["next_content"] = (
95 queryset_pagination.filter(publication_date__gt=self.public_content_object.publication_date)
96 .order_by("publication_date")
97 .first()
98 )
99
100 if self.versioned_object.type == "OPINION":
101 context["formPickOpinion"] = PickOpinionForm(
102 self.versioned_object, initial={"version": self.versioned_object.sha_public}
103 )
104 context["formUnpickOpinion"] = UnpickOpinionForm(
105 self.versioned_object, initial={"version": self.versioned_object.sha_public}
106 )
107 context["formConvertOpinion"] = PromoteOpinionToArticleForm(
108 self.versioned_object, initial={"version": self.versioned_object.sha_public}
109 )
110 else:
111 context["content_suggestions"] = ContentSuggestion.objects.filter(publication=self.object)
112 excluded_for_search = [str(x.suggestion.pk) for x in context["content_suggestions"]]
113 excluded_for_search.append(str(self.object.pk))
114 context["formAddSuggestion"] = SearchSuggestionForm(
115 content=self.object, initial={"excluded_pk": ",".join(excluded_for_search)}
116 )
117
118 context["form_edit_tags"] = EditContentTagsForm(self.versioned_object, self.object)
119 context["form_edit_goals"] = EditGoalsForm(self.object)
120
121 # pagination of comments
122 make_pagination(
123 context,
124 self.request,
125 reactions,
126 settings.ZDS_APP["content"]["notes_per_page"],
127 context_list_name="reactions",
128 with_previous_item=True,
129 )
130
131 # is JS activated ?
132 context["is_js"] = True
133 if not self.object.js_support:
134 context["is_js"] = False
135
136 # optimize requests:
137 votes = CommentVote.objects.filter(user_id=self.request.user.id, comment__in=reactions).all()
138 context["user_like"] = [vote.comment_id for vote in votes if vote.positive]
139 context["user_dislike"] = [vote.comment_id for vote in votes if not vote.positive]
140
141 if self.request.user.has_perm("tutorialv2.change_contentreaction"):
142 context["user_can_modify"] = [reaction.pk for reaction in reactions]
143 else:
144 context["user_can_modify"] = [reaction.pk for reaction in reactions if reaction.author == self.request.user]
145
146 context["is_antispam"] = self.object.antispam()
147 context["pm_link"] = self.object.get_absolute_contact_url(_("À propos de"))
148 context["subscriber_count"] = ContentReactionAnswerSubscription.objects.get_subscriptions(self.object).count()
149 # We need reading time expressed in minutes
150 try:
151 char_count = self.object.public_version.char_count
152 if char_count:
153 context["reading_time"] = int(
154 self.versioned_object.get_tree_level()
155 * char_count
156 / settings.ZDS_APP["content"]["characters_per_minute"]
157 )
158 else:
159 logger.warning("For unknown reason content with id %s has no char count", self.object.pk)
160 context["reading_time"] = 0
161 except ZeroDivisionError as e:
162 logger.warning("could not compute reading time: setting characters_per_minute is set to zero (error=%s)", e)
163
164 if self.request.user.is_authenticated:
165 if len(context["reactions"]) > 0:
166 signals.content_read.send(
167 sender=context["reactions"][0].__class__, instances=context["reactions"], user=self.request.user
168 )
169 signals.content_read.send(
170 sender=self.object.__class__, instance=self.object, user=self.request.user, target=PublishableContent
171 )
172 if last_participation_is_old(self.object, self.request.user):
173 mark_read(self.object, self.request.user)
174
175 context["contributions"] = ContentContribution.objects.filter(content=self.object).order_by(
176 "contribution_role__position"
177 )
178 context["content_suggestions_random"] = ContentSuggestion.objects.filter(publication=self.object).order_by("?")[
179 : settings.ZDS_APP["content"]["suggestions_per_page"]
180 ]
181
182 return context
183
184
185 class DisplayOnlineArticle(DisplayOnlineContent):
186 """Displays the list of published articles"""
187
188 current_content_type = "ARTICLE"
189 verbose_type_name = _("article")
190 verbose_type_name_plural = _("articles")
191
192
193 class DisplayOnlineTutorial(DisplayOnlineContent):
194 """Displays the list of published tutorials"""
195
196 current_content_type = "TUTORIAL"
197 verbose_type_name = _("tutoriel")
198 verbose_type_name_plural = _("tutoriels")
199
200
201 class DisplayOnlineOpinion(DisplayOnlineContent):
202 """Displays the list of published articles"""
203
204 current_content_type = "OPINION"
205 verbose_type_name = _("billet")
206 verbose_type_name_plural = _("billets")
207
208
209 class DisplayOnlineContainer(SingleOnlineContentDetailViewMixin):
210 """Base class that can show any content in any state"""
211
212 template_name = "tutorialv2/view/container_online.html"
213 current_content_type = "TUTORIAL" # obviously, an article cannot have container !
214
215 def get_context_data(self, **kwargs):
216 context = super().get_context_data(**kwargs)
217 container = search_container_or_404(self.versioned_object, self.kwargs)
218
219 context["container"] = container
220 context["pm_link"] = self.object.get_absolute_contact_url(_("À propos de"))
221
222 context["formWarnTypo"] = WarnTypoForm(
223 self.versioned_object, container, initial={"target": container.get_path(relative=True)}
224 )
225
226 # pagination: search for `previous` and `next`, if available
227 if not self.versioned_object.has_extracts():
228 chapters = self.versioned_object.get_list_of_chapters()
229 try:
230 position = chapters.index(container)
231 except ValueError:
232 pass # this is not (yet?) a chapter
233 else:
234 context["has_pagination"] = True
235 context["previous"] = None
236 context["next"] = None
237 if position == 0:
238 context["previous"] = container.parent
239 if position > 0:
240 previous_chapter = chapters[position - 1]
241 if previous_chapter.parent == container.parent:
242 context["previous"] = previous_chapter
243 else:
244 context["previous"] = container.parent
245 if position < len(chapters) - 1:
246 next_chapter = chapters[position + 1]
247 if next_chapter.parent == container.parent:
248 context["next"] = next_chapter
249 else:
250 context["next"] = next_chapter.parent
251
252 return context
253
254
255 class DisplayBetaContent(DisplayContent):
256 """View to get the beta version of a content"""
257
258 sha = None
259
260 def get_object(self, queryset=None):
261 """rewritten to ensure that the version is set to beta, raise Http404 if there is no such version"""
262 obj = super().get_object(queryset)
263
264 if not obj.sha_beta:
265 raise Http404("Aucune bêta n'existe pour ce contenu.")
266 else:
267 self.sha = obj.sha_beta
268
269 # make the slug always right in URLs resolution:
270 if "slug" in self.kwargs:
271 self.kwargs["slug"] = obj.slug
272
273 return obj
274
275 def get_context_data(self, **kwargs):
276 context = super().get_context_data(**kwargs)
277 context["pm_link"] = self.object.get_absolute_contact_url()
278 return context
279
280
281 class DisplayBetaContainer(DisplayContainer):
282 """View to get the beta version of a container"""
283
284 sha = None
285
286 def get_object(self, queryset=None):
287 """rewritten to ensure that the version is set to beta, raise Http404 if there is no such version"""
288 obj = super().get_object(queryset)
289
290 if not obj.sha_beta:
291 raise Http404("Aucune bêta n'existe pour ce contenu.")
292 else:
293 self.sha = obj.sha_beta
294
295 # make the slug always right in URLs resolution:
296 if "slug" in self.kwargs:
297 self.kwargs["slug"] = obj.slug
298
299 return obj
300
301 def get_context_data(self, **kwargs):
302 context = super().get_context_data(**kwargs)
303 context["pm_link"] = self.object.get_absolute_contact_url()
304 return context
305
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/zds/tutorialv2/views/display.py b/zds/tutorialv2/views/display.py
--- a/zds/tutorialv2/views/display.py
+++ b/zds/tutorialv2/views/display.py
@@ -1,5 +1,5 @@
import logging
-
+from django.db.models import F
from django.conf import settings
from django.http import Http404
from django.utils.translation import gettext_lazy as _
@@ -86,6 +86,8 @@
content_type=self.current_content_type, must_redirect=False
)
+ if self.current_content_type == "OPINION":
+ queryset_pagination = queryset_pagination.filter(content__sha_picked=F("sha_public"))
context["previous_content"] = (
queryset_pagination.filter(publication_date__lt=self.public_content_object.publication_date)
.order_by("-publication_date")
|
{"golden_diff": "diff --git a/zds/tutorialv2/views/display.py b/zds/tutorialv2/views/display.py\n--- a/zds/tutorialv2/views/display.py\n+++ b/zds/tutorialv2/views/display.py\n@@ -1,5 +1,5 @@\n import logging\n-\n+from django.db.models import F\n from django.conf import settings\n from django.http import Http404\n from django.utils.translation import gettext_lazy as _\n@@ -86,6 +86,8 @@\n content_type=self.current_content_type, must_redirect=False\n )\n \n+ if self.current_content_type == \"OPINION\":\n+ queryset_pagination = queryset_pagination.filter(content__sha_picked=F(\"sha_public\"))\n context[\"previous_content\"] = (\n queryset_pagination.filter(publication_date__lt=self.public_content_object.publication_date)\n .order_by(\"-publication_date\")\n", "issue": "Billets pr\u00e9c\u00e9dents et suivants : uniquement ceux choisis par le Staff\n[Discussion sur le forum](https://zestedesavoir.com/forums/sujet/14090/bug-dans-les-billets-recents/)\r\n\r\n> Les billets \u00e9crits ne sont pas valid\u00e9s avant publication par le Staff, n\u2019importe quel membre peut publier son billet quand il le souhaite. Par contre, les billets affich\u00e9s sur la page d\u2019accueil sont ceux s\u00e9lectionn\u00e9s par le Staff, ce qui n\u2019a pas \u00e9t\u00e9 le cas de \"L\u2019hygi\u00e8nisme une bombe \u00e0 retardement\" pour l\u2019instant.\r\n> \r\n> Il serait \u00e0 mon avis souhaitable et coh\u00e9rent que les liens Pr\u00e9c\u00e9dent et Suivant en bas des billets ne proposent que les billets s\u00e9lectionn\u00e9s par le Staff.\r\n\r\nIl faut ajouter dans le fichier `zds/tutorialv2/views/published.py` quelque chose comme : \r\n\r\n```py\r\nif self.current_content_type == 'OPINION':\r\n queryset_pagination = queryset_pagination.filter(content__sha_picked=F('sha_public'))\r\n```\r\n\r\nau niveau des ces lignes : \r\n\r\nhttps://github.com/zestedesavoir/zds-site/blob/c4d3dd39c6a780054113d5185fb83bc82f6753be/zds/tutorialv2/views/published.py#L108-L117\r\n\r\nIl faut ajouter au tout d\u00e9but du fichier `from django.db.models import F`.\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.http import Http404\nfrom django.utils.translation import gettext_lazy as _\n\nfrom zds.featured.mixins import FeatureableMixin\nfrom zds.tutorialv2 import signals\nfrom zds.notification.models import ContentReactionAnswerSubscription\nfrom zds.tutorialv2.forms import (\n RevokeValidationForm,\n UnpublicationForm,\n WarnTypoForm,\n PickOpinionForm,\n UnpickOpinionForm,\n PromoteOpinionToArticleForm,\n SearchSuggestionForm,\n EditContentTagsForm,\n)\nfrom zds.tutorialv2.mixins import SingleOnlineContentDetailViewMixin\n\nfrom zds.tutorialv2.models.database import (\n PublishableContent,\n PublishedContent,\n ContentReaction,\n ContentSuggestion,\n ContentContribution,\n)\nfrom zds.tutorialv2.utils import search_container_or_404, last_participation_is_old, mark_read\nfrom zds.tutorialv2.views.containers_extracts import DisplayContainer\nfrom zds.tutorialv2.views.contents import DisplayContent\nfrom zds.tutorialv2.views.goals import EditGoalsForm\nfrom zds.utils.models import CommentVote\nfrom zds.utils.paginator import make_pagination\n\nlogger = logging.getLogger(__name__)\n\n\nclass DisplayOnlineContent(FeatureableMixin, SingleOnlineContentDetailViewMixin):\n \"\"\"Base class that can show any online content\"\"\"\n\n model = PublishedContent\n template_name = \"tutorialv2/view/content_online.html\"\n\n current_content_type = \"\"\n verbose_type_name = 
_(\"contenu\")\n verbose_type_name_plural = _(\"contenus\")\n\n def featured_request_allowed(self):\n \"\"\"Featured request is not allowed on obsolete content and opinions\"\"\"\n return self.object.type != \"OPINION\" and not self.object.is_obsolete\n\n def get_context_data(self, **kwargs):\n \"\"\"Show the given tutorial if exists.\"\"\"\n context = super().get_context_data(**kwargs)\n\n if context[\"is_staff\"]:\n if self.current_content_type == \"OPINION\":\n context[\"alerts\"] = self.object.alerts_on_this_content.all()\n context[\"formRevokeValidation\"] = RevokeValidationForm(\n self.versioned_object, initial={\"version\": self.versioned_object.sha_public}\n )\n context[\"formUnpublication\"] = UnpublicationForm(\n self.versioned_object, initial={\"version\": self.versioned_object.sha_public}\n )\n\n context[\"formWarnTypo\"] = WarnTypoForm(self.versioned_object, self.versioned_object)\n\n reactions = list(\n ContentReaction.objects.select_related(\"author\")\n .select_related(\"author__profile\")\n .select_related(\"hat\")\n .select_related(\"editor\")\n .prefetch_related(\"alerts_on_this_comment\")\n .prefetch_related(\"alerts_on_this_comment__author\")\n .filter(related_content__pk=self.object.pk)\n .order_by(\"pubdate\")\n )\n\n # pagination of articles and opinions\n context[\"previous_content\"] = None\n context[\"next_content\"] = None\n\n if self.current_content_type in (\"ARTICLE\", \"OPINION\"):\n queryset_pagination = PublishedContent.objects.filter(\n content_type=self.current_content_type, must_redirect=False\n )\n\n context[\"previous_content\"] = (\n queryset_pagination.filter(publication_date__lt=self.public_content_object.publication_date)\n .order_by(\"-publication_date\")\n .first()\n )\n context[\"next_content\"] = (\n queryset_pagination.filter(publication_date__gt=self.public_content_object.publication_date)\n .order_by(\"publication_date\")\n .first()\n )\n\n if self.versioned_object.type == \"OPINION\":\n context[\"formPickOpinion\"] = PickOpinionForm(\n self.versioned_object, initial={\"version\": self.versioned_object.sha_public}\n )\n context[\"formUnpickOpinion\"] = UnpickOpinionForm(\n self.versioned_object, initial={\"version\": self.versioned_object.sha_public}\n )\n context[\"formConvertOpinion\"] = PromoteOpinionToArticleForm(\n self.versioned_object, initial={\"version\": self.versioned_object.sha_public}\n )\n else:\n context[\"content_suggestions\"] = ContentSuggestion.objects.filter(publication=self.object)\n excluded_for_search = [str(x.suggestion.pk) for x in context[\"content_suggestions\"]]\n excluded_for_search.append(str(self.object.pk))\n context[\"formAddSuggestion\"] = SearchSuggestionForm(\n content=self.object, initial={\"excluded_pk\": \",\".join(excluded_for_search)}\n )\n\n context[\"form_edit_tags\"] = EditContentTagsForm(self.versioned_object, self.object)\n context[\"form_edit_goals\"] = EditGoalsForm(self.object)\n\n # pagination of comments\n make_pagination(\n context,\n self.request,\n reactions,\n settings.ZDS_APP[\"content\"][\"notes_per_page\"],\n context_list_name=\"reactions\",\n with_previous_item=True,\n )\n\n # is JS activated ?\n context[\"is_js\"] = True\n if not self.object.js_support:\n context[\"is_js\"] = False\n\n # optimize requests:\n votes = CommentVote.objects.filter(user_id=self.request.user.id, comment__in=reactions).all()\n context[\"user_like\"] = [vote.comment_id for vote in votes if vote.positive]\n context[\"user_dislike\"] = [vote.comment_id for vote in votes if not vote.positive]\n\n if 
self.request.user.has_perm(\"tutorialv2.change_contentreaction\"):\n context[\"user_can_modify\"] = [reaction.pk for reaction in reactions]\n else:\n context[\"user_can_modify\"] = [reaction.pk for reaction in reactions if reaction.author == self.request.user]\n\n context[\"is_antispam\"] = self.object.antispam()\n context[\"pm_link\"] = self.object.get_absolute_contact_url(_(\"\u00c0 propos de\"))\n context[\"subscriber_count\"] = ContentReactionAnswerSubscription.objects.get_subscriptions(self.object).count()\n # We need reading time expressed in minutes\n try:\n char_count = self.object.public_version.char_count\n if char_count:\n context[\"reading_time\"] = int(\n self.versioned_object.get_tree_level()\n * char_count\n / settings.ZDS_APP[\"content\"][\"characters_per_minute\"]\n )\n else:\n logger.warning(\"For unknown reason content with id %s has no char count\", self.object.pk)\n context[\"reading_time\"] = 0\n except ZeroDivisionError as e:\n logger.warning(\"could not compute reading time: setting characters_per_minute is set to zero (error=%s)\", e)\n\n if self.request.user.is_authenticated:\n if len(context[\"reactions\"]) > 0:\n signals.content_read.send(\n sender=context[\"reactions\"][0].__class__, instances=context[\"reactions\"], user=self.request.user\n )\n signals.content_read.send(\n sender=self.object.__class__, instance=self.object, user=self.request.user, target=PublishableContent\n )\n if last_participation_is_old(self.object, self.request.user):\n mark_read(self.object, self.request.user)\n\n context[\"contributions\"] = ContentContribution.objects.filter(content=self.object).order_by(\n \"contribution_role__position\"\n )\n context[\"content_suggestions_random\"] = ContentSuggestion.objects.filter(publication=self.object).order_by(\"?\")[\n : settings.ZDS_APP[\"content\"][\"suggestions_per_page\"]\n ]\n\n return context\n\n\nclass DisplayOnlineArticle(DisplayOnlineContent):\n \"\"\"Displays the list of published articles\"\"\"\n\n current_content_type = \"ARTICLE\"\n verbose_type_name = _(\"article\")\n verbose_type_name_plural = _(\"articles\")\n\n\nclass DisplayOnlineTutorial(DisplayOnlineContent):\n \"\"\"Displays the list of published tutorials\"\"\"\n\n current_content_type = \"TUTORIAL\"\n verbose_type_name = _(\"tutoriel\")\n verbose_type_name_plural = _(\"tutoriels\")\n\n\nclass DisplayOnlineOpinion(DisplayOnlineContent):\n \"\"\"Displays the list of published articles\"\"\"\n\n current_content_type = \"OPINION\"\n verbose_type_name = _(\"billet\")\n verbose_type_name_plural = _(\"billets\")\n\n\nclass DisplayOnlineContainer(SingleOnlineContentDetailViewMixin):\n \"\"\"Base class that can show any content in any state\"\"\"\n\n template_name = \"tutorialv2/view/container_online.html\"\n current_content_type = \"TUTORIAL\" # obviously, an article cannot have container !\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n container = search_container_or_404(self.versioned_object, self.kwargs)\n\n context[\"container\"] = container\n context[\"pm_link\"] = self.object.get_absolute_contact_url(_(\"\u00c0 propos de\"))\n\n context[\"formWarnTypo\"] = WarnTypoForm(\n self.versioned_object, container, initial={\"target\": container.get_path(relative=True)}\n )\n\n # pagination: search for `previous` and `next`, if available\n if not self.versioned_object.has_extracts():\n chapters = self.versioned_object.get_list_of_chapters()\n try:\n position = chapters.index(container)\n except ValueError:\n pass # this is not (yet?) 
a chapter\n else:\n context[\"has_pagination\"] = True\n context[\"previous\"] = None\n context[\"next\"] = None\n if position == 0:\n context[\"previous\"] = container.parent\n if position > 0:\n previous_chapter = chapters[position - 1]\n if previous_chapter.parent == container.parent:\n context[\"previous\"] = previous_chapter\n else:\n context[\"previous\"] = container.parent\n if position < len(chapters) - 1:\n next_chapter = chapters[position + 1]\n if next_chapter.parent == container.parent:\n context[\"next\"] = next_chapter\n else:\n context[\"next\"] = next_chapter.parent\n\n return context\n\n\nclass DisplayBetaContent(DisplayContent):\n \"\"\"View to get the beta version of a content\"\"\"\n\n sha = None\n\n def get_object(self, queryset=None):\n \"\"\"rewritten to ensure that the version is set to beta, raise Http404 if there is no such version\"\"\"\n obj = super().get_object(queryset)\n\n if not obj.sha_beta:\n raise Http404(\"Aucune b\u00eata n'existe pour ce contenu.\")\n else:\n self.sha = obj.sha_beta\n\n # make the slug always right in URLs resolution:\n if \"slug\" in self.kwargs:\n self.kwargs[\"slug\"] = obj.slug\n\n return obj\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"pm_link\"] = self.object.get_absolute_contact_url()\n return context\n\n\nclass DisplayBetaContainer(DisplayContainer):\n \"\"\"View to get the beta version of a container\"\"\"\n\n sha = None\n\n def get_object(self, queryset=None):\n \"\"\"rewritten to ensure that the version is set to beta, raise Http404 if there is no such version\"\"\"\n obj = super().get_object(queryset)\n\n if not obj.sha_beta:\n raise Http404(\"Aucune b\u00eata n'existe pour ce contenu.\")\n else:\n self.sha = obj.sha_beta\n\n # make the slug always right in URLs resolution:\n if \"slug\" in self.kwargs:\n self.kwargs[\"slug\"] = obj.slug\n\n return obj\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"pm_link\"] = self.object.get_absolute_contact_url()\n return context\n", "path": "zds/tutorialv2/views/display.py"}], "after_files": [{"content": "import logging\nfrom django.db.models import F\nfrom django.conf import settings\nfrom django.http import Http404\nfrom django.utils.translation import gettext_lazy as _\n\nfrom zds.featured.mixins import FeatureableMixin\nfrom zds.tutorialv2 import signals\nfrom zds.notification.models import ContentReactionAnswerSubscription\nfrom zds.tutorialv2.forms import (\n RevokeValidationForm,\n UnpublicationForm,\n WarnTypoForm,\n PickOpinionForm,\n UnpickOpinionForm,\n PromoteOpinionToArticleForm,\n SearchSuggestionForm,\n EditContentTagsForm,\n)\nfrom zds.tutorialv2.mixins import SingleOnlineContentDetailViewMixin\n\nfrom zds.tutorialv2.models.database import (\n PublishableContent,\n PublishedContent,\n ContentReaction,\n ContentSuggestion,\n ContentContribution,\n)\nfrom zds.tutorialv2.utils import search_container_or_404, last_participation_is_old, mark_read\nfrom zds.tutorialv2.views.containers_extracts import DisplayContainer\nfrom zds.tutorialv2.views.contents import DisplayContent\nfrom zds.tutorialv2.views.goals import EditGoalsForm\nfrom zds.utils.models import CommentVote\nfrom zds.utils.paginator import make_pagination\n\nlogger = logging.getLogger(__name__)\n\n\nclass DisplayOnlineContent(FeatureableMixin, SingleOnlineContentDetailViewMixin):\n \"\"\"Base class that can show any online content\"\"\"\n\n model = PublishedContent\n template_name = 
\"tutorialv2/view/content_online.html\"\n\n current_content_type = \"\"\n verbose_type_name = _(\"contenu\")\n verbose_type_name_plural = _(\"contenus\")\n\n def featured_request_allowed(self):\n \"\"\"Featured request is not allowed on obsolete content and opinions\"\"\"\n return self.object.type != \"OPINION\" and not self.object.is_obsolete\n\n def get_context_data(self, **kwargs):\n \"\"\"Show the given tutorial if exists.\"\"\"\n context = super().get_context_data(**kwargs)\n\n if context[\"is_staff\"]:\n if self.current_content_type == \"OPINION\":\n context[\"alerts\"] = self.object.alerts_on_this_content.all()\n context[\"formRevokeValidation\"] = RevokeValidationForm(\n self.versioned_object, initial={\"version\": self.versioned_object.sha_public}\n )\n context[\"formUnpublication\"] = UnpublicationForm(\n self.versioned_object, initial={\"version\": self.versioned_object.sha_public}\n )\n\n context[\"formWarnTypo\"] = WarnTypoForm(self.versioned_object, self.versioned_object)\n\n reactions = list(\n ContentReaction.objects.select_related(\"author\")\n .select_related(\"author__profile\")\n .select_related(\"hat\")\n .select_related(\"editor\")\n .prefetch_related(\"alerts_on_this_comment\")\n .prefetch_related(\"alerts_on_this_comment__author\")\n .filter(related_content__pk=self.object.pk)\n .order_by(\"pubdate\")\n )\n\n # pagination of articles and opinions\n context[\"previous_content\"] = None\n context[\"next_content\"] = None\n\n if self.current_content_type in (\"ARTICLE\", \"OPINION\"):\n queryset_pagination = PublishedContent.objects.filter(\n content_type=self.current_content_type, must_redirect=False\n )\n\n if self.current_content_type == \"OPINION\":\n queryset_pagination = queryset_pagination.filter(content__sha_picked=F(\"sha_public\"))\n context[\"previous_content\"] = (\n queryset_pagination.filter(publication_date__lt=self.public_content_object.publication_date)\n .order_by(\"-publication_date\")\n .first()\n )\n context[\"next_content\"] = (\n queryset_pagination.filter(publication_date__gt=self.public_content_object.publication_date)\n .order_by(\"publication_date\")\n .first()\n )\n\n if self.versioned_object.type == \"OPINION\":\n context[\"formPickOpinion\"] = PickOpinionForm(\n self.versioned_object, initial={\"version\": self.versioned_object.sha_public}\n )\n context[\"formUnpickOpinion\"] = UnpickOpinionForm(\n self.versioned_object, initial={\"version\": self.versioned_object.sha_public}\n )\n context[\"formConvertOpinion\"] = PromoteOpinionToArticleForm(\n self.versioned_object, initial={\"version\": self.versioned_object.sha_public}\n )\n else:\n context[\"content_suggestions\"] = ContentSuggestion.objects.filter(publication=self.object)\n excluded_for_search = [str(x.suggestion.pk) for x in context[\"content_suggestions\"]]\n excluded_for_search.append(str(self.object.pk))\n context[\"formAddSuggestion\"] = SearchSuggestionForm(\n content=self.object, initial={\"excluded_pk\": \",\".join(excluded_for_search)}\n )\n\n context[\"form_edit_tags\"] = EditContentTagsForm(self.versioned_object, self.object)\n context[\"form_edit_goals\"] = EditGoalsForm(self.object)\n\n # pagination of comments\n make_pagination(\n context,\n self.request,\n reactions,\n settings.ZDS_APP[\"content\"][\"notes_per_page\"],\n context_list_name=\"reactions\",\n with_previous_item=True,\n )\n\n # is JS activated ?\n context[\"is_js\"] = True\n if not self.object.js_support:\n context[\"is_js\"] = False\n\n # optimize requests:\n votes = 
CommentVote.objects.filter(user_id=self.request.user.id, comment__in=reactions).all()\n context[\"user_like\"] = [vote.comment_id for vote in votes if vote.positive]\n context[\"user_dislike\"] = [vote.comment_id for vote in votes if not vote.positive]\n\n if self.request.user.has_perm(\"tutorialv2.change_contentreaction\"):\n context[\"user_can_modify\"] = [reaction.pk for reaction in reactions]\n else:\n context[\"user_can_modify\"] = [reaction.pk for reaction in reactions if reaction.author == self.request.user]\n\n context[\"is_antispam\"] = self.object.antispam()\n context[\"pm_link\"] = self.object.get_absolute_contact_url(_(\"\u00c0 propos de\"))\n context[\"subscriber_count\"] = ContentReactionAnswerSubscription.objects.get_subscriptions(self.object).count()\n # We need reading time expressed in minutes\n try:\n char_count = self.object.public_version.char_count\n if char_count:\n context[\"reading_time\"] = int(\n self.versioned_object.get_tree_level()\n * char_count\n / settings.ZDS_APP[\"content\"][\"characters_per_minute\"]\n )\n else:\n logger.warning(\"For unknown reason content with id %s has no char count\", self.object.pk)\n context[\"reading_time\"] = 0\n except ZeroDivisionError as e:\n logger.warning(\"could not compute reading time: setting characters_per_minute is set to zero (error=%s)\", e)\n\n if self.request.user.is_authenticated:\n if len(context[\"reactions\"]) > 0:\n signals.content_read.send(\n sender=context[\"reactions\"][0].__class__, instances=context[\"reactions\"], user=self.request.user\n )\n signals.content_read.send(\n sender=self.object.__class__, instance=self.object, user=self.request.user, target=PublishableContent\n )\n if last_participation_is_old(self.object, self.request.user):\n mark_read(self.object, self.request.user)\n\n context[\"contributions\"] = ContentContribution.objects.filter(content=self.object).order_by(\n \"contribution_role__position\"\n )\n context[\"content_suggestions_random\"] = ContentSuggestion.objects.filter(publication=self.object).order_by(\"?\")[\n : settings.ZDS_APP[\"content\"][\"suggestions_per_page\"]\n ]\n\n return context\n\n\nclass DisplayOnlineArticle(DisplayOnlineContent):\n \"\"\"Displays the list of published articles\"\"\"\n\n current_content_type = \"ARTICLE\"\n verbose_type_name = _(\"article\")\n verbose_type_name_plural = _(\"articles\")\n\n\nclass DisplayOnlineTutorial(DisplayOnlineContent):\n \"\"\"Displays the list of published tutorials\"\"\"\n\n current_content_type = \"TUTORIAL\"\n verbose_type_name = _(\"tutoriel\")\n verbose_type_name_plural = _(\"tutoriels\")\n\n\nclass DisplayOnlineOpinion(DisplayOnlineContent):\n \"\"\"Displays the list of published articles\"\"\"\n\n current_content_type = \"OPINION\"\n verbose_type_name = _(\"billet\")\n verbose_type_name_plural = _(\"billets\")\n\n\nclass DisplayOnlineContainer(SingleOnlineContentDetailViewMixin):\n \"\"\"Base class that can show any content in any state\"\"\"\n\n template_name = \"tutorialv2/view/container_online.html\"\n current_content_type = \"TUTORIAL\" # obviously, an article cannot have container !\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n container = search_container_or_404(self.versioned_object, self.kwargs)\n\n context[\"container\"] = container\n context[\"pm_link\"] = self.object.get_absolute_contact_url(_(\"\u00c0 propos de\"))\n\n context[\"formWarnTypo\"] = WarnTypoForm(\n self.versioned_object, container, initial={\"target\": container.get_path(relative=True)}\n )\n\n # 
pagination: search for `previous` and `next`, if available\n if not self.versioned_object.has_extracts():\n chapters = self.versioned_object.get_list_of_chapters()\n try:\n position = chapters.index(container)\n except ValueError:\n pass # this is not (yet?) a chapter\n else:\n context[\"has_pagination\"] = True\n context[\"previous\"] = None\n context[\"next\"] = None\n if position == 0:\n context[\"previous\"] = container.parent\n if position > 0:\n previous_chapter = chapters[position - 1]\n if previous_chapter.parent == container.parent:\n context[\"previous\"] = previous_chapter\n else:\n context[\"previous\"] = container.parent\n if position < len(chapters) - 1:\n next_chapter = chapters[position + 1]\n if next_chapter.parent == container.parent:\n context[\"next\"] = next_chapter\n else:\n context[\"next\"] = next_chapter.parent\n\n return context\n\n\nclass DisplayBetaContent(DisplayContent):\n \"\"\"View to get the beta version of a content\"\"\"\n\n sha = None\n\n def get_object(self, queryset=None):\n \"\"\"rewritten to ensure that the version is set to beta, raise Http404 if there is no such version\"\"\"\n obj = super().get_object(queryset)\n\n if not obj.sha_beta:\n raise Http404(\"Aucune b\u00eata n'existe pour ce contenu.\")\n else:\n self.sha = obj.sha_beta\n\n # make the slug always right in URLs resolution:\n if \"slug\" in self.kwargs:\n self.kwargs[\"slug\"] = obj.slug\n\n return obj\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"pm_link\"] = self.object.get_absolute_contact_url()\n return context\n\n\nclass DisplayBetaContainer(DisplayContainer):\n \"\"\"View to get the beta version of a container\"\"\"\n\n sha = None\n\n def get_object(self, queryset=None):\n \"\"\"rewritten to ensure that the version is set to beta, raise Http404 if there is no such version\"\"\"\n obj = super().get_object(queryset)\n\n if not obj.sha_beta:\n raise Http404(\"Aucune b\u00eata n'existe pour ce contenu.\")\n else:\n self.sha = obj.sha_beta\n\n # make the slug always right in URLs resolution:\n if \"slug\" in self.kwargs:\n self.kwargs[\"slug\"] = obj.slug\n\n return obj\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"pm_link\"] = self.object.get_absolute_contact_url()\n return context\n", "path": "zds/tutorialv2/views/display.py"}]}
| 3,905 | 181 |
gh_patches_debug_3950
|
rasdani/github-patches
|
git_diff
|
deis__deis-2622
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
LICENSE needs updating for 2014
It still says 2013
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # deis documentation build configuration file, created by
4 # sphinx-quickstart on Fri Jul 26 12:12:00 2013.
5 #
6 # This file is execfile()d with the current directory set to its containing dir.
7 #
8 # Note that not all possible configuration values are present in this
9 # autogenerated file.
10 #
11 # All configuration values have a default; values that are commented out
12 # serve to show the default.
13
14 import os
15 import sys
16
17 # If extensions (or modules to document with autodoc) are in another directory,
18 # add these directories to sys.path here. If the directory is relative to the
19 # documentation root, use os.path.abspath to make it absolute, like shown here.
20
21 # Some hackery here to get deis.py to be importable as client.deis
22 open(os.path.join('..', '__init__.py'), 'a')
23 sys.path.insert(0, os.path.abspath(os.path.join('..')))
24 sys.path.insert(0, os.path.abspath(os.path.join('..', 'controller')))
25 # create local_settings.py for SECRET_KEY if necessary
26 local_settings_path = os.path.abspath(
27 os.path.join('..', 'controller', 'deis', 'local_settings.py'))
28 if not os.path.exists(local_settings_path):
29 with open(local_settings_path, 'w') as local_settings:
30 local_settings.write("SECRET_KEY = 'DummySecretKey'\n")
31 # set up Django
32 os.environ['DJANGO_SETTINGS_MODULE'] = 'deis.settings'
33 from django.conf import settings # noqa
34
35 # -- General configuration -----------------------------------------------------
36
37 # If your documentation needs a minimal Sphinx version, state it here.
38 #needs_sphinx = '1.0'
39
40 # Add any Sphinx extension module names here, as strings. They can be extensions
41 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
42 extensions = ['sphinx.ext.autodoc', 'sphinx.ext.autosummary',
43 'sphinx.ext.viewcode']
44
45 # default flags for auto-generated python code documetation
46 autodoc_default_flags = ['members', 'undoc-members']
47
48 # Add any paths that contain templates here, relative to this directory.
49 templates_path = ['_templates']
50
51 # The suffix of source filenames.
52 source_suffix = '.rst'
53
54 # The encoding of source files.
55 #source_encoding = 'utf-8-sig'
56
57 # The master toctree document.
58 master_doc = 'toctree'
59
60 # General information about the project.
61 project = u'deis'
62 copyright = u'2013, OpDemand LLC'
63
64 # The version info for the project you're documenting, acts as replacement for
65 # |version| and |release|, also used in various other places throughout the
66 # built documents.
67 #
68 from deis import __version__
69
70 # The short X.Y version.
71 version = __version__.rsplit('.', 1)[0]
72 # The full version, including alpha/beta/rc tags.
73 release = __version__
74
75 # The language for content autogenerated by Sphinx. Refer to documentation
76 # for a list of supported languages.
77 #language = None
78
79 # There are two options for replacing |today|: either, you set today to some
80 # non-false value, then it is used:
81 #today = ''
82 # Else, today_fmt is used as the format for a strftime call.
83 #today_fmt = '%B %d, %Y'
84
85 # List of patterns, relative to source directory, that match files and
86 # directories to ignore when looking for source files.
87 exclude_patterns = ['_build', 'venv', '**/_*.rst']
88
89 # The reST default role (used for this markup: `text`) to use for all documents.
90 #default_role = None
91
92 # If true, '()' will be appended to :func: etc. cross-reference text.
93 #add_function_parentheses = True
94
95 # If true, the current module name will be prepended to all description
96 # unit titles (such as .. function::).
97 #add_module_names = True
98
99 # If true, sectionauthor and moduleauthor directives will be shown in the
100 # output. They are ignored by default.
101 #show_authors = False
102
103 # The name of the Pygments (syntax highlighting) style to use.
104 pygments_style = 'sphinx'
105
106 # A list of ignored prefixes for module index sorting.
107 #modindex_common_prefix = []
108
109 # If true, keep warnings as "system message" paragraphs in the built documents.
110 #keep_warnings = False
111
112
113 # -- Options for HTML output ---------------------------------------------------
114
115 # The theme to use for HTML and HTML Help pages. See the documentation for
116 # a list of builtin themes.
117 html_theme = 'deis'
118
119 # Theme options are theme-specific and customize the look and feel of a theme
120 # further. For a list of options available for each theme, see the
121 # documentation.
122 #html_theme_options = {}
123
124 # Add any paths that contain custom themes here, relative to this directory.
125 html_theme_path = ['theme']
126
127 # The name for this set of Sphinx documents. If None, it defaults to
128 # "<project> v<release> documentation".
129 #html_title = None
130
131 # A shorter title for the navigation bar. Default is the same as html_title.
132 #html_short_title = None
133
134 # The name of an image file (relative to this directory) to place at the top
135 # of the sidebar.
136 #html_logo = None
137
138 # The name of an image file (within the static path) to use as favicon of the
139 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
140 # pixels large.
141 #html_favicon = None
142
143 # Add any paths that contain custom static files (such as style sheets) here,
144 # relative to this directory. They are copied after the builtin static files,
145 # so a file named "default.css" will overwrite the builtin "default.css".
146 html_static_path = ['../controller/web/static']
147
148 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
149 # using the given strftime format.
150 #html_last_updated_fmt = '%b %d, %Y'
151
152 # If true, SmartyPants will be used to convert quotes and dashes to
153 # typographically correct entities.
154 html_use_smartypants = True
155
156 html_add_permalinks = True
157
158 # Custom sidebar templates, maps document names to template names.
159 #html_sidebars = {}
160
161 # Additional templates that should be rendered to pages, maps page names to
162 # template names.
163 #html_additional_pages = {}
164
165 # If false, no module index is generated.
166 #html_domain_indices = True
167
168 # If false, no index is generated.
169 #html_use_index = True
170
171 # If true, the index is split into individual pages for each letter.
172 #html_split_index = False
173
174 # If true, links to the reST sources are added to the pages.
175 #html_show_sourcelink = True
176
177 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
178 #html_show_sphinx = True
179
180 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
181 #html_show_copyright = True
182
183 # If true, an OpenSearch description file will be output, and all pages will
184 # contain a <link> tag referring to it. The value of this option must be the
185 # base URL from which the finished HTML is served.
186 #html_use_opensearch = ''
187
188 # This is the file name suffix for HTML files (e.g. ".xhtml").
189 #html_file_suffix = None
190
191 # Output file base name for HTML help builder.
192 htmlhelp_basename = 'deisdoc'
193
194
195 # -- Options for LaTeX output --------------------------------------------------
196
197 latex_elements = {
198 # The paper size ('letterpaper' or 'a4paper').
199 #'papersize': 'letterpaper',
200
201 # The font size ('10pt', '11pt' or '12pt').
202 #'pointsize': '10pt',
203
204 # Additional stuff for the LaTeX preamble.
205 #'preamble': '',
206 }
207
208 # Grouping the document tree into LaTeX files. List of tuples
209 # (source start file, target name, title, author, documentclass [howto/manual]).
210 latex_documents = [
211 ('index', 'deis.tex', u'deis Documentation',
212 u'Author', 'manual'),
213 ]
214
215 # The name of an image file (relative to this directory) to place at the top of
216 # the title page.
217 #latex_logo = None
218
219 # For "manual" documents, if this is true, then toplevel headings are parts,
220 # not chapters.
221 #latex_use_parts = False
222
223 # If true, show page references after internal links.
224 #latex_show_pagerefs = False
225
226 # If true, show URL addresses after external links.
227 #latex_show_urls = False
228
229 # Documents to append as an appendix to all manuals.
230 #latex_appendices = []
231
232 # If false, no module index is generated.
233 #latex_domain_indices = True
234
235
236 # -- Options for manual page output --------------------------------------------
237
238 # One entry per manual page. List of tuples
239 # (source start file, name, description, authors, manual section).
240 man_pages = [
241 ('index', 'deis', u'deis Documentation',
242 [u'Author'], 1)
243 ]
244
245 # If true, show URL addresses after external links.
246 #man_show_urls = False
247
248
249 # -- Options for Texinfo output ------------------------------------------------
250
251 # Grouping the document tree into Texinfo files. List of tuples
252 # (source start file, target name, title, author,
253 # dir menu entry, description, category)
254 texinfo_documents = [
255 ('index', 'deis', u'deis Documentation',
256 u'Author', 'deis', 'One line description of project.',
257 'Miscellaneous'),
258 ]
259
260 # Documents to append as an appendix to all manuals.
261 #texinfo_appendices = []
262
263 # If false, no module index is generated.
264 #texinfo_domain_indices = True
265
266 # How to display URL addresses: 'footnote', 'no', or 'inline'.
267 #texinfo_show_urls = 'footnote'
268
269 # If true, do not generate a @detailmenu in the "Top" node's menu.
270 #texinfo_no_detailmenu = False
271
272
273 # -- Options for Epub output ---------------------------------------------------
274
275 # Bibliographic Dublin Core info.
276 epub_title = u'deis'
277 epub_author = u'OpDemand LLC'
278 epub_publisher = u'OpDemand LLC'
279 epub_copyright = u'2013, OpDemand LLC'
280
281 # The language of the text. It defaults to the language option
282 # or en if the language is not set.
283 #epub_language = ''
284
285 # The scheme of the identifier. Typical schemes are ISBN or URL.
286 #epub_scheme = ''
287
288 # The unique identifier of the text. This can be a ISBN number
289 # or the project homepage.
290 #epub_identifier = ''
291
292 # A unique identification for the text.
293 #epub_uid = ''
294
295 # A tuple containing the cover image and cover page html template filenames.
296 #epub_cover = ()
297
298 # A sequence of (type, uri, title) tuples for the guide element of content.opf.
299 #epub_guide = ()
300
301 # HTML files that should be inserted before the pages created by sphinx.
302 # The format is a list of tuples containing the path and title.
303 #epub_pre_files = []
304
305 # HTML files shat should be inserted after the pages created by sphinx.
306 # The format is a list of tuples containing the path and title.
307 #epub_post_files = []
308
309 # A list of files that should not be packed into the epub file.
310 #epub_exclude_files = []
311
312 # The depth of the table of contents in toc.ncx.
313 #epub_tocdepth = 3
314
315 # Allow duplicate toc entries.
316 #epub_tocdup = True
317
318 # Fix unsupported image types using the PIL.
319 #epub_fix_images = False
320
321 # Scale large images.
322 #epub_max_image_width = 0
323
324 # If 'no', URL addresses will not be shown.
325 #epub_show_urls = 'inline'
326
327 # If false, no index is generated.
328 #epub_use_index = True
329
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -59,7 +59,7 @@
 
 # General information about the project.
 project = u'deis'
-copyright = u'2013, OpDemand LLC'
+copyright = u'2013, 2014 OpDemand LLC'
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -59,7 +59,7 @@\n \n # General information about the project.\n project = u'deis'\n-copyright = u'2013, OpDemand LLC'\n+copyright = u'2013, 2014 OpDemand LLC'\n \n # The version info for the project you're documenting, acts as replacement for\n # |version| and |release|, also used in various other places throughout the\n", "issue": "LICENSE needs updating for 2014\nIt still says 2013\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# deis documentation build configuration file, created by\n# sphinx-quickstart on Fri Jul 26 12:12:00 2013.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport os\nimport sys\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\n# Some hackery here to get deis.py to be importable as client.deis\nopen(os.path.join('..', '__init__.py'), 'a')\nsys.path.insert(0, os.path.abspath(os.path.join('..')))\nsys.path.insert(0, os.path.abspath(os.path.join('..', 'controller')))\n# create local_settings.py for SECRET_KEY if necessary\nlocal_settings_path = os.path.abspath(\n os.path.join('..', 'controller', 'deis', 'local_settings.py'))\nif not os.path.exists(local_settings_path):\n with open(local_settings_path, 'w') as local_settings:\n local_settings.write(\"SECRET_KEY = 'DummySecretKey'\\n\")\n# set up Django\nos.environ['DJANGO_SETTINGS_MODULE'] = 'deis.settings'\nfrom django.conf import settings # noqa\n\n# -- General configuration -----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = ['sphinx.ext.autodoc', 'sphinx.ext.autosummary',\n 'sphinx.ext.viewcode']\n\n# default flags for auto-generated python code documetation\nautodoc_default_flags = ['members', 'undoc-members']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'toctree'\n\n# General information about the project.\nproject = u'deis'\ncopyright = u'2013, OpDemand LLC'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\nfrom deis import __version__\n\n# The short X.Y version.\nversion = __version__.rsplit('.', 1)[0]\n# The full version, including alpha/beta/rc tags.\nrelease = __version__\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build', 'venv', '**/_*.rst']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# -- Options for HTML output ---------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = 'deis'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\nhtml_theme_path = ['theme']\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['../controller/web/static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\nhtml_use_smartypants = True\n\nhtml_add_permalinks = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'deisdoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n ('index', 'deis.tex', u'deis Documentation',\n u'Author', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'deis', u'deis Documentation',\n [u'Author'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'deis', u'deis Documentation',\n u'Author', 'deis', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\n# -- Options for Epub output ---------------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = u'deis'\nepub_author = u'OpDemand LLC'\nepub_publisher = u'OpDemand LLC'\nepub_copyright = u'2013, OpDemand LLC'\n\n# The language of the text. It defaults to the language option\n# or en if the language is not set.\n#epub_language = ''\n\n# The scheme of the identifier. Typical schemes are ISBN or URL.\n#epub_scheme = ''\n\n# The unique identifier of the text. This can be a ISBN number\n# or the project homepage.\n#epub_identifier = ''\n\n# A unique identification for the text.\n#epub_uid = ''\n\n# A tuple containing the cover image and cover page html template filenames.\n#epub_cover = ()\n\n# A sequence of (type, uri, title) tuples for the guide element of content.opf.\n#epub_guide = ()\n\n# HTML files that should be inserted before the pages created by sphinx.\n# The format is a list of tuples containing the path and title.\n#epub_pre_files = []\n\n# HTML files shat should be inserted after the pages created by sphinx.\n# The format is a list of tuples containing the path and title.\n#epub_post_files = []\n\n# A list of files that should not be packed into the epub file.\n#epub_exclude_files = []\n\n# The depth of the table of contents in toc.ncx.\n#epub_tocdepth = 3\n\n# Allow duplicate toc entries.\n#epub_tocdup = True\n\n# Fix unsupported image types using the PIL.\n#epub_fix_images = False\n\n# Scale large images.\n#epub_max_image_width = 0\n\n# If 'no', URL addresses will not be shown.\n#epub_show_urls = 'inline'\n\n# If false, no index is generated.\n#epub_use_index = True\n", "path": "docs/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# deis documentation build configuration file, created by\n# sphinx-quickstart on Fri Jul 26 12:12:00 2013.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport os\nimport sys\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. 
If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\n# Some hackery here to get deis.py to be importable as client.deis\nopen(os.path.join('..', '__init__.py'), 'a')\nsys.path.insert(0, os.path.abspath(os.path.join('..')))\nsys.path.insert(0, os.path.abspath(os.path.join('..', 'controller')))\n# create local_settings.py for SECRET_KEY if necessary\nlocal_settings_path = os.path.abspath(\n os.path.join('..', 'controller', 'deis', 'local_settings.py'))\nif not os.path.exists(local_settings_path):\n with open(local_settings_path, 'w') as local_settings:\n local_settings.write(\"SECRET_KEY = 'DummySecretKey'\\n\")\n# set up Django\nos.environ['DJANGO_SETTINGS_MODULE'] = 'deis.settings'\nfrom django.conf import settings # noqa\n\n# -- General configuration -----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = ['sphinx.ext.autodoc', 'sphinx.ext.autosummary',\n 'sphinx.ext.viewcode']\n\n# default flags for auto-generated python code documetation\nautodoc_default_flags = ['members', 'undoc-members']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'toctree'\n\n# General information about the project.\nproject = u'deis'\ncopyright = u'2013, 2014 OpDemand LLC'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\nfrom deis import __version__\n\n# The short X.Y version.\nversion = __version__.rsplit('.', 1)[0]\n# The full version, including alpha/beta/rc tags.\nrelease = __version__\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build', 'venv', '**/_*.rst']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# -- Options for HTML output ---------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\nhtml_theme = 'deis'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\nhtml_theme_path = ['theme']\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['../controller/web/static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\nhtml_use_smartypants = True\n\nhtml_add_permalinks = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'deisdoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n ('index', 'deis.tex', u'deis Documentation',\n u'Author', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'deis', u'deis Documentation',\n [u'Author'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'deis', u'deis Documentation',\n u'Author', 'deis', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\n# -- Options for Epub output ---------------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = u'deis'\nepub_author = u'OpDemand LLC'\nepub_publisher = u'OpDemand LLC'\nepub_copyright = u'2013, OpDemand LLC'\n\n# The language of the text. It defaults to the language option\n# or en if the language is not set.\n#epub_language = ''\n\n# The scheme of the identifier. Typical schemes are ISBN or URL.\n#epub_scheme = ''\n\n# The unique identifier of the text. This can be a ISBN number\n# or the project homepage.\n#epub_identifier = ''\n\n# A unique identification for the text.\n#epub_uid = ''\n\n# A tuple containing the cover image and cover page html template filenames.\n#epub_cover = ()\n\n# A sequence of (type, uri, title) tuples for the guide element of content.opf.\n#epub_guide = ()\n\n# HTML files that should be inserted before the pages created by sphinx.\n# The format is a list of tuples containing the path and title.\n#epub_pre_files = []\n\n# HTML files shat should be inserted after the pages created by sphinx.\n# The format is a list of tuples containing the path and title.\n#epub_post_files = []\n\n# A list of files that should not be packed into the epub file.\n#epub_exclude_files = []\n\n# The depth of the table of contents in toc.ncx.\n#epub_tocdepth = 3\n\n# Allow duplicate toc entries.\n#epub_tocdup = True\n\n# Fix unsupported image types using the PIL.\n#epub_fix_images = False\n\n# Scale large images.\n#epub_max_image_width = 0\n\n# If 'no', URL addresses will not be shown.\n#epub_show_urls = 'inline'\n\n# If false, no index is generated.\n#epub_use_index = True\n", "path": "docs/conf.py"}]}
| 3,798 | 119 |
gh_patches_debug_14589
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-3379
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot use contracts with inherited callbacks
### Description
If you want to `scrapy check` a spider that has inherited methods, these methods' contracts will be ignored.
### Reproduce
```python
class BaseSpider(Spider):

    def returns_request(self, response):
        """ method which returns request
        @url https://docs.scrapy.org/en/latest/
        @returns requests 1
        """
        return Request('http://scrapy.org', callback=self.returns_item)


class DemoSpider(BaseSpider):
    name = 'demo_spider'
```
And then run `scrapy check`.
You'll get the following output:
```
----------------------------------------------------------------------
Ran 0 contracts in 0.000s
OK
```
### Reason
`ContractsManager.tested_methods_from_spidercls` uses `vars(spidercls).items()` to get methods.
### Solution
Use `inspect.getmembers(spidercls)` instead.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/contracts/__init__.py`
Content:
```
1 import sys
2 import re
3 from functools import wraps
4 from unittest import TestCase
5
6 from scrapy.http import Request
7 from scrapy.utils.spider import iterate_spider_output
8 from scrapy.utils.python import get_spec
9
10
11 class ContractsManager(object):
12 contracts = {}
13
14 def __init__(self, contracts):
15 for contract in contracts:
16 self.contracts[contract.name] = contract
17
18 def tested_methods_from_spidercls(self, spidercls):
19 methods = []
20 for key, value in vars(spidercls).items():
21 if (callable(value) and value.__doc__ and
22 re.search(r'^\s*@', value.__doc__, re.MULTILINE)):
23 methods.append(key)
24
25 return methods
26
27 def extract_contracts(self, method):
28 contracts = []
29 for line in method.__doc__.split('\n'):
30 line = line.strip()
31
32 if line.startswith('@'):
33 name, args = re.match(r'@(\w+)\s*(.*)', line).groups()
34 args = re.split(r'\s+', args)
35
36 contracts.append(self.contracts[name](method, *args))
37
38 return contracts
39
40 def from_spider(self, spider, results):
41 requests = []
42 for method in self.tested_methods_from_spidercls(type(spider)):
43 bound_method = spider.__getattribute__(method)
44 requests.append(self.from_method(bound_method, results))
45
46 return requests
47
48 def from_method(self, method, results):
49 contracts = self.extract_contracts(method)
50 if contracts:
51 # calculate request args
52 args, kwargs = get_spec(Request.__init__)
53 kwargs['callback'] = method
54 for contract in contracts:
55 kwargs = contract.adjust_request_args(kwargs)
56
57 # create and prepare request
58 args.remove('self')
59 if set(args).issubset(set(kwargs)):
60 request = Request(**kwargs)
61
62 # execute pre and post hooks in order
63 for contract in reversed(contracts):
64 request = contract.add_pre_hook(request, results)
65 for contract in contracts:
66 request = contract.add_post_hook(request, results)
67
68 self._clean_req(request, method, results)
69 return request
70
71 def _clean_req(self, request, method, results):
72 """ stop the request from returning objects and records any errors """
73
74 cb = request.callback
75
76 @wraps(cb)
77 def cb_wrapper(response):
78 try:
79 output = cb(response)
80 output = list(iterate_spider_output(output))
81 except:
82 case = _create_testcase(method, 'callback')
83 results.addError(case, sys.exc_info())
84
85 def eb_wrapper(failure):
86 case = _create_testcase(method, 'errback')
87 exc_info = failure.type, failure.value, failure.getTracebackObject()
88 results.addError(case, exc_info)
89
90 request.callback = cb_wrapper
91 request.errback = eb_wrapper
92
93
94 class Contract(object):
95 """ Abstract class for contracts """
96
97 def __init__(self, method, *args):
98 self.testcase_pre = _create_testcase(method, '@%s pre-hook' % self.name)
99 self.testcase_post = _create_testcase(method, '@%s post-hook' % self.name)
100 self.args = args
101
102 def add_pre_hook(self, request, results):
103 if hasattr(self, 'pre_process'):
104 cb = request.callback
105
106 @wraps(cb)
107 def wrapper(response):
108 try:
109 results.startTest(self.testcase_pre)
110 self.pre_process(response)
111 results.stopTest(self.testcase_pre)
112 except AssertionError:
113 results.addFailure(self.testcase_pre, sys.exc_info())
114 except Exception:
115 results.addError(self.testcase_pre, sys.exc_info())
116 else:
117 results.addSuccess(self.testcase_pre)
118 finally:
119 return list(iterate_spider_output(cb(response)))
120
121 request.callback = wrapper
122
123 return request
124
125 def add_post_hook(self, request, results):
126 if hasattr(self, 'post_process'):
127 cb = request.callback
128
129 @wraps(cb)
130 def wrapper(response):
131 output = list(iterate_spider_output(cb(response)))
132 try:
133 results.startTest(self.testcase_post)
134 self.post_process(output)
135 results.stopTest(self.testcase_post)
136 except AssertionError:
137 results.addFailure(self.testcase_post, sys.exc_info())
138 except Exception:
139 results.addError(self.testcase_post, sys.exc_info())
140 else:
141 results.addSuccess(self.testcase_post)
142 finally:
143 return output
144
145 request.callback = wrapper
146
147 return request
148
149 def adjust_request_args(self, args):
150 return args
151
152
153 def _create_testcase(method, desc):
154 spider = method.__self__.name
155
156 class ContractTestCase(TestCase):
157 def __str__(_self):
158 return "[%s] %s (%s)" % (spider, method.__name__, desc)
159
160 name = '%s_%s' % (spider, method.__name__)
161 setattr(ContractTestCase, name, lambda x: x)
162 return ContractTestCase(name)
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/contracts/__init__.py b/scrapy/contracts/__init__.py
--- a/scrapy/contracts/__init__.py
+++ b/scrapy/contracts/__init__.py
@@ -1,6 +1,7 @@
 import sys
 import re
 from functools import wraps
+from inspect import getmembers
 from unittest import TestCase
 
 from scrapy.http import Request
@@ -17,7 +18,7 @@
 
     def tested_methods_from_spidercls(self, spidercls):
        methods = []
-        for key, value in vars(spidercls).items():
+        for key, value in getmembers(spidercls):
             if (callable(value) and value.__doc__ and
                    re.search(r'^\s*@', value.__doc__, re.MULTILINE)):
                 methods.append(key)
|
{"golden_diff": "diff --git a/scrapy/contracts/__init__.py b/scrapy/contracts/__init__.py\n--- a/scrapy/contracts/__init__.py\n+++ b/scrapy/contracts/__init__.py\n@@ -1,6 +1,7 @@\n import sys\n import re\n from functools import wraps\n+from inspect import getmembers\n from unittest import TestCase\n \n from scrapy.http import Request\n@@ -17,7 +18,7 @@\n \n def tested_methods_from_spidercls(self, spidercls):\n methods = []\n- for key, value in vars(spidercls).items():\n+ for key, value in getmembers(spidercls):\n if (callable(value) and value.__doc__ and\n re.search(r'^\\s*@', value.__doc__, re.MULTILINE)):\n methods.append(key)\n", "issue": "Cannot use contracts with inherited callbacks\n### Description\r\n\r\nIf you want to `scrapy check` a spider that has inherited methods, these methods' contracts will be ignored.\r\n\r\n### Reproduce\r\n\r\n```python\r\nclass BaseSpider(Spider):\r\n\r\n def returns_request(self, response):\r\n \"\"\" method which returns request\r\n @url https://docs.scrapy.org/en/latest/\r\n @returns requests 1\r\n \"\"\"\r\n return Request('http://scrapy.org', callback=self.returns_item)\r\n\r\n\r\nclass DemoSpider(BaseSpider):\r\n name = 'demo_spider'\r\n```\r\n\r\nAnd then run `scrapy check`.\r\n\r\nYou'll get the following output:\r\n\r\n```\r\n----------------------------------------------------------------------\r\nRan 0 contracts in 0.000s\r\n\r\nOK\r\n```\r\n\r\n### Reason\r\n\r\n`ContractsManager.tested_methods_from_spidercls` uses `vars(spidercls).items()` to get methods.\r\n\r\n### Solution\r\n\r\nUse `inspect.getmembers(spidercls)` instead.\n", "before_files": [{"content": "import sys\nimport re\nfrom functools import wraps\nfrom unittest import TestCase\n\nfrom scrapy.http import Request\nfrom scrapy.utils.spider import iterate_spider_output\nfrom scrapy.utils.python import get_spec\n\n\nclass ContractsManager(object):\n contracts = {}\n\n def __init__(self, contracts):\n for contract in contracts:\n self.contracts[contract.name] = contract\n\n def tested_methods_from_spidercls(self, spidercls):\n methods = []\n for key, value in vars(spidercls).items():\n if (callable(value) and value.__doc__ and\n re.search(r'^\\s*@', value.__doc__, re.MULTILINE)):\n methods.append(key)\n\n return methods\n\n def extract_contracts(self, method):\n contracts = []\n for line in method.__doc__.split('\\n'):\n line = line.strip()\n\n if line.startswith('@'):\n name, args = re.match(r'@(\\w+)\\s*(.*)', line).groups()\n args = re.split(r'\\s+', args)\n\n contracts.append(self.contracts[name](method, *args))\n\n return contracts\n\n def from_spider(self, spider, results):\n requests = []\n for method in self.tested_methods_from_spidercls(type(spider)):\n bound_method = spider.__getattribute__(method)\n requests.append(self.from_method(bound_method, results))\n\n return requests\n\n def from_method(self, method, results):\n contracts = self.extract_contracts(method)\n if contracts:\n # calculate request args\n args, kwargs = get_spec(Request.__init__)\n kwargs['callback'] = method\n for contract in contracts:\n kwargs = contract.adjust_request_args(kwargs)\n\n # create and prepare request\n args.remove('self')\n if set(args).issubset(set(kwargs)):\n request = Request(**kwargs)\n\n # execute pre and post hooks in order\n for contract in reversed(contracts):\n request = contract.add_pre_hook(request, results)\n for contract in contracts:\n request = contract.add_post_hook(request, results)\n\n self._clean_req(request, method, results)\n return request\n\n def _clean_req(self, 
request, method, results):\n \"\"\" stop the request from returning objects and records any errors \"\"\"\n\n cb = request.callback\n\n @wraps(cb)\n def cb_wrapper(response):\n try:\n output = cb(response)\n output = list(iterate_spider_output(output))\n except:\n case = _create_testcase(method, 'callback')\n results.addError(case, sys.exc_info())\n\n def eb_wrapper(failure):\n case = _create_testcase(method, 'errback')\n exc_info = failure.type, failure.value, failure.getTracebackObject()\n results.addError(case, exc_info)\n\n request.callback = cb_wrapper\n request.errback = eb_wrapper\n\n\nclass Contract(object):\n \"\"\" Abstract class for contracts \"\"\"\n\n def __init__(self, method, *args):\n self.testcase_pre = _create_testcase(method, '@%s pre-hook' % self.name)\n self.testcase_post = _create_testcase(method, '@%s post-hook' % self.name)\n self.args = args\n\n def add_pre_hook(self, request, results):\n if hasattr(self, 'pre_process'):\n cb = request.callback\n\n @wraps(cb)\n def wrapper(response):\n try:\n results.startTest(self.testcase_pre)\n self.pre_process(response)\n results.stopTest(self.testcase_pre)\n except AssertionError:\n results.addFailure(self.testcase_pre, sys.exc_info())\n except Exception:\n results.addError(self.testcase_pre, sys.exc_info())\n else:\n results.addSuccess(self.testcase_pre)\n finally:\n return list(iterate_spider_output(cb(response)))\n\n request.callback = wrapper\n\n return request\n\n def add_post_hook(self, request, results):\n if hasattr(self, 'post_process'):\n cb = request.callback\n\n @wraps(cb)\n def wrapper(response):\n output = list(iterate_spider_output(cb(response)))\n try:\n results.startTest(self.testcase_post)\n self.post_process(output)\n results.stopTest(self.testcase_post)\n except AssertionError:\n results.addFailure(self.testcase_post, sys.exc_info())\n except Exception:\n results.addError(self.testcase_post, sys.exc_info())\n else:\n results.addSuccess(self.testcase_post)\n finally:\n return output\n\n request.callback = wrapper\n\n return request\n\n def adjust_request_args(self, args):\n return args\n\n\ndef _create_testcase(method, desc):\n spider = method.__self__.name\n\n class ContractTestCase(TestCase):\n def __str__(_self):\n return \"[%s] %s (%s)\" % (spider, method.__name__, desc)\n\n name = '%s_%s' % (spider, method.__name__)\n setattr(ContractTestCase, name, lambda x: x)\n return ContractTestCase(name)\n", "path": "scrapy/contracts/__init__.py"}], "after_files": [{"content": "import sys\nimport re\nfrom functools import wraps\nfrom inspect import getmembers\nfrom unittest import TestCase\n\nfrom scrapy.http import Request\nfrom scrapy.utils.spider import iterate_spider_output\nfrom scrapy.utils.python import get_spec\n\n\nclass ContractsManager(object):\n contracts = {}\n\n def __init__(self, contracts):\n for contract in contracts:\n self.contracts[contract.name] = contract\n\n def tested_methods_from_spidercls(self, spidercls):\n methods = []\n for key, value in getmembers(spidercls):\n if (callable(value) and value.__doc__ and\n re.search(r'^\\s*@', value.__doc__, re.MULTILINE)):\n methods.append(key)\n\n return methods\n\n def extract_contracts(self, method):\n contracts = []\n for line in method.__doc__.split('\\n'):\n line = line.strip()\n\n if line.startswith('@'):\n name, args = re.match(r'@(\\w+)\\s*(.*)', line).groups()\n args = re.split(r'\\s+', args)\n\n contracts.append(self.contracts[name](method, *args))\n\n return contracts\n\n def from_spider(self, spider, results):\n requests = []\n for method 
in self.tested_methods_from_spidercls(type(spider)):\n bound_method = spider.__getattribute__(method)\n requests.append(self.from_method(bound_method, results))\n\n return requests\n\n def from_method(self, method, results):\n contracts = self.extract_contracts(method)\n if contracts:\n # calculate request args\n args, kwargs = get_spec(Request.__init__)\n kwargs['callback'] = method\n for contract in contracts:\n kwargs = contract.adjust_request_args(kwargs)\n\n # create and prepare request\n args.remove('self')\n if set(args).issubset(set(kwargs)):\n request = Request(**kwargs)\n\n # execute pre and post hooks in order\n for contract in reversed(contracts):\n request = contract.add_pre_hook(request, results)\n for contract in contracts:\n request = contract.add_post_hook(request, results)\n\n self._clean_req(request, method, results)\n return request\n\n def _clean_req(self, request, method, results):\n \"\"\" stop the request from returning objects and records any errors \"\"\"\n\n cb = request.callback\n\n @wraps(cb)\n def cb_wrapper(response):\n try:\n output = cb(response)\n output = list(iterate_spider_output(output))\n except:\n case = _create_testcase(method, 'callback')\n results.addError(case, sys.exc_info())\n\n def eb_wrapper(failure):\n case = _create_testcase(method, 'errback')\n exc_info = failure.type, failure.value, failure.getTracebackObject()\n results.addError(case, exc_info)\n\n request.callback = cb_wrapper\n request.errback = eb_wrapper\n\n\nclass Contract(object):\n \"\"\" Abstract class for contracts \"\"\"\n\n def __init__(self, method, *args):\n self.testcase_pre = _create_testcase(method, '@%s pre-hook' % self.name)\n self.testcase_post = _create_testcase(method, '@%s post-hook' % self.name)\n self.args = args\n\n def add_pre_hook(self, request, results):\n if hasattr(self, 'pre_process'):\n cb = request.callback\n\n @wraps(cb)\n def wrapper(response):\n try:\n results.startTest(self.testcase_pre)\n self.pre_process(response)\n results.stopTest(self.testcase_pre)\n except AssertionError:\n results.addFailure(self.testcase_pre, sys.exc_info())\n except Exception:\n results.addError(self.testcase_pre, sys.exc_info())\n else:\n results.addSuccess(self.testcase_pre)\n finally:\n return list(iterate_spider_output(cb(response)))\n\n request.callback = wrapper\n\n return request\n\n def add_post_hook(self, request, results):\n if hasattr(self, 'post_process'):\n cb = request.callback\n\n @wraps(cb)\n def wrapper(response):\n output = list(iterate_spider_output(cb(response)))\n try:\n results.startTest(self.testcase_post)\n self.post_process(output)\n results.stopTest(self.testcase_post)\n except AssertionError:\n results.addFailure(self.testcase_post, sys.exc_info())\n except Exception:\n results.addError(self.testcase_post, sys.exc_info())\n else:\n results.addSuccess(self.testcase_post)\n finally:\n return output\n\n request.callback = wrapper\n\n return request\n\n def adjust_request_args(self, args):\n return args\n\n\ndef _create_testcase(method, desc):\n spider = method.__self__.name\n\n class ContractTestCase(TestCase):\n def __str__(_self):\n return \"[%s] %s (%s)\" % (spider, method.__name__, desc)\n\n name = '%s_%s' % (spider, method.__name__)\n setattr(ContractTestCase, name, lambda x: x)\n return ContractTestCase(name)\n", "path": "scrapy/contracts/__init__.py"}]}
| 1,916 | 175 |
gh_patches_debug_50118
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-5754
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Silence the `filelock` logger
After we started using `tldextract`, we sometimes get DEBUG-level log messages from `filelock`. It makes sense to silence them, as we already do for some other libraries in https://github.com/scrapy/scrapy/blob/fe60c1224e39aa3d85b20afd54566f135d9de085/scrapy/utils/log.py#L45-L59
--- END ISSUE ---
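A minimal sketch of the requested behaviour outside of Scrapy (only the `filelock` logger name comes from the issue; everything else in the snippet is assumed): raising the library logger's own level keeps its DEBUG records away from the root handler.

```python
import logging

# With a DEBUG root logger, third-party DEBUG records are normally emitted.
logging.basicConfig(level=logging.DEBUG)

# Raising the library logger's level hides its DEBUG/INFO chatter.
logging.getLogger("filelock").setLevel(logging.ERROR)

logging.getLogger("filelock").debug("acquired lock")      # suppressed
logging.getLogger("scrapy.core").debug("still visible")   # shown
```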
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/utils/log.py`
Content:
```
1 import logging
2 import sys
3 import warnings
4 from logging.config import dictConfig
5
6 from twisted.python import log as twisted_log
7 from twisted.python.failure import Failure
8
9 import scrapy
10 from scrapy.exceptions import ScrapyDeprecationWarning
11 from scrapy.settings import Settings
12 from scrapy.utils.versions import scrapy_components_versions
13
14
15 logger = logging.getLogger(__name__)
16
17
18 def failure_to_exc_info(failure):
19 """Extract exc_info from Failure instances"""
20 if isinstance(failure, Failure):
21 return (failure.type, failure.value, failure.getTracebackObject())
22
23
24 class TopLevelFormatter(logging.Filter):
25 """Keep only top level loggers's name (direct children from root) from
26 records.
27
28 This filter will replace Scrapy loggers' names with 'scrapy'. This mimics
29 the old Scrapy log behaviour and helps shortening long names.
30
31 Since it can't be set for just one logger (it won't propagate for its
32 children), it's going to be set in the root handler, with a parametrized
33 ``loggers`` list where it should act.
34 """
35
36 def __init__(self, loggers=None):
37 self.loggers = loggers or []
38
39 def filter(self, record):
40 if any(record.name.startswith(logger + '.') for logger in self.loggers):
41 record.name = record.name.split('.', 1)[0]
42 return True
43
44
45 DEFAULT_LOGGING = {
46 'version': 1,
47 'disable_existing_loggers': False,
48 'loggers': {
49 'hpack': {
50 'level': 'ERROR',
51 },
52 'scrapy': {
53 'level': 'DEBUG',
54 },
55 'twisted': {
56 'level': 'ERROR',
57 },
58 }
59 }
60
61
62 def configure_logging(settings=None, install_root_handler=True):
63 """
64 Initialize logging defaults for Scrapy.
65
66 :param settings: settings used to create and configure a handler for the
67 root logger (default: None).
68 :type settings: dict, :class:`~scrapy.settings.Settings` object or ``None``
69
70 :param install_root_handler: whether to install root logging handler
71 (default: True)
72 :type install_root_handler: bool
73
74 This function does:
75
76 - Route warnings and twisted logging through Python standard logging
77 - Assign DEBUG and ERROR level to Scrapy and Twisted loggers respectively
78 - Route stdout to log if LOG_STDOUT setting is True
79
80 When ``install_root_handler`` is True (default), this function also
81 creates a handler for the root logger according to given settings
82 (see :ref:`topics-logging-settings`). You can override default options
83 using ``settings`` argument. When ``settings`` is empty or None, defaults
84 are used.
85 """
86 if not sys.warnoptions:
87 # Route warnings through python logging
88 logging.captureWarnings(True)
89
90 observer = twisted_log.PythonLoggingObserver('twisted')
91 observer.start()
92
93 dictConfig(DEFAULT_LOGGING)
94
95 if isinstance(settings, dict) or settings is None:
96 settings = Settings(settings)
97
98 if settings.getbool('LOG_STDOUT'):
99 sys.stdout = StreamLogger(logging.getLogger('stdout'))
100
101 if install_root_handler:
102 install_scrapy_root_handler(settings)
103
104
105 def install_scrapy_root_handler(settings):
106 global _scrapy_root_handler
107
108 if (_scrapy_root_handler is not None
109 and _scrapy_root_handler in logging.root.handlers):
110 logging.root.removeHandler(_scrapy_root_handler)
111 logging.root.setLevel(logging.NOTSET)
112 _scrapy_root_handler = _get_handler(settings)
113 logging.root.addHandler(_scrapy_root_handler)
114
115
116 def get_scrapy_root_handler():
117 return _scrapy_root_handler
118
119
120 _scrapy_root_handler = None
121
122
123 def _get_handler(settings):
124 """ Return a log handler object according to settings """
125 filename = settings.get('LOG_FILE')
126 if filename:
127 mode = 'a' if settings.getbool('LOG_FILE_APPEND') else 'w'
128 encoding = settings.get('LOG_ENCODING')
129 handler = logging.FileHandler(filename, mode=mode, encoding=encoding)
130 elif settings.getbool('LOG_ENABLED'):
131 handler = logging.StreamHandler()
132 else:
133 handler = logging.NullHandler()
134
135 formatter = logging.Formatter(
136 fmt=settings.get('LOG_FORMAT'),
137 datefmt=settings.get('LOG_DATEFORMAT')
138 )
139 handler.setFormatter(formatter)
140 handler.setLevel(settings.get('LOG_LEVEL'))
141 if settings.getbool('LOG_SHORT_NAMES'):
142 handler.addFilter(TopLevelFormatter(['scrapy']))
143 return handler
144
145
146 def log_scrapy_info(settings: Settings) -> None:
147 logger.info("Scrapy %(version)s started (bot: %(bot)s)",
148 {'version': scrapy.__version__, 'bot': settings['BOT_NAME']})
149 versions = [
150 f"{name} {version}"
151 for name, version in scrapy_components_versions()
152 if name != "Scrapy"
153 ]
154 logger.info("Versions: %(versions)s", {'versions': ", ".join(versions)})
155
156
157 def log_reactor_info() -> None:
158 from twisted.internet import reactor
159 logger.debug("Using reactor: %s.%s", reactor.__module__, reactor.__class__.__name__)
160 from twisted.internet import asyncioreactor
161 if isinstance(reactor, asyncioreactor.AsyncioSelectorReactor):
162 logger.debug(
163 "Using asyncio event loop: %s.%s",
164 reactor._asyncioEventloop.__module__,
165 reactor._asyncioEventloop.__class__.__name__,
166 )
167
168
169 class StreamLogger:
170 """Fake file-like stream object that redirects writes to a logger instance
171
172 Taken from:
173 https://www.electricmonk.nl/log/2011/08/14/redirect-stdout-and-stderr-to-a-logger-in-python/
174 """
175 def __init__(self, logger, log_level=logging.INFO):
176 self.logger = logger
177 self.log_level = log_level
178 self.linebuf = ''
179
180 def write(self, buf):
181 for line in buf.rstrip().splitlines():
182 self.logger.log(self.log_level, line.rstrip())
183
184 def flush(self):
185 for h in self.logger.handlers:
186 h.flush()
187
188
189 class LogCounterHandler(logging.Handler):
190 """Record log levels count into a crawler stats"""
191
192 def __init__(self, crawler, *args, **kwargs):
193 super().__init__(*args, **kwargs)
194 self.crawler = crawler
195
196 def emit(self, record):
197 sname = f'log_count/{record.levelname}'
198 self.crawler.stats.inc_value(sname)
199
200
201 def logformatter_adapter(logkws):
202 """
203 Helper that takes the dictionary output from the methods in LogFormatter
204 and adapts it into a tuple of positional arguments for logger.log calls,
205 handling backward compatibility as well.
206 """
207 if not {'level', 'msg', 'args'} <= set(logkws):
208 warnings.warn('Missing keys in LogFormatter method',
209 ScrapyDeprecationWarning)
210
211 if 'format' in logkws:
212 warnings.warn('`format` key in LogFormatter methods has been '
213 'deprecated, use `msg` instead',
214 ScrapyDeprecationWarning)
215
216 level = logkws.get('level', logging.INFO)
217 message = logkws.get('format', logkws.get('msg'))
218 # NOTE: This also handles 'args' being an empty dict, that case doesn't
219 # play well in logger.log calls
220 args = logkws if not logkws.get('args') else logkws['args']
221
222 return (level, message, args)
223
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/utils/log.py b/scrapy/utils/log.py
--- a/scrapy/utils/log.py
+++ b/scrapy/utils/log.py
@@ -46,6 +46,9 @@
'version': 1,
'disable_existing_loggers': False,
'loggers': {
+ 'filelock': {
+ 'level': 'ERROR',
+ },
'hpack': {
'level': 'ERROR',
},
|
{"golden_diff": "diff --git a/scrapy/utils/log.py b/scrapy/utils/log.py\n--- a/scrapy/utils/log.py\n+++ b/scrapy/utils/log.py\n@@ -46,6 +46,9 @@\n 'version': 1,\n 'disable_existing_loggers': False,\n 'loggers': {\n+ 'filelock': {\n+ 'level': 'ERROR',\n+ },\n 'hpack': {\n 'level': 'ERROR',\n },\n", "issue": "Silence the `filelock` logger\nAfter we started using `tldextract` we sometimes get log messages from `filelock` with the DEBUG level, it makes sense to silence them like we do for some other libraries in https://github.com/scrapy/scrapy/blob/fe60c1224e39aa3d85b20afd54566f135d9de085/scrapy/utils/log.py#L45-L59\n", "before_files": [{"content": "import logging\nimport sys\nimport warnings\nfrom logging.config import dictConfig\n\nfrom twisted.python import log as twisted_log\nfrom twisted.python.failure import Failure\n\nimport scrapy\nfrom scrapy.exceptions import ScrapyDeprecationWarning\nfrom scrapy.settings import Settings\nfrom scrapy.utils.versions import scrapy_components_versions\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef failure_to_exc_info(failure):\n \"\"\"Extract exc_info from Failure instances\"\"\"\n if isinstance(failure, Failure):\n return (failure.type, failure.value, failure.getTracebackObject())\n\n\nclass TopLevelFormatter(logging.Filter):\n \"\"\"Keep only top level loggers's name (direct children from root) from\n records.\n\n This filter will replace Scrapy loggers' names with 'scrapy'. This mimics\n the old Scrapy log behaviour and helps shortening long names.\n\n Since it can't be set for just one logger (it won't propagate for its\n children), it's going to be set in the root handler, with a parametrized\n ``loggers`` list where it should act.\n \"\"\"\n\n def __init__(self, loggers=None):\n self.loggers = loggers or []\n\n def filter(self, record):\n if any(record.name.startswith(logger + '.') for logger in self.loggers):\n record.name = record.name.split('.', 1)[0]\n return True\n\n\nDEFAULT_LOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'loggers': {\n 'hpack': {\n 'level': 'ERROR',\n },\n 'scrapy': {\n 'level': 'DEBUG',\n },\n 'twisted': {\n 'level': 'ERROR',\n },\n }\n}\n\n\ndef configure_logging(settings=None, install_root_handler=True):\n \"\"\"\n Initialize logging defaults for Scrapy.\n\n :param settings: settings used to create and configure a handler for the\n root logger (default: None).\n :type settings: dict, :class:`~scrapy.settings.Settings` object or ``None``\n\n :param install_root_handler: whether to install root logging handler\n (default: True)\n :type install_root_handler: bool\n\n This function does:\n\n - Route warnings and twisted logging through Python standard logging\n - Assign DEBUG and ERROR level to Scrapy and Twisted loggers respectively\n - Route stdout to log if LOG_STDOUT setting is True\n\n When ``install_root_handler`` is True (default), this function also\n creates a handler for the root logger according to given settings\n (see :ref:`topics-logging-settings`). You can override default options\n using ``settings`` argument. 
When ``settings`` is empty or None, defaults\n are used.\n \"\"\"\n if not sys.warnoptions:\n # Route warnings through python logging\n logging.captureWarnings(True)\n\n observer = twisted_log.PythonLoggingObserver('twisted')\n observer.start()\n\n dictConfig(DEFAULT_LOGGING)\n\n if isinstance(settings, dict) or settings is None:\n settings = Settings(settings)\n\n if settings.getbool('LOG_STDOUT'):\n sys.stdout = StreamLogger(logging.getLogger('stdout'))\n\n if install_root_handler:\n install_scrapy_root_handler(settings)\n\n\ndef install_scrapy_root_handler(settings):\n global _scrapy_root_handler\n\n if (_scrapy_root_handler is not None\n and _scrapy_root_handler in logging.root.handlers):\n logging.root.removeHandler(_scrapy_root_handler)\n logging.root.setLevel(logging.NOTSET)\n _scrapy_root_handler = _get_handler(settings)\n logging.root.addHandler(_scrapy_root_handler)\n\n\ndef get_scrapy_root_handler():\n return _scrapy_root_handler\n\n\n_scrapy_root_handler = None\n\n\ndef _get_handler(settings):\n \"\"\" Return a log handler object according to settings \"\"\"\n filename = settings.get('LOG_FILE')\n if filename:\n mode = 'a' if settings.getbool('LOG_FILE_APPEND') else 'w'\n encoding = settings.get('LOG_ENCODING')\n handler = logging.FileHandler(filename, mode=mode, encoding=encoding)\n elif settings.getbool('LOG_ENABLED'):\n handler = logging.StreamHandler()\n else:\n handler = logging.NullHandler()\n\n formatter = logging.Formatter(\n fmt=settings.get('LOG_FORMAT'),\n datefmt=settings.get('LOG_DATEFORMAT')\n )\n handler.setFormatter(formatter)\n handler.setLevel(settings.get('LOG_LEVEL'))\n if settings.getbool('LOG_SHORT_NAMES'):\n handler.addFilter(TopLevelFormatter(['scrapy']))\n return handler\n\n\ndef log_scrapy_info(settings: Settings) -> None:\n logger.info(\"Scrapy %(version)s started (bot: %(bot)s)\",\n {'version': scrapy.__version__, 'bot': settings['BOT_NAME']})\n versions = [\n f\"{name} {version}\"\n for name, version in scrapy_components_versions()\n if name != \"Scrapy\"\n ]\n logger.info(\"Versions: %(versions)s\", {'versions': \", \".join(versions)})\n\n\ndef log_reactor_info() -> None:\n from twisted.internet import reactor\n logger.debug(\"Using reactor: %s.%s\", reactor.__module__, reactor.__class__.__name__)\n from twisted.internet import asyncioreactor\n if isinstance(reactor, asyncioreactor.AsyncioSelectorReactor):\n logger.debug(\n \"Using asyncio event loop: %s.%s\",\n reactor._asyncioEventloop.__module__,\n reactor._asyncioEventloop.__class__.__name__,\n )\n\n\nclass StreamLogger:\n \"\"\"Fake file-like stream object that redirects writes to a logger instance\n\n Taken from:\n https://www.electricmonk.nl/log/2011/08/14/redirect-stdout-and-stderr-to-a-logger-in-python/\n \"\"\"\n def __init__(self, logger, log_level=logging.INFO):\n self.logger = logger\n self.log_level = log_level\n self.linebuf = ''\n\n def write(self, buf):\n for line in buf.rstrip().splitlines():\n self.logger.log(self.log_level, line.rstrip())\n\n def flush(self):\n for h in self.logger.handlers:\n h.flush()\n\n\nclass LogCounterHandler(logging.Handler):\n \"\"\"Record log levels count into a crawler stats\"\"\"\n\n def __init__(self, crawler, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.crawler = crawler\n\n def emit(self, record):\n sname = f'log_count/{record.levelname}'\n self.crawler.stats.inc_value(sname)\n\n\ndef logformatter_adapter(logkws):\n \"\"\"\n Helper that takes the dictionary output from the methods in LogFormatter\n and adapts it into a tuple of 
positional arguments for logger.log calls,\n handling backward compatibility as well.\n \"\"\"\n if not {'level', 'msg', 'args'} <= set(logkws):\n warnings.warn('Missing keys in LogFormatter method',\n ScrapyDeprecationWarning)\n\n if 'format' in logkws:\n warnings.warn('`format` key in LogFormatter methods has been '\n 'deprecated, use `msg` instead',\n ScrapyDeprecationWarning)\n\n level = logkws.get('level', logging.INFO)\n message = logkws.get('format', logkws.get('msg'))\n # NOTE: This also handles 'args' being an empty dict, that case doesn't\n # play well in logger.log calls\n args = logkws if not logkws.get('args') else logkws['args']\n\n return (level, message, args)\n", "path": "scrapy/utils/log.py"}], "after_files": [{"content": "import logging\nimport sys\nimport warnings\nfrom logging.config import dictConfig\n\nfrom twisted.python import log as twisted_log\nfrom twisted.python.failure import Failure\n\nimport scrapy\nfrom scrapy.exceptions import ScrapyDeprecationWarning\nfrom scrapy.settings import Settings\nfrom scrapy.utils.versions import scrapy_components_versions\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef failure_to_exc_info(failure):\n \"\"\"Extract exc_info from Failure instances\"\"\"\n if isinstance(failure, Failure):\n return (failure.type, failure.value, failure.getTracebackObject())\n\n\nclass TopLevelFormatter(logging.Filter):\n \"\"\"Keep only top level loggers's name (direct children from root) from\n records.\n\n This filter will replace Scrapy loggers' names with 'scrapy'. This mimics\n the old Scrapy log behaviour and helps shortening long names.\n\n Since it can't be set for just one logger (it won't propagate for its\n children), it's going to be set in the root handler, with a parametrized\n ``loggers`` list where it should act.\n \"\"\"\n\n def __init__(self, loggers=None):\n self.loggers = loggers or []\n\n def filter(self, record):\n if any(record.name.startswith(logger + '.') for logger in self.loggers):\n record.name = record.name.split('.', 1)[0]\n return True\n\n\nDEFAULT_LOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'loggers': {\n 'filelock': {\n 'level': 'ERROR',\n },\n 'hpack': {\n 'level': 'ERROR',\n },\n 'scrapy': {\n 'level': 'DEBUG',\n },\n 'twisted': {\n 'level': 'ERROR',\n },\n }\n}\n\n\ndef configure_logging(settings=None, install_root_handler=True):\n \"\"\"\n Initialize logging defaults for Scrapy.\n\n :param settings: settings used to create and configure a handler for the\n root logger (default: None).\n :type settings: dict, :class:`~scrapy.settings.Settings` object or ``None``\n\n :param install_root_handler: whether to install root logging handler\n (default: True)\n :type install_root_handler: bool\n\n This function does:\n\n - Route warnings and twisted logging through Python standard logging\n - Assign DEBUG and ERROR level to Scrapy and Twisted loggers respectively\n - Route stdout to log if LOG_STDOUT setting is True\n\n When ``install_root_handler`` is True (default), this function also\n creates a handler for the root logger according to given settings\n (see :ref:`topics-logging-settings`). You can override default options\n using ``settings`` argument. 
When ``settings`` is empty or None, defaults\n are used.\n \"\"\"\n if not sys.warnoptions:\n # Route warnings through python logging\n logging.captureWarnings(True)\n\n observer = twisted_log.PythonLoggingObserver('twisted')\n observer.start()\n\n dictConfig(DEFAULT_LOGGING)\n\n if isinstance(settings, dict) or settings is None:\n settings = Settings(settings)\n\n if settings.getbool('LOG_STDOUT'):\n sys.stdout = StreamLogger(logging.getLogger('stdout'))\n\n if install_root_handler:\n install_scrapy_root_handler(settings)\n\n\ndef install_scrapy_root_handler(settings):\n global _scrapy_root_handler\n\n if (_scrapy_root_handler is not None\n and _scrapy_root_handler in logging.root.handlers):\n logging.root.removeHandler(_scrapy_root_handler)\n logging.root.setLevel(logging.NOTSET)\n _scrapy_root_handler = _get_handler(settings)\n logging.root.addHandler(_scrapy_root_handler)\n\n\ndef get_scrapy_root_handler():\n return _scrapy_root_handler\n\n\n_scrapy_root_handler = None\n\n\ndef _get_handler(settings):\n \"\"\" Return a log handler object according to settings \"\"\"\n filename = settings.get('LOG_FILE')\n if filename:\n mode = 'a' if settings.getbool('LOG_FILE_APPEND') else 'w'\n encoding = settings.get('LOG_ENCODING')\n handler = logging.FileHandler(filename, mode=mode, encoding=encoding)\n elif settings.getbool('LOG_ENABLED'):\n handler = logging.StreamHandler()\n else:\n handler = logging.NullHandler()\n\n formatter = logging.Formatter(\n fmt=settings.get('LOG_FORMAT'),\n datefmt=settings.get('LOG_DATEFORMAT')\n )\n handler.setFormatter(formatter)\n handler.setLevel(settings.get('LOG_LEVEL'))\n if settings.getbool('LOG_SHORT_NAMES'):\n handler.addFilter(TopLevelFormatter(['scrapy']))\n return handler\n\n\ndef log_scrapy_info(settings: Settings) -> None:\n logger.info(\"Scrapy %(version)s started (bot: %(bot)s)\",\n {'version': scrapy.__version__, 'bot': settings['BOT_NAME']})\n versions = [\n f\"{name} {version}\"\n for name, version in scrapy_components_versions()\n if name != \"Scrapy\"\n ]\n logger.info(\"Versions: %(versions)s\", {'versions': \", \".join(versions)})\n\n\ndef log_reactor_info() -> None:\n from twisted.internet import reactor\n logger.debug(\"Using reactor: %s.%s\", reactor.__module__, reactor.__class__.__name__)\n from twisted.internet import asyncioreactor\n if isinstance(reactor, asyncioreactor.AsyncioSelectorReactor):\n logger.debug(\n \"Using asyncio event loop: %s.%s\",\n reactor._asyncioEventloop.__module__,\n reactor._asyncioEventloop.__class__.__name__,\n )\n\n\nclass StreamLogger:\n \"\"\"Fake file-like stream object that redirects writes to a logger instance\n\n Taken from:\n https://www.electricmonk.nl/log/2011/08/14/redirect-stdout-and-stderr-to-a-logger-in-python/\n \"\"\"\n def __init__(self, logger, log_level=logging.INFO):\n self.logger = logger\n self.log_level = log_level\n self.linebuf = ''\n\n def write(self, buf):\n for line in buf.rstrip().splitlines():\n self.logger.log(self.log_level, line.rstrip())\n\n def flush(self):\n for h in self.logger.handlers:\n h.flush()\n\n\nclass LogCounterHandler(logging.Handler):\n \"\"\"Record log levels count into a crawler stats\"\"\"\n\n def __init__(self, crawler, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.crawler = crawler\n\n def emit(self, record):\n sname = f'log_count/{record.levelname}'\n self.crawler.stats.inc_value(sname)\n\n\ndef logformatter_adapter(logkws):\n \"\"\"\n Helper that takes the dictionary output from the methods in LogFormatter\n and adapts it into a tuple of 
positional arguments for logger.log calls,\n handling backward compatibility as well.\n \"\"\"\n if not {'level', 'msg', 'args'} <= set(logkws):\n warnings.warn('Missing keys in LogFormatter method',\n ScrapyDeprecationWarning)\n\n if 'format' in logkws:\n warnings.warn('`format` key in LogFormatter methods has been '\n 'deprecated, use `msg` instead',\n ScrapyDeprecationWarning)\n\n level = logkws.get('level', logging.INFO)\n message = logkws.get('format', logkws.get('msg'))\n # NOTE: This also handles 'args' being an empty dict, that case doesn't\n # play well in logger.log calls\n args = logkws if not logkws.get('args') else logkws['args']\n\n return (level, message, args)\n", "path": "scrapy/utils/log.py"}]}
| 2,555 | 99 |
gh_patches_debug_1581
|
rasdani/github-patches
|
git_diff
|
enthought__chaco-537
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Switch on the default warning flag when running the test suite
See related https://github.com/enthought/envisage/issues/311
Deprecation warnings triggered during test runs are visible, but deprecation warnings triggered before a unittest test case is loaded are hidden from the console by default.
As a result, deprecation warnings from trait type definitions do not show up when running the current test command, because those warnings occur at import time.
--- END ISSUE ---
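To make the failure mode concrete, here is a hypothetical reproduction (the module and test names are invented): a DeprecationWarning emitted while a test module is being imported is filtered out under the interpreter's default settings, but is printed when the run starts with `-W default`.

```python
# test_legacy.py (hypothetical). Assume legacy_mod.py calls
# warnings.warn("legacy_mod is deprecated", DeprecationWarning) at module level.
import unittest

import legacy_mod  # the DeprecationWarning fires here, during test loading

class TestLegacy(unittest.TestCase):
    def test_placeholder(self):
        self.assertTrue(True)

# python -m unittest test_legacy             -> warning hidden by the default filters
# python -W default -m unittest test_legacy  -> DeprecationWarning is shown
```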
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ci/edmtool.py`
Content:
```
1 #
2 # Copyright (c) 2017, Enthought, Inc.
3 # All rights reserved.
4 #
5 # This software is provided without warranty under the terms of the BSD
6 # license included in enthought/LICENSE.txt and may be redistributed only
7 # under the conditions described in the aforementioned license. The license
8 # is also available online at http://www.enthought.com/licenses/BSD.txt
9 #
10 # Thanks for using Enthought open source!
11 #
12 """
13 Tasks for Test Runs
14 ===================
15 This file is intended to be used with a python environment with the
16 click library to automate the process of setting up test environments
17 and running the test within them. This improves repeatability and
18 reliability of tests be removing many of the variables around the
19 developer's particular Python environment. Test environment setup and
20 package management is performed using `EDM http://docs.enthought.com/edm/`_
21
22 To use this to run you tests, you will need to install EDM and click
23 into your working environment. You will also need to have git
24 installed to access required source code from github repositories.
25
26 You can then do::
27 python edmtool.py install --runtime=... --toolkit=...
28 to create a test environment from the current codebase and::
29 python edmtool.py test --runtime=... --toolkit=...
30 to run tests in that environment. You can remove the environment with::
31 python edmtool.py cleanup --runtime=... --toolkit=...
32
33 If you make changes you will either need to remove and re-install the
34 environment or manually update the environment using ``edm``, as
35 the install performs a ``python setup.py install`` rather than a ``develop``,
36 so changes in your code will not be automatically mirrored in the test
37 environment. You can update with a command like::
38 edm run --environment ... -- python setup.py install
39 You can run all three tasks at once with::
40 python edmtool.py test_clean --runtime=... --toolkit=...
41 which will create, install, run tests, and then clean-up the environment. And
42 you can run tests in all supported runtimes and toolkits (with cleanup)
43 using::
44 python edmtool.py test_all
45
46 Currently supported runtime values are ``3.6``, and currently
47 supported toolkits are ``null``, ``pyqt``, ``pyqt5`` and ``pyside2``. Not all
48 combinations of toolkits and runtimes will work, but the tasks will fail with
49 a clear error if that is the case. Tests can still be run via the usual means
50 in other environments if that suits a developer's purpose.
51
52 Changing This File
53 ------------------
54 To change the packages installed during a test run, change the dependencies
55 variable below. To install a package from github, or one which is not yet
56 available via EDM, add it to the `ci/requirements.txt` file (these will be
57 installed by `pip`).
58
59 Other changes to commands should be a straightforward change to the listed
60 commands for each task. See the EDM documentation for more information about
61 how to run commands within an EDM enviornment.
62 """
63 import glob
64 import os
65 import subprocess
66 import sys
67 from shutil import rmtree, copy as copyfile
68 from tempfile import mkdtemp
69 from contextlib import contextmanager
70
71 import click
72
73 supported_combinations = {
74 '3.6': {'pyside2', 'pyqt', 'pyqt5', 'null'},
75 }
76
77 dependencies = {
78 "six",
79 "mock",
80 "numpy",
81 "pandas",
82 "pyface",
83 "pygments",
84 "pyparsing",
85 "traits",
86 "traitsui",
87 "cython",
88 "enable",
89 # Needed to install enable from source
90 "swig",
91 }
92
93 # Dependencies we install from source for cron tests
94 source_dependencies = {
95 "enable",
96 "pyface",
97 "traits",
98 "traitsui",
99 }
100
101 github_url_fmt = "git+http://github.com/enthought/{0}.git#egg={0}"
102
103 extra_dependencies = {
104 'pyside2': set(), # pyside2 is pip-installed during the install step
105 'pyqt': {'pyqt'},
106 'pyqt5': {'pyqt5'},
107 'null': set()
108 }
109
110 environment_vars = {
111 'pyside2': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyside2'},
112 'pyqt': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt'},
113 'pyqt5': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt5'},
114 'null': {'ETS_TOOLKIT': 'null.image'},
115 }
116
117
118 def normalize(name):
119 return name.replace("_", "-")
120
121
122 @click.group(context_settings={"token_normalize_func": normalize})
123 def cli():
124 pass
125
126
127 @cli.command()
128 @click.option('--runtime', default='3.6')
129 @click.option('--toolkit', default='null')
130 @click.option('--environment', default=None)
131 @click.option(
132 "--source/--no-source",
133 default=False,
134 help="Install ETS packages from source",
135 )
136 def install(runtime, toolkit, environment, source):
137 """ Install project and dependencies into a clean EDM environment.
138 """
139 parameters = get_parameters(runtime, toolkit, environment)
140 parameters['packages'] = ' '.join(
141 dependencies | extra_dependencies.get(toolkit, set()))
142 # edm commands to setup the development environment
143 commands = [
144 "edm environments create {environment} --force --version={runtime}",
145 "edm install -y -e {environment} {packages}",
146 ("edm run -e {environment} -- pip install -r ci/requirements.txt"
147 " --no-dependencies"),
148 "edm run -e {environment} -- pip install . --no-deps",
149 ]
150 # pip install pyside2, because we don't have it in EDM yet
151 if toolkit == 'pyside2':
152 commands.append(
153 "edm run -e {environment} -- pip install pyside2==5.11"
154 )
155
156 click.echo("Creating environment '{environment}'".format(**parameters))
157 execute(commands, parameters)
158
159 if source:
160 # Remove EDM ETS packages and install them from source
161 cmd_fmt = (
162 "edm plumbing remove-package "
163 "--environment {environment} --force "
164 )
165 commands = [cmd_fmt + source_pkg for source_pkg in source_dependencies]
166 execute(commands, parameters)
167 source_pkgs = [
168 github_url_fmt.format(pkg) for pkg in source_dependencies
169 ]
170 commands = [
171 "python -m pip install {pkg} --no-deps".format(pkg=pkg)
172 for pkg in source_pkgs
173 ]
174 commands = [
175 "edm run -e {environment} -- " + command for command in commands
176 ]
177 execute(commands, parameters)
178 click.echo('Done install')
179
180
181 @cli.command()
182 @click.option('--runtime', default='3.6')
183 @click.option('--toolkit', default='null')
184 @click.option('--environment', default=None)
185 def test(runtime, toolkit, environment):
186 """ Run the test suite in a given environment with the specified toolkit.
187 """
188 parameters = get_parameters(runtime, toolkit, environment)
189 environ = environment_vars.get(toolkit, {}).copy()
190
191 environ['PYTHONUNBUFFERED'] = "1"
192 commands = [
193 "edm run -e {environment} -- coverage run -m unittest discover -v chaco"
194 ]
195
196 cwd = os.getcwd()
197
198 # We run in a tempdir to avoid accidentally picking up wrong traitsui
199 # code from a local dir. We need to ensure a good .coveragerc is in
200 # that directory, plus coverage has a bug that means a non-local coverage
201 # file doesn't get populated correctly.
202 click.echo("Running tests in '{environment}'".format(**parameters))
203 with do_in_tempdir(files=['.coveragerc'], capture_files=['./.coverage*']):
204 os.environ.update(environ)
205 execute(commands, parameters)
206
207 click.echo('Done test')
208
209
210 @cli.command()
211 @click.option('--runtime', default='3.6')
212 @click.option('--toolkit', default='null')
213 @click.option('--environment', default=None)
214 def cleanup(runtime, toolkit, environment):
215 """ Remove a development environment.
216 """
217 parameters = get_parameters(runtime, toolkit, environment)
218 commands = [
219 "edm run -e {environment} -- python setup.py clean",
220 "edm environments remove {environment} --purge -y",
221 ]
222 click.echo("Cleaning up environment '{environment}'".format(**parameters))
223 execute(commands, parameters)
224 click.echo('Done cleanup')
225
226
227 @cli.command()
228 @click.option('--runtime', default='3.6')
229 @click.option('--toolkit', default='null')
230 def test_clean(runtime, toolkit):
231 """ Run tests in a clean environment, cleaning up afterwards
232 """
233 args = ['--toolkit={}'.format(toolkit),
234 '--runtime={}'.format(runtime)]
235 try:
236 install(args=args, standalone_mode=False)
237 test(args=args, standalone_mode=False)
238 finally:
239 cleanup(args=args, standalone_mode=False)
240
241
242 @cli.command()
243 @click.option('--runtime', default='3.6')
244 @click.option('--toolkit', default='null')
245 @click.option('--environment', default=None)
246 def update(runtime, toolkit, environment):
247 """ Update/Reinstall package into environment.
248 """
249 parameters = get_parameters(runtime, toolkit, environment)
250 commands = [
251 "edm run -e {environment} -- python setup.py install"]
252 click.echo("Re-installing in '{environment}'".format(**parameters))
253 execute(commands, parameters)
254 click.echo('Done update')
255
256
257 @cli.command()
258 def test_all():
259 """ Run test_clean across all supported environment combinations.
260 """
261 for runtime, toolkits in supported_combinations.items():
262 for toolkit in toolkits:
263 args = ['--toolkit={}'.format(toolkit),
264 '--runtime={}'.format(runtime)]
265 test_clean(args, standalone_mode=True)
266
267
268 # ----------------------------------------------------------------------------
269 # Utility routines
270 # ----------------------------------------------------------------------------
271
272 def get_parameters(runtime, toolkit, environment):
273 """Set up parameters dictionary for format() substitution
274 """
275 parameters = {'runtime': runtime, 'toolkit': toolkit,
276 'environment': environment}
277 if toolkit not in supported_combinations[runtime]:
278 msg = ("Python {runtime!r}, toolkit {toolkit!r}, "
279 "not supported by test environments ({available})")
280 available = ", ".join(
281 repr(tk) for tk in sorted(supported_combinations[runtime])
282 )
283 raise RuntimeError(msg.format(available=available, **parameters))
284 if environment is None:
285 tmpl = 'chaco-test-{runtime}-{toolkit}'
286 environment = tmpl.format(**parameters)
287 parameters['environment'] = environment
288 return parameters
289
290
291 @contextmanager
292 def do_in_tempdir(files=(), capture_files=()):
293 """ Create a temporary directory, cleaning up after done.
294 Creates the temporary directory, and changes into it. On exit returns to
295 original directory and removes temporary dir.
296 Parameters
297 ----------
298 files : sequence of filenames
299 Files to be copied across to temporary directory.
300 capture_files : sequence of filenames
301 Files to be copied back from temporary directory.
302 """
303 path = mkdtemp()
304 old_path = os.getcwd()
305
306 # send across any files we need
307 for filepath in files:
308 click.echo('copying file to tempdir: {}'.format(filepath))
309 copyfile(filepath, path)
310
311 os.chdir(path)
312 try:
313 yield path
314 # retrieve any result files we want
315 for pattern in capture_files:
316 for filepath in glob.iglob(pattern):
317 click.echo('copying file back: {}'.format(filepath))
318 copyfile(filepath, old_path)
319 finally:
320 os.chdir(old_path)
321 rmtree(path)
322
323
324 def execute(commands, parameters):
325 for command in commands:
326 print("[EXECUTING]", command.format(**parameters))
327 try:
328 subprocess.check_call(command.format(**parameters).split())
329 except subprocess.CalledProcessError:
330 sys.exit(1)
331
332
333 if __name__ == '__main__':
334 cli()
335
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ci/edmtool.py b/ci/edmtool.py
--- a/ci/edmtool.py
+++ b/ci/edmtool.py
@@ -190,7 +190,8 @@
environ['PYTHONUNBUFFERED'] = "1"
commands = [
- "edm run -e {environment} -- coverage run -m unittest discover -v chaco"
+ "edm run -e {environment} -- python -W default -m "
+ "coverage run -m unittest discover -v chaco"
]
cwd = os.getcwd()
|
{"golden_diff": "diff --git a/ci/edmtool.py b/ci/edmtool.py\n--- a/ci/edmtool.py\n+++ b/ci/edmtool.py\n@@ -190,7 +190,8 @@\n \n environ['PYTHONUNBUFFERED'] = \"1\"\n commands = [\n- \"edm run -e {environment} -- coverage run -m unittest discover -v chaco\"\n+ \"edm run -e {environment} -- python -W default -m \"\n+ \"coverage run -m unittest discover -v chaco\"\n ]\n \n cwd = os.getcwd()\n", "issue": "Switch on the default warning flag when running testsuite\nSee related https://github.com/enthought/envisage/issues/311\r\n\r\nDeprecation warnings triggered during test runs are visible, but deprecation warnings triggered prior to when a unittest test case is loaded are by default hidden from the console.\r\n\r\nWith that, deprecation warnings from trait types definition do not show up from running the current test command, because the warnings occur at import time.\n", "before_files": [{"content": "#\n# Copyright (c) 2017, Enthought, Inc.\n# All rights reserved.\n#\n# This software is provided without warranty under the terms of the BSD\n# license included in enthought/LICENSE.txt and may be redistributed only\n# under the conditions described in the aforementioned license. The license\n# is also available online at http://www.enthought.com/licenses/BSD.txt\n#\n# Thanks for using Enthought open source!\n#\n\"\"\"\nTasks for Test Runs\n===================\nThis file is intended to be used with a python environment with the\nclick library to automate the process of setting up test environments\nand running the test within them. This improves repeatability and\nreliability of tests be removing many of the variables around the\ndeveloper's particular Python environment. Test environment setup and\npackage management is performed using `EDM http://docs.enthought.com/edm/`_\n\nTo use this to run you tests, you will need to install EDM and click\ninto your working environment. You will also need to have git\ninstalled to access required source code from github repositories.\n\nYou can then do::\n python edmtool.py install --runtime=... --toolkit=...\nto create a test environment from the current codebase and::\n python edmtool.py test --runtime=... --toolkit=...\nto run tests in that environment. You can remove the environment with::\n python edmtool.py cleanup --runtime=... --toolkit=...\n\nIf you make changes you will either need to remove and re-install the\nenvironment or manually update the environment using ``edm``, as\nthe install performs a ``python setup.py install`` rather than a ``develop``,\nso changes in your code will not be automatically mirrored in the test\nenvironment. You can update with a command like::\n edm run --environment ... -- python setup.py install\nYou can run all three tasks at once with::\n python edmtool.py test_clean --runtime=... --toolkit=...\nwhich will create, install, run tests, and then clean-up the environment. And\nyou can run tests in all supported runtimes and toolkits (with cleanup)\nusing::\n python edmtool.py test_all\n\nCurrently supported runtime values are ``3.6``, and currently\nsupported toolkits are ``null``, ``pyqt``, ``pyqt5`` and ``pyside2``. Not all\ncombinations of toolkits and runtimes will work, but the tasks will fail with\na clear error if that is the case. Tests can still be run via the usual means\nin other environments if that suits a developer's purpose.\n\nChanging This File\n------------------\nTo change the packages installed during a test run, change the dependencies\nvariable below. 
To install a package from github, or one which is not yet\navailable via EDM, add it to the `ci/requirements.txt` file (these will be\ninstalled by `pip`).\n\nOther changes to commands should be a straightforward change to the listed\ncommands for each task. See the EDM documentation for more information about\nhow to run commands within an EDM enviornment.\n\"\"\"\nimport glob\nimport os\nimport subprocess\nimport sys\nfrom shutil import rmtree, copy as copyfile\nfrom tempfile import mkdtemp\nfrom contextlib import contextmanager\n\nimport click\n\nsupported_combinations = {\n '3.6': {'pyside2', 'pyqt', 'pyqt5', 'null'},\n}\n\ndependencies = {\n \"six\",\n \"mock\",\n \"numpy\",\n \"pandas\",\n \"pyface\",\n \"pygments\",\n \"pyparsing\",\n \"traits\",\n \"traitsui\",\n \"cython\",\n \"enable\",\n # Needed to install enable from source\n \"swig\",\n}\n\n# Dependencies we install from source for cron tests\nsource_dependencies = {\n \"enable\",\n \"pyface\",\n \"traits\",\n \"traitsui\",\n}\n\ngithub_url_fmt = \"git+http://github.com/enthought/{0}.git#egg={0}\"\n\nextra_dependencies = {\n 'pyside2': set(), # pyside2 is pip-installed during the install step\n 'pyqt': {'pyqt'},\n 'pyqt5': {'pyqt5'},\n 'null': set()\n}\n\nenvironment_vars = {\n 'pyside2': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyside2'},\n 'pyqt': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt'},\n 'pyqt5': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt5'},\n 'null': {'ETS_TOOLKIT': 'null.image'},\n}\n\n\ndef normalize(name):\n return name.replace(\"_\", \"-\")\n\n\[email protected](context_settings={\"token_normalize_func\": normalize})\ndef cli():\n pass\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\[email protected](\n \"--source/--no-source\",\n default=False,\n help=\"Install ETS packages from source\",\n)\ndef install(runtime, toolkit, environment, source):\n \"\"\" Install project and dependencies into a clean EDM environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n parameters['packages'] = ' '.join(\n dependencies | extra_dependencies.get(toolkit, set()))\n # edm commands to setup the development environment\n commands = [\n \"edm environments create {environment} --force --version={runtime}\",\n \"edm install -y -e {environment} {packages}\",\n (\"edm run -e {environment} -- pip install -r ci/requirements.txt\"\n \" --no-dependencies\"),\n \"edm run -e {environment} -- pip install . 
--no-deps\",\n ]\n # pip install pyside2, because we don't have it in EDM yet\n if toolkit == 'pyside2':\n commands.append(\n \"edm run -e {environment} -- pip install pyside2==5.11\"\n )\n \n click.echo(\"Creating environment '{environment}'\".format(**parameters))\n execute(commands, parameters)\n\n if source:\n # Remove EDM ETS packages and install them from source\n cmd_fmt = (\n \"edm plumbing remove-package \"\n \"--environment {environment} --force \"\n )\n commands = [cmd_fmt + source_pkg for source_pkg in source_dependencies]\n execute(commands, parameters)\n source_pkgs = [\n github_url_fmt.format(pkg) for pkg in source_dependencies\n ]\n commands = [\n \"python -m pip install {pkg} --no-deps\".format(pkg=pkg)\n for pkg in source_pkgs\n ]\n commands = [\n \"edm run -e {environment} -- \" + command for command in commands\n ]\n execute(commands, parameters)\n click.echo('Done install')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef test(runtime, toolkit, environment):\n \"\"\" Run the test suite in a given environment with the specified toolkit.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n environ = environment_vars.get(toolkit, {}).copy()\n\n environ['PYTHONUNBUFFERED'] = \"1\"\n commands = [\n \"edm run -e {environment} -- coverage run -m unittest discover -v chaco\"\n ]\n\n cwd = os.getcwd()\n\n # We run in a tempdir to avoid accidentally picking up wrong traitsui\n # code from a local dir. We need to ensure a good .coveragerc is in\n # that directory, plus coverage has a bug that means a non-local coverage\n # file doesn't get populated correctly.\n click.echo(\"Running tests in '{environment}'\".format(**parameters))\n with do_in_tempdir(files=['.coveragerc'], capture_files=['./.coverage*']):\n os.environ.update(environ)\n execute(commands, parameters)\n\n click.echo('Done test')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef cleanup(runtime, toolkit, environment):\n \"\"\" Remove a development environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n commands = [\n \"edm run -e {environment} -- python setup.py clean\",\n \"edm environments remove {environment} --purge -y\",\n ]\n click.echo(\"Cleaning up environment '{environment}'\".format(**parameters))\n execute(commands, parameters)\n click.echo('Done cleanup')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\ndef test_clean(runtime, toolkit):\n \"\"\" Run tests in a clean environment, cleaning up afterwards\n \"\"\"\n args = ['--toolkit={}'.format(toolkit),\n '--runtime={}'.format(runtime)]\n try:\n install(args=args, standalone_mode=False)\n test(args=args, standalone_mode=False)\n finally:\n cleanup(args=args, standalone_mode=False)\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef update(runtime, toolkit, environment):\n \"\"\" Update/Reinstall package into environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n commands = [\n \"edm run -e {environment} -- python setup.py install\"]\n click.echo(\"Re-installing in '{environment}'\".format(**parameters))\n execute(commands, parameters)\n click.echo('Done 
update')\n\n\[email protected]()\ndef test_all():\n \"\"\" Run test_clean across all supported environment combinations.\n \"\"\"\n for runtime, toolkits in supported_combinations.items():\n for toolkit in toolkits:\n args = ['--toolkit={}'.format(toolkit),\n '--runtime={}'.format(runtime)]\n test_clean(args, standalone_mode=True)\n\n\n# ----------------------------------------------------------------------------\n# Utility routines\n# ----------------------------------------------------------------------------\n\ndef get_parameters(runtime, toolkit, environment):\n \"\"\"Set up parameters dictionary for format() substitution\n \"\"\"\n parameters = {'runtime': runtime, 'toolkit': toolkit,\n 'environment': environment}\n if toolkit not in supported_combinations[runtime]:\n msg = (\"Python {runtime!r}, toolkit {toolkit!r}, \"\n \"not supported by test environments ({available})\")\n available = \", \".join(\n repr(tk) for tk in sorted(supported_combinations[runtime])\n )\n raise RuntimeError(msg.format(available=available, **parameters))\n if environment is None:\n tmpl = 'chaco-test-{runtime}-{toolkit}'\n environment = tmpl.format(**parameters)\n parameters['environment'] = environment\n return parameters\n\n\n@contextmanager\ndef do_in_tempdir(files=(), capture_files=()):\n \"\"\" Create a temporary directory, cleaning up after done.\n Creates the temporary directory, and changes into it. On exit returns to\n original directory and removes temporary dir.\n Parameters\n ----------\n files : sequence of filenames\n Files to be copied across to temporary directory.\n capture_files : sequence of filenames\n Files to be copied back from temporary directory.\n \"\"\"\n path = mkdtemp()\n old_path = os.getcwd()\n\n # send across any files we need\n for filepath in files:\n click.echo('copying file to tempdir: {}'.format(filepath))\n copyfile(filepath, path)\n\n os.chdir(path)\n try:\n yield path\n # retrieve any result files we want\n for pattern in capture_files:\n for filepath in glob.iglob(pattern):\n click.echo('copying file back: {}'.format(filepath))\n copyfile(filepath, old_path)\n finally:\n os.chdir(old_path)\n rmtree(path)\n\n\ndef execute(commands, parameters):\n for command in commands:\n print(\"[EXECUTING]\", command.format(**parameters))\n try:\n subprocess.check_call(command.format(**parameters).split())\n except subprocess.CalledProcessError:\n sys.exit(1)\n\n\nif __name__ == '__main__':\n cli()\n", "path": "ci/edmtool.py"}], "after_files": [{"content": "#\n# Copyright (c) 2017, Enthought, Inc.\n# All rights reserved.\n#\n# This software is provided without warranty under the terms of the BSD\n# license included in enthought/LICENSE.txt and may be redistributed only\n# under the conditions described in the aforementioned license. The license\n# is also available online at http://www.enthought.com/licenses/BSD.txt\n#\n# Thanks for using Enthought open source!\n#\n\"\"\"\nTasks for Test Runs\n===================\nThis file is intended to be used with a python environment with the\nclick library to automate the process of setting up test environments\nand running the test within them. This improves repeatability and\nreliability of tests be removing many of the variables around the\ndeveloper's particular Python environment. Test environment setup and\npackage management is performed using `EDM http://docs.enthought.com/edm/`_\n\nTo use this to run you tests, you will need to install EDM and click\ninto your working environment. 
You will also need to have git\ninstalled to access required source code from github repositories.\n\nYou can then do::\n python edmtool.py install --runtime=... --toolkit=...\nto create a test environment from the current codebase and::\n python edmtool.py test --runtime=... --toolkit=...\nto run tests in that environment. You can remove the environment with::\n python edmtool.py cleanup --runtime=... --toolkit=...\n\nIf you make changes you will either need to remove and re-install the\nenvironment or manually update the environment using ``edm``, as\nthe install performs a ``python setup.py install`` rather than a ``develop``,\nso changes in your code will not be automatically mirrored in the test\nenvironment. You can update with a command like::\n edm run --environment ... -- python setup.py install\nYou can run all three tasks at once with::\n python edmtool.py test_clean --runtime=... --toolkit=...\nwhich will create, install, run tests, and then clean-up the environment. And\nyou can run tests in all supported runtimes and toolkits (with cleanup)\nusing::\n python edmtool.py test_all\n\nCurrently supported runtime values are ``3.6``, and currently\nsupported toolkits are ``null``, ``pyqt``, ``pyqt5`` and ``pyside2``. Not all\ncombinations of toolkits and runtimes will work, but the tasks will fail with\na clear error if that is the case. Tests can still be run via the usual means\nin other environments if that suits a developer's purpose.\n\nChanging This File\n------------------\nTo change the packages installed during a test run, change the dependencies\nvariable below. To install a package from github, or one which is not yet\navailable via EDM, add it to the `ci/requirements.txt` file (these will be\ninstalled by `pip`).\n\nOther changes to commands should be a straightforward change to the listed\ncommands for each task. 
See the EDM documentation for more information about\nhow to run commands within an EDM enviornment.\n\"\"\"\nimport glob\nimport os\nimport subprocess\nimport sys\nfrom shutil import rmtree, copy as copyfile\nfrom tempfile import mkdtemp\nfrom contextlib import contextmanager\n\nimport click\n\nsupported_combinations = {\n '3.6': {'pyside2', 'pyqt', 'pyqt5', 'null'},\n}\n\ndependencies = {\n \"six\",\n \"mock\",\n \"numpy\",\n \"pandas\",\n \"pyface\",\n \"pygments\",\n \"pyparsing\",\n \"traits\",\n \"traitsui\",\n \"cython\",\n \"enable\",\n # Needed to install enable from source\n \"swig\",\n}\n\n# Dependencies we install from source for cron tests\nsource_dependencies = {\n \"enable\",\n \"pyface\",\n \"traits\",\n \"traitsui\",\n}\n\ngithub_url_fmt = \"git+http://github.com/enthought/{0}.git#egg={0}\"\n\nextra_dependencies = {\n 'pyside2': set(), # pyside2 is pip-installed during the install step\n 'pyqt': {'pyqt'},\n 'pyqt5': {'pyqt5'},\n 'null': set()\n}\n\nenvironment_vars = {\n 'pyside2': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyside2'},\n 'pyqt': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt'},\n 'pyqt5': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt5'},\n 'null': {'ETS_TOOLKIT': 'null.image'},\n}\n\n\ndef normalize(name):\n return name.replace(\"_\", \"-\")\n\n\[email protected](context_settings={\"token_normalize_func\": normalize})\ndef cli():\n pass\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\[email protected](\n \"--source/--no-source\",\n default=False,\n help=\"Install ETS packages from source\",\n)\ndef install(runtime, toolkit, environment, source):\n \"\"\" Install project and dependencies into a clean EDM environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n parameters['packages'] = ' '.join(\n dependencies | extra_dependencies.get(toolkit, set()))\n # edm commands to setup the development environment\n commands = [\n \"edm environments create {environment} --force --version={runtime}\",\n \"edm install -y -e {environment} {packages}\",\n (\"edm run -e {environment} -- pip install -r ci/requirements.txt\"\n \" --no-dependencies\"),\n \"edm run -e {environment} -- pip install . 
--no-deps\",\n ]\n # pip install pyside2, because we don't have it in EDM yet\n if toolkit == 'pyside2':\n commands.append(\n \"edm run -e {environment} -- pip install pyside2==5.11\"\n )\n \n click.echo(\"Creating environment '{environment}'\".format(**parameters))\n execute(commands, parameters)\n\n if source:\n # Remove EDM ETS packages and install them from source\n cmd_fmt = (\n \"edm plumbing remove-package \"\n \"--environment {environment} --force \"\n )\n commands = [cmd_fmt + source_pkg for source_pkg in source_dependencies]\n execute(commands, parameters)\n source_pkgs = [\n github_url_fmt.format(pkg) for pkg in source_dependencies\n ]\n commands = [\n \"python -m pip install {pkg} --no-deps\".format(pkg=pkg)\n for pkg in source_pkgs\n ]\n commands = [\n \"edm run -e {environment} -- \" + command for command in commands\n ]\n execute(commands, parameters)\n click.echo('Done install')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef test(runtime, toolkit, environment):\n \"\"\" Run the test suite in a given environment with the specified toolkit.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n environ = environment_vars.get(toolkit, {}).copy()\n\n environ['PYTHONUNBUFFERED'] = \"1\"\n commands = [\n \"edm run -e {environment} -- python -W default -m \"\n \"coverage run -m unittest discover -v chaco\"\n ]\n\n cwd = os.getcwd()\n\n # We run in a tempdir to avoid accidentally picking up wrong traitsui\n # code from a local dir. We need to ensure a good .coveragerc is in\n # that directory, plus coverage has a bug that means a non-local coverage\n # file doesn't get populated correctly.\n click.echo(\"Running tests in '{environment}'\".format(**parameters))\n with do_in_tempdir(files=['.coveragerc'], capture_files=['./.coverage*']):\n os.environ.update(environ)\n execute(commands, parameters)\n\n click.echo('Done test')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef cleanup(runtime, toolkit, environment):\n \"\"\" Remove a development environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n commands = [\n \"edm run -e {environment} -- python setup.py clean\",\n \"edm environments remove {environment} --purge -y\",\n ]\n click.echo(\"Cleaning up environment '{environment}'\".format(**parameters))\n execute(commands, parameters)\n click.echo('Done cleanup')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\ndef test_clean(runtime, toolkit):\n \"\"\" Run tests in a clean environment, cleaning up afterwards\n \"\"\"\n args = ['--toolkit={}'.format(toolkit),\n '--runtime={}'.format(runtime)]\n try:\n install(args=args, standalone_mode=False)\n test(args=args, standalone_mode=False)\n finally:\n cleanup(args=args, standalone_mode=False)\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef update(runtime, toolkit, environment):\n \"\"\" Update/Reinstall package into environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n commands = [\n \"edm run -e {environment} -- python setup.py install\"]\n click.echo(\"Re-installing in '{environment}'\".format(**parameters))\n execute(commands, 
parameters)\n click.echo('Done update')\n\n\[email protected]()\ndef test_all():\n \"\"\" Run test_clean across all supported environment combinations.\n \"\"\"\n for runtime, toolkits in supported_combinations.items():\n for toolkit in toolkits:\n args = ['--toolkit={}'.format(toolkit),\n '--runtime={}'.format(runtime)]\n test_clean(args, standalone_mode=True)\n\n\n# ----------------------------------------------------------------------------\n# Utility routines\n# ----------------------------------------------------------------------------\n\ndef get_parameters(runtime, toolkit, environment):\n \"\"\"Set up parameters dictionary for format() substitution\n \"\"\"\n parameters = {'runtime': runtime, 'toolkit': toolkit,\n 'environment': environment}\n if toolkit not in supported_combinations[runtime]:\n msg = (\"Python {runtime!r}, toolkit {toolkit!r}, \"\n \"not supported by test environments ({available})\")\n available = \", \".join(\n repr(tk) for tk in sorted(supported_combinations[runtime])\n )\n raise RuntimeError(msg.format(available=available, **parameters))\n if environment is None:\n tmpl = 'chaco-test-{runtime}-{toolkit}'\n environment = tmpl.format(**parameters)\n parameters['environment'] = environment\n return parameters\n\n\n@contextmanager\ndef do_in_tempdir(files=(), capture_files=()):\n \"\"\" Create a temporary directory, cleaning up after done.\n Creates the temporary directory, and changes into it. On exit returns to\n original directory and removes temporary dir.\n Parameters\n ----------\n files : sequence of filenames\n Files to be copied across to temporary directory.\n capture_files : sequence of filenames\n Files to be copied back from temporary directory.\n \"\"\"\n path = mkdtemp()\n old_path = os.getcwd()\n\n # send across any files we need\n for filepath in files:\n click.echo('copying file to tempdir: {}'.format(filepath))\n copyfile(filepath, path)\n\n os.chdir(path)\n try:\n yield path\n # retrieve any result files we want\n for pattern in capture_files:\n for filepath in glob.iglob(pattern):\n click.echo('copying file back: {}'.format(filepath))\n copyfile(filepath, old_path)\n finally:\n os.chdir(old_path)\n rmtree(path)\n\n\ndef execute(commands, parameters):\n for command in commands:\n print(\"[EXECUTING]\", command.format(**parameters))\n try:\n subprocess.check_call(command.format(**parameters).split())\n except subprocess.CalledProcessError:\n sys.exit(1)\n\n\nif __name__ == '__main__':\n cli()\n", "path": "ci/edmtool.py"}]}
| 3,866 | 130 |
gh_patches_debug_1021
|
rasdani/github-patches
|
git_diff
|
cloud-custodian__cloud-custodian-8692
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AWS user pool and identity pool resources.json has minimal information
### Describe the bug
For the below custodian policy, a resources.json file is created for each policy on successful execution. For user-pool and identity-pool, the resources.json file does not include the full description/configuration of the resources; instead it contains very minimal information like ID, name, and creation date (as mentioned below), whereas for EC2 and Lambda the populated resources.json has hundreds of metadata fields.
---
##custodian.yaml
policies:
- name: cognito-checkauditmode
resource: aws.user-pool
- name: identity-checkauditmode
resource: identity-pool
- name: ec2-checkrunning
resource: ec2
- name: find-all-lambdas
resource: aws.lambda
---
##resources.json - cognito-checkauditmode
[
{
"Id": "xxxxxxxxxxxxxx",
"Name": "xxxxxxxxxxxxxxxxxxx",
"LambdaConfig": {},
"LastModifiedDate": "2023-06-29T08:56:18.028000-05:00",
"CreationDate": "2023-06-29T08:56:17.860000-05:00",
"Tags": []
},
{
"Id": "xxxxxxxxxxxxxxxxxxx",
"Name": "xxxxxxxxxxxxxxxxxxx",
"LambdaConfig": {},
"LastModifiedDate": "2020-06-11T17:15:18.951000-05:00",
"CreationDate": "2020-02-21T11:39:18.108000-06:00",
"Tags": []
}
]
---
## resources.json - identity-checkauditmode
[
{
"IdentityPoolId": "xxxxxxxxxxxxxxxxxxx",
"IdentityPoolName": "xxxxxxxxxxxxxxxxxxx",
"Tags": []
}
]
### What did you expect to happen?
Expecting a large JSON file with the full configuration of the resource. Below is the AWS CLI command and the truncated response from the CLI; a similar response is expected.
---
aws cognito-idp describe-user-pool --user-pool-id xxxxxxxxxxxxxxxxxxx
---
truncated response
{
"UserPool": {
"Id": "xxxxxxxxxxxxxxxxxxx",
"Name": "xxxxxxxxxxxxxxxxxxx",
"Policies": {
"PasswordPolicy": {
"MinimumLength": 8,
"RequireUppercase": true,
"RequireLowercase": true,
"RequireNumbers": true,
"RequireSymbols": true,
"TemporaryPasswordValidityDays": 7
}
},
"DeletionProtection": "INACTIVE",
"LambdaConfig": {},
"LastModifiedDate": "2020-06-11T17:15:18.951000-05:00",
"CreationDate": "2020-02-21T11:39:18.108000-06:00",
"SchemaAttributes": [
{
"Name": "sub",
"AttributeDataType": "String",
"DeveloperOnlyAttribute": false,
"Mutable": false,
"Required": true,
"StringAttributeConstraints": {
"MinLength": "1",
"MaxLength": "2048"
}
},
### Cloud Provider
Amazon Web Services (AWS)
### Cloud Custodian version and dependency information
```shell
Custodian: 0.9.27
Python: 3.11.4 (main, Jun 7 2023, 00:34:59) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Platform: posix.uname_result(sysname='Darwin', nodename='MABPWKJJ4T9RYW', release='22.5.0', version='Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:23 PDT 2023; root:xnu-8796.121.3~7/RELEASE_ARM64_T6020', machine='arm64')
Using venv: False
Docker: False
Installed:
argcomplete==3.0.8
attrs==23.1.0
boto3==1.26.139
botocore==1.29.139
docutils==0.18.1
importlib-metadata==5.2.0
jmespath==1.0.1
jsonschema==4.17.3
pyrsistent==0.19.3
python-dateutil==2.8.2
pyyaml==6.0
s3transfer==0.6.1
six==1.16.0
tabulate==0.9.0
typing-extensions==4.6.3
urllib3==1.26.16
zipp==3.15.0
```
### Policy
```shell
##custodian.yaml
policies:
- name: cognito-checkauditmode
resource: aws.user-pool
- name: identity-checkauditmode
resource: identity-pool
- name: ec2-checkrunning
resource: ec2
- name: find-all-lambdas
resource: aws.lambda
```
### Relevant log/traceback output
```shell
2023-06-26 20:09:45,838 - custodian.policy - INFO - policy:cognito-checkauditmode resource:aws.user-pool region:us-east-1 count:1 time:0.00
2023-06-26 20:20:16,225 - custodian.policy - INFO - policy:cognito-checkauditmode resource:aws.user-pool region:us-east-1 count:1 time:0.70
2023-06-26 20:25:23,030 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:1 time:0.00
2023-06-26 23:09:38,143 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:1 time:0.73
2023-06-26 23:13:37,202 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:1 time:0.00
2023-06-26 23:17:02,042 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:1 time:0.00
2023-06-26 23:18:59,196 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:0 time:0.00
2023-06-26 23:28:37,082 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:0 time:0.67
2023-06-27 09:11:53,373 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:1 time:0.67
2023-06-27 09:13:07,745 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:1 time:0.00
2023-06-27 09:22:13,584 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:0 time:0.00
2023-06-27 09:22:42,984 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:0 time:0.65
2023-06-27 09:24:43,016 - custodian.policy - INFO - policy:cognito-checkauditmode resource:aws.user-pool region:us-east-1 count:0 time:0.62
2023-06-27 09:27:15,604 - custodian.policy - INFO - policy:cognito-checkauditmode resource:aws.user-pool region:us-east-1 count:1 time:0.64
2023-06-29 08:58:25,076 - custodian.policy - INFO - policy:cognito-checkauditmode resource:aws.user-pool region:us-east-1 count:2 time:0.64
```
### Extra information or context
Applied a few additional filters and those failed as well. I believe the filters will only work after the describe call is successful.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `c7n/resources/cognito.py`
Content:
```
1 # Copyright The Cloud Custodian Authors.
2 # SPDX-License-Identifier: Apache-2.0
3 from botocore.exceptions import ClientError
4
5 from c7n.actions import BaseAction
6 from c7n.manager import resources
7 from c7n.query import QueryResourceManager, TypeInfo, DescribeSource
8 from c7n.tags import universal_augment
9 from c7n.utils import local_session, type_schema
10
11
12 class DescribeIdentityPool(DescribeSource):
13 def augment(self, resources):
14 return universal_augment(self.manager, resources)
15
16
17 class DescribeUserPool(DescribeSource):
18 def augment(self, resources):
19 resources = super().augment(resources)
20 return universal_augment(self.manager, resources)
21
22
23 @resources.register('identity-pool')
24 class CognitoIdentityPool(QueryResourceManager):
25
26 class resource_type(TypeInfo):
27 service = 'cognito-identity'
28 enum_spec = ('list_identity_pools', 'IdentityPools', {'MaxResults': 60})
29 detail_spec = (
30 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId', None)
31 id = 'IdentityPoolId'
32 name = 'IdentityPoolName'
33 arn_type = "identitypool"
34 cfn_type = 'AWS::Cognito::IdentityPool'
35 universal_taggable = object()
36
37 source_mapping = {
38 'describe': DescribeIdentityPool,
39 }
40
41
42 @CognitoIdentityPool.action_registry.register('delete')
43 class DeleteIdentityPool(BaseAction):
44 """Action to delete cognito identity pool
45
46 It is recommended to use a filter to avoid unwanted deletion of pools
47
48 :example:
49
50 .. code-block:: yaml
51
52 policies:
53 - name: identity-pool-delete
54 resource: identity-pool
55 actions:
56 - delete
57 """
58
59 schema = type_schema('delete')
60 permissions = ("cognito-identity:DeleteIdentityPool",)
61
62 def process(self, pools):
63 with self.executor_factory(max_workers=2) as w:
64 list(w.map(self.process_pool, pools))
65
66 def process_pool(self, pool):
67 client = local_session(
68 self.manager.session_factory).client('cognito-identity')
69 try:
70 client.delete_identity_pool(IdentityPoolId=pool['IdentityPoolId'])
71 except ClientError as e:
72 self.log.exception(
73 "Exception deleting identity pool:\n %s" % e)
74
75
76 @resources.register('user-pool')
77 class CognitoUserPool(QueryResourceManager):
78
79 class resource_type(TypeInfo):
80 service = "cognito-idp"
81 enum_spec = ('list_user_pools', 'UserPools', {'MaxResults': 60})
82 detail_spec = (
83 'describe_user_pool', 'UserPoolId', 'Id', 'UserPool')
84 id = 'Id'
85 name = 'Name'
86 arn_type = "userpool"
87 cfn_type = 'AWS::Cognito::UserPool'
88 universal_taggable = object()
89
90 source_mapping = {
91 'describe': DescribeUserPool,
92 }
93
94
95 @CognitoUserPool.action_registry.register('delete')
96 class DeleteUserPool(BaseAction):
97 """Action to delete cognito user pool
98
99 It is recommended to use a filter to avoid unwanted deletion of pools
100
101 :example:
102
103 .. code-block:: yaml
104
105 policies:
106 - name: user-pool-delete
107 resource: user-pool
108 actions:
109 - delete
110 """
111
112 schema = type_schema('delete')
113 permissions = ("cognito-idp:DeleteUserPool",)
114
115 def process(self, pools):
116 with self.executor_factory(max_workers=2) as w:
117 list(w.map(self.process_pool, pools))
118
119 def process_pool(self, pool):
120 client = local_session(
121 self.manager.session_factory).client('cognito-idp')
122 try:
123 client.delete_user_pool(UserPoolId=pool['Id'])
124 except ClientError as e:
125 self.log.exception(
126 "Exception deleting user pool:\n %s" % e)
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/c7n/resources/cognito.py b/c7n/resources/cognito.py
--- a/c7n/resources/cognito.py
+++ b/c7n/resources/cognito.py
@@ -11,6 +11,7 @@
class DescribeIdentityPool(DescribeSource):
def augment(self, resources):
+ resources = super().augment(resources)
return universal_augment(self.manager, resources)
|
{"golden_diff": "diff --git a/c7n/resources/cognito.py b/c7n/resources/cognito.py\n--- a/c7n/resources/cognito.py\n+++ b/c7n/resources/cognito.py\n@@ -11,6 +11,7 @@\n \n class DescribeIdentityPool(DescribeSource):\n def augment(self, resources):\n+ resources = super().augment(resources)\n return universal_augment(self.manager, resources)\n", "issue": "AWS user pool and identity pool resources.json has minimal information\n### Describe the bug\n\nFor the below custodian policy, the resources.json is created for each of the policy on successful execution. For user-pool and identity-pool, the resources.json file does not include full description/configuration of the resources, instead it contains very minimal information like ID, NAME, creation date etc. (as mentioned below) whereas for EC2 and Lambda, the populated resources.json has hundreds of metadata information.\r\n\r\n\r\n---\r\n##custodian.yaml\r\npolicies:\r\n - name: cognito-checkauditmode\r\n resource: aws.user-pool\r\n\r\n - name: identity-checkauditmode\r\n resource: identity-pool\r\n\r\n - name: ec2-checkrunning\r\n resource: ec2\r\n\r\n- name: find-all-lambdas\r\n resource: aws.lambda\r\n\r\n--- \r\n##resources.json - cognito-checkauditmode\r\n\r\n[\r\n {\r\n \"Id\": \"xxxxxxxxxxxxxx\",\r\n \"Name\": \"xxxxxxxxxxxxxxxxxxx\",\r\n \"LambdaConfig\": {},\r\n \"LastModifiedDate\": \"2023-06-29T08:56:18.028000-05:00\",\r\n \"CreationDate\": \"2023-06-29T08:56:17.860000-05:00\",\r\n \"Tags\": []\r\n },\r\n {\r\n \"Id\": \"xxxxxxxxxxxxxxxxxxx\",\r\n \"Name\": \"xxxxxxxxxxxxxxxxxxx\",\r\n \"LambdaConfig\": {},\r\n \"LastModifiedDate\": \"2020-06-11T17:15:18.951000-05:00\",\r\n \"CreationDate\": \"2020-02-21T11:39:18.108000-06:00\",\r\n \"Tags\": []\r\n }\r\n]\r\n\r\n\r\n---\r\n## resources.json - identity-checkauditmode\r\n\r\n[\r\n {\r\n \"IdentityPoolId\": \"xxxxxxxxxxxxxxxxxxx\",\r\n \"IdentityPoolName\": \"xxxxxxxxxxxxxxxxxxx\",\r\n \"Tags\": []\r\n }\r\n]\n\n### What did you expect to happen?\n\nExpecting a large json file with full configuration of the resource. Below is the AWS CLI command and the truncated response from CLI. Expecting a similar response. 
\r\n\r\n\r\n---\r\naws cognito-idp describe-user-pool --user-pool-id xxxxxxxxxxxxxxxxxxx\r\n---\r\ntruncated response\r\n{\r\n \"UserPool\": {\r\n \"Id\": \"xxxxxxxxxxxxxxxxxxx\",\r\n \"Name\": \"xxxxxxxxxxxxxxxxxxx\",\r\n \"Policies\": {\r\n \"PasswordPolicy\": {\r\n \"MinimumLength\": 8,\r\n \"RequireUppercase\": true,\r\n \"RequireLowercase\": true,\r\n \"RequireNumbers\": true,\r\n \"RequireSymbols\": true,\r\n \"TemporaryPasswordValidityDays\": 7\r\n }\r\n },\r\n \"DeletionProtection\": \"INACTIVE\",\r\n \"LambdaConfig\": {},\r\n \"LastModifiedDate\": \"2020-06-11T17:15:18.951000-05:00\",\r\n \"CreationDate\": \"2020-02-21T11:39:18.108000-06:00\",\r\n \"SchemaAttributes\": [\r\n {\r\n \"Name\": \"sub\",\r\n \"AttributeDataType\": \"String\",\r\n \"DeveloperOnlyAttribute\": false,\r\n \"Mutable\": false,\r\n \"Required\": true,\r\n \"StringAttributeConstraints\": {\r\n \"MinLength\": \"1\",\r\n \"MaxLength\": \"2048\"\r\n }\r\n },\n\n### Cloud Provider\n\nAmazon Web Services (AWS)\n\n### Cloud Custodian version and dependency information\n\n```shell\nCustodian: 0.9.27\r\nPython: 3.11.4 (main, Jun 7 2023, 00:34:59) [Clang 14.0.3 (clang-1403.0.22.14.1)]\r\nPlatform: posix.uname_result(sysname='Darwin', nodename='MABPWKJJ4T9RYW', release='22.5.0', version='Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:23 PDT 2023; root:xnu-8796.121.3~7/RELEASE_ARM64_T6020', machine='arm64')\r\nUsing venv: False\r\nDocker: False\r\nInstalled: \r\n\r\nargcomplete==3.0.8\r\nattrs==23.1.0\r\nboto3==1.26.139\r\nbotocore==1.29.139\r\ndocutils==0.18.1\r\nimportlib-metadata==5.2.0\r\njmespath==1.0.1\r\njsonschema==4.17.3\r\npyrsistent==0.19.3\r\npython-dateutil==2.8.2\r\npyyaml==6.0\r\ns3transfer==0.6.1\r\nsix==1.16.0\r\ntabulate==0.9.0\r\ntyping-extensions==4.6.3\r\nurllib3==1.26.16\r\nzipp==3.15.0\n```\n\n\n### Policy\n\n```shell\n##custodian.yaml\r\npolicies:\r\n - name: cognito-checkauditmode\r\n resource: aws.user-pool\r\n\r\n - name: identity-checkauditmode\r\n resource: identity-pool\r\n\r\n - name: ec2-checkrunning\r\n resource: ec2\r\n\r\n- name: find-all-lambdas\r\n resource: aws.lambda\n```\n\n\n### Relevant log/traceback output\n\n```shell\n2023-06-26 20:09:45,838 - custodian.policy - INFO - policy:cognito-checkauditmode resource:aws.user-pool region:us-east-1 count:1 time:0.00\r\n2023-06-26 20:20:16,225 - custodian.policy - INFO - policy:cognito-checkauditmode resource:aws.user-pool region:us-east-1 count:1 time:0.70\r\n2023-06-26 20:25:23,030 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:1 time:0.00\r\n2023-06-26 23:09:38,143 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:1 time:0.73\r\n2023-06-26 23:13:37,202 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:1 time:0.00\r\n2023-06-26 23:17:02,042 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:1 time:0.00\r\n2023-06-26 23:18:59,196 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:0 time:0.00\r\n2023-06-26 23:28:37,082 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:0 time:0.67\r\n2023-06-27 09:11:53,373 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:1 time:0.67\r\n2023-06-27 09:13:07,745 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:1 
time:0.00\r\n2023-06-27 09:22:13,584 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:0 time:0.00\r\n2023-06-27 09:22:42,984 - custodian.policy - INFO - policy:cognito-checkauditmode resource:user-pool region:us-east-1 count:0 time:0.65\r\n2023-06-27 09:24:43,016 - custodian.policy - INFO - policy:cognito-checkauditmode resource:aws.user-pool region:us-east-1 count:0 time:0.62\r\n2023-06-27 09:27:15,604 - custodian.policy - INFO - policy:cognito-checkauditmode resource:aws.user-pool region:us-east-1 count:1 time:0.64\r\n2023-06-29 08:58:25,076 - custodian.policy - INFO - policy:cognito-checkauditmode resource:aws.user-pool region:us-east-1 count:2 time:0.64\n```\n\n\n### Extra information or context\n\nApplied few additional filters and that as well failed. I believe, the filters will work only after the describe is successful\n", "before_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nfrom botocore.exceptions import ClientError\n\nfrom c7n.actions import BaseAction\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager, TypeInfo, DescribeSource\nfrom c7n.tags import universal_augment\nfrom c7n.utils import local_session, type_schema\n\n\nclass DescribeIdentityPool(DescribeSource):\n def augment(self, resources):\n return universal_augment(self.manager, resources)\n\n\nclass DescribeUserPool(DescribeSource):\n def augment(self, resources):\n resources = super().augment(resources)\n return universal_augment(self.manager, resources)\n\n\[email protected]('identity-pool')\nclass CognitoIdentityPool(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = 'cognito-identity'\n enum_spec = ('list_identity_pools', 'IdentityPools', {'MaxResults': 60})\n detail_spec = (\n 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId', None)\n id = 'IdentityPoolId'\n name = 'IdentityPoolName'\n arn_type = \"identitypool\"\n cfn_type = 'AWS::Cognito::IdentityPool'\n universal_taggable = object()\n\n source_mapping = {\n 'describe': DescribeIdentityPool,\n }\n\n\[email protected]_registry.register('delete')\nclass DeleteIdentityPool(BaseAction):\n \"\"\"Action to delete cognito identity pool\n\n It is recommended to use a filter to avoid unwanted deletion of pools\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: identity-pool-delete\n resource: identity-pool\n actions:\n - delete\n \"\"\"\n\n schema = type_schema('delete')\n permissions = (\"cognito-identity:DeleteIdentityPool\",)\n\n def process(self, pools):\n with self.executor_factory(max_workers=2) as w:\n list(w.map(self.process_pool, pools))\n\n def process_pool(self, pool):\n client = local_session(\n self.manager.session_factory).client('cognito-identity')\n try:\n client.delete_identity_pool(IdentityPoolId=pool['IdentityPoolId'])\n except ClientError as e:\n self.log.exception(\n \"Exception deleting identity pool:\\n %s\" % e)\n\n\[email protected]('user-pool')\nclass CognitoUserPool(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = \"cognito-idp\"\n enum_spec = ('list_user_pools', 'UserPools', {'MaxResults': 60})\n detail_spec = (\n 'describe_user_pool', 'UserPoolId', 'Id', 'UserPool')\n id = 'Id'\n name = 'Name'\n arn_type = \"userpool\"\n cfn_type = 'AWS::Cognito::UserPool'\n universal_taggable = object()\n\n source_mapping = {\n 'describe': DescribeUserPool,\n }\n\n\[email protected]_registry.register('delete')\nclass DeleteUserPool(BaseAction):\n \"\"\"Action to delete cognito user pool\n\n It is recommended to use a filter to avoid unwanted deletion of pools\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: user-pool-delete\n resource: user-pool\n actions:\n - delete\n \"\"\"\n\n schema = type_schema('delete')\n permissions = (\"cognito-idp:DeleteUserPool\",)\n\n def process(self, pools):\n with self.executor_factory(max_workers=2) as w:\n list(w.map(self.process_pool, pools))\n\n def process_pool(self, pool):\n client = local_session(\n self.manager.session_factory).client('cognito-idp')\n try:\n client.delete_user_pool(UserPoolId=pool['Id'])\n except ClientError as e:\n self.log.exception(\n \"Exception deleting user pool:\\n %s\" % e)\n", "path": "c7n/resources/cognito.py"}], "after_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nfrom botocore.exceptions import ClientError\n\nfrom c7n.actions import BaseAction\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager, TypeInfo, DescribeSource\nfrom c7n.tags import universal_augment\nfrom c7n.utils import local_session, type_schema\n\n\nclass DescribeIdentityPool(DescribeSource):\n def augment(self, resources):\n resources = super().augment(resources)\n return universal_augment(self.manager, resources)\n\n\nclass DescribeUserPool(DescribeSource):\n def augment(self, resources):\n resources = super().augment(resources)\n return universal_augment(self.manager, resources)\n\n\[email protected]('identity-pool')\nclass CognitoIdentityPool(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = 'cognito-identity'\n enum_spec = ('list_identity_pools', 'IdentityPools', {'MaxResults': 60})\n detail_spec = (\n 'describe_identity_pool', 'IdentityPoolId', 'IdentityPoolId', None)\n id = 'IdentityPoolId'\n name = 'IdentityPoolName'\n arn_type = \"identitypool\"\n cfn_type = 'AWS::Cognito::IdentityPool'\n universal_taggable = object()\n\n source_mapping = {\n 'describe': DescribeIdentityPool,\n }\n\n\[email protected]_registry.register('delete')\nclass DeleteIdentityPool(BaseAction):\n \"\"\"Action to delete cognito identity pool\n\n It is recommended to use a filter to avoid unwanted deletion of pools\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: identity-pool-delete\n resource: identity-pool\n actions:\n - delete\n \"\"\"\n\n schema = type_schema('delete')\n permissions = (\"cognito-identity:DeleteIdentityPool\",)\n\n def process(self, pools):\n with self.executor_factory(max_workers=2) as w:\n list(w.map(self.process_pool, pools))\n\n def process_pool(self, pool):\n client = local_session(\n self.manager.session_factory).client('cognito-identity')\n try:\n client.delete_identity_pool(IdentityPoolId=pool['IdentityPoolId'])\n except ClientError as e:\n self.log.exception(\n \"Exception deleting identity pool:\\n %s\" % e)\n\n\[email protected]('user-pool')\nclass CognitoUserPool(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = \"cognito-idp\"\n enum_spec = ('list_user_pools', 'UserPools', {'MaxResults': 60})\n detail_spec = (\n 'describe_user_pool', 'UserPoolId', 'Id', 'UserPool')\n id = 'Id'\n name = 'Name'\n arn_type = \"userpool\"\n cfn_type = 'AWS::Cognito::UserPool'\n universal_taggable = object()\n\n source_mapping = {\n 'describe': DescribeUserPool,\n }\n\n\[email protected]_registry.register('delete')\nclass DeleteUserPool(BaseAction):\n \"\"\"Action to delete cognito user pool\n\n It is recommended to use a filter to avoid unwanted deletion of pools\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: user-pool-delete\n resource: user-pool\n actions:\n - delete\n \"\"\"\n\n schema = type_schema('delete')\n permissions = (\"cognito-idp:DeleteUserPool\",)\n\n def process(self, pools):\n with self.executor_factory(max_workers=2) as w:\n list(w.map(self.process_pool, pools))\n\n def process_pool(self, pool):\n client = local_session(\n self.manager.session_factory).client('cognito-idp')\n try:\n client.delete_user_pool(UserPoolId=pool['Id'])\n except ClientError as e:\n self.log.exception(\n \"Exception deleting user pool:\\n %s\" % e)\n", "path": "c7n/resources/cognito.py"}]}
| 3,532 | 88 |
gh_patches_debug_10522
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-4437
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't serialize timedelta column
If you have a data frame with a column of timedeltas and make a ColumnDataSource out of it, the ColumnDataSource will be created:
``` python
In [9]: source.data['delta']
Out[9]:
[Timedelta('0 days 00:33:00'),
Timedelta('0 days 00:35:00'),
Timedelta('0 days 03:01:00')]
```
But if you use that source in a plot, even if you don't use the column, then when it comes time to serialize (show/save/embed, etc.) the plot, it fails:
``` python
Timedelta('0 days 00:33:00') is not JSON serializable
```
Maybe we can provide some validation on ColumnDataSource creation? Or, at least provide a more helpful message on failure, as it's not immediately obvious what went wrong.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/core/json_encoder.py`
Content:
```
1 ''' Provide a custom JSON encoder for serializing Bokeh models.
2
3 '''
4 from __future__ import absolute_import
5
6 import logging
7 log = logging.getLogger(__name__)
8
9 import datetime as dt
10 import decimal
11 import json
12 import time
13
14 import numpy as np
15
16 from ..settings import settings
17 from ..util.dependencies import import_optional
18 from ..util.serialization import transform_series, transform_array
19
20 pd = import_optional('pandas')
21 rd = import_optional("dateutil.relativedelta")
22
23 class BokehJSONEncoder(json.JSONEncoder):
24 ''' Encode values to be used in Bokeh documents or communicated to
25 a Bokeh server.
26
27 '''
28 def transform_python_types(self, obj):
29 ''' Handle special scalars, use default json encoder otherwise
30
31 '''
32 # Pandas Timestamp
33 if pd and isinstance(obj, pd.tslib.Timestamp):
34 return obj.value / 10**6.0 #nanosecond to millisecond
35 elif np.issubdtype(type(obj), np.float):
36 return float(obj)
37 elif np.issubdtype(type(obj), np.int):
38 return int(obj)
39 elif np.issubdtype(type(obj), np.bool_):
40 return bool(obj)
41 # Datetime
42 # datetime is a subclass of date.
43 elif isinstance(obj, dt.datetime):
44 return time.mktime(obj.timetuple()) * 1000. + obj.microsecond / 1000.
45 # Date
46 elif isinstance(obj, dt.date):
47 return time.mktime(obj.timetuple()) * 1000.
48 # Numpy datetime64
49 elif isinstance(obj, np.datetime64):
50 epoch_delta = obj - np.datetime64('1970-01-01T00:00:00Z')
51 return (epoch_delta / np.timedelta64(1, 'ms'))
52 # Time
53 elif isinstance(obj, dt.time):
54 return (obj.hour * 3600 + obj.minute * 60 + obj.second) * 1000 + obj.microsecond / 1000.
55 elif rd and isinstance(obj, rd.relativedelta):
56 return dict(years=obj.years, months=obj.months, days=obj.days, hours=obj.hours,
57 minutes=obj.minutes, seconds=obj.seconds, microseconds=obj.microseconds)
58 # Decimal
59 elif isinstance(obj, decimal.Decimal):
60 return float(obj)
61 else:
62 return super(BokehJSONEncoder, self).default(obj)
63
64 def default(self, obj):
65 #argh! local import!
66 from ..model import Model
67 from ..colors import Color
68 from .properties import HasProps
69 ## array types
70 if pd and isinstance(obj, (pd.Series, pd.Index)):
71 return transform_series(obj)
72 elif isinstance(obj, np.ndarray):
73 return transform_array(obj)
74 elif isinstance(obj, Model):
75 return obj.ref
76 elif isinstance(obj, HasProps):
77 return obj.properties_with_values(include_defaults=False)
78 elif isinstance(obj, Color):
79 return obj.to_css()
80 else:
81 return self.transform_python_types(obj)
82
83 def serialize_json(obj, encoder=BokehJSONEncoder, indent=None, **kwargs):
84 ''' Return a serialized JSON representation of a Bokeh model.
85
86 '''
87 pretty = settings.pretty(False)
88
89 if pretty:
90 separators=(",", ": ")
91 else:
92 separators=(",", ":")
93
94 if pretty and indent is None:
95 indent = 2
96
97 return json.dumps(obj, cls=encoder, allow_nan=False, indent=indent, separators=separators, sort_keys=True, **kwargs)
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bokeh/core/json_encoder.py b/bokeh/core/json_encoder.py
--- a/bokeh/core/json_encoder.py
+++ b/bokeh/core/json_encoder.py
@@ -42,6 +42,10 @@
# datetime is a subclass of date.
elif isinstance(obj, dt.datetime):
return time.mktime(obj.timetuple()) * 1000. + obj.microsecond / 1000.
+ # Timedelta
+ # timedelta is class in the datetime library
+ elif isinstance(obj, dt.timedelta):
+ return dict(days=obj.days, seconds=obj.seconds, microseconds=obj.microseconds)
# Date
elif isinstance(obj, dt.date):
return time.mktime(obj.timetuple()) * 1000.
|
{"golden_diff": "diff --git a/bokeh/core/json_encoder.py b/bokeh/core/json_encoder.py\n--- a/bokeh/core/json_encoder.py\n+++ b/bokeh/core/json_encoder.py\n@@ -42,6 +42,10 @@\n # datetime is a subclass of date.\n elif isinstance(obj, dt.datetime):\n return time.mktime(obj.timetuple()) * 1000. + obj.microsecond / 1000.\n+ # Timedelta\n+ # timedelta is class in the datetime library\n+ elif isinstance(obj, dt.timedelta):\n+ return dict(days=obj.days, seconds=obj.seconds, microseconds=obj.microseconds)\n # Date\n elif isinstance(obj, dt.date):\n return time.mktime(obj.timetuple()) * 1000.\n", "issue": "Can't serialize timedelta column\nIf you have a data frame with a column of timedeltas, then make a ColumnDataSource out of it, the ColumnDataSource will be created:\n\n``` python\nIn [9]: source.data['delta']\nOut[9]:\n[Timedelta('0 days 00:33:00'),\n Timedelta('0 days 00:35:00'),\n Timedelta('0 days 03:01:00')]\n```\n\nBut if you use that source in a plot, even if you don't use the column, when it comes time to serialize (show/save/embed etc) the plot, it fails:\n\n``` python\nTimedelta('0 days 00:33:00') is not JSON serializable\n```\n\nMaybe we can provide some validation on ColumnDataSource creation? Or, at least provide a more helpful message on failure, as it's not immediately obvious what went wrong.\n\n", "before_files": [{"content": "''' Provide a custom JSON encoder for serializing Bokeh models.\n\n'''\nfrom __future__ import absolute_import\n\nimport logging\nlog = logging.getLogger(__name__)\n\nimport datetime as dt\nimport decimal\nimport json\nimport time\n\nimport numpy as np\n\nfrom ..settings import settings\nfrom ..util.dependencies import import_optional\nfrom ..util.serialization import transform_series, transform_array\n\npd = import_optional('pandas')\nrd = import_optional(\"dateutil.relativedelta\")\n\nclass BokehJSONEncoder(json.JSONEncoder):\n ''' Encode values to be used in Bokeh documents or communicated to\n a Bokeh server.\n\n '''\n def transform_python_types(self, obj):\n ''' Handle special scalars, use default json encoder otherwise\n\n '''\n # Pandas Timestamp\n if pd and isinstance(obj, pd.tslib.Timestamp):\n return obj.value / 10**6.0 #nanosecond to millisecond\n elif np.issubdtype(type(obj), np.float):\n return float(obj)\n elif np.issubdtype(type(obj), np.int):\n return int(obj)\n elif np.issubdtype(type(obj), np.bool_):\n return bool(obj)\n # Datetime\n # datetime is a subclass of date.\n elif isinstance(obj, dt.datetime):\n return time.mktime(obj.timetuple()) * 1000. + obj.microsecond / 1000.\n # Date\n elif isinstance(obj, dt.date):\n return time.mktime(obj.timetuple()) * 1000.\n # Numpy datetime64\n elif isinstance(obj, np.datetime64):\n epoch_delta = obj - np.datetime64('1970-01-01T00:00:00Z')\n return (epoch_delta / np.timedelta64(1, 'ms'))\n # Time\n elif isinstance(obj, dt.time):\n return (obj.hour * 3600 + obj.minute * 60 + obj.second) * 1000 + obj.microsecond / 1000.\n elif rd and isinstance(obj, rd.relativedelta):\n return dict(years=obj.years, months=obj.months, days=obj.days, hours=obj.hours,\n minutes=obj.minutes, seconds=obj.seconds, microseconds=obj.microseconds)\n # Decimal\n elif isinstance(obj, decimal.Decimal):\n return float(obj)\n else:\n return super(BokehJSONEncoder, self).default(obj)\n\n def default(self, obj):\n #argh! 
local import!\n from ..model import Model\n from ..colors import Color\n from .properties import HasProps\n ## array types\n if pd and isinstance(obj, (pd.Series, pd.Index)):\n return transform_series(obj)\n elif isinstance(obj, np.ndarray):\n return transform_array(obj)\n elif isinstance(obj, Model):\n return obj.ref\n elif isinstance(obj, HasProps):\n return obj.properties_with_values(include_defaults=False)\n elif isinstance(obj, Color):\n return obj.to_css()\n else:\n return self.transform_python_types(obj)\n\ndef serialize_json(obj, encoder=BokehJSONEncoder, indent=None, **kwargs):\n ''' Return a serialized JSON representation of a Bokeh model.\n\n '''\n pretty = settings.pretty(False)\n\n if pretty:\n separators=(\",\", \": \")\n else:\n separators=(\",\", \":\")\n\n if pretty and indent is None:\n indent = 2\n\n return json.dumps(obj, cls=encoder, allow_nan=False, indent=indent, separators=separators, sort_keys=True, **kwargs)\n", "path": "bokeh/core/json_encoder.py"}], "after_files": [{"content": "''' Provide a custom JSON encoder for serializing Bokeh models.\n\n'''\nfrom __future__ import absolute_import\n\nimport logging\nlog = logging.getLogger(__name__)\n\nimport datetime as dt\nimport decimal\nimport json\nimport time\n\nimport numpy as np\n\nfrom ..settings import settings\nfrom ..util.dependencies import import_optional\nfrom ..util.serialization import transform_series, transform_array\n\npd = import_optional('pandas')\nrd = import_optional(\"dateutil.relativedelta\")\n\nclass BokehJSONEncoder(json.JSONEncoder):\n ''' Encode values to be used in Bokeh documents or communicated to\n a Bokeh server.\n\n '''\n def transform_python_types(self, obj):\n ''' Handle special scalars, use default json encoder otherwise\n\n '''\n # Pandas Timestamp\n if pd and isinstance(obj, pd.tslib.Timestamp):\n return obj.value / 10**6.0 #nanosecond to millisecond\n elif np.issubdtype(type(obj), np.float):\n return float(obj)\n elif np.issubdtype(type(obj), np.int):\n return int(obj)\n elif np.issubdtype(type(obj), np.bool_):\n return bool(obj)\n # Datetime\n # datetime is a subclass of date.\n elif isinstance(obj, dt.datetime):\n return time.mktime(obj.timetuple()) * 1000. + obj.microsecond / 1000.\n # Timedelta\n # timedelta is class in the datetime library\n elif isinstance(obj, dt.timedelta):\n return dict(days=obj.days, seconds=obj.seconds, microseconds=obj.microseconds)\n # Date\n elif isinstance(obj, dt.date):\n return time.mktime(obj.timetuple()) * 1000.\n # Numpy datetime64\n elif isinstance(obj, np.datetime64):\n epoch_delta = obj - np.datetime64('1970-01-01T00:00:00Z')\n return (epoch_delta / np.timedelta64(1, 'ms'))\n # Time\n elif isinstance(obj, dt.time):\n return (obj.hour * 3600 + obj.minute * 60 + obj.second) * 1000 + obj.microsecond / 1000.\n elif rd and isinstance(obj, rd.relativedelta):\n return dict(years=obj.years, months=obj.months, days=obj.days, hours=obj.hours,\n minutes=obj.minutes, seconds=obj.seconds, microseconds=obj.microseconds)\n # Decimal\n elif isinstance(obj, decimal.Decimal):\n return float(obj)\n else:\n return super(BokehJSONEncoder, self).default(obj)\n\n def default(self, obj):\n #argh! 
local import!\n from ..model import Model\n from ..colors import Color\n from .properties import HasProps\n ## array types\n if pd and isinstance(obj, (pd.Series, pd.Index)):\n return transform_series(obj)\n elif isinstance(obj, np.ndarray):\n return transform_array(obj)\n elif isinstance(obj, Model):\n return obj.ref\n elif isinstance(obj, HasProps):\n return obj.properties_with_values(include_defaults=False)\n elif isinstance(obj, Color):\n return obj.to_css()\n else:\n return self.transform_python_types(obj)\n\ndef serialize_json(obj, encoder=BokehJSONEncoder, indent=None, **kwargs):\n ''' Return a serialized JSON representation of a Bokeh model.\n\n '''\n pretty = settings.pretty(False)\n\n if pretty:\n separators=(\",\", \": \")\n else:\n separators=(\",\", \":\")\n\n if pretty and indent is None:\n indent = 2\n\n return json.dumps(obj, cls=encoder, allow_nan=False, indent=indent, separators=separators, sort_keys=True, **kwargs)\n", "path": "bokeh/core/json_encoder.py"}]}
| 1,416 | 170 |
gh_patches_debug_15513
|
rasdani/github-patches
|
git_diff
|
mdn__kuma-6585
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"Help us translate this document" link not working
**Summary**
<img width="1089" alt="Screen Shot 2020-02-19 at 9 29 52 AM" src="https://user-images.githubusercontent.com/26739/74843888-94cd3700-52fa-11ea-9407-d6cefd8651eb.png">
**Steps To Reproduce (STR)**
Try https://developer.mozilla.org/sv-SE/docs/Web/JavaScript/Reference/Functions
You might need to be signed in.
**Actual behavior**
That the `Snälla hjälp oss att översätta denna artikel från engelska` ("Please help us translate this article from English") text is a clickable link.
**Expected behavior**
It's not clickable because the `<a>` element lacks a `href`.
**Additional context**
Seems to happen in Firefox Nightly and the latest stable Chrome.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kuma/api/v1/views.py`
Content:
```
1 from django.conf import settings
2 from django.http import Http404, HttpResponsePermanentRedirect, JsonResponse
3 from django.shortcuts import get_object_or_404
4 from django.utils.translation import activate, ugettext as _
5 from django.views.decorators.cache import never_cache
6 from django.views.decorators.http import require_GET, require_safe
7 from ratelimit.decorators import ratelimit
8 from rest_framework import serializers, status
9 from rest_framework.decorators import api_view
10 from rest_framework.renderers import JSONRenderer
11 from rest_framework.response import Response
12 from waffle.models import Flag, Sample, Switch
13
14 from kuma.api.v1.serializers import BCSignalSerializer
15 from kuma.core.urlresolvers import reverse
16 from kuma.search.filters import (
17 HighlightFilterBackend,
18 KeywordQueryBackend,
19 LanguageFilterBackend,
20 SearchQueryBackend,
21 TagGroupFilterBackend,
22 )
23 from kuma.search.search import SearchView
24 from kuma.users.models import User
25 from kuma.users.templatetags.jinja_helpers import get_avatar_url
26 from kuma.wiki.models import Document
27 from kuma.wiki.templatetags.jinja_helpers import absolutify
28
29
30 @never_cache
31 @require_GET
32 def doc(request, locale, slug):
33 """
34 Return a JSON object that includes document content and metadata
35 for the document specified by the locale and path. Raises a 404
36 error if no such document exists. This is an API with URL
37 /api/v1/doc/<locale>/<path>
38 """
39 # TODO: This API endpoint probably needs to handle redirect documents
40 # and documents that fall back to the en-US locale. See
41 # the document() function in wiki/views/document.py for a model to follow.
42
43 # Since we don't have the locale at the start of the path, our
44 # locale middleware can't set the translation language correctly
45 # and we need to do it explicitly. (We need to know the language
46 # so that we can provide translated language names for the
47 # translations menu.)
48 activate(locale)
49 document = get_object_or_404(Document, locale=locale, slug=slug)
50
51 redirect = get_content_based_redirect(document)
52 if redirect:
53 redirect_url, is_redirect_to_document = redirect
54 if is_redirect_to_document:
55 return HttpResponsePermanentRedirect(redirect_url)
56 return JsonResponse(document_api_data(redirect_url=redirect_url))
57
58 return JsonResponse(document_api_data(document))
59
60
61 def get_s3_key(
62 doc=None,
63 locale=None,
64 slug=None,
65 prefix_with_forward_slash=False,
66 suffix_file_extension=True,
67 ):
68 if doc:
69 locale, slug = doc.locale, doc.slug
70 key = reverse("api.v1.doc", args=(locale, slug))
71 if suffix_file_extension:
72 key += ".json"
73 if prefix_with_forward_slash:
74 # Redirects within an S3 bucket must be prefixed with "/".
75 return key
76 return key.lstrip("/")
77
78
79 def get_cdn_key(locale, slug):
80 """Given a document's locale and slug, return the "key" for the CDN."""
81 return get_s3_key(
82 locale=locale,
83 slug=slug,
84 prefix_with_forward_slash=True,
85 suffix_file_extension=False,
86 )
87
88
89 def get_content_based_redirect(document):
90 """
91 Returns None if the document is not a content-based redirect, otherwise a
92 tuple pair comprising the redirect URL as well as a boolean value. The
93 boolean value will be True if this is a redirect to another document,
94 otherwise False. If the document is a redirect to another document or a
95 redirect to the homepage, a relative URL will be returned, otherwise it
96 will be a full URL to the wiki site.
97 """
98 redirect_url = document.get_redirect_url()
99 if redirect_url and (redirect_url != document.get_absolute_url()):
100 redirect_document = document.get_redirect_document(id_only=False)
101 if redirect_document:
102 # This is a redirect to another document.
103 return (
104 get_s3_key(
105 redirect_document,
106 prefix_with_forward_slash=True,
107 suffix_file_extension=False,
108 ),
109 True,
110 )
111 # This is a redirect to non-document page. For now, if it's the home
112 # page, return a relative path (so we stay on the read-only domain),
113 # otherwise return the full URL for the wiki site.
114 locale = document.locale
115 is_home_page = redirect_url in ("/", "/" + locale, "/{}/".format(locale))
116 if is_home_page:
117 # Let's return a relative URL to the home page for this locale.
118 return ("/{}/".format(locale), False)
119 # Otherwise, let's return a full URL to the Wiki site.
120 return (absolutify(redirect_url, for_wiki_site=True), False)
121 return None
122
123
124 def document_api_data(doc=None, redirect_url=None):
125 """
126 Returns the JSON data for the document for the document API.
127 """
128 if redirect_url:
129 return {
130 "documentData": None,
131 "redirectURL": redirect_url,
132 }
133
134 # The original english slug for this document, for google analytics
135 if doc.locale == "en-US":
136 en_slug = doc.slug
137 elif doc.parent_id and doc.parent.locale == "en-US":
138 en_slug = doc.parent.slug
139 else:
140 en_slug = ""
141
142 other_translations = doc.get_other_translations(
143 fields=("locale", "slug", "title", "parent")
144 )
145 available_locales = {doc.locale} | set(t.locale for t in other_translations)
146
147 doc_absolute_url = doc.get_absolute_url()
148 revision = doc.current_or_latest_revision()
149 translation_status = None
150 if doc.parent_id and revision and revision.localization_in_progress:
151 translation_status = (
152 "outdated" if revision.translation_age >= 10 else "in-progress"
153 )
154 return {
155 "documentData": {
156 "locale": doc.locale,
157 "slug": doc.slug,
158 "enSlug": en_slug,
159 "id": doc.id,
160 "title": doc.title,
161 "summary": doc.get_summary_html(),
162 "language": doc.language,
163 "hrefLang": doc.get_hreflang(available_locales),
164 "absoluteURL": doc_absolute_url,
165 "wikiURL": absolutify(doc_absolute_url, for_wiki_site=True),
166 "translateURL": (
167 absolutify(
168 reverse("wiki.select_locale", args=(doc.slug,), locale=doc.locale,),
169 for_wiki_site=True,
170 )
171 if doc.is_localizable
172 else None
173 ),
174 "translationStatus": translation_status,
175 "bodyHTML": doc.get_body_html(),
176 "quickLinksHTML": doc.get_quick_links_html(),
177 "tocHTML": doc.get_toc_html(),
178 "raw": doc.html,
179 "parents": [
180 {"url": d.get_absolute_url(), "title": d.title} for d in doc.parents
181 ],
182 "translations": [
183 {
184 "language": t.language,
185 "hrefLang": t.get_hreflang(available_locales),
186 "localizedLanguage": _(settings.LOCALES[t.locale].english),
187 "locale": t.locale,
188 "url": t.get_absolute_url(),
189 "title": t.title,
190 }
191 for t in other_translations
192 ],
193 "lastModified": (
194 doc.current_revision and doc.current_revision.created.isoformat()
195 ),
196 },
197 "redirectURL": None,
198 }
199
200
201 @never_cache
202 @require_GET
203 def whoami(request):
204 """
205 Return a JSON object representing the current user, either
206 authenticated or anonymous.
207 """
208 user = request.user
209 if user.is_authenticated:
210 data = {
211 "username": user.username,
212 "timezone": user.timezone,
213 "is_authenticated": True,
214 "is_staff": user.is_staff,
215 "is_superuser": user.is_superuser,
216 "is_beta_tester": user.is_beta_tester,
217 "avatar_url": get_avatar_url(user),
218 }
219 else:
220 data = {
221 "username": None,
222 "timezone": settings.TIME_ZONE,
223 "is_authenticated": False,
224 "is_staff": False,
225 "is_superuser": False,
226 "is_beta_tester": False,
227 "avatar_url": None,
228 }
229
230 # Add waffle data to the dict we're going to be returning.
231 # This is what the waffle.wafflejs() template tag does, but we're
232 # doing it via an API instead of hardcoding the settings into
233 # the HTML page. See also from waffle.views._generate_waffle_js.
234 #
235 # Note that if we upgrade django-waffle, version 15 introduces a
236 # pluggable flag model, and the approved way to get all flag
237 # objects will then become:
238 # get_waffle_flag_model().get_all()
239 #
240 data["waffle"] = {
241 "flags": {f.name: f.is_active(request) for f in Flag.get_all()},
242 "switches": {s.name: s.is_active() for s in Switch.get_all()},
243 "samples": {s.name: s.is_active() for s in Sample.get_all()},
244 }
245 return JsonResponse(data)
246
247
248 class APIDocumentSerializer(serializers.Serializer):
249 title = serializers.CharField(read_only=True, max_length=255)
250 slug = serializers.CharField(read_only=True, max_length=255)
251 locale = serializers.CharField(read_only=True, max_length=7)
252 excerpt = serializers.ReadOnlyField(source="get_excerpt")
253
254
255 class APILanguageFilterBackend(LanguageFilterBackend):
256 """Override of kuma.search.filters:LanguageFilterBackend that is almost
257 exactly the same except the locale comes from custom code rather than
258 via kuma.core.i18n.get_language_from_request because that can't be used
259 in the API.
260
261 Basically, it's the same exact functionality but ...
262 """
263
264 def filter_queryset(self, request, queryset, view):
265 locale = request.GET.get("locale") or settings.LANGUAGE_CODE
266 if locale not in settings.ACCEPTED_LOCALES:
267 raise serializers.ValidationError({"error": "Not a valid locale code"})
268 request.LANGUAGE_CODE = locale
269 return super(APILanguageFilterBackend, self).filter_queryset(
270 request, queryset, view
271 )
272
273
274 class APISearchQueryBackend(SearchQueryBackend):
275 """Override of kuma.search.filters.SearchQueryBackend that makes a
276 stink if the 'q' query parameter is falsy."""
277
278 def filter_queryset(self, request, queryset, view):
279 search_term = (view.query_params.get("q") or "").strip()
280 if not search_term:
281 raise serializers.ValidationError({"error": "Search term 'q' must be set"})
282 return super(APISearchQueryBackend, self).filter_queryset(
283 request, queryset, view
284 )
285
286
287 class APISearchView(SearchView):
288 serializer_class = APIDocumentSerializer
289 renderer_classes = [JSONRenderer]
290 filter_backends = (
291 APISearchQueryBackend,
292 KeywordQueryBackend,
293 TagGroupFilterBackend,
294 APILanguageFilterBackend,
295 HighlightFilterBackend,
296 )
297
298
299 search = never_cache(APISearchView.as_view())
300
301
302 @ratelimit(key="user_or_ip", rate="10/d", block=True)
303 @api_view(["POST"])
304 def bc_signal(request):
305 if not settings.ENABLE_BCD_SIGNAL:
306 return Response("not enabled", status=status.HTTP_400_BAD_REQUEST)
307
308 serializer = BCSignalSerializer(data=request.data)
309 if serializer.is_valid():
310 serializer.save()
311 return Response(serializer.validated_data, status=status.HTTP_201_CREATED)
312 return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
313
314
315 @never_cache
316 @require_safe
317 def get_user(request, username):
318 """
319 Returns a JSON response with a small subset of public information if a
320 user with the given username exists, otherwise returns a status code of
321 404. The case of the username is not important, since the collation of
322 the username column of the user table in MySQL is case-insensitive.
323 """
324 fields = (
325 "username",
326 "title",
327 "fullname",
328 "organization",
329 "location",
330 "timezone",
331 "locale",
332 )
333 try:
334 user = User.objects.only(*fields).get(username=username)
335 except User.DoesNotExist:
336 raise Http404(f'No user exists with the username "{username}".')
337 data = {field: getattr(user, field) for field in fields}
338 data["avatar_url"] = get_avatar_url(user)
339 return JsonResponse(data)
340
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kuma/api/v1/views.py b/kuma/api/v1/views.py
--- a/kuma/api/v1/views.py
+++ b/kuma/api/v1/views.py
@@ -163,9 +163,13 @@
"hrefLang": doc.get_hreflang(available_locales),
"absoluteURL": doc_absolute_url,
"wikiURL": absolutify(doc_absolute_url, for_wiki_site=True),
+ "editURL": absolutify(
+ reverse("wiki.edit", args=(doc.slug,), locale=doc.locale),
+ for_wiki_site=True,
+ ),
"translateURL": (
absolutify(
- reverse("wiki.select_locale", args=(doc.slug,), locale=doc.locale,),
+ reverse("wiki.select_locale", args=(doc.slug,), locale=doc.locale),
for_wiki_site=True,
)
if doc.is_localizable
|
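For orientation, the golden diff above adds an `editURL` entry to the payload built by `document_api_data` in `kuma/api/v1/views.py`, which gives the "Help us translate this document" link a real href. A minimal pytest-style check of that field might look like the sketch below; the `root_doc` fixture and the test module are assumptions, not part of the kuma test suite shown here.

```python
# Hypothetical test; `root_doc` is an assumed Document fixture with a saved
# revision, not something defined in the files above.
from kuma.api.v1.views import document_api_data


def test_document_api_data_exposes_edit_url(root_doc):
    data = document_api_data(root_doc)["documentData"]
    # The edit URL must be absolute and point at the wiki site, so the
    # "Help us translate this document" anchor gets a usable href.
    assert data["editURL"].startswith("http")
```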
{"golden_diff": "diff --git a/kuma/api/v1/views.py b/kuma/api/v1/views.py\n--- a/kuma/api/v1/views.py\n+++ b/kuma/api/v1/views.py\n@@ -163,9 +163,13 @@\n \"hrefLang\": doc.get_hreflang(available_locales),\n \"absoluteURL\": doc_absolute_url,\n \"wikiURL\": absolutify(doc_absolute_url, for_wiki_site=True),\n+ \"editURL\": absolutify(\n+ reverse(\"wiki.edit\", args=(doc.slug,), locale=doc.locale),\n+ for_wiki_site=True,\n+ ),\n \"translateURL\": (\n absolutify(\n- reverse(\"wiki.select_locale\", args=(doc.slug,), locale=doc.locale,),\n+ reverse(\"wiki.select_locale\", args=(doc.slug,), locale=doc.locale),\n for_wiki_site=True,\n )\n if doc.is_localizable\n", "issue": "\"Help us translate this document\" link not working\n**Summary**\r\n<img width=\"1089\" alt=\"Screen Shot 2020-02-19 at 9 29 52 AM\" src=\"https://user-images.githubusercontent.com/26739/74843888-94cd3700-52fa-11ea-9407-d6cefd8651eb.png\">\r\n\r\n\r\n**Steps To Reproduce (STR)**\r\n\r\nTry https://developer.mozilla.org/sv-SE/docs/Web/JavaScript/Reference/Functions\r\nYou might need to be signed in. \r\n\r\n**Actual behavior**\r\nThat the`Sn\u00e4lla hj\u00e4lp oss att \u00f6vers\u00e4tta denna artikel fr\u00e5n engelska` text is a clicklable link. \r\n\r\n\r\n**Expected behavior**\r\nIt's not clickable because the `<a>` element lacks a `href`. \r\n\r\n\r\n**Additional context**\r\nSeems to happen in FirefoxNightly and latest stable Chrome. \r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.http import Http404, HttpResponsePermanentRedirect, JsonResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.utils.translation import activate, ugettext as _\nfrom django.views.decorators.cache import never_cache\nfrom django.views.decorators.http import require_GET, require_safe\nfrom ratelimit.decorators import ratelimit\nfrom rest_framework import serializers, status\nfrom rest_framework.decorators import api_view\nfrom rest_framework.renderers import JSONRenderer\nfrom rest_framework.response import Response\nfrom waffle.models import Flag, Sample, Switch\n\nfrom kuma.api.v1.serializers import BCSignalSerializer\nfrom kuma.core.urlresolvers import reverse\nfrom kuma.search.filters import (\n HighlightFilterBackend,\n KeywordQueryBackend,\n LanguageFilterBackend,\n SearchQueryBackend,\n TagGroupFilterBackend,\n)\nfrom kuma.search.search import SearchView\nfrom kuma.users.models import User\nfrom kuma.users.templatetags.jinja_helpers import get_avatar_url\nfrom kuma.wiki.models import Document\nfrom kuma.wiki.templatetags.jinja_helpers import absolutify\n\n\n@never_cache\n@require_GET\ndef doc(request, locale, slug):\n \"\"\"\n Return a JSON object that includes document content and metadata\n for the document specified by the locale and path. Raises a 404\n error if no such document exists. This is an API with URL\n /api/v1/doc/<locale>/<path>\n \"\"\"\n # TODO: This API endpoint probably needs to handle redirect documents\n # and documents that fall back to the en-US locale. See\n # the document() function in wiki/views/document.py for a model to follow.\n\n # Since we don't have the locale at the start of the path, our\n # locale middleware can't set the translation language correctly\n # and we need to do it explicitly. 
(We need to know the language\n # so that we can provide translated language names for the\n # translations menu.)\n activate(locale)\n document = get_object_or_404(Document, locale=locale, slug=slug)\n\n redirect = get_content_based_redirect(document)\n if redirect:\n redirect_url, is_redirect_to_document = redirect\n if is_redirect_to_document:\n return HttpResponsePermanentRedirect(redirect_url)\n return JsonResponse(document_api_data(redirect_url=redirect_url))\n\n return JsonResponse(document_api_data(document))\n\n\ndef get_s3_key(\n doc=None,\n locale=None,\n slug=None,\n prefix_with_forward_slash=False,\n suffix_file_extension=True,\n):\n if doc:\n locale, slug = doc.locale, doc.slug\n key = reverse(\"api.v1.doc\", args=(locale, slug))\n if suffix_file_extension:\n key += \".json\"\n if prefix_with_forward_slash:\n # Redirects within an S3 bucket must be prefixed with \"/\".\n return key\n return key.lstrip(\"/\")\n\n\ndef get_cdn_key(locale, slug):\n \"\"\"Given a document's locale and slug, return the \"key\" for the CDN.\"\"\"\n return get_s3_key(\n locale=locale,\n slug=slug,\n prefix_with_forward_slash=True,\n suffix_file_extension=False,\n )\n\n\ndef get_content_based_redirect(document):\n \"\"\"\n Returns None if the document is not a content-based redirect, otherwise a\n tuple pair comprising the redirect URL as well as a boolean value. The\n boolean value will be True if this is a redirect to another document,\n otherwise False. If the document is a redirect to another document or a\n redirect to the homepage, a relative URL will be returned, otherwise it\n will be a full URL to the wiki site.\n \"\"\"\n redirect_url = document.get_redirect_url()\n if redirect_url and (redirect_url != document.get_absolute_url()):\n redirect_document = document.get_redirect_document(id_only=False)\n if redirect_document:\n # This is a redirect to another document.\n return (\n get_s3_key(\n redirect_document,\n prefix_with_forward_slash=True,\n suffix_file_extension=False,\n ),\n True,\n )\n # This is a redirect to non-document page. 
For now, if it's the home\n # page, return a relative path (so we stay on the read-only domain),\n # otherwise return the full URL for the wiki site.\n locale = document.locale\n is_home_page = redirect_url in (\"/\", \"/\" + locale, \"/{}/\".format(locale))\n if is_home_page:\n # Let's return a relative URL to the home page for this locale.\n return (\"/{}/\".format(locale), False)\n # Otherwise, let's return a full URL to the Wiki site.\n return (absolutify(redirect_url, for_wiki_site=True), False)\n return None\n\n\ndef document_api_data(doc=None, redirect_url=None):\n \"\"\"\n Returns the JSON data for the document for the document API.\n \"\"\"\n if redirect_url:\n return {\n \"documentData\": None,\n \"redirectURL\": redirect_url,\n }\n\n # The original english slug for this document, for google analytics\n if doc.locale == \"en-US\":\n en_slug = doc.slug\n elif doc.parent_id and doc.parent.locale == \"en-US\":\n en_slug = doc.parent.slug\n else:\n en_slug = \"\"\n\n other_translations = doc.get_other_translations(\n fields=(\"locale\", \"slug\", \"title\", \"parent\")\n )\n available_locales = {doc.locale} | set(t.locale for t in other_translations)\n\n doc_absolute_url = doc.get_absolute_url()\n revision = doc.current_or_latest_revision()\n translation_status = None\n if doc.parent_id and revision and revision.localization_in_progress:\n translation_status = (\n \"outdated\" if revision.translation_age >= 10 else \"in-progress\"\n )\n return {\n \"documentData\": {\n \"locale\": doc.locale,\n \"slug\": doc.slug,\n \"enSlug\": en_slug,\n \"id\": doc.id,\n \"title\": doc.title,\n \"summary\": doc.get_summary_html(),\n \"language\": doc.language,\n \"hrefLang\": doc.get_hreflang(available_locales),\n \"absoluteURL\": doc_absolute_url,\n \"wikiURL\": absolutify(doc_absolute_url, for_wiki_site=True),\n \"translateURL\": (\n absolutify(\n reverse(\"wiki.select_locale\", args=(doc.slug,), locale=doc.locale,),\n for_wiki_site=True,\n )\n if doc.is_localizable\n else None\n ),\n \"translationStatus\": translation_status,\n \"bodyHTML\": doc.get_body_html(),\n \"quickLinksHTML\": doc.get_quick_links_html(),\n \"tocHTML\": doc.get_toc_html(),\n \"raw\": doc.html,\n \"parents\": [\n {\"url\": d.get_absolute_url(), \"title\": d.title} for d in doc.parents\n ],\n \"translations\": [\n {\n \"language\": t.language,\n \"hrefLang\": t.get_hreflang(available_locales),\n \"localizedLanguage\": _(settings.LOCALES[t.locale].english),\n \"locale\": t.locale,\n \"url\": t.get_absolute_url(),\n \"title\": t.title,\n }\n for t in other_translations\n ],\n \"lastModified\": (\n doc.current_revision and doc.current_revision.created.isoformat()\n ),\n },\n \"redirectURL\": None,\n }\n\n\n@never_cache\n@require_GET\ndef whoami(request):\n \"\"\"\n Return a JSON object representing the current user, either\n authenticated or anonymous.\n \"\"\"\n user = request.user\n if user.is_authenticated:\n data = {\n \"username\": user.username,\n \"timezone\": user.timezone,\n \"is_authenticated\": True,\n \"is_staff\": user.is_staff,\n \"is_superuser\": user.is_superuser,\n \"is_beta_tester\": user.is_beta_tester,\n \"avatar_url\": get_avatar_url(user),\n }\n else:\n data = {\n \"username\": None,\n \"timezone\": settings.TIME_ZONE,\n \"is_authenticated\": False,\n \"is_staff\": False,\n \"is_superuser\": False,\n \"is_beta_tester\": False,\n \"avatar_url\": None,\n }\n\n # Add waffle data to the dict we're going to be returning.\n # This is what the waffle.wafflejs() template tag does, but we're\n # doing it via an API 
instead of hardcoding the settings into\n # the HTML page. See also from waffle.views._generate_waffle_js.\n #\n # Note that if we upgrade django-waffle, version 15 introduces a\n # pluggable flag model, and the approved way to get all flag\n # objects will then become:\n # get_waffle_flag_model().get_all()\n #\n data[\"waffle\"] = {\n \"flags\": {f.name: f.is_active(request) for f in Flag.get_all()},\n \"switches\": {s.name: s.is_active() for s in Switch.get_all()},\n \"samples\": {s.name: s.is_active() for s in Sample.get_all()},\n }\n return JsonResponse(data)\n\n\nclass APIDocumentSerializer(serializers.Serializer):\n title = serializers.CharField(read_only=True, max_length=255)\n slug = serializers.CharField(read_only=True, max_length=255)\n locale = serializers.CharField(read_only=True, max_length=7)\n excerpt = serializers.ReadOnlyField(source=\"get_excerpt\")\n\n\nclass APILanguageFilterBackend(LanguageFilterBackend):\n \"\"\"Override of kuma.search.filters:LanguageFilterBackend that is almost\n exactly the same except the locale comes from custom code rather than\n via kuma.core.i18n.get_language_from_request because that can't be used\n in the API.\n\n Basically, it's the same exact functionality but ...\n \"\"\"\n\n def filter_queryset(self, request, queryset, view):\n locale = request.GET.get(\"locale\") or settings.LANGUAGE_CODE\n if locale not in settings.ACCEPTED_LOCALES:\n raise serializers.ValidationError({\"error\": \"Not a valid locale code\"})\n request.LANGUAGE_CODE = locale\n return super(APILanguageFilterBackend, self).filter_queryset(\n request, queryset, view\n )\n\n\nclass APISearchQueryBackend(SearchQueryBackend):\n \"\"\"Override of kuma.search.filters.SearchQueryBackend that makes a\n stink if the 'q' query parameter is falsy.\"\"\"\n\n def filter_queryset(self, request, queryset, view):\n search_term = (view.query_params.get(\"q\") or \"\").strip()\n if not search_term:\n raise serializers.ValidationError({\"error\": \"Search term 'q' must be set\"})\n return super(APISearchQueryBackend, self).filter_queryset(\n request, queryset, view\n )\n\n\nclass APISearchView(SearchView):\n serializer_class = APIDocumentSerializer\n renderer_classes = [JSONRenderer]\n filter_backends = (\n APISearchQueryBackend,\n KeywordQueryBackend,\n TagGroupFilterBackend,\n APILanguageFilterBackend,\n HighlightFilterBackend,\n )\n\n\nsearch = never_cache(APISearchView.as_view())\n\n\n@ratelimit(key=\"user_or_ip\", rate=\"10/d\", block=True)\n@api_view([\"POST\"])\ndef bc_signal(request):\n if not settings.ENABLE_BCD_SIGNAL:\n return Response(\"not enabled\", status=status.HTTP_400_BAD_REQUEST)\n\n serializer = BCSignalSerializer(data=request.data)\n if serializer.is_valid():\n serializer.save()\n return Response(serializer.validated_data, status=status.HTTP_201_CREATED)\n return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)\n\n\n@never_cache\n@require_safe\ndef get_user(request, username):\n \"\"\"\n Returns a JSON response with a small subset of public information if a\n user with the given username exists, otherwise returns a status code of\n 404. 
The case of the username is not important, since the collation of\n the username column of the user table in MySQL is case-insensitive.\n \"\"\"\n fields = (\n \"username\",\n \"title\",\n \"fullname\",\n \"organization\",\n \"location\",\n \"timezone\",\n \"locale\",\n )\n try:\n user = User.objects.only(*fields).get(username=username)\n except User.DoesNotExist:\n raise Http404(f'No user exists with the username \"{username}\".')\n data = {field: getattr(user, field) for field in fields}\n data[\"avatar_url\"] = get_avatar_url(user)\n return JsonResponse(data)\n", "path": "kuma/api/v1/views.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.http import Http404, HttpResponsePermanentRedirect, JsonResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.utils.translation import activate, ugettext as _\nfrom django.views.decorators.cache import never_cache\nfrom django.views.decorators.http import require_GET, require_safe\nfrom ratelimit.decorators import ratelimit\nfrom rest_framework import serializers, status\nfrom rest_framework.decorators import api_view\nfrom rest_framework.renderers import JSONRenderer\nfrom rest_framework.response import Response\nfrom waffle.models import Flag, Sample, Switch\n\nfrom kuma.api.v1.serializers import BCSignalSerializer\nfrom kuma.core.urlresolvers import reverse\nfrom kuma.search.filters import (\n HighlightFilterBackend,\n KeywordQueryBackend,\n LanguageFilterBackend,\n SearchQueryBackend,\n TagGroupFilterBackend,\n)\nfrom kuma.search.search import SearchView\nfrom kuma.users.models import User\nfrom kuma.users.templatetags.jinja_helpers import get_avatar_url\nfrom kuma.wiki.models import Document\nfrom kuma.wiki.templatetags.jinja_helpers import absolutify\n\n\n@never_cache\n@require_GET\ndef doc(request, locale, slug):\n \"\"\"\n Return a JSON object that includes document content and metadata\n for the document specified by the locale and path. Raises a 404\n error if no such document exists. This is an API with URL\n /api/v1/doc/<locale>/<path>\n \"\"\"\n # TODO: This API endpoint probably needs to handle redirect documents\n # and documents that fall back to the en-US locale. See\n # the document() function in wiki/views/document.py for a model to follow.\n\n # Since we don't have the locale at the start of the path, our\n # locale middleware can't set the translation language correctly\n # and we need to do it explicitly. 
(We need to know the language\n # so that we can provide translated language names for the\n # translations menu.)\n activate(locale)\n document = get_object_or_404(Document, locale=locale, slug=slug)\n\n redirect = get_content_based_redirect(document)\n if redirect:\n redirect_url, is_redirect_to_document = redirect\n if is_redirect_to_document:\n return HttpResponsePermanentRedirect(redirect_url)\n return JsonResponse(document_api_data(redirect_url=redirect_url))\n\n return JsonResponse(document_api_data(document))\n\n\ndef get_s3_key(\n doc=None,\n locale=None,\n slug=None,\n prefix_with_forward_slash=False,\n suffix_file_extension=True,\n):\n if doc:\n locale, slug = doc.locale, doc.slug\n key = reverse(\"api.v1.doc\", args=(locale, slug))\n if suffix_file_extension:\n key += \".json\"\n if prefix_with_forward_slash:\n # Redirects within an S3 bucket must be prefixed with \"/\".\n return key\n return key.lstrip(\"/\")\n\n\ndef get_cdn_key(locale, slug):\n \"\"\"Given a document's locale and slug, return the \"key\" for the CDN.\"\"\"\n return get_s3_key(\n locale=locale,\n slug=slug,\n prefix_with_forward_slash=True,\n suffix_file_extension=False,\n )\n\n\ndef get_content_based_redirect(document):\n \"\"\"\n Returns None if the document is not a content-based redirect, otherwise a\n tuple pair comprising the redirect URL as well as a boolean value. The\n boolean value will be True if this is a redirect to another document,\n otherwise False. If the document is a redirect to another document or a\n redirect to the homepage, a relative URL will be returned, otherwise it\n will be a full URL to the wiki site.\n \"\"\"\n redirect_url = document.get_redirect_url()\n if redirect_url and (redirect_url != document.get_absolute_url()):\n redirect_document = document.get_redirect_document(id_only=False)\n if redirect_document:\n # This is a redirect to another document.\n return (\n get_s3_key(\n redirect_document,\n prefix_with_forward_slash=True,\n suffix_file_extension=False,\n ),\n True,\n )\n # This is a redirect to non-document page. 
For now, if it's the home\n # page, return a relative path (so we stay on the read-only domain),\n # otherwise return the full URL for the wiki site.\n locale = document.locale\n is_home_page = redirect_url in (\"/\", \"/\" + locale, \"/{}/\".format(locale))\n if is_home_page:\n # Let's return a relative URL to the home page for this locale.\n return (\"/{}/\".format(locale), False)\n # Otherwise, let's return a full URL to the Wiki site.\n return (absolutify(redirect_url, for_wiki_site=True), False)\n return None\n\n\ndef document_api_data(doc=None, redirect_url=None):\n \"\"\"\n Returns the JSON data for the document for the document API.\n \"\"\"\n if redirect_url:\n return {\n \"documentData\": None,\n \"redirectURL\": redirect_url,\n }\n\n # The original english slug for this document, for google analytics\n if doc.locale == \"en-US\":\n en_slug = doc.slug\n elif doc.parent_id and doc.parent.locale == \"en-US\":\n en_slug = doc.parent.slug\n else:\n en_slug = \"\"\n\n other_translations = doc.get_other_translations(\n fields=(\"locale\", \"slug\", \"title\", \"parent\")\n )\n available_locales = {doc.locale} | set(t.locale for t in other_translations)\n\n doc_absolute_url = doc.get_absolute_url()\n revision = doc.current_or_latest_revision()\n translation_status = None\n if doc.parent_id and revision and revision.localization_in_progress:\n translation_status = (\n \"outdated\" if revision.translation_age >= 10 else \"in-progress\"\n )\n return {\n \"documentData\": {\n \"locale\": doc.locale,\n \"slug\": doc.slug,\n \"enSlug\": en_slug,\n \"id\": doc.id,\n \"title\": doc.title,\n \"summary\": doc.get_summary_html(),\n \"language\": doc.language,\n \"hrefLang\": doc.get_hreflang(available_locales),\n \"absoluteURL\": doc_absolute_url,\n \"wikiURL\": absolutify(doc_absolute_url, for_wiki_site=True),\n \"editURL\": absolutify(\n reverse(\"wiki.edit\", args=(doc.slug,), locale=doc.locale),\n for_wiki_site=True,\n ),\n \"translateURL\": (\n absolutify(\n reverse(\"wiki.select_locale\", args=(doc.slug,), locale=doc.locale),\n for_wiki_site=True,\n )\n if doc.is_localizable\n else None\n ),\n \"translationStatus\": translation_status,\n \"bodyHTML\": doc.get_body_html(),\n \"quickLinksHTML\": doc.get_quick_links_html(),\n \"tocHTML\": doc.get_toc_html(),\n \"raw\": doc.html,\n \"parents\": [\n {\"url\": d.get_absolute_url(), \"title\": d.title} for d in doc.parents\n ],\n \"translations\": [\n {\n \"language\": t.language,\n \"hrefLang\": t.get_hreflang(available_locales),\n \"localizedLanguage\": _(settings.LOCALES[t.locale].english),\n \"locale\": t.locale,\n \"url\": t.get_absolute_url(),\n \"title\": t.title,\n }\n for t in other_translations\n ],\n \"lastModified\": (\n doc.current_revision and doc.current_revision.created.isoformat()\n ),\n },\n \"redirectURL\": None,\n }\n\n\n@never_cache\n@require_GET\ndef whoami(request):\n \"\"\"\n Return a JSON object representing the current user, either\n authenticated or anonymous.\n \"\"\"\n user = request.user\n if user.is_authenticated:\n data = {\n \"username\": user.username,\n \"timezone\": user.timezone,\n \"is_authenticated\": True,\n \"is_staff\": user.is_staff,\n \"is_superuser\": user.is_superuser,\n \"is_beta_tester\": user.is_beta_tester,\n \"avatar_url\": get_avatar_url(user),\n }\n else:\n data = {\n \"username\": None,\n \"timezone\": settings.TIME_ZONE,\n \"is_authenticated\": False,\n \"is_staff\": False,\n \"is_superuser\": False,\n \"is_beta_tester\": False,\n \"avatar_url\": None,\n }\n\n # Add waffle data to the dict 
we're going to be returning.\n # This is what the waffle.wafflejs() template tag does, but we're\n # doing it via an API instead of hardcoding the settings into\n # the HTML page. See also from waffle.views._generate_waffle_js.\n #\n # Note that if we upgrade django-waffle, version 15 introduces a\n # pluggable flag model, and the approved way to get all flag\n # objects will then become:\n # get_waffle_flag_model().get_all()\n #\n data[\"waffle\"] = {\n \"flags\": {f.name: f.is_active(request) for f in Flag.get_all()},\n \"switches\": {s.name: s.is_active() for s in Switch.get_all()},\n \"samples\": {s.name: s.is_active() for s in Sample.get_all()},\n }\n return JsonResponse(data)\n\n\nclass APIDocumentSerializer(serializers.Serializer):\n title = serializers.CharField(read_only=True, max_length=255)\n slug = serializers.CharField(read_only=True, max_length=255)\n locale = serializers.CharField(read_only=True, max_length=7)\n excerpt = serializers.ReadOnlyField(source=\"get_excerpt\")\n\n\nclass APILanguageFilterBackend(LanguageFilterBackend):\n \"\"\"Override of kuma.search.filters:LanguageFilterBackend that is almost\n exactly the same except the locale comes from custom code rather than\n via kuma.core.i18n.get_language_from_request because that can't be used\n in the API.\n\n Basically, it's the same exact functionality but ...\n \"\"\"\n\n def filter_queryset(self, request, queryset, view):\n locale = request.GET.get(\"locale\") or settings.LANGUAGE_CODE\n if locale not in settings.ACCEPTED_LOCALES:\n raise serializers.ValidationError({\"error\": \"Not a valid locale code\"})\n request.LANGUAGE_CODE = locale\n return super(APILanguageFilterBackend, self).filter_queryset(\n request, queryset, view\n )\n\n\nclass APISearchQueryBackend(SearchQueryBackend):\n \"\"\"Override of kuma.search.filters.SearchQueryBackend that makes a\n stink if the 'q' query parameter is falsy.\"\"\"\n\n def filter_queryset(self, request, queryset, view):\n search_term = (view.query_params.get(\"q\") or \"\").strip()\n if not search_term:\n raise serializers.ValidationError({\"error\": \"Search term 'q' must be set\"})\n return super(APISearchQueryBackend, self).filter_queryset(\n request, queryset, view\n )\n\n\nclass APISearchView(SearchView):\n serializer_class = APIDocumentSerializer\n renderer_classes = [JSONRenderer]\n filter_backends = (\n APISearchQueryBackend,\n KeywordQueryBackend,\n TagGroupFilterBackend,\n APILanguageFilterBackend,\n HighlightFilterBackend,\n )\n\n\nsearch = never_cache(APISearchView.as_view())\n\n\n@ratelimit(key=\"user_or_ip\", rate=\"10/d\", block=True)\n@api_view([\"POST\"])\ndef bc_signal(request):\n if not settings.ENABLE_BCD_SIGNAL:\n return Response(\"not enabled\", status=status.HTTP_400_BAD_REQUEST)\n\n serializer = BCSignalSerializer(data=request.data)\n if serializer.is_valid():\n serializer.save()\n return Response(serializer.validated_data, status=status.HTTP_201_CREATED)\n return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)\n\n\n@never_cache\n@require_safe\ndef get_user(request, username):\n \"\"\"\n Returns a JSON response with a small subset of public information if a\n user with the given username exists, otherwise returns a status code of\n 404. 
The case of the username is not important, since the collation of\n the username column of the user table in MySQL is case-insensitive.\n \"\"\"\n fields = (\n \"username\",\n \"title\",\n \"fullname\",\n \"organization\",\n \"location\",\n \"timezone\",\n \"locale\",\n )\n try:\n user = User.objects.only(*fields).get(username=username)\n except User.DoesNotExist:\n raise Http404(f'No user exists with the username \"{username}\".')\n data = {field: getattr(user, field) for field in fields}\n data[\"avatar_url\"] = get_avatar_url(user)\n return JsonResponse(data)\n", "path": "kuma/api/v1/views.py"}]}
| 4,092 | 194 |
gh_patches_debug_4039
|
rasdani/github-patches
|
git_diff
|
encode__uvicorn-921
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Websockets implementation does not clean properly tasks if handshake fails
This is a good explanation of what we see in those two tests, and of what causes the flaky tests:

FWIW there's an asyncio warning when I try to run `test_send_before_handshake` or `test_missing_handshake` in isolation. It doesn't show if I run either along with other tests, e.g. the full test suite, with `-k websocket`, or the two tests together with `-k "missing_handshake or before_handshake"`.
```console
$ pytest -k test_missing_handshake
====================================================== test session starts =======================================================
platform darwin -- Python 3.9.0, pytest-6.1.1, py-1.9.0, pluggy-0.13.1
rootdir: /Users/florimond/Developer/python-projects/uvicorn, configfile: setup.cfg
plugins: mock-3.3.1, asyncio-0.14.0
collected 213 items / 211 deselected / 2 selected
tests/protocols/test_websocket.py .. [100%]
=============================================== 2 passed, 211 deselected in 2.30s ================================================
Task was destroyed but it is pending!
task: <Task pending name='Task-11' coro=<WebSocketServerProtocol.handler() done, defined at /Users/florimond/Developer/python-projects/uvicorn/venv/lib/python3.9/site-packages/websockets/server.py:118> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x10a7e1760>()]>>
```
_Originally posted by @florimondmanca in https://github.com/encode/uvicorn/issues/918#issuecomment-751691297_
--- END ISSUE ---
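The "Task was destroyed but it is pending!" message quoted above is asyncio's standard complaint when the event loop goes away while a task is still waiting on something that will never happen. A minimal, self-contained sketch of that mechanism is shown below; it is plain asyncio, not uvicorn code, and every name in it is illustrative.

```python
import asyncio


async def handler(started: asyncio.Event) -> None:
    # Stand-in for the websockets handler task: it waits on an event that a
    # failed handshake never sets, so it can only finish if it is cancelled.
    await started.wait()


async def main() -> None:
    started = asyncio.Event()
    asyncio.get_running_loop().create_task(handler(started))
    # Return without setting the event or cancelling the task, mimicking a
    # handshake that fails and a connection that is simply dropped.


loop = asyncio.new_event_loop()
loop.run_until_complete(main())
loop.close()
# When the still-pending task is garbage-collected, asyncio logs:
#   Task was destroyed but it is pending!
```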
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `uvicorn/protocols/websockets/websockets_impl.py`
Content:
```
1 import asyncio
2 import http
3 import logging
4 from urllib.parse import unquote
5
6 import websockets
7
8 from uvicorn.protocols.utils import get_local_addr, get_remote_addr, is_ssl
9
10
11 class Server:
12 closing = False
13
14 def register(self, ws):
15 pass
16
17 def unregister(self, ws):
18 pass
19
20 def is_serving(self):
21 return not self.closing
22
23
24 class WebSocketProtocol(websockets.WebSocketServerProtocol):
25 def __init__(self, config, server_state, _loop=None):
26 if not config.loaded:
27 config.load()
28
29 self.config = config
30 self.app = config.loaded_app
31 self.loop = _loop or asyncio.get_event_loop()
32 self.logger = logging.getLogger("uvicorn.error")
33 self.root_path = config.root_path
34
35 # Shared server state
36 self.connections = server_state.connections
37 self.tasks = server_state.tasks
38
39 # Connection state
40 self.transport = None
41 self.server = None
42 self.client = None
43 self.scheme = None
44
45 # Connection events
46 self.scope = None
47 self.handshake_started_event = asyncio.Event()
48 self.handshake_completed_event = asyncio.Event()
49 self.closed_event = asyncio.Event()
50 self.initial_response = None
51 self.connect_sent = False
52 self.accepted_subprotocol = None
53 self.transfer_data_task = None
54
55 self.ws_server = Server()
56
57 super().__init__(ws_handler=self.ws_handler, ws_server=self.ws_server)
58
59 def connection_made(self, transport):
60 self.connections.add(self)
61 self.transport = transport
62 self.server = get_local_addr(transport)
63 self.client = get_remote_addr(transport)
64 self.scheme = "wss" if is_ssl(transport) else "ws"
65 super().connection_made(transport)
66
67 def connection_lost(self, exc):
68 self.connections.remove(self)
69 self.handshake_completed_event.set()
70 super().connection_lost(exc)
71
72 def shutdown(self):
73 self.ws_server.closing = True
74 self.transport.close()
75
76 def on_task_complete(self, task):
77 self.tasks.discard(task)
78
79 async def process_request(self, path, headers):
80 """
81 This hook is called to determine if the websocket should return
82 an HTTP response and close.
83
84 Our behavior here is to start the ASGI application, and then wait
85 for either `accept` or `close` in order to determine if we should
86 close the connection.
87 """
88 path_portion, _, query_string = path.partition("?")
89
90 websockets.handshake.check_request(headers)
91
92 subprotocols = []
93 for header in headers.get_all("Sec-WebSocket-Protocol"):
94 subprotocols.extend([token.strip() for token in header.split(",")])
95
96 asgi_headers = [
97 (name.encode("ascii"), value.encode("ascii"))
98 for name, value in headers.raw_items()
99 ]
100
101 self.scope = {
102 "type": "websocket",
103 "asgi": {"version": self.config.asgi_version, "spec_version": "2.1"},
104 "scheme": self.scheme,
105 "server": self.server,
106 "client": self.client,
107 "root_path": self.root_path,
108 "path": unquote(path_portion),
109 "raw_path": path_portion,
110 "query_string": query_string.encode("ascii"),
111 "headers": asgi_headers,
112 "subprotocols": subprotocols,
113 }
114 task = self.loop.create_task(self.run_asgi())
115 task.add_done_callback(self.on_task_complete)
116 self.tasks.add(task)
117 await self.handshake_started_event.wait()
118 return self.initial_response
119
120 def process_subprotocol(self, headers, available_subprotocols):
121 """
122 We override the standard 'process_subprotocol' behavior here so that
123 we return whatever subprotocol is sent in the 'accept' message.
124 """
125 return self.accepted_subprotocol
126
127 def send_500_response(self):
128 msg = b"Internal Server Error"
129 content = [
130 b"HTTP/1.1 500 Internal Server Error\r\n"
131 b"content-type: text/plain; charset=utf-8\r\n",
132 b"content-length: " + str(len(msg)).encode("ascii") + b"\r\n",
133 b"connection: close\r\n",
134 b"\r\n",
135 msg,
136 ]
137 self.transport.write(b"".join(content))
138
139 async def ws_handler(self, protocol, path):
140 """
141 This is the main handler function for the 'websockets' implementation
142 to call into. We just wait for close then return, and instead allow
143 'send' and 'receive' events to drive the flow.
144 """
145 self.handshake_completed_event.set()
146 await self.closed_event.wait()
147
148 async def run_asgi(self):
149 """
150 Wrapper around the ASGI callable, handling exceptions and unexpected
151 termination states.
152 """
153 try:
154 result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
155 except BaseException as exc:
156 self.closed_event.set()
157 msg = "Exception in ASGI application\n"
158 self.logger.error(msg, exc_info=exc)
159 if not self.handshake_started_event.is_set():
160 self.send_500_response()
161 else:
162 await self.handshake_completed_event.wait()
163 self.transport.close()
164 else:
165 self.closed_event.set()
166 if not self.handshake_started_event.is_set():
167 msg = "ASGI callable returned without sending handshake."
168 self.logger.error(msg)
169 self.send_500_response()
170 self.transport.close()
171 elif result is not None:
172 msg = "ASGI callable should return None, but returned '%s'."
173 self.logger.error(msg, result)
174 await self.handshake_completed_event.wait()
175 self.transport.close()
176
177 async def asgi_send(self, message):
178 message_type = message["type"]
179
180 if not self.handshake_started_event.is_set():
181 if message_type == "websocket.accept":
182 self.logger.info(
183 '%s - "WebSocket %s" [accepted]',
184 self.scope["client"],
185 self.scope["root_path"] + self.scope["path"],
186 )
187 self.initial_response = None
188 self.accepted_subprotocol = message.get("subprotocol")
189 self.handshake_started_event.set()
190
191 elif message_type == "websocket.close":
192 self.logger.info(
193 '%s - "WebSocket %s" 403',
194 self.scope["client"],
195 self.scope["root_path"] + self.scope["path"],
196 )
197 self.initial_response = (http.HTTPStatus.FORBIDDEN, [], b"")
198 self.handshake_started_event.set()
199 self.closed_event.set()
200
201 else:
202 msg = (
203 "Expected ASGI message 'websocket.accept' or 'websocket.close', "
204 "but got '%s'."
205 )
206 raise RuntimeError(msg % message_type)
207
208 elif not self.closed_event.is_set():
209 await self.handshake_completed_event.wait()
210
211 if message_type == "websocket.send":
212 bytes_data = message.get("bytes")
213 text_data = message.get("text")
214 data = text_data if bytes_data is None else bytes_data
215 await self.send(data)
216
217 elif message_type == "websocket.close":
218 code = message.get("code", 1000)
219 await self.close(code)
220 self.closed_event.set()
221
222 else:
223 msg = (
224 "Expected ASGI message 'websocket.send' or 'websocket.close',"
225 " but got '%s'."
226 )
227 raise RuntimeError(msg % message_type)
228
229 else:
230 msg = "Unexpected ASGI message '%s', after sending 'websocket.close'."
231 raise RuntimeError(msg % message_type)
232
233 async def asgi_receive(self):
234 if not self.connect_sent:
235 self.connect_sent = True
236 return {"type": "websocket.connect"}
237
238 await self.handshake_completed_event.wait()
239 try:
240 await self.ensure_open()
241 data = await self.recv()
242 except websockets.ConnectionClosed as exc:
243 return {"type": "websocket.disconnect", "code": exc.code}
244
245 msg = {"type": "websocket.receive"}
246
247 if isinstance(data, str):
248 msg["text"] = data
249 else:
250 msg["bytes"] = data
251
252 return msg
253
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/uvicorn/protocols/websockets/websockets_impl.py b/uvicorn/protocols/websockets/websockets_impl.py
--- a/uvicorn/protocols/websockets/websockets_impl.py
+++ b/uvicorn/protocols/websockets/websockets_impl.py
@@ -135,6 +135,9 @@
msg,
]
self.transport.write(b"".join(content))
+ # Allow handler task to terminate cleanly, as websockets doesn't cancel it by
+ # itself (see https://github.com/encode/uvicorn/issues/920)
+ self.handshake_started_event.set()
async def ws_handler(self, protocol, path):
"""
|
{"golden_diff": "diff --git a/uvicorn/protocols/websockets/websockets_impl.py b/uvicorn/protocols/websockets/websockets_impl.py\n--- a/uvicorn/protocols/websockets/websockets_impl.py\n+++ b/uvicorn/protocols/websockets/websockets_impl.py\n@@ -135,6 +135,9 @@\n msg,\n ]\n self.transport.write(b\"\".join(content))\n+ # Allow handler task to terminate cleanly, as websockets doesn't cancel it by\n+ # itself (see https://github.com/encode/uvicorn/issues/920)\n+ self.handshake_started_event.set()\n \n async def ws_handler(self, protocol, path):\n \"\"\"\n", "issue": "Websockets implementation does not clean properly tasks if handshake fails\nthis is a good explanation of what we see in those 2 tests and that causes flaky tests:\r\n\r\n\r\nFWIW there's an asyncio warning when I try to run `test_send_before_handshake` or `test_missing_handshake` in isolation. Doesn't show if I run either along with other tests, eg either the full test suite, with `-k websocket`, or the two tests together with `-k \"missing_handshake or before_handshake\"`.\r\n\r\n```console\r\n$ pytest -k test_missing_handshake\r\n====================================================== test session starts =======================================================\r\nplatform darwin -- Python 3.9.0, pytest-6.1.1, py-1.9.0, pluggy-0.13.1\r\nrootdir: /Users/florimond/Developer/python-projects/uvicorn, configfile: setup.cfg\r\nplugins: mock-3.3.1, asyncio-0.14.0\r\ncollected 213 items / 211 deselected / 2 selected \r\n\r\ntests/protocols/test_websocket.py .. [100%]\r\n\r\n=============================================== 2 passed, 211 deselected in 2.30s ================================================\r\nTask was destroyed but it is pending!\r\ntask: <Task pending name='Task-11' coro=<WebSocketServerProtocol.handler() done, defined at /Users/florimond/Developer/python-projects/uvicorn/venv/lib/python3.9/site-packages/websockets/server.py:118> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x10a7e1760>()]>>\r\n```\r\n\r\n_Originally posted by @florimondmanca in https://github.com/encode/uvicorn/issues/918#issuecomment-751691297_\n", "before_files": [{"content": "import asyncio\nimport http\nimport logging\nfrom urllib.parse import unquote\n\nimport websockets\n\nfrom uvicorn.protocols.utils import get_local_addr, get_remote_addr, is_ssl\n\n\nclass Server:\n closing = False\n\n def register(self, ws):\n pass\n\n def unregister(self, ws):\n pass\n\n def is_serving(self):\n return not self.closing\n\n\nclass WebSocketProtocol(websockets.WebSocketServerProtocol):\n def __init__(self, config, server_state, _loop=None):\n if not config.loaded:\n config.load()\n\n self.config = config\n self.app = config.loaded_app\n self.loop = _loop or asyncio.get_event_loop()\n self.logger = logging.getLogger(\"uvicorn.error\")\n self.root_path = config.root_path\n\n # Shared server state\n self.connections = server_state.connections\n self.tasks = server_state.tasks\n\n # Connection state\n self.transport = None\n self.server = None\n self.client = None\n self.scheme = None\n\n # Connection events\n self.scope = None\n self.handshake_started_event = asyncio.Event()\n self.handshake_completed_event = asyncio.Event()\n self.closed_event = asyncio.Event()\n self.initial_response = None\n self.connect_sent = False\n self.accepted_subprotocol = None\n self.transfer_data_task = None\n\n self.ws_server = Server()\n\n super().__init__(ws_handler=self.ws_handler, ws_server=self.ws_server)\n\n def connection_made(self, transport):\n 
self.connections.add(self)\n self.transport = transport\n self.server = get_local_addr(transport)\n self.client = get_remote_addr(transport)\n self.scheme = \"wss\" if is_ssl(transport) else \"ws\"\n super().connection_made(transport)\n\n def connection_lost(self, exc):\n self.connections.remove(self)\n self.handshake_completed_event.set()\n super().connection_lost(exc)\n\n def shutdown(self):\n self.ws_server.closing = True\n self.transport.close()\n\n def on_task_complete(self, task):\n self.tasks.discard(task)\n\n async def process_request(self, path, headers):\n \"\"\"\n This hook is called to determine if the websocket should return\n an HTTP response and close.\n\n Our behavior here is to start the ASGI application, and then wait\n for either `accept` or `close` in order to determine if we should\n close the connection.\n \"\"\"\n path_portion, _, query_string = path.partition(\"?\")\n\n websockets.handshake.check_request(headers)\n\n subprotocols = []\n for header in headers.get_all(\"Sec-WebSocket-Protocol\"):\n subprotocols.extend([token.strip() for token in header.split(\",\")])\n\n asgi_headers = [\n (name.encode(\"ascii\"), value.encode(\"ascii\"))\n for name, value in headers.raw_items()\n ]\n\n self.scope = {\n \"type\": \"websocket\",\n \"asgi\": {\"version\": self.config.asgi_version, \"spec_version\": \"2.1\"},\n \"scheme\": self.scheme,\n \"server\": self.server,\n \"client\": self.client,\n \"root_path\": self.root_path,\n \"path\": unquote(path_portion),\n \"raw_path\": path_portion,\n \"query_string\": query_string.encode(\"ascii\"),\n \"headers\": asgi_headers,\n \"subprotocols\": subprotocols,\n }\n task = self.loop.create_task(self.run_asgi())\n task.add_done_callback(self.on_task_complete)\n self.tasks.add(task)\n await self.handshake_started_event.wait()\n return self.initial_response\n\n def process_subprotocol(self, headers, available_subprotocols):\n \"\"\"\n We override the standard 'process_subprotocol' behavior here so that\n we return whatever subprotocol is sent in the 'accept' message.\n \"\"\"\n return self.accepted_subprotocol\n\n def send_500_response(self):\n msg = b\"Internal Server Error\"\n content = [\n b\"HTTP/1.1 500 Internal Server Error\\r\\n\"\n b\"content-type: text/plain; charset=utf-8\\r\\n\",\n b\"content-length: \" + str(len(msg)).encode(\"ascii\") + b\"\\r\\n\",\n b\"connection: close\\r\\n\",\n b\"\\r\\n\",\n msg,\n ]\n self.transport.write(b\"\".join(content))\n\n async def ws_handler(self, protocol, path):\n \"\"\"\n This is the main handler function for the 'websockets' implementation\n to call into. 
We just wait for close then return, and instead allow\n 'send' and 'receive' events to drive the flow.\n \"\"\"\n self.handshake_completed_event.set()\n await self.closed_event.wait()\n\n async def run_asgi(self):\n \"\"\"\n Wrapper around the ASGI callable, handling exceptions and unexpected\n termination states.\n \"\"\"\n try:\n result = await self.app(self.scope, self.asgi_receive, self.asgi_send)\n except BaseException as exc:\n self.closed_event.set()\n msg = \"Exception in ASGI application\\n\"\n self.logger.error(msg, exc_info=exc)\n if not self.handshake_started_event.is_set():\n self.send_500_response()\n else:\n await self.handshake_completed_event.wait()\n self.transport.close()\n else:\n self.closed_event.set()\n if not self.handshake_started_event.is_set():\n msg = \"ASGI callable returned without sending handshake.\"\n self.logger.error(msg)\n self.send_500_response()\n self.transport.close()\n elif result is not None:\n msg = \"ASGI callable should return None, but returned '%s'.\"\n self.logger.error(msg, result)\n await self.handshake_completed_event.wait()\n self.transport.close()\n\n async def asgi_send(self, message):\n message_type = message[\"type\"]\n\n if not self.handshake_started_event.is_set():\n if message_type == \"websocket.accept\":\n self.logger.info(\n '%s - \"WebSocket %s\" [accepted]',\n self.scope[\"client\"],\n self.scope[\"root_path\"] + self.scope[\"path\"],\n )\n self.initial_response = None\n self.accepted_subprotocol = message.get(\"subprotocol\")\n self.handshake_started_event.set()\n\n elif message_type == \"websocket.close\":\n self.logger.info(\n '%s - \"WebSocket %s\" 403',\n self.scope[\"client\"],\n self.scope[\"root_path\"] + self.scope[\"path\"],\n )\n self.initial_response = (http.HTTPStatus.FORBIDDEN, [], b\"\")\n self.handshake_started_event.set()\n self.closed_event.set()\n\n else:\n msg = (\n \"Expected ASGI message 'websocket.accept' or 'websocket.close', \"\n \"but got '%s'.\"\n )\n raise RuntimeError(msg % message_type)\n\n elif not self.closed_event.is_set():\n await self.handshake_completed_event.wait()\n\n if message_type == \"websocket.send\":\n bytes_data = message.get(\"bytes\")\n text_data = message.get(\"text\")\n data = text_data if bytes_data is None else bytes_data\n await self.send(data)\n\n elif message_type == \"websocket.close\":\n code = message.get(\"code\", 1000)\n await self.close(code)\n self.closed_event.set()\n\n else:\n msg = (\n \"Expected ASGI message 'websocket.send' or 'websocket.close',\"\n \" but got '%s'.\"\n )\n raise RuntimeError(msg % message_type)\n\n else:\n msg = \"Unexpected ASGI message '%s', after sending 'websocket.close'.\"\n raise RuntimeError(msg % message_type)\n\n async def asgi_receive(self):\n if not self.connect_sent:\n self.connect_sent = True\n return {\"type\": \"websocket.connect\"}\n\n await self.handshake_completed_event.wait()\n try:\n await self.ensure_open()\n data = await self.recv()\n except websockets.ConnectionClosed as exc:\n return {\"type\": \"websocket.disconnect\", \"code\": exc.code}\n\n msg = {\"type\": \"websocket.receive\"}\n\n if isinstance(data, str):\n msg[\"text\"] = data\n else:\n msg[\"bytes\"] = data\n\n return msg\n", "path": "uvicorn/protocols/websockets/websockets_impl.py"}], "after_files": [{"content": "import asyncio\nimport http\nimport logging\nfrom urllib.parse import unquote\n\nimport websockets\n\nfrom uvicorn.protocols.utils import get_local_addr, get_remote_addr, is_ssl\n\n\nclass Server:\n closing = False\n\n def register(self, ws):\n 
pass\n\n def unregister(self, ws):\n pass\n\n def is_serving(self):\n return not self.closing\n\n\nclass WebSocketProtocol(websockets.WebSocketServerProtocol):\n def __init__(self, config, server_state, _loop=None):\n if not config.loaded:\n config.load()\n\n self.config = config\n self.app = config.loaded_app\n self.loop = _loop or asyncio.get_event_loop()\n self.logger = logging.getLogger(\"uvicorn.error\")\n self.root_path = config.root_path\n\n # Shared server state\n self.connections = server_state.connections\n self.tasks = server_state.tasks\n\n # Connection state\n self.transport = None\n self.server = None\n self.client = None\n self.scheme = None\n\n # Connection events\n self.scope = None\n self.handshake_started_event = asyncio.Event()\n self.handshake_completed_event = asyncio.Event()\n self.closed_event = asyncio.Event()\n self.initial_response = None\n self.connect_sent = False\n self.accepted_subprotocol = None\n self.transfer_data_task = None\n\n self.ws_server = Server()\n\n super().__init__(ws_handler=self.ws_handler, ws_server=self.ws_server)\n\n def connection_made(self, transport):\n self.connections.add(self)\n self.transport = transport\n self.server = get_local_addr(transport)\n self.client = get_remote_addr(transport)\n self.scheme = \"wss\" if is_ssl(transport) else \"ws\"\n super().connection_made(transport)\n\n def connection_lost(self, exc):\n self.connections.remove(self)\n self.handshake_completed_event.set()\n super().connection_lost(exc)\n\n def shutdown(self):\n self.ws_server.closing = True\n self.transport.close()\n\n def on_task_complete(self, task):\n self.tasks.discard(task)\n\n async def process_request(self, path, headers):\n \"\"\"\n This hook is called to determine if the websocket should return\n an HTTP response and close.\n\n Our behavior here is to start the ASGI application, and then wait\n for either `accept` or `close` in order to determine if we should\n close the connection.\n \"\"\"\n path_portion, _, query_string = path.partition(\"?\")\n\n websockets.handshake.check_request(headers)\n\n subprotocols = []\n for header in headers.get_all(\"Sec-WebSocket-Protocol\"):\n subprotocols.extend([token.strip() for token in header.split(\",\")])\n\n asgi_headers = [\n (name.encode(\"ascii\"), value.encode(\"ascii\"))\n for name, value in headers.raw_items()\n ]\n\n self.scope = {\n \"type\": \"websocket\",\n \"asgi\": {\"version\": self.config.asgi_version, \"spec_version\": \"2.1\"},\n \"scheme\": self.scheme,\n \"server\": self.server,\n \"client\": self.client,\n \"root_path\": self.root_path,\n \"path\": unquote(path_portion),\n \"raw_path\": path_portion,\n \"query_string\": query_string.encode(\"ascii\"),\n \"headers\": asgi_headers,\n \"subprotocols\": subprotocols,\n }\n task = self.loop.create_task(self.run_asgi())\n task.add_done_callback(self.on_task_complete)\n self.tasks.add(task)\n await self.handshake_started_event.wait()\n return self.initial_response\n\n def process_subprotocol(self, headers, available_subprotocols):\n \"\"\"\n We override the standard 'process_subprotocol' behavior here so that\n we return whatever subprotocol is sent in the 'accept' message.\n \"\"\"\n return self.accepted_subprotocol\n\n def send_500_response(self):\n msg = b\"Internal Server Error\"\n content = [\n b\"HTTP/1.1 500 Internal Server Error\\r\\n\"\n b\"content-type: text/plain; charset=utf-8\\r\\n\",\n b\"content-length: \" + str(len(msg)).encode(\"ascii\") + b\"\\r\\n\",\n b\"connection: close\\r\\n\",\n b\"\\r\\n\",\n msg,\n ]\n 
self.transport.write(b\"\".join(content))\n # Allow handler task to terminate cleanly, as websockets doesn't cancel it by\n # itself (see https://github.com/encode/uvicorn/issues/920)\n self.handshake_started_event.set()\n\n async def ws_handler(self, protocol, path):\n \"\"\"\n This is the main handler function for the 'websockets' implementation\n to call into. We just wait for close then return, and instead allow\n 'send' and 'receive' events to drive the flow.\n \"\"\"\n self.handshake_completed_event.set()\n await self.closed_event.wait()\n\n async def run_asgi(self):\n \"\"\"\n Wrapper around the ASGI callable, handling exceptions and unexpected\n termination states.\n \"\"\"\n try:\n result = await self.app(self.scope, self.asgi_receive, self.asgi_send)\n except BaseException as exc:\n self.closed_event.set()\n msg = \"Exception in ASGI application\\n\"\n self.logger.error(msg, exc_info=exc)\n if not self.handshake_started_event.is_set():\n self.send_500_response()\n else:\n await self.handshake_completed_event.wait()\n self.transport.close()\n else:\n self.closed_event.set()\n if not self.handshake_started_event.is_set():\n msg = \"ASGI callable returned without sending handshake.\"\n self.logger.error(msg)\n self.send_500_response()\n self.transport.close()\n elif result is not None:\n msg = \"ASGI callable should return None, but returned '%s'.\"\n self.logger.error(msg, result)\n await self.handshake_completed_event.wait()\n self.transport.close()\n\n async def asgi_send(self, message):\n message_type = message[\"type\"]\n\n if not self.handshake_started_event.is_set():\n if message_type == \"websocket.accept\":\n self.logger.info(\n '%s - \"WebSocket %s\" [accepted]',\n self.scope[\"client\"],\n self.scope[\"root_path\"] + self.scope[\"path\"],\n )\n self.initial_response = None\n self.accepted_subprotocol = message.get(\"subprotocol\")\n self.handshake_started_event.set()\n\n elif message_type == \"websocket.close\":\n self.logger.info(\n '%s - \"WebSocket %s\" 403',\n self.scope[\"client\"],\n self.scope[\"root_path\"] + self.scope[\"path\"],\n )\n self.initial_response = (http.HTTPStatus.FORBIDDEN, [], b\"\")\n self.handshake_started_event.set()\n self.closed_event.set()\n\n else:\n msg = (\n \"Expected ASGI message 'websocket.accept' or 'websocket.close', \"\n \"but got '%s'.\"\n )\n raise RuntimeError(msg % message_type)\n\n elif not self.closed_event.is_set():\n await self.handshake_completed_event.wait()\n\n if message_type == \"websocket.send\":\n bytes_data = message.get(\"bytes\")\n text_data = message.get(\"text\")\n data = text_data if bytes_data is None else bytes_data\n await self.send(data)\n\n elif message_type == \"websocket.close\":\n code = message.get(\"code\", 1000)\n await self.close(code)\n self.closed_event.set()\n\n else:\n msg = (\n \"Expected ASGI message 'websocket.send' or 'websocket.close',\"\n \" but got '%s'.\"\n )\n raise RuntimeError(msg % message_type)\n\n else:\n msg = \"Unexpected ASGI message '%s', after sending 'websocket.close'.\"\n raise RuntimeError(msg % message_type)\n\n async def asgi_receive(self):\n if not self.connect_sent:\n self.connect_sent = True\n return {\"type\": \"websocket.connect\"}\n\n await self.handshake_completed_event.wait()\n try:\n await self.ensure_open()\n data = await self.recv()\n except websockets.ConnectionClosed as exc:\n return {\"type\": \"websocket.disconnect\", \"code\": exc.code}\n\n msg = {\"type\": \"websocket.receive\"}\n\n if isinstance(data, str):\n msg[\"text\"] = data\n else:\n msg[\"bytes\"] = 
data\n\n return msg\n", "path": "uvicorn/protocols/websockets/websockets_impl.py"}]}
| 3,091 | 147 |
gh_patches_debug_21530
|
rasdani/github-patches
|
git_diff
|
zestedesavoir__zds-site-2194
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
There are no more alerts for unread messages
I no longer get any alerts (meaning the little red box on the messages icon) telling me how many private messages (MP) are unread.
This is a regression introduced with 1.5 that I would still call blocking, because the feature is heavily used.
Screenshot:

--- END ISSUE ---
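The badge in the screenshot is fed by the template filters defined in `zds/utils/templatetags/interventions.py`, listed below. A minimal, hypothetical check of the filter that drives the unread-PM counter is sketched here; the fixture name and expected count are assumptions, not zds test code.

```python
# Hypothetical test; `user_with_one_unread_mp` is an assumed fixture that
# creates a user participating in exactly one unread private topic.
from zds.utils.templatetags.interventions import interventions_privatetopics


def test_unread_private_topic_count(user_with_one_unread_mp):
    result = interventions_privatetopics(user_with_one_unread_mp)
    # 'total' is what the header badge displays; 'unread' lists the topics.
    assert result["total"] == 1
    assert len(list(result["unread"])) == 1
```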
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zds/utils/templatetags/interventions.py`
Content:
```
1 # coding: utf-8
2
3 from datetime import datetime, timedelta
4 import time
5
6 from django import template
7 from django.db.models import F
8
9 from zds.article.models import Reaction, ArticleRead
10 from zds.forum.models import TopicFollowed, never_read as never_read_topic, Post, TopicRead
11 from zds.mp.models import PrivateTopic
12 from zds.tutorial.models import Note, TutorialRead
13 from zds.utils.models import Alert
14
15
16 register = template.Library()
17
18
19 @register.filter('is_read')
20 def is_read(topic):
21 if never_read_topic(topic):
22 return False
23 else:
24 return True
25
26
27 @register.filter('humane_delta')
28 def humane_delta(value):
29 # mapping between label day and key
30 const = {1: "Aujourd'hui", 2: "Hier", 3: "Cette semaine", 4: "Ce mois-ci", 5: "Cette année"}
31
32 return const[value]
33
34
35 @register.filter('followed_topics')
36 def followed_topics(user):
37 topicsfollowed = TopicFollowed.objects.select_related("topic").filter(user=user)\
38 .order_by('-topic__last_message__pubdate')[:10]
39 # This period is a map for link a moment (Today, yesterday, this week, this month, etc.) with
40 # the number of days for which we can say we're still in the period
41 # for exemple, the tuple (2, 1) means for the period "2" corresponding to "Yesterday" according
42 # to humane_delta, means if your pubdate hasn't exceeded one day, we are always at "Yesterday"
43 # Number is use for index for sort map easily
44 period = ((1, 0), (2, 1), (3, 7), (4, 30), (5, 360))
45 topics = {}
46 for tf in topicsfollowed:
47 for p in period:
48 if tf.topic.last_message.pubdate.date() >= (datetime.now() - timedelta(days=int(p[1]),
49 hours=0, minutes=0,
50 seconds=0)).date():
51 if p[0] in topics:
52 topics[p[0]].append(tf.topic)
53 else:
54 topics[p[0]] = [tf.topic]
55 break
56 return topics
57
58
59 def comp(d1, d2):
60 v1 = int(time.mktime(d1['pubdate'].timetuple()))
61 v2 = int(time.mktime(d2['pubdate'].timetuple()))
62 if v1 > v2:
63 return -1
64 elif v1 < v2:
65 return 1
66 else:
67 return 0
68
69
70 @register.filter('interventions_topics')
71 def interventions_topics(user):
72 topicsfollowed = TopicFollowed.objects.filter(user=user).values("topic").distinct().all()
73
74 topics_never_read = TopicRead.objects\
75 .filter(user=user)\
76 .filter(topic__in=topicsfollowed)\
77 .select_related("topic")\
78 .exclude(post=F('topic__last_message'))
79
80 articlesfollowed = Reaction.objects\
81 .filter(author=user, article__sha_public__isnull=False)\
82 .values('article')\
83 .distinct().all()
84
85 articles_never_read = ArticleRead.objects\
86 .filter(user=user)\
87 .filter(article__in=articlesfollowed)\
88 .select_related("article")\
89 .exclude(reaction=F('article__last_reaction'))
90
91 tutorialsfollowed = Note.objects\
92 .filter(author=user, tutorial__sha_public__isnull=False)\
93 .values('tutorial')\
94 .distinct().all()
95
96 tutorials_never_read = TutorialRead.objects\
97 .filter(user=user)\
98 .filter(tutorial__in=tutorialsfollowed)\
99 .exclude(note=F('tutorial__last_note'))
100
101 posts_unread = []
102
103 for art in articles_never_read:
104 content = art.article.first_unread_reaction()
105 posts_unread.append({'pubdate': content.pubdate,
106 'author': content.author,
107 'title': art.article.title,
108 'url': content.get_absolute_url()})
109
110 for tuto in tutorials_never_read:
111 content = tuto.tutorial.first_unread_note()
112 posts_unread.append({'pubdate': content.pubdate,
113 'author': content.author,
114 'title': tuto.tutorial.title,
115 'url': content.get_absolute_url()})
116
117 for top in topics_never_read:
118 content = top.topic.first_unread_post()
119 if content is None:
120 content = top.topic.last_message
121 posts_unread.append({'pubdate': content.pubdate,
122 'author': content.author,
123 'title': top.topic.title,
124 'url': content.get_absolute_url()})
125
126 posts_unread.sort(cmp=comp)
127
128 return posts_unread
129
130
131 @register.filter('interventions_privatetopics')
132 def interventions_privatetopics(user):
133
134 # Raw query because ORM doesn't seems to allow this kind of "left outer join" clauses.
135 # Parameters = list with 3x the same ID because SQLite backend doesn't allow map parameters.
136 privatetopics_unread = PrivateTopic.objects.raw(
137 '''
138 select distinct t.*
139 from mp_privatetopic t
140 inner join mp_privatetopic_participants p on p.privatetopic_id = t.id
141 left outer join mp_privatetopicread r on r.user_id = %s and r.privatepost_id = t.last_message_id
142 where (t.author_id = %s or p.user_id = %s)
143 and r.id is null
144 order by t.pubdate desc''',
145 [user.id, user.id, user.id])
146
147 # "total" re-do the query, but there is no other way to get the length as __len__ is not available on raw queries.
148 return {'unread': privatetopics_unread, 'total': len(list(privatetopics_unread))}
149
150
151 @register.filter(name='alerts_list')
152 def alerts_list(user):
153 total = []
154 alerts = Alert.objects.select_related("author").all().order_by('-pubdate')[:10]
155 for alert in alerts:
156 if alert.scope == Alert.FORUM:
157 post = Post.objects.select_related("topic").get(pk=alert.comment.pk)
158 total.append({'title': post.topic.title,
159 'url': post.get_absolute_url(),
160 'pubdate': alert.pubdate,
161 'author': alert.author,
162 'text': alert.text})
163 if alert.scope == Alert.ARTICLE:
164 reaction = Reaction.objects.select_related("article").get(pk=alert.comment.pk)
165 total.append({'title': reaction.article.title,
166 'url': reaction.get_absolute_url(),
167 'pubdate': alert.pubdate,
168 'author': alert.author,
169 'text': alert.text})
170 if alert.scope == Alert.TUTORIAL:
171 note = Note.objects.select_related("tutorial").get(pk=alert.comment.pk)
172 total.append({'title': note.tutorial.title,
173 'url': note.get_absolute_url(),
174 'pubdate': alert.pubdate,
175 'author': alert.author,
176 'text': alert.text})
177
178 return total
179
180
181 @register.filter(name='alerts_count')
182 def alerts_count(user):
183 if user.is_authenticated():
184 return Alert.objects.count()
185 else:
186 return 0
187
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/zds/utils/templatetags/interventions.py b/zds/utils/templatetags/interventions.py
--- a/zds/utils/templatetags/interventions.py
+++ b/zds/utils/templatetags/interventions.py
@@ -137,7 +137,7 @@
'''
select distinct t.*
from mp_privatetopic t
- inner join mp_privatetopic_participants p on p.privatetopic_id = t.id
+ left outer join mp_privatetopic_participants p on p.privatetopic_id = t.id
left outer join mp_privatetopicread r on r.user_id = %s and r.privatepost_id = t.last_message_id
where (t.author_id = %s or p.user_id = %s)
and r.id is null
@@ -145,7 +145,8 @@
[user.id, user.id, user.id])
# "total" re-do the query, but there is no other way to get the length as __len__ is not available on raw queries.
- return {'unread': privatetopics_unread, 'total': len(list(privatetopics_unread))}
+ topics = list(privatetopics_unread)
+ return {'unread': topics, 'total': len(topics)}
@register.filter(name='alerts_list')
|
{"golden_diff": "diff --git a/zds/utils/templatetags/interventions.py b/zds/utils/templatetags/interventions.py\n--- a/zds/utils/templatetags/interventions.py\n+++ b/zds/utils/templatetags/interventions.py\n@@ -137,7 +137,7 @@\n '''\n select distinct t.*\n from mp_privatetopic t\n- inner join mp_privatetopic_participants p on p.privatetopic_id = t.id\n+ left outer join mp_privatetopic_participants p on p.privatetopic_id = t.id\n left outer join mp_privatetopicread r on r.user_id = %s and r.privatepost_id = t.last_message_id\n where (t.author_id = %s or p.user_id = %s)\n and r.id is null\n@@ -145,7 +145,8 @@\n [user.id, user.id, user.id])\n \n # \"total\" re-do the query, but there is no other way to get the length as __len__ is not available on raw queries.\n- return {'unread': privatetopics_unread, 'total': len(list(privatetopics_unread))}\n+ topics = list(privatetopics_unread)\n+ return {'unread': topics, 'total': len(topics)}\n \n \n @register.filter(name='alerts_list')\n", "issue": "Il n'y a plus d'alertes sur les messages non lus\nJe n'ai plus d'alertes (entendez par l\u00e0 la petite boite rouge sur l'icone des messages) qui m'indique le nombre de MP non lus.\n\nIl s'agit d'une regression introduit avec la 1.5 que je qualifierait de bloquante quand m\u00eame car la fonctionnalit\u00e9 est tr\u00e8s utilis\u00e9e.\n\nScreen : \n\n\n\n", "before_files": [{"content": "# coding: utf-8\n\nfrom datetime import datetime, timedelta\nimport time\n\nfrom django import template\nfrom django.db.models import F\n\nfrom zds.article.models import Reaction, ArticleRead\nfrom zds.forum.models import TopicFollowed, never_read as never_read_topic, Post, TopicRead\nfrom zds.mp.models import PrivateTopic\nfrom zds.tutorial.models import Note, TutorialRead\nfrom zds.utils.models import Alert\n\n\nregister = template.Library()\n\n\[email protected]('is_read')\ndef is_read(topic):\n if never_read_topic(topic):\n return False\n else:\n return True\n\n\[email protected]('humane_delta')\ndef humane_delta(value):\n # mapping between label day and key\n const = {1: \"Aujourd'hui\", 2: \"Hier\", 3: \"Cette semaine\", 4: \"Ce mois-ci\", 5: \"Cette ann\u00e9e\"}\n\n return const[value]\n\n\[email protected]('followed_topics')\ndef followed_topics(user):\n topicsfollowed = TopicFollowed.objects.select_related(\"topic\").filter(user=user)\\\n .order_by('-topic__last_message__pubdate')[:10]\n # This period is a map for link a moment (Today, yesterday, this week, this month, etc.) 
with\n # the number of days for which we can say we're still in the period\n # for exemple, the tuple (2, 1) means for the period \"2\" corresponding to \"Yesterday\" according\n # to humane_delta, means if your pubdate hasn't exceeded one day, we are always at \"Yesterday\"\n # Number is use for index for sort map easily\n period = ((1, 0), (2, 1), (3, 7), (4, 30), (5, 360))\n topics = {}\n for tf in topicsfollowed:\n for p in period:\n if tf.topic.last_message.pubdate.date() >= (datetime.now() - timedelta(days=int(p[1]),\n hours=0, minutes=0,\n seconds=0)).date():\n if p[0] in topics:\n topics[p[0]].append(tf.topic)\n else:\n topics[p[0]] = [tf.topic]\n break\n return topics\n\n\ndef comp(d1, d2):\n v1 = int(time.mktime(d1['pubdate'].timetuple()))\n v2 = int(time.mktime(d2['pubdate'].timetuple()))\n if v1 > v2:\n return -1\n elif v1 < v2:\n return 1\n else:\n return 0\n\n\[email protected]('interventions_topics')\ndef interventions_topics(user):\n topicsfollowed = TopicFollowed.objects.filter(user=user).values(\"topic\").distinct().all()\n\n topics_never_read = TopicRead.objects\\\n .filter(user=user)\\\n .filter(topic__in=topicsfollowed)\\\n .select_related(\"topic\")\\\n .exclude(post=F('topic__last_message'))\n\n articlesfollowed = Reaction.objects\\\n .filter(author=user, article__sha_public__isnull=False)\\\n .values('article')\\\n .distinct().all()\n\n articles_never_read = ArticleRead.objects\\\n .filter(user=user)\\\n .filter(article__in=articlesfollowed)\\\n .select_related(\"article\")\\\n .exclude(reaction=F('article__last_reaction'))\n\n tutorialsfollowed = Note.objects\\\n .filter(author=user, tutorial__sha_public__isnull=False)\\\n .values('tutorial')\\\n .distinct().all()\n\n tutorials_never_read = TutorialRead.objects\\\n .filter(user=user)\\\n .filter(tutorial__in=tutorialsfollowed)\\\n .exclude(note=F('tutorial__last_note'))\n\n posts_unread = []\n\n for art in articles_never_read:\n content = art.article.first_unread_reaction()\n posts_unread.append({'pubdate': content.pubdate,\n 'author': content.author,\n 'title': art.article.title,\n 'url': content.get_absolute_url()})\n\n for tuto in tutorials_never_read:\n content = tuto.tutorial.first_unread_note()\n posts_unread.append({'pubdate': content.pubdate,\n 'author': content.author,\n 'title': tuto.tutorial.title,\n 'url': content.get_absolute_url()})\n\n for top in topics_never_read:\n content = top.topic.first_unread_post()\n if content is None:\n content = top.topic.last_message\n posts_unread.append({'pubdate': content.pubdate,\n 'author': content.author,\n 'title': top.topic.title,\n 'url': content.get_absolute_url()})\n\n posts_unread.sort(cmp=comp)\n\n return posts_unread\n\n\[email protected]('interventions_privatetopics')\ndef interventions_privatetopics(user):\n\n # Raw query because ORM doesn't seems to allow this kind of \"left outer join\" clauses.\n # Parameters = list with 3x the same ID because SQLite backend doesn't allow map parameters.\n privatetopics_unread = PrivateTopic.objects.raw(\n '''\n select distinct t.*\n from mp_privatetopic t\n inner join mp_privatetopic_participants p on p.privatetopic_id = t.id\n left outer join mp_privatetopicread r on r.user_id = %s and r.privatepost_id = t.last_message_id\n where (t.author_id = %s or p.user_id = %s)\n and r.id is null\n order by t.pubdate desc''',\n [user.id, user.id, user.id])\n\n # \"total\" re-do the query, but there is no other way to get the length as __len__ is not available on raw queries.\n return {'unread': privatetopics_unread, 'total': 
len(list(privatetopics_unread))}\n\n\[email protected](name='alerts_list')\ndef alerts_list(user):\n total = []\n alerts = Alert.objects.select_related(\"author\").all().order_by('-pubdate')[:10]\n for alert in alerts:\n if alert.scope == Alert.FORUM:\n post = Post.objects.select_related(\"topic\").get(pk=alert.comment.pk)\n total.append({'title': post.topic.title,\n 'url': post.get_absolute_url(),\n 'pubdate': alert.pubdate,\n 'author': alert.author,\n 'text': alert.text})\n if alert.scope == Alert.ARTICLE:\n reaction = Reaction.objects.select_related(\"article\").get(pk=alert.comment.pk)\n total.append({'title': reaction.article.title,\n 'url': reaction.get_absolute_url(),\n 'pubdate': alert.pubdate,\n 'author': alert.author,\n 'text': alert.text})\n if alert.scope == Alert.TUTORIAL:\n note = Note.objects.select_related(\"tutorial\").get(pk=alert.comment.pk)\n total.append({'title': note.tutorial.title,\n 'url': note.get_absolute_url(),\n 'pubdate': alert.pubdate,\n 'author': alert.author,\n 'text': alert.text})\n\n return total\n\n\[email protected](name='alerts_count')\ndef alerts_count(user):\n if user.is_authenticated():\n return Alert.objects.count()\n else:\n return 0\n", "path": "zds/utils/templatetags/interventions.py"}], "after_files": [{"content": "# coding: utf-8\n\nfrom datetime import datetime, timedelta\nimport time\n\nfrom django import template\nfrom django.db.models import F\n\nfrom zds.article.models import Reaction, ArticleRead\nfrom zds.forum.models import TopicFollowed, never_read as never_read_topic, Post, TopicRead\nfrom zds.mp.models import PrivateTopic\nfrom zds.tutorial.models import Note, TutorialRead\nfrom zds.utils.models import Alert\n\n\nregister = template.Library()\n\n\[email protected]('is_read')\ndef is_read(topic):\n if never_read_topic(topic):\n return False\n else:\n return True\n\n\[email protected]('humane_delta')\ndef humane_delta(value):\n # mapping between label day and key\n const = {1: \"Aujourd'hui\", 2: \"Hier\", 3: \"Cette semaine\", 4: \"Ce mois-ci\", 5: \"Cette ann\u00e9e\"}\n\n return const[value]\n\n\[email protected]('followed_topics')\ndef followed_topics(user):\n topicsfollowed = TopicFollowed.objects.select_related(\"topic\").filter(user=user)\\\n .order_by('-topic__last_message__pubdate')[:10]\n # This period is a map for link a moment (Today, yesterday, this week, this month, etc.) 
with\n # the number of days for which we can say we're still in the period\n # for exemple, the tuple (2, 1) means for the period \"2\" corresponding to \"Yesterday\" according\n # to humane_delta, means if your pubdate hasn't exceeded one day, we are always at \"Yesterday\"\n # Number is use for index for sort map easily\n period = ((1, 0), (2, 1), (3, 7), (4, 30), (5, 360))\n topics = {}\n for tf in topicsfollowed:\n for p in period:\n if tf.topic.last_message.pubdate.date() >= (datetime.now() - timedelta(days=int(p[1]),\n hours=0, minutes=0,\n seconds=0)).date():\n if p[0] in topics:\n topics[p[0]].append(tf.topic)\n else:\n topics[p[0]] = [tf.topic]\n break\n return topics\n\n\ndef comp(d1, d2):\n v1 = int(time.mktime(d1['pubdate'].timetuple()))\n v2 = int(time.mktime(d2['pubdate'].timetuple()))\n if v1 > v2:\n return -1\n elif v1 < v2:\n return 1\n else:\n return 0\n\n\[email protected]('interventions_topics')\ndef interventions_topics(user):\n topicsfollowed = TopicFollowed.objects.filter(user=user).values(\"topic\").distinct().all()\n\n topics_never_read = TopicRead.objects\\\n .filter(user=user)\\\n .filter(topic__in=topicsfollowed)\\\n .select_related(\"topic\")\\\n .exclude(post=F('topic__last_message'))\n\n articlesfollowed = Reaction.objects\\\n .filter(author=user, article__sha_public__isnull=False)\\\n .values('article')\\\n .distinct().all()\n\n articles_never_read = ArticleRead.objects\\\n .filter(user=user)\\\n .filter(article__in=articlesfollowed)\\\n .select_related(\"article\")\\\n .exclude(reaction=F('article__last_reaction'))\n\n tutorialsfollowed = Note.objects\\\n .filter(author=user, tutorial__sha_public__isnull=False)\\\n .values('tutorial')\\\n .distinct().all()\n\n tutorials_never_read = TutorialRead.objects\\\n .filter(user=user)\\\n .filter(tutorial__in=tutorialsfollowed)\\\n .exclude(note=F('tutorial__last_note'))\n\n posts_unread = []\n\n for art in articles_never_read:\n content = art.article.first_unread_reaction()\n posts_unread.append({'pubdate': content.pubdate,\n 'author': content.author,\n 'title': art.article.title,\n 'url': content.get_absolute_url()})\n\n for tuto in tutorials_never_read:\n content = tuto.tutorial.first_unread_note()\n posts_unread.append({'pubdate': content.pubdate,\n 'author': content.author,\n 'title': tuto.tutorial.title,\n 'url': content.get_absolute_url()})\n\n for top in topics_never_read:\n content = top.topic.first_unread_post()\n if content is None:\n content = top.topic.last_message\n posts_unread.append({'pubdate': content.pubdate,\n 'author': content.author,\n 'title': top.topic.title,\n 'url': content.get_absolute_url()})\n\n posts_unread.sort(cmp=comp)\n\n return posts_unread\n\n\[email protected]('interventions_privatetopics')\ndef interventions_privatetopics(user):\n\n # Raw query because ORM doesn't seems to allow this kind of \"left outer join\" clauses.\n # Parameters = list with 3x the same ID because SQLite backend doesn't allow map parameters.\n privatetopics_unread = PrivateTopic.objects.raw(\n '''\n select distinct t.*\n from mp_privatetopic t\n left outer join mp_privatetopic_participants p on p.privatetopic_id = t.id\n left outer join mp_privatetopicread r on r.user_id = %s and r.privatepost_id = t.last_message_id\n where (t.author_id = %s or p.user_id = %s)\n and r.id is null\n order by t.pubdate desc''',\n [user.id, user.id, user.id])\n\n # \"total\" re-do the query, but there is no other way to get the length as __len__ is not available on raw queries.\n topics = list(privatetopics_unread)\n return 
{'unread': topics, 'total': len(topics)}\n\n\[email protected](name='alerts_list')\ndef alerts_list(user):\n total = []\n alerts = Alert.objects.select_related(\"author\").all().order_by('-pubdate')[:10]\n for alert in alerts:\n if alert.scope == Alert.FORUM:\n post = Post.objects.select_related(\"topic\").get(pk=alert.comment.pk)\n total.append({'title': post.topic.title,\n 'url': post.get_absolute_url(),\n 'pubdate': alert.pubdate,\n 'author': alert.author,\n 'text': alert.text})\n if alert.scope == Alert.ARTICLE:\n reaction = Reaction.objects.select_related(\"article\").get(pk=alert.comment.pk)\n total.append({'title': reaction.article.title,\n 'url': reaction.get_absolute_url(),\n 'pubdate': alert.pubdate,\n 'author': alert.author,\n 'text': alert.text})\n if alert.scope == Alert.TUTORIAL:\n note = Note.objects.select_related(\"tutorial\").get(pk=alert.comment.pk)\n total.append({'title': note.tutorial.title,\n 'url': note.get_absolute_url(),\n 'pubdate': alert.pubdate,\n 'author': alert.author,\n 'text': alert.text})\n\n return total\n\n\[email protected](name='alerts_count')\ndef alerts_count(user):\n if user.is_authenticated():\n return Alert.objects.count()\n else:\n return 0\n", "path": "zds/utils/templatetags/interventions.py"}]}
| 2,448 | 310 |
gh_patches_debug_19515
|
rasdani/github-patches
|
git_diff
|
scoutapp__scout_apm_python-493
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove SCOUT_LOG_LEVEL deprecation warning
The Heroku addon sets this environment variable automatically, and it can't vary based on language, so we shouldn't emit a deprecation warning (on Heroku only?) there since there's nothing users can do about it.
--- END ISSUE ---
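A minimal sketch of the fallback behaviour the issue asks for, assuming the legacy name should simply be honoured silently; the helper function and the `SCOUT_CORE_AGENT_LOG_LEVEL` variable name are illustrative assumptions, not the actual scout_apm API:

```python
import os


def core_agent_log_level(default="info"):
    # Legacy name, set automatically by the Heroku add-on; honour it silently
    # instead of emitting a DeprecationWarning the user cannot act on.
    legacy = os.environ.get("SCOUT_LOG_LEVEL")
    if legacy is not None:
        return legacy
    # Newer name (assumed spelling of the environment variable).
    return os.environ.get("SCOUT_CORE_AGENT_LOG_LEVEL", default)


if __name__ == "__main__":
    os.environ["SCOUT_LOG_LEVEL"] = "debug"
    print(core_agent_log_level())  # "debug", with no warning emitted
```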
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/core/core_agent_manager.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import hashlib
5 import json
6 import logging
7 import os
8 import subprocess
9 import tarfile
10 import time
11 import warnings
12
13 from urllib3.exceptions import HTTPError
14
15 from scout_apm.compat import urllib3_cert_pool_manager
16 from scout_apm.core.config import scout_config
17
18 logger = logging.getLogger(__name__)
19
20
21 class CoreAgentManager(object):
22 def __init__(self):
23 self.core_agent_bin_path = None
24 self.core_agent_bin_version = None
25 self.core_agent_dir = "{}/{}".format(
26 scout_config.value("core_agent_dir"),
27 scout_config.value("core_agent_full_name"),
28 )
29 self.downloader = CoreAgentDownloader(
30 self.core_agent_dir, scout_config.value("core_agent_full_name")
31 )
32
33 def launch(self):
34 if not scout_config.value("core_agent_launch"):
35 logger.debug(
36 "Not attempting to launch Core Agent "
37 "due to 'core_agent_launch' setting."
38 )
39 return False
40
41 if not self.verify():
42 if not scout_config.value("core_agent_download"):
43 logger.debug(
44 "Not attempting to download Core Agent due "
45 "to 'core_agent_download' setting."
46 )
47 return False
48
49 self.download()
50
51 if not self.verify():
52 logger.debug("Failed to verify Core Agent. Not launching Core Agent.")
53 return False
54
55 return self.run()
56
57 def download(self):
58 self.downloader.download()
59
60 def run(self):
61 try:
62 subprocess.check_call(
63 (
64 self.agent_binary()
65 + self.daemonize_flag()
66 + self.log_level()
67 + self.log_file()
68 + self.config_file()
69 + self.socket_path()
70 ),
71 close_fds=True,
72 )
73 except Exception:
74 # TODO detect failure of launch properly
75 logger.exception("Error running Core Agent")
76 return False
77 return True
78
79 def agent_binary(self):
80 return [self.core_agent_bin_path, "start"]
81
82 def daemonize_flag(self):
83 return ["--daemonize", "true"]
84
85 def socket_path(self):
86 socket_path = scout_config.value("socket_path")
87 return ["--socket", socket_path]
88
89 def log_level(self):
90 # Old deprecated name "log_level"
91 log_level = scout_config.value("log_level")
92 if log_level is not None:
93 warnings.warn(
94 "The config name 'log_level' is deprecated - "
95 + "please use the new name 'core_agent_log_level' instead. "
96 + "This might be configured in your environment variables or "
97 + "framework settings as SCOUT_LOG_LEVEL.",
98 DeprecationWarning,
99 )
100 else:
101 log_level = scout_config.value("core_agent_log_level")
102 return ["--log-level", log_level]
103
104 def log_file(self):
105 path = scout_config.value("log_file")
106 if path is not None:
107 return ["--log-file", path]
108 else:
109 return []
110
111 def config_file(self):
112 path = scout_config.value("config_file")
113 if path is not None:
114 return ["--config-file", path]
115 else:
116 return []
117
118 def verify(self):
119 manifest = CoreAgentManifest(self.core_agent_dir + "/manifest.json")
120 if not manifest.is_valid():
121 logger.debug(
122 "Core Agent verification failed: CoreAgentManifest is not valid."
123 )
124 self.core_agent_bin_path = None
125 self.core_agent_bin_version = None
126 return False
127
128 bin_path = self.core_agent_dir + "/" + manifest.bin_name
129 if sha256_digest(bin_path) == manifest.sha256:
130 self.core_agent_bin_path = bin_path
131 self.core_agent_bin_version = manifest.bin_version
132 return True
133 else:
134 logger.debug("Core Agent verification failed: SHA mismatch.")
135 self.core_agent_bin_path = None
136 self.core_agent_bin_version = None
137 return False
138
139
140 class CoreAgentDownloader(object):
141 def __init__(self, download_destination, core_agent_full_name):
142 self.stale_download_secs = 120
143 self.destination = download_destination
144 self.core_agent_full_name = core_agent_full_name
145 self.package_location = self.destination + "/{}.tgz".format(
146 self.core_agent_full_name
147 )
148 self.download_lock_path = self.destination + "/download.lock"
149 self.download_lock_fd = None
150
151 def download(self):
152 self.create_core_agent_dir()
153 self.obtain_download_lock()
154 if self.download_lock_fd is not None:
155 try:
156 downloaded = self.download_package()
157 if downloaded:
158 self.untar()
159 except (OSError, HTTPError):
160 logger.exception("Exception raised while downloading Core Agent")
161 finally:
162 self.release_download_lock()
163
164 def create_core_agent_dir(self):
165 try:
166 os.makedirs(self.destination, scout_config.core_agent_permissions())
167 except OSError:
168 pass
169
170 def obtain_download_lock(self):
171 self.clean_stale_download_lock()
172 try:
173 self.download_lock_fd = os.open(
174 self.download_lock_path,
175 os.O_RDWR | os.O_CREAT | os.O_EXCL | os.O_NONBLOCK,
176 )
177 except OSError as exc:
178 logger.debug(
179 "Could not obtain download lock on %s",
180 self.download_lock_path,
181 exc_info=exc,
182 )
183 self.download_lock_fd = None
184
185 def clean_stale_download_lock(self):
186 try:
187 delta = time.time() - os.stat(self.download_lock_path).st_ctime
188 if delta > self.stale_download_secs:
189 logger.debug("Clearing stale download lock file.")
190 os.unlink(self.download_lock_path)
191 except OSError:
192 pass
193
194 def release_download_lock(self):
195 if self.download_lock_fd is not None:
196 os.unlink(self.download_lock_path)
197 os.close(self.download_lock_fd)
198
199 def download_package(self):
200 full_url = self.full_url()
201 logger.debug("Downloading: %s to %s", full_url, self.package_location)
202 http = urllib3_cert_pool_manager()
203 response = http.request(
204 "GET", full_url, preload_content=False, timeout=10.0, retries=3
205 )
206 try:
207 if response.status != 200:
208 return False
209 with open(self.package_location, "wb") as fp:
210 for chunk in response.stream():
211 fp.write(chunk)
212 finally:
213 response.release_conn()
214 return True
215
216 def untar(self):
217 t = tarfile.open(self.package_location, "r")
218 t.extractall(self.destination)
219
220 def full_url(self):
221 return "{root_url}/{core_agent_full_name}.tgz".format(
222 root_url=self.root_url(), core_agent_full_name=self.core_agent_full_name
223 )
224
225 def root_url(self):
226 return scout_config.value("download_url")
227
228
229 class CoreAgentManifest(object):
230 def __init__(self, path):
231 self.manifest_path = path
232 self.bin_name = None
233 self.bin_version = None
234 self.sha256 = None
235 self.valid = False
236 try:
237 self.parse()
238 # noqa for this issue: https://github.com/PyCQA/flake8-bugbear/issues/110
239 except (ValueError, TypeError, OSError, IOError) as exc: # noqa: B014
240 logger.debug("Error parsing Core Agent Manifest", exc_info=exc)
241
242 def parse(self):
243 logger.debug("Parsing Core Agent manifest path: %s", self.manifest_path)
244 with open(self.manifest_path) as manifest_file:
245 self.raw = manifest_file.read()
246 self.json = json.loads(self.raw)
247 self.version = self.json["version"]
248 self.bin_version = self.json["core_agent_version"]
249 self.bin_name = self.json["core_agent_binary"]
250 self.sha256 = self.json["core_agent_binary_sha256"]
251 self.valid = True
252 logger.debug("Core Agent manifest json: %s", self.json)
253
254 def is_valid(self):
255 return self.valid
256
257
258 def sha256_digest(filename, block_size=65536):
259 try:
260 sha256 = hashlib.sha256()
261 with open(filename, "rb") as f:
262 for block in iter(lambda: f.read(block_size), b""):
263 sha256.update(block)
264 return sha256.hexdigest()
265 except OSError as exc:
266 logger.debug("Error on digest", exc_info=exc)
267 return None
268
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/scout_apm/core/core_agent_manager.py b/src/scout_apm/core/core_agent_manager.py
--- a/src/scout_apm/core/core_agent_manager.py
+++ b/src/scout_apm/core/core_agent_manager.py
@@ -8,7 +8,6 @@
import subprocess
import tarfile
import time
-import warnings
from urllib3.exceptions import HTTPError
@@ -89,15 +88,7 @@
def log_level(self):
# Old deprecated name "log_level"
log_level = scout_config.value("log_level")
- if log_level is not None:
- warnings.warn(
- "The config name 'log_level' is deprecated - "
- + "please use the new name 'core_agent_log_level' instead. "
- + "This might be configured in your environment variables or "
- + "framework settings as SCOUT_LOG_LEVEL.",
- DeprecationWarning,
- )
- else:
+ if log_level is None:
log_level = scout_config.value("core_agent_log_level")
return ["--log-level", log_level]
|
{"golden_diff": "diff --git a/src/scout_apm/core/core_agent_manager.py b/src/scout_apm/core/core_agent_manager.py\n--- a/src/scout_apm/core/core_agent_manager.py\n+++ b/src/scout_apm/core/core_agent_manager.py\n@@ -8,7 +8,6 @@\n import subprocess\n import tarfile\n import time\n-import warnings\n \n from urllib3.exceptions import HTTPError\n \n@@ -89,15 +88,7 @@\n def log_level(self):\n # Old deprecated name \"log_level\"\n log_level = scout_config.value(\"log_level\")\n- if log_level is not None:\n- warnings.warn(\n- \"The config name 'log_level' is deprecated - \"\n- + \"please use the new name 'core_agent_log_level' instead. \"\n- + \"This might be configured in your environment variables or \"\n- + \"framework settings as SCOUT_LOG_LEVEL.\",\n- DeprecationWarning,\n- )\n- else:\n+ if log_level is None:\n log_level = scout_config.value(\"core_agent_log_level\")\n return [\"--log-level\", log_level]\n", "issue": "Remove SCOUT_LOG_LEVEL deprecation warning\nThe Heroku addon sets this environment variable automatically, and it can't vary based on language, so we shouldn't emit a deprecation warning (on Heroku only?) there since there's nothing users can do about it.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport hashlib\nimport json\nimport logging\nimport os\nimport subprocess\nimport tarfile\nimport time\nimport warnings\n\nfrom urllib3.exceptions import HTTPError\n\nfrom scout_apm.compat import urllib3_cert_pool_manager\nfrom scout_apm.core.config import scout_config\n\nlogger = logging.getLogger(__name__)\n\n\nclass CoreAgentManager(object):\n def __init__(self):\n self.core_agent_bin_path = None\n self.core_agent_bin_version = None\n self.core_agent_dir = \"{}/{}\".format(\n scout_config.value(\"core_agent_dir\"),\n scout_config.value(\"core_agent_full_name\"),\n )\n self.downloader = CoreAgentDownloader(\n self.core_agent_dir, scout_config.value(\"core_agent_full_name\")\n )\n\n def launch(self):\n if not scout_config.value(\"core_agent_launch\"):\n logger.debug(\n \"Not attempting to launch Core Agent \"\n \"due to 'core_agent_launch' setting.\"\n )\n return False\n\n if not self.verify():\n if not scout_config.value(\"core_agent_download\"):\n logger.debug(\n \"Not attempting to download Core Agent due \"\n \"to 'core_agent_download' setting.\"\n )\n return False\n\n self.download()\n\n if not self.verify():\n logger.debug(\"Failed to verify Core Agent. Not launching Core Agent.\")\n return False\n\n return self.run()\n\n def download(self):\n self.downloader.download()\n\n def run(self):\n try:\n subprocess.check_call(\n (\n self.agent_binary()\n + self.daemonize_flag()\n + self.log_level()\n + self.log_file()\n + self.config_file()\n + self.socket_path()\n ),\n close_fds=True,\n )\n except Exception:\n # TODO detect failure of launch properly\n logger.exception(\"Error running Core Agent\")\n return False\n return True\n\n def agent_binary(self):\n return [self.core_agent_bin_path, \"start\"]\n\n def daemonize_flag(self):\n return [\"--daemonize\", \"true\"]\n\n def socket_path(self):\n socket_path = scout_config.value(\"socket_path\")\n return [\"--socket\", socket_path]\n\n def log_level(self):\n # Old deprecated name \"log_level\"\n log_level = scout_config.value(\"log_level\")\n if log_level is not None:\n warnings.warn(\n \"The config name 'log_level' is deprecated - \"\n + \"please use the new name 'core_agent_log_level' instead. 
\"\n + \"This might be configured in your environment variables or \"\n + \"framework settings as SCOUT_LOG_LEVEL.\",\n DeprecationWarning,\n )\n else:\n log_level = scout_config.value(\"core_agent_log_level\")\n return [\"--log-level\", log_level]\n\n def log_file(self):\n path = scout_config.value(\"log_file\")\n if path is not None:\n return [\"--log-file\", path]\n else:\n return []\n\n def config_file(self):\n path = scout_config.value(\"config_file\")\n if path is not None:\n return [\"--config-file\", path]\n else:\n return []\n\n def verify(self):\n manifest = CoreAgentManifest(self.core_agent_dir + \"/manifest.json\")\n if not manifest.is_valid():\n logger.debug(\n \"Core Agent verification failed: CoreAgentManifest is not valid.\"\n )\n self.core_agent_bin_path = None\n self.core_agent_bin_version = None\n return False\n\n bin_path = self.core_agent_dir + \"/\" + manifest.bin_name\n if sha256_digest(bin_path) == manifest.sha256:\n self.core_agent_bin_path = bin_path\n self.core_agent_bin_version = manifest.bin_version\n return True\n else:\n logger.debug(\"Core Agent verification failed: SHA mismatch.\")\n self.core_agent_bin_path = None\n self.core_agent_bin_version = None\n return False\n\n\nclass CoreAgentDownloader(object):\n def __init__(self, download_destination, core_agent_full_name):\n self.stale_download_secs = 120\n self.destination = download_destination\n self.core_agent_full_name = core_agent_full_name\n self.package_location = self.destination + \"/{}.tgz\".format(\n self.core_agent_full_name\n )\n self.download_lock_path = self.destination + \"/download.lock\"\n self.download_lock_fd = None\n\n def download(self):\n self.create_core_agent_dir()\n self.obtain_download_lock()\n if self.download_lock_fd is not None:\n try:\n downloaded = self.download_package()\n if downloaded:\n self.untar()\n except (OSError, HTTPError):\n logger.exception(\"Exception raised while downloading Core Agent\")\n finally:\n self.release_download_lock()\n\n def create_core_agent_dir(self):\n try:\n os.makedirs(self.destination, scout_config.core_agent_permissions())\n except OSError:\n pass\n\n def obtain_download_lock(self):\n self.clean_stale_download_lock()\n try:\n self.download_lock_fd = os.open(\n self.download_lock_path,\n os.O_RDWR | os.O_CREAT | os.O_EXCL | os.O_NONBLOCK,\n )\n except OSError as exc:\n logger.debug(\n \"Could not obtain download lock on %s\",\n self.download_lock_path,\n exc_info=exc,\n )\n self.download_lock_fd = None\n\n def clean_stale_download_lock(self):\n try:\n delta = time.time() - os.stat(self.download_lock_path).st_ctime\n if delta > self.stale_download_secs:\n logger.debug(\"Clearing stale download lock file.\")\n os.unlink(self.download_lock_path)\n except OSError:\n pass\n\n def release_download_lock(self):\n if self.download_lock_fd is not None:\n os.unlink(self.download_lock_path)\n os.close(self.download_lock_fd)\n\n def download_package(self):\n full_url = self.full_url()\n logger.debug(\"Downloading: %s to %s\", full_url, self.package_location)\n http = urllib3_cert_pool_manager()\n response = http.request(\n \"GET\", full_url, preload_content=False, timeout=10.0, retries=3\n )\n try:\n if response.status != 200:\n return False\n with open(self.package_location, \"wb\") as fp:\n for chunk in response.stream():\n fp.write(chunk)\n finally:\n response.release_conn()\n return True\n\n def untar(self):\n t = tarfile.open(self.package_location, \"r\")\n t.extractall(self.destination)\n\n def full_url(self):\n return 
\"{root_url}/{core_agent_full_name}.tgz\".format(\n root_url=self.root_url(), core_agent_full_name=self.core_agent_full_name\n )\n\n def root_url(self):\n return scout_config.value(\"download_url\")\n\n\nclass CoreAgentManifest(object):\n def __init__(self, path):\n self.manifest_path = path\n self.bin_name = None\n self.bin_version = None\n self.sha256 = None\n self.valid = False\n try:\n self.parse()\n # noqa for this issue: https://github.com/PyCQA/flake8-bugbear/issues/110\n except (ValueError, TypeError, OSError, IOError) as exc: # noqa: B014\n logger.debug(\"Error parsing Core Agent Manifest\", exc_info=exc)\n\n def parse(self):\n logger.debug(\"Parsing Core Agent manifest path: %s\", self.manifest_path)\n with open(self.manifest_path) as manifest_file:\n self.raw = manifest_file.read()\n self.json = json.loads(self.raw)\n self.version = self.json[\"version\"]\n self.bin_version = self.json[\"core_agent_version\"]\n self.bin_name = self.json[\"core_agent_binary\"]\n self.sha256 = self.json[\"core_agent_binary_sha256\"]\n self.valid = True\n logger.debug(\"Core Agent manifest json: %s\", self.json)\n\n def is_valid(self):\n return self.valid\n\n\ndef sha256_digest(filename, block_size=65536):\n try:\n sha256 = hashlib.sha256()\n with open(filename, \"rb\") as f:\n for block in iter(lambda: f.read(block_size), b\"\"):\n sha256.update(block)\n return sha256.hexdigest()\n except OSError as exc:\n logger.debug(\"Error on digest\", exc_info=exc)\n return None\n", "path": "src/scout_apm/core/core_agent_manager.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport hashlib\nimport json\nimport logging\nimport os\nimport subprocess\nimport tarfile\nimport time\n\nfrom urllib3.exceptions import HTTPError\n\nfrom scout_apm.compat import urllib3_cert_pool_manager\nfrom scout_apm.core.config import scout_config\n\nlogger = logging.getLogger(__name__)\n\n\nclass CoreAgentManager(object):\n def __init__(self):\n self.core_agent_bin_path = None\n self.core_agent_bin_version = None\n self.core_agent_dir = \"{}/{}\".format(\n scout_config.value(\"core_agent_dir\"),\n scout_config.value(\"core_agent_full_name\"),\n )\n self.downloader = CoreAgentDownloader(\n self.core_agent_dir, scout_config.value(\"core_agent_full_name\")\n )\n\n def launch(self):\n if not scout_config.value(\"core_agent_launch\"):\n logger.debug(\n \"Not attempting to launch Core Agent \"\n \"due to 'core_agent_launch' setting.\"\n )\n return False\n\n if not self.verify():\n if not scout_config.value(\"core_agent_download\"):\n logger.debug(\n \"Not attempting to download Core Agent due \"\n \"to 'core_agent_download' setting.\"\n )\n return False\n\n self.download()\n\n if not self.verify():\n logger.debug(\"Failed to verify Core Agent. 
Not launching Core Agent.\")\n return False\n\n return self.run()\n\n def download(self):\n self.downloader.download()\n\n def run(self):\n try:\n subprocess.check_call(\n (\n self.agent_binary()\n + self.daemonize_flag()\n + self.log_level()\n + self.log_file()\n + self.config_file()\n + self.socket_path()\n ),\n close_fds=True,\n )\n except Exception:\n # TODO detect failure of launch properly\n logger.exception(\"Error running Core Agent\")\n return False\n return True\n\n def agent_binary(self):\n return [self.core_agent_bin_path, \"start\"]\n\n def daemonize_flag(self):\n return [\"--daemonize\", \"true\"]\n\n def socket_path(self):\n socket_path = scout_config.value(\"socket_path\")\n return [\"--socket\", socket_path]\n\n def log_level(self):\n # Old deprecated name \"log_level\"\n log_level = scout_config.value(\"log_level\")\n if log_level is None:\n log_level = scout_config.value(\"core_agent_log_level\")\n return [\"--log-level\", log_level]\n\n def log_file(self):\n path = scout_config.value(\"log_file\")\n if path is not None:\n return [\"--log-file\", path]\n else:\n return []\n\n def config_file(self):\n path = scout_config.value(\"config_file\")\n if path is not None:\n return [\"--config-file\", path]\n else:\n return []\n\n def verify(self):\n manifest = CoreAgentManifest(self.core_agent_dir + \"/manifest.json\")\n if not manifest.is_valid():\n logger.debug(\n \"Core Agent verification failed: CoreAgentManifest is not valid.\"\n )\n self.core_agent_bin_path = None\n self.core_agent_bin_version = None\n return False\n\n bin_path = self.core_agent_dir + \"/\" + manifest.bin_name\n if sha256_digest(bin_path) == manifest.sha256:\n self.core_agent_bin_path = bin_path\n self.core_agent_bin_version = manifest.bin_version\n return True\n else:\n logger.debug(\"Core Agent verification failed: SHA mismatch.\")\n self.core_agent_bin_path = None\n self.core_agent_bin_version = None\n return False\n\n\nclass CoreAgentDownloader(object):\n def __init__(self, download_destination, core_agent_full_name):\n self.stale_download_secs = 120\n self.destination = download_destination\n self.core_agent_full_name = core_agent_full_name\n self.package_location = self.destination + \"/{}.tgz\".format(\n self.core_agent_full_name\n )\n self.download_lock_path = self.destination + \"/download.lock\"\n self.download_lock_fd = None\n\n def download(self):\n self.create_core_agent_dir()\n self.obtain_download_lock()\n if self.download_lock_fd is not None:\n try:\n downloaded = self.download_package()\n if downloaded:\n self.untar()\n except (OSError, HTTPError):\n logger.exception(\"Exception raised while downloading Core Agent\")\n finally:\n self.release_download_lock()\n\n def create_core_agent_dir(self):\n try:\n os.makedirs(self.destination, scout_config.core_agent_permissions())\n except OSError:\n pass\n\n def obtain_download_lock(self):\n self.clean_stale_download_lock()\n try:\n self.download_lock_fd = os.open(\n self.download_lock_path,\n os.O_RDWR | os.O_CREAT | os.O_EXCL | os.O_NONBLOCK,\n )\n except OSError as exc:\n logger.debug(\n \"Could not obtain download lock on %s\",\n self.download_lock_path,\n exc_info=exc,\n )\n self.download_lock_fd = None\n\n def clean_stale_download_lock(self):\n try:\n delta = time.time() - os.stat(self.download_lock_path).st_ctime\n if delta > self.stale_download_secs:\n logger.debug(\"Clearing stale download lock file.\")\n os.unlink(self.download_lock_path)\n except OSError:\n pass\n\n def release_download_lock(self):\n if self.download_lock_fd is not 
None:\n os.unlink(self.download_lock_path)\n os.close(self.download_lock_fd)\n\n def download_package(self):\n full_url = self.full_url()\n logger.debug(\"Downloading: %s to %s\", full_url, self.package_location)\n http = urllib3_cert_pool_manager()\n response = http.request(\n \"GET\", full_url, preload_content=False, timeout=10.0, retries=3\n )\n try:\n if response.status != 200:\n return False\n with open(self.package_location, \"wb\") as fp:\n for chunk in response.stream():\n fp.write(chunk)\n finally:\n response.release_conn()\n return True\n\n def untar(self):\n t = tarfile.open(self.package_location, \"r\")\n t.extractall(self.destination)\n\n def full_url(self):\n return \"{root_url}/{core_agent_full_name}.tgz\".format(\n root_url=self.root_url(), core_agent_full_name=self.core_agent_full_name\n )\n\n def root_url(self):\n return scout_config.value(\"download_url\")\n\n\nclass CoreAgentManifest(object):\n def __init__(self, path):\n self.manifest_path = path\n self.bin_name = None\n self.bin_version = None\n self.sha256 = None\n self.valid = False\n try:\n self.parse()\n # noqa for this issue: https://github.com/PyCQA/flake8-bugbear/issues/110\n except (ValueError, TypeError, OSError, IOError) as exc: # noqa: B014\n logger.debug(\"Error parsing Core Agent Manifest\", exc_info=exc)\n\n def parse(self):\n logger.debug(\"Parsing Core Agent manifest path: %s\", self.manifest_path)\n with open(self.manifest_path) as manifest_file:\n self.raw = manifest_file.read()\n self.json = json.loads(self.raw)\n self.version = self.json[\"version\"]\n self.bin_version = self.json[\"core_agent_version\"]\n self.bin_name = self.json[\"core_agent_binary\"]\n self.sha256 = self.json[\"core_agent_binary_sha256\"]\n self.valid = True\n logger.debug(\"Core Agent manifest json: %s\", self.json)\n\n def is_valid(self):\n return self.valid\n\n\ndef sha256_digest(filename, block_size=65536):\n try:\n sha256 = hashlib.sha256()\n with open(filename, \"rb\") as f:\n for block in iter(lambda: f.read(block_size), b\"\"):\n sha256.update(block)\n return sha256.hexdigest()\n except OSError as exc:\n logger.debug(\"Error on digest\", exc_info=exc)\n return None\n", "path": "src/scout_apm/core/core_agent_manager.py"}]}
| 2,846 | 239 |
gh_patches_debug_33112
|
rasdani/github-patches
|
git_diff
|
sktime__sktime-5770
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[ENH] Adding a parameter to forecasters from statsmodels to control verbosity of output
**Is your feature request related to a problem? Please describe.**
Some forecasters (such as SARIMAX) that are direct wrappers of statsmodels forecasters print multiple messages when fitting the model (see the image for an example). There is currently no way of controlling the level of the printed output.

**Describe the solution you'd like**
It would be nice to have a parameter `verbose` to control the level of verbosity. The statsmodels implementation has a parameter called `disp` (see https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.fit.html#statsmodels.tsa.statespace.sarimax.SARIMAX.fit) that can do this, but there is currently no way of accessing this parameter from the sktime implementation.
--- END ISSUE ---
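A brief usage sketch of the requested interface; the `disp` keyword shown here is the proposed addition and is assumed rather than part of the released sktime API:

```python
from sktime.datasets import load_airline
from sktime.forecasting.sarimax import SARIMAX

y = load_airline()
# disp=False is the hypothetical new option that would be forwarded to
# statsmodels' SARIMAX.fit(disp=...), silencing the optimizer output.
forecaster = SARIMAX(order=(1, 0, 0), seasonal_order=(1, 0, 0, 12), disp=False)
forecaster.fit(y)
y_pred = forecaster.predict(fh=[1, 2, 3])
```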
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sktime/forecasting/sarimax.py`
Content:
```
1 # !/usr/bin/env python3 -u
2 # copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
3 """Implements SARIMAX."""
4
5 __all__ = ["SARIMAX"]
6 __author__ = ["TNTran92", "yarnabrina"]
7
8 import pandas as pd
9
10 from sktime.forecasting.base.adapters import _StatsModelsAdapter
11
12
13 class SARIMAX(_StatsModelsAdapter):
14 """SARIMAX forecaster.
15
16 Direct interface for `statsmodels.tsa.api.SARIMAX`.
17
18 Parameters
19 ----------
20 order : iterable or iterable of iterables, optional, default=(1,0,0)
21 The (p,d,q) order of the model for the number of AR parameters,
22 differences, and MA parameters. `d` must be an integer
23 indicating the integration order of the process, while
24 `p` and `q` may either be an integers indicating the AR and MA
25 orders (so that all lags up to those orders are included) or else
26 iterables giving specific AR and / or MA lags to include. Default is
27 an AR(1) model: (1,0,0).
28 seasonal_order : iterable, optional, default=(0,0,0,0)
29 The (P,D,Q,s) order of the seasonal component of the model for the
30 AR parameters, differences, MA parameters, and periodicity.
31 `D` must be an integer indicating the integration order of the process,
32 while `P` and `Q` may either be an integers indicating the AR and MA
33 orders (so that all lags up to those orders are included) or else
34 iterables giving specific AR and / or MA lags to include. `s` is an
35 integer giving the periodicity (number of periods in season), often it
36 is 4 for quarterly data or 12 for monthly data. Default is no seasonal
37 effect.
38 trend : str{'n','c','t','ct'} or iterable, optional, default="c"
39 Parameter controlling the deterministic trend polynomial :math:`A(t)`.
40 Can be specified as a string where 'c' indicates a constant (i.e. a
41 degree zero component of the trend polynomial), 't' indicates a
42 linear trend with time, and 'ct' is both. Can also be specified as an
43 iterable defining the non-zero polynomial exponents to include, in
44 increasing order. For example, `[1,1,0,1]` denotes
45 :math:`a + bt + ct^3`. Default is to not include a trend component.
46 measurement_error : bool, optional, default=False
47 Whether or not to assume the endogenous observations `endog` were
48 measured with error.
49 time_varying_regression : bool, optional, default=False
50 Used when an explanatory variables, `exog`, are provided
51 to select whether or not coefficients on the exogenous regressors are
52 allowed to vary over time.
53 mle_regression : bool, optional, default=True
54 Whether or not to use estimate the regression coefficients for the
55 exogenous variables as part of maximum likelihood estimation or through
56 the Kalman filter (i.e. recursive least squares). If
57 `time_varying_regression` is True, this must be set to False.
58 simple_differencing : bool, optional, default=False
59 Whether or not to use partially conditional maximum likelihood
60 estimation. If True, differencing is performed prior to estimation,
61 which discards the first :math:`s D + d` initial rows but results in a
62 smaller state-space formulation. See the Notes section for important
63 details about interpreting results when this option is used. If False,
64 the full SARIMAX model is put in state-space form so that all
65 datapoints can be used in estimation.
66 enforce_stationarity : bool, optional, default=True
67 Whether or not to transform the AR parameters to enforce stationarity
68 in the autoregressive component of the model.
69 enforce_invertibility : bool, optional, default=True
70 Whether or not to transform the MA parameters to enforce invertibility
71 in the moving average component of the model.
72 hamilton_representation : bool, optional, default=False
73 Whether or not to use the Hamilton representation of an ARMA process
74 (if True) or the Harvey representation (if False).
75 concentrate_scale : bool, optional, default=False
76 Whether or not to concentrate the scale (variance of the error term)
77 out of the likelihood. This reduces the number of parameters estimated
78 by maximum likelihood by one, but standard errors will then not
79 be available for the scale parameter.
80 trend_offset : int, optional, default=1
81 The offset at which to start time trend values. Default is 1, so that
82 if `trend='t'` the trend is equal to 1, 2, ..., nobs. Typically is only
83 set when the model created by extending a previous dataset.
84 use_exact_diffuse : bool, optional, default=False
85 Whether or not to use exact diffuse initialization for non-stationary
86 states. Default is False (in which case approximate diffuse
87 initialization is used).
88 random_state : int, RandomState instance or None, optional ,
89 default=None – If int, random_state is the seed used by the random
90 number generator; If RandomState instance, random_state is the random
91 number generator; If None, the random number generator is the
92 RandomState instance used by np.random.
93
94 See Also
95 --------
96 ARIMA
97 AutoARIMA
98 StatsForecastAutoARIMA
99
100 References
101 ----------
102 .. [1] Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles
103 and practice. OTexts, 2014.
104
105 Examples
106 --------
107 >>> from sktime.datasets import load_airline
108 >>> from sktime.forecasting.sarimax import SARIMAX
109 >>> y = load_airline()
110 >>> forecaster = SARIMAX(
111 ... order=(1, 0, 0), trend="t", seasonal_order=(1, 0, 0, 6)) # doctest: +SKIP
112 ... )
113 >>> forecaster.fit(y) # doctest: +SKIP
114 SARIMAX(...)
115 >>> y_pred = forecaster.predict(fh=y.index) # doctest: +SKIP
116 """
117
118 _tags = {
119 "ignores-exogeneous-X": False,
120 "capability:pred_int": True,
121 "capability:pred_int:insample": True,
122 }
123
124 def __init__(
125 self,
126 order=(1, 0, 0),
127 seasonal_order=(0, 0, 0, 0),
128 trend="c",
129 measurement_error=False,
130 time_varying_regression=False,
131 mle_regression=True,
132 simple_differencing=False,
133 enforce_stationarity=True,
134 enforce_invertibility=True,
135 hamilton_representation=False,
136 concentrate_scale=False,
137 trend_offset=1,
138 use_exact_diffuse=False,
139 dates=None,
140 freq=None,
141 missing="none",
142 validate_specification=True,
143 random_state=None,
144 ):
145 self.order = order
146 self.seasonal_order = seasonal_order
147 self.trend = trend
148 self.measurement_error = measurement_error
149 self.time_varying_regression = time_varying_regression
150 self.mle_regression = mle_regression
151 self.simple_differencing = simple_differencing
152 self.enforce_stationarity = enforce_stationarity
153 self.enforce_invertibility = enforce_invertibility
154 self.hamilton_representation = hamilton_representation
155 self.concentrate_scale = concentrate_scale
156 self.trend_offset = trend_offset
157 self.use_exact_diffuse = use_exact_diffuse
158 self.dates = dates
159 self.freq = freq
160 self.missing = missing
161 self.validate_specification = validate_specification
162
163 super().__init__(random_state=random_state)
164
165 def _fit_forecaster(self, y, X=None):
166 from statsmodels.tsa.api import SARIMAX as _SARIMAX
167
168 self._forecaster = _SARIMAX(
169 endog=y,
170 exog=X,
171 order=self.order,
172 seasonal_order=self.seasonal_order,
173 trend=self.trend,
174 measurement_error=self.measurement_error,
175 time_varying_regression=self.time_varying_regression,
176 mle_regression=self.mle_regression,
177 simple_differencing=self.simple_differencing,
178 enforce_stationarity=self.enforce_stationarity,
179 enforce_invertibility=self.enforce_invertibility,
180 hamilton_representation=self.hamilton_representation,
181 concentrate_scale=self.concentrate_scale,
182 trend_offset=self.trend_offset,
183 use_exact_diffuse=self.use_exact_diffuse,
184 dates=self.dates,
185 freq=self.freq,
186 missing=self.missing,
187 validate_specification=self.validate_specification,
188 )
189 self._fitted_forecaster = self._forecaster.fit()
190
191 def summary(self):
192 """Get a summary of the fitted forecaster.
193
194 This is the same as the implementation in statsmodels:
195 https://www.statsmodels.org/dev/examples/notebooks/generated/statespace_structural_harvey_jaeger.html
196 """
197 return self._fitted_forecaster.summary()
198
199 @staticmethod
200 def _extract_conf_int(prediction_results, alpha) -> pd.DataFrame:
201 """Construct confidence interval at specified `alpha` for each timestep.
202
203 Parameters
204 ----------
205 prediction_results : PredictionResults
206 results class, as returned by ``self._fitted_forecaster.get_prediction``
207 alpha : float
208 one minus nominal coverage
209
210 Returns
211 -------
212 pd.DataFrame
213 confidence intervals at each timestep
214
215 The dataframe must have at least two columns ``lower`` and ``upper``, and
216 the row indices must be integers relative to ``self.cutoff``. Order of
217 columns do not matter, and row indices must be a superset of relative
218 integer horizon of ``fh``.
219 """
220 conf_int = prediction_results.conf_int(alpha=alpha)
221 conf_int.columns = ["lower", "upper"]
222
223 return conf_int
224
225 @classmethod
226 def get_test_params(cls, parameter_set="default"):
227 """Return testing parameter settings for the estimator.
228
229 Parameters
230 ----------
231 parameter_set : str, default="default"
232 Name of the set of test parameters to return, for use in tests. If no
233 special parameters are defined for a value, will return `"default"` set.
234 There are currently no reserved values for forecasters.
235
236 Returns
237 -------
238 params : dict or list of dict, default = {}
239 Parameters to create testing instances of the class
240 Each dict are parameters to construct an "interesting" test instance, i.e.,
241 `MyClass(**params)` or `MyClass(**params[i])` creates a valid test instance.
242 `create_test_instance` uses the first (or only) dictionary in `params`
243 """
244 return [
245 # this fails - seems like statsmodels error
246 # {
247 # "order": (4, 1, 2),
248 # "trend": "ct",
249 # "time_varying_regression": True,
250 # "enforce_stationarity": False,
251 # "enforce_invertibility": False,
252 # "concentrate_scale": True,
253 # "use_exact_diffuse": True,
254 # "mle_regression": False,
255 # },
256 {
257 "order": (2, 1, 2),
258 "trend": "ct",
259 "enforce_stationarity": False,
260 "enforce_invertibility": False,
261 },
262 {
263 "order": [1, 0, 1],
264 "trend": [1, 1, 0, 1],
265 # It does not work with measurement_error, not sure why.
266 # "measurement_error": True,
267 "seasonal_order": (1, 0, 1, 2),
268 "hamilton_representation": True,
269 "simple_differencing": True,
270 },
271 ]
272
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sktime/forecasting/sarimax.py b/sktime/forecasting/sarimax.py
--- a/sktime/forecasting/sarimax.py
+++ b/sktime/forecasting/sarimax.py
@@ -85,7 +85,9 @@
Whether or not to use exact diffuse initialization for non-stationary
states. Default is False (in which case approximate diffuse
initialization is used).
- random_state : int, RandomState instance or None, optional ,
+ disp : bool, optional, default=False
+ Set to True to print convergence messages.
+ random_state : int, RandomState instance or None, optional, default=None
default=None – If int, random_state is the seed used by the random
number generator; If RandomState instance, random_state is the random
number generator; If None, the random number generator is the
@@ -140,6 +142,7 @@
freq=None,
missing="none",
validate_specification=True,
+ disp=False,
random_state=None,
):
self.order = order
@@ -160,6 +163,9 @@
self.missing = missing
self.validate_specification = validate_specification
+ # Fit params
+ self.disp = disp
+
super().__init__(random_state=random_state)
def _fit_forecaster(self, y, X=None):
@@ -186,7 +192,7 @@
missing=self.missing,
validate_specification=self.validate_specification,
)
- self._fitted_forecaster = self._forecaster.fit()
+ self._fitted_forecaster = self._forecaster.fit(disp=self.disp)
def summary(self):
"""Get a summary of the fitted forecaster.
|
{"golden_diff": "diff --git a/sktime/forecasting/sarimax.py b/sktime/forecasting/sarimax.py\n--- a/sktime/forecasting/sarimax.py\n+++ b/sktime/forecasting/sarimax.py\n@@ -85,7 +85,9 @@\n Whether or not to use exact diffuse initialization for non-stationary\n states. Default is False (in which case approximate diffuse\n initialization is used).\n- random_state : int, RandomState instance or None, optional ,\n+ disp : bool, optional, default=False\n+ Set to True to print convergence messages.\n+ random_state : int, RandomState instance or None, optional, default=None\n default=None \u2013 If int, random_state is the seed used by the random\n number generator; If RandomState instance, random_state is the random\n number generator; If None, the random number generator is the\n@@ -140,6 +142,7 @@\n freq=None,\n missing=\"none\",\n validate_specification=True,\n+ disp=False,\n random_state=None,\n ):\n self.order = order\n@@ -160,6 +163,9 @@\n self.missing = missing\n self.validate_specification = validate_specification\n \n+ # Fit params\n+ self.disp = disp\n+\n super().__init__(random_state=random_state)\n \n def _fit_forecaster(self, y, X=None):\n@@ -186,7 +192,7 @@\n missing=self.missing,\n validate_specification=self.validate_specification,\n )\n- self._fitted_forecaster = self._forecaster.fit()\n+ self._fitted_forecaster = self._forecaster.fit(disp=self.disp)\n \n def summary(self):\n \"\"\"Get a summary of the fitted forecaster.\n", "issue": "[ENH] Adding a parameter to forecasters from statsmodels to control verbosity of output\n**Is your feature request related to a problem? Please describe.**\r\nSome forecasters (such as SARIMAX) that are direct wrappers of statsmodels forecasters, print multiple messages when fitting the model (see image for an example). There is currently no way of controling the level of the printed output.\r\n\r\n\r\n\r\n\r\n**Describe the solution you'd like**\r\nIt would be nice to have a parameter `verbose` to control the level of verbosity. The stasmodels implementation has a parameter called `disp` (see https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.fit.html#statsmodels.tsa.statespace.sarimax.SARIMAX.fit) that can do this, but there is currently no way of accessing this parameter from the sktime implementation.\r\n\r\n\n", "before_files": [{"content": "# !/usr/bin/env python3 -u\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Implements SARIMAX.\"\"\"\n\n__all__ = [\"SARIMAX\"]\n__author__ = [\"TNTran92\", \"yarnabrina\"]\n\nimport pandas as pd\n\nfrom sktime.forecasting.base.adapters import _StatsModelsAdapter\n\n\nclass SARIMAX(_StatsModelsAdapter):\n \"\"\"SARIMAX forecaster.\n\n Direct interface for `statsmodels.tsa.api.SARIMAX`.\n\n Parameters\n ----------\n order : iterable or iterable of iterables, optional, default=(1,0,0)\n The (p,d,q) order of the model for the number of AR parameters,\n differences, and MA parameters. `d` must be an integer\n indicating the integration order of the process, while\n `p` and `q` may either be an integers indicating the AR and MA\n orders (so that all lags up to those orders are included) or else\n iterables giving specific AR and / or MA lags to include. 
Default is\n an AR(1) model: (1,0,0).\n seasonal_order : iterable, optional, default=(0,0,0,0)\n The (P,D,Q,s) order of the seasonal component of the model for the\n AR parameters, differences, MA parameters, and periodicity.\n `D` must be an integer indicating the integration order of the process,\n while `P` and `Q` may either be an integers indicating the AR and MA\n orders (so that all lags up to those orders are included) or else\n iterables giving specific AR and / or MA lags to include. `s` is an\n integer giving the periodicity (number of periods in season), often it\n is 4 for quarterly data or 12 for monthly data. Default is no seasonal\n effect.\n trend : str{'n','c','t','ct'} or iterable, optional, default=\"c\"\n Parameter controlling the deterministic trend polynomial :math:`A(t)`.\n Can be specified as a string where 'c' indicates a constant (i.e. a\n degree zero component of the trend polynomial), 't' indicates a\n linear trend with time, and 'ct' is both. Can also be specified as an\n iterable defining the non-zero polynomial exponents to include, in\n increasing order. For example, `[1,1,0,1]` denotes\n :math:`a + bt + ct^3`. Default is to not include a trend component.\n measurement_error : bool, optional, default=False\n Whether or not to assume the endogenous observations `endog` were\n measured with error.\n time_varying_regression : bool, optional, default=False\n Used when an explanatory variables, `exog`, are provided\n to select whether or not coefficients on the exogenous regressors are\n allowed to vary over time.\n mle_regression : bool, optional, default=True\n Whether or not to use estimate the regression coefficients for the\n exogenous variables as part of maximum likelihood estimation or through\n the Kalman filter (i.e. recursive least squares). If\n `time_varying_regression` is True, this must be set to False.\n simple_differencing : bool, optional, default=False\n Whether or not to use partially conditional maximum likelihood\n estimation. If True, differencing is performed prior to estimation,\n which discards the first :math:`s D + d` initial rows but results in a\n smaller state-space formulation. See the Notes section for important\n details about interpreting results when this option is used. If False,\n the full SARIMAX model is put in state-space form so that all\n datapoints can be used in estimation.\n enforce_stationarity : bool, optional, default=True\n Whether or not to transform the AR parameters to enforce stationarity\n in the autoregressive component of the model.\n enforce_invertibility : bool, optional, default=True\n Whether or not to transform the MA parameters to enforce invertibility\n in the moving average component of the model.\n hamilton_representation : bool, optional, default=False\n Whether or not to use the Hamilton representation of an ARMA process\n (if True) or the Harvey representation (if False).\n concentrate_scale : bool, optional, default=False\n Whether or not to concentrate the scale (variance of the error term)\n out of the likelihood. This reduces the number of parameters estimated\n by maximum likelihood by one, but standard errors will then not\n be available for the scale parameter.\n trend_offset : int, optional, default=1\n The offset at which to start time trend values. Default is 1, so that\n if `trend='t'` the trend is equal to 1, 2, ..., nobs. 
Typically is only\n set when the model created by extending a previous dataset.\n use_exact_diffuse : bool, optional, default=False\n Whether or not to use exact diffuse initialization for non-stationary\n states. Default is False (in which case approximate diffuse\n initialization is used).\n random_state : int, RandomState instance or None, optional ,\n default=None \u2013 If int, random_state is the seed used by the random\n number generator; If RandomState instance, random_state is the random\n number generator; If None, the random number generator is the\n RandomState instance used by np.random.\n\n See Also\n --------\n ARIMA\n AutoARIMA\n StatsForecastAutoARIMA\n\n References\n ----------\n .. [1] Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles\n and practice. OTexts, 2014.\n\n Examples\n --------\n >>> from sktime.datasets import load_airline\n >>> from sktime.forecasting.sarimax import SARIMAX\n >>> y = load_airline()\n >>> forecaster = SARIMAX(\n ... order=(1, 0, 0), trend=\"t\", seasonal_order=(1, 0, 0, 6)) # doctest: +SKIP\n ... )\n >>> forecaster.fit(y) # doctest: +SKIP\n SARIMAX(...)\n >>> y_pred = forecaster.predict(fh=y.index) # doctest: +SKIP\n \"\"\"\n\n _tags = {\n \"ignores-exogeneous-X\": False,\n \"capability:pred_int\": True,\n \"capability:pred_int:insample\": True,\n }\n\n def __init__(\n self,\n order=(1, 0, 0),\n seasonal_order=(0, 0, 0, 0),\n trend=\"c\",\n measurement_error=False,\n time_varying_regression=False,\n mle_regression=True,\n simple_differencing=False,\n enforce_stationarity=True,\n enforce_invertibility=True,\n hamilton_representation=False,\n concentrate_scale=False,\n trend_offset=1,\n use_exact_diffuse=False,\n dates=None,\n freq=None,\n missing=\"none\",\n validate_specification=True,\n random_state=None,\n ):\n self.order = order\n self.seasonal_order = seasonal_order\n self.trend = trend\n self.measurement_error = measurement_error\n self.time_varying_regression = time_varying_regression\n self.mle_regression = mle_regression\n self.simple_differencing = simple_differencing\n self.enforce_stationarity = enforce_stationarity\n self.enforce_invertibility = enforce_invertibility\n self.hamilton_representation = hamilton_representation\n self.concentrate_scale = concentrate_scale\n self.trend_offset = trend_offset\n self.use_exact_diffuse = use_exact_diffuse\n self.dates = dates\n self.freq = freq\n self.missing = missing\n self.validate_specification = validate_specification\n\n super().__init__(random_state=random_state)\n\n def _fit_forecaster(self, y, X=None):\n from statsmodels.tsa.api import SARIMAX as _SARIMAX\n\n self._forecaster = _SARIMAX(\n endog=y,\n exog=X,\n order=self.order,\n seasonal_order=self.seasonal_order,\n trend=self.trend,\n measurement_error=self.measurement_error,\n time_varying_regression=self.time_varying_regression,\n mle_regression=self.mle_regression,\n simple_differencing=self.simple_differencing,\n enforce_stationarity=self.enforce_stationarity,\n enforce_invertibility=self.enforce_invertibility,\n hamilton_representation=self.hamilton_representation,\n concentrate_scale=self.concentrate_scale,\n trend_offset=self.trend_offset,\n use_exact_diffuse=self.use_exact_diffuse,\n dates=self.dates,\n freq=self.freq,\n missing=self.missing,\n validate_specification=self.validate_specification,\n )\n self._fitted_forecaster = self._forecaster.fit()\n\n def summary(self):\n \"\"\"Get a summary of the fitted forecaster.\n\n This is the same as the implementation in statsmodels:\n 
https://www.statsmodels.org/dev/examples/notebooks/generated/statespace_structural_harvey_jaeger.html\n \"\"\"\n return self._fitted_forecaster.summary()\n\n @staticmethod\n def _extract_conf_int(prediction_results, alpha) -> pd.DataFrame:\n \"\"\"Construct confidence interval at specified `alpha` for each timestep.\n\n Parameters\n ----------\n prediction_results : PredictionResults\n results class, as returned by ``self._fitted_forecaster.get_prediction``\n alpha : float\n one minus nominal coverage\n\n Returns\n -------\n pd.DataFrame\n confidence intervals at each timestep\n\n The dataframe must have at least two columns ``lower`` and ``upper``, and\n the row indices must be integers relative to ``self.cutoff``. Order of\n columns do not matter, and row indices must be a superset of relative\n integer horizon of ``fh``.\n \"\"\"\n conf_int = prediction_results.conf_int(alpha=alpha)\n conf_int.columns = [\"lower\", \"upper\"]\n\n return conf_int\n\n @classmethod\n def get_test_params(cls, parameter_set=\"default\"):\n \"\"\"Return testing parameter settings for the estimator.\n\n Parameters\n ----------\n parameter_set : str, default=\"default\"\n Name of the set of test parameters to return, for use in tests. If no\n special parameters are defined for a value, will return `\"default\"` set.\n There are currently no reserved values for forecasters.\n\n Returns\n -------\n params : dict or list of dict, default = {}\n Parameters to create testing instances of the class\n Each dict are parameters to construct an \"interesting\" test instance, i.e.,\n `MyClass(**params)` or `MyClass(**params[i])` creates a valid test instance.\n `create_test_instance` uses the first (or only) dictionary in `params`\n \"\"\"\n return [\n # this fails - seems like statsmodels error\n # {\n # \"order\": (4, 1, 2),\n # \"trend\": \"ct\",\n # \"time_varying_regression\": True,\n # \"enforce_stationarity\": False,\n # \"enforce_invertibility\": False,\n # \"concentrate_scale\": True,\n # \"use_exact_diffuse\": True,\n # \"mle_regression\": False,\n # },\n {\n \"order\": (2, 1, 2),\n \"trend\": \"ct\",\n \"enforce_stationarity\": False,\n \"enforce_invertibility\": False,\n },\n {\n \"order\": [1, 0, 1],\n \"trend\": [1, 1, 0, 1],\n # It does not work with measurement_error, not sure why.\n # \"measurement_error\": True,\n \"seasonal_order\": (1, 0, 1, 2),\n \"hamilton_representation\": True,\n \"simple_differencing\": True,\n },\n ]\n", "path": "sktime/forecasting/sarimax.py"}], "after_files": [{"content": "# !/usr/bin/env python3 -u\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Implements SARIMAX.\"\"\"\n\n__all__ = [\"SARIMAX\"]\n__author__ = [\"TNTran92\", \"yarnabrina\"]\n\nimport pandas as pd\n\nfrom sktime.forecasting.base.adapters import _StatsModelsAdapter\n\n\nclass SARIMAX(_StatsModelsAdapter):\n \"\"\"SARIMAX forecaster.\n\n Direct interface for `statsmodels.tsa.api.SARIMAX`.\n\n Parameters\n ----------\n order : iterable or iterable of iterables, optional, default=(1,0,0)\n The (p,d,q) order of the model for the number of AR parameters,\n differences, and MA parameters. `d` must be an integer\n indicating the integration order of the process, while\n `p` and `q` may either be an integers indicating the AR and MA\n orders (so that all lags up to those orders are included) or else\n iterables giving specific AR and / or MA lags to include. 
Default is\n an AR(1) model: (1,0,0).\n seasonal_order : iterable, optional, default=(0,0,0,0)\n The (P,D,Q,s) order of the seasonal component of the model for the\n AR parameters, differences, MA parameters, and periodicity.\n `D` must be an integer indicating the integration order of the process,\n while `P` and `Q` may either be an integers indicating the AR and MA\n orders (so that all lags up to those orders are included) or else\n iterables giving specific AR and / or MA lags to include. `s` is an\n integer giving the periodicity (number of periods in season), often it\n is 4 for quarterly data or 12 for monthly data. Default is no seasonal\n effect.\n trend : str{'n','c','t','ct'} or iterable, optional, default=\"c\"\n Parameter controlling the deterministic trend polynomial :math:`A(t)`.\n Can be specified as a string where 'c' indicates a constant (i.e. a\n degree zero component of the trend polynomial), 't' indicates a\n linear trend with time, and 'ct' is both. Can also be specified as an\n iterable defining the non-zero polynomial exponents to include, in\n increasing order. For example, `[1,1,0,1]` denotes\n :math:`a + bt + ct^3`. Default is to not include a trend component.\n measurement_error : bool, optional, default=False\n Whether or not to assume the endogenous observations `endog` were\n measured with error.\n time_varying_regression : bool, optional, default=False\n Used when an explanatory variables, `exog`, are provided\n to select whether or not coefficients on the exogenous regressors are\n allowed to vary over time.\n mle_regression : bool, optional, default=True\n Whether or not to use estimate the regression coefficients for the\n exogenous variables as part of maximum likelihood estimation or through\n the Kalman filter (i.e. recursive least squares). If\n `time_varying_regression` is True, this must be set to False.\n simple_differencing : bool, optional, default=False\n Whether or not to use partially conditional maximum likelihood\n estimation. If True, differencing is performed prior to estimation,\n which discards the first :math:`s D + d` initial rows but results in a\n smaller state-space formulation. See the Notes section for important\n details about interpreting results when this option is used. If False,\n the full SARIMAX model is put in state-space form so that all\n datapoints can be used in estimation.\n enforce_stationarity : bool, optional, default=True\n Whether or not to transform the AR parameters to enforce stationarity\n in the autoregressive component of the model.\n enforce_invertibility : bool, optional, default=True\n Whether or not to transform the MA parameters to enforce invertibility\n in the moving average component of the model.\n hamilton_representation : bool, optional, default=False\n Whether or not to use the Hamilton representation of an ARMA process\n (if True) or the Harvey representation (if False).\n concentrate_scale : bool, optional, default=False\n Whether or not to concentrate the scale (variance of the error term)\n out of the likelihood. This reduces the number of parameters estimated\n by maximum likelihood by one, but standard errors will then not\n be available for the scale parameter.\n trend_offset : int, optional, default=1\n The offset at which to start time trend values. Default is 1, so that\n if `trend='t'` the trend is equal to 1, 2, ..., nobs. 
Typically is only\n set when the model created by extending a previous dataset.\n use_exact_diffuse : bool, optional, default=False\n Whether or not to use exact diffuse initialization for non-stationary\n states. Default is False (in which case approximate diffuse\n initialization is used).\n disp : bool, optional, default=False\n Set to True to print convergence messages.\n random_state : int, RandomState instance or None, optional, default=None\n default=None \u2013 If int, random_state is the seed used by the random\n number generator; If RandomState instance, random_state is the random\n number generator; If None, the random number generator is the\n RandomState instance used by np.random.\n\n See Also\n --------\n ARIMA\n AutoARIMA\n StatsForecastAutoARIMA\n\n References\n ----------\n .. [1] Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles\n and practice. OTexts, 2014.\n\n Examples\n --------\n >>> from sktime.datasets import load_airline\n >>> from sktime.forecasting.sarimax import SARIMAX\n >>> y = load_airline()\n >>> forecaster = SARIMAX(\n ... order=(1, 0, 0), trend=\"t\", seasonal_order=(1, 0, 0, 6)) # doctest: +SKIP\n ... )\n >>> forecaster.fit(y) # doctest: +SKIP\n SARIMAX(...)\n >>> y_pred = forecaster.predict(fh=y.index) # doctest: +SKIP\n \"\"\"\n\n _tags = {\n \"ignores-exogeneous-X\": False,\n \"capability:pred_int\": True,\n \"capability:pred_int:insample\": True,\n }\n\n def __init__(\n self,\n order=(1, 0, 0),\n seasonal_order=(0, 0, 0, 0),\n trend=\"c\",\n measurement_error=False,\n time_varying_regression=False,\n mle_regression=True,\n simple_differencing=False,\n enforce_stationarity=True,\n enforce_invertibility=True,\n hamilton_representation=False,\n concentrate_scale=False,\n trend_offset=1,\n use_exact_diffuse=False,\n dates=None,\n freq=None,\n missing=\"none\",\n validate_specification=True,\n disp=False,\n random_state=None,\n ):\n self.order = order\n self.seasonal_order = seasonal_order\n self.trend = trend\n self.measurement_error = measurement_error\n self.time_varying_regression = time_varying_regression\n self.mle_regression = mle_regression\n self.simple_differencing = simple_differencing\n self.enforce_stationarity = enforce_stationarity\n self.enforce_invertibility = enforce_invertibility\n self.hamilton_representation = hamilton_representation\n self.concentrate_scale = concentrate_scale\n self.trend_offset = trend_offset\n self.use_exact_diffuse = use_exact_diffuse\n self.dates = dates\n self.freq = freq\n self.missing = missing\n self.validate_specification = validate_specification\n\n # Fit params\n self.disp = disp\n\n super().__init__(random_state=random_state)\n\n def _fit_forecaster(self, y, X=None):\n from statsmodels.tsa.api import SARIMAX as _SARIMAX\n\n self._forecaster = _SARIMAX(\n endog=y,\n exog=X,\n order=self.order,\n seasonal_order=self.seasonal_order,\n trend=self.trend,\n measurement_error=self.measurement_error,\n time_varying_regression=self.time_varying_regression,\n mle_regression=self.mle_regression,\n simple_differencing=self.simple_differencing,\n enforce_stationarity=self.enforce_stationarity,\n enforce_invertibility=self.enforce_invertibility,\n hamilton_representation=self.hamilton_representation,\n concentrate_scale=self.concentrate_scale,\n trend_offset=self.trend_offset,\n use_exact_diffuse=self.use_exact_diffuse,\n dates=self.dates,\n freq=self.freq,\n missing=self.missing,\n validate_specification=self.validate_specification,\n )\n self._fitted_forecaster = 
self._forecaster.fit(disp=self.disp)\n\n def summary(self):\n \"\"\"Get a summary of the fitted forecaster.\n\n This is the same as the implementation in statsmodels:\n https://www.statsmodels.org/dev/examples/notebooks/generated/statespace_structural_harvey_jaeger.html\n \"\"\"\n return self._fitted_forecaster.summary()\n\n @staticmethod\n def _extract_conf_int(prediction_results, alpha) -> pd.DataFrame:\n \"\"\"Construct confidence interval at specified `alpha` for each timestep.\n\n Parameters\n ----------\n prediction_results : PredictionResults\n results class, as returned by ``self._fitted_forecaster.get_prediction``\n alpha : float\n one minus nominal coverage\n\n Returns\n -------\n pd.DataFrame\n confidence intervals at each timestep\n\n The dataframe must have at least two columns ``lower`` and ``upper``, and\n the row indices must be integers relative to ``self.cutoff``. Order of\n columns do not matter, and row indices must be a superset of relative\n integer horizon of ``fh``.\n \"\"\"\n conf_int = prediction_results.conf_int(alpha=alpha)\n conf_int.columns = [\"lower\", \"upper\"]\n\n return conf_int\n\n @classmethod\n def get_test_params(cls, parameter_set=\"default\"):\n \"\"\"Return testing parameter settings for the estimator.\n\n Parameters\n ----------\n parameter_set : str, default=\"default\"\n Name of the set of test parameters to return, for use in tests. If no\n special parameters are defined for a value, will return `\"default\"` set.\n There are currently no reserved values for forecasters.\n\n Returns\n -------\n params : dict or list of dict, default = {}\n Parameters to create testing instances of the class\n Each dict are parameters to construct an \"interesting\" test instance, i.e.,\n `MyClass(**params)` or `MyClass(**params[i])` creates a valid test instance.\n `create_test_instance` uses the first (or only) dictionary in `params`\n \"\"\"\n return [\n # this fails - seems like statsmodels error\n # {\n # \"order\": (4, 1, 2),\n # \"trend\": \"ct\",\n # \"time_varying_regression\": True,\n # \"enforce_stationarity\": False,\n # \"enforce_invertibility\": False,\n # \"concentrate_scale\": True,\n # \"use_exact_diffuse\": True,\n # \"mle_regression\": False,\n # },\n {\n \"order\": (2, 1, 2),\n \"trend\": \"ct\",\n \"enforce_stationarity\": False,\n \"enforce_invertibility\": False,\n },\n {\n \"order\": [1, 0, 1],\n \"trend\": [1, 1, 0, 1],\n # It does not work with measurement_error, not sure why.\n # \"measurement_error\": True,\n \"seasonal_order\": (1, 0, 1, 2),\n \"hamilton_representation\": True,\n \"simple_differencing\": True,\n },\n ]\n", "path": "sktime/forecasting/sarimax.py"}]}
| 3,842 | 397 |
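The patch above stores a new `disp` flag on the estimator and forwards it to statsmodels' `fit()`. As a minimal usage sketch of the patched forecaster (it assumes the patched sktime build and the bundled airline dataset; the chosen orders and horizon are arbitrary):

```python
# Minimal sketch of the patched SARIMAX with the new `disp` kwarg from the diff above.
from sktime.datasets import load_airline
from sktime.forecasting.sarimax import SARIMAX

y = load_airline()

# disp=False is forwarded to statsmodels' fit(), so the optimizer's
# convergence messages are no longer printed during fitting.
forecaster = SARIMAX(order=(1, 0, 0), trend="t", seasonal_order=(1, 0, 0, 12), disp=False)
forecaster.fit(y)
y_pred = forecaster.predict(fh=[1, 2, 3])
```

Setting `disp=True` restores the convergence output for users who want to inspect the optimizer's progress.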
gh_patches_debug_629
|
rasdani/github-patches
|
git_diff
|
zulip__zulip-8684
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
lint rules: Prevent `return undefined;`
We should sweep the code to replace `return undefined;` with `return;`, and then make a lint rule for it, either via eslint (if they support that) or by making a custom rule.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `version.py`
Content:
```
1 ZULIP_VERSION = "1.7.1+git"
2
3 # Bump the minor PROVISION_VERSION to indicate that folks should provision
4 # only when going from an old version of the code to a newer version. Bump
5 # the major version to indicate that folks should provision in both
6 # directions.
7
8 # Typically, adding a dependency only requires a minor version bump, and
9 # removing a dependency requires a major version bump.
10
11 PROVISION_VERSION = '15.9'
12
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -8,4 +8,4 @@
# Typically, adding a dependency only requires a minor version bump, and
# removing a dependency requires a major version bump.
-PROVISION_VERSION = '15.9'
+PROVISION_VERSION = '15.10'
|
{"golden_diff": "diff --git a/version.py b/version.py\n--- a/version.py\n+++ b/version.py\n@@ -8,4 +8,4 @@\n # Typically, adding a dependency only requires a minor version bump, and\n # removing a dependency requires a major version bump.\n \n-PROVISION_VERSION = '15.9'\n+PROVISION_VERSION = '15.10'\n", "issue": "lint rules: Prevent `return undefined;`\nWe should sweep the code to replace `return undefined;` with `return;`, and then make a lint rule for it, either via eslint (if they support that) or by making a custom rule.\n", "before_files": [{"content": "ZULIP_VERSION = \"1.7.1+git\"\n\n# Bump the minor PROVISION_VERSION to indicate that folks should provision\n# only when going from an old version of the code to a newer version. Bump\n# the major version to indicate that folks should provision in both\n# directions.\n\n# Typically, adding a dependency only requires a minor version bump, and\n# removing a dependency requires a major version bump.\n\nPROVISION_VERSION = '15.9'\n", "path": "version.py"}], "after_files": [{"content": "ZULIP_VERSION = \"1.7.1+git\"\n\n# Bump the minor PROVISION_VERSION to indicate that folks should provision\n# only when going from an old version of the code to a newer version. Bump\n# the major version to indicate that folks should provision in both\n# directions.\n\n# Typically, adding a dependency only requires a minor version bump, and\n# removing a dependency requires a major version bump.\n\nPROVISION_VERSION = '15.10'\n", "path": "version.py"}]}
| 424 | 79 |
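The committed diff only bumps `PROVISION_VERSION`; the sweep and lint rule described in the issue live in the JavaScript sources and linter configuration. As a rough, hypothetical sketch of the sweep step (the search root and file glob are assumptions, not Zulip's actual layout):

```python
# Hypothetical helper for the sweep described in the issue: list every `return undefined;`
# occurrence so it can be replaced with a bare `return;`. Paths are illustrative only.
import pathlib
import re

PATTERN = re.compile(r"\breturn\s+undefined\s*;")

def find_return_undefined(root: str = "static/js") -> list[str]:
    hits = []
    for path in pathlib.Path(root).rglob("*.js"):
        for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
            if PATTERN.search(line):
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits

if __name__ == "__main__":
    print("\n".join(find_return_undefined()))
```

A lint rule would then fail the build whenever this pattern reappears.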
gh_patches_debug_3601
|
rasdani/github-patches
|
git_diff
|
cornellius-gp__gpytorch-1647
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Slow convergence of MultiTask regressor and unreliable results.
Hello everyone,
Currently I am trying to train a model that predicts multiple real-valued properties of a cellular image. I have a baseline that uses a CNN (e.g. ResNet) as the feature extractor and an FC head as the predictor.
I would like to try a GP as the predictor instead, and I implemented a simple network following the multitask tutorial and the gpshot repo ( https://github.com/BayesWatch/deep-kernel-transfer/blob/master/methods/gpshot_regression.py ).
Simplified code of the GP layer and the main model:
```
class GPBasedModel(torch.nn.Module):
def __init__(self, model_name, likelihood, out_features=10):
super(GPBasedModel, self).__init__()
self.body, feature_dim = create_body(...)
train_feats = torch.FloatTensor(torch.zeros(32, feature_dim)).cuda()
train_y = torch.FloatTensor(torch.zeros(32,out_features)).cuda()
self.gp_layer = BatchIndependentMultitaskGPModel(train_feats, train_y, likelihood, out_dim)
self.feature_norm = torch.nn.Sequential(
torch.nn.BatchNorm1d(input_dim)
)
def forward(self, batch):
features = self.body(batch)
features = features.view(features.size(0), -1)
features = self.feature_norm(features) #z-score features
self.gp_layer.set_train_data(inputs=features)
res = self.gp_layer(features)
return res
class BatchIndependentMultitaskGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood, out_dim):
super().__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean(batch_shape=torch.Size([out_dim]))
self.covar_module = gpytorch.kernels.ScaleKernel(
gpytorch.kernels.RBFKernel(batch_shape=torch.Size([out_dim])),
batch_shape=torch.Size([out_dim])
)
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultitaskMultivariateNormal.from_batch_mvn(
gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
)
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=10)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model.gp_layer)
optimizer = torch.optim.Adam([
{'params': model.body.parameters(), 'lr': 1e-3},
{'params': model.meta_head.parameters(), 'lr': 1e-3},
{'params': model.gp_layer.hyperparameters(), 'lr': 1e-1},
], lr=1e-3)
```
The likelihood is defined as MultitaskGaussianLikelihood and the loss function as ExactMarginalLogLikelihood.
However, the results look really bad: convergence is painfully slow compared to the model with the FC head, and the results are much worse (the distribution of predicted values does not resemble the real one), while the MLL stays relatively high (~1000-2000).
I have tried to z-score both the targets and the features (adding BatchNorm for the features), but it did not change the results much: the MLL dropped to ~45, but it stays around this value during training.
So my question is simple: is something wrong with the data or my code, or is a GP model just not suitable for this case?
Thanks in advance!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gpytorch/mlls/exact_marginal_log_likelihood.py`
Content:
```
1 #!/usr/bin/env python3
2
3 from ..distributions import MultivariateNormal
4 from ..likelihoods import _GaussianLikelihoodBase
5 from .marginal_log_likelihood import MarginalLogLikelihood
6
7
8 class ExactMarginalLogLikelihood(MarginalLogLikelihood):
9 """
10 The exact marginal log likelihood (MLL) for an exact Gaussian process with a
11 Gaussian likelihood.
12
13 .. note::
14 This module will not work with anything other than a :obj:`~gpytorch.likelihoods.GaussianLikelihood`
15 and a :obj:`~gpytorch.models.ExactGP`. It also cannot be used in conjunction with
16 stochastic optimization.
17
18 :param ~gpytorch.likelihoods.GaussianLikelihood likelihood: The Gaussian likelihood for the model
19 :param ~gpytorch.models.ExactGP model: The exact GP model
20
21 Example:
22 >>> # model is a gpytorch.models.ExactGP
23 >>> # likelihood is a gpytorch.likelihoods.Likelihood
24 >>> mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
25 >>>
26 >>> output = model(train_x)
27 >>> loss = -mll(output, train_y)
28 >>> loss.backward()
29 """
30
31 def __init__(self, likelihood, model):
32 if not isinstance(likelihood, _GaussianLikelihoodBase):
33 raise RuntimeError("Likelihood must be Gaussian for exact inference")
34 super(ExactMarginalLogLikelihood, self).__init__(likelihood, model)
35
36 def _add_other_terms(self, res, params):
37 # Add additional terms (SGPR / learned inducing points, heteroskedastic likelihood models)
38 for added_loss_term in self.model.added_loss_terms():
39 res = res.add(added_loss_term.loss(*params))
40
41 # Add log probs of priors on the (functions of) parameters
42 for name, module, prior, closure, _ in self.named_priors():
43 res.add_(prior.log_prob(closure(module)).sum())
44
45 return res
46
47 def forward(self, function_dist, target, *params):
48 r"""
49 Computes the MLL given :math:`p(\mathbf f)` and :math:`\mathbf y`.
50
51 :param ~gpytorch.distributions.MultivariateNormal function_dist: :math:`p(\mathbf f)`
52 the outputs of the latent function (the :obj:`gpytorch.models.ExactGP`)
53 :param torch.Tensor target: :math:`\mathbf y` The target values
54 :rtype: torch.Tensor
55 :return: Exact MLL. Output shape corresponds to batch shape of the model/input data.
56 """
57 if not isinstance(function_dist, MultivariateNormal):
58 raise RuntimeError("ExactMarginalLogLikelihood can only operate on Gaussian random variables")
59
60 # Get the log prob of the marginal distribution
61 output = self.likelihood(function_dist, *params)
62 res = output.log_prob(target)
63 res = self._add_other_terms(res, params)
64
65 # Scale by the amount of data we have
66 num_data = target.size(-1)
67 return res.div_(num_data)
68
69 def pyro_factor(self, output, target, *params):
70 import pyro
71
72 mll = target.size(-1) * self(output, target, *params)
73 pyro.factor("gp_mll", mll)
74 return mll
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gpytorch/mlls/exact_marginal_log_likelihood.py b/gpytorch/mlls/exact_marginal_log_likelihood.py
--- a/gpytorch/mlls/exact_marginal_log_likelihood.py
+++ b/gpytorch/mlls/exact_marginal_log_likelihood.py
@@ -63,7 +63,7 @@
res = self._add_other_terms(res, params)
# Scale by the amount of data we have
- num_data = target.size(-1)
+ num_data = function_dist.event_shape.numel()
return res.div_(num_data)
def pyro_factor(self, output, target, *params):
|
{"golden_diff": "diff --git a/gpytorch/mlls/exact_marginal_log_likelihood.py b/gpytorch/mlls/exact_marginal_log_likelihood.py\n--- a/gpytorch/mlls/exact_marginal_log_likelihood.py\n+++ b/gpytorch/mlls/exact_marginal_log_likelihood.py\n@@ -63,7 +63,7 @@\n res = self._add_other_terms(res, params)\n \n # Scale by the amount of data we have\n- num_data = target.size(-1)\n+ num_data = function_dist.event_shape.numel()\n return res.div_(num_data)\n \n def pyro_factor(self, output, target, *params):\n", "issue": "Slow convergence of MultiTask regressor and unreliable results.\nHello everyone,\r\n\r\nCurrently I am trying to learn a model to predict multiple real-valued properties of a cellular image. I have a baseline that uses CNN (e.g. resnet) as feature extractor and FC head as predictor.\r\nI would like to try to use GP as predictor and I implemented some simple network following multitask tutorial and gpshot repo ( https://github.com/BayesWatch/deep-kernel-transfer/blob/master/methods/gpshot_regression.py ). \r\n\r\nSimplified code of GP layer and the main model.\r\n\r\n```\r\nclass GPBasedModel(torch.nn.Module):\r\n def __init__(self, model_name, likelihood, out_features=10):\r\n super(GPBasedModel, self).__init__()\r\n \r\n self.body, feature_dim = create_body(...)\r\n\r\n train_feats = torch.FloatTensor(torch.zeros(32, feature_dim)).cuda()\r\n train_y = torch.FloatTensor(torch.zeros(32,out_features)).cuda()\r\n \r\n self.gp_layer = BatchIndependentMultitaskGPModel(train_feats, train_y, likelihood, out_dim)\r\n \r\n self.feature_norm = torch.nn.Sequential(\r\n torch.nn.BatchNorm1d(input_dim)\r\n )\r\n \r\n def forward(self, batch):\r\n features = self.body(batch)\r\n features = features.view(features.size(0), -1)\r\n\r\n features = self.feature_norm(features) #z-score features\r\n \r\n self.gp_layer.set_train_data(inputs=features)\r\n res = self.gp_layer(features)\r\n \r\n return res \r\n\r\nclass BatchIndependentMultitaskGPModel(gpytorch.models.ExactGP):\r\n def __init__(self, train_x, train_y, likelihood, out_dim):\r\n super().__init__(train_x, train_y, likelihood)\r\n self.mean_module = gpytorch.means.ConstantMean(batch_shape=torch.Size([out_dim]))\r\n self.covar_module = gpytorch.kernels.ScaleKernel(\r\n gpytorch.kernels.RBFKernel(batch_shape=torch.Size([out_dim])),\r\n batch_shape=torch.Size([out_dim])\r\n )\r\n\r\n def forward(self, x):\r\n mean_x = self.mean_module(x)\r\n covar_x = self.covar_module(x)\r\n return gpytorch.distributions.MultitaskMultivariateNormal.from_batch_mvn(\r\n gpytorch.distributions.MultivariateNormal(mean_x, covar_x)\r\n )\r\n\r\nlikelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=10)\r\nmll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model.gp_layer)\r\n\r\noptimizer = torch.optim.Adam([\r\n {'params': model.body.parameters(), 'lr': 1e-3},\r\n {'params': model.meta_head.parameters(), 'lr': 1e-3},\r\n {'params': model.gp_layer.hyperparameters(), 'lr': 1e-1},\r\n ], lr=1e-3)\r\n\r\n```\r\n\r\nlikelihood defined as MultitaskGaussian and loss function as ExactMarginalLogLikelihood. 
\r\n\r\nHowever, results look really bad: convergence is painfully slow comparing to the model with FC head & results are much worse (distribution of predicted values does not look alike real ones) + MLL is relatively high (~1000-2000).\r\n\r\nI have tried to z-score both target & features (add BN for features) but it didnt change results a lot: despite MLL dropped to ~45 but it stays around this value during training.\r\n\r\nSo my question is kinda simple: is smth wrong with the data, my code or GP model is not suitable for that case? \r\n\r\nThanks in advance!\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom ..distributions import MultivariateNormal\nfrom ..likelihoods import _GaussianLikelihoodBase\nfrom .marginal_log_likelihood import MarginalLogLikelihood\n\n\nclass ExactMarginalLogLikelihood(MarginalLogLikelihood):\n \"\"\"\n The exact marginal log likelihood (MLL) for an exact Gaussian process with a\n Gaussian likelihood.\n\n .. note::\n This module will not work with anything other than a :obj:`~gpytorch.likelihoods.GaussianLikelihood`\n and a :obj:`~gpytorch.models.ExactGP`. It also cannot be used in conjunction with\n stochastic optimization.\n\n :param ~gpytorch.likelihoods.GaussianLikelihood likelihood: The Gaussian likelihood for the model\n :param ~gpytorch.models.ExactGP model: The exact GP model\n\n Example:\n >>> # model is a gpytorch.models.ExactGP\n >>> # likelihood is a gpytorch.likelihoods.Likelihood\n >>> mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)\n >>>\n >>> output = model(train_x)\n >>> loss = -mll(output, train_y)\n >>> loss.backward()\n \"\"\"\n\n def __init__(self, likelihood, model):\n if not isinstance(likelihood, _GaussianLikelihoodBase):\n raise RuntimeError(\"Likelihood must be Gaussian for exact inference\")\n super(ExactMarginalLogLikelihood, self).__init__(likelihood, model)\n\n def _add_other_terms(self, res, params):\n # Add additional terms (SGPR / learned inducing points, heteroskedastic likelihood models)\n for added_loss_term in self.model.added_loss_terms():\n res = res.add(added_loss_term.loss(*params))\n\n # Add log probs of priors on the (functions of) parameters\n for name, module, prior, closure, _ in self.named_priors():\n res.add_(prior.log_prob(closure(module)).sum())\n\n return res\n\n def forward(self, function_dist, target, *params):\n r\"\"\"\n Computes the MLL given :math:`p(\\mathbf f)` and :math:`\\mathbf y`.\n\n :param ~gpytorch.distributions.MultivariateNormal function_dist: :math:`p(\\mathbf f)`\n the outputs of the latent function (the :obj:`gpytorch.models.ExactGP`)\n :param torch.Tensor target: :math:`\\mathbf y` The target values\n :rtype: torch.Tensor\n :return: Exact MLL. 
Output shape corresponds to batch shape of the model/input data.\n \"\"\"\n if not isinstance(function_dist, MultivariateNormal):\n raise RuntimeError(\"ExactMarginalLogLikelihood can only operate on Gaussian random variables\")\n\n # Get the log prob of the marginal distribution\n output = self.likelihood(function_dist, *params)\n res = output.log_prob(target)\n res = self._add_other_terms(res, params)\n\n # Scale by the amount of data we have\n num_data = target.size(-1)\n return res.div_(num_data)\n\n def pyro_factor(self, output, target, *params):\n import pyro\n\n mll = target.size(-1) * self(output, target, *params)\n pyro.factor(\"gp_mll\", mll)\n return mll\n", "path": "gpytorch/mlls/exact_marginal_log_likelihood.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nfrom ..distributions import MultivariateNormal\nfrom ..likelihoods import _GaussianLikelihoodBase\nfrom .marginal_log_likelihood import MarginalLogLikelihood\n\n\nclass ExactMarginalLogLikelihood(MarginalLogLikelihood):\n \"\"\"\n The exact marginal log likelihood (MLL) for an exact Gaussian process with a\n Gaussian likelihood.\n\n .. note::\n This module will not work with anything other than a :obj:`~gpytorch.likelihoods.GaussianLikelihood`\n and a :obj:`~gpytorch.models.ExactGP`. It also cannot be used in conjunction with\n stochastic optimization.\n\n :param ~gpytorch.likelihoods.GaussianLikelihood likelihood: The Gaussian likelihood for the model\n :param ~gpytorch.models.ExactGP model: The exact GP model\n\n Example:\n >>> # model is a gpytorch.models.ExactGP\n >>> # likelihood is a gpytorch.likelihoods.Likelihood\n >>> mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)\n >>>\n >>> output = model(train_x)\n >>> loss = -mll(output, train_y)\n >>> loss.backward()\n \"\"\"\n\n def __init__(self, likelihood, model):\n if not isinstance(likelihood, _GaussianLikelihoodBase):\n raise RuntimeError(\"Likelihood must be Gaussian for exact inference\")\n super(ExactMarginalLogLikelihood, self).__init__(likelihood, model)\n\n def _add_other_terms(self, res, params):\n # Add additional terms (SGPR / learned inducing points, heteroskedastic likelihood models)\n for added_loss_term in self.model.added_loss_terms():\n res = res.add(added_loss_term.loss(*params))\n\n # Add log probs of priors on the (functions of) parameters\n for name, module, prior, closure, _ in self.named_priors():\n res.add_(prior.log_prob(closure(module)).sum())\n\n return res\n\n def forward(self, function_dist, target, *params):\n r\"\"\"\n Computes the MLL given :math:`p(\\mathbf f)` and :math:`\\mathbf y`.\n\n :param ~gpytorch.distributions.MultivariateNormal function_dist: :math:`p(\\mathbf f)`\n the outputs of the latent function (the :obj:`gpytorch.models.ExactGP`)\n :param torch.Tensor target: :math:`\\mathbf y` The target values\n :rtype: torch.Tensor\n :return: Exact MLL. 
Output shape corresponds to batch shape of the model/input data.\n \"\"\"\n if not isinstance(function_dist, MultivariateNormal):\n raise RuntimeError(\"ExactMarginalLogLikelihood can only operate on Gaussian random variables\")\n\n # Get the log prob of the marginal distribution\n output = self.likelihood(function_dist, *params)\n res = output.log_prob(target)\n res = self._add_other_terms(res, params)\n\n # Scale by the amount of data we have\n num_data = function_dist.event_shape.numel()\n return res.div_(num_data)\n\n def pyro_factor(self, output, target, *params):\n import pyro\n\n mll = target.size(-1) * self(output, target, *params)\n pyro.factor(\"gp_mll\", mll)\n return mll\n", "path": "gpytorch/mlls/exact_marginal_log_likelihood.py"}]}
| 1,895 | 149 |
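The fix replaces `target.size(-1)` with `function_dist.event_shape.numel()` as the normalizer of the exact MLL. A short sketch of why the two differ for the multitask setup in the issue (shapes are illustrative; it assumes torch and gpytorch are installed):

```python
# Compare the old and new normalization constants for a multitask distribution
# built the same way as in the issue (from_batch_mvn over a batch of tasks).
import torch
import gpytorch

num_tasks, num_points = 10, 5
base_mean = torch.zeros(num_tasks, num_points)               # one mean vector per task
base_covar = torch.eye(num_points).repeat(num_tasks, 1, 1)   # identity covariance per task

function_dist = gpytorch.distributions.MultitaskMultivariateNormal.from_batch_mvn(
    gpytorch.distributions.MultivariateNormal(base_mean, base_covar)
)
target = torch.randn(num_points, num_tasks)

print(target.size(-1))                    # 10 -> old divisor: number of tasks only
print(function_dist.event_shape.numel())  # 50 -> new divisor: points * tasks
```

Dividing by the number of tasks instead of the total number of observed values scales the reported MLL up by a factor of `num_points`, which is consistent with the unusually large loss values reported in the issue.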
gh_patches_debug_37974
|
rasdani/github-patches
|
git_diff
|
modin-project__modin-6956
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FEAT: Support sqlalchemy connections in read_sql by converting them to strings
We can use the trick that we use [here](https://github.com/modin-project/modin/blob/01c529cf06cfaf412b5725f41c81a5f914b44b95/modin/core/io/sql/sql_dispatcher.py#L152) for `to_sql` to support reading from sqlalchemy connections in `read_sql`. Currently, for the distributed read, we [require](https://github.com/modin-project/modin/blob/01c529cf06cfaf412b5725f41c81a5f914b44b95/modin/core/io/sql/sql_dispatcher.py#L64) the user to supply a connection string or a `ModinDatabaseConnection` object that usually contains their credentials. Otherwise we default to pandas.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `modin/core/io/sql/sql_dispatcher.py`
Content:
```
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 """
15 Module houses `SQLDispatcher` class.
16
17 `SQLDispatcher` contains utils for handling SQL queries or database tables,
18 inherits util functions for handling files from `FileDispatcher` class and can be
19 used as base class for dipatchers of SQL queries.
20 """
21
22 import math
23
24 import numpy as np
25 import pandas
26
27 from modin.config import NPartitions, ReadSqlEngine
28 from modin.core.io.file_dispatcher import FileDispatcher
29 from modin.db_conn import ModinDatabaseConnection
30
31
32 class SQLDispatcher(FileDispatcher):
33 """Class handles utils for reading SQL queries or database tables."""
34
35 @classmethod
36 def _read(cls, sql, con, index_col=None, **kwargs):
37 """
38 Read a SQL query or database table into a query compiler.
39
40 Parameters
41 ----------
42 sql : str or SQLAlchemy Selectable (select or text object)
43 SQL query to be executed or a table name.
44 con : SQLAlchemy connectable, str, sqlite3 connection, or ModinDatabaseConnection
45 Connection object to database.
46 index_col : str or list of str, optional
47 Column(s) to set as index(MultiIndex).
48 **kwargs : dict
49 Parameters to pass into `pandas.read_sql` function.
50
51 Returns
52 -------
53 BaseQueryCompiler
54 Query compiler with imported data for further processing.
55 """
56 if isinstance(con, str):
57 con = ModinDatabaseConnection("sqlalchemy", con)
58 if not isinstance(con, ModinDatabaseConnection):
59 return cls.single_worker_read(
60 sql,
61 con=con,
62 index_col=index_col,
63 read_sql_engine=ReadSqlEngine.get(),
64 reason="To use the parallel implementation of `read_sql`, pass either "
65 + "the SQL connection string or a ModinDatabaseConnection "
66 + "with the arguments required to make a connection, instead "
67 + f"of {type(con)}. For documentation on the ModinDatabaseConnection, see "
68 + "https://modin.readthedocs.io/en/latest/supported_apis/io_supported.html#connecting-to-a-database-for-read-sql",
69 **kwargs,
70 )
71 row_count_query = con.row_count_query(sql)
72 connection_for_pandas = con.get_connection()
73 colum_names_query = con.column_names_query(sql)
74 row_cnt = pandas.read_sql(row_count_query, connection_for_pandas).squeeze()
75 cols_names_df = pandas.read_sql(
76 colum_names_query, connection_for_pandas, index_col=index_col
77 )
78 cols_names = cols_names_df.columns
79 num_partitions = NPartitions.get()
80 partition_ids = [None] * num_partitions
81 index_ids = [None] * num_partitions
82 dtypes_ids = [None] * num_partitions
83 limit = math.ceil(row_cnt / num_partitions)
84 for part in range(num_partitions):
85 offset = part * limit
86 query = con.partition_query(sql, limit, offset)
87 *partition_ids[part], index_ids[part], dtypes_ids[part] = cls.deploy(
88 func=cls.parse,
89 f_kwargs={
90 "num_splits": num_partitions,
91 "sql": query,
92 "con": con,
93 "index_col": index_col,
94 "read_sql_engine": ReadSqlEngine.get(),
95 **kwargs,
96 },
97 num_returns=num_partitions + 2,
98 )
99 partition_ids[part] = [
100 cls.frame_partition_cls(obj) for obj in partition_ids[part]
101 ]
102 if index_col is None: # sum all lens returned from partitions
103 index_lens = cls.materialize(index_ids)
104 new_index = pandas.RangeIndex(sum(index_lens))
105 else: # concat index returned from partitions
106 index_lst = [
107 x for part_index in cls.materialize(index_ids) for x in part_index
108 ]
109 new_index = pandas.Index(index_lst).set_names(index_col)
110 new_frame = cls.frame_cls(np.array(partition_ids), new_index, cols_names)
111 new_frame.synchronize_labels(axis=0)
112 return cls.query_compiler_cls(new_frame)
113
114 @classmethod
115 def _is_supported_sqlalchemy_object(cls, obj): # noqa: GL08
116 supported = None
117 try:
118 import sqlalchemy as sa
119
120 supported = isinstance(obj, (sa.engine.Engine, sa.engine.Connection))
121 except ImportError:
122 supported = False
123 return supported
124
125 @classmethod
126 def write(cls, qc, **kwargs):
127 """
128 Write records stored in the `qc` to a SQL database.
129
130 Parameters
131 ----------
132 qc : BaseQueryCompiler
133 The query compiler of the Modin dataframe that we want to run ``to_sql`` on.
134 **kwargs : dict
135 Parameters for ``pandas.to_sql(**kwargs)``.
136 """
137 # we first insert an empty DF in order to create the full table in the database
138 # This also helps to validate the input against pandas
139 # we would like to_sql() to complete only when all rows have been inserted into the database
140 # since the mapping operation is non-blocking, each partition will return an empty DF
141 # so at the end, the blocking operation will be this empty DF to_pandas
142
143 if not isinstance(
144 kwargs["con"], str
145 ) and not cls._is_supported_sqlalchemy_object(kwargs["con"]):
146 return cls.base_io.to_sql(qc, **kwargs)
147
148 # In the case that we are given a SQLAlchemy Connection or Engine, the objects
149 # are not pickleable. We have to convert it to the URL string and connect from
150 # each of the workers.
151 if cls._is_supported_sqlalchemy_object(kwargs["con"]):
152 kwargs["con"] = kwargs["con"].engine.url.render_as_string(
153 hide_password=False
154 )
155
156 empty_df = qc.getitem_row_array([0]).to_pandas().head(0)
157 empty_df.to_sql(**kwargs)
158 # so each partition will append its respective DF
159 kwargs["if_exists"] = "append"
160 columns = qc.columns
161
162 def func(df): # pragma: no cover
163 """
164 Override column names in the wrapped dataframe and convert it to SQL.
165
166 Notes
167 -----
168 This function returns an empty ``pandas.DataFrame`` because ``apply_full_axis``
169 expects a Frame object as a result of operation (and ``to_sql`` has no dataframe result).
170 """
171 df.columns = columns
172 df.to_sql(**kwargs)
173 return pandas.DataFrame()
174
175 # Ensure that the metadata is synchronized
176 qc._modin_frame._propagate_index_objs(axis=None)
177 result = qc._modin_frame.apply_full_axis(1, func, new_index=[], new_columns=[])
178 cls.materialize(
179 [part.list_of_blocks[0] for row in result._partitions for part in row]
180 )
181
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/modin/core/io/sql/sql_dispatcher.py b/modin/core/io/sql/sql_dispatcher.py
--- a/modin/core/io/sql/sql_dispatcher.py
+++ b/modin/core/io/sql/sql_dispatcher.py
@@ -32,6 +32,17 @@
class SQLDispatcher(FileDispatcher):
"""Class handles utils for reading SQL queries or database tables."""
+ @classmethod
+ def _is_supported_sqlalchemy_object(cls, obj): # noqa: GL08
+ supported = None
+ try:
+ import sqlalchemy as sa
+
+ supported = isinstance(obj, (sa.engine.Engine, sa.engine.Connection))
+ except ImportError:
+ supported = False
+ return supported
+
@classmethod
def _read(cls, sql, con, index_col=None, **kwargs):
"""
@@ -55,6 +66,12 @@
"""
if isinstance(con, str):
con = ModinDatabaseConnection("sqlalchemy", con)
+
+ if cls._is_supported_sqlalchemy_object(con):
+ con = ModinDatabaseConnection(
+ "sqlalchemy", con.engine.url.render_as_string(hide_password=False)
+ )
+
if not isinstance(con, ModinDatabaseConnection):
return cls.single_worker_read(
sql,
@@ -62,7 +79,7 @@
index_col=index_col,
read_sql_engine=ReadSqlEngine.get(),
reason="To use the parallel implementation of `read_sql`, pass either "
- + "the SQL connection string or a ModinDatabaseConnection "
+ + "a SQLAlchemy connectable, the SQL connection string, or a ModinDatabaseConnection "
+ "with the arguments required to make a connection, instead "
+ f"of {type(con)}. For documentation on the ModinDatabaseConnection, see "
+ "https://modin.readthedocs.io/en/latest/supported_apis/io_supported.html#connecting-to-a-database-for-read-sql",
@@ -111,17 +128,6 @@
new_frame.synchronize_labels(axis=0)
return cls.query_compiler_cls(new_frame)
- @classmethod
- def _is_supported_sqlalchemy_object(cls, obj): # noqa: GL08
- supported = None
- try:
- import sqlalchemy as sa
-
- supported = isinstance(obj, (sa.engine.Engine, sa.engine.Connection))
- except ImportError:
- supported = False
- return supported
-
@classmethod
def write(cls, qc, **kwargs):
"""
|
{"golden_diff": "diff --git a/modin/core/io/sql/sql_dispatcher.py b/modin/core/io/sql/sql_dispatcher.py\n--- a/modin/core/io/sql/sql_dispatcher.py\n+++ b/modin/core/io/sql/sql_dispatcher.py\n@@ -32,6 +32,17 @@\n class SQLDispatcher(FileDispatcher):\n \"\"\"Class handles utils for reading SQL queries or database tables.\"\"\"\n \n+ @classmethod\n+ def _is_supported_sqlalchemy_object(cls, obj): # noqa: GL08\n+ supported = None\n+ try:\n+ import sqlalchemy as sa\n+\n+ supported = isinstance(obj, (sa.engine.Engine, sa.engine.Connection))\n+ except ImportError:\n+ supported = False\n+ return supported\n+\n @classmethod\n def _read(cls, sql, con, index_col=None, **kwargs):\n \"\"\"\n@@ -55,6 +66,12 @@\n \"\"\"\n if isinstance(con, str):\n con = ModinDatabaseConnection(\"sqlalchemy\", con)\n+\n+ if cls._is_supported_sqlalchemy_object(con):\n+ con = ModinDatabaseConnection(\n+ \"sqlalchemy\", con.engine.url.render_as_string(hide_password=False)\n+ )\n+\n if not isinstance(con, ModinDatabaseConnection):\n return cls.single_worker_read(\n sql,\n@@ -62,7 +79,7 @@\n index_col=index_col,\n read_sql_engine=ReadSqlEngine.get(),\n reason=\"To use the parallel implementation of `read_sql`, pass either \"\n- + \"the SQL connection string or a ModinDatabaseConnection \"\n+ + \"a SQLAlchemy connectable, the SQL connection string, or a ModinDatabaseConnection \"\n + \"with the arguments required to make a connection, instead \"\n + f\"of {type(con)}. For documentation on the ModinDatabaseConnection, see \"\n + \"https://modin.readthedocs.io/en/latest/supported_apis/io_supported.html#connecting-to-a-database-for-read-sql\",\n@@ -111,17 +128,6 @@\n new_frame.synchronize_labels(axis=0)\n return cls.query_compiler_cls(new_frame)\n \n- @classmethod\n- def _is_supported_sqlalchemy_object(cls, obj): # noqa: GL08\n- supported = None\n- try:\n- import sqlalchemy as sa\n-\n- supported = isinstance(obj, (sa.engine.Engine, sa.engine.Connection))\n- except ImportError:\n- supported = False\n- return supported\n-\n @classmethod\n def write(cls, qc, **kwargs):\n \"\"\"\n", "issue": "FEAT: Support sqlalchemy connections in read_sql by converting them to strings\nWe can use the trick that we use [here](https://github.com/modin-project/modin/blob/01c529cf06cfaf412b5725f41c81a5f914b44b95/modin/core/io/sql/sql_dispatcher.py#L152) for `to_sql` to support reading from sqlalchemy connections in `read_sql`. Currently, for the distributed read, we [require](https://github.com/modin-project/modin/blob/01c529cf06cfaf412b5725f41c81a5f914b44b95/modin/core/io/sql/sql_dispatcher.py#L64) the user to supply a connection string or a `ModinDatabaseConnection` object that usually contains their credentials. Otherwise we default to pandas.\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific language\n# governing permissions and limitations under the License.\n\n\"\"\"\nModule houses `SQLDispatcher` class.\n\n`SQLDispatcher` contains utils for handling SQL queries or database tables,\ninherits util functions for handling files from `FileDispatcher` class and can be\nused as base class for dipatchers of SQL queries.\n\"\"\"\n\nimport math\n\nimport numpy as np\nimport pandas\n\nfrom modin.config import NPartitions, ReadSqlEngine\nfrom modin.core.io.file_dispatcher import FileDispatcher\nfrom modin.db_conn import ModinDatabaseConnection\n\n\nclass SQLDispatcher(FileDispatcher):\n \"\"\"Class handles utils for reading SQL queries or database tables.\"\"\"\n\n @classmethod\n def _read(cls, sql, con, index_col=None, **kwargs):\n \"\"\"\n Read a SQL query or database table into a query compiler.\n\n Parameters\n ----------\n sql : str or SQLAlchemy Selectable (select or text object)\n SQL query to be executed or a table name.\n con : SQLAlchemy connectable, str, sqlite3 connection, or ModinDatabaseConnection\n Connection object to database.\n index_col : str or list of str, optional\n Column(s) to set as index(MultiIndex).\n **kwargs : dict\n Parameters to pass into `pandas.read_sql` function.\n\n Returns\n -------\n BaseQueryCompiler\n Query compiler with imported data for further processing.\n \"\"\"\n if isinstance(con, str):\n con = ModinDatabaseConnection(\"sqlalchemy\", con)\n if not isinstance(con, ModinDatabaseConnection):\n return cls.single_worker_read(\n sql,\n con=con,\n index_col=index_col,\n read_sql_engine=ReadSqlEngine.get(),\n reason=\"To use the parallel implementation of `read_sql`, pass either \"\n + \"the SQL connection string or a ModinDatabaseConnection \"\n + \"with the arguments required to make a connection, instead \"\n + f\"of {type(con)}. 
For documentation on the ModinDatabaseConnection, see \"\n + \"https://modin.readthedocs.io/en/latest/supported_apis/io_supported.html#connecting-to-a-database-for-read-sql\",\n **kwargs,\n )\n row_count_query = con.row_count_query(sql)\n connection_for_pandas = con.get_connection()\n colum_names_query = con.column_names_query(sql)\n row_cnt = pandas.read_sql(row_count_query, connection_for_pandas).squeeze()\n cols_names_df = pandas.read_sql(\n colum_names_query, connection_for_pandas, index_col=index_col\n )\n cols_names = cols_names_df.columns\n num_partitions = NPartitions.get()\n partition_ids = [None] * num_partitions\n index_ids = [None] * num_partitions\n dtypes_ids = [None] * num_partitions\n limit = math.ceil(row_cnt / num_partitions)\n for part in range(num_partitions):\n offset = part * limit\n query = con.partition_query(sql, limit, offset)\n *partition_ids[part], index_ids[part], dtypes_ids[part] = cls.deploy(\n func=cls.parse,\n f_kwargs={\n \"num_splits\": num_partitions,\n \"sql\": query,\n \"con\": con,\n \"index_col\": index_col,\n \"read_sql_engine\": ReadSqlEngine.get(),\n **kwargs,\n },\n num_returns=num_partitions + 2,\n )\n partition_ids[part] = [\n cls.frame_partition_cls(obj) for obj in partition_ids[part]\n ]\n if index_col is None: # sum all lens returned from partitions\n index_lens = cls.materialize(index_ids)\n new_index = pandas.RangeIndex(sum(index_lens))\n else: # concat index returned from partitions\n index_lst = [\n x for part_index in cls.materialize(index_ids) for x in part_index\n ]\n new_index = pandas.Index(index_lst).set_names(index_col)\n new_frame = cls.frame_cls(np.array(partition_ids), new_index, cols_names)\n new_frame.synchronize_labels(axis=0)\n return cls.query_compiler_cls(new_frame)\n\n @classmethod\n def _is_supported_sqlalchemy_object(cls, obj): # noqa: GL08\n supported = None\n try:\n import sqlalchemy as sa\n\n supported = isinstance(obj, (sa.engine.Engine, sa.engine.Connection))\n except ImportError:\n supported = False\n return supported\n\n @classmethod\n def write(cls, qc, **kwargs):\n \"\"\"\n Write records stored in the `qc` to a SQL database.\n\n Parameters\n ----------\n qc : BaseQueryCompiler\n The query compiler of the Modin dataframe that we want to run ``to_sql`` on.\n **kwargs : dict\n Parameters for ``pandas.to_sql(**kwargs)``.\n \"\"\"\n # we first insert an empty DF in order to create the full table in the database\n # This also helps to validate the input against pandas\n # we would like to_sql() to complete only when all rows have been inserted into the database\n # since the mapping operation is non-blocking, each partition will return an empty DF\n # so at the end, the blocking operation will be this empty DF to_pandas\n\n if not isinstance(\n kwargs[\"con\"], str\n ) and not cls._is_supported_sqlalchemy_object(kwargs[\"con\"]):\n return cls.base_io.to_sql(qc, **kwargs)\n\n # In the case that we are given a SQLAlchemy Connection or Engine, the objects\n # are not pickleable. 
We have to convert it to the URL string and connect from\n # each of the workers.\n if cls._is_supported_sqlalchemy_object(kwargs[\"con\"]):\n kwargs[\"con\"] = kwargs[\"con\"].engine.url.render_as_string(\n hide_password=False\n )\n\n empty_df = qc.getitem_row_array([0]).to_pandas().head(0)\n empty_df.to_sql(**kwargs)\n # so each partition will append its respective DF\n kwargs[\"if_exists\"] = \"append\"\n columns = qc.columns\n\n def func(df): # pragma: no cover\n \"\"\"\n Override column names in the wrapped dataframe and convert it to SQL.\n\n Notes\n -----\n This function returns an empty ``pandas.DataFrame`` because ``apply_full_axis``\n expects a Frame object as a result of operation (and ``to_sql`` has no dataframe result).\n \"\"\"\n df.columns = columns\n df.to_sql(**kwargs)\n return pandas.DataFrame()\n\n # Ensure that the metadata is synchronized\n qc._modin_frame._propagate_index_objs(axis=None)\n result = qc._modin_frame.apply_full_axis(1, func, new_index=[], new_columns=[])\n cls.materialize(\n [part.list_of_blocks[0] for row in result._partitions for part in row]\n )\n", "path": "modin/core/io/sql/sql_dispatcher.py"}], "after_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific language\n# governing permissions and limitations under the License.\n\n\"\"\"\nModule houses `SQLDispatcher` class.\n\n`SQLDispatcher` contains utils for handling SQL queries or database tables,\ninherits util functions for handling files from `FileDispatcher` class and can be\nused as base class for dipatchers of SQL queries.\n\"\"\"\n\nimport math\n\nimport numpy as np\nimport pandas\n\nfrom modin.config import NPartitions, ReadSqlEngine\nfrom modin.core.io.file_dispatcher import FileDispatcher\nfrom modin.db_conn import ModinDatabaseConnection\n\n\nclass SQLDispatcher(FileDispatcher):\n \"\"\"Class handles utils for reading SQL queries or database tables.\"\"\"\n\n @classmethod\n def _is_supported_sqlalchemy_object(cls, obj): # noqa: GL08\n supported = None\n try:\n import sqlalchemy as sa\n\n supported = isinstance(obj, (sa.engine.Engine, sa.engine.Connection))\n except ImportError:\n supported = False\n return supported\n\n @classmethod\n def _read(cls, sql, con, index_col=None, **kwargs):\n \"\"\"\n Read a SQL query or database table into a query compiler.\n\n Parameters\n ----------\n sql : str or SQLAlchemy Selectable (select or text object)\n SQL query to be executed or a table name.\n con : SQLAlchemy connectable, str, sqlite3 connection, or ModinDatabaseConnection\n Connection object to database.\n index_col : str or list of str, optional\n Column(s) to set as index(MultiIndex).\n **kwargs : dict\n Parameters to pass into `pandas.read_sql` function.\n\n Returns\n -------\n BaseQueryCompiler\n Query compiler with imported data for further processing.\n \"\"\"\n if isinstance(con, str):\n con = ModinDatabaseConnection(\"sqlalchemy\", con)\n\n if cls._is_supported_sqlalchemy_object(con):\n con = ModinDatabaseConnection(\n \"sqlalchemy\", con.engine.url.render_as_string(hide_password=False)\n )\n\n if not isinstance(con, ModinDatabaseConnection):\n return cls.single_worker_read(\n sql,\n con=con,\n index_col=index_col,\n read_sql_engine=ReadSqlEngine.get(),\n reason=\"To use the parallel implementation of `read_sql`, pass either \"\n + \"a SQLAlchemy connectable, the SQL connection string, or a ModinDatabaseConnection \"\n + \"with the arguments required to make a connection, instead \"\n + f\"of {type(con)}. 
For documentation on the ModinDatabaseConnection, see \"\n + \"https://modin.readthedocs.io/en/latest/supported_apis/io_supported.html#connecting-to-a-database-for-read-sql\",\n **kwargs,\n )\n row_count_query = con.row_count_query(sql)\n connection_for_pandas = con.get_connection()\n colum_names_query = con.column_names_query(sql)\n row_cnt = pandas.read_sql(row_count_query, connection_for_pandas).squeeze()\n cols_names_df = pandas.read_sql(\n colum_names_query, connection_for_pandas, index_col=index_col\n )\n cols_names = cols_names_df.columns\n num_partitions = NPartitions.get()\n partition_ids = [None] * num_partitions\n index_ids = [None] * num_partitions\n dtypes_ids = [None] * num_partitions\n limit = math.ceil(row_cnt / num_partitions)\n for part in range(num_partitions):\n offset = part * limit\n query = con.partition_query(sql, limit, offset)\n *partition_ids[part], index_ids[part], dtypes_ids[part] = cls.deploy(\n func=cls.parse,\n f_kwargs={\n \"num_splits\": num_partitions,\n \"sql\": query,\n \"con\": con,\n \"index_col\": index_col,\n \"read_sql_engine\": ReadSqlEngine.get(),\n **kwargs,\n },\n num_returns=num_partitions + 2,\n )\n partition_ids[part] = [\n cls.frame_partition_cls(obj) for obj in partition_ids[part]\n ]\n if index_col is None: # sum all lens returned from partitions\n index_lens = cls.materialize(index_ids)\n new_index = pandas.RangeIndex(sum(index_lens))\n else: # concat index returned from partitions\n index_lst = [\n x for part_index in cls.materialize(index_ids) for x in part_index\n ]\n new_index = pandas.Index(index_lst).set_names(index_col)\n new_frame = cls.frame_cls(np.array(partition_ids), new_index, cols_names)\n new_frame.synchronize_labels(axis=0)\n return cls.query_compiler_cls(new_frame)\n\n @classmethod\n def write(cls, qc, **kwargs):\n \"\"\"\n Write records stored in the `qc` to a SQL database.\n\n Parameters\n ----------\n qc : BaseQueryCompiler\n The query compiler of the Modin dataframe that we want to run ``to_sql`` on.\n **kwargs : dict\n Parameters for ``pandas.to_sql(**kwargs)``.\n \"\"\"\n # we first insert an empty DF in order to create the full table in the database\n # This also helps to validate the input against pandas\n # we would like to_sql() to complete only when all rows have been inserted into the database\n # since the mapping operation is non-blocking, each partition will return an empty DF\n # so at the end, the blocking operation will be this empty DF to_pandas\n\n if not isinstance(\n kwargs[\"con\"], str\n ) and not cls._is_supported_sqlalchemy_object(kwargs[\"con\"]):\n return cls.base_io.to_sql(qc, **kwargs)\n\n # In the case that we are given a SQLAlchemy Connection or Engine, the objects\n # are not pickleable. 
We have to convert it to the URL string and connect from\n # each of the workers.\n if cls._is_supported_sqlalchemy_object(kwargs[\"con\"]):\n kwargs[\"con\"] = kwargs[\"con\"].engine.url.render_as_string(\n hide_password=False\n )\n\n empty_df = qc.getitem_row_array([0]).to_pandas().head(0)\n empty_df.to_sql(**kwargs)\n # so each partition will append its respective DF\n kwargs[\"if_exists\"] = \"append\"\n columns = qc.columns\n\n def func(df): # pragma: no cover\n \"\"\"\n Override column names in the wrapped dataframe and convert it to SQL.\n\n Notes\n -----\n This function returns an empty ``pandas.DataFrame`` because ``apply_full_axis``\n expects a Frame object as a result of operation (and ``to_sql`` has no dataframe result).\n \"\"\"\n df.columns = columns\n df.to_sql(**kwargs)\n return pandas.DataFrame()\n\n # Ensure that the metadata is synchronized\n qc._modin_frame._propagate_index_objs(axis=None)\n result = qc._modin_frame.apply_full_axis(1, func, new_index=[], new_columns=[])\n cls.materialize(\n [part.list_of_blocks[0] for row in result._partitions for part in row]\n )\n", "path": "modin/core/io/sql/sql_dispatcher.py"}]}
| 2,506 | 549 |
gh_patches_debug_51665
|
rasdani/github-patches
|
git_diff
|
nilearn__nilearn-2960
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Variable defined multiple times
This code from [`nilearn/_utils/numpy_conversions.py`](https://github.com/nilearn/nilearn/blob/ac1a934/nilearn/_utils/numpy_conversions.py#L106-L107) sets `ret` twice:
```python
ret = np.array(arr, copy=True)
ret = _asarray(arr, dtype=dtype, order=order)
```
Perhaps the intent was:
```python
ret = np.array(arr, copy=True)
ret = _asarray(ret, dtype=dtype, order=order)
```
--- END ISSUE ---
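A minimal repro sketch (editor's illustration, not part of the original report): it shows that with the current ordering the copy made on the first line is discarded, so the returned array can still share memory with the input, while the suggested ordering keeps them separate. `np.asarray` stands in for the module's `_asarray` helper here.
```python
import numpy as np

arr = np.arange(4, dtype=np.int8)

# Ordering from the issue: the fresh copy in `ret` is immediately discarded,
# because the second call converts `arr` again instead of the copy.
ret = np.array(arr, copy=True)
ret = np.asarray(arr, dtype=np.int8)
print(np.may_share_memory(ret, arr))   # True: the copy never took effect

# Suggested ordering: convert the copy, so the result stays detached.
ret = np.array(arr, copy=True)
ret = np.asarray(ret, dtype=np.int8)
print(np.may_share_memory(ret, arr))   # False
```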
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nilearn/_utils/numpy_conversions.py`
Content:
```
1 """
2 Validation and conversion utilities for numpy.
3 """
4 # Author: Gael Varoquaux, Alexandre Abraham, Philippe Gervais
5 # License: simplified BSD
6
7 import csv
8 import numpy as np
9
10
11 def _asarray(arr, dtype=None, order=None):
12 # np.asarray does not take "K" and "A" orders in version 1.3.0
13 if order in ("K", "A", None):
14 if (arr.itemsize == 1 and dtype in (bool, np.bool_)) \
15 or (arr.dtype in (bool, np.bool_) and
16 np.dtype(dtype).itemsize == 1):
17 ret = arr.view(dtype=dtype)
18 else:
19 ret = np.asarray(arr, dtype=dtype)
20 else:
21 if (((arr.itemsize == 1 and dtype in (bool, np.bool)) or
22 (arr.dtype in (bool, np.bool_) and np.dtype(dtype).itemsize == 1))
23 and (order == "F" and arr.flags["F_CONTIGUOUS"]
24 or order == "C" and arr.flags["C_CONTIGUOUS"])):
25 ret = arr.view(dtype=dtype)
26 else:
27 ret = np.asarray(arr, dtype=dtype, order=order)
28
29 return ret
30
31
32 def as_ndarray(arr, copy=False, dtype=None, order='K'):
33 """Starting with an arbitrary array, convert to numpy.ndarray.
34
35 In the case of a memmap array, a copy is automatically made to break the
36 link with the underlying file (whatever the value of the "copy" keyword).
37
38 The purpose of this function is mainly to get rid of memmap objects, but
39 it can be used for other purposes. In particular, combining copying and
40 casting can lead to performance improvements in some cases, by avoiding
41 unnecessary copies.
42
43 If not specified, input array order is preserved, in all cases, even when
44 a copy is requested.
45
46 Caveat: this function does not copy during bool to/from 1-byte dtype
47 conversions. This can lead to some surprising results in some rare cases.
48 Example:
49
50 a = numpy.asarray([0, 1, 2], dtype=numpy.int8)
51 b = as_ndarray(a, dtype=bool) # array([False, True, True], dtype=bool)
52 c = as_ndarray(b, dtype=numpy.int8) # array([0, 1, 2], dtype=numpy.int8)
53
54 The usually expected result for the last line would be array([0, 1, 1])
55 because True evaluates to 1. Since there is no copy made here, the original
56 array is recovered.
57
58 Parameters
59 ----------
60 arr: array-like
61 input array. Any value accepted by numpy.asarray is valid.
62
63 copy: bool
64 if True, force a copy of the array. Always True when arr is a memmap.
65
66 dtype: any numpy dtype
67 dtype of the returned array. Performing copy and type conversion at the
68 same time can in some cases avoid an additional copy.
69
70 order: string
71 gives the order of the returned array.
72 Valid values are: "C", "F", "A", "K", None.
73 default is "K". See ndarray.copy() for more information.
74
75 Returns
76 -------
77 ret: numpy.ndarray
78 Numpy array containing the same data as arr, always of class
79 numpy.ndarray, and with no link to any underlying file.
80 """
81 # This function should work on numpy 1.3
82 # in this version, astype() and copy() have no "order" keyword.
83 # and asarray() does not accept the "K" and "A" values for order.
84
85 # numpy.asarray never copies a subclass of numpy.ndarray (even for
86 # memmaps) when dtype is unchanged.
87 # .astype() always copies
88
89 if order not in ("C", "F", "A", "K", None):
90 raise ValueError("Invalid value for 'order': %s" % str(order))
91
92 if isinstance(arr, np.memmap):
93 if dtype is None:
94 if order in ("K", "A", None):
95 ret = np.array(np.asarray(arr), copy=True)
96 else:
97 ret = np.array(np.asarray(arr), copy=True, order=order)
98 else:
99 if order in ("K", "A", None):
100 # always copy (even when dtype does not change)
101 ret = np.asarray(arr).astype(dtype)
102 else:
103 # First load data from disk without changing order
104 # Changing order while reading through a memmap is incredibly
105 # inefficient.
106 ret = np.array(arr, copy=True)
107 ret = _asarray(arr, dtype=dtype, order=order)
108
109 elif isinstance(arr, np.ndarray):
110 ret = _asarray(arr, dtype=dtype, order=order)
111 # In the present cas, np.may_share_memory result is always reliable.
112 if np.may_share_memory(ret, arr) and copy:
113 # order-preserving copy
114 if ret.flags["F_CONTIGUOUS"]:
115 ret = ret.T.copy().T
116 else:
117 ret = ret.copy()
118
119 elif isinstance(arr, (list, tuple)):
120 if order in ("A", "K"):
121 ret = np.asarray(arr, dtype=dtype)
122 else:
123 ret = np.asarray(arr, dtype=dtype, order=order)
124
125 else:
126 raise ValueError("Type not handled: %s" % arr.__class__)
127
128 return ret
129
130
131 def csv_to_array(csv_path, delimiters=' \t,;', **kwargs):
132 """Read a CSV file by trying to guess its delimiter
133
134 Parameters
135 ----------
136 csv_path: string
137 Path of the CSV file to load.
138
139 delimiters: string
140 Each character of the delimiters string is a potential delimiters for
141 the CSV file.
142
143 kwargs: keyword arguments
144 The additional keyword arguments are passed to numpy.genfromtxt when
145 loading the CSV.
146
147 Returns
148 -------
149 array: numpy.ndarray
150 An array containing the data loaded from the CSV file.
151 """
152 if not isinstance(csv_path, str):
153 raise TypeError('CSV must be a file path. Got a CSV of type: %s' %
154 type(csv_path))
155
156 try:
157 # First, we try genfromtxt which works in most cases.
158 array = np.genfromtxt(csv_path, loose=False, **kwargs)
159 except ValueError:
160 # There was an error during the conversion to numpy array, probably
161 # because the delimiter is wrong.
162 # In that case, we try to guess the delimiter.
163 try:
164 with open(csv_path, 'r') as csv_file:
165 dialect = csv.Sniffer().sniff(csv_file.readline(), delimiters)
166 except csv.Error as e:
167 raise TypeError(
168 'Could not read CSV file [%s]: %s' % (csv_path, e.args[0]))
169
170 array = np.genfromtxt(csv_path, delimiter=dialect.delimiter, **kwargs)
171
172 return array
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nilearn/_utils/numpy_conversions.py b/nilearn/_utils/numpy_conversions.py
--- a/nilearn/_utils/numpy_conversions.py
+++ b/nilearn/_utils/numpy_conversions.py
@@ -104,7 +104,7 @@
# Changing order while reading through a memmap is incredibly
# inefficient.
ret = np.array(arr, copy=True)
- ret = _asarray(arr, dtype=dtype, order=order)
+ ret = _asarray(ret, dtype=dtype, order=order)
elif isinstance(arr, np.ndarray):
ret = _asarray(arr, dtype=dtype, order=order)
|
{"golden_diff": "diff --git a/nilearn/_utils/numpy_conversions.py b/nilearn/_utils/numpy_conversions.py\n--- a/nilearn/_utils/numpy_conversions.py\n+++ b/nilearn/_utils/numpy_conversions.py\n@@ -104,7 +104,7 @@\n # Changing order while reading through a memmap is incredibly\n # inefficient.\n ret = np.array(arr, copy=True)\n- ret = _asarray(arr, dtype=dtype, order=order)\n+ ret = _asarray(ret, dtype=dtype, order=order)\n \n elif isinstance(arr, np.ndarray):\n ret = _asarray(arr, dtype=dtype, order=order)\n", "issue": "Variable defined multiple times\nThis code from [`nilearn/_utils/numpy_conversions.py`](https://github.com/nilearn/nilearn/blob/ac1a934/nilearn/_utils/numpy_conversions.py#L106-L107) sets `ret` twice:\r\n```python\r\n ret = np.array(arr, copy=True)\r\n ret = _asarray(arr, dtype=dtype, order=order)\r\n```\r\nPerhaps the intent was::\r\n```python\r\n ret = np.array(arr, copy=True)\r\n ret = _asarray(ret, dtype=dtype, order=order)\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nValidation and conversion utilities for numpy.\n\"\"\"\n# Author: Gael Varoquaux, Alexandre Abraham, Philippe Gervais\n# License: simplified BSD\n\nimport csv\nimport numpy as np\n\n\ndef _asarray(arr, dtype=None, order=None):\n # np.asarray does not take \"K\" and \"A\" orders in version 1.3.0\n if order in (\"K\", \"A\", None):\n if (arr.itemsize == 1 and dtype in (bool, np.bool_)) \\\n or (arr.dtype in (bool, np.bool_) and\n np.dtype(dtype).itemsize == 1):\n ret = arr.view(dtype=dtype)\n else:\n ret = np.asarray(arr, dtype=dtype)\n else:\n if (((arr.itemsize == 1 and dtype in (bool, np.bool)) or\n (arr.dtype in (bool, np.bool_) and np.dtype(dtype).itemsize == 1))\n and (order == \"F\" and arr.flags[\"F_CONTIGUOUS\"]\n or order == \"C\" and arr.flags[\"C_CONTIGUOUS\"])):\n ret = arr.view(dtype=dtype)\n else:\n ret = np.asarray(arr, dtype=dtype, order=order)\n\n return ret\n\n\ndef as_ndarray(arr, copy=False, dtype=None, order='K'):\n \"\"\"Starting with an arbitrary array, convert to numpy.ndarray.\n\n In the case of a memmap array, a copy is automatically made to break the\n link with the underlying file (whatever the value of the \"copy\" keyword).\n\n The purpose of this function is mainly to get rid of memmap objects, but\n it can be used for other purposes. In particular, combining copying and\n casting can lead to performance improvements in some cases, by avoiding\n unnecessary copies.\n\n If not specified, input array order is preserved, in all cases, even when\n a copy is requested.\n\n Caveat: this function does not copy during bool to/from 1-byte dtype\n conversions. This can lead to some surprising results in some rare cases.\n Example:\n\n a = numpy.asarray([0, 1, 2], dtype=numpy.int8)\n b = as_ndarray(a, dtype=bool) # array([False, True, True], dtype=bool)\n c = as_ndarray(b, dtype=numpy.int8) # array([0, 1, 2], dtype=numpy.int8)\n\n The usually expected result for the last line would be array([0, 1, 1])\n because True evaluates to 1. Since there is no copy made here, the original\n array is recovered.\n\n Parameters\n ----------\n arr: array-like\n input array. Any value accepted by numpy.asarray is valid.\n\n copy: bool\n if True, force a copy of the array. Always True when arr is a memmap.\n\n dtype: any numpy dtype\n dtype of the returned array. Performing copy and type conversion at the\n same time can in some cases avoid an additional copy.\n\n order: string\n gives the order of the returned array.\n Valid values are: \"C\", \"F\", \"A\", \"K\", None.\n default is \"K\". 
See ndarray.copy() for more information.\n\n Returns\n -------\n ret: numpy.ndarray\n Numpy array containing the same data as arr, always of class\n numpy.ndarray, and with no link to any underlying file.\n \"\"\"\n # This function should work on numpy 1.3\n # in this version, astype() and copy() have no \"order\" keyword.\n # and asarray() does not accept the \"K\" and \"A\" values for order.\n\n # numpy.asarray never copies a subclass of numpy.ndarray (even for\n # memmaps) when dtype is unchanged.\n # .astype() always copies\n\n if order not in (\"C\", \"F\", \"A\", \"K\", None):\n raise ValueError(\"Invalid value for 'order': %s\" % str(order))\n\n if isinstance(arr, np.memmap):\n if dtype is None:\n if order in (\"K\", \"A\", None):\n ret = np.array(np.asarray(arr), copy=True)\n else:\n ret = np.array(np.asarray(arr), copy=True, order=order)\n else:\n if order in (\"K\", \"A\", None):\n # always copy (even when dtype does not change)\n ret = np.asarray(arr).astype(dtype)\n else:\n # First load data from disk without changing order\n # Changing order while reading through a memmap is incredibly\n # inefficient.\n ret = np.array(arr, copy=True)\n ret = _asarray(arr, dtype=dtype, order=order)\n\n elif isinstance(arr, np.ndarray):\n ret = _asarray(arr, dtype=dtype, order=order)\n # In the present cas, np.may_share_memory result is always reliable.\n if np.may_share_memory(ret, arr) and copy:\n # order-preserving copy\n if ret.flags[\"F_CONTIGUOUS\"]:\n ret = ret.T.copy().T\n else:\n ret = ret.copy()\n\n elif isinstance(arr, (list, tuple)):\n if order in (\"A\", \"K\"):\n ret = np.asarray(arr, dtype=dtype)\n else:\n ret = np.asarray(arr, dtype=dtype, order=order)\n\n else:\n raise ValueError(\"Type not handled: %s\" % arr.__class__)\n\n return ret\n\n\ndef csv_to_array(csv_path, delimiters=' \\t,;', **kwargs):\n \"\"\"Read a CSV file by trying to guess its delimiter\n\n Parameters\n ----------\n csv_path: string\n Path of the CSV file to load.\n\n delimiters: string\n Each character of the delimiters string is a potential delimiters for\n the CSV file.\n\n kwargs: keyword arguments\n The additional keyword arguments are passed to numpy.genfromtxt when\n loading the CSV.\n\n Returns\n -------\n array: numpy.ndarray\n An array containing the data loaded from the CSV file.\n \"\"\"\n if not isinstance(csv_path, str):\n raise TypeError('CSV must be a file path. 
Got a CSV of type: %s' %\n type(csv_path))\n\n try:\n # First, we try genfromtxt which works in most cases.\n array = np.genfromtxt(csv_path, loose=False, **kwargs)\n except ValueError:\n # There was an error during the conversion to numpy array, probably\n # because the delimiter is wrong.\n # In that case, we try to guess the delimiter.\n try:\n with open(csv_path, 'r') as csv_file:\n dialect = csv.Sniffer().sniff(csv_file.readline(), delimiters)\n except csv.Error as e:\n raise TypeError(\n 'Could not read CSV file [%s]: %s' % (csv_path, e.args[0]))\n\n array = np.genfromtxt(csv_path, delimiter=dialect.delimiter, **kwargs)\n\n return array\n", "path": "nilearn/_utils/numpy_conversions.py"}], "after_files": [{"content": "\"\"\"\nValidation and conversion utilities for numpy.\n\"\"\"\n# Author: Gael Varoquaux, Alexandre Abraham, Philippe Gervais\n# License: simplified BSD\n\nimport csv\nimport numpy as np\n\n\ndef _asarray(arr, dtype=None, order=None):\n # np.asarray does not take \"K\" and \"A\" orders in version 1.3.0\n if order in (\"K\", \"A\", None):\n if (arr.itemsize == 1 and dtype in (bool, np.bool_)) \\\n or (arr.dtype in (bool, np.bool_) and\n np.dtype(dtype).itemsize == 1):\n ret = arr.view(dtype=dtype)\n else:\n ret = np.asarray(arr, dtype=dtype)\n else:\n if (((arr.itemsize == 1 and dtype in (bool, np.bool)) or\n (arr.dtype in (bool, np.bool_) and np.dtype(dtype).itemsize == 1))\n and (order == \"F\" and arr.flags[\"F_CONTIGUOUS\"]\n or order == \"C\" and arr.flags[\"C_CONTIGUOUS\"])):\n ret = arr.view(dtype=dtype)\n else:\n ret = np.asarray(arr, dtype=dtype, order=order)\n\n return ret\n\n\ndef as_ndarray(arr, copy=False, dtype=None, order='K'):\n \"\"\"Starting with an arbitrary array, convert to numpy.ndarray.\n\n In the case of a memmap array, a copy is automatically made to break the\n link with the underlying file (whatever the value of the \"copy\" keyword).\n\n The purpose of this function is mainly to get rid of memmap objects, but\n it can be used for other purposes. In particular, combining copying and\n casting can lead to performance improvements in some cases, by avoiding\n unnecessary copies.\n\n If not specified, input array order is preserved, in all cases, even when\n a copy is requested.\n\n Caveat: this function does not copy during bool to/from 1-byte dtype\n conversions. This can lead to some surprising results in some rare cases.\n Example:\n\n a = numpy.asarray([0, 1, 2], dtype=numpy.int8)\n b = as_ndarray(a, dtype=bool) # array([False, True, True], dtype=bool)\n c = as_ndarray(b, dtype=numpy.int8) # array([0, 1, 2], dtype=numpy.int8)\n\n The usually expected result for the last line would be array([0, 1, 1])\n because True evaluates to 1. Since there is no copy made here, the original\n array is recovered.\n\n Parameters\n ----------\n arr: array-like\n input array. Any value accepted by numpy.asarray is valid.\n\n copy: bool\n if True, force a copy of the array. Always True when arr is a memmap.\n\n dtype: any numpy dtype\n dtype of the returned array. Performing copy and type conversion at the\n same time can in some cases avoid an additional copy.\n\n order: string\n gives the order of the returned array.\n Valid values are: \"C\", \"F\", \"A\", \"K\", None.\n default is \"K\". 
See ndarray.copy() for more information.\n\n Returns\n -------\n ret: numpy.ndarray\n Numpy array containing the same data as arr, always of class\n numpy.ndarray, and with no link to any underlying file.\n \"\"\"\n # This function should work on numpy 1.3\n # in this version, astype() and copy() have no \"order\" keyword.\n # and asarray() does not accept the \"K\" and \"A\" values for order.\n\n # numpy.asarray never copies a subclass of numpy.ndarray (even for\n # memmaps) when dtype is unchanged.\n # .astype() always copies\n\n if order not in (\"C\", \"F\", \"A\", \"K\", None):\n raise ValueError(\"Invalid value for 'order': %s\" % str(order))\n\n if isinstance(arr, np.memmap):\n if dtype is None:\n if order in (\"K\", \"A\", None):\n ret = np.array(np.asarray(arr), copy=True)\n else:\n ret = np.array(np.asarray(arr), copy=True, order=order)\n else:\n if order in (\"K\", \"A\", None):\n # always copy (even when dtype does not change)\n ret = np.asarray(arr).astype(dtype)\n else:\n # First load data from disk without changing order\n # Changing order while reading through a memmap is incredibly\n # inefficient.\n ret = np.array(arr, copy=True)\n ret = _asarray(ret, dtype=dtype, order=order)\n\n elif isinstance(arr, np.ndarray):\n ret = _asarray(arr, dtype=dtype, order=order)\n # In the present cas, np.may_share_memory result is always reliable.\n if np.may_share_memory(ret, arr) and copy:\n # order-preserving copy\n if ret.flags[\"F_CONTIGUOUS\"]:\n ret = ret.T.copy().T\n else:\n ret = ret.copy()\n\n elif isinstance(arr, (list, tuple)):\n if order in (\"A\", \"K\"):\n ret = np.asarray(arr, dtype=dtype)\n else:\n ret = np.asarray(arr, dtype=dtype, order=order)\n\n else:\n raise ValueError(\"Type not handled: %s\" % arr.__class__)\n\n return ret\n\n\ndef csv_to_array(csv_path, delimiters=' \\t,;', **kwargs):\n \"\"\"Read a CSV file by trying to guess its delimiter\n\n Parameters\n ----------\n csv_path: string\n Path of the CSV file to load.\n\n delimiters: string\n Each character of the delimiters string is a potential delimiters for\n the CSV file.\n\n kwargs: keyword arguments\n The additional keyword arguments are passed to numpy.genfromtxt when\n loading the CSV.\n\n Returns\n -------\n array: numpy.ndarray\n An array containing the data loaded from the CSV file.\n \"\"\"\n if not isinstance(csv_path, str):\n raise TypeError('CSV must be a file path. Got a CSV of type: %s' %\n type(csv_path))\n\n try:\n # First, we try genfromtxt which works in most cases.\n array = np.genfromtxt(csv_path, loose=False, **kwargs)\n except ValueError:\n # There was an error during the conversion to numpy array, probably\n # because the delimiter is wrong.\n # In that case, we try to guess the delimiter.\n try:\n with open(csv_path, 'r') as csv_file:\n dialect = csv.Sniffer().sniff(csv_file.readline(), delimiters)\n except csv.Error as e:\n raise TypeError(\n 'Could not read CSV file [%s]: %s' % (csv_path, e.args[0]))\n\n array = np.genfromtxt(csv_path, delimiter=dialect.delimiter, **kwargs)\n\n return array\n", "path": "nilearn/_utils/numpy_conversions.py"}]}
| 2,339 | 152 |
gh_patches_debug_20921
|
rasdani/github-patches
|
git_diff
|
StackStorm__st2-4512
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Matching action alias for 'pack install xxx' returns an error
##### SUMMARY
Noticed by `@ravi` on Slack.
When running `st2 action-alias match 'pack install vsphere'` multiple action-aliases are matched, causing an error.
This is confusing in two ways:
1) Running that action-alias should not be ambiguous
2) The help message for `st2 action-alias match` says it returns a list of matching aliases, but instead it either returns one or it errors.
##### ISSUE TYPE
- Bug Report
##### STACKSTORM VERSION
```shell
$ st2 --version
st2 2.10.1, on Python 2.7.5
```
##### OS / ENVIRONMENT / INSTALL METHOD
```shell
# OS
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)
# Install method
puppet-st2
```
##### STEPS TO REPRODUCE
```shell
$ st2 action-alias match 'pack install vsphere'
```
##### EXPECTED RESULTS
```shell
$ st2 action-alias match 'pack install vsphere'
+--------------+-----------------------------------+
| name | description |
+--------------+-----------------------------------+
| pack_install | Install/upgrade StackStorm packs. |
+--------------+-----------------------------------+
```
##### ACTUAL RESULTS
```shell
$ st2 action-alias match 'pack install vsphere'
ERROR: 400 Client Error: Bad Request
MESSAGE: Command 'pack install vsphere' matched more than 1 pattern for url: http://127.0.0.1:9101/v1/actionalias/match
```
This is also confusing because `st2 action-alias match --help` says it should return a list of matching aliases, when instead it either returns a single alias or it errors out if multiple are found.
```shell
$ st2 action-alias match --help
usage: st2 action-alias match [-h] [-t TOKEN] [--api-key API_KEY] [-j] [-y]
[-a ATTR [ATTR ...]] [-w WIDTH [WIDTH ...]]
command
Get the list of action aliases that match the command text.
```
##### INVESTIGATION
It appears that the action-alias `packs.pack_install` has redundant patterns defined: https://github.com/StackStorm/st2/blob/master/contrib/packs/aliases/pack_install.yaml#L7-L12
I think the simplest fix is to remove the redundant pattern and change the display text for the one that is left behind.
--- END ISSUE ---
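To make the ambiguity concrete, here is a small sketch (editor's illustration; the two patterns are invented stand-ins rather than the real contents of `pack_install.yaml`) of how two overlapping alias formats can both match the same command text, which is what triggers the "matched more than 1 pattern" response:
```python
import re

# Hypothetical stand-ins for two redundant alias format patterns.
formats = [
    r"^pack install (?P<packs>.+?)\s*$",
    r"^pack install (?P<packs>.+?)(\s+from\s+(?P<repo_url>\S+))?\s*$",
]

command = "pack install vsphere"
matched = [pattern for pattern in formats if re.match(pattern, command)]
print(len(matched))  # 2: the service cannot pick a single alias, so it errors
```
Removing the redundant pattern, as proposed above, leaves exactly one match for this command.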
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `st2client/st2client/commands/action_alias.py`
Content:
```
1 # Licensed to the StackStorm, Inc ('StackStorm') under one or more
2 # contributor license agreements. See the NOTICE file distributed with
3 # this work for additional information regarding copyright ownership.
4 # The ASF licenses this file to You under the Apache License, Version 2.0
5 # (the "License"); you may not use this file except in compliance with
6 # the License. You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from __future__ import absolute_import
17
18 from st2client.models import core
19 from st2client.models.action_alias import ActionAlias
20 from st2client.models.action_alias import ActionAliasMatch
21 from st2client.commands import resource
22 from st2client.formatters import table
23
24
25 __all__ = [
26 'ActionAliasBranch',
27 'ActionAliasMatchCommand',
28 'ActionAliasExecuteCommand'
29 ]
30
31
32 class ActionAliasBranch(resource.ResourceBranch):
33 def __init__(self, description, app, subparsers, parent_parser=None):
34 super(ActionAliasBranch, self).__init__(
35 ActionAlias, description, app, subparsers,
36 parent_parser=parent_parser, read_only=False,
37 commands={
38 'list': ActionAliasListCommand,
39 'get': ActionAliasGetCommand
40 })
41
42 self.commands['match'] = ActionAliasMatchCommand(
43 self.resource, self.app, self.subparsers,
44 add_help=True)
45 self.commands['execute'] = ActionAliasExecuteCommand(
46 self.resource, self.app, self.subparsers,
47 add_help=True)
48
49
50 class ActionAliasListCommand(resource.ContentPackResourceListCommand):
51 display_attributes = ['ref', 'pack', 'description', 'enabled']
52
53
54 class ActionAliasGetCommand(resource.ContentPackResourceGetCommand):
55 display_attributes = ['all']
56 attribute_display_order = ['id', 'ref', 'pack', 'name', 'description',
57 'enabled', 'action_ref', 'formats']
58
59
60 class ActionAliasMatchCommand(resource.ResourceCommand):
61 display_attributes = ['name', 'description']
62
63 def __init__(self, resource, *args, **kwargs):
64 super(ActionAliasMatchCommand, self).__init__(
65 resource, 'match',
66 'Get the list of %s that match the command text.' %
67 resource.get_plural_display_name().lower(),
68 *args, **kwargs)
69
70 self.parser.add_argument('match_text',
71 metavar='command',
72 help=('Get the list of %s that match the command text.' %
73 resource.get_display_name().lower()))
74 self.parser.add_argument('-a', '--attr', nargs='+',
75 default=self.display_attributes,
76 help=('List of attributes to include in the '
77 'output. "all" will return all '
78 'attributes.'))
79 self.parser.add_argument('-w', '--width', nargs='+', type=int,
80 default=None,
81 help=('Set the width of columns in output.'))
82
83 @resource.add_auth_token_to_kwargs_from_cli
84 def run(self, args, **kwargs):
85 alias_match = ActionAliasMatch()
86 alias_match.command = args.match_text
87
88 match, _ = self.manager.match(alias_match, **kwargs)
89 return [match]
90
91 def run_and_print(self, args, **kwargs):
92 instances = self.run(args, **kwargs)
93 self.print_output(instances, table.MultiColumnTable,
94 attributes=args.attr, widths=args.width,
95 json=args.json, yaml=args.yaml)
96
97
98 class ActionAliasExecuteCommand(resource.ResourceCommand):
99 display_attributes = ['name']
100
101 def __init__(self, resource, *args, **kwargs):
102 super(ActionAliasExecuteCommand, self).__init__(
103 resource, 'execute',
104 ('Execute the command text by finding a matching %s.' %
105 resource.get_display_name().lower()), *args, **kwargs)
106
107 self.parser.add_argument('command_text',
108 metavar='command',
109 help=('Execute the command text by finding a matching %s.' %
110 resource.get_display_name().lower()))
111 self.parser.add_argument('-u', '--user', type=str, default=None,
112 help='User under which to run the action (admins only).')
113
114 @resource.add_auth_token_to_kwargs_from_cli
115 def run(self, args, **kwargs):
116 payload = core.Resource()
117 payload.command = args.command_text
118 payload.user = args.user
119 payload.source_channel = 'cli'
120
121 alias_execution_mgr = self.app.client.managers['ActionAliasExecution']
122 execution = alias_execution_mgr.match_and_execute(payload)
123 return execution
124
125 def run_and_print(self, args, **kwargs):
126 execution = self.run(args, **kwargs)
127 print("Matching Action-alias: '%s'" % execution.actionalias['ref'])
128 print("To get the results, execute:\n st2 execution get %s" %
129 (execution.execution['id']))
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/st2client/st2client/commands/action_alias.py b/st2client/st2client/commands/action_alias.py
--- a/st2client/st2client/commands/action_alias.py
+++ b/st2client/st2client/commands/action_alias.py
@@ -63,13 +63,13 @@
def __init__(self, resource, *args, **kwargs):
super(ActionAliasMatchCommand, self).__init__(
resource, 'match',
- 'Get the list of %s that match the command text.' %
- resource.get_plural_display_name().lower(),
+ 'Get the %s that match the command text.' %
+ resource.get_display_name().lower(),
*args, **kwargs)
self.parser.add_argument('match_text',
metavar='command',
- help=('Get the list of %s that match the command text.' %
+ help=('Get the %s that match the command text.' %
resource.get_display_name().lower()))
self.parser.add_argument('-a', '--attr', nargs='+',
default=self.display_attributes,
|
{"golden_diff": "diff --git a/st2client/st2client/commands/action_alias.py b/st2client/st2client/commands/action_alias.py\n--- a/st2client/st2client/commands/action_alias.py\n+++ b/st2client/st2client/commands/action_alias.py\n@@ -63,13 +63,13 @@\n def __init__(self, resource, *args, **kwargs):\n super(ActionAliasMatchCommand, self).__init__(\n resource, 'match',\n- 'Get the list of %s that match the command text.' %\n- resource.get_plural_display_name().lower(),\n+ 'Get the %s that match the command text.' %\n+ resource.get_display_name().lower(),\n *args, **kwargs)\n \n self.parser.add_argument('match_text',\n metavar='command',\n- help=('Get the list of %s that match the command text.' %\n+ help=('Get the %s that match the command text.' %\n resource.get_display_name().lower()))\n self.parser.add_argument('-a', '--attr', nargs='+',\n default=self.display_attributes,\n", "issue": "Matching action alias for 'pack install xxx' returns an error\n##### SUMMARY\r\n\r\nNoticed by `@ravi` on Slack.\r\n\r\nWhen running `st2 action-alias match 'pack install vsphere'` multiple action-aliases are matched, causing an error.\r\n\r\nThis is confusing in two ways: \r\n1) Running that action-alias should not be ambiguous\r\n2) The help message for `st2 action-alias match` says it returns a list of matching aliases, but instead it either returns one or it errors.\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### STACKSTORM VERSION\r\n```shell\r\n$ st2 --version\r\nst2 2.10.1, on Python 2.7.5\r\n```\r\n\r\n##### OS / ENVIRONMENT / INSTALL METHOD\r\n```shell\r\n# OS\r\n$ cat /etc/redhat-release \r\nRed Hat Enterprise Linux Server release 7.6 (Maipo)\r\n\r\n# Install method\r\npuppet-st2\r\n```\r\n\r\n##### STEPS TO REPRODUCE\r\n```shell\r\n$ st2 action-alias match 'pack install vsphere'\r\n```\r\n\r\n##### EXPECTED RESULTS\r\n```shell\r\n$ st2 action-alias match 'pack install vsphere'\r\n+--------------+-----------------------------------+\r\n| name | description |\r\n+--------------+-----------------------------------+\r\n| pack_install | Install/upgrade StackStorm packs. |\r\n+--------------+-----------------------------------+\r\n```\r\n\r\n##### ACTUAL RESULTS\r\n```shell\r\n$ st2 action-alias match 'pack install vsphere'\r\nERROR: 400 Client Error: Bad Request\r\nMESSAGE: Command 'pack install vsphere' matched more than 1 pattern for url: http://127.0.0.1:9101/v1/actionalias/match\r\n```\r\n\r\nThis is also confusing because `st2 action-alias match --help` says it should return a list of matching aliases, when instead it either returns a single alias or it errors out if multiple are found.\r\n\r\n```shell\r\n$ st2 action-alias match --help\r\nusage: st2 action-alias match [-h] [-t TOKEN] [--api-key API_KEY] [-j] [-y]\r\n [-a ATTR [ATTR ...]] [-w WIDTH [WIDTH ...]]\r\n command\r\n\r\nGet the list of action aliases that match the command text.\r\n```\r\n\r\n##### INVESTIGATION\r\n\r\nIt appears that the action-alias `packs.pack_install` has redundant patterns defined: https://github.com/StackStorm/st2/blob/master/contrib/packs/aliases/pack_install.yaml#L7-L12\r\n\r\nI think the simplest fix is to remove the redundant pattern and change the display text for the one that is left behind.\n", "before_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. 
See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nfrom st2client.models import core\nfrom st2client.models.action_alias import ActionAlias\nfrom st2client.models.action_alias import ActionAliasMatch\nfrom st2client.commands import resource\nfrom st2client.formatters import table\n\n\n__all__ = [\n 'ActionAliasBranch',\n 'ActionAliasMatchCommand',\n 'ActionAliasExecuteCommand'\n]\n\n\nclass ActionAliasBranch(resource.ResourceBranch):\n def __init__(self, description, app, subparsers, parent_parser=None):\n super(ActionAliasBranch, self).__init__(\n ActionAlias, description, app, subparsers,\n parent_parser=parent_parser, read_only=False,\n commands={\n 'list': ActionAliasListCommand,\n 'get': ActionAliasGetCommand\n })\n\n self.commands['match'] = ActionAliasMatchCommand(\n self.resource, self.app, self.subparsers,\n add_help=True)\n self.commands['execute'] = ActionAliasExecuteCommand(\n self.resource, self.app, self.subparsers,\n add_help=True)\n\n\nclass ActionAliasListCommand(resource.ContentPackResourceListCommand):\n display_attributes = ['ref', 'pack', 'description', 'enabled']\n\n\nclass ActionAliasGetCommand(resource.ContentPackResourceGetCommand):\n display_attributes = ['all']\n attribute_display_order = ['id', 'ref', 'pack', 'name', 'description',\n 'enabled', 'action_ref', 'formats']\n\n\nclass ActionAliasMatchCommand(resource.ResourceCommand):\n display_attributes = ['name', 'description']\n\n def __init__(self, resource, *args, **kwargs):\n super(ActionAliasMatchCommand, self).__init__(\n resource, 'match',\n 'Get the list of %s that match the command text.' %\n resource.get_plural_display_name().lower(),\n *args, **kwargs)\n\n self.parser.add_argument('match_text',\n metavar='command',\n help=('Get the list of %s that match the command text.' %\n resource.get_display_name().lower()))\n self.parser.add_argument('-a', '--attr', nargs='+',\n default=self.display_attributes,\n help=('List of attributes to include in the '\n 'output. 
\"all\" will return all '\n 'attributes.'))\n self.parser.add_argument('-w', '--width', nargs='+', type=int,\n default=None,\n help=('Set the width of columns in output.'))\n\n @resource.add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n alias_match = ActionAliasMatch()\n alias_match.command = args.match_text\n\n match, _ = self.manager.match(alias_match, **kwargs)\n return [match]\n\n def run_and_print(self, args, **kwargs):\n instances = self.run(args, **kwargs)\n self.print_output(instances, table.MultiColumnTable,\n attributes=args.attr, widths=args.width,\n json=args.json, yaml=args.yaml)\n\n\nclass ActionAliasExecuteCommand(resource.ResourceCommand):\n display_attributes = ['name']\n\n def __init__(self, resource, *args, **kwargs):\n super(ActionAliasExecuteCommand, self).__init__(\n resource, 'execute',\n ('Execute the command text by finding a matching %s.' %\n resource.get_display_name().lower()), *args, **kwargs)\n\n self.parser.add_argument('command_text',\n metavar='command',\n help=('Execute the command text by finding a matching %s.' %\n resource.get_display_name().lower()))\n self.parser.add_argument('-u', '--user', type=str, default=None,\n help='User under which to run the action (admins only).')\n\n @resource.add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n payload = core.Resource()\n payload.command = args.command_text\n payload.user = args.user\n payload.source_channel = 'cli'\n\n alias_execution_mgr = self.app.client.managers['ActionAliasExecution']\n execution = alias_execution_mgr.match_and_execute(payload)\n return execution\n\n def run_and_print(self, args, **kwargs):\n execution = self.run(args, **kwargs)\n print(\"Matching Action-alias: '%s'\" % execution.actionalias['ref'])\n print(\"To get the results, execute:\\n st2 execution get %s\" %\n (execution.execution['id']))\n", "path": "st2client/st2client/commands/action_alias.py"}], "after_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nfrom st2client.models import core\nfrom st2client.models.action_alias import ActionAlias\nfrom st2client.models.action_alias import ActionAliasMatch\nfrom st2client.commands import resource\nfrom st2client.formatters import table\n\n\n__all__ = [\n 'ActionAliasBranch',\n 'ActionAliasMatchCommand',\n 'ActionAliasExecuteCommand'\n]\n\n\nclass ActionAliasBranch(resource.ResourceBranch):\n def __init__(self, description, app, subparsers, parent_parser=None):\n super(ActionAliasBranch, self).__init__(\n ActionAlias, description, app, subparsers,\n parent_parser=parent_parser, read_only=False,\n commands={\n 'list': ActionAliasListCommand,\n 'get': ActionAliasGetCommand\n })\n\n self.commands['match'] = ActionAliasMatchCommand(\n self.resource, self.app, self.subparsers,\n add_help=True)\n self.commands['execute'] = ActionAliasExecuteCommand(\n self.resource, self.app, self.subparsers,\n add_help=True)\n\n\nclass ActionAliasListCommand(resource.ContentPackResourceListCommand):\n display_attributes = ['ref', 'pack', 'description', 'enabled']\n\n\nclass ActionAliasGetCommand(resource.ContentPackResourceGetCommand):\n display_attributes = ['all']\n attribute_display_order = ['id', 'ref', 'pack', 'name', 'description',\n 'enabled', 'action_ref', 'formats']\n\n\nclass ActionAliasMatchCommand(resource.ResourceCommand):\n display_attributes = ['name', 'description']\n\n def __init__(self, resource, *args, **kwargs):\n super(ActionAliasMatchCommand, self).__init__(\n resource, 'match',\n 'Get the %s that match the command text.' %\n resource.get_display_name().lower(),\n *args, **kwargs)\n\n self.parser.add_argument('match_text',\n metavar='command',\n help=('Get the %s that match the command text.' %\n resource.get_display_name().lower()))\n self.parser.add_argument('-a', '--attr', nargs='+',\n default=self.display_attributes,\n help=('List of attributes to include in the '\n 'output. \"all\" will return all '\n 'attributes.'))\n self.parser.add_argument('-w', '--width', nargs='+', type=int,\n default=None,\n help=('Set the width of columns in output.'))\n\n @resource.add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n alias_match = ActionAliasMatch()\n alias_match.command = args.match_text\n\n match, _ = self.manager.match(alias_match, **kwargs)\n return [match]\n\n def run_and_print(self, args, **kwargs):\n instances = self.run(args, **kwargs)\n self.print_output(instances, table.MultiColumnTable,\n attributes=args.attr, widths=args.width,\n json=args.json, yaml=args.yaml)\n\n\nclass ActionAliasExecuteCommand(resource.ResourceCommand):\n display_attributes = ['name']\n\n def __init__(self, resource, *args, **kwargs):\n super(ActionAliasExecuteCommand, self).__init__(\n resource, 'execute',\n ('Execute the command text by finding a matching %s.' %\n resource.get_display_name().lower()), *args, **kwargs)\n\n self.parser.add_argument('command_text',\n metavar='command',\n help=('Execute the command text by finding a matching %s.' 
%\n resource.get_display_name().lower()))\n self.parser.add_argument('-u', '--user', type=str, default=None,\n help='User under which to run the action (admins only).')\n\n @resource.add_auth_token_to_kwargs_from_cli\n def run(self, args, **kwargs):\n payload = core.Resource()\n payload.command = args.command_text\n payload.user = args.user\n payload.source_channel = 'cli'\n\n alias_execution_mgr = self.app.client.managers['ActionAliasExecution']\n execution = alias_execution_mgr.match_and_execute(payload)\n return execution\n\n def run_and_print(self, args, **kwargs):\n execution = self.run(args, **kwargs)\n print(\"Matching Action-alias: '%s'\" % execution.actionalias['ref'])\n print(\"To get the results, execute:\\n st2 execution get %s\" %\n (execution.execution['id']))\n", "path": "st2client/st2client/commands/action_alias.py"}]}
| 2,191 | 233 |
gh_patches_debug_15556
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-580
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Database error if a source goes back and resubmits the /generate page
An IntegrityError is thrown by SQLAlchemy if a user goes back to the /generate form and resubmits it. There is an attempt to create another Source entry with a non-unique filesystem_id/codename. Instead the user should probably just be redirected to their /lookup page.
--- END ISSUE ---
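One possible direction (editor's sketch, not the project's actual patch; it reuses names from the `source.py` listing below and assumes the lookup page's Flask endpoint is called `lookup`) is to check for an existing `Source` with the same `filesystem_id` before inserting, and to redirect a returning source instead of creating a duplicate row:
```python
# Sketch of a guard inside the handler that would otherwise insert a new
# Source row (fragment, relying on source.py's existing module imports).
sid = crypto_util.hash_codename(session['codename'])
existing = Source.query.filter(Source.filesystem_id == sid).first()
if existing is not None:
    # The codename is already registered: treat this as a repeat submission
    # and send the user to their lookup page rather than re-inserting.
    session['logged_in'] = True
    return redirect(url_for('lookup'))
```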
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/source.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import os
3 from datetime import datetime
4 import uuid
5 from functools import wraps
6 import zipfile
7 from cStringIO import StringIO
8 import subprocess
9
10 import logging
11 # This module's logger is explicitly labeled so the correct logger is used,
12 # even when this is run from the command line (e.g. during development)
13 log = logging.getLogger('source')
14
15 from flask import (Flask, request, render_template, session, redirect, url_for,
16 flash, abort, g, send_file)
17 from flask_wtf.csrf import CsrfProtect
18
19 from sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound
20
21 import config
22 import version
23 import crypto_util
24 import store
25 import background
26 from db import db_session, Source, Submission
27 from request_that_secures_file_uploads import RequestThatSecuresFileUploads
28
29 app = Flask(__name__, template_folder=config.SOURCE_TEMPLATES_DIR)
30 app.request_class = RequestThatSecuresFileUploads
31 app.config.from_object(config.FlaskConfig)
32 CsrfProtect(app)
33
34 SUBMIT_DOC_NOTIFY_STR = "Thanks! We received your document"
35 SUBMIT_MSG_NOTIFY_STR = "Thanks! We received your message"
36 SUBMIT_CODENAME_NOTIFY_STR = "Please remember your codename: you can use it to log back into this site to read responses from us and to submit follow-up documents and messages."
37
38 app.jinja_env.globals['version'] = version.__version__
39 if getattr(config, 'CUSTOM_HEADER_IMAGE', None):
40 app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE
41 app.jinja_env.globals['use_custom_header_image'] = True
42 else:
43 app.jinja_env.globals['header_image'] = 'logo.png'
44 app.jinja_env.globals['use_custom_header_image'] = False
45
46 @app.template_filter('datetimeformat')
47 def _jinja2_datetimeformat(dt, fmt=None):
48 """Template filter for readable formatting of datetime.datetime"""
49 fmt = fmt or '%b %d, %Y %I:%M %p'
50 return dt.strftime(fmt)
51
52
53 @app.teardown_appcontext
54 def shutdown_session(exception=None):
55 """Automatically remove database sessions at the end of the request, or
56 when the application shuts down"""
57 db_session.remove()
58
59
60 def logged_in():
61 if 'logged_in' in session:
62 return True
63
64
65 def login_required(f):
66 @wraps(f)
67 def decorated_function(*args, **kwargs):
68 if not logged_in():
69 return redirect(url_for('login'))
70 return f(*args, **kwargs)
71 return decorated_function
72
73
74 def ignore_static(f):
75 """Only executes the wrapped function if we're not loading a static resource."""
76 @wraps(f)
77 def decorated_function(*args, **kwargs):
78 if request.path.startswith('/static'):
79 return # don't execute the decorated function
80 return f(*args, **kwargs)
81 return decorated_function
82
83
84 @app.before_request
85 @ignore_static
86 def setup_g():
87 """Store commonly used values in Flask's special g object"""
88 # ignore_static here because `crypto_util.hash_codename` is scrypt (very
89 # time consuming), and we don't need to waste time running if we're just
90 # serving a static resource that won't need to access these common values.
91 if logged_in():
92 g.codename = session['codename']
93 g.sid = crypto_util.hash_codename(g.codename)
94 try:
95 g.source = Source.query.filter(Source.filesystem_id == g.sid).one()
96 except MultipleResultsFound as e:
97 app.logger.error("Found multiple Sources when one was expected: %s" % (e,))
98 abort(500)
99 except NoResultFound as e:
100 app.logger.error("Found no Sources when one was expected: %s" % (e,))
101 del session['logged_in']
102 del session['codename']
103 return redirect(url_for('index'))
104 g.loc = store.path(g.sid)
105
106
107 @app.before_request
108 @ignore_static
109 def check_tor2web():
110 # ignore_static here so we only flash a single message warning about Tor2Web,
111 # corresponding to the intial page load.
112 if 'X-tor2web' in request.headers:
113 flash('<strong>WARNING:</strong> You appear to be using Tor2Web. '
114 'This <strong>does not</strong> provide anonymity. '
115 '<a href="/tor2web-warning">Why is this dangerous?</a>',
116 "banner-warning")
117
118
119 @app.route('/')
120 def index():
121 return render_template('index.html')
122
123
124 def generate_unique_codename(num_words):
125 """Generate random codenames until we get an unused one"""
126 while True:
127 codename = crypto_util.genrandomid(num_words)
128 sid = crypto_util.hash_codename(codename) # scrypt (slow)
129 matching_sources = Source.query.filter(Source.filesystem_id == sid).all()
130 if len(matching_sources) == 0:
131 return codename
132
133
134 @app.route('/generate', methods=('GET', 'POST'))
135 def generate():
136 # Popping this key prevents errors when a logged in user returns to /generate.
137 # TODO: is this the best experience? A logged in user will be automatically
138 # logged out if they navigate to /generate by accident, which could be
139 # confusing. It might be better to instead redirect them to the lookup
140 # page, or inform them that they're logged in.
141 session.pop('logged_in', None)
142
143 number_words = 8
144 if request.method == 'POST':
145 number_words = int(request.form['number-words'])
146 if number_words not in range(7, 11):
147 abort(403)
148
149 codename = generate_unique_codename(number_words)
150 session['codename'] = codename
151 return render_template('generate.html', codename=codename)
152
153
154 @app.route('/create', methods=['POST'])
155 def create():
156 sid = crypto_util.hash_codename(session['codename'])
157
158 source = Source(sid, crypto_util.display_id())
159 db_session.add(source)
160 db_session.commit()
161
162 os.mkdir(store.path(sid))
163
164 session['logged_in'] = True
165 return redirect(url_for('lookup'))
166
167
168 @app.route('/lookup', methods=('GET',))
169 @login_required
170 def lookup():
171 replies = []
172 for fn in os.listdir(g.loc):
173 if fn.endswith('-reply.gpg'):
174 try:
175 msg = crypto_util.decrypt(g.codename,
176 file(store.path(g.sid, fn)).read()).decode("utf-8")
177 except UnicodeDecodeError:
178 app.logger.error("Could not decode reply %s" % fn)
179 else:
180 date = datetime.fromtimestamp(os.stat(store.path(g.sid, fn)).st_mtime).strftime("%b %d, %Y %I:%M %p")
181 replies.append(dict(id=fn, date=date, msg=msg))
182
183 def async_genkey(sid, codename):
184 with app.app_context():
185 background.execute(lambda: crypto_util.genkeypair(sid, codename))
186
187 # Generate a keypair to encrypt replies from the journalist
188 # Only do this if the journalist has flagged the source as one
189 # that they would like to reply to. (Issue #140.)
190 if not crypto_util.getkey(g.sid) and g.source.flagged:
191 async_genkey(g.sid, g.codename)
192
193 # if this was a redirect from the login page, flash a message if there are
194 # no replies to clarify "check for replies" flow (#393)
195 if request.args.get('from_login') == '1' and len(replies) == 0:
196 flash("There are no replies at this time. You can submit more documents from this code name below.", "notification")
197
198 return render_template('lookup.html', codename=g.codename, replies=replies,
199 flagged=g.source.flagged, haskey=crypto_util.getkey(g.sid))
200
201
202 def normalize_timestamps(sid):
203 """
204 Update the timestamps on all of the source's submissions to match that of
205 the latest submission. This minimizes metadata that could be useful to
206 investigators. See #301.
207 """
208 sub_paths = [ store.path(sid, submission.filename)
209 for submission in g.source.submissions ]
210 if len(sub_paths) > 1:
211 args = ["touch"]
212 args.extend(sub_paths[:-1])
213 rc = subprocess.call(args)
214 if rc != 0:
215 app.logger.warning("Couldn't normalize submission timestamps (touch exited with %d)" % rc)
216
217
218 @app.route('/submit', methods=('POST',))
219 @login_required
220 def submit():
221 msg = request.form['msg']
222 fh = request.files['fh']
223
224 fnames = []
225 journalist_filename = g.source.journalist_filename()
226
227 if msg:
228 g.source.interaction_count += 1
229 fnames.append(store.save_message_submission(g.sid, g.source.interaction_count,
230 journalist_filename, msg))
231 flash("{}. {}".format(SUBMIT_MSG_NOTIFY_STR,
232 SUBMIT_CODENAME_NOTIFY_STR), "notification")
233 if fh:
234 g.source.interaction_count += 1
235 fnames.append(store.save_file_submission(g.sid, g.source.interaction_count,
236 journalist_filename, fh.filename, fh.stream))
237 flash("{} '{}'. {}".format(SUBMIT_DOC_NOTIFY_STR,
238 fh.filename or '[unnamed]',
239 SUBMIT_CODENAME_NOTIFY_STR), "notification")
240 for fname in fnames:
241 submission = Submission(g.source, fname)
242 db_session.add(submission)
243
244 if g.source.pending:
245 g.source.pending = False
246
247 # Generate a keypair now, if there's enough entropy (issue #303)
248 entropy_avail = int(open('/proc/sys/kernel/random/entropy_avail').read())
249 if entropy_avail >= 2400:
250 crypto_util.genkeypair(g.sid, g.codename)
251
252 g.source.last_updated = datetime.now()
253 db_session.commit()
254 normalize_timestamps(g.sid)
255
256 return redirect(url_for('lookup'))
257
258
259 @app.route('/delete', methods=('POST',))
260 @login_required
261 def delete():
262 msgid = request.form['msgid']
263 assert '/' not in msgid
264 potential_files = os.listdir(g.loc)
265 if msgid not in potential_files:
266 abort(404) # TODO are the checks necessary?
267 store.secure_unlink(store.path(g.sid, msgid))
268 flash("Reply deleted.", "notification")
269
270 return redirect(url_for('lookup'))
271
272
273 def valid_codename(codename):
274 return os.path.exists(store.path(crypto_util.hash_codename(codename)))
275
276 @app.route('/login', methods=('GET', 'POST'))
277 def login():
278 if request.method == 'POST':
279 codename = request.form['codename']
280 try:
281 valid = valid_codename(codename)
282 except crypto_util.CryptoException:
283 pass
284 else:
285 if valid:
286 session.update(codename=codename, logged_in=True)
287 return redirect(url_for('lookup', from_login='1'))
288 flash("Sorry, that is not a recognized codename.", "error")
289 return render_template('login.html')
290
291
292 @app.route('/howto-disable-js')
293 def howto_disable_js():
294 return render_template("howto-disable-js.html")
295
296
297 @app.route('/tor2web-warning')
298 def tor2web_warning():
299 return render_template("tor2web-warning.html")
300
301
302 @app.route('/journalist-key')
303 def download_journalist_pubkey():
304 journalist_pubkey = crypto_util.gpg.export_keys(config.JOURNALIST_KEY)
305 return send_file(StringIO(journalist_pubkey),
306 mimetype="application/pgp-keys",
307 attachment_filename=config.JOURNALIST_KEY + ".asc",
308 as_attachment=True)
309
310
311 @app.route('/why-journalist-key')
312 def why_download_journalist_pubkey():
313 return render_template("why-journalist-key.html")
314
315
316 @app.errorhandler(404)
317 def page_not_found(error):
318 return render_template('notfound.html'), 404
319
320 @app.errorhandler(500)
321 def internal_error(error):
322 return render_template('error.html'), 500
323
324 def write_pidfile():
325 pid = str(os.getpid())
326 with open(config.SOURCE_PIDFILE, 'w') as fp:
327 fp.write(pid)
328
329 if __name__ == "__main__":
330 write_pidfile()
331 # TODO make sure debug is not on in production
332 app.run(debug=True, host='0.0.0.0', port=8080)
333
334
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/securedrop/source.py b/securedrop/source.py
--- a/securedrop/source.py
+++ b/securedrop/source.py
@@ -17,6 +17,7 @@
from flask_wtf.csrf import CsrfProtect
from sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound
+from sqlalchemy.exc import IntegrityError
import config
import version
@@ -157,9 +158,12 @@
source = Source(sid, crypto_util.display_id())
db_session.add(source)
- db_session.commit()
-
- os.mkdir(store.path(sid))
+ try:
+ db_session.commit()
+ except IntegrityError as e:
+ app.logger.error("Attempt to create a source with duplicate codename: %s" % (e,))
+ else:
+ os.mkdir(store.path(sid))
session['logged_in'] = True
return redirect(url_for('lookup'))
|
{"golden_diff": "diff --git a/securedrop/source.py b/securedrop/source.py\n--- a/securedrop/source.py\n+++ b/securedrop/source.py\n@@ -17,6 +17,7 @@\n from flask_wtf.csrf import CsrfProtect\n \n from sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound\n+from sqlalchemy.exc import IntegrityError\n \n import config\n import version\n@@ -157,9 +158,12 @@\n \n source = Source(sid, crypto_util.display_id())\n db_session.add(source)\n- db_session.commit()\n-\n- os.mkdir(store.path(sid))\n+ try:\n+ db_session.commit()\n+ except IntegrityError as e: \n+ app.logger.error(\"Attempt to create a source with duplicate codename: %s\" % (e,))\n+ else:\n+ os.mkdir(store.path(sid))\n \n session['logged_in'] = True\n return redirect(url_for('lookup'))\n", "issue": "Database error if a source goes back and resubmits the /generate page\nA IntegrityError is thrown by SqlAlchemy if a user goes back to the /generate form and resubmits it. There is an attempt to create another Source entry with a non unqiue filesystem_id/codename. Instead the user should probably just be redirected to their /lookup page\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\nfrom datetime import datetime\nimport uuid\nfrom functools import wraps\nimport zipfile\nfrom cStringIO import StringIO\nimport subprocess\n\nimport logging\n# This module's logger is explicitly labeled so the correct logger is used,\n# even when this is run from the command line (e.g. during development)\nlog = logging.getLogger('source')\n\nfrom flask import (Flask, request, render_template, session, redirect, url_for,\n flash, abort, g, send_file)\nfrom flask_wtf.csrf import CsrfProtect\n\nfrom sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound\n\nimport config\nimport version\nimport crypto_util\nimport store\nimport background\nfrom db import db_session, Source, Submission\nfrom request_that_secures_file_uploads import RequestThatSecuresFileUploads\n\napp = Flask(__name__, template_folder=config.SOURCE_TEMPLATES_DIR)\napp.request_class = RequestThatSecuresFileUploads\napp.config.from_object(config.FlaskConfig)\nCsrfProtect(app)\n\nSUBMIT_DOC_NOTIFY_STR = \"Thanks! We received your document\"\nSUBMIT_MSG_NOTIFY_STR = \"Thanks! 
We received your message\"\nSUBMIT_CODENAME_NOTIFY_STR = \"Please remember your codename: you can use it to log back into this site to read responses from us and to submit follow-up documents and messages.\"\n\napp.jinja_env.globals['version'] = version.__version__\nif getattr(config, 'CUSTOM_HEADER_IMAGE', None):\n app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE\n app.jinja_env.globals['use_custom_header_image'] = True\nelse:\n app.jinja_env.globals['header_image'] = 'logo.png'\n app.jinja_env.globals['use_custom_header_image'] = False\n\[email protected]_filter('datetimeformat')\ndef _jinja2_datetimeformat(dt, fmt=None):\n \"\"\"Template filter for readable formatting of datetime.datetime\"\"\"\n fmt = fmt or '%b %d, %Y %I:%M %p'\n return dt.strftime(fmt)\n\n\[email protected]_appcontext\ndef shutdown_session(exception=None):\n \"\"\"Automatically remove database sessions at the end of the request, or\n when the application shuts down\"\"\"\n db_session.remove()\n\n\ndef logged_in():\n if 'logged_in' in session:\n return True\n\n\ndef login_required(f):\n @wraps(f)\n def decorated_function(*args, **kwargs):\n if not logged_in():\n return redirect(url_for('login'))\n return f(*args, **kwargs)\n return decorated_function\n\n\ndef ignore_static(f):\n \"\"\"Only executes the wrapped function if we're not loading a static resource.\"\"\"\n @wraps(f)\n def decorated_function(*args, **kwargs):\n if request.path.startswith('/static'):\n return # don't execute the decorated function\n return f(*args, **kwargs)\n return decorated_function\n\n\[email protected]_request\n@ignore_static\ndef setup_g():\n \"\"\"Store commonly used values in Flask's special g object\"\"\"\n # ignore_static here because `crypto_util.hash_codename` is scrypt (very\n # time consuming), and we don't need to waste time running if we're just\n # serving a static resource that won't need to access these common values.\n if logged_in():\n g.codename = session['codename']\n g.sid = crypto_util.hash_codename(g.codename)\n try:\n g.source = Source.query.filter(Source.filesystem_id == g.sid).one()\n except MultipleResultsFound as e:\n app.logger.error(\"Found multiple Sources when one was expected: %s\" % (e,))\n abort(500)\n except NoResultFound as e:\n app.logger.error(\"Found no Sources when one was expected: %s\" % (e,))\n del session['logged_in']\n del session['codename']\n return redirect(url_for('index'))\n g.loc = store.path(g.sid)\n\n\[email protected]_request\n@ignore_static\ndef check_tor2web():\n # ignore_static here so we only flash a single message warning about Tor2Web,\n # corresponding to the intial page load.\n if 'X-tor2web' in request.headers:\n flash('<strong>WARNING:</strong> You appear to be using Tor2Web. '\n 'This <strong>does not</strong> provide anonymity. '\n '<a href=\"/tor2web-warning\">Why is this dangerous?</a>',\n \"banner-warning\")\n\n\[email protected]('/')\ndef index():\n return render_template('index.html')\n\n\ndef generate_unique_codename(num_words):\n \"\"\"Generate random codenames until we get an unused one\"\"\"\n while True:\n codename = crypto_util.genrandomid(num_words)\n sid = crypto_util.hash_codename(codename) # scrypt (slow)\n matching_sources = Source.query.filter(Source.filesystem_id == sid).all()\n if len(matching_sources) == 0:\n return codename\n\n\[email protected]('/generate', methods=('GET', 'POST'))\ndef generate():\n # Popping this key prevents errors when a logged in user returns to /generate.\n # TODO: is this the best experience? 
A logged in user will be automatically\n # logged out if they navigate to /generate by accident, which could be\n # confusing. It might be better to instead redirect them to the lookup\n # page, or inform them that they're logged in.\n session.pop('logged_in', None)\n\n number_words = 8\n if request.method == 'POST':\n number_words = int(request.form['number-words'])\n if number_words not in range(7, 11):\n abort(403)\n\n codename = generate_unique_codename(number_words)\n session['codename'] = codename\n return render_template('generate.html', codename=codename)\n\n\[email protected]('/create', methods=['POST'])\ndef create():\n sid = crypto_util.hash_codename(session['codename'])\n\n source = Source(sid, crypto_util.display_id())\n db_session.add(source)\n db_session.commit()\n\n os.mkdir(store.path(sid))\n\n session['logged_in'] = True\n return redirect(url_for('lookup'))\n\n\[email protected]('/lookup', methods=('GET',))\n@login_required\ndef lookup():\n replies = []\n for fn in os.listdir(g.loc):\n if fn.endswith('-reply.gpg'):\n try:\n msg = crypto_util.decrypt(g.codename,\n file(store.path(g.sid, fn)).read()).decode(\"utf-8\")\n except UnicodeDecodeError:\n app.logger.error(\"Could not decode reply %s\" % fn)\n else:\n date = datetime.fromtimestamp(os.stat(store.path(g.sid, fn)).st_mtime).strftime(\"%b %d, %Y %I:%M %p\")\n replies.append(dict(id=fn, date=date, msg=msg))\n\n def async_genkey(sid, codename):\n with app.app_context():\n background.execute(lambda: crypto_util.genkeypair(sid, codename))\n\n # Generate a keypair to encrypt replies from the journalist\n # Only do this if the journalist has flagged the source as one\n # that they would like to reply to. (Issue #140.)\n if not crypto_util.getkey(g.sid) and g.source.flagged:\n async_genkey(g.sid, g.codename)\n\n # if this was a redirect from the login page, flash a message if there are\n # no replies to clarify \"check for replies\" flow (#393)\n if request.args.get('from_login') == '1' and len(replies) == 0:\n flash(\"There are no replies at this time. You can submit more documents from this code name below.\", \"notification\")\n\n return render_template('lookup.html', codename=g.codename, replies=replies,\n flagged=g.source.flagged, haskey=crypto_util.getkey(g.sid))\n\n\ndef normalize_timestamps(sid):\n \"\"\"\n Update the timestamps on all of the source's submissions to match that of\n the latest submission. This minimizes metadata that could be useful to\n investigators. See #301.\n \"\"\"\n sub_paths = [ store.path(sid, submission.filename)\n for submission in g.source.submissions ]\n if len(sub_paths) > 1:\n args = [\"touch\"]\n args.extend(sub_paths[:-1])\n rc = subprocess.call(args)\n if rc != 0:\n app.logger.warning(\"Couldn't normalize submission timestamps (touch exited with %d)\" % rc)\n\n\[email protected]('/submit', methods=('POST',))\n@login_required\ndef submit():\n msg = request.form['msg']\n fh = request.files['fh']\n\n fnames = []\n journalist_filename = g.source.journalist_filename()\n\n if msg:\n g.source.interaction_count += 1\n fnames.append(store.save_message_submission(g.sid, g.source.interaction_count,\n journalist_filename, msg))\n flash(\"{}. {}\".format(SUBMIT_MSG_NOTIFY_STR,\n SUBMIT_CODENAME_NOTIFY_STR), \"notification\")\n if fh:\n g.source.interaction_count += 1\n fnames.append(store.save_file_submission(g.sid, g.source.interaction_count,\n journalist_filename, fh.filename, fh.stream))\n flash(\"{} '{}'. 
{}\".format(SUBMIT_DOC_NOTIFY_STR,\n fh.filename or '[unnamed]',\n SUBMIT_CODENAME_NOTIFY_STR), \"notification\")\n for fname in fnames:\n submission = Submission(g.source, fname)\n db_session.add(submission)\n\n if g.source.pending:\n g.source.pending = False\n\n # Generate a keypair now, if there's enough entropy (issue #303)\n entropy_avail = int(open('/proc/sys/kernel/random/entropy_avail').read())\n if entropy_avail >= 2400:\n crypto_util.genkeypair(g.sid, g.codename)\n\n g.source.last_updated = datetime.now()\n db_session.commit()\n normalize_timestamps(g.sid)\n\n return redirect(url_for('lookup'))\n\n\[email protected]('/delete', methods=('POST',))\n@login_required\ndef delete():\n msgid = request.form['msgid']\n assert '/' not in msgid\n potential_files = os.listdir(g.loc)\n if msgid not in potential_files:\n abort(404) # TODO are the checks necessary?\n store.secure_unlink(store.path(g.sid, msgid))\n flash(\"Reply deleted.\", \"notification\")\n\n return redirect(url_for('lookup'))\n\n\ndef valid_codename(codename):\n return os.path.exists(store.path(crypto_util.hash_codename(codename)))\n\[email protected]('/login', methods=('GET', 'POST'))\ndef login():\n if request.method == 'POST':\n codename = request.form['codename']\n try:\n valid = valid_codename(codename)\n except crypto_util.CryptoException:\n pass\n else:\n if valid:\n session.update(codename=codename, logged_in=True)\n return redirect(url_for('lookup', from_login='1'))\n flash(\"Sorry, that is not a recognized codename.\", \"error\")\n return render_template('login.html')\n\n\[email protected]('/howto-disable-js')\ndef howto_disable_js():\n return render_template(\"howto-disable-js.html\")\n\n\[email protected]('/tor2web-warning')\ndef tor2web_warning():\n return render_template(\"tor2web-warning.html\")\n\n\[email protected]('/journalist-key')\ndef download_journalist_pubkey():\n journalist_pubkey = crypto_util.gpg.export_keys(config.JOURNALIST_KEY)\n return send_file(StringIO(journalist_pubkey),\n mimetype=\"application/pgp-keys\",\n attachment_filename=config.JOURNALIST_KEY + \".asc\",\n as_attachment=True)\n\n\[email protected]('/why-journalist-key')\ndef why_download_journalist_pubkey():\n return render_template(\"why-journalist-key.html\")\n\n\[email protected](404)\ndef page_not_found(error):\n return render_template('notfound.html'), 404\n\[email protected](500)\ndef internal_error(error):\n return render_template('error.html'), 500\n\ndef write_pidfile():\n pid = str(os.getpid())\n with open(config.SOURCE_PIDFILE, 'w') as fp:\n fp.write(pid)\n\nif __name__ == \"__main__\":\n write_pidfile()\n # TODO make sure debug is not on in production\n app.run(debug=True, host='0.0.0.0', port=8080)\n\n", "path": "securedrop/source.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\nfrom datetime import datetime\nimport uuid\nfrom functools import wraps\nimport zipfile\nfrom cStringIO import StringIO\nimport subprocess\n\nimport logging\n# This module's logger is explicitly labeled so the correct logger is used,\n# even when this is run from the command line (e.g. 
during development)\nlog = logging.getLogger('source')\n\nfrom flask import (Flask, request, render_template, session, redirect, url_for,\n flash, abort, g, send_file)\nfrom flask_wtf.csrf import CsrfProtect\n\nfrom sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound\nfrom sqlalchemy.exc import IntegrityError\n\nimport config\nimport version\nimport crypto_util\nimport store\nimport background\nfrom db import db_session, Source, Submission\nfrom request_that_secures_file_uploads import RequestThatSecuresFileUploads\n\napp = Flask(__name__, template_folder=config.SOURCE_TEMPLATES_DIR)\napp.request_class = RequestThatSecuresFileUploads\napp.config.from_object(config.FlaskConfig)\nCsrfProtect(app)\n\nSUBMIT_DOC_NOTIFY_STR = \"Thanks! We received your document\"\nSUBMIT_MSG_NOTIFY_STR = \"Thanks! We received your message\"\nSUBMIT_CODENAME_NOTIFY_STR = \"Please remember your codename: you can use it to log back into this site to read responses from us and to submit follow-up documents and messages.\"\n\napp.jinja_env.globals['version'] = version.__version__\nif getattr(config, 'CUSTOM_HEADER_IMAGE', None):\n app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE\n app.jinja_env.globals['use_custom_header_image'] = True\nelse:\n app.jinja_env.globals['header_image'] = 'logo.png'\n app.jinja_env.globals['use_custom_header_image'] = False\n\[email protected]_filter('datetimeformat')\ndef _jinja2_datetimeformat(dt, fmt=None):\n \"\"\"Template filter for readable formatting of datetime.datetime\"\"\"\n fmt = fmt or '%b %d, %Y %I:%M %p'\n return dt.strftime(fmt)\n\n\[email protected]_appcontext\ndef shutdown_session(exception=None):\n \"\"\"Automatically remove database sessions at the end of the request, or\n when the application shuts down\"\"\"\n db_session.remove()\n\n\ndef logged_in():\n if 'logged_in' in session:\n return True\n\n\ndef login_required(f):\n @wraps(f)\n def decorated_function(*args, **kwargs):\n if not logged_in():\n return redirect(url_for('login'))\n return f(*args, **kwargs)\n return decorated_function\n\n\ndef ignore_static(f):\n \"\"\"Only executes the wrapped function if we're not loading a static resource.\"\"\"\n @wraps(f)\n def decorated_function(*args, **kwargs):\n if request.path.startswith('/static'):\n return # don't execute the decorated function\n return f(*args, **kwargs)\n return decorated_function\n\n\[email protected]_request\n@ignore_static\ndef setup_g():\n \"\"\"Store commonly used values in Flask's special g object\"\"\"\n # ignore_static here because `crypto_util.hash_codename` is scrypt (very\n # time consuming), and we don't need to waste time running if we're just\n # serving a static resource that won't need to access these common values.\n if logged_in():\n g.codename = session['codename']\n g.sid = crypto_util.hash_codename(g.codename)\n try:\n g.source = Source.query.filter(Source.filesystem_id == g.sid).one()\n except MultipleResultsFound as e:\n app.logger.error(\"Found multiple Sources when one was expected: %s\" % (e,))\n abort(500)\n except NoResultFound as e:\n app.logger.error(\"Found no Sources when one was expected: %s\" % (e,))\n del session['logged_in']\n del session['codename']\n return redirect(url_for('index'))\n g.loc = store.path(g.sid)\n\n\[email protected]_request\n@ignore_static\ndef check_tor2web():\n # ignore_static here so we only flash a single message warning about Tor2Web,\n # corresponding to the intial page load.\n if 'X-tor2web' in request.headers:\n flash('<strong>WARNING:</strong> You appear to be 
using Tor2Web. '\n 'This <strong>does not</strong> provide anonymity. '\n '<a href=\"/tor2web-warning\">Why is this dangerous?</a>',\n \"banner-warning\")\n\n\[email protected]('/')\ndef index():\n return render_template('index.html')\n\n\ndef generate_unique_codename(num_words):\n \"\"\"Generate random codenames until we get an unused one\"\"\"\n while True:\n codename = crypto_util.genrandomid(num_words)\n sid = crypto_util.hash_codename(codename) # scrypt (slow)\n matching_sources = Source.query.filter(Source.filesystem_id == sid).all()\n if len(matching_sources) == 0:\n return codename\n\n\[email protected]('/generate', methods=('GET', 'POST'))\ndef generate():\n # Popping this key prevents errors when a logged in user returns to /generate.\n # TODO: is this the best experience? A logged in user will be automatically\n # logged out if they navigate to /generate by accident, which could be\n # confusing. It might be better to instead redirect them to the lookup\n # page, or inform them that they're logged in.\n session.pop('logged_in', None)\n\n number_words = 8\n if request.method == 'POST':\n number_words = int(request.form['number-words'])\n if number_words not in range(7, 11):\n abort(403)\n\n codename = generate_unique_codename(number_words)\n session['codename'] = codename\n return render_template('generate.html', codename=codename)\n\n\[email protected]('/create', methods=['POST'])\ndef create():\n sid = crypto_util.hash_codename(session['codename'])\n\n source = Source(sid, crypto_util.display_id())\n db_session.add(source)\n try:\n db_session.commit()\n except IntegrityError as e: \n app.logger.error(\"Attempt to create a source with duplicate codename: %s\" % (e,))\n else:\n os.mkdir(store.path(sid))\n\n session['logged_in'] = True\n return redirect(url_for('lookup'))\n\n\[email protected]('/lookup', methods=('GET',))\n@login_required\ndef lookup():\n replies = []\n for fn in os.listdir(g.loc):\n if fn.endswith('-reply.gpg'):\n try:\n msg = crypto_util.decrypt(g.codename,\n file(store.path(g.sid, fn)).read()).decode(\"utf-8\")\n except UnicodeDecodeError:\n app.logger.error(\"Could not decode reply %s\" % fn)\n else:\n date = datetime.fromtimestamp(os.stat(store.path(g.sid, fn)).st_mtime).strftime(\"%b %d, %Y %I:%M %p\")\n replies.append(dict(id=fn, date=date, msg=msg))\n\n def async_genkey(sid, codename):\n with app.app_context():\n background.execute(lambda: crypto_util.genkeypair(sid, codename))\n\n # Generate a keypair to encrypt replies from the journalist\n # Only do this if the journalist has flagged the source as one\n # that they would like to reply to. (Issue #140.)\n if not crypto_util.getkey(g.sid) and g.source.flagged:\n async_genkey(g.sid, g.codename)\n\n # if this was a redirect from the login page, flash a message if there are\n # no replies to clarify \"check for replies\" flow (#393)\n if request.args.get('from_login') == '1' and len(replies) == 0:\n flash(\"There are no replies at this time. You can submit more documents from this code name below.\", \"notification\")\n\n return render_template('lookup.html', codename=g.codename, replies=replies,\n flagged=g.source.flagged, haskey=crypto_util.getkey(g.sid))\n\n\ndef normalize_timestamps(sid):\n \"\"\"\n Update the timestamps on all of the source's submissions to match that of\n the latest submission. This minimizes metadata that could be useful to\n investigators. 
See #301.\n \"\"\"\n sub_paths = [ store.path(sid, submission.filename)\n for submission in g.source.submissions ]\n if len(sub_paths) > 1:\n args = [\"touch\"]\n args.extend(sub_paths[:-1])\n rc = subprocess.call(args)\n if rc != 0:\n app.logger.warning(\"Couldn't normalize submission timestamps (touch exited with %d)\" % rc)\n\n\[email protected]('/submit', methods=('POST',))\n@login_required\ndef submit():\n msg = request.form['msg']\n fh = request.files['fh']\n\n fnames = []\n journalist_filename = g.source.journalist_filename()\n\n if msg:\n g.source.interaction_count += 1\n fnames.append(store.save_message_submission(g.sid, g.source.interaction_count,\n journalist_filename, msg))\n flash(\"{}. {}\".format(SUBMIT_MSG_NOTIFY_STR,\n SUBMIT_CODENAME_NOTIFY_STR), \"notification\")\n if fh:\n g.source.interaction_count += 1\n fnames.append(store.save_file_submission(g.sid, g.source.interaction_count,\n journalist_filename, fh.filename, fh.stream))\n flash(\"{} '{}'. {}\".format(SUBMIT_DOC_NOTIFY_STR,\n fh.filename or '[unnamed]',\n SUBMIT_CODENAME_NOTIFY_STR), \"notification\")\n for fname in fnames:\n submission = Submission(g.source, fname)\n db_session.add(submission)\n\n if g.source.pending:\n g.source.pending = False\n\n # Generate a keypair now, if there's enough entropy (issue #303)\n entropy_avail = int(open('/proc/sys/kernel/random/entropy_avail').read())\n if entropy_avail >= 2400:\n crypto_util.genkeypair(g.sid, g.codename)\n\n g.source.last_updated = datetime.now()\n db_session.commit()\n normalize_timestamps(g.sid)\n\n return redirect(url_for('lookup'))\n\n\[email protected]('/delete', methods=('POST',))\n@login_required\ndef delete():\n msgid = request.form['msgid']\n assert '/' not in msgid\n potential_files = os.listdir(g.loc)\n if msgid not in potential_files:\n abort(404) # TODO are the checks necessary?\n store.secure_unlink(store.path(g.sid, msgid))\n flash(\"Reply deleted.\", \"notification\")\n\n return redirect(url_for('lookup'))\n\n\ndef valid_codename(codename):\n return os.path.exists(store.path(crypto_util.hash_codename(codename)))\n\[email protected]('/login', methods=('GET', 'POST'))\ndef login():\n if request.method == 'POST':\n codename = request.form['codename']\n try:\n valid = valid_codename(codename)\n except crypto_util.CryptoException:\n pass\n else:\n if valid:\n session.update(codename=codename, logged_in=True)\n return redirect(url_for('lookup', from_login='1'))\n flash(\"Sorry, that is not a recognized codename.\", \"error\")\n return render_template('login.html')\n\n\[email protected]('/howto-disable-js')\ndef howto_disable_js():\n return render_template(\"howto-disable-js.html\")\n\n\[email protected]('/tor2web-warning')\ndef tor2web_warning():\n return render_template(\"tor2web-warning.html\")\n\n\[email protected]('/journalist-key')\ndef download_journalist_pubkey():\n journalist_pubkey = crypto_util.gpg.export_keys(config.JOURNALIST_KEY)\n return send_file(StringIO(journalist_pubkey),\n mimetype=\"application/pgp-keys\",\n attachment_filename=config.JOURNALIST_KEY + \".asc\",\n as_attachment=True)\n\n\[email protected]('/why-journalist-key')\ndef why_download_journalist_pubkey():\n return render_template(\"why-journalist-key.html\")\n\n\[email protected](404)\ndef page_not_found(error):\n return render_template('notfound.html'), 404\n\[email protected](500)\ndef internal_error(error):\n return render_template('error.html'), 500\n\ndef write_pidfile():\n pid = str(os.getpid())\n with open(config.SOURCE_PIDFILE, 'w') as fp:\n fp.write(pid)\n\nif 
__name__ == \"__main__\":\n write_pidfile()\n # TODO make sure debug is not on in production\n app.run(debug=True, host='0.0.0.0', port=8080)\n\n", "path": "securedrop/source.py"}]}
| 3,933 | 201 |
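As an aside to the SecureDrop record above, the pattern in its golden diff — committing inside a `try` block and logging the `IntegrityError` instead of letting the duplicate insert surface as a 500 — can be sketched in isolation. The snippet below is a minimal, self-contained approximation only: the `Source` model, column names, and in-memory SQLite engine are illustrative stand-ins rather than SecureDrop's real schema or database setup, and the `rollback()` call is an extra safeguard that the patch itself does not add. The import path assumes SQLAlchemy 1.4 or newer.

```python
# Hedged sketch of the "commit, catch IntegrityError" pattern from the golden diff.
# The model and engine here are placeholders, not SecureDrop's actual schema.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker   # SQLAlchemy >= 1.4
from sqlalchemy.exc import IntegrityError

Base = declarative_base()

class Source(Base):
    __tablename__ = "sources"
    id = Column(Integer, primary_key=True)
    filesystem_id = Column(String, unique=True, nullable=False)  # duplicates violate this

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()

def create_source(session, filesystem_id: str) -> None:
    """Add a source; on a duplicate codename, log and carry on instead of erroring."""
    session.add(Source(filesystem_id=filesystem_id))
    try:
        session.commit()
    except IntegrityError as e:
        session.rollback()  # the session must be rolled back before it can be reused
        print("Attempt to create a source with duplicate codename: %s" % (e,))

create_source(session, "abc123")  # first visit to /create: row is written
create_source(session, "abc123")  # resubmitted /generate form: logged, nothing escapes
```

Either way the caller can then redirect the user to /lookup, which is what the patched `create()` view does after its `try`/`else` block.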
gh_patches_debug_13766
|
rasdani/github-patches
|
git_diff
|
cobbler__cobbler-3396
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Backport] autoinstall_templates are installed into /var/lib/cobbler/templates
### Original feature issue
- PR: #2590
### Target release
- [ ] release33
- [x] release32
- [ ] release30
### Reason
Stabilization of Cobbler 3.2.x in Fedora Ecosystem.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cobbler/actions/sync.py`
Content:
```
1 """
2 Builds out filesystem trees/data based on the object tree.
3 This is the code behind 'cobbler sync'.
4
5 Copyright 2006-2009, Red Hat, Inc and Others
6 Michael DeHaan <michael.dehaan AT gmail>
7
8 This program is free software; you can redistribute it and/or modify
9 it under the terms of the GNU General Public License as published by
10 the Free Software Foundation; either version 2 of the License, or
11 (at your option) any later version.
12
13 This program is distributed in the hope that it will be useful,
14 but WITHOUT ANY WARRANTY; without even the implied warranty of
15 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 GNU General Public License for more details.
17
18 You should have received a copy of the GNU General Public License
19 along with this program; if not, write to the Free Software
20 Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
21 02110-1301 USA
22 """
23
24 import glob
25 import os
26 import time
27
28 from cobbler.cexceptions import CX
29 from cobbler import clogger
30 from cobbler import templar
31 from cobbler import tftpgen
32 from cobbler import utils
33
34
35 class CobblerSync:
36 """
37 Handles conversion of internal state to the tftpboot tree layout
38 """
39
40 def __init__(self, collection_mgr, verbose: bool = True, dhcp=None, dns=None, logger=None, tftpd=None):
41 """
42 Constructor
43
44 :param collection_mgr: The collection manager instance which holds all information about cobbler.
45 :param verbose: Whether to log the actions performed in this module verbose or not.
46 :param dhcp: The DHCP manager which can update the DHCP config.
47 :param dns: The DNS manager which can update the DNS config.
48 :param logger: The logger to audit all action with.
49 :param tftpd: The TFTP manager which can update the TFTP config.
50 """
51 self.logger = logger
52 if logger is None:
53 self.logger = clogger.Logger()
54
55 self.verbose = verbose
56 self.collection_mgr = collection_mgr
57 self.api = collection_mgr.api
58 self.distros = collection_mgr.distros()
59 self.profiles = collection_mgr.profiles()
60 self.systems = collection_mgr.systems()
61 self.settings = collection_mgr.settings()
62 self.repos = collection_mgr.repos()
63 self.templar = templar.Templar(collection_mgr, self.logger)
64 self.tftpgen = tftpgen.TFTPGen(collection_mgr, self.logger)
65 self.dns = dns
66 self.dhcp = dhcp
67 self.tftpd = tftpd
68 self.bootloc = self.settings.tftpboot_location
69 self.tftpgen.verbose = verbose
70 self.dns.verbose = verbose
71 self.dhcp.verbose = verbose
72
73 self.pxelinux_dir = os.path.join(self.bootloc, "pxelinux.cfg")
74 self.grub_dir = os.path.join(self.bootloc, "grub")
75 self.images_dir = os.path.join(self.bootloc, "images")
76 self.yaboot_bin_dir = os.path.join(self.bootloc, "ppc")
77 self.yaboot_cfg_dir = os.path.join(self.bootloc, "etc")
78 self.rendered_dir = os.path.join(self.settings.webdir, "rendered")
79
80 def run(self):
81 """
82 Syncs the current configuration file with the config tree.
83 Using the ``Check().run_`` functions previously is recommended
84 """
85 if not os.path.exists(self.bootloc):
86 utils.die(self.logger, "cannot find directory: %s" % self.bootloc)
87
88 self.logger.info("running pre-sync triggers")
89
90 # run pre-triggers...
91 utils.run_triggers(self.api, None, "/var/lib/cobbler/triggers/sync/pre/*")
92
93 self.distros = self.collection_mgr.distros()
94 self.profiles = self.collection_mgr.profiles()
95 self.systems = self.collection_mgr.systems()
96 self.settings = self.collection_mgr.settings()
97 self.repos = self.collection_mgr.repos()
98
99 # execute the core of the sync operation
100 self.logger.info("cleaning trees")
101 self.clean_trees()
102
103 # Have the tftpd module handle copying bootloaders, distros, images, and all_system_files
104 self.tftpd.sync(self.verbose)
105 # Copy distros to the webdir
106 # Adding in the exception handling to not blow up if files have been moved (or the path references an NFS
107 # directory that's no longer mounted)
108 for d in self.distros:
109 try:
110 self.logger.info("copying files for distro: %s" % d.name)
111 self.tftpgen.copy_single_distro_files(d, self.settings.webdir, True)
112 self.tftpgen.write_templates(d, write_file=True)
113 except CX as e:
114 self.logger.error(e.value)
115
116 # make the default pxe menu anyway...
117 self.tftpgen.make_pxe_menu()
118
119 if self.settings.manage_dhcp:
120 self.write_dhcp()
121 if self.settings.manage_dns:
122 self.logger.info("rendering DNS files")
123 self.dns.regen_hosts()
124 self.dns.write_dns_files()
125
126 if self.settings.manage_tftpd:
127 # copy in boot_files
128 self.tftpd.write_boot_files()
129
130 self.logger.info("cleaning link caches")
131 self.clean_link_cache()
132
133 if self.settings.manage_rsync:
134 self.logger.info("rendering Rsync files")
135 self.rsync_gen()
136
137 # run post-triggers
138 self.logger.info("running post-sync triggers")
139 utils.run_triggers(self.api, None, "/var/lib/cobbler/triggers/sync/post/*", logger=self.logger)
140 utils.run_triggers(self.api, None, "/var/lib/cobbler/triggers/change/*", logger=self.logger)
141
142 def make_tftpboot(self):
143 """
144 Make directories for tftpboot images
145 """
146 if not os.path.exists(self.pxelinux_dir):
147 utils.mkdir(self.pxelinux_dir, logger=self.logger)
148 if not os.path.exists(self.grub_dir):
149 utils.mkdir(self.grub_dir, logger=self.logger)
150 grub_images_link = os.path.join(self.grub_dir, "images")
151 if not os.path.exists(grub_images_link):
152 os.symlink("../images", grub_images_link)
153 if not os.path.exists(self.images_dir):
154 utils.mkdir(self.images_dir, logger=self.logger)
155 if not os.path.exists(self.rendered_dir):
156 utils.mkdir(self.rendered_dir, logger=self.logger)
157 if not os.path.exists(self.yaboot_bin_dir):
158 utils.mkdir(self.yaboot_bin_dir, logger=self.logger)
159 if not os.path.exists(self.yaboot_cfg_dir):
160 utils.mkdir(self.yaboot_cfg_dir, logger=self.logger)
161
162 def clean_trees(self):
163 """
164 Delete any previously built pxelinux.cfg tree and virt tree info and then create directories.
165
166 Note: for SELinux reasons, some information goes in ``/tftpboot``, some in ``/var/www/cobbler`` and some must be
167 duplicated in both. This is because PXE needs tftp, and automatic installation and Virt operations need http.
168 Only the kernel and initrd images are duplicated, which is unfortunate, though SELinux won't let me give them
169 two contexts, so symlinks are not a solution. *Otherwise* duplication is minimal.
170 """
171
172 # clean out parts of webdir and all of /tftpboot/images and /tftpboot/pxelinux.cfg
173 for x in os.listdir(self.settings.webdir):
174 path = os.path.join(self.settings.webdir, x)
175 if os.path.isfile(path):
176 if not x.endswith(".py"):
177 utils.rmfile(path, logger=self.logger)
178 if os.path.isdir(path):
179 if x not in self.settings.webdir_whitelist:
180 # delete directories that shouldn't exist
181 utils.rmtree(path, logger=self.logger)
182 if x in ["autoinstall_templates", "autoinstall_templates_sys", "images", "systems", "distros", "profiles", "repo_profile", "repo_system", "rendered"]:
183 # clean out directory contents
184 utils.rmtree_contents(path, logger=self.logger)
185 #
186 self.make_tftpboot()
187 utils.rmtree_contents(self.pxelinux_dir, logger=self.logger)
188 utils.rmtree_contents(self.grub_dir, logger=self.logger)
189 utils.rmtree_contents(self.images_dir, logger=self.logger)
190 utils.rmtree_contents(self.yaboot_bin_dir, logger=self.logger)
191 utils.rmtree_contents(self.yaboot_cfg_dir, logger=self.logger)
192 utils.rmtree_contents(self.rendered_dir, logger=self.logger)
193
194 def write_dhcp(self):
195 """
196 Write all files which are associated to DHCP.
197 """
198 self.logger.info("rendering DHCP files")
199 self.dhcp.write_dhcp_file()
200 self.dhcp.regen_ethers()
201
202 def sync_dhcp(self):
203 """
204 This calls write_dhcp and restarts the DHCP server.
205 """
206 if self.settings.manage_dhcp:
207 self.write_dhcp()
208 self.dhcp.sync_dhcp()
209
210 def clean_link_cache(self):
211 """
212 All files which are linked into the cache will be deleted so the cache can be rebuild.
213 """
214 for dirtree in [os.path.join(self.bootloc, 'images'), self.settings.webdir]:
215 cachedir = os.path.join(dirtree, '.link_cache')
216 if os.path.isdir(cachedir):
217 cmd = "find %s -maxdepth 1 -type f -links 1 -exec rm -f '{}' ';'" % cachedir
218 utils.subprocess_call(self.logger, cmd)
219
220 def rsync_gen(self):
221 """
222 Generate rsync modules of all repositories and distributions
223 """
224 template_file = "/etc/cobbler/rsync.template"
225
226 try:
227 template = open(template_file, "r")
228 except:
229 raise CX("error reading template %s" % template_file)
230
231 template_data = ""
232 template_data = template.read()
233 template.close()
234
235 distros = []
236
237 for link in glob.glob(os.path.join(self.settings.webdir, 'links', '*')):
238 distro = {}
239 distro["path"] = os.path.realpath(link)
240 distro["name"] = os.path.basename(link)
241 distros.append(distro)
242
243 repos = [repo.name for repo in self.api.repos()
244 if os.path.isdir(os.path.join(self.settings.webdir, "repo_mirror", repo.name))]
245
246 metadata = {
247 "date": time.asctime(time.gmtime()),
248 "cobbler_server": self.settings.server,
249 "distros": distros,
250 "repos": repos,
251 "webdir": self.settings.webdir
252 }
253
254 self.templar.render(template_data, metadata, "/etc/rsyncd.conf")
255
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cobbler/actions/sync.py b/cobbler/actions/sync.py
--- a/cobbler/actions/sync.py
+++ b/cobbler/actions/sync.py
@@ -179,7 +179,7 @@
if x not in self.settings.webdir_whitelist:
# delete directories that shouldn't exist
utils.rmtree(path, logger=self.logger)
- if x in ["autoinstall_templates", "autoinstall_templates_sys", "images", "systems", "distros", "profiles", "repo_profile", "repo_system", "rendered"]:
+ if x in ["templates", "images", "systems", "distros", "profiles", "repo_profile", "repo_system", "rendered"]:
# clean out directory contents
utils.rmtree_contents(path, logger=self.logger)
#
|
{"golden_diff": "diff --git a/cobbler/actions/sync.py b/cobbler/actions/sync.py\n--- a/cobbler/actions/sync.py\n+++ b/cobbler/actions/sync.py\n@@ -179,7 +179,7 @@\n if x not in self.settings.webdir_whitelist:\n # delete directories that shouldn't exist\n utils.rmtree(path, logger=self.logger)\n- if x in [\"autoinstall_templates\", \"autoinstall_templates_sys\", \"images\", \"systems\", \"distros\", \"profiles\", \"repo_profile\", \"repo_system\", \"rendered\"]:\n+ if x in [\"templates\", \"images\", \"systems\", \"distros\", \"profiles\", \"repo_profile\", \"repo_system\", \"rendered\"]:\n # clean out directory contents\n utils.rmtree_contents(path, logger=self.logger)\n #\n", "issue": "[Backport] autoinstall_templates are installed into /var/lib/cobbler/templates\n### Original feature issue\r\n\r\n- PR: #2590\r\n\r\n### Target release\r\n\r\n- [ ] release33\r\n- [x] release32\r\n- [ ] release30\r\n\r\n### Reason\r\n\r\nStabilization of Cobbler 3.2.x in Fedora Ecosystem.\r\n\n", "before_files": [{"content": "\"\"\"\nBuilds out filesystem trees/data based on the object tree.\nThis is the code behind 'cobbler sync'.\n\nCopyright 2006-2009, Red Hat, Inc and Others\nMichael DeHaan <michael.dehaan AT gmail>\n\nThis program is free software; you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation; either version 2 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program; if not, write to the Free Software\nFoundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA\n02110-1301 USA\n\"\"\"\n\nimport glob\nimport os\nimport time\n\nfrom cobbler.cexceptions import CX\nfrom cobbler import clogger\nfrom cobbler import templar\nfrom cobbler import tftpgen\nfrom cobbler import utils\n\n\nclass CobblerSync:\n \"\"\"\n Handles conversion of internal state to the tftpboot tree layout\n \"\"\"\n\n def __init__(self, collection_mgr, verbose: bool = True, dhcp=None, dns=None, logger=None, tftpd=None):\n \"\"\"\n Constructor\n\n :param collection_mgr: The collection manager instance which holds all information about cobbler.\n :param verbose: Whether to log the actions performed in this module verbose or not.\n :param dhcp: The DHCP manager which can update the DHCP config.\n :param dns: The DNS manager which can update the DNS config.\n :param logger: The logger to audit all action with.\n :param tftpd: The TFTP manager which can update the TFTP config.\n \"\"\"\n self.logger = logger\n if logger is None:\n self.logger = clogger.Logger()\n\n self.verbose = verbose\n self.collection_mgr = collection_mgr\n self.api = collection_mgr.api\n self.distros = collection_mgr.distros()\n self.profiles = collection_mgr.profiles()\n self.systems = collection_mgr.systems()\n self.settings = collection_mgr.settings()\n self.repos = collection_mgr.repos()\n self.templar = templar.Templar(collection_mgr, self.logger)\n self.tftpgen = tftpgen.TFTPGen(collection_mgr, self.logger)\n self.dns = dns\n self.dhcp = dhcp\n self.tftpd = tftpd\n self.bootloc = self.settings.tftpboot_location\n self.tftpgen.verbose = verbose\n self.dns.verbose = verbose\n self.dhcp.verbose = verbose\n\n self.pxelinux_dir = 
os.path.join(self.bootloc, \"pxelinux.cfg\")\n self.grub_dir = os.path.join(self.bootloc, \"grub\")\n self.images_dir = os.path.join(self.bootloc, \"images\")\n self.yaboot_bin_dir = os.path.join(self.bootloc, \"ppc\")\n self.yaboot_cfg_dir = os.path.join(self.bootloc, \"etc\")\n self.rendered_dir = os.path.join(self.settings.webdir, \"rendered\")\n\n def run(self):\n \"\"\"\n Syncs the current configuration file with the config tree.\n Using the ``Check().run_`` functions previously is recommended\n \"\"\"\n if not os.path.exists(self.bootloc):\n utils.die(self.logger, \"cannot find directory: %s\" % self.bootloc)\n\n self.logger.info(\"running pre-sync triggers\")\n\n # run pre-triggers...\n utils.run_triggers(self.api, None, \"/var/lib/cobbler/triggers/sync/pre/*\")\n\n self.distros = self.collection_mgr.distros()\n self.profiles = self.collection_mgr.profiles()\n self.systems = self.collection_mgr.systems()\n self.settings = self.collection_mgr.settings()\n self.repos = self.collection_mgr.repos()\n\n # execute the core of the sync operation\n self.logger.info(\"cleaning trees\")\n self.clean_trees()\n\n # Have the tftpd module handle copying bootloaders, distros, images, and all_system_files\n self.tftpd.sync(self.verbose)\n # Copy distros to the webdir\n # Adding in the exception handling to not blow up if files have been moved (or the path references an NFS\n # directory that's no longer mounted)\n for d in self.distros:\n try:\n self.logger.info(\"copying files for distro: %s\" % d.name)\n self.tftpgen.copy_single_distro_files(d, self.settings.webdir, True)\n self.tftpgen.write_templates(d, write_file=True)\n except CX as e:\n self.logger.error(e.value)\n\n # make the default pxe menu anyway...\n self.tftpgen.make_pxe_menu()\n\n if self.settings.manage_dhcp:\n self.write_dhcp()\n if self.settings.manage_dns:\n self.logger.info(\"rendering DNS files\")\n self.dns.regen_hosts()\n self.dns.write_dns_files()\n\n if self.settings.manage_tftpd:\n # copy in boot_files\n self.tftpd.write_boot_files()\n\n self.logger.info(\"cleaning link caches\")\n self.clean_link_cache()\n\n if self.settings.manage_rsync:\n self.logger.info(\"rendering Rsync files\")\n self.rsync_gen()\n\n # run post-triggers\n self.logger.info(\"running post-sync triggers\")\n utils.run_triggers(self.api, None, \"/var/lib/cobbler/triggers/sync/post/*\", logger=self.logger)\n utils.run_triggers(self.api, None, \"/var/lib/cobbler/triggers/change/*\", logger=self.logger)\n\n def make_tftpboot(self):\n \"\"\"\n Make directories for tftpboot images\n \"\"\"\n if not os.path.exists(self.pxelinux_dir):\n utils.mkdir(self.pxelinux_dir, logger=self.logger)\n if not os.path.exists(self.grub_dir):\n utils.mkdir(self.grub_dir, logger=self.logger)\n grub_images_link = os.path.join(self.grub_dir, \"images\")\n if not os.path.exists(grub_images_link):\n os.symlink(\"../images\", grub_images_link)\n if not os.path.exists(self.images_dir):\n utils.mkdir(self.images_dir, logger=self.logger)\n if not os.path.exists(self.rendered_dir):\n utils.mkdir(self.rendered_dir, logger=self.logger)\n if not os.path.exists(self.yaboot_bin_dir):\n utils.mkdir(self.yaboot_bin_dir, logger=self.logger)\n if not os.path.exists(self.yaboot_cfg_dir):\n utils.mkdir(self.yaboot_cfg_dir, logger=self.logger)\n\n def clean_trees(self):\n \"\"\"\n Delete any previously built pxelinux.cfg tree and virt tree info and then create directories.\n\n Note: for SELinux reasons, some information goes in ``/tftpboot``, some in ``/var/www/cobbler`` and some must be\n 
duplicated in both. This is because PXE needs tftp, and automatic installation and Virt operations need http.\n Only the kernel and initrd images are duplicated, which is unfortunate, though SELinux won't let me give them\n two contexts, so symlinks are not a solution. *Otherwise* duplication is minimal.\n \"\"\"\n\n # clean out parts of webdir and all of /tftpboot/images and /tftpboot/pxelinux.cfg\n for x in os.listdir(self.settings.webdir):\n path = os.path.join(self.settings.webdir, x)\n if os.path.isfile(path):\n if not x.endswith(\".py\"):\n utils.rmfile(path, logger=self.logger)\n if os.path.isdir(path):\n if x not in self.settings.webdir_whitelist:\n # delete directories that shouldn't exist\n utils.rmtree(path, logger=self.logger)\n if x in [\"autoinstall_templates\", \"autoinstall_templates_sys\", \"images\", \"systems\", \"distros\", \"profiles\", \"repo_profile\", \"repo_system\", \"rendered\"]:\n # clean out directory contents\n utils.rmtree_contents(path, logger=self.logger)\n #\n self.make_tftpboot()\n utils.rmtree_contents(self.pxelinux_dir, logger=self.logger)\n utils.rmtree_contents(self.grub_dir, logger=self.logger)\n utils.rmtree_contents(self.images_dir, logger=self.logger)\n utils.rmtree_contents(self.yaboot_bin_dir, logger=self.logger)\n utils.rmtree_contents(self.yaboot_cfg_dir, logger=self.logger)\n utils.rmtree_contents(self.rendered_dir, logger=self.logger)\n\n def write_dhcp(self):\n \"\"\"\n Write all files which are associated to DHCP.\n \"\"\"\n self.logger.info(\"rendering DHCP files\")\n self.dhcp.write_dhcp_file()\n self.dhcp.regen_ethers()\n\n def sync_dhcp(self):\n \"\"\"\n This calls write_dhcp and restarts the DHCP server.\n \"\"\"\n if self.settings.manage_dhcp:\n self.write_dhcp()\n self.dhcp.sync_dhcp()\n\n def clean_link_cache(self):\n \"\"\"\n All files which are linked into the cache will be deleted so the cache can be rebuild.\n \"\"\"\n for dirtree in [os.path.join(self.bootloc, 'images'), self.settings.webdir]:\n cachedir = os.path.join(dirtree, '.link_cache')\n if os.path.isdir(cachedir):\n cmd = \"find %s -maxdepth 1 -type f -links 1 -exec rm -f '{}' ';'\" % cachedir\n utils.subprocess_call(self.logger, cmd)\n\n def rsync_gen(self):\n \"\"\"\n Generate rsync modules of all repositories and distributions\n \"\"\"\n template_file = \"/etc/cobbler/rsync.template\"\n\n try:\n template = open(template_file, \"r\")\n except:\n raise CX(\"error reading template %s\" % template_file)\n\n template_data = \"\"\n template_data = template.read()\n template.close()\n\n distros = []\n\n for link in glob.glob(os.path.join(self.settings.webdir, 'links', '*')):\n distro = {}\n distro[\"path\"] = os.path.realpath(link)\n distro[\"name\"] = os.path.basename(link)\n distros.append(distro)\n\n repos = [repo.name for repo in self.api.repos()\n if os.path.isdir(os.path.join(self.settings.webdir, \"repo_mirror\", repo.name))]\n\n metadata = {\n \"date\": time.asctime(time.gmtime()),\n \"cobbler_server\": self.settings.server,\n \"distros\": distros,\n \"repos\": repos,\n \"webdir\": self.settings.webdir\n }\n\n self.templar.render(template_data, metadata, \"/etc/rsyncd.conf\")\n", "path": "cobbler/actions/sync.py"}], "after_files": [{"content": "\"\"\"\nBuilds out filesystem trees/data based on the object tree.\nThis is the code behind 'cobbler sync'.\n\nCopyright 2006-2009, Red Hat, Inc and Others\nMichael DeHaan <michael.dehaan AT gmail>\n\nThis program is free software; you can redistribute it and/or modify\nit under the terms of the GNU General Public License as 
published by\nthe Free Software Foundation; either version 2 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program; if not, write to the Free Software\nFoundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA\n02110-1301 USA\n\"\"\"\n\nimport glob\nimport os\nimport time\n\nfrom cobbler.cexceptions import CX\nfrom cobbler import clogger\nfrom cobbler import templar\nfrom cobbler import tftpgen\nfrom cobbler import utils\n\n\nclass CobblerSync:\n \"\"\"\n Handles conversion of internal state to the tftpboot tree layout\n \"\"\"\n\n def __init__(self, collection_mgr, verbose: bool = True, dhcp=None, dns=None, logger=None, tftpd=None):\n \"\"\"\n Constructor\n\n :param collection_mgr: The collection manager instance which holds all information about cobbler.\n :param verbose: Whether to log the actions performed in this module verbose or not.\n :param dhcp: The DHCP manager which can update the DHCP config.\n :param dns: The DNS manager which can update the DNS config.\n :param logger: The logger to audit all action with.\n :param tftpd: The TFTP manager which can update the TFTP config.\n \"\"\"\n self.logger = logger\n if logger is None:\n self.logger = clogger.Logger()\n\n self.verbose = verbose\n self.collection_mgr = collection_mgr\n self.api = collection_mgr.api\n self.distros = collection_mgr.distros()\n self.profiles = collection_mgr.profiles()\n self.systems = collection_mgr.systems()\n self.settings = collection_mgr.settings()\n self.repos = collection_mgr.repos()\n self.templar = templar.Templar(collection_mgr, self.logger)\n self.tftpgen = tftpgen.TFTPGen(collection_mgr, self.logger)\n self.dns = dns\n self.dhcp = dhcp\n self.tftpd = tftpd\n self.bootloc = self.settings.tftpboot_location\n self.tftpgen.verbose = verbose\n self.dns.verbose = verbose\n self.dhcp.verbose = verbose\n\n self.pxelinux_dir = os.path.join(self.bootloc, \"pxelinux.cfg\")\n self.grub_dir = os.path.join(self.bootloc, \"grub\")\n self.images_dir = os.path.join(self.bootloc, \"images\")\n self.yaboot_bin_dir = os.path.join(self.bootloc, \"ppc\")\n self.yaboot_cfg_dir = os.path.join(self.bootloc, \"etc\")\n self.rendered_dir = os.path.join(self.settings.webdir, \"rendered\")\n\n def run(self):\n \"\"\"\n Syncs the current configuration file with the config tree.\n Using the ``Check().run_`` functions previously is recommended\n \"\"\"\n if not os.path.exists(self.bootloc):\n utils.die(self.logger, \"cannot find directory: %s\" % self.bootloc)\n\n self.logger.info(\"running pre-sync triggers\")\n\n # run pre-triggers...\n utils.run_triggers(self.api, None, \"/var/lib/cobbler/triggers/sync/pre/*\")\n\n self.distros = self.collection_mgr.distros()\n self.profiles = self.collection_mgr.profiles()\n self.systems = self.collection_mgr.systems()\n self.settings = self.collection_mgr.settings()\n self.repos = self.collection_mgr.repos()\n\n # execute the core of the sync operation\n self.logger.info(\"cleaning trees\")\n self.clean_trees()\n\n # Have the tftpd module handle copying bootloaders, distros, images, and all_system_files\n self.tftpd.sync(self.verbose)\n # Copy distros to the webdir\n # Adding in the exception handling to not blow up if files have been moved (or the path 
references an NFS\n # directory that's no longer mounted)\n for d in self.distros:\n try:\n self.logger.info(\"copying files for distro: %s\" % d.name)\n self.tftpgen.copy_single_distro_files(d, self.settings.webdir, True)\n self.tftpgen.write_templates(d, write_file=True)\n except CX as e:\n self.logger.error(e.value)\n\n # make the default pxe menu anyway...\n self.tftpgen.make_pxe_menu()\n\n if self.settings.manage_dhcp:\n self.write_dhcp()\n if self.settings.manage_dns:\n self.logger.info(\"rendering DNS files\")\n self.dns.regen_hosts()\n self.dns.write_dns_files()\n\n if self.settings.manage_tftpd:\n # copy in boot_files\n self.tftpd.write_boot_files()\n\n self.logger.info(\"cleaning link caches\")\n self.clean_link_cache()\n\n if self.settings.manage_rsync:\n self.logger.info(\"rendering Rsync files\")\n self.rsync_gen()\n\n # run post-triggers\n self.logger.info(\"running post-sync triggers\")\n utils.run_triggers(self.api, None, \"/var/lib/cobbler/triggers/sync/post/*\", logger=self.logger)\n utils.run_triggers(self.api, None, \"/var/lib/cobbler/triggers/change/*\", logger=self.logger)\n\n def make_tftpboot(self):\n \"\"\"\n Make directories for tftpboot images\n \"\"\"\n if not os.path.exists(self.pxelinux_dir):\n utils.mkdir(self.pxelinux_dir, logger=self.logger)\n if not os.path.exists(self.grub_dir):\n utils.mkdir(self.grub_dir, logger=self.logger)\n grub_images_link = os.path.join(self.grub_dir, \"images\")\n if not os.path.exists(grub_images_link):\n os.symlink(\"../images\", grub_images_link)\n if not os.path.exists(self.images_dir):\n utils.mkdir(self.images_dir, logger=self.logger)\n if not os.path.exists(self.rendered_dir):\n utils.mkdir(self.rendered_dir, logger=self.logger)\n if not os.path.exists(self.yaboot_bin_dir):\n utils.mkdir(self.yaboot_bin_dir, logger=self.logger)\n if not os.path.exists(self.yaboot_cfg_dir):\n utils.mkdir(self.yaboot_cfg_dir, logger=self.logger)\n\n def clean_trees(self):\n \"\"\"\n Delete any previously built pxelinux.cfg tree and virt tree info and then create directories.\n\n Note: for SELinux reasons, some information goes in ``/tftpboot``, some in ``/var/www/cobbler`` and some must be\n duplicated in both. This is because PXE needs tftp, and automatic installation and Virt operations need http.\n Only the kernel and initrd images are duplicated, which is unfortunate, though SELinux won't let me give them\n two contexts, so symlinks are not a solution. 
*Otherwise* duplication is minimal.\n \"\"\"\n\n # clean out parts of webdir and all of /tftpboot/images and /tftpboot/pxelinux.cfg\n for x in os.listdir(self.settings.webdir):\n path = os.path.join(self.settings.webdir, x)\n if os.path.isfile(path):\n if not x.endswith(\".py\"):\n utils.rmfile(path, logger=self.logger)\n if os.path.isdir(path):\n if x not in self.settings.webdir_whitelist:\n # delete directories that shouldn't exist\n utils.rmtree(path, logger=self.logger)\n if x in [\"templates\", \"images\", \"systems\", \"distros\", \"profiles\", \"repo_profile\", \"repo_system\", \"rendered\"]:\n # clean out directory contents\n utils.rmtree_contents(path, logger=self.logger)\n #\n self.make_tftpboot()\n utils.rmtree_contents(self.pxelinux_dir, logger=self.logger)\n utils.rmtree_contents(self.grub_dir, logger=self.logger)\n utils.rmtree_contents(self.images_dir, logger=self.logger)\n utils.rmtree_contents(self.yaboot_bin_dir, logger=self.logger)\n utils.rmtree_contents(self.yaboot_cfg_dir, logger=self.logger)\n utils.rmtree_contents(self.rendered_dir, logger=self.logger)\n\n def write_dhcp(self):\n \"\"\"\n Write all files which are associated to DHCP.\n \"\"\"\n self.logger.info(\"rendering DHCP files\")\n self.dhcp.write_dhcp_file()\n self.dhcp.regen_ethers()\n\n def sync_dhcp(self):\n \"\"\"\n This calls write_dhcp and restarts the DHCP server.\n \"\"\"\n if self.settings.manage_dhcp:\n self.write_dhcp()\n self.dhcp.sync_dhcp()\n\n def clean_link_cache(self):\n \"\"\"\n All files which are linked into the cache will be deleted so the cache can be rebuild.\n \"\"\"\n for dirtree in [os.path.join(self.bootloc, 'images'), self.settings.webdir]:\n cachedir = os.path.join(dirtree, '.link_cache')\n if os.path.isdir(cachedir):\n cmd = \"find %s -maxdepth 1 -type f -links 1 -exec rm -f '{}' ';'\" % cachedir\n utils.subprocess_call(self.logger, cmd)\n\n def rsync_gen(self):\n \"\"\"\n Generate rsync modules of all repositories and distributions\n \"\"\"\n template_file = \"/etc/cobbler/rsync.template\"\n\n try:\n template = open(template_file, \"r\")\n except:\n raise CX(\"error reading template %s\" % template_file)\n\n template_data = \"\"\n template_data = template.read()\n template.close()\n\n distros = []\n\n for link in glob.glob(os.path.join(self.settings.webdir, 'links', '*')):\n distro = {}\n distro[\"path\"] = os.path.realpath(link)\n distro[\"name\"] = os.path.basename(link)\n distros.append(distro)\n\n repos = [repo.name for repo in self.api.repos()\n if os.path.isdir(os.path.join(self.settings.webdir, \"repo_mirror\", repo.name))]\n\n metadata = {\n \"date\": time.asctime(time.gmtime()),\n \"cobbler_server\": self.settings.server,\n \"distros\": distros,\n \"repos\": repos,\n \"webdir\": self.settings.webdir\n }\n\n self.templar.render(template_data, metadata, \"/etc/rsyncd.conf\")\n", "path": "cobbler/actions/sync.py"}]}
| 3,343 | 177 |
gh_patches_debug_21806
|
rasdani/github-patches
|
git_diff
|
psychopy__psychopy-4624
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
404 errors on some of the pages in the HELP menus
Version used
- psychopy: 2021.2.3
https://www.psychopy.org/builder/builder.html
https://www.psychopy.org/api/api.html
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `psychopy/app/urls.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """A central location to store information about urls
5 """
6 urls = dict()
7
8 # links based on string names
9 urls['builder'] = "https://www.psychopy.org/builder/builder.html"
10 urls['builder.loops'] = "https://www.psychopy.org/builder/flow.html#loops"
11 # NB. builder components get their urls defined by the component code
12 # (so a custom component can have a url)
13
14 urls['downloads'] = "https://github.com/psychopy/psychopy/releases"
15 urls['changelog'] = "https://www.psychopy.org/changelog.html"
16
17 general = "https://www.psychopy.org/general/"
18 urls['prefs'] = general + "prefs.html"
19 urls['prefs.general'] = general + "prefs.html#general-settings"
20 urls['prefs.app'] = general + "prefs.html#application-settings"
21 urls['prefs.coder'] = general + "prefs.html#coder-settings"
22 urls['prefs.builder'] = general + "prefs.html#builder-settings"
23 urls['prefs.connections'] = general + "prefs.html#connection-settings"
24
25 # links keyed by wxIDs (e.g. menu item IDs)
26 urls['psychopyHome'] = "https://www.psychopy.org/"
27 urls['psychopyReference'] = "https://www.psychopy.org/api/api.html"
28 urls['coderTutorial'] = "https://www.psychopy.org/coder/tutorial1.html"
29 urls['builderHelp'] = urls['builder']
30 urls['builderDemos'] = "http://code.google.com/p/psychopy/downloads/list?can=2&q=demos"
31 urls['projsAbout'] = "https://www.psychopy.org/general/projects.html"
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/psychopy/app/urls.py b/psychopy/app/urls.py
--- a/psychopy/app/urls.py
+++ b/psychopy/app/urls.py
@@ -6,7 +6,7 @@
urls = dict()
# links based on string names
-urls['builder'] = "https://www.psychopy.org/builder/builder.html"
+urls['builder'] = "https://www.psychopy.org/builder"
urls['builder.loops'] = "https://www.psychopy.org/builder/flow.html#loops"
# NB. builder components get their urls defined by the component code
# (so a custom component can have a url)
@@ -24,7 +24,7 @@
# links keyed by wxIDs (e.g. menu item IDs)
urls['psychopyHome'] = "https://www.psychopy.org/"
-urls['psychopyReference'] = "https://www.psychopy.org/api/api.html"
+urls['psychopyReference'] = "https://www.psychopy.org/api"
urls['coderTutorial'] = "https://www.psychopy.org/coder/tutorial1.html"
urls['builderHelp'] = urls['builder']
urls['builderDemos'] = "http://code.google.com/p/psychopy/downloads/list?can=2&q=demos"
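
A quick way to sanity-check the patched URL table is to request each entry and look at the returned status codes. The snippet below is an illustrative sketch only: it assumes the patched psychopy package is importable, that network access is available, and it uses nothing beyond the standard library.

```python
# Illustrative link check for the entries in psychopy/app/urls.py.
# Assumes the patched package is importable and network access is available.
import urllib.error
import urllib.request

from psychopy.app.urls import urls

for name, url in urls.items():
    if not url.startswith("http"):
        continue  # skip any non-HTTP entries
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(f"{name}: {resp.status} {url}")
    except urllib.error.HTTPError as err:
        # A 404 here would indicate the docs page still does not exist.
        print(f"{name}: {err.code} {url}")
```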
|
{"golden_diff": "diff --git a/psychopy/app/urls.py b/psychopy/app/urls.py\n--- a/psychopy/app/urls.py\n+++ b/psychopy/app/urls.py\n@@ -6,7 +6,7 @@\n urls = dict()\n \n # links based on string names\n-urls['builder'] = \"https://www.psychopy.org/builder/builder.html\"\n+urls['builder'] = \"https://www.psychopy.org/builder\"\n urls['builder.loops'] = \"https://www.psychopy.org/builder/flow.html#loops\"\n # NB. builder components get their urls defined by the component code\n # (so a custom component can have a url)\n@@ -24,7 +24,7 @@\n \n # links keyed by wxIDs (e.g. menu item IDs)\n urls['psychopyHome'] = \"https://www.psychopy.org/\"\n-urls['psychopyReference'] = \"https://www.psychopy.org/api/api.html\"\n+urls['psychopyReference'] = \"https://www.psychopy.org/api\"\n urls['coderTutorial'] = \"https://www.psychopy.org/coder/tutorial1.html\"\n urls['builderHelp'] = urls['builder']\n urls['builderDemos'] = \"http://code.google.com/p/psychopy/downloads/list?can=2&q=demos\"\n", "issue": "404 errors on some of the page in the HELP menus\nVersion used\r\n- psychopy: 2021.2.3\r\n\r\nhttps://www.psychopy.org/builder/builder.html\r\nhttps://www.psychopy.org/api/api.html\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"A central location to store information about urls\n\"\"\"\nurls = dict()\n\n# links based on string names\nurls['builder'] = \"https://www.psychopy.org/builder/builder.html\"\nurls['builder.loops'] = \"https://www.psychopy.org/builder/flow.html#loops\"\n# NB. builder components get their urls defined by the component code\n# (so a custom component can have a url)\n\nurls['downloads'] = \"https://github.com/psychopy/psychopy/releases\"\nurls['changelog'] = \"https://www.psychopy.org/changelog.html\"\n\ngeneral = \"https://www.psychopy.org/general/\"\nurls['prefs'] = general + \"prefs.html\"\nurls['prefs.general'] = general + \"prefs.html#general-settings\"\nurls['prefs.app'] = general + \"prefs.html#application-settings\"\nurls['prefs.coder'] = general + \"prefs.html#coder-settings\"\nurls['prefs.builder'] = general + \"prefs.html#builder-settings\"\nurls['prefs.connections'] = general + \"prefs.html#connection-settings\"\n\n# links keyed by wxIDs (e.g. menu item IDs)\nurls['psychopyHome'] = \"https://www.psychopy.org/\"\nurls['psychopyReference'] = \"https://www.psychopy.org/api/api.html\"\nurls['coderTutorial'] = \"https://www.psychopy.org/coder/tutorial1.html\"\nurls['builderHelp'] = urls['builder']\nurls['builderDemos'] = \"http://code.google.com/p/psychopy/downloads/list?can=2&q=demos\"\nurls['projsAbout'] = \"https://www.psychopy.org/general/projects.html\"\n", "path": "psychopy/app/urls.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"A central location to store information about urls\n\"\"\"\nurls = dict()\n\n# links based on string names\nurls['builder'] = \"https://www.psychopy.org/builder\"\nurls['builder.loops'] = \"https://www.psychopy.org/builder/flow.html#loops\"\n# NB. 
builder components get their urls defined by the component code\n# (so a custom component can have a url)\n\nurls['downloads'] = \"https://github.com/psychopy/psychopy/releases\"\nurls['changelog'] = \"https://www.psychopy.org/changelog.html\"\n\ngeneral = \"https://www.psychopy.org/general/\"\nurls['prefs'] = general + \"prefs.html\"\nurls['prefs.general'] = general + \"prefs.html#general-settings\"\nurls['prefs.app'] = general + \"prefs.html#application-settings\"\nurls['prefs.coder'] = general + \"prefs.html#coder-settings\"\nurls['prefs.builder'] = general + \"prefs.html#builder-settings\"\nurls['prefs.connections'] = general + \"prefs.html#connection-settings\"\n\n# links keyed by wxIDs (e.g. menu item IDs)\nurls['psychopyHome'] = \"https://www.psychopy.org/\"\nurls['psychopyReference'] = \"https://www.psychopy.org/api\"\nurls['coderTutorial'] = \"https://www.psychopy.org/coder/tutorial1.html\"\nurls['builderHelp'] = urls['builder']\nurls['builderDemos'] = \"http://code.google.com/p/psychopy/downloads/list?can=2&q=demos\"\nurls['projsAbout'] = \"https://www.psychopy.org/general/projects.html\"\n", "path": "psychopy/app/urls.py"}]}
| 739 | 285 |
gh_patches_debug_43307
|
rasdani/github-patches
|
git_diff
|
crytic__slither-447
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add "now" to the timestamp detector
https://github.com/crytic/slither/blob/7cb6cf4870036f780088fa7dfec83ae3220322e2/slither/detectors/operations/block_timestamp.py#L39-L44
This could also warn about the use of `now`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `slither/detectors/operations/block_timestamp.py`
Content:
```
1 """
2 Module detecting dangerous use of block.timestamp
3
4 """
5 from slither.core.declarations import Function
6 from slither.analyses.data_dependency.data_dependency import is_tainted, is_dependent
7 from slither.core.declarations.solidity_variables import (SolidityFunction,
8 SolidityVariableComposed)
9 from slither.detectors.abstract_detector import (AbstractDetector,
10 DetectorClassification)
11 from slither.slithir.operations import Binary, BinaryType
12
13
14 class Timestamp(AbstractDetector):
15 """
16 """
17
18 ARGUMENT = 'timestamp'
19 HELP = 'Dangerous usage of `block.timestamp`'
20 IMPACT = DetectorClassification.LOW
21 CONFIDENCE = DetectorClassification.MEDIUM
22
23 WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#block-timestamp'
24
25
26 WIKI_TITLE = 'Block timestamp'
27 WIKI_DESCRIPTION = 'Dangerous usage of `block.timestamp`. `block.timestamp` can be manipulated by miners.'
28 WIKI_EXPLOIT_SCENARIO = '''"Bob's contract relies on `block.timestamp` for its randomness. Eve is a miner and manipulates `block.timestamp` to exploit Bob's contract.'''
29 WIKI_RECOMMENDATION = 'Avoid relying on `block.timestamp`.'
30
31 def timestamp(self, func):
32 """
33 """
34
35 ret = set()
36 for node in func.nodes:
37 if node.contains_require_or_assert():
38 for var in node.variables_read:
39 if is_dependent(var, SolidityVariableComposed('block.timestamp'), func.contract):
40 ret.add(node)
41 for ir in node.irs:
42 if isinstance(ir, Binary) and BinaryType.return_bool(ir.type):
43 for var in ir.read:
44 if is_dependent(var, SolidityVariableComposed('block.timestamp'), func.contract):
45 ret.add(node)
46 return list(ret)
47
48
49 def detect_dangerous_timestamp(self, contract):
50 """
51 Args:
52 contract (Contract)
53 Returns:
54 list((Function), (list (Node)))
55 """
56 ret = []
57 for f in [f for f in contract.functions if f.contract_declarer == contract]:
58 nodes = self.timestamp(f)
59 if nodes:
60 ret.append((f, nodes))
61 return ret
62
63 def _detect(self):
64 """
65 """
66 results = []
67
68 for c in self.contracts:
69 dangerous_timestamp = self.detect_dangerous_timestamp(c)
70 for (func, nodes) in dangerous_timestamp:
71
72 info = [func, " uses timestamp for comparisons\n"]
73
74 info += ['\tDangerous comparisons:\n']
75 for node in nodes:
76 info += ['\t- ', node, '\n']
77
78 res = self.generate_result(info)
79
80 results.append(res)
81
82 return results
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/slither/detectors/operations/block_timestamp.py b/slither/detectors/operations/block_timestamp.py
--- a/slither/detectors/operations/block_timestamp.py
+++ b/slither/detectors/operations/block_timestamp.py
@@ -2,15 +2,51 @@
Module detecting dangerous use of block.timestamp
"""
-from slither.core.declarations import Function
-from slither.analyses.data_dependency.data_dependency import is_tainted, is_dependent
-from slither.core.declarations.solidity_variables import (SolidityFunction,
- SolidityVariableComposed)
+from typing import List, Tuple
+
+from slither.analyses.data_dependency.data_dependency import is_dependent
+from slither.core.cfg.node import Node
+from slither.core.declarations import Function, Contract
+from slither.core.declarations.solidity_variables import (SolidityVariableComposed, SolidityVariable)
from slither.detectors.abstract_detector import (AbstractDetector,
DetectorClassification)
from slither.slithir.operations import Binary, BinaryType
+def _timestamp(func: Function) -> List[Node]:
+ ret = set()
+ for node in func.nodes:
+ if node.contains_require_or_assert():
+ for var in node.variables_read:
+ if is_dependent(var, SolidityVariableComposed('block.timestamp'), func.contract):
+ ret.add(node)
+ if is_dependent(var, SolidityVariable('now'), func.contract):
+ ret.add(node)
+ for ir in node.irs:
+ if isinstance(ir, Binary) and BinaryType.return_bool(ir.type):
+ for var in ir.read:
+ if is_dependent(var, SolidityVariableComposed('block.timestamp'), func.contract):
+ ret.add(node)
+ if is_dependent(var, SolidityVariable('now'), func.contract):
+ ret.add(node)
+ return list(ret)
+
+
+def _detect_dangerous_timestamp(contract: Contract) -> List[Tuple[Function, List[Node]]]:
+ """
+ Args:
+ contract (Contract)
+ Returns:
+ list((Function), (list (Node)))
+ """
+ ret = []
+ for f in [f for f in contract.functions if f.contract_declarer == contract]:
+ nodes = _timestamp(f)
+ if nodes:
+ ret.append((f, nodes))
+ return ret
+
+
class Timestamp(AbstractDetector):
"""
"""
@@ -22,51 +58,18 @@
WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#block-timestamp'
-
WIKI_TITLE = 'Block timestamp'
WIKI_DESCRIPTION = 'Dangerous usage of `block.timestamp`. `block.timestamp` can be manipulated by miners.'
WIKI_EXPLOIT_SCENARIO = '''"Bob's contract relies on `block.timestamp` for its randomness. Eve is a miner and manipulates `block.timestamp` to exploit Bob's contract.'''
WIKI_RECOMMENDATION = 'Avoid relying on `block.timestamp`.'
- def timestamp(self, func):
- """
- """
-
- ret = set()
- for node in func.nodes:
- if node.contains_require_or_assert():
- for var in node.variables_read:
- if is_dependent(var, SolidityVariableComposed('block.timestamp'), func.contract):
- ret.add(node)
- for ir in node.irs:
- if isinstance(ir, Binary) and BinaryType.return_bool(ir.type):
- for var in ir.read:
- if is_dependent(var, SolidityVariableComposed('block.timestamp'), func.contract):
- ret.add(node)
- return list(ret)
-
-
- def detect_dangerous_timestamp(self, contract):
- """
- Args:
- contract (Contract)
- Returns:
- list((Function), (list (Node)))
- """
- ret = []
- for f in [f for f in contract.functions if f.contract_declarer == contract]:
- nodes = self.timestamp(f)
- if nodes:
- ret.append((f, nodes))
- return ret
-
def _detect(self):
"""
"""
results = []
for c in self.contracts:
- dangerous_timestamp = self.detect_dangerous_timestamp(c)
+ dangerous_timestamp = _detect_dangerous_timestamp(c)
for (func, nodes) in dangerous_timestamp:
info = [func, " uses timestamp for comparisons\n"]
|
{"golden_diff": "diff --git a/slither/detectors/operations/block_timestamp.py b/slither/detectors/operations/block_timestamp.py\n--- a/slither/detectors/operations/block_timestamp.py\n+++ b/slither/detectors/operations/block_timestamp.py\n@@ -2,15 +2,51 @@\n Module detecting dangerous use of block.timestamp\n \n \"\"\"\n-from slither.core.declarations import Function\n-from slither.analyses.data_dependency.data_dependency import is_tainted, is_dependent\n-from slither.core.declarations.solidity_variables import (SolidityFunction,\n- SolidityVariableComposed)\n+from typing import List, Tuple\n+\n+from slither.analyses.data_dependency.data_dependency import is_dependent\n+from slither.core.cfg.node import Node\n+from slither.core.declarations import Function, Contract\n+from slither.core.declarations.solidity_variables import (SolidityVariableComposed, SolidityVariable)\n from slither.detectors.abstract_detector import (AbstractDetector,\n DetectorClassification)\n from slither.slithir.operations import Binary, BinaryType\n \n \n+def _timestamp(func: Function) -> List[Node]:\n+ ret = set()\n+ for node in func.nodes:\n+ if node.contains_require_or_assert():\n+ for var in node.variables_read:\n+ if is_dependent(var, SolidityVariableComposed('block.timestamp'), func.contract):\n+ ret.add(node)\n+ if is_dependent(var, SolidityVariable('now'), func.contract):\n+ ret.add(node)\n+ for ir in node.irs:\n+ if isinstance(ir, Binary) and BinaryType.return_bool(ir.type):\n+ for var in ir.read:\n+ if is_dependent(var, SolidityVariableComposed('block.timestamp'), func.contract):\n+ ret.add(node)\n+ if is_dependent(var, SolidityVariable('now'), func.contract):\n+ ret.add(node)\n+ return list(ret)\n+\n+\n+def _detect_dangerous_timestamp(contract: Contract) -> List[Tuple[Function, List[Node]]]:\n+ \"\"\"\n+ Args:\n+ contract (Contract)\n+ Returns:\n+ list((Function), (list (Node)))\n+ \"\"\"\n+ ret = []\n+ for f in [f for f in contract.functions if f.contract_declarer == contract]:\n+ nodes = _timestamp(f)\n+ if nodes:\n+ ret.append((f, nodes))\n+ return ret\n+\n+\n class Timestamp(AbstractDetector):\n \"\"\"\n \"\"\"\n@@ -22,51 +58,18 @@\n \n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#block-timestamp'\n \n-\n WIKI_TITLE = 'Block timestamp'\n WIKI_DESCRIPTION = 'Dangerous usage of `block.timestamp`. `block.timestamp` can be manipulated by miners.'\n WIKI_EXPLOIT_SCENARIO = '''\"Bob's contract relies on `block.timestamp` for its randomness. 
Eve is a miner and manipulates `block.timestamp` to exploit Bob's contract.'''\n WIKI_RECOMMENDATION = 'Avoid relying on `block.timestamp`.'\n \n- def timestamp(self, func):\n- \"\"\"\n- \"\"\"\n-\n- ret = set()\n- for node in func.nodes:\n- if node.contains_require_or_assert():\n- for var in node.variables_read:\n- if is_dependent(var, SolidityVariableComposed('block.timestamp'), func.contract):\n- ret.add(node)\n- for ir in node.irs:\n- if isinstance(ir, Binary) and BinaryType.return_bool(ir.type):\n- for var in ir.read:\n- if is_dependent(var, SolidityVariableComposed('block.timestamp'), func.contract):\n- ret.add(node)\n- return list(ret)\n-\n-\n- def detect_dangerous_timestamp(self, contract):\n- \"\"\"\n- Args:\n- contract (Contract)\n- Returns:\n- list((Function), (list (Node)))\n- \"\"\"\n- ret = []\n- for f in [f for f in contract.functions if f.contract_declarer == contract]:\n- nodes = self.timestamp(f)\n- if nodes:\n- ret.append((f, nodes))\n- return ret\n-\n def _detect(self):\n \"\"\"\n \"\"\"\n results = []\n \n for c in self.contracts:\n- dangerous_timestamp = self.detect_dangerous_timestamp(c)\n+ dangerous_timestamp = _detect_dangerous_timestamp(c)\n for (func, nodes) in dangerous_timestamp:\n \n info = [func, \" uses timestamp for comparisons\\n\"]\n", "issue": "Add \"now\" to the timestamp detector\nhttps://github.com/crytic/slither/blob/7cb6cf4870036f780088fa7dfec83ae3220322e2/slither/detectors/operations/block_timestamp.py#L39-L44\r\n\r\nThis could also warns about the use of `now`\n", "before_files": [{"content": "\"\"\"\n Module detecting dangerous use of block.timestamp\n\n\"\"\"\nfrom slither.core.declarations import Function\nfrom slither.analyses.data_dependency.data_dependency import is_tainted, is_dependent\nfrom slither.core.declarations.solidity_variables import (SolidityFunction,\n SolidityVariableComposed)\nfrom slither.detectors.abstract_detector import (AbstractDetector,\n DetectorClassification)\nfrom slither.slithir.operations import Binary, BinaryType\n\n\nclass Timestamp(AbstractDetector):\n \"\"\"\n \"\"\"\n\n ARGUMENT = 'timestamp'\n HELP = 'Dangerous usage of `block.timestamp`'\n IMPACT = DetectorClassification.LOW\n CONFIDENCE = DetectorClassification.MEDIUM\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#block-timestamp'\n\n\n WIKI_TITLE = 'Block timestamp'\n WIKI_DESCRIPTION = 'Dangerous usage of `block.timestamp`. `block.timestamp` can be manipulated by miners.'\n WIKI_EXPLOIT_SCENARIO = '''\"Bob's contract relies on `block.timestamp` for its randomness. 
Eve is a miner and manipulates `block.timestamp` to exploit Bob's contract.'''\n WIKI_RECOMMENDATION = 'Avoid relying on `block.timestamp`.'\n\n def timestamp(self, func):\n \"\"\"\n \"\"\"\n\n ret = set()\n for node in func.nodes:\n if node.contains_require_or_assert():\n for var in node.variables_read:\n if is_dependent(var, SolidityVariableComposed('block.timestamp'), func.contract):\n ret.add(node)\n for ir in node.irs:\n if isinstance(ir, Binary) and BinaryType.return_bool(ir.type):\n for var in ir.read:\n if is_dependent(var, SolidityVariableComposed('block.timestamp'), func.contract):\n ret.add(node)\n return list(ret)\n\n\n def detect_dangerous_timestamp(self, contract):\n \"\"\"\n Args:\n contract (Contract)\n Returns:\n list((Function), (list (Node)))\n \"\"\"\n ret = []\n for f in [f for f in contract.functions if f.contract_declarer == contract]:\n nodes = self.timestamp(f)\n if nodes:\n ret.append((f, nodes))\n return ret\n\n def _detect(self):\n \"\"\"\n \"\"\"\n results = []\n\n for c in self.contracts:\n dangerous_timestamp = self.detect_dangerous_timestamp(c)\n for (func, nodes) in dangerous_timestamp:\n\n info = [func, \" uses timestamp for comparisons\\n\"]\n\n info += ['\\tDangerous comparisons:\\n']\n for node in nodes:\n info += ['\\t- ', node, '\\n']\n\n res = self.generate_result(info)\n\n results.append(res)\n\n return results\n", "path": "slither/detectors/operations/block_timestamp.py"}], "after_files": [{"content": "\"\"\"\n Module detecting dangerous use of block.timestamp\n\n\"\"\"\nfrom typing import List, Tuple\n\nfrom slither.analyses.data_dependency.data_dependency import is_dependent\nfrom slither.core.cfg.node import Node\nfrom slither.core.declarations import Function, Contract\nfrom slither.core.declarations.solidity_variables import (SolidityVariableComposed, SolidityVariable)\nfrom slither.detectors.abstract_detector import (AbstractDetector,\n DetectorClassification)\nfrom slither.slithir.operations import Binary, BinaryType\n\n\ndef _timestamp(func: Function) -> List[Node]:\n ret = set()\n for node in func.nodes:\n if node.contains_require_or_assert():\n for var in node.variables_read:\n if is_dependent(var, SolidityVariableComposed('block.timestamp'), func.contract):\n ret.add(node)\n if is_dependent(var, SolidityVariable('now'), func.contract):\n ret.add(node)\n for ir in node.irs:\n if isinstance(ir, Binary) and BinaryType.return_bool(ir.type):\n for var in ir.read:\n if is_dependent(var, SolidityVariableComposed('block.timestamp'), func.contract):\n ret.add(node)\n if is_dependent(var, SolidityVariable('now'), func.contract):\n ret.add(node)\n return list(ret)\n\n\ndef _detect_dangerous_timestamp(contract: Contract) -> List[Tuple[Function, List[Node]]]:\n \"\"\"\n Args:\n contract (Contract)\n Returns:\n list((Function), (list (Node)))\n \"\"\"\n ret = []\n for f in [f for f in contract.functions if f.contract_declarer == contract]:\n nodes = _timestamp(f)\n if nodes:\n ret.append((f, nodes))\n return ret\n\n\nclass Timestamp(AbstractDetector):\n \"\"\"\n \"\"\"\n\n ARGUMENT = 'timestamp'\n HELP = 'Dangerous usage of `block.timestamp`'\n IMPACT = DetectorClassification.LOW\n CONFIDENCE = DetectorClassification.MEDIUM\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#block-timestamp'\n\n WIKI_TITLE = 'Block timestamp'\n WIKI_DESCRIPTION = 'Dangerous usage of `block.timestamp`. `block.timestamp` can be manipulated by miners.'\n WIKI_EXPLOIT_SCENARIO = '''\"Bob's contract relies on `block.timestamp` for its randomness. 
Eve is a miner and manipulates `block.timestamp` to exploit Bob's contract.'''\n WIKI_RECOMMENDATION = 'Avoid relying on `block.timestamp`.'\n\n def _detect(self):\n \"\"\"\n \"\"\"\n results = []\n\n for c in self.contracts:\n dangerous_timestamp = _detect_dangerous_timestamp(c)\n for (func, nodes) in dangerous_timestamp:\n\n info = [func, \" uses timestamp for comparisons\\n\"]\n\n info += ['\\tDangerous comparisons:\\n']\n for node in nodes:\n info += ['\\t- ', node, '\\n']\n\n res = self.generate_result(info)\n\n results.append(res)\n\n return results\n", "path": "slither/detectors/operations/block_timestamp.py"}]}
| 1,093 | 979 |
gh_patches_debug_9266
|
rasdani/github-patches
|
git_diff
|
encode__starlette-1397
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
JSONResponse Wrong Content-Length Value
### Describe the bug
`JSONResponse` populates the response header with a **non-zero** `Content-Length` whenever the response object is instantiated **without any content**.
### Checklist
- [X] The bug is reproducible against the latest release and/or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### To reproduce
```python
from starlette.datastructures import MutableHeaders
from starlette.responses import JSONResponse
r1 = JSONResponse('')
r1.headers
r2 = JSONResponse()
r2.headers
```
### Expected behavior
```python
( r1.headers == r2.headers
and
r1.headers == MutableHeaders({'content-length': '0', 'content-type': 'application/json'}) )
```
### Actual behavior
```python
( r1.headers != r2.headers
and
r1.headers == MutableHeaders({'content-length': '2', 'content-type': 'application/json'})
and
r2.headers == MutableHeaders({'content-length': '4', 'content-type': 'application/json'}))
```
### Debugging material
All that is needed is the currently stable release of `Starlette`, and the trusty `terminal`.
### Environment
- OS: same on all of Linux/Windows/macOS
- Python version: 3.7.x
- Starlette version: 0.14.x
### Additional context
I was trying to adhere to the HTTP ref spec when constructing responses to the OPTIONS and HEAD methods.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/responses.py`
Content:
```
1 import http.cookies
2 import json
3 import os
4 import stat
5 import sys
6 import typing
7 from email.utils import formatdate
8 from functools import partial
9 from mimetypes import guess_type as mimetypes_guess_type
10 from urllib.parse import quote
11
12 import anyio
13
14 from starlette._compat import md5_hexdigest
15 from starlette.background import BackgroundTask
16 from starlette.concurrency import iterate_in_threadpool
17 from starlette.datastructures import URL, MutableHeaders
18 from starlette.types import Receive, Scope, Send
19
20 # Workaround for adding samesite support to pre 3.8 python
21 http.cookies.Morsel._reserved["samesite"] = "SameSite" # type: ignore
22
23
24 # Compatibility wrapper for `mimetypes.guess_type` to support `os.PathLike` on <py3.8
25 def guess_type(
26 url: typing.Union[str, "os.PathLike[str]"], strict: bool = True
27 ) -> typing.Tuple[typing.Optional[str], typing.Optional[str]]:
28 if sys.version_info < (3, 8): # pragma: no cover
29 url = os.fspath(url)
30 return mimetypes_guess_type(url, strict)
31
32
33 class Response:
34 media_type = None
35 charset = "utf-8"
36
37 def __init__(
38 self,
39 content: typing.Any = None,
40 status_code: int = 200,
41 headers: dict = None,
42 media_type: str = None,
43 background: BackgroundTask = None,
44 ) -> None:
45 self.status_code = status_code
46 if media_type is not None:
47 self.media_type = media_type
48 self.background = background
49 self.body = self.render(content)
50 self.init_headers(headers)
51
52 def render(self, content: typing.Any) -> bytes:
53 if content is None:
54 return b""
55 if isinstance(content, bytes):
56 return content
57 return content.encode(self.charset)
58
59 def init_headers(self, headers: typing.Mapping[str, str] = None) -> None:
60 if headers is None:
61 raw_headers: typing.List[typing.Tuple[bytes, bytes]] = []
62 populate_content_length = True
63 populate_content_type = True
64 else:
65 raw_headers = [
66 (k.lower().encode("latin-1"), v.encode("latin-1"))
67 for k, v in headers.items()
68 ]
69 keys = [h[0] for h in raw_headers]
70 populate_content_length = b"content-length" not in keys
71 populate_content_type = b"content-type" not in keys
72
73 body = getattr(self, "body", None)
74 if body is not None and populate_content_length:
75 content_length = str(len(body))
76 raw_headers.append((b"content-length", content_length.encode("latin-1")))
77
78 content_type = self.media_type
79 if content_type is not None and populate_content_type:
80 if content_type.startswith("text/"):
81 content_type += "; charset=" + self.charset
82 raw_headers.append((b"content-type", content_type.encode("latin-1")))
83
84 self.raw_headers = raw_headers
85
86 @property
87 def headers(self) -> MutableHeaders:
88 if not hasattr(self, "_headers"):
89 self._headers = MutableHeaders(raw=self.raw_headers)
90 return self._headers
91
92 def set_cookie(
93 self,
94 key: str,
95 value: str = "",
96 max_age: int = None,
97 expires: int = None,
98 path: str = "/",
99 domain: str = None,
100 secure: bool = False,
101 httponly: bool = False,
102 samesite: str = "lax",
103 ) -> None:
104 cookie: http.cookies.BaseCookie = http.cookies.SimpleCookie()
105 cookie[key] = value
106 if max_age is not None:
107 cookie[key]["max-age"] = max_age
108 if expires is not None:
109 cookie[key]["expires"] = expires
110 if path is not None:
111 cookie[key]["path"] = path
112 if domain is not None:
113 cookie[key]["domain"] = domain
114 if secure:
115 cookie[key]["secure"] = True
116 if httponly:
117 cookie[key]["httponly"] = True
118 if samesite is not None:
119 assert samesite.lower() in [
120 "strict",
121 "lax",
122 "none",
123 ], "samesite must be either 'strict', 'lax' or 'none'"
124 cookie[key]["samesite"] = samesite
125 cookie_val = cookie.output(header="").strip()
126 self.raw_headers.append((b"set-cookie", cookie_val.encode("latin-1")))
127
128 def delete_cookie(
129 self,
130 key: str,
131 path: str = "/",
132 domain: str = None,
133 secure: bool = False,
134 httponly: bool = False,
135 samesite: str = "lax",
136 ) -> None:
137 self.set_cookie(
138 key,
139 max_age=0,
140 expires=0,
141 path=path,
142 domain=domain,
143 secure=secure,
144 httponly=httponly,
145 samesite=samesite,
146 )
147
148 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
149 await send(
150 {
151 "type": "http.response.start",
152 "status": self.status_code,
153 "headers": self.raw_headers,
154 }
155 )
156 await send({"type": "http.response.body", "body": self.body})
157
158 if self.background is not None:
159 await self.background()
160
161
162 class HTMLResponse(Response):
163 media_type = "text/html"
164
165
166 class PlainTextResponse(Response):
167 media_type = "text/plain"
168
169
170 class JSONResponse(Response):
171 media_type = "application/json"
172
173 def render(self, content: typing.Any) -> bytes:
174 return json.dumps(
175 content,
176 ensure_ascii=False,
177 allow_nan=False,
178 indent=None,
179 separators=(",", ":"),
180 ).encode("utf-8")
181
182
183 class RedirectResponse(Response):
184 def __init__(
185 self,
186 url: typing.Union[str, URL],
187 status_code: int = 307,
188 headers: dict = None,
189 background: BackgroundTask = None,
190 ) -> None:
191 super().__init__(
192 content=b"", status_code=status_code, headers=headers, background=background
193 )
194 self.headers["location"] = quote(str(url), safe=":/%#?=@[]!$&'()*+,;")
195
196
197 class StreamingResponse(Response):
198 def __init__(
199 self,
200 content: typing.Any,
201 status_code: int = 200,
202 headers: dict = None,
203 media_type: str = None,
204 background: BackgroundTask = None,
205 ) -> None:
206 if isinstance(content, typing.AsyncIterable):
207 self.body_iterator = content
208 else:
209 self.body_iterator = iterate_in_threadpool(content)
210 self.status_code = status_code
211 self.media_type = self.media_type if media_type is None else media_type
212 self.background = background
213 self.init_headers(headers)
214
215 async def listen_for_disconnect(self, receive: Receive) -> None:
216 while True:
217 message = await receive()
218 if message["type"] == "http.disconnect":
219 break
220
221 async def stream_response(self, send: Send) -> None:
222 await send(
223 {
224 "type": "http.response.start",
225 "status": self.status_code,
226 "headers": self.raw_headers,
227 }
228 )
229 async for chunk in self.body_iterator:
230 if not isinstance(chunk, bytes):
231 chunk = chunk.encode(self.charset)
232 await send({"type": "http.response.body", "body": chunk, "more_body": True})
233
234 await send({"type": "http.response.body", "body": b"", "more_body": False})
235
236 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
237 async with anyio.create_task_group() as task_group:
238
239 async def wrap(func: typing.Callable[[], typing.Coroutine]) -> None:
240 await func()
241 task_group.cancel_scope.cancel()
242
243 task_group.start_soon(wrap, partial(self.stream_response, send))
244 await wrap(partial(self.listen_for_disconnect, receive))
245
246 if self.background is not None:
247 await self.background()
248
249
250 class FileResponse(Response):
251 chunk_size = 64 * 1024
252
253 def __init__(
254 self,
255 path: typing.Union[str, "os.PathLike[str]"],
256 status_code: int = 200,
257 headers: dict = None,
258 media_type: str = None,
259 background: BackgroundTask = None,
260 filename: str = None,
261 stat_result: os.stat_result = None,
262 method: str = None,
263 ) -> None:
264 self.path = path
265 self.status_code = status_code
266 self.filename = filename
267 self.send_header_only = method is not None and method.upper() == "HEAD"
268 if media_type is None:
269 media_type = guess_type(filename or path)[0] or "text/plain"
270 self.media_type = media_type
271 self.background = background
272 self.init_headers(headers)
273 if self.filename is not None:
274 content_disposition_filename = quote(self.filename)
275 if content_disposition_filename != self.filename:
276 content_disposition = "attachment; filename*=utf-8''{}".format(
277 content_disposition_filename
278 )
279 else:
280 content_disposition = f'attachment; filename="{self.filename}"'
281 self.headers.setdefault("content-disposition", content_disposition)
282 self.stat_result = stat_result
283 if stat_result is not None:
284 self.set_stat_headers(stat_result)
285
286 def set_stat_headers(self, stat_result: os.stat_result) -> None:
287 content_length = str(stat_result.st_size)
288 last_modified = formatdate(stat_result.st_mtime, usegmt=True)
289 etag_base = str(stat_result.st_mtime) + "-" + str(stat_result.st_size)
290 etag = md5_hexdigest(etag_base.encode(), usedforsecurity=False)
291
292 self.headers.setdefault("content-length", content_length)
293 self.headers.setdefault("last-modified", last_modified)
294 self.headers.setdefault("etag", etag)
295
296 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
297 if self.stat_result is None:
298 try:
299 stat_result = await anyio.to_thread.run_sync(os.stat, self.path)
300 self.set_stat_headers(stat_result)
301 except FileNotFoundError:
302 raise RuntimeError(f"File at path {self.path} does not exist.")
303 else:
304 mode = stat_result.st_mode
305 if not stat.S_ISREG(mode):
306 raise RuntimeError(f"File at path {self.path} is not a file.")
307 await send(
308 {
309 "type": "http.response.start",
310 "status": self.status_code,
311 "headers": self.raw_headers,
312 }
313 )
314 if self.send_header_only:
315 await send({"type": "http.response.body", "body": b"", "more_body": False})
316 else:
317 async with await anyio.open_file(self.path, mode="rb") as file:
318 more_body = True
319 while more_body:
320 chunk = await file.read(self.chunk_size)
321 more_body = len(chunk) == self.chunk_size
322 await send(
323 {
324 "type": "http.response.body",
325 "body": chunk,
326 "more_body": more_body,
327 }
328 )
329 if self.background is not None:
330 await self.background()
331
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/starlette/responses.py b/starlette/responses.py
--- a/starlette/responses.py
+++ b/starlette/responses.py
@@ -71,7 +71,11 @@
populate_content_type = b"content-type" not in keys
body = getattr(self, "body", None)
- if body is not None and populate_content_length:
+ if (
+ body is not None
+ and populate_content_length
+ and not (self.status_code < 200 or self.status_code in (204, 304))
+ ):
content_length = str(len(body))
raw_headers.append((b"content-length", content_length.encode("latin-1")))
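
A small check of the patched behaviour, assuming the patched starlette is importable: status codes that forbid a body (1xx, 204, 304) no longer receive a Content-Length header, while an empty-content 200 JSONResponse still reports the length of its rendered body, which is the `'2'` the issue describes.

```python
# Sketch of the header behaviour after the patch (assumes the patched
# starlette is on the import path).
from starlette.responses import JSONResponse, Response

r_204 = Response(status_code=204)
assert "content-length" not in r_204.headers  # body-less status: header omitted

r_200 = JSONResponse('')
# A 200 JSONResponse still reports the length of its rendered body, b'""',
# so the header stays '2' for empty-string content.
assert r_200.headers["content-length"] == "2"
```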
|
{"golden_diff": "diff --git a/starlette/responses.py b/starlette/responses.py\n--- a/starlette/responses.py\n+++ b/starlette/responses.py\n@@ -71,7 +71,11 @@\n populate_content_type = b\"content-type\" not in keys\n \n body = getattr(self, \"body\", None)\n- if body is not None and populate_content_length:\n+ if (\n+ body is not None\n+ and populate_content_length\n+ and not (self.status_code < 200 or self.status_code in (204, 304))\n+ ):\n content_length = str(len(body))\n raw_headers.append((b\"content-length\", content_length.encode(\"latin-1\")))\n", "issue": "JSONResponse Wrong Content-Length Value\n### Describe the bug\r\n\r\n`JSONResponse` populates the response header with a **non-zero** `Content-Length` whenever the response object is instantiated **without any content**.\r\n\r\n### Checklist\r\n\r\n- [X] The bug is reproducible against the latest release and/or `master`.\r\n- [X] There are no similar issues or pull requests to fix it yet.\r\n\r\n### To reproduce\r\n\r\n```python\r\nfrom starlette.datastructures import MutableHeaders\r\nfrom starlette.responses import JSONResponse\r\n\r\nr1 = JSONResponse('')\r\nr1.headers\r\n\r\nr2 = JSONResponse()\r\nr2.headers\r\n```\r\n\r\n### Expected behavior\r\n\r\n```python\r\n( r1.headers == r2.headers\r\n and\r\n r1.headers == MutableHeaders({'content-length': '0', 'content-type': 'application/json'}) )\r\n```\r\n\r\n### Actual behavior\r\n\r\n```python\r\n( r1.headers != r2.headers\r\n and\r\n r1.headers == MutableHeaders({'content-length': '2', 'content-type': 'application/json'})\r\n and\r\n r2.headers == MutableHeaders({'content-length': '4', 'content-type': 'application/json'}))\r\n```\r\n\r\n### Debugging material\r\n\r\nAll that is needed is the currently stable release of `Starlette`, and the trusty `terminal`.\r\n\r\n### Environment\r\n\r\n- OS: same on all of Linux/Windows/macOS\r\n- Python version: 3.7.x\r\n- Starlette version: 0.14.x\r\n\r\n### Additional context\r\n\r\nI was trying to adhere to the HTTP ref spec when constructing responses to the OPTIONS and HEAD methods.\r\n\n", "before_files": [{"content": "import http.cookies\nimport json\nimport os\nimport stat\nimport sys\nimport typing\nfrom email.utils import formatdate\nfrom functools import partial\nfrom mimetypes import guess_type as mimetypes_guess_type\nfrom urllib.parse import quote\n\nimport anyio\n\nfrom starlette._compat import md5_hexdigest\nfrom starlette.background import BackgroundTask\nfrom starlette.concurrency import iterate_in_threadpool\nfrom starlette.datastructures import URL, MutableHeaders\nfrom starlette.types import Receive, Scope, Send\n\n# Workaround for adding samesite support to pre 3.8 python\nhttp.cookies.Morsel._reserved[\"samesite\"] = \"SameSite\" # type: ignore\n\n\n# Compatibility wrapper for `mimetypes.guess_type` to support `os.PathLike` on <py3.8\ndef guess_type(\n url: typing.Union[str, \"os.PathLike[str]\"], strict: bool = True\n) -> typing.Tuple[typing.Optional[str], typing.Optional[str]]:\n if sys.version_info < (3, 8): # pragma: no cover\n url = os.fspath(url)\n return mimetypes_guess_type(url, strict)\n\n\nclass Response:\n media_type = None\n charset = \"utf-8\"\n\n def __init__(\n self,\n content: typing.Any = None,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ) -> None:\n self.status_code = status_code\n if media_type is not None:\n self.media_type = media_type\n self.background = background\n self.body = self.render(content)\n 
self.init_headers(headers)\n\n def render(self, content: typing.Any) -> bytes:\n if content is None:\n return b\"\"\n if isinstance(content, bytes):\n return content\n return content.encode(self.charset)\n\n def init_headers(self, headers: typing.Mapping[str, str] = None) -> None:\n if headers is None:\n raw_headers: typing.List[typing.Tuple[bytes, bytes]] = []\n populate_content_length = True\n populate_content_type = True\n else:\n raw_headers = [\n (k.lower().encode(\"latin-1\"), v.encode(\"latin-1\"))\n for k, v in headers.items()\n ]\n keys = [h[0] for h in raw_headers]\n populate_content_length = b\"content-length\" not in keys\n populate_content_type = b\"content-type\" not in keys\n\n body = getattr(self, \"body\", None)\n if body is not None and populate_content_length:\n content_length = str(len(body))\n raw_headers.append((b\"content-length\", content_length.encode(\"latin-1\")))\n\n content_type = self.media_type\n if content_type is not None and populate_content_type:\n if content_type.startswith(\"text/\"):\n content_type += \"; charset=\" + self.charset\n raw_headers.append((b\"content-type\", content_type.encode(\"latin-1\")))\n\n self.raw_headers = raw_headers\n\n @property\n def headers(self) -> MutableHeaders:\n if not hasattr(self, \"_headers\"):\n self._headers = MutableHeaders(raw=self.raw_headers)\n return self._headers\n\n def set_cookie(\n self,\n key: str,\n value: str = \"\",\n max_age: int = None,\n expires: int = None,\n path: str = \"/\",\n domain: str = None,\n secure: bool = False,\n httponly: bool = False,\n samesite: str = \"lax\",\n ) -> None:\n cookie: http.cookies.BaseCookie = http.cookies.SimpleCookie()\n cookie[key] = value\n if max_age is not None:\n cookie[key][\"max-age\"] = max_age\n if expires is not None:\n cookie[key][\"expires\"] = expires\n if path is not None:\n cookie[key][\"path\"] = path\n if domain is not None:\n cookie[key][\"domain\"] = domain\n if secure:\n cookie[key][\"secure\"] = True\n if httponly:\n cookie[key][\"httponly\"] = True\n if samesite is not None:\n assert samesite.lower() in [\n \"strict\",\n \"lax\",\n \"none\",\n ], \"samesite must be either 'strict', 'lax' or 'none'\"\n cookie[key][\"samesite\"] = samesite\n cookie_val = cookie.output(header=\"\").strip()\n self.raw_headers.append((b\"set-cookie\", cookie_val.encode(\"latin-1\")))\n\n def delete_cookie(\n self,\n key: str,\n path: str = \"/\",\n domain: str = None,\n secure: bool = False,\n httponly: bool = False,\n samesite: str = \"lax\",\n ) -> None:\n self.set_cookie(\n key,\n max_age=0,\n expires=0,\n path=path,\n domain=domain,\n secure=secure,\n httponly=httponly,\n samesite=samesite,\n )\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n await send(\n {\n \"type\": \"http.response.start\",\n \"status\": self.status_code,\n \"headers\": self.raw_headers,\n }\n )\n await send({\"type\": \"http.response.body\", \"body\": self.body})\n\n if self.background is not None:\n await self.background()\n\n\nclass HTMLResponse(Response):\n media_type = \"text/html\"\n\n\nclass PlainTextResponse(Response):\n media_type = \"text/plain\"\n\n\nclass JSONResponse(Response):\n media_type = \"application/json\"\n\n def render(self, content: typing.Any) -> bytes:\n return json.dumps(\n content,\n ensure_ascii=False,\n allow_nan=False,\n indent=None,\n separators=(\",\", \":\"),\n ).encode(\"utf-8\")\n\n\nclass RedirectResponse(Response):\n def __init__(\n self,\n url: typing.Union[str, URL],\n status_code: int = 307,\n headers: dict = None,\n 
background: BackgroundTask = None,\n ) -> None:\n super().__init__(\n content=b\"\", status_code=status_code, headers=headers, background=background\n )\n self.headers[\"location\"] = quote(str(url), safe=\":/%#?=@[]!$&'()*+,;\")\n\n\nclass StreamingResponse(Response):\n def __init__(\n self,\n content: typing.Any,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ) -> None:\n if isinstance(content, typing.AsyncIterable):\n self.body_iterator = content\n else:\n self.body_iterator = iterate_in_threadpool(content)\n self.status_code = status_code\n self.media_type = self.media_type if media_type is None else media_type\n self.background = background\n self.init_headers(headers)\n\n async def listen_for_disconnect(self, receive: Receive) -> None:\n while True:\n message = await receive()\n if message[\"type\"] == \"http.disconnect\":\n break\n\n async def stream_response(self, send: Send) -> None:\n await send(\n {\n \"type\": \"http.response.start\",\n \"status\": self.status_code,\n \"headers\": self.raw_headers,\n }\n )\n async for chunk in self.body_iterator:\n if not isinstance(chunk, bytes):\n chunk = chunk.encode(self.charset)\n await send({\"type\": \"http.response.body\", \"body\": chunk, \"more_body\": True})\n\n await send({\"type\": \"http.response.body\", \"body\": b\"\", \"more_body\": False})\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n async with anyio.create_task_group() as task_group:\n\n async def wrap(func: typing.Callable[[], typing.Coroutine]) -> None:\n await func()\n task_group.cancel_scope.cancel()\n\n task_group.start_soon(wrap, partial(self.stream_response, send))\n await wrap(partial(self.listen_for_disconnect, receive))\n\n if self.background is not None:\n await self.background()\n\n\nclass FileResponse(Response):\n chunk_size = 64 * 1024\n\n def __init__(\n self,\n path: typing.Union[str, \"os.PathLike[str]\"],\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n filename: str = None,\n stat_result: os.stat_result = None,\n method: str = None,\n ) -> None:\n self.path = path\n self.status_code = status_code\n self.filename = filename\n self.send_header_only = method is not None and method.upper() == \"HEAD\"\n if media_type is None:\n media_type = guess_type(filename or path)[0] or \"text/plain\"\n self.media_type = media_type\n self.background = background\n self.init_headers(headers)\n if self.filename is not None:\n content_disposition_filename = quote(self.filename)\n if content_disposition_filename != self.filename:\n content_disposition = \"attachment; filename*=utf-8''{}\".format(\n content_disposition_filename\n )\n else:\n content_disposition = f'attachment; filename=\"{self.filename}\"'\n self.headers.setdefault(\"content-disposition\", content_disposition)\n self.stat_result = stat_result\n if stat_result is not None:\n self.set_stat_headers(stat_result)\n\n def set_stat_headers(self, stat_result: os.stat_result) -> None:\n content_length = str(stat_result.st_size)\n last_modified = formatdate(stat_result.st_mtime, usegmt=True)\n etag_base = str(stat_result.st_mtime) + \"-\" + str(stat_result.st_size)\n etag = md5_hexdigest(etag_base.encode(), usedforsecurity=False)\n\n self.headers.setdefault(\"content-length\", content_length)\n self.headers.setdefault(\"last-modified\", last_modified)\n self.headers.setdefault(\"etag\", etag)\n\n async def __call__(self, scope: Scope, receive: 
Receive, send: Send) -> None:\n if self.stat_result is None:\n try:\n stat_result = await anyio.to_thread.run_sync(os.stat, self.path)\n self.set_stat_headers(stat_result)\n except FileNotFoundError:\n raise RuntimeError(f\"File at path {self.path} does not exist.\")\n else:\n mode = stat_result.st_mode\n if not stat.S_ISREG(mode):\n raise RuntimeError(f\"File at path {self.path} is not a file.\")\n await send(\n {\n \"type\": \"http.response.start\",\n \"status\": self.status_code,\n \"headers\": self.raw_headers,\n }\n )\n if self.send_header_only:\n await send({\"type\": \"http.response.body\", \"body\": b\"\", \"more_body\": False})\n else:\n async with await anyio.open_file(self.path, mode=\"rb\") as file:\n more_body = True\n while more_body:\n chunk = await file.read(self.chunk_size)\n more_body = len(chunk) == self.chunk_size\n await send(\n {\n \"type\": \"http.response.body\",\n \"body\": chunk,\n \"more_body\": more_body,\n }\n )\n if self.background is not None:\n await self.background()\n", "path": "starlette/responses.py"}], "after_files": [{"content": "import http.cookies\nimport json\nimport os\nimport stat\nimport sys\nimport typing\nfrom email.utils import formatdate\nfrom functools import partial\nfrom mimetypes import guess_type as mimetypes_guess_type\nfrom urllib.parse import quote\n\nimport anyio\n\nfrom starlette._compat import md5_hexdigest\nfrom starlette.background import BackgroundTask\nfrom starlette.concurrency import iterate_in_threadpool\nfrom starlette.datastructures import URL, MutableHeaders\nfrom starlette.types import Receive, Scope, Send\n\n# Workaround for adding samesite support to pre 3.8 python\nhttp.cookies.Morsel._reserved[\"samesite\"] = \"SameSite\" # type: ignore\n\n\n# Compatibility wrapper for `mimetypes.guess_type` to support `os.PathLike` on <py3.8\ndef guess_type(\n url: typing.Union[str, \"os.PathLike[str]\"], strict: bool = True\n) -> typing.Tuple[typing.Optional[str], typing.Optional[str]]:\n if sys.version_info < (3, 8): # pragma: no cover\n url = os.fspath(url)\n return mimetypes_guess_type(url, strict)\n\n\nclass Response:\n media_type = None\n charset = \"utf-8\"\n\n def __init__(\n self,\n content: typing.Any = None,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ) -> None:\n self.status_code = status_code\n if media_type is not None:\n self.media_type = media_type\n self.background = background\n self.body = self.render(content)\n self.init_headers(headers)\n\n def render(self, content: typing.Any) -> bytes:\n if content is None:\n return b\"\"\n if isinstance(content, bytes):\n return content\n return content.encode(self.charset)\n\n def init_headers(self, headers: typing.Mapping[str, str] = None) -> None:\n if headers is None:\n raw_headers: typing.List[typing.Tuple[bytes, bytes]] = []\n populate_content_length = True\n populate_content_type = True\n else:\n raw_headers = [\n (k.lower().encode(\"latin-1\"), v.encode(\"latin-1\"))\n for k, v in headers.items()\n ]\n keys = [h[0] for h in raw_headers]\n populate_content_length = b\"content-length\" not in keys\n populate_content_type = b\"content-type\" not in keys\n\n body = getattr(self, \"body\", None)\n if (\n body is not None\n and populate_content_length\n and not (self.status_code < 200 or self.status_code in (204, 304))\n ):\n content_length = str(len(body))\n raw_headers.append((b\"content-length\", content_length.encode(\"latin-1\")))\n\n content_type = self.media_type\n if content_type is not None and 
populate_content_type:\n if content_type.startswith(\"text/\"):\n content_type += \"; charset=\" + self.charset\n raw_headers.append((b\"content-type\", content_type.encode(\"latin-1\")))\n\n self.raw_headers = raw_headers\n\n @property\n def headers(self) -> MutableHeaders:\n if not hasattr(self, \"_headers\"):\n self._headers = MutableHeaders(raw=self.raw_headers)\n return self._headers\n\n def set_cookie(\n self,\n key: str,\n value: str = \"\",\n max_age: int = None,\n expires: int = None,\n path: str = \"/\",\n domain: str = None,\n secure: bool = False,\n httponly: bool = False,\n samesite: str = \"lax\",\n ) -> None:\n cookie: http.cookies.BaseCookie = http.cookies.SimpleCookie()\n cookie[key] = value\n if max_age is not None:\n cookie[key][\"max-age\"] = max_age\n if expires is not None:\n cookie[key][\"expires\"] = expires\n if path is not None:\n cookie[key][\"path\"] = path\n if domain is not None:\n cookie[key][\"domain\"] = domain\n if secure:\n cookie[key][\"secure\"] = True\n if httponly:\n cookie[key][\"httponly\"] = True\n if samesite is not None:\n assert samesite.lower() in [\n \"strict\",\n \"lax\",\n \"none\",\n ], \"samesite must be either 'strict', 'lax' or 'none'\"\n cookie[key][\"samesite\"] = samesite\n cookie_val = cookie.output(header=\"\").strip()\n self.raw_headers.append((b\"set-cookie\", cookie_val.encode(\"latin-1\")))\n\n def delete_cookie(\n self,\n key: str,\n path: str = \"/\",\n domain: str = None,\n secure: bool = False,\n httponly: bool = False,\n samesite: str = \"lax\",\n ) -> None:\n self.set_cookie(\n key,\n max_age=0,\n expires=0,\n path=path,\n domain=domain,\n secure=secure,\n httponly=httponly,\n samesite=samesite,\n )\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n await send(\n {\n \"type\": \"http.response.start\",\n \"status\": self.status_code,\n \"headers\": self.raw_headers,\n }\n )\n await send({\"type\": \"http.response.body\", \"body\": self.body})\n\n if self.background is not None:\n await self.background()\n\n\nclass HTMLResponse(Response):\n media_type = \"text/html\"\n\n\nclass PlainTextResponse(Response):\n media_type = \"text/plain\"\n\n\nclass JSONResponse(Response):\n media_type = \"application/json\"\n\n def render(self, content: typing.Any) -> bytes:\n return json.dumps(\n content,\n ensure_ascii=False,\n allow_nan=False,\n indent=None,\n separators=(\",\", \":\"),\n ).encode(\"utf-8\")\n\n\nclass RedirectResponse(Response):\n def __init__(\n self,\n url: typing.Union[str, URL],\n status_code: int = 307,\n headers: dict = None,\n background: BackgroundTask = None,\n ) -> None:\n super().__init__(\n content=b\"\", status_code=status_code, headers=headers, background=background\n )\n self.headers[\"location\"] = quote(str(url), safe=\":/%#?=@[]!$&'()*+,;\")\n\n\nclass StreamingResponse(Response):\n def __init__(\n self,\n content: typing.Any,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ) -> None:\n if isinstance(content, typing.AsyncIterable):\n self.body_iterator = content\n else:\n self.body_iterator = iterate_in_threadpool(content)\n self.status_code = status_code\n self.media_type = self.media_type if media_type is None else media_type\n self.background = background\n self.init_headers(headers)\n\n async def listen_for_disconnect(self, receive: Receive) -> None:\n while True:\n message = await receive()\n if message[\"type\"] == \"http.disconnect\":\n break\n\n async def stream_response(self, send: Send) -> None:\n 
await send(\n {\n \"type\": \"http.response.start\",\n \"status\": self.status_code,\n \"headers\": self.raw_headers,\n }\n )\n async for chunk in self.body_iterator:\n if not isinstance(chunk, bytes):\n chunk = chunk.encode(self.charset)\n await send({\"type\": \"http.response.body\", \"body\": chunk, \"more_body\": True})\n\n await send({\"type\": \"http.response.body\", \"body\": b\"\", \"more_body\": False})\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n async with anyio.create_task_group() as task_group:\n\n async def wrap(func: typing.Callable[[], typing.Coroutine]) -> None:\n await func()\n task_group.cancel_scope.cancel()\n\n task_group.start_soon(wrap, partial(self.stream_response, send))\n await wrap(partial(self.listen_for_disconnect, receive))\n\n if self.background is not None:\n await self.background()\n\n\nclass FileResponse(Response):\n chunk_size = 64 * 1024\n\n def __init__(\n self,\n path: typing.Union[str, \"os.PathLike[str]\"],\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n filename: str = None,\n stat_result: os.stat_result = None,\n method: str = None,\n ) -> None:\n self.path = path\n self.status_code = status_code\n self.filename = filename\n self.send_header_only = method is not None and method.upper() == \"HEAD\"\n if media_type is None:\n media_type = guess_type(filename or path)[0] or \"text/plain\"\n self.media_type = media_type\n self.background = background\n self.init_headers(headers)\n if self.filename is not None:\n content_disposition_filename = quote(self.filename)\n if content_disposition_filename != self.filename:\n content_disposition = \"attachment; filename*=utf-8''{}\".format(\n content_disposition_filename\n )\n else:\n content_disposition = f'attachment; filename=\"{self.filename}\"'\n self.headers.setdefault(\"content-disposition\", content_disposition)\n self.stat_result = stat_result\n if stat_result is not None:\n self.set_stat_headers(stat_result)\n\n def set_stat_headers(self, stat_result: os.stat_result) -> None:\n content_length = str(stat_result.st_size)\n last_modified = formatdate(stat_result.st_mtime, usegmt=True)\n etag_base = str(stat_result.st_mtime) + \"-\" + str(stat_result.st_size)\n etag = md5_hexdigest(etag_base.encode(), usedforsecurity=False)\n\n self.headers.setdefault(\"content-length\", content_length)\n self.headers.setdefault(\"last-modified\", last_modified)\n self.headers.setdefault(\"etag\", etag)\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if self.stat_result is None:\n try:\n stat_result = await anyio.to_thread.run_sync(os.stat, self.path)\n self.set_stat_headers(stat_result)\n except FileNotFoundError:\n raise RuntimeError(f\"File at path {self.path} does not exist.\")\n else:\n mode = stat_result.st_mode\n if not stat.S_ISREG(mode):\n raise RuntimeError(f\"File at path {self.path} is not a file.\")\n await send(\n {\n \"type\": \"http.response.start\",\n \"status\": self.status_code,\n \"headers\": self.raw_headers,\n }\n )\n if self.send_header_only:\n await send({\"type\": \"http.response.body\", \"body\": b\"\", \"more_body\": False})\n else:\n async with await anyio.open_file(self.path, mode=\"rb\") as file:\n more_body = True\n while more_body:\n chunk = await file.read(self.chunk_size)\n more_body = len(chunk) == self.chunk_size\n await send(\n {\n \"type\": \"http.response.body\",\n \"body\": chunk,\n \"more_body\": more_body,\n }\n )\n if self.background is not 
None:\n await self.background()\n", "path": "starlette/responses.py"}]}
| 3,972 | 157 |
gh_patches_debug_38581
|
rasdani/github-patches
|
git_diff
|
kartoza__prj.app-217
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Crash on listing current sponsors
When opening the sponsors view (with some sponsors and sponsor periods created), we get a crash.
http://changelog.inasafe.org/en/qgis/sponsor/list/
Sentry info:
http://sentry.kartoza.com/kartoza/projecta-live/group/5848/
Relevant code.
```
def current_sponsor(self):
today = datetime.datetime.now().replace(tzinfo=utc)
end = self.end_date.replace(tzinfo=utc) # <-- offending line
if end < today:
return False
else:
return True
```
--- END ISSUE ---
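The crash comes down to a date/datetime mix-up: `end_date` is a `DateField`, so `self.end_date` is a plain `datetime.date`, which neither accepts `tzinfo` in `replace()` nor compares against a `datetime`. A minimal sketch (illustrative values only) of the mismatch and of the date-only comparison used in the patch further down:

```python
import datetime

# end_date comes from a DateField, so it is a plain datetime.date
end = datetime.date(2016, 1, 31)

# Both of these raise TypeError on a plain date, so either variant crashes:
#   end.replace(tzinfo=utc)        # date.replace() has no tzinfo argument (the offending line above)
#   end < datetime.datetime.now()  # date and datetime do not compare

# Comparing date with date works, which is what the patch further down does:
today = datetime.datetime.now().date()
print(end < today)  # True
```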
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django_project/changes/models/sponsorship_period.py`
Content:
```
1 __author__ = 'rischan'
2
3 import string
4 import random
5 from django.utils import timezone
6 from django.core.urlresolvers import reverse
7 from django.utils.text import slugify
8 from core.settings.contrib import STOP_WORDS
9 from django.db import models
10 from django.utils.translation import ugettext_lazy as _
11 from django.contrib.auth.models import User
12
13
14 class ApprovedSponsorshipPeriodManager(models.Manager):
15 """Custom sponsor manager that shows only approved records."""
16
17 def get_queryset(self):
18 """Query set generator"""
19 return super(
20 ApprovedSponsorshipPeriodManager, self).get_queryset().filter(
21 approved=True)
22
23
24 class UnapprovedSponsorshipPeriodManager(models.Manager):
25 """Custom sponsor manager that shows only unapproved records."""
26
27 def get_queryset(self):
28 """Query set generator"""
29 return super(
30 UnapprovedSponsorshipPeriodManager, self).get_queryset().filter(
31 approved=False)
32
33
34 class SponsorshipPeriod(models.Model):
35 """A sponsorship period model e.g. gui, backend, web site etc."""
36
37 start_date = models.DateField(
38 _("Start date"),
39 help_text='Start date of sponsorship period',
40 default=timezone.now)
41
42 end_date = models.DateField(
43 _("End date"),
44 help_text='End date of sponsorship period',
45 default=timezone.now)
46
47 approved = models.BooleanField(
48 help_text=_(
49 'Whether this sponsorship period has been approved for use by '
50 'the project owner.'),
51 default=False
52 )
53
54 author = models.ForeignKey(User)
55 slug = models.SlugField()
56 project = models.ForeignKey('base.Project')
57 objects = models.Manager()
58 approved_objects = ApprovedSponsorshipPeriodManager()
59 unapproved_objects = UnapprovedSponsorshipPeriodManager()
60 sponsor = models.ForeignKey(
61 'Sponsor',
62 help_text='Input the sponsor name',
63 )
64 sponsorshiplevel = models.ForeignKey(
65 'SponsorshipLevel',
66 help_text='This level take from Sponsorship Level, '
67 'you can add it by using Sponsorship Level menu',
68 )
69 # noinspection PyClassicStyleClass
70
71 class Meta:
72 """Meta options for the sponsor class."""
73 unique_together = (
74 ('project', 'slug')
75 )
76 app_label = 'changes'
77 ordering = ['start_date']
78
79 def save(self, *args, **kwargs):
80
81 if not self.pk:
82 name = self.slug_generator()
83 words = name.split()
84 filtered_words = [t for t in words if t.lower() not in STOP_WORDS]
85 new_list = ' '.join(filtered_words)
86 self.slug = slugify(new_list)[:50]
87 super(SponsorshipPeriod, self).save(*args, **kwargs)
88
89 def slug_generator(self, size=6, chars=string.ascii_lowercase):
90 return ''.join(random.choice(chars) for _ in range(size))
91
92 def __unicode__(self):
93 return u'%s - %s : %s' % (
94 self.start_date,
95 self.end_date
96 )
97
98 def get_absolute_url(self):
99 return reverse('sponsorshipperiod-detail', kwargs={
100 'slug': self.slug,
101 'project_slug': self.project.slug
102 })
103
104 def current_sponsor(self):
105 today = timezone.now()
106 end = self.end_date
107 if end < today:
108 return False
109 else:
110 return True
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django_project/changes/models/sponsorship_period.py b/django_project/changes/models/sponsorship_period.py
--- a/django_project/changes/models/sponsorship_period.py
+++ b/django_project/changes/models/sponsorship_period.py
@@ -1,15 +1,19 @@
-__author__ = 'rischan'
+# coding=utf-8
import string
import random
+import datetime
from django.utils import timezone
from django.core.urlresolvers import reverse
from django.utils.text import slugify
+# noinspection PyPackageRequirements
from core.settings.contrib import STOP_WORDS
from django.db import models
from django.utils.translation import ugettext_lazy as _
from django.contrib.auth.models import User
+__author__ = 'rischan'
+
class ApprovedSponsorshipPeriodManager(models.Manager):
"""Custom sponsor manager that shows only approved records."""
@@ -57,14 +61,16 @@
objects = models.Manager()
approved_objects = ApprovedSponsorshipPeriodManager()
unapproved_objects = UnapprovedSponsorshipPeriodManager()
+ # noinspection PyUnresolvedReferences
sponsor = models.ForeignKey(
- 'Sponsor',
- help_text='Input the sponsor name',
+ 'Sponsor',
+ help_text='Input the sponsor name',
)
+ # noinspection PyUnresolvedReferences
sponsorshiplevel = models.ForeignKey(
- 'SponsorshipLevel',
- help_text='This level take from Sponsorship Level, '
- 'you can add it by using Sponsorship Level menu',
+ 'SponsorshipLevel',
+ help_text='This level take from Sponsorship Level, '
+ 'you can add it by using Sponsorship Level menu',
)
# noinspection PyClassicStyleClass
@@ -86,11 +92,13 @@
self.slug = slugify(new_list)[:50]
super(SponsorshipPeriod, self).save(*args, **kwargs)
- def slug_generator(self, size=6, chars=string.ascii_lowercase):
+ @staticmethod
+ def slug_generator(size=6, chars=string.ascii_lowercase):
return ''.join(random.choice(chars) for _ in range(size))
def __unicode__(self):
return u'%s - %s : %s' % (
+ self.sponsor.name,
self.start_date,
self.end_date
)
@@ -102,7 +110,7 @@
})
def current_sponsor(self):
- today = timezone.now()
+ today = datetime.datetime.now().date()
end = self.end_date
if end < today:
return False
|
{"golden_diff": "diff --git a/django_project/changes/models/sponsorship_period.py b/django_project/changes/models/sponsorship_period.py\n--- a/django_project/changes/models/sponsorship_period.py\n+++ b/django_project/changes/models/sponsorship_period.py\n@@ -1,15 +1,19 @@\n-__author__ = 'rischan'\n+# coding=utf-8\n \n import string\n import random\n+import datetime\n from django.utils import timezone\n from django.core.urlresolvers import reverse\n from django.utils.text import slugify\n+# noinspection PyPackageRequirements\n from core.settings.contrib import STOP_WORDS\n from django.db import models\n from django.utils.translation import ugettext_lazy as _\n from django.contrib.auth.models import User\n \n+__author__ = 'rischan'\n+\n \n class ApprovedSponsorshipPeriodManager(models.Manager):\n \"\"\"Custom sponsor manager that shows only approved records.\"\"\"\n@@ -57,14 +61,16 @@\n objects = models.Manager()\n approved_objects = ApprovedSponsorshipPeriodManager()\n unapproved_objects = UnapprovedSponsorshipPeriodManager()\n+ # noinspection PyUnresolvedReferences\n sponsor = models.ForeignKey(\n- 'Sponsor',\n- help_text='Input the sponsor name',\n+ 'Sponsor',\n+ help_text='Input the sponsor name',\n )\n+ # noinspection PyUnresolvedReferences\n sponsorshiplevel = models.ForeignKey(\n- 'SponsorshipLevel',\n- help_text='This level take from Sponsorship Level, '\n- 'you can add it by using Sponsorship Level menu',\n+ 'SponsorshipLevel',\n+ help_text='This level take from Sponsorship Level, '\n+ 'you can add it by using Sponsorship Level menu',\n )\n # noinspection PyClassicStyleClass\n \n@@ -86,11 +92,13 @@\n self.slug = slugify(new_list)[:50]\n super(SponsorshipPeriod, self).save(*args, **kwargs)\n \n- def slug_generator(self, size=6, chars=string.ascii_lowercase):\n+ @staticmethod\n+ def slug_generator(size=6, chars=string.ascii_lowercase):\n return ''.join(random.choice(chars) for _ in range(size))\n \n def __unicode__(self):\n return u'%s - %s : %s' % (\n+ self.sponsor.name,\n self.start_date,\n self.end_date\n )\n@@ -102,7 +110,7 @@\n })\n \n def current_sponsor(self):\n- today = timezone.now()\n+ today = datetime.datetime.now().date()\n end = self.end_date\n if end < today:\n return False\n", "issue": "Crash on listing current sponsors\nWhen opening the sponsors view (with some sponsors and sponsor periods created) we get a crash.\n\nhttp://changelog.inasafe.org/en/qgis/sponsor/list/\n\nSentry info:\n\nhttp://sentry.kartoza.com/kartoza/projecta-live/group/5848/\n\nRelevant code.\n\n```\n def current_sponsor(self):\n today = datetime.datetime.now().replace(tzinfo=utc)\n end = self.end_date.replace(tzinfo=utc) # <-- offending line\n if end < today:\n return False\n else:\n return True\n```\n\n", "before_files": [{"content": "__author__ = 'rischan'\n\nimport string\nimport random\nfrom django.utils import timezone\nfrom django.core.urlresolvers import reverse\nfrom django.utils.text import slugify\nfrom core.settings.contrib import STOP_WORDS\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.contrib.auth.models import User\n\n\nclass ApprovedSponsorshipPeriodManager(models.Manager):\n \"\"\"Custom sponsor manager that shows only approved records.\"\"\"\n\n def get_queryset(self):\n \"\"\"Query set generator\"\"\"\n return super(\n ApprovedSponsorshipPeriodManager, self).get_queryset().filter(\n approved=True)\n\n\nclass UnapprovedSponsorshipPeriodManager(models.Manager):\n \"\"\"Custom sponsor manager that shows only unapproved 
records.\"\"\"\n\n def get_queryset(self):\n \"\"\"Query set generator\"\"\"\n return super(\n UnapprovedSponsorshipPeriodManager, self).get_queryset().filter(\n approved=False)\n\n\nclass SponsorshipPeriod(models.Model):\n \"\"\"A sponsorship period model e.g. gui, backend, web site etc.\"\"\"\n\n start_date = models.DateField(\n _(\"Start date\"),\n help_text='Start date of sponsorship period',\n default=timezone.now)\n\n end_date = models.DateField(\n _(\"End date\"),\n help_text='End date of sponsorship period',\n default=timezone.now)\n\n approved = models.BooleanField(\n help_text=_(\n 'Whether this sponsorship period has been approved for use by '\n 'the project owner.'),\n default=False\n )\n\n author = models.ForeignKey(User)\n slug = models.SlugField()\n project = models.ForeignKey('base.Project')\n objects = models.Manager()\n approved_objects = ApprovedSponsorshipPeriodManager()\n unapproved_objects = UnapprovedSponsorshipPeriodManager()\n sponsor = models.ForeignKey(\n 'Sponsor',\n help_text='Input the sponsor name',\n )\n sponsorshiplevel = models.ForeignKey(\n 'SponsorshipLevel',\n help_text='This level take from Sponsorship Level, '\n 'you can add it by using Sponsorship Level menu',\n )\n # noinspection PyClassicStyleClass\n\n class Meta:\n \"\"\"Meta options for the sponsor class.\"\"\"\n unique_together = (\n ('project', 'slug')\n )\n app_label = 'changes'\n ordering = ['start_date']\n\n def save(self, *args, **kwargs):\n\n if not self.pk:\n name = self.slug_generator()\n words = name.split()\n filtered_words = [t for t in words if t.lower() not in STOP_WORDS]\n new_list = ' '.join(filtered_words)\n self.slug = slugify(new_list)[:50]\n super(SponsorshipPeriod, self).save(*args, **kwargs)\n\n def slug_generator(self, size=6, chars=string.ascii_lowercase):\n return ''.join(random.choice(chars) for _ in range(size))\n\n def __unicode__(self):\n return u'%s - %s : %s' % (\n self.start_date,\n self.end_date\n )\n\n def get_absolute_url(self):\n return reverse('sponsorshipperiod-detail', kwargs={\n 'slug': self.slug,\n 'project_slug': self.project.slug\n })\n\n def current_sponsor(self):\n today = timezone.now()\n end = self.end_date\n if end < today:\n return False\n else:\n return True\n", "path": "django_project/changes/models/sponsorship_period.py"}], "after_files": [{"content": "# coding=utf-8\n\nimport string\nimport random\nimport datetime\nfrom django.utils import timezone\nfrom django.core.urlresolvers import reverse\nfrom django.utils.text import slugify\n# noinspection PyPackageRequirements\nfrom core.settings.contrib import STOP_WORDS\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.contrib.auth.models import User\n\n__author__ = 'rischan'\n\n\nclass ApprovedSponsorshipPeriodManager(models.Manager):\n \"\"\"Custom sponsor manager that shows only approved records.\"\"\"\n\n def get_queryset(self):\n \"\"\"Query set generator\"\"\"\n return super(\n ApprovedSponsorshipPeriodManager, self).get_queryset().filter(\n approved=True)\n\n\nclass UnapprovedSponsorshipPeriodManager(models.Manager):\n \"\"\"Custom sponsor manager that shows only unapproved records.\"\"\"\n\n def get_queryset(self):\n \"\"\"Query set generator\"\"\"\n return super(\n UnapprovedSponsorshipPeriodManager, self).get_queryset().filter(\n approved=False)\n\n\nclass SponsorshipPeriod(models.Model):\n \"\"\"A sponsorship period model e.g. 
gui, backend, web site etc.\"\"\"\n\n start_date = models.DateField(\n _(\"Start date\"),\n help_text='Start date of sponsorship period',\n default=timezone.now)\n\n end_date = models.DateField(\n _(\"End date\"),\n help_text='End date of sponsorship period',\n default=timezone.now)\n\n approved = models.BooleanField(\n help_text=_(\n 'Whether this sponsorship period has been approved for use by '\n 'the project owner.'),\n default=False\n )\n\n author = models.ForeignKey(User)\n slug = models.SlugField()\n project = models.ForeignKey('base.Project')\n objects = models.Manager()\n approved_objects = ApprovedSponsorshipPeriodManager()\n unapproved_objects = UnapprovedSponsorshipPeriodManager()\n # noinspection PyUnresolvedReferences\n sponsor = models.ForeignKey(\n 'Sponsor',\n help_text='Input the sponsor name',\n )\n # noinspection PyUnresolvedReferences\n sponsorshiplevel = models.ForeignKey(\n 'SponsorshipLevel',\n help_text='This level take from Sponsorship Level, '\n 'you can add it by using Sponsorship Level menu',\n )\n # noinspection PyClassicStyleClass\n\n class Meta:\n \"\"\"Meta options for the sponsor class.\"\"\"\n unique_together = (\n ('project', 'slug')\n )\n app_label = 'changes'\n ordering = ['start_date']\n\n def save(self, *args, **kwargs):\n\n if not self.pk:\n name = self.slug_generator()\n words = name.split()\n filtered_words = [t for t in words if t.lower() not in STOP_WORDS]\n new_list = ' '.join(filtered_words)\n self.slug = slugify(new_list)[:50]\n super(SponsorshipPeriod, self).save(*args, **kwargs)\n\n @staticmethod\n def slug_generator(size=6, chars=string.ascii_lowercase):\n return ''.join(random.choice(chars) for _ in range(size))\n\n def __unicode__(self):\n return u'%s - %s : %s' % (\n self.sponsor.name,\n self.start_date,\n self.end_date\n )\n\n def get_absolute_url(self):\n return reverse('sponsorshipperiod-detail', kwargs={\n 'slug': self.slug,\n 'project_slug': self.project.slug\n })\n\n def current_sponsor(self):\n today = datetime.datetime.now().date()\n end = self.end_date\n if end < today:\n return False\n else:\n return True\n", "path": "django_project/changes/models/sponsorship_period.py"}]}
| 1,332 | 574 |
gh_patches_debug_5592
|
rasdani/github-patches
|
git_diff
|
MongoEngine__mongoengine-1862
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Name of text index
Is it possible to set the name of a text index?
I have nearly 10 fields which I want to use in a text index and I can't, because I am limited by the length of the index name (see: https://docs.mongodb.com/v3.2/reference/limits/#Index-Name-Length).
Also, I don't want to use a wildcard index (by the way, is that possible in mongoengine?)
Thanks!
--- END ISSUE ---
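For context, a rough sketch of how an explicitly named text index can be declared in MongoEngine's `meta`, assuming extra index options such as `name` are passed through to PyMongo's `create_index` (the class and field names here are invented for illustration):

```python
from mongoengine import Document, StringField


class Article(Document):
    title = StringField()
    body = StringField()
    summary = StringField()
    # ...further text fields...

    meta = {
        "indexes": [
            {
                # the "$" prefix marks fields that take part in the text index
                "fields": ["$title", "$body", "$summary"],
                # explicit short name instead of the auto-generated one,
                # keeping the index under MongoDB's name-length limit
                "name": "article_text_index",
                "default_language": "english",
            }
        ]
    }
```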
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mongoengine/context_managers.py`
Content:
```
1 from contextlib import contextmanager
2 from pymongo.write_concern import WriteConcern
3 from mongoengine.common import _import_class
4 from mongoengine.connection import DEFAULT_CONNECTION_NAME, get_db
5
6
7 __all__ = ('switch_db', 'switch_collection', 'no_dereference',
8 'no_sub_classes', 'query_counter', 'set_write_concern')
9
10
11 class switch_db(object):
12 """switch_db alias context manager.
13
14 Example ::
15
16 # Register connections
17 register_connection('default', 'mongoenginetest')
18 register_connection('testdb-1', 'mongoenginetest2')
19
20 class Group(Document):
21 name = StringField()
22
23 Group(name='test').save() # Saves in the default db
24
25 with switch_db(Group, 'testdb-1') as Group:
26 Group(name='hello testdb!').save() # Saves in testdb-1
27 """
28
29 def __init__(self, cls, db_alias):
30 """Construct the switch_db context manager
31
32 :param cls: the class to change the registered db
33 :param db_alias: the name of the specific database to use
34 """
35 self.cls = cls
36 self.collection = cls._get_collection()
37 self.db_alias = db_alias
38 self.ori_db_alias = cls._meta.get('db_alias', DEFAULT_CONNECTION_NAME)
39
40 def __enter__(self):
41 """Change the db_alias and clear the cached collection."""
42 self.cls._meta['db_alias'] = self.db_alias
43 self.cls._collection = None
44 return self.cls
45
46 def __exit__(self, t, value, traceback):
47 """Reset the db_alias and collection."""
48 self.cls._meta['db_alias'] = self.ori_db_alias
49 self.cls._collection = self.collection
50
51
52 class switch_collection(object):
53 """switch_collection alias context manager.
54
55 Example ::
56
57 class Group(Document):
58 name = StringField()
59
60 Group(name='test').save() # Saves in the default db
61
62 with switch_collection(Group, 'group1') as Group:
63 Group(name='hello testdb!').save() # Saves in group1 collection
64 """
65
66 def __init__(self, cls, collection_name):
67 """Construct the switch_collection context manager.
68
69 :param cls: the class to change the registered db
70 :param collection_name: the name of the collection to use
71 """
72 self.cls = cls
73 self.ori_collection = cls._get_collection()
74 self.ori_get_collection_name = cls._get_collection_name
75 self.collection_name = collection_name
76
77 def __enter__(self):
78 """Change the _get_collection_name and clear the cached collection."""
79
80 @classmethod
81 def _get_collection_name(cls):
82 return self.collection_name
83
84 self.cls._get_collection_name = _get_collection_name
85 self.cls._collection = None
86 return self.cls
87
88 def __exit__(self, t, value, traceback):
89 """Reset the collection."""
90 self.cls._collection = self.ori_collection
91 self.cls._get_collection_name = self.ori_get_collection_name
92
93
94 class no_dereference(object):
95 """no_dereference context manager.
96
97 Turns off all dereferencing in Documents for the duration of the context
98 manager::
99
100 with no_dereference(Group) as Group:
101 Group.objects.find()
102 """
103
104 def __init__(self, cls):
105 """Construct the no_dereference context manager.
106
107 :param cls: the class to turn dereferencing off on
108 """
109 self.cls = cls
110
111 ReferenceField = _import_class('ReferenceField')
112 GenericReferenceField = _import_class('GenericReferenceField')
113 ComplexBaseField = _import_class('ComplexBaseField')
114
115 self.deref_fields = [k for k, v in self.cls._fields.iteritems()
116 if isinstance(v, (ReferenceField,
117 GenericReferenceField,
118 ComplexBaseField))]
119
120 def __enter__(self):
121 """Change the objects default and _auto_dereference values."""
122 for field in self.deref_fields:
123 self.cls._fields[field]._auto_dereference = False
124 return self.cls
125
126 def __exit__(self, t, value, traceback):
127 """Reset the default and _auto_dereference values."""
128 for field in self.deref_fields:
129 self.cls._fields[field]._auto_dereference = True
130 return self.cls
131
132
133 class no_sub_classes(object):
134 """no_sub_classes context manager.
135
136 Only returns instances of this class and no sub (inherited) classes::
137
138 with no_sub_classes(Group) as Group:
139 Group.objects.find()
140 """
141
142 def __init__(self, cls):
143 """Construct the no_sub_classes context manager.
144
145 :param cls: the class to turn querying sub classes on
146 """
147 self.cls = cls
148
149 def __enter__(self):
150 """Change the objects default and _auto_dereference values."""
151 self.cls._all_subclasses = self.cls._subclasses
152 self.cls._subclasses = (self.cls,)
153 return self.cls
154
155 def __exit__(self, t, value, traceback):
156 """Reset the default and _auto_dereference values."""
157 self.cls._subclasses = self.cls._all_subclasses
158 delattr(self.cls, '_all_subclasses')
159 return self.cls
160
161
162 class query_counter(object):
163 """Query_counter context manager to get the number of queries."""
164
165 def __init__(self):
166 """Construct the query_counter."""
167 self.counter = 0
168 self.db = get_db()
169
170 def __enter__(self):
171 """On every with block we need to drop the profile collection."""
172 self.db.set_profiling_level(0)
173 self.db.system.profile.drop()
174 self.db.set_profiling_level(2)
175 return self
176
177 def __exit__(self, t, value, traceback):
178 """Reset the profiling level."""
179 self.db.set_profiling_level(0)
180
181 def __eq__(self, value):
182 """== Compare querycounter."""
183 counter = self._get_count()
184 return value == counter
185
186 def __ne__(self, value):
187 """!= Compare querycounter."""
188 return not self.__eq__(value)
189
190 def __lt__(self, value):
191 """< Compare querycounter."""
192 return self._get_count() < value
193
194 def __le__(self, value):
195 """<= Compare querycounter."""
196 return self._get_count() <= value
197
198 def __gt__(self, value):
199 """> Compare querycounter."""
200 return self._get_count() > value
201
202 def __ge__(self, value):
203 """>= Compare querycounter."""
204 return self._get_count() >= value
205
206 def __int__(self):
207 """int representation."""
208 return self._get_count()
209
210 def __repr__(self):
211 """repr query_counter as the number of queries."""
212 return u"%s" % self._get_count()
213
214 def _get_count(self):
215 """Get the number of queries."""
216 ignore_query = {'ns': {'$ne': '%s.system.indexes' % self.db.name}}
217 count = self.db.system.profile.find(ignore_query).count() - self.counter
218 self.counter += 1
219 return count
220
221
222 @contextmanager
223 def set_write_concern(collection, write_concerns):
224 combined_concerns = dict(collection.write_concern.document.items())
225 combined_concerns.update(write_concerns)
226 yield collection.with_options(write_concern=WriteConcern(**combined_concerns))
227
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mongoengine/context_managers.py b/mongoengine/context_managers.py
--- a/mongoengine/context_managers.py
+++ b/mongoengine/context_managers.py
@@ -215,7 +215,7 @@
"""Get the number of queries."""
ignore_query = {'ns': {'$ne': '%s.system.indexes' % self.db.name}}
count = self.db.system.profile.find(ignore_query).count() - self.counter
- self.counter += 1
+ self.counter += 1 # Account for the query we just fired
return count
|
{"golden_diff": "diff --git a/mongoengine/context_managers.py b/mongoengine/context_managers.py\n--- a/mongoengine/context_managers.py\n+++ b/mongoengine/context_managers.py\n@@ -215,7 +215,7 @@\n \"\"\"Get the number of queries.\"\"\"\n ignore_query = {'ns': {'$ne': '%s.system.indexes' % self.db.name}}\n count = self.db.system.profile.find(ignore_query).count() - self.counter\n- self.counter += 1\n+ self.counter += 1 # Account for the query we just fired\n return count\n", "issue": "Name of text index\nIs it possible to set name of text index?\r\n\r\nI have nearly 10 fields which I want to use in text index and I can't, because limited by length of index name (see: https://docs.mongodb.com/v3.2/reference/limits/#Index-Name-Length)\r\n\r\nAlso, I don't want to use Wildcard index (btw, is it possible in mongoengine?)\r\n\r\nThanks!\n", "before_files": [{"content": "from contextlib import contextmanager\nfrom pymongo.write_concern import WriteConcern\nfrom mongoengine.common import _import_class\nfrom mongoengine.connection import DEFAULT_CONNECTION_NAME, get_db\n\n\n__all__ = ('switch_db', 'switch_collection', 'no_dereference',\n 'no_sub_classes', 'query_counter', 'set_write_concern')\n\n\nclass switch_db(object):\n \"\"\"switch_db alias context manager.\n\n Example ::\n\n # Register connections\n register_connection('default', 'mongoenginetest')\n register_connection('testdb-1', 'mongoenginetest2')\n\n class Group(Document):\n name = StringField()\n\n Group(name='test').save() # Saves in the default db\n\n with switch_db(Group, 'testdb-1') as Group:\n Group(name='hello testdb!').save() # Saves in testdb-1\n \"\"\"\n\n def __init__(self, cls, db_alias):\n \"\"\"Construct the switch_db context manager\n\n :param cls: the class to change the registered db\n :param db_alias: the name of the specific database to use\n \"\"\"\n self.cls = cls\n self.collection = cls._get_collection()\n self.db_alias = db_alias\n self.ori_db_alias = cls._meta.get('db_alias', DEFAULT_CONNECTION_NAME)\n\n def __enter__(self):\n \"\"\"Change the db_alias and clear the cached collection.\"\"\"\n self.cls._meta['db_alias'] = self.db_alias\n self.cls._collection = None\n return self.cls\n\n def __exit__(self, t, value, traceback):\n \"\"\"Reset the db_alias and collection.\"\"\"\n self.cls._meta['db_alias'] = self.ori_db_alias\n self.cls._collection = self.collection\n\n\nclass switch_collection(object):\n \"\"\"switch_collection alias context manager.\n\n Example ::\n\n class Group(Document):\n name = StringField()\n\n Group(name='test').save() # Saves in the default db\n\n with switch_collection(Group, 'group1') as Group:\n Group(name='hello testdb!').save() # Saves in group1 collection\n \"\"\"\n\n def __init__(self, cls, collection_name):\n \"\"\"Construct the switch_collection context manager.\n\n :param cls: the class to change the registered db\n :param collection_name: the name of the collection to use\n \"\"\"\n self.cls = cls\n self.ori_collection = cls._get_collection()\n self.ori_get_collection_name = cls._get_collection_name\n self.collection_name = collection_name\n\n def __enter__(self):\n \"\"\"Change the _get_collection_name and clear the cached collection.\"\"\"\n\n @classmethod\n def _get_collection_name(cls):\n return self.collection_name\n\n self.cls._get_collection_name = _get_collection_name\n self.cls._collection = None\n return self.cls\n\n def __exit__(self, t, value, traceback):\n \"\"\"Reset the collection.\"\"\"\n self.cls._collection = self.ori_collection\n self.cls._get_collection_name = 
self.ori_get_collection_name\n\n\nclass no_dereference(object):\n \"\"\"no_dereference context manager.\n\n Turns off all dereferencing in Documents for the duration of the context\n manager::\n\n with no_dereference(Group) as Group:\n Group.objects.find()\n \"\"\"\n\n def __init__(self, cls):\n \"\"\"Construct the no_dereference context manager.\n\n :param cls: the class to turn dereferencing off on\n \"\"\"\n self.cls = cls\n\n ReferenceField = _import_class('ReferenceField')\n GenericReferenceField = _import_class('GenericReferenceField')\n ComplexBaseField = _import_class('ComplexBaseField')\n\n self.deref_fields = [k for k, v in self.cls._fields.iteritems()\n if isinstance(v, (ReferenceField,\n GenericReferenceField,\n ComplexBaseField))]\n\n def __enter__(self):\n \"\"\"Change the objects default and _auto_dereference values.\"\"\"\n for field in self.deref_fields:\n self.cls._fields[field]._auto_dereference = False\n return self.cls\n\n def __exit__(self, t, value, traceback):\n \"\"\"Reset the default and _auto_dereference values.\"\"\"\n for field in self.deref_fields:\n self.cls._fields[field]._auto_dereference = True\n return self.cls\n\n\nclass no_sub_classes(object):\n \"\"\"no_sub_classes context manager.\n\n Only returns instances of this class and no sub (inherited) classes::\n\n with no_sub_classes(Group) as Group:\n Group.objects.find()\n \"\"\"\n\n def __init__(self, cls):\n \"\"\"Construct the no_sub_classes context manager.\n\n :param cls: the class to turn querying sub classes on\n \"\"\"\n self.cls = cls\n\n def __enter__(self):\n \"\"\"Change the objects default and _auto_dereference values.\"\"\"\n self.cls._all_subclasses = self.cls._subclasses\n self.cls._subclasses = (self.cls,)\n return self.cls\n\n def __exit__(self, t, value, traceback):\n \"\"\"Reset the default and _auto_dereference values.\"\"\"\n self.cls._subclasses = self.cls._all_subclasses\n delattr(self.cls, '_all_subclasses')\n return self.cls\n\n\nclass query_counter(object):\n \"\"\"Query_counter context manager to get the number of queries.\"\"\"\n\n def __init__(self):\n \"\"\"Construct the query_counter.\"\"\"\n self.counter = 0\n self.db = get_db()\n\n def __enter__(self):\n \"\"\"On every with block we need to drop the profile collection.\"\"\"\n self.db.set_profiling_level(0)\n self.db.system.profile.drop()\n self.db.set_profiling_level(2)\n return self\n\n def __exit__(self, t, value, traceback):\n \"\"\"Reset the profiling level.\"\"\"\n self.db.set_profiling_level(0)\n\n def __eq__(self, value):\n \"\"\"== Compare querycounter.\"\"\"\n counter = self._get_count()\n return value == counter\n\n def __ne__(self, value):\n \"\"\"!= Compare querycounter.\"\"\"\n return not self.__eq__(value)\n\n def __lt__(self, value):\n \"\"\"< Compare querycounter.\"\"\"\n return self._get_count() < value\n\n def __le__(self, value):\n \"\"\"<= Compare querycounter.\"\"\"\n return self._get_count() <= value\n\n def __gt__(self, value):\n \"\"\"> Compare querycounter.\"\"\"\n return self._get_count() > value\n\n def __ge__(self, value):\n \"\"\">= Compare querycounter.\"\"\"\n return self._get_count() >= value\n\n def __int__(self):\n \"\"\"int representation.\"\"\"\n return self._get_count()\n\n def __repr__(self):\n \"\"\"repr query_counter as the number of queries.\"\"\"\n return u\"%s\" % self._get_count()\n\n def _get_count(self):\n \"\"\"Get the number of queries.\"\"\"\n ignore_query = {'ns': {'$ne': '%s.system.indexes' % self.db.name}}\n count = self.db.system.profile.find(ignore_query).count() - 
self.counter\n self.counter += 1\n return count\n\n\n@contextmanager\ndef set_write_concern(collection, write_concerns):\n combined_concerns = dict(collection.write_concern.document.items())\n combined_concerns.update(write_concerns)\n yield collection.with_options(write_concern=WriteConcern(**combined_concerns))\n", "path": "mongoengine/context_managers.py"}], "after_files": [{"content": "from contextlib import contextmanager\nfrom pymongo.write_concern import WriteConcern\nfrom mongoengine.common import _import_class\nfrom mongoengine.connection import DEFAULT_CONNECTION_NAME, get_db\n\n\n__all__ = ('switch_db', 'switch_collection', 'no_dereference',\n 'no_sub_classes', 'query_counter', 'set_write_concern')\n\n\nclass switch_db(object):\n \"\"\"switch_db alias context manager.\n\n Example ::\n\n # Register connections\n register_connection('default', 'mongoenginetest')\n register_connection('testdb-1', 'mongoenginetest2')\n\n class Group(Document):\n name = StringField()\n\n Group(name='test').save() # Saves in the default db\n\n with switch_db(Group, 'testdb-1') as Group:\n Group(name='hello testdb!').save() # Saves in testdb-1\n \"\"\"\n\n def __init__(self, cls, db_alias):\n \"\"\"Construct the switch_db context manager\n\n :param cls: the class to change the registered db\n :param db_alias: the name of the specific database to use\n \"\"\"\n self.cls = cls\n self.collection = cls._get_collection()\n self.db_alias = db_alias\n self.ori_db_alias = cls._meta.get('db_alias', DEFAULT_CONNECTION_NAME)\n\n def __enter__(self):\n \"\"\"Change the db_alias and clear the cached collection.\"\"\"\n self.cls._meta['db_alias'] = self.db_alias\n self.cls._collection = None\n return self.cls\n\n def __exit__(self, t, value, traceback):\n \"\"\"Reset the db_alias and collection.\"\"\"\n self.cls._meta['db_alias'] = self.ori_db_alias\n self.cls._collection = self.collection\n\n\nclass switch_collection(object):\n \"\"\"switch_collection alias context manager.\n\n Example ::\n\n class Group(Document):\n name = StringField()\n\n Group(name='test').save() # Saves in the default db\n\n with switch_collection(Group, 'group1') as Group:\n Group(name='hello testdb!').save() # Saves in group1 collection\n \"\"\"\n\n def __init__(self, cls, collection_name):\n \"\"\"Construct the switch_collection context manager.\n\n :param cls: the class to change the registered db\n :param collection_name: the name of the collection to use\n \"\"\"\n self.cls = cls\n self.ori_collection = cls._get_collection()\n self.ori_get_collection_name = cls._get_collection_name\n self.collection_name = collection_name\n\n def __enter__(self):\n \"\"\"Change the _get_collection_name and clear the cached collection.\"\"\"\n\n @classmethod\n def _get_collection_name(cls):\n return self.collection_name\n\n self.cls._get_collection_name = _get_collection_name\n self.cls._collection = None\n return self.cls\n\n def __exit__(self, t, value, traceback):\n \"\"\"Reset the collection.\"\"\"\n self.cls._collection = self.ori_collection\n self.cls._get_collection_name = self.ori_get_collection_name\n\n\nclass no_dereference(object):\n \"\"\"no_dereference context manager.\n\n Turns off all dereferencing in Documents for the duration of the context\n manager::\n\n with no_dereference(Group) as Group:\n Group.objects.find()\n \"\"\"\n\n def __init__(self, cls):\n \"\"\"Construct the no_dereference context manager.\n\n :param cls: the class to turn dereferencing off on\n \"\"\"\n self.cls = cls\n\n ReferenceField = _import_class('ReferenceField')\n 
GenericReferenceField = _import_class('GenericReferenceField')\n ComplexBaseField = _import_class('ComplexBaseField')\n\n self.deref_fields = [k for k, v in self.cls._fields.iteritems()\n if isinstance(v, (ReferenceField,\n GenericReferenceField,\n ComplexBaseField))]\n\n def __enter__(self):\n \"\"\"Change the objects default and _auto_dereference values.\"\"\"\n for field in self.deref_fields:\n self.cls._fields[field]._auto_dereference = False\n return self.cls\n\n def __exit__(self, t, value, traceback):\n \"\"\"Reset the default and _auto_dereference values.\"\"\"\n for field in self.deref_fields:\n self.cls._fields[field]._auto_dereference = True\n return self.cls\n\n\nclass no_sub_classes(object):\n \"\"\"no_sub_classes context manager.\n\n Only returns instances of this class and no sub (inherited) classes::\n\n with no_sub_classes(Group) as Group:\n Group.objects.find()\n \"\"\"\n\n def __init__(self, cls):\n \"\"\"Construct the no_sub_classes context manager.\n\n :param cls: the class to turn querying sub classes on\n \"\"\"\n self.cls = cls\n\n def __enter__(self):\n \"\"\"Change the objects default and _auto_dereference values.\"\"\"\n self.cls._all_subclasses = self.cls._subclasses\n self.cls._subclasses = (self.cls,)\n return self.cls\n\n def __exit__(self, t, value, traceback):\n \"\"\"Reset the default and _auto_dereference values.\"\"\"\n self.cls._subclasses = self.cls._all_subclasses\n delattr(self.cls, '_all_subclasses')\n return self.cls\n\n\nclass query_counter(object):\n \"\"\"Query_counter context manager to get the number of queries.\"\"\"\n\n def __init__(self):\n \"\"\"Construct the query_counter.\"\"\"\n self.counter = 0\n self.db = get_db()\n\n def __enter__(self):\n \"\"\"On every with block we need to drop the profile collection.\"\"\"\n self.db.set_profiling_level(0)\n self.db.system.profile.drop()\n self.db.set_profiling_level(2)\n return self\n\n def __exit__(self, t, value, traceback):\n \"\"\"Reset the profiling level.\"\"\"\n self.db.set_profiling_level(0)\n\n def __eq__(self, value):\n \"\"\"== Compare querycounter.\"\"\"\n counter = self._get_count()\n return value == counter\n\n def __ne__(self, value):\n \"\"\"!= Compare querycounter.\"\"\"\n return not self.__eq__(value)\n\n def __lt__(self, value):\n \"\"\"< Compare querycounter.\"\"\"\n return self._get_count() < value\n\n def __le__(self, value):\n \"\"\"<= Compare querycounter.\"\"\"\n return self._get_count() <= value\n\n def __gt__(self, value):\n \"\"\"> Compare querycounter.\"\"\"\n return self._get_count() > value\n\n def __ge__(self, value):\n \"\"\">= Compare querycounter.\"\"\"\n return self._get_count() >= value\n\n def __int__(self):\n \"\"\"int representation.\"\"\"\n return self._get_count()\n\n def __repr__(self):\n \"\"\"repr query_counter as the number of queries.\"\"\"\n return u\"%s\" % self._get_count()\n\n def _get_count(self):\n \"\"\"Get the number of queries.\"\"\"\n ignore_query = {'ns': {'$ne': '%s.system.indexes' % self.db.name}}\n count = self.db.system.profile.find(ignore_query).count() - self.counter\n self.counter += 1 # Account for the query we just fired\n return count\n\n\n@contextmanager\ndef set_write_concern(collection, write_concerns):\n combined_concerns = dict(collection.write_concern.document.items())\n combined_concerns.update(write_concerns)\n yield collection.with_options(write_concern=WriteConcern(**combined_concerns))\n", "path": "mongoengine/context_managers.py"}]}
| 2,554 | 130 |
gh_patches_debug_29501
|
rasdani/github-patches
|
git_diff
|
adap__flower-1347
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
sklearn-logreg-mnist example is outdated
### Describe the bug
The sklearn-logreg-mnist example is outdated and does not work with flower 1.0.0. I will make a pull request to fix this.
### Steps/Code to Reproduce
sh ./run.sh
### Expected Results
The example should run with no errors, both on the client and on the server.
### Actual Results
A number of errors arise when the script runs.
--- END ISSUE ---
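For orientation, a rough sketch of the Flower 1.0 API changes involved (mirroring the patch further down; the dummy client returns empty values just to keep the sketch self-contained):

```python
import flwr as fl


class DummyClient(fl.client.NumPyClient):
    # Flower 1.0: get_parameters now takes a `config` argument
    def get_parameters(self, config):
        return []

    def fit(self, parameters, config):
        return [], 0, {}

    def evaluate(self, parameters, config):
        return 0.0, 0, {}


# Flower 1.0: the server address becomes a keyword argument ...
# fl.client.start_numpy_client(server_address="0.0.0.0:8080", client=DummyClient())

# ... and the round count moves from a plain dict to fl.server.ServerConfig
# fl.server.start_server(
#     server_address="0.0.0.0:8080",
#     config=fl.server.ServerConfig(num_rounds=5),
# )
```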
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/sklearn-logreg-mnist/server.py`
Content:
```
1 import flwr as fl
2 import utils
3 from sklearn.metrics import log_loss
4 from sklearn.linear_model import LogisticRegression
5 from typing import Dict
6
7
8 def fit_round(server_round: int) -> Dict:
9 """Send round number to client."""
10 return {"server_round": server_round}
11
12
13 def get_evaluate_fn(model: LogisticRegression):
14 """Return an evaluation function for server-side evaluation."""
15
16 # Load test data here to avoid the overhead of doing it in `evaluate` itself
17 _, (X_test, y_test) = utils.load_mnist()
18
19 # The `evaluate` function will be called after every round
20 def evaluate(parameters: fl.common.Weights):
21 # Update model with the latest parameters
22 utils.set_model_params(model, parameters)
23 loss = log_loss(y_test, model.predict_proba(X_test))
24 accuracy = model.score(X_test, y_test)
25 return loss, {"accuracy": accuracy}
26
27 return evaluate
28
29
30 # Start Flower server for five rounds of federated learning
31 if __name__ == "__main__":
32 model = LogisticRegression()
33 utils.set_initial_params(model)
34 strategy = fl.server.strategy.FedAvg(
35 min_available_clients=2,
36 evaluate_fn=get_evaluate_fn(model),
37 on_fit_config_fn=fit_round,
38 )
39 fl.server.start_server(
40 server_address="0.0.0.0:8080",
41 strategy=strategy,
42 config={"num_rounds": 5},
43 )
44
```
Path: `examples/sklearn-logreg-mnist/client.py`
Content:
```
1 import warnings
2 import flwr as fl
3 import numpy as np
4
5 from sklearn.linear_model import LogisticRegression
6 from sklearn.metrics import log_loss
7
8 import utils
9
10 if __name__ == "__main__":
11 # Load MNIST dataset from https://www.openml.org/d/554
12 (X_train, y_train), (X_test, y_test) = utils.load_mnist()
13
14 # Split train set into 10 partitions and randomly use one for training.
15 partition_id = np.random.choice(10)
16 (X_train, y_train) = utils.partition(X_train, y_train, 10)[partition_id]
17
18 # Create LogisticRegression Model
19 model = LogisticRegression(
20 penalty="l2",
21 max_iter=1, # local epoch
22 warm_start=True, # prevent refreshing weights when fitting
23 )
24
25 # Setting initial parameters, akin to model.compile for keras models
26 utils.set_initial_params(model)
27
28 # Define Flower client
29 class MnistClient(fl.client.NumPyClient):
30 def get_parameters(self): # type: ignore
31 return utils.get_model_parameters(model)
32
33 def fit(self, parameters, config): # type: ignore
34 utils.set_model_params(model, parameters)
35 # Ignore convergence failure due to low local epochs
36 with warnings.catch_warnings():
37 warnings.simplefilter("ignore")
38 model.fit(X_train, y_train)
39 print(f"Training finished for round {config['server_round']}")
40 return utils.get_model_parameters(model), len(X_train), {}
41
42 def evaluate(self, parameters, config): # type: ignore
43 utils.set_model_params(model, parameters)
44 loss = log_loss(y_test, model.predict_proba(X_test))
45 accuracy = model.score(X_test, y_test)
46 return loss, len(X_test), {"accuracy": accuracy}
47
48 # Start Flower client
49 fl.client.start_numpy_client("0.0.0.0:8080", client=MnistClient())
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/sklearn-logreg-mnist/client.py b/examples/sklearn-logreg-mnist/client.py
--- a/examples/sklearn-logreg-mnist/client.py
+++ b/examples/sklearn-logreg-mnist/client.py
@@ -27,7 +27,7 @@
# Define Flower client
class MnistClient(fl.client.NumPyClient):
- def get_parameters(self): # type: ignore
+ def get_parameters(self, config): # type: ignore
return utils.get_model_parameters(model)
def fit(self, parameters, config): # type: ignore
@@ -46,4 +46,4 @@
return loss, len(X_test), {"accuracy": accuracy}
# Start Flower client
- fl.client.start_numpy_client("0.0.0.0:8080", client=MnistClient())
+ fl.client.start_numpy_client(server_address="0.0.0.0:8080", client=MnistClient())
diff --git a/examples/sklearn-logreg-mnist/server.py b/examples/sklearn-logreg-mnist/server.py
--- a/examples/sklearn-logreg-mnist/server.py
+++ b/examples/sklearn-logreg-mnist/server.py
@@ -17,7 +17,7 @@
_, (X_test, y_test) = utils.load_mnist()
# The `evaluate` function will be called after every round
- def evaluate(parameters: fl.common.Weights):
+ def evaluate(server_round, parameters: fl.common.NDArrays, config):
# Update model with the latest parameters
utils.set_model_params(model, parameters)
loss = log_loss(y_test, model.predict_proba(X_test))
@@ -39,5 +39,5 @@
fl.server.start_server(
server_address="0.0.0.0:8080",
strategy=strategy,
- config={"num_rounds": 5},
+ config=fl.server.ServerConfig(num_rounds=5),
)
|
{"golden_diff": "diff --git a/examples/sklearn-logreg-mnist/client.py b/examples/sklearn-logreg-mnist/client.py\n--- a/examples/sklearn-logreg-mnist/client.py\n+++ b/examples/sklearn-logreg-mnist/client.py\n@@ -27,7 +27,7 @@\n \n # Define Flower client\n class MnistClient(fl.client.NumPyClient):\n- def get_parameters(self): # type: ignore\n+ def get_parameters(self, config): # type: ignore\n return utils.get_model_parameters(model)\n \n def fit(self, parameters, config): # type: ignore\n@@ -46,4 +46,4 @@\n return loss, len(X_test), {\"accuracy\": accuracy}\n \n # Start Flower client\n- fl.client.start_numpy_client(\"0.0.0.0:8080\", client=MnistClient())\n+ fl.client.start_numpy_client(server_address=\"0.0.0.0:8080\", client=MnistClient())\ndiff --git a/examples/sklearn-logreg-mnist/server.py b/examples/sklearn-logreg-mnist/server.py\n--- a/examples/sklearn-logreg-mnist/server.py\n+++ b/examples/sklearn-logreg-mnist/server.py\n@@ -17,7 +17,7 @@\n _, (X_test, y_test) = utils.load_mnist()\n \n # The `evaluate` function will be called after every round\n- def evaluate(parameters: fl.common.Weights):\n+ def evaluate(server_round, parameters: fl.common.NDArrays, config):\n # Update model with the latest parameters\n utils.set_model_params(model, parameters)\n loss = log_loss(y_test, model.predict_proba(X_test))\n@@ -39,5 +39,5 @@\n fl.server.start_server(\n server_address=\"0.0.0.0:8080\",\n strategy=strategy,\n- config={\"num_rounds\": 5},\n+ config=fl.server.ServerConfig(num_rounds=5),\n )\n", "issue": "sklearn-logreg-mnist example is outdated\n### Describe the bug\n\nThe sklearn-logreg-mnist example is outdated and does not work with flower 1.0.0. I will make a pull request to fix this.\n\n### Steps/Code to Reproduce\n\nsh ./run.sh\n\n### Expected Results\n\nThe example should run with no errors, both on the client and on the server.\n\n### Actual Results\n\nNumber of errors arise when the script runs.\n", "before_files": [{"content": "import flwr as fl\nimport utils\nfrom sklearn.metrics import log_loss\nfrom sklearn.linear_model import LogisticRegression\nfrom typing import Dict\n\n\ndef fit_round(server_round: int) -> Dict:\n \"\"\"Send round number to client.\"\"\"\n return {\"server_round\": server_round}\n\n\ndef get_evaluate_fn(model: LogisticRegression):\n \"\"\"Return an evaluation function for server-side evaluation.\"\"\"\n\n # Load test data here to avoid the overhead of doing it in `evaluate` itself\n _, (X_test, y_test) = utils.load_mnist()\n\n # The `evaluate` function will be called after every round\n def evaluate(parameters: fl.common.Weights):\n # Update model with the latest parameters\n utils.set_model_params(model, parameters)\n loss = log_loss(y_test, model.predict_proba(X_test))\n accuracy = model.score(X_test, y_test)\n return loss, {\"accuracy\": accuracy}\n\n return evaluate\n\n\n# Start Flower server for five rounds of federated learning\nif __name__ == \"__main__\":\n model = LogisticRegression()\n utils.set_initial_params(model)\n strategy = fl.server.strategy.FedAvg(\n min_available_clients=2,\n evaluate_fn=get_evaluate_fn(model),\n on_fit_config_fn=fit_round,\n )\n fl.server.start_server(\n server_address=\"0.0.0.0:8080\",\n strategy=strategy,\n config={\"num_rounds\": 5},\n )\n", "path": "examples/sklearn-logreg-mnist/server.py"}, {"content": "import warnings\nimport flwr as fl\nimport numpy as np\n\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import log_loss\n\nimport utils\n\nif __name__ == \"__main__\":\n # Load MNIST dataset 
from https://www.openml.org/d/554\n (X_train, y_train), (X_test, y_test) = utils.load_mnist()\n\n # Split train set into 10 partitions and randomly use one for training.\n partition_id = np.random.choice(10)\n (X_train, y_train) = utils.partition(X_train, y_train, 10)[partition_id]\n\n # Create LogisticRegression Model\n model = LogisticRegression(\n penalty=\"l2\",\n max_iter=1, # local epoch\n warm_start=True, # prevent refreshing weights when fitting\n )\n\n # Setting initial parameters, akin to model.compile for keras models\n utils.set_initial_params(model)\n\n # Define Flower client\n class MnistClient(fl.client.NumPyClient):\n def get_parameters(self): # type: ignore\n return utils.get_model_parameters(model)\n\n def fit(self, parameters, config): # type: ignore\n utils.set_model_params(model, parameters)\n # Ignore convergence failure due to low local epochs\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n model.fit(X_train, y_train)\n print(f\"Training finished for round {config['server_round']}\")\n return utils.get_model_parameters(model), len(X_train), {}\n\n def evaluate(self, parameters, config): # type: ignore\n utils.set_model_params(model, parameters)\n loss = log_loss(y_test, model.predict_proba(X_test))\n accuracy = model.score(X_test, y_test)\n return loss, len(X_test), {\"accuracy\": accuracy}\n\n # Start Flower client\n fl.client.start_numpy_client(\"0.0.0.0:8080\", client=MnistClient())\n", "path": "examples/sklearn-logreg-mnist/client.py"}], "after_files": [{"content": "import flwr as fl\nimport utils\nfrom sklearn.metrics import log_loss\nfrom sklearn.linear_model import LogisticRegression\nfrom typing import Dict\n\n\ndef fit_round(server_round: int) -> Dict:\n \"\"\"Send round number to client.\"\"\"\n return {\"server_round\": server_round}\n\n\ndef get_evaluate_fn(model: LogisticRegression):\n \"\"\"Return an evaluation function for server-side evaluation.\"\"\"\n\n # Load test data here to avoid the overhead of doing it in `evaluate` itself\n _, (X_test, y_test) = utils.load_mnist()\n\n # The `evaluate` function will be called after every round\n def evaluate(server_round, parameters: fl.common.NDArrays, config):\n # Update model with the latest parameters\n utils.set_model_params(model, parameters)\n loss = log_loss(y_test, model.predict_proba(X_test))\n accuracy = model.score(X_test, y_test)\n return loss, {\"accuracy\": accuracy}\n\n return evaluate\n\n\n# Start Flower server for five rounds of federated learning\nif __name__ == \"__main__\":\n model = LogisticRegression()\n utils.set_initial_params(model)\n strategy = fl.server.strategy.FedAvg(\n min_available_clients=2,\n evaluate_fn=get_evaluate_fn(model),\n on_fit_config_fn=fit_round,\n )\n fl.server.start_server(\n server_address=\"0.0.0.0:8080\",\n strategy=strategy,\n config=fl.server.ServerConfig(num_rounds=5),\n )\n", "path": "examples/sklearn-logreg-mnist/server.py"}, {"content": "import warnings\nimport flwr as fl\nimport numpy as np\n\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import log_loss\n\nimport utils\n\nif __name__ == \"__main__\":\n # Load MNIST dataset from https://www.openml.org/d/554\n (X_train, y_train), (X_test, y_test) = utils.load_mnist()\n\n # Split train set into 10 partitions and randomly use one for training.\n partition_id = np.random.choice(10)\n (X_train, y_train) = utils.partition(X_train, y_train, 10)[partition_id]\n\n # Create LogisticRegression Model\n model = LogisticRegression(\n penalty=\"l2\",\n max_iter=1, # 
local epoch\n warm_start=True, # prevent refreshing weights when fitting\n )\n\n # Setting initial parameters, akin to model.compile for keras models\n utils.set_initial_params(model)\n\n # Define Flower client\n class MnistClient(fl.client.NumPyClient):\n def get_parameters(self, config): # type: ignore\n return utils.get_model_parameters(model)\n\n def fit(self, parameters, config): # type: ignore\n utils.set_model_params(model, parameters)\n # Ignore convergence failure due to low local epochs\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n model.fit(X_train, y_train)\n print(f\"Training finished for round {config['server_round']}\")\n return utils.get_model_parameters(model), len(X_train), {}\n\n def evaluate(self, parameters, config): # type: ignore\n utils.set_model_params(model, parameters)\n loss = log_loss(y_test, model.predict_proba(X_test))\n accuracy = model.score(X_test, y_test)\n return loss, len(X_test), {\"accuracy\": accuracy}\n\n # Start Flower client\n fl.client.start_numpy_client(server_address=\"0.0.0.0:8080\", client=MnistClient())\n", "path": "examples/sklearn-logreg-mnist/client.py"}]}
| 1,283 | 441 |
gh_patches_debug_13768
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-4915
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Export button problem in the module "Brainstorming (with map)" on Prod, Stage and Dev
**URL:** https://meinberlin-dev.liqd.net/dashboard/projects/multimodul-test-merkmalkategorie/basic/
**user:** initiator, admin
**expected behaviour:** as a user I want to export all Ideas and Comments in all modules which have Ideas/Proposals
**behaviour:** In the Modules "Brainstorming/Brainstorming with Map" I cannot see the Excel-Export-Button and therefore cannot export the ideas/comments in my project.
**important screensize:** -
**device & browser:** -
**Comment/Question:** Every other module with proposals/ideas has the excel-export-button. There's a workaround: by recreating the right URL, I can get to the Excel-Export page. In this case: https://meinberlin-dev.liqd.net/dashboard/modules/brainstorming-mit-karte-7/export/mapidea/
<img width="311" alt="Bildschirmfoto 2023-02-03 um 10 50 25" src="https://user-images.githubusercontent.com/113608720/216568760-5075d601-eb68-44f1-9209-a3b547d994f9.png">
Screenshot?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/mapideas/dashboard.py`
Content:
```
1 from django.urls import reverse
2 from django.utils.translation import gettext_lazy as _
3
4 from adhocracy4.dashboard import DashboardComponent
5 from adhocracy4.dashboard import components
6
7 from . import exports
8 from . import views
9
10
11 class ExportMapIdeaComponent(DashboardComponent):
12 identifier = "mapidea_export"
13 weight = 50
14 label = _("Export Excel")
15
16 def is_effective(self, module):
17 return (
18 module.blueprint_type == "MIC"
19 and not module.project.is_draft
20 and not module.is_draft
21 )
22
23 def get_progress(self, module):
24 return 0, 0
25
26 def get_base_url(self, module):
27 return reverse(
28 "a4dashboard:mapidea-export-module",
29 kwargs={
30 "module_slug": module.slug,
31 },
32 )
33
34 def get_urls(self):
35 return [
36 (
37 r"^modules/(?P<module_slug>[-\w_]+)/export/mapidea/$",
38 views.MapIdeaDashboardExportView.as_view(component=self),
39 "mapidea-export-module",
40 ),
41 (
42 r"^modules/(?P<module_slug>[-\w_]+)/export/mapidea/ideas/$",
43 exports.MapIdeaExportView.as_view(),
44 "mapidea-export",
45 ),
46 (
47 r"^modules/(?P<module_slug>[-\w_]+)/export/mapidea/comments/$",
48 exports.MapIdeaCommentExportView.as_view(),
49 "mapidea-comment-export",
50 ),
51 ]
52
53
54 components.register_module(ExportMapIdeaComponent())
55
```
Path: `meinberlin/apps/ideas/dashboard.py`
Content:
```
1 from django.urls import reverse
2 from django.utils.translation import gettext_lazy as _
3
4 from adhocracy4.dashboard import DashboardComponent
5 from adhocracy4.dashboard import components
6
7 from . import exports
8 from . import views
9
10
11 class ExportIdeaComponent(DashboardComponent):
12 identifier = "idea_export"
13 weight = 50
14 label = _("Export Excel")
15
16 def is_effective(self, module):
17 return (
18 module.blueprint_type == "IC"
19 and not module.project.is_draft
20 and not module.is_draft
21 )
22
23 def get_progress(self, module):
24 return 0, 0
25
26 def get_base_url(self, module):
27 return reverse(
28 "a4dashboard:idea-export-module",
29 kwargs={
30 "module_slug": module.slug,
31 },
32 )
33
34 def get_urls(self):
35 return [
36 (
37 r"^modules/(?P<module_slug>[-\w_]+)/export/idea/$",
38 views.IdeaDashboardExportView.as_view(component=self),
39 "idea-export-module",
40 ),
41 (
42 r"^modules/(?P<module_slug>[-\w_]+)/export/idea/ideas/$",
43 exports.IdeaExportView.as_view(),
44 "idea-export",
45 ),
46 (
47 r"^modules/(?P<module_slug>[-\w_]+)/export/idea/comments/$",
48 exports.IdeaCommentExportView.as_view(),
49 "idea-comment-export",
50 ),
51 ]
52
53
54 components.register_module(ExportIdeaComponent())
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/meinberlin/apps/ideas/dashboard.py b/meinberlin/apps/ideas/dashboard.py
--- a/meinberlin/apps/ideas/dashboard.py
+++ b/meinberlin/apps/ideas/dashboard.py
@@ -15,7 +15,7 @@
def is_effective(self, module):
return (
- module.blueprint_type == "IC"
+ module.blueprint_type in ["IC", "BS"]
and not module.project.is_draft
and not module.is_draft
)
diff --git a/meinberlin/apps/mapideas/dashboard.py b/meinberlin/apps/mapideas/dashboard.py
--- a/meinberlin/apps/mapideas/dashboard.py
+++ b/meinberlin/apps/mapideas/dashboard.py
@@ -15,7 +15,7 @@
def is_effective(self, module):
return (
- module.blueprint_type == "MIC"
+ module.blueprint_type in ["MIC", "MBS"]
and not module.project.is_draft
and not module.is_draft
)
|
{"golden_diff": "diff --git a/meinberlin/apps/ideas/dashboard.py b/meinberlin/apps/ideas/dashboard.py\n--- a/meinberlin/apps/ideas/dashboard.py\n+++ b/meinberlin/apps/ideas/dashboard.py\n@@ -15,7 +15,7 @@\n \n def is_effective(self, module):\n return (\n- module.blueprint_type == \"IC\"\n+ module.blueprint_type in [\"IC\", \"BS\"]\n and not module.project.is_draft\n and not module.is_draft\n )\ndiff --git a/meinberlin/apps/mapideas/dashboard.py b/meinberlin/apps/mapideas/dashboard.py\n--- a/meinberlin/apps/mapideas/dashboard.py\n+++ b/meinberlin/apps/mapideas/dashboard.py\n@@ -15,7 +15,7 @@\n \n def is_effective(self, module):\n return (\n- module.blueprint_type == \"MIC\"\n+ module.blueprint_type in [\"MIC\", \"MBS\"]\n and not module.project.is_draft\n and not module.is_draft\n )\n", "issue": "Export-Button Problem in Modul \"Brainstorming (with map)\" on Prod, Stage and Dev\n**URL:** https://meinberlin-dev.liqd.net/dashboard/projects/multimodul-test-merkmalkategorie/basic/\r\n**user:** initiator, admin\r\n**expected behaviour:** as I user I want to export all Ideas and Comments in all moduls which have Ideas/Proposals\r\n**behaviour:** In the Modules \"Brainstorming/Brainstorming with Map\" I cannot see the Excel-Export-Button and therefore not export the ideas/comments in my project. \r\n**important screensize:** - \r\n**device & browser:** - \r\n**Comment/Question:** Every other modul with proposals/ideas has the excel-export-button. There's a workaround when recreating the right URL, I can get to the Excel-Export page. In this case: https://meinberlin-dev.liqd.net/dashboard/modules/brainstorming-mit-karte-7/export/mapidea/\r\n<img width=\"311\" alt=\"Bildschirm\u00adfoto 2023-02-03 um 10 50 25\" src=\"https://user-images.githubusercontent.com/113608720/216568760-5075d601-eb68-44f1-9209-a3b547d994f9.png\">\r\n\r\n\r\nScreenshot?\r\n\n", "before_files": [{"content": "from django.urls import reverse\nfrom django.utils.translation import gettext_lazy as _\n\nfrom adhocracy4.dashboard import DashboardComponent\nfrom adhocracy4.dashboard import components\n\nfrom . import exports\nfrom . import views\n\n\nclass ExportMapIdeaComponent(DashboardComponent):\n identifier = \"mapidea_export\"\n weight = 50\n label = _(\"Export Excel\")\n\n def is_effective(self, module):\n return (\n module.blueprint_type == \"MIC\"\n and not module.project.is_draft\n and not module.is_draft\n )\n\n def get_progress(self, module):\n return 0, 0\n\n def get_base_url(self, module):\n return reverse(\n \"a4dashboard:mapidea-export-module\",\n kwargs={\n \"module_slug\": module.slug,\n },\n )\n\n def get_urls(self):\n return [\n (\n r\"^modules/(?P<module_slug>[-\\w_]+)/export/mapidea/$\",\n views.MapIdeaDashboardExportView.as_view(component=self),\n \"mapidea-export-module\",\n ),\n (\n r\"^modules/(?P<module_slug>[-\\w_]+)/export/mapidea/ideas/$\",\n exports.MapIdeaExportView.as_view(),\n \"mapidea-export\",\n ),\n (\n r\"^modules/(?P<module_slug>[-\\w_]+)/export/mapidea/comments/$\",\n exports.MapIdeaCommentExportView.as_view(),\n \"mapidea-comment-export\",\n ),\n ]\n\n\ncomponents.register_module(ExportMapIdeaComponent())\n", "path": "meinberlin/apps/mapideas/dashboard.py"}, {"content": "from django.urls import reverse\nfrom django.utils.translation import gettext_lazy as _\n\nfrom adhocracy4.dashboard import DashboardComponent\nfrom adhocracy4.dashboard import components\n\nfrom . import exports\nfrom . 
import views\n\n\nclass ExportIdeaComponent(DashboardComponent):\n identifier = \"idea_export\"\n weight = 50\n label = _(\"Export Excel\")\n\n def is_effective(self, module):\n return (\n module.blueprint_type == \"IC\"\n and not module.project.is_draft\n and not module.is_draft\n )\n\n def get_progress(self, module):\n return 0, 0\n\n def get_base_url(self, module):\n return reverse(\n \"a4dashboard:idea-export-module\",\n kwargs={\n \"module_slug\": module.slug,\n },\n )\n\n def get_urls(self):\n return [\n (\n r\"^modules/(?P<module_slug>[-\\w_]+)/export/idea/$\",\n views.IdeaDashboardExportView.as_view(component=self),\n \"idea-export-module\",\n ),\n (\n r\"^modules/(?P<module_slug>[-\\w_]+)/export/idea/ideas/$\",\n exports.IdeaExportView.as_view(),\n \"idea-export\",\n ),\n (\n r\"^modules/(?P<module_slug>[-\\w_]+)/export/idea/comments/$\",\n exports.IdeaCommentExportView.as_view(),\n \"idea-comment-export\",\n ),\n ]\n\n\ncomponents.register_module(ExportIdeaComponent())\n", "path": "meinberlin/apps/ideas/dashboard.py"}], "after_files": [{"content": "from django.urls import reverse\nfrom django.utils.translation import gettext_lazy as _\n\nfrom adhocracy4.dashboard import DashboardComponent\nfrom adhocracy4.dashboard import components\n\nfrom . import exports\nfrom . import views\n\n\nclass ExportMapIdeaComponent(DashboardComponent):\n identifier = \"mapidea_export\"\n weight = 50\n label = _(\"Export Excel\")\n\n def is_effective(self, module):\n return (\n module.blueprint_type in [\"MIC\", \"MBS\"]\n and not module.project.is_draft\n and not module.is_draft\n )\n\n def get_progress(self, module):\n return 0, 0\n\n def get_base_url(self, module):\n return reverse(\n \"a4dashboard:mapidea-export-module\",\n kwargs={\n \"module_slug\": module.slug,\n },\n )\n\n def get_urls(self):\n return [\n (\n r\"^modules/(?P<module_slug>[-\\w_]+)/export/mapidea/$\",\n views.MapIdeaDashboardExportView.as_view(component=self),\n \"mapidea-export-module\",\n ),\n (\n r\"^modules/(?P<module_slug>[-\\w_]+)/export/mapidea/ideas/$\",\n exports.MapIdeaExportView.as_view(),\n \"mapidea-export\",\n ),\n (\n r\"^modules/(?P<module_slug>[-\\w_]+)/export/mapidea/comments/$\",\n exports.MapIdeaCommentExportView.as_view(),\n \"mapidea-comment-export\",\n ),\n ]\n\n\ncomponents.register_module(ExportMapIdeaComponent())\n", "path": "meinberlin/apps/mapideas/dashboard.py"}, {"content": "from django.urls import reverse\nfrom django.utils.translation import gettext_lazy as _\n\nfrom adhocracy4.dashboard import DashboardComponent\nfrom adhocracy4.dashboard import components\n\nfrom . import exports\nfrom . 
import views\n\n\nclass ExportIdeaComponent(DashboardComponent):\n identifier = \"idea_export\"\n weight = 50\n label = _(\"Export Excel\")\n\n def is_effective(self, module):\n return (\n module.blueprint_type in [\"IC\", \"BS\"]\n and not module.project.is_draft\n and not module.is_draft\n )\n\n def get_progress(self, module):\n return 0, 0\n\n def get_base_url(self, module):\n return reverse(\n \"a4dashboard:idea-export-module\",\n kwargs={\n \"module_slug\": module.slug,\n },\n )\n\n def get_urls(self):\n return [\n (\n r\"^modules/(?P<module_slug>[-\\w_]+)/export/idea/$\",\n views.IdeaDashboardExportView.as_view(component=self),\n \"idea-export-module\",\n ),\n (\n r\"^modules/(?P<module_slug>[-\\w_]+)/export/idea/ideas/$\",\n exports.IdeaExportView.as_view(),\n \"idea-export\",\n ),\n (\n r\"^modules/(?P<module_slug>[-\\w_]+)/export/idea/comments/$\",\n exports.IdeaCommentExportView.as_view(),\n \"idea-comment-export\",\n ),\n ]\n\n\ncomponents.register_module(ExportIdeaComponent())\n", "path": "meinberlin/apps/ideas/dashboard.py"}]}
| 1,446 | 227 |
gh_patches_debug_9528
|
rasdani/github-patches
|
git_diff
|
ivy-llc__ivy-23319
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
empty_strided
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/torch/creation_ops.py`
Content:
```
1 # local
2 import ivy
3 from ivy.functional.frontends.torch.func_wrapper import (
4 to_ivy_arrays_and_back,
5 to_ivy_shape,
6 )
7 from ivy.func_wrapper import with_unsupported_dtypes
8 import ivy.functional.frontends.torch as torch_frontend
9
10
11 @to_ivy_arrays_and_back
12 @with_unsupported_dtypes({"2.0.1 and below": ("float16",)}, "torch")
13 def arange(
14 start=0,
15 end=None,
16 step=1,
17 *,
18 out=None,
19 dtype=None,
20 layout=None,
21 device=None,
22 requires_grad=False,
23 ):
24 return ivy.arange(start, end, step, dtype=dtype, device=device, out=out)
25
26
27 @to_ivy_arrays_and_back
28 def as_strided(input, size, stride, storage_offset=None):
29 ind = ivy.array([0], dtype=ivy.int64)
30 for i, (size_i, stride_i) in enumerate(zip(size, stride)):
31 r_size = [1] * len(stride)
32 r_size[i] = -1
33 ind = ind + ivy.reshape(ivy.arange(size_i), r_size) * stride_i
34 if storage_offset:
35 ind = ind + storage_offset
36 # in case the input is a non-contiguous native array,
37 # the return will differ from torch.as_strided
38 if ivy.is_ivy_array(input) and input.base is not None:
39 return ivy.gather(ivy.flatten(input.base), ind)
40 return ivy.gather(ivy.flatten(input), ind)
41
42
43 @to_ivy_arrays_and_back
44 def as_tensor(
45 data,
46 *,
47 dtype=None,
48 device=None,
49 ):
50 if dtype is None:
51 if isinstance(data, int):
52 dtype = ivy.int64
53 elif isinstance(data, float):
54 dtype = torch_frontend.get_default_dtype()
55 elif isinstance(data, (list, tuple)):
56 if all(isinstance(d, int) for d in data):
57 dtype = ivy.int64
58 else:
59 dtype = torch_frontend.get_default_dtype()
60 return ivy.asarray(data, dtype=dtype, device=device)
61
62
63 @to_ivy_arrays_and_back
64 def asarray(
65 obj,
66 *,
67 dtype=None,
68 device=None,
69 copy=None,
70 ):
71 return ivy.asarray(obj, copy=copy, dtype=dtype, device=device)
72
73
74 @to_ivy_arrays_and_back
75 def empty(
76 *args,
77 size=None,
78 out=None,
79 dtype=None,
80 layout=None,
81 device=None,
82 requires_grad=False,
83 pin_memory=False,
84 memory_format=None,
85 ):
86 if args and size:
87 raise TypeError("empty() got multiple values for argument 'shape'")
88 if size is None:
89 size = args[0] if isinstance(args[0], (tuple, list, ivy.Shape)) else args
90 return ivy.empty(shape=size, dtype=dtype, device=device, out=out)
91
92
93 @to_ivy_arrays_and_back
94 def empty_like(
95 input,
96 *,
97 dtype=None,
98 layout=None,
99 device=None,
100 requires_grad=False,
101 memory_format=None,
102 ):
103 ret = ivy.empty_like(input, dtype=dtype, device=device)
104 return ret
105
106
107 @to_ivy_arrays_and_back
108 def eye(
109 n, m=None, *, out=None, dtype=None, layout=None, device=None, requires_grad=False
110 ):
111 return ivy.eye(n, m, dtype=dtype, device=device, out=out)
112
113
114 @to_ivy_arrays_and_back
115 def from_dlpack(ext_tensor):
116 return ivy.from_dlpack(ext_tensor)
117
118
119 @to_ivy_arrays_and_back
120 def from_numpy(data, /):
121 return ivy.asarray(data, dtype=ivy.dtype(data))
122
123
124 @to_ivy_arrays_and_back
125 def frombuffer(
126 buffer,
127 *,
128 dtype,
129 count=-1,
130 offset=0,
131 requires_grad=False,
132 ):
133 return ivy.frombuffer(buffer, dtype=dtype, count=count, offset=offset)
134
135
136 @to_ivy_arrays_and_back
137 def full(
138 size,
139 fill_value,
140 *,
141 out=None,
142 dtype=None,
143 layout=None,
144 device=None,
145 requires_grad=None,
146 ):
147 ret = ivy.full(size, fill_value, dtype=dtype, device=device, out=out)
148 return ret
149
150
151 @to_ivy_arrays_and_back
152 def full_like(
153 input,
154 fill_value,
155 *,
156 dtype=None,
157 layout=None,
158 device=None,
159 requires_grad=False,
160 memory_format=None,
161 ):
162 fill_value = ivy.to_scalar(fill_value)
163 return ivy.full_like(input, fill_value, dtype=dtype, device=device)
164
165
166 @to_ivy_arrays_and_back
167 def heaviside(input, values, *, out=None):
168 return ivy.heaviside(input, values, out=out)
169
170
171 @to_ivy_arrays_and_back
172 @with_unsupported_dtypes({"2.0.1 and below": ("float16",)}, "torch")
173 def linspace(
174 start,
175 end,
176 steps,
177 *,
178 out=None,
179 dtype=None,
180 device=None,
181 layout=None,
182 requires_grad=False,
183 ):
184 ret = ivy.linspace(start, end, num=steps, dtype=dtype, device=device, out=out)
185 return ret
186
187
188 @to_ivy_arrays_and_back
189 @with_unsupported_dtypes({"2.0.1 and below": ("float16",)}, "torch")
190 def logspace(
191 start,
192 end,
193 steps,
194 *,
195 base=10.0,
196 out=None,
197 dtype=None,
198 layout=None,
199 device=None,
200 requires_grad=False,
201 ):
202 ret = ivy.logspace(
203 start, end, num=steps, base=base, dtype=dtype, device=device, out=out
204 )
205 return ret
206
207
208 @to_ivy_shape
209 @to_ivy_arrays_and_back
210 def ones(*args, size=None, out=None, dtype=None, device=None, requires_grad=False):
211 if args and size:
212 raise TypeError("ones() got multiple values for argument 'shape'")
213 if size is None:
214 size = args[0] if isinstance(args[0], (tuple, list, ivy.Shape)) else args
215 return ivy.ones(shape=size, dtype=dtype, device=device, out=out)
216
217
218 @to_ivy_arrays_and_back
219 def ones_like_v_0p3p0_to_0p3p1(input, out=None):
220 return ivy.ones_like(input, out=None)
221
222
223 @to_ivy_arrays_and_back
224 def ones_like_v_0p4p0_and_above(
225 input,
226 *,
227 dtype=None,
228 layout=None,
229 device=None,
230 requires_grad=False,
231 memory_format=None,
232 ):
233 ret = ivy.ones_like(input, dtype=dtype, device=device)
234 return ret
235
236
237 @to_ivy_arrays_and_back
238 @with_unsupported_dtypes({"2.0.1 and below": ("float16",)}, "torch")
239 def range(
240 *args,
241 dtype=None,
242 layout=None,
243 device=None,
244 requires_grad=False,
245 ):
246 if len(args) == 1:
247 end = args[0]
248 start = 0
249 step = 1
250 elif len(args) == 2:
251 end = args[1]
252 start = args[0]
253 step = 1
254 elif len(args) == 3:
255 start, end, step = args
256 else:
257 ivy.utils.assertions.check_true(
258 len(args) == 1 or len(args) == 3,
259 "only 1 or 3 positional arguments are supported",
260 )
261 range_vec = []
262 elem = start
263 while 1:
264 range_vec = range_vec + [elem]
265 elem += step
266 if start == end:
267 break
268 if start < end:
269 if elem > end:
270 break
271 else:
272 if elem < end:
273 break
274 return ivy.array(range_vec, dtype=dtype, device=device)
275
276
277 @to_ivy_arrays_and_back
278 def tensor(
279 data,
280 *,
281 dtype=None,
282 device=None,
283 requires_grad=False,
284 pin_memory=False,
285 ):
286 return ivy.array(data, dtype=dtype, device=device)
287
288
289 @to_ivy_shape
290 @to_ivy_arrays_and_back
291 def zeros(*args, size=None, out=None, dtype=None, device=None, requires_grad=False):
292 if args and size:
293 raise TypeError("zeros() got multiple values for argument 'shape'")
294 if size is None:
295 size = args[0] if isinstance(args[0], (tuple, list, ivy.Shape)) else args
296 return ivy.zeros(shape=size, dtype=dtype, device=device, out=out)
297
298
299 @to_ivy_arrays_and_back
300 def zeros_like(
301 input,
302 *,
303 dtype=None,
304 layout=None,
305 device=None,
306 requires_grad=False,
307 memory_format=None,
308 ):
309 ret = ivy.zeros_like(input, dtype=dtype, device=device)
310 return ret
311
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ivy/functional/frontends/torch/creation_ops.py b/ivy/functional/frontends/torch/creation_ops.py
--- a/ivy/functional/frontends/torch/creation_ops.py
+++ b/ivy/functional/frontends/torch/creation_ops.py
@@ -93,6 +93,24 @@
return ret
+@to_ivy_arrays_and_back
+def empty_strided(
+ size,
+ stride,
+ *,
+ dtype=None,
+ layout=None,
+ device=None,
+ requires_grad=False,
+ pin_memory=False,
+):
+ max_offsets = [(s - 1) * st for s, st in zip(size, stride)]
+ items = sum(max_offsets) + 1
+ empty_array = empty(items, dtype=dtype, device=device)
+ strided_array = as_strided(empty_array, size, stride)
+ return strided_array
+
+
@to_ivy_arrays_and_back
def eye(
n, m=None, *, out=None, dtype=None, layout=None, device=None, requires_grad=False
|
{"golden_diff": "diff --git a/ivy/functional/frontends/torch/creation_ops.py b/ivy/functional/frontends/torch/creation_ops.py\n--- a/ivy/functional/frontends/torch/creation_ops.py\n+++ b/ivy/functional/frontends/torch/creation_ops.py\n@@ -93,6 +93,24 @@\n return ret\n \n \n+@to_ivy_arrays_and_back\n+def empty_strided(\n+ size,\n+ stride,\n+ *,\n+ dtype=None,\n+ layout=None,\n+ device=None,\n+ requires_grad=False,\n+ pin_memory=False,\n+):\n+ max_offsets = [(s - 1) * st for s, st in zip(size, stride)]\n+ items = sum(max_offsets) + 1\n+ empty_array = empty(items, dtype=dtype, device=device)\n+ strided_array = as_strided(empty_array, size, stride)\n+ return strided_array\n+\n+\n @to_ivy_arrays_and_back\n def eye(\n n, m=None, *, out=None, dtype=None, layout=None, device=None, requires_grad=False\n", "issue": "empty_strided\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.torch.func_wrapper import (\n to_ivy_arrays_and_back,\n to_ivy_shape,\n)\nfrom ivy.func_wrapper import with_unsupported_dtypes\nimport ivy.functional.frontends.torch as torch_frontend\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"2.0.1 and below\": (\"float16\",)}, \"torch\")\ndef arange(\n start=0,\n end=None,\n step=1,\n *,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n):\n return ivy.arange(start, end, step, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef as_strided(input, size, stride, storage_offset=None):\n ind = ivy.array([0], dtype=ivy.int64)\n for i, (size_i, stride_i) in enumerate(zip(size, stride)):\n r_size = [1] * len(stride)\n r_size[i] = -1\n ind = ind + ivy.reshape(ivy.arange(size_i), r_size) * stride_i\n if storage_offset:\n ind = ind + storage_offset\n # in case the input is a non-contiguous native array,\n # the return will differ from torch.as_strided\n if ivy.is_ivy_array(input) and input.base is not None:\n return ivy.gather(ivy.flatten(input.base), ind)\n return ivy.gather(ivy.flatten(input), ind)\n\n\n@to_ivy_arrays_and_back\ndef as_tensor(\n data,\n *,\n dtype=None,\n device=None,\n):\n if dtype is None:\n if isinstance(data, int):\n dtype = ivy.int64\n elif isinstance(data, float):\n dtype = torch_frontend.get_default_dtype()\n elif isinstance(data, (list, tuple)):\n if all(isinstance(d, int) for d in data):\n dtype = ivy.int64\n else:\n dtype = torch_frontend.get_default_dtype()\n return ivy.asarray(data, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef asarray(\n obj,\n *,\n dtype=None,\n device=None,\n copy=None,\n):\n return ivy.asarray(obj, copy=copy, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef empty(\n *args,\n size=None,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n pin_memory=False,\n memory_format=None,\n):\n if args and size:\n raise TypeError(\"empty() got multiple values for argument 'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list, ivy.Shape)) else args\n return ivy.empty(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef empty_like(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n ret = ivy.empty_like(input, dtype=dtype, device=device)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef eye(\n n, m=None, *, out=None, dtype=None, layout=None, device=None, requires_grad=False\n):\n return ivy.eye(n, m, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef from_dlpack(ext_tensor):\n 
return ivy.from_dlpack(ext_tensor)\n\n\n@to_ivy_arrays_and_back\ndef from_numpy(data, /):\n return ivy.asarray(data, dtype=ivy.dtype(data))\n\n\n@to_ivy_arrays_and_back\ndef frombuffer(\n buffer,\n *,\n dtype,\n count=-1,\n offset=0,\n requires_grad=False,\n):\n return ivy.frombuffer(buffer, dtype=dtype, count=count, offset=offset)\n\n\n@to_ivy_arrays_and_back\ndef full(\n size,\n fill_value,\n *,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=None,\n):\n ret = ivy.full(size, fill_value, dtype=dtype, device=device, out=out)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef full_like(\n input,\n fill_value,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n fill_value = ivy.to_scalar(fill_value)\n return ivy.full_like(input, fill_value, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef heaviside(input, values, *, out=None):\n return ivy.heaviside(input, values, out=out)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"2.0.1 and below\": (\"float16\",)}, \"torch\")\ndef linspace(\n start,\n end,\n steps,\n *,\n out=None,\n dtype=None,\n device=None,\n layout=None,\n requires_grad=False,\n):\n ret = ivy.linspace(start, end, num=steps, dtype=dtype, device=device, out=out)\n return ret\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"2.0.1 and below\": (\"float16\",)}, \"torch\")\ndef logspace(\n start,\n end,\n steps,\n *,\n base=10.0,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n):\n ret = ivy.logspace(\n start, end, num=steps, base=base, dtype=dtype, device=device, out=out\n )\n return ret\n\n\n@to_ivy_shape\n@to_ivy_arrays_and_back\ndef ones(*args, size=None, out=None, dtype=None, device=None, requires_grad=False):\n if args and size:\n raise TypeError(\"ones() got multiple values for argument 'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list, ivy.Shape)) else args\n return ivy.ones(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef ones_like_v_0p3p0_to_0p3p1(input, out=None):\n return ivy.ones_like(input, out=None)\n\n\n@to_ivy_arrays_and_back\ndef ones_like_v_0p4p0_and_above(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n ret = ivy.ones_like(input, dtype=dtype, device=device)\n return ret\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"2.0.1 and below\": (\"float16\",)}, \"torch\")\ndef range(\n *args,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n):\n if len(args) == 1:\n end = args[0]\n start = 0\n step = 1\n elif len(args) == 2:\n end = args[1]\n start = args[0]\n step = 1\n elif len(args) == 3:\n start, end, step = args\n else:\n ivy.utils.assertions.check_true(\n len(args) == 1 or len(args) == 3,\n \"only 1 or 3 positional arguments are supported\",\n )\n range_vec = []\n elem = start\n while 1:\n range_vec = range_vec + [elem]\n elem += step\n if start == end:\n break\n if start < end:\n if elem > end:\n break\n else:\n if elem < end:\n break\n return ivy.array(range_vec, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef tensor(\n data,\n *,\n dtype=None,\n device=None,\n requires_grad=False,\n pin_memory=False,\n):\n return ivy.array(data, dtype=dtype, device=device)\n\n\n@to_ivy_shape\n@to_ivy_arrays_and_back\ndef zeros(*args, size=None, out=None, dtype=None, device=None, requires_grad=False):\n if args and size:\n raise TypeError(\"zeros() got multiple values for argument 
'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list, ivy.Shape)) else args\n return ivy.zeros(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef zeros_like(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n ret = ivy.zeros_like(input, dtype=dtype, device=device)\n return ret\n", "path": "ivy/functional/frontends/torch/creation_ops.py"}], "after_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.torch.func_wrapper import (\n to_ivy_arrays_and_back,\n to_ivy_shape,\n)\nfrom ivy.func_wrapper import with_unsupported_dtypes\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"2.0.1 and below\": (\"float16\",)}, \"torch\")\ndef arange(\n start=0,\n end=None,\n step=1,\n *,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n):\n return ivy.arange(start, end, step, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef as_strided(input, size, stride, storage_offset=None):\n ind = ivy.array([0], dtype=ivy.int64)\n for i, (size_i, stride_i) in enumerate(zip(size, stride)):\n r_size = [1] * len(stride)\n r_size[i] = -1\n ind = ind + ivy.reshape(ivy.arange(size_i), r_size) * stride_i\n if storage_offset:\n ind = ind + storage_offset\n # in case the input is a non-contiguous native array,\n # the return will differ from torch.as_strided\n if ivy.is_ivy_array(input) and input.base is not None:\n return ivy.gather(ivy.flatten(input.base), ind)\n return ivy.gather(ivy.flatten(input), ind)\n\n\n@to_ivy_arrays_and_back\ndef as_tensor(\n data,\n *,\n dtype=None,\n device=None,\n):\n return ivy.asarray(data, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef asarray(\n obj,\n *,\n dtype=None,\n device=None,\n copy=None,\n):\n return ivy.asarray(obj, copy=copy, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef empty(\n *args,\n size=None,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n pin_memory=False,\n memory_format=None,\n):\n if args and size:\n raise TypeError(\"empty() got multiple values for argument 'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list, ivy.Shape)) else args\n return ivy.empty(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef empty_like(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n ret = ivy.empty_like(input, dtype=dtype, device=device)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef empty_strided(\n size,\n stride,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n pin_memory=False,\n):\n max_offsets = [(s - 1) * st for s, st in zip(size, stride)]\n items = sum(max_offsets) + 1\n empty_array = empty(items, dtype=dtype, device=device)\n strided_array = as_strided(empty_array, size, stride)\n return strided_array\n\n\n@to_ivy_arrays_and_back\ndef eye(\n n, m=None, *, out=None, dtype=None, layout=None, device=None, requires_grad=False\n):\n return ivy.eye(n, m, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef from_dlpack(ext_tensor):\n return ivy.from_dlpack(ext_tensor)\n\n\n@to_ivy_arrays_and_back\ndef from_numpy(data, /):\n return ivy.asarray(data, dtype=ivy.dtype(data))\n\n\n@to_ivy_arrays_and_back\ndef frombuffer(\n buffer,\n *,\n dtype,\n count=-1,\n offset=0,\n requires_grad=False,\n):\n return ivy.frombuffer(buffer, dtype=dtype, count=count, 
offset=offset)\n\n\n@to_ivy_arrays_and_back\ndef full(\n size,\n fill_value,\n *,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=None,\n):\n ret = ivy.full(size, fill_value, dtype=dtype, device=device, out=out)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef full_like(\n input,\n fill_value,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n fill_value = ivy.to_scalar(fill_value)\n return ivy.full_like(input, fill_value, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef heaviside(input, values, *, out=None):\n return ivy.heaviside(input, values, out=out)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"2.0.1 and below\": (\"float16\",)}, \"torch\")\ndef linspace(\n start,\n end,\n steps,\n *,\n out=None,\n dtype=None,\n device=None,\n layout=None,\n requires_grad=False,\n):\n ret = ivy.linspace(start, end, num=steps, dtype=dtype, device=device, out=out)\n return ret\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"2.0.1 and below\": (\"float16\",)}, \"torch\")\ndef logspace(\n start,\n end,\n steps,\n *,\n base=10.0,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n):\n ret = ivy.logspace(\n start, end, num=steps, base=base, dtype=dtype, device=device, out=out\n )\n return ret\n\n\n@to_ivy_shape\n@to_ivy_arrays_and_back\ndef ones(*args, size=None, out=None, dtype=None, device=None, requires_grad=False):\n if args and size:\n raise TypeError(\"ones() got multiple values for argument 'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list, ivy.Shape)) else args\n return ivy.ones(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef ones_like_v_0p3p0_to_0p3p1(input, out=None):\n return ivy.ones_like(input, out=None)\n\n\n@to_ivy_arrays_and_back\ndef ones_like_v_0p4p0_and_above(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n ret = ivy.ones_like(input, dtype=dtype, device=device)\n return ret\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"2.0.1 and below\": (\"float16\",)}, \"torch\")\ndef range(\n *args,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n):\n if len(args) == 1:\n end = args[0]\n start = 0\n step = 1\n elif len(args) == 2:\n end = args[1]\n start = args[0]\n step = 1\n elif len(args) == 3:\n start, end, step = args\n else:\n ivy.utils.assertions.check_true(\n len(args) == 1 or len(args) == 3,\n \"only 1 or 3 positional arguments are supported\",\n )\n range_vec = []\n elem = start\n while 1:\n range_vec = range_vec + [elem]\n elem += step\n if start == end:\n break\n if start < end:\n if elem > end:\n break\n else:\n if elem < end:\n break\n return ivy.array(range_vec, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef tensor(\n data,\n *,\n dtype=None,\n device=None,\n requires_grad=False,\n pin_memory=False,\n):\n return ivy.array(data, dtype=dtype, device=device)\n\n\n@to_ivy_shape\n@to_ivy_arrays_and_back\ndef zeros(*args, size=None, out=None, dtype=None, device=None, requires_grad=False):\n if args and size:\n raise TypeError(\"zeros() got multiple values for argument 'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list, ivy.Shape)) else args\n return ivy.zeros(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef zeros_like(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n 
memory_format=None,\n):\n ret = ivy.zeros_like(input, dtype=dtype, device=device)\n return ret\n", "path": "ivy/functional/frontends/torch/creation_ops.py"}]}
| 3,050 | 242 |
gh_patches_debug_52881
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-55707
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to edit WHEN conditions from issue alert
### Environment
SaaS (https://sentry.io/)
### Steps to Reproduce
1. Create an issue alert with a few WHEN conditions
2. Save it
3. Go to the Alert details page
4. Click on Edit rule
5. Delete all the WHEN conditions
6. Click on Save
7. When you're back to the Alert details page, the WHEN conditions are still there, and the "Updated alert rule" message appears
### Expected Result
The users should be able to edit the alert rules
### Actual Result
The alert rule stays the same after editing
### Product Area
Alerts
### Link
_No response_
### DSN
_No response_
### Version
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/mediators/project_rules/updater.py`
Content:
```
1 from django.db import router
2 from rest_framework.request import Request
3
4 from sentry.mediators.mediator import Mediator
5 from sentry.mediators.param import Param
6 from sentry.models import Actor, Project, Rule
7
8
9 class Updater(Mediator):
10 rule = Param(Rule)
11 name = Param(str, required=False)
12 owner = Param(int, required=False)
13 environment = Param(int, required=False)
14 project = Param(Project)
15 action_match = Param(str, required=False)
16 filter_match = Param(str, required=False)
17 actions = Param(list, required=False)
18 conditions = Param(list, required=False)
19 frequency = Param(int, required=False)
20 request = Param(Request, required=False)
21 using = router.db_for_write(Project)
22
23 def call(self):
24 self._update_name()
25 self._update_owner()
26 self._update_environment()
27 self._update_project()
28 self._update_actions()
29 self._update_action_match()
30 self._update_filter_match()
31 self._update_conditions()
32 self._update_frequency()
33 self.rule.save()
34 return self.rule
35
36 def _update_name(self):
37 if self.name:
38 self.rule.label = self.name
39
40 def _update_owner(self) -> None:
41 self.rule.owner = Actor.objects.get(id=self.owner) if self.owner else None
42
43 def _update_environment(self):
44 self.rule.environment_id = self.environment
45
46 def _update_project(self):
47 if self.project:
48 self.rule.project = self.project
49
50 def _update_actions(self):
51 if self.actions:
52 self.rule.data["actions"] = self.actions
53
54 def _update_action_match(self):
55 if self.action_match:
56 self.rule.data["action_match"] = self.action_match
57
58 def _update_filter_match(self):
59 if self.filter_match:
60 self.rule.data["filter_match"] = self.filter_match
61
62 def _update_conditions(self):
63 if self.conditions:
64 self.rule.data["conditions"] = self.conditions
65
66 def _update_frequency(self):
67 if self.frequency:
68 self.rule.data["frequency"] = self.frequency
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/sentry/mediators/project_rules/updater.py b/src/sentry/mediators/project_rules/updater.py
--- a/src/sentry/mediators/project_rules/updater.py
+++ b/src/sentry/mediators/project_rules/updater.py
@@ -60,8 +60,7 @@
self.rule.data["filter_match"] = self.filter_match
def _update_conditions(self):
- if self.conditions:
- self.rule.data["conditions"] = self.conditions
+ self.rule.data["conditions"] = self.conditions or []
def _update_frequency(self):
if self.frequency:
|
{"golden_diff": "diff --git a/src/sentry/mediators/project_rules/updater.py b/src/sentry/mediators/project_rules/updater.py\n--- a/src/sentry/mediators/project_rules/updater.py\n+++ b/src/sentry/mediators/project_rules/updater.py\n@@ -60,8 +60,7 @@\n self.rule.data[\"filter_match\"] = self.filter_match\n \n def _update_conditions(self):\n- if self.conditions:\n- self.rule.data[\"conditions\"] = self.conditions\n+ self.rule.data[\"conditions\"] = self.conditions or []\n \n def _update_frequency(self):\n if self.frequency:\n", "issue": "Unable to edit WHEN conditions from issue alert\n### Environment\n\nSaaS (https://sentry.io/)\n\n### Steps to Reproduce\n\n1. Create an issue alert with a few WHEN conditions\r\n2. Save it\r\n3. Go to the Alert details page\r\n4. Click on Edit rule\r\n5. Delete all the WHEN conditions\r\n6. Click on Save\r\n7. When you're back to the Alert details page, the WHEN conditions are still there, and the \"Updated alert rule\" message appears\n\n### Expected Result\n\nThe users should be able to edit the alert rules\n\n### Actual Result\n\nThe alert rule stays the same after editing\n\n### Product Area\n\nAlerts\n\n### Link\n\n_No response_\n\n### DSN\n\n_No response_\n\n### Version\n\n_No response_\n", "before_files": [{"content": "from django.db import router\nfrom rest_framework.request import Request\n\nfrom sentry.mediators.mediator import Mediator\nfrom sentry.mediators.param import Param\nfrom sentry.models import Actor, Project, Rule\n\n\nclass Updater(Mediator):\n rule = Param(Rule)\n name = Param(str, required=False)\n owner = Param(int, required=False)\n environment = Param(int, required=False)\n project = Param(Project)\n action_match = Param(str, required=False)\n filter_match = Param(str, required=False)\n actions = Param(list, required=False)\n conditions = Param(list, required=False)\n frequency = Param(int, required=False)\n request = Param(Request, required=False)\n using = router.db_for_write(Project)\n\n def call(self):\n self._update_name()\n self._update_owner()\n self._update_environment()\n self._update_project()\n self._update_actions()\n self._update_action_match()\n self._update_filter_match()\n self._update_conditions()\n self._update_frequency()\n self.rule.save()\n return self.rule\n\n def _update_name(self):\n if self.name:\n self.rule.label = self.name\n\n def _update_owner(self) -> None:\n self.rule.owner = Actor.objects.get(id=self.owner) if self.owner else None\n\n def _update_environment(self):\n self.rule.environment_id = self.environment\n\n def _update_project(self):\n if self.project:\n self.rule.project = self.project\n\n def _update_actions(self):\n if self.actions:\n self.rule.data[\"actions\"] = self.actions\n\n def _update_action_match(self):\n if self.action_match:\n self.rule.data[\"action_match\"] = self.action_match\n\n def _update_filter_match(self):\n if self.filter_match:\n self.rule.data[\"filter_match\"] = self.filter_match\n\n def _update_conditions(self):\n if self.conditions:\n self.rule.data[\"conditions\"] = self.conditions\n\n def _update_frequency(self):\n if self.frequency:\n self.rule.data[\"frequency\"] = self.frequency\n", "path": "src/sentry/mediators/project_rules/updater.py"}], "after_files": [{"content": "from django.db import router\nfrom rest_framework.request import Request\n\nfrom sentry.mediators.mediator import Mediator\nfrom sentry.mediators.param import Param\nfrom sentry.models import Actor, Project, Rule\n\n\nclass Updater(Mediator):\n rule = Param(Rule)\n name = Param(str, 
required=False)\n owner = Param(int, required=False)\n environment = Param(int, required=False)\n project = Param(Project)\n action_match = Param(str, required=False)\n filter_match = Param(str, required=False)\n actions = Param(list, required=False)\n conditions = Param(list, required=False)\n frequency = Param(int, required=False)\n request = Param(Request, required=False)\n using = router.db_for_write(Project)\n\n def call(self):\n self._update_name()\n self._update_owner()\n self._update_environment()\n self._update_project()\n self._update_actions()\n self._update_action_match()\n self._update_filter_match()\n self._update_conditions()\n self._update_frequency()\n self.rule.save()\n return self.rule\n\n def _update_name(self):\n if self.name:\n self.rule.label = self.name\n\n def _update_owner(self) -> None:\n self.rule.owner = Actor.objects.get(id=self.owner) if self.owner else None\n\n def _update_environment(self):\n self.rule.environment_id = self.environment\n\n def _update_project(self):\n if self.project:\n self.rule.project = self.project\n\n def _update_actions(self):\n if self.actions:\n self.rule.data[\"actions\"] = self.actions\n\n def _update_action_match(self):\n if self.action_match:\n self.rule.data[\"action_match\"] = self.action_match\n\n def _update_filter_match(self):\n if self.filter_match:\n self.rule.data[\"filter_match\"] = self.filter_match\n\n def _update_conditions(self):\n self.rule.data[\"conditions\"] = self.conditions or []\n\n def _update_frequency(self):\n if self.frequency:\n self.rule.data[\"frequency\"] = self.frequency\n", "path": "src/sentry/mediators/project_rules/updater.py"}]}
| 1,000 | 132 |
gh_patches_debug_36777
|
rasdani/github-patches
|
git_diff
|
tensorflow__addons-769
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WeightNormalization with RNNs: shape issue
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colab
- TensorFlow version and how it was installed (source or binary): 2.0.0 binary
- TensorFlow-Addons version and how it was installed (source or binary): 0.6.0 binary
- Python version: 3.6.8
- Is GPU used? (yes/no): yes
**Describe the bug**
WeightNormalization layer wrapper cannot be used with RNNs if the input sequence has undetermined length. See code for errors.
**Code to reproduce the issue**
```
import tensorflow as tf
import tensorflow_addons as tfa
n_features = 3
seq_length = None
rnn_units = 4
input_layer = tf.keras.layers.Input(shape=(seq_length, n_features))
rnn_layer = tf.keras.layers.SimpleRNN(rnn_units)
dense_layer = tf.keras.layers.Dense(1)
wn_rnn_layer = tfa.layers.WeightNormalization(rnn_layer)
wn_model = tf.keras.models.Sequential(layers=(input_layer, wn_rnn_layer, dense_layer))
```
yields
```
ValueError: as_list() is not defined on an unknown TensorShape.
```
Note that:
1. The same code without using `WeightNormalization` runs.
2. Interestingly, adding the lines
```
batch_size = 1
input_layer = tf.keras.layers.Input(batch_shape=(batch_size, seq_length, n_features))
rnn_layer = tf.keras.layers.SimpleRNN(rnn_units, return_sequences=True)
dense_layer = tf.keras.layers.Dense(1)
wn_rnn_layer = tfa.layers.WeightNormalization(rnn_layer)
wn_model = tf.keras.models.Sequential(layers=(input_layer, wn_rnn_layer, dense_layer))
```
gives
```
IndexError: list assignment index out of range
```
instead.
**Other info / logs**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow_addons/layers/wrappers.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # =============================================================================
15 from __future__ import absolute_import
16 from __future__ import division
17 from __future__ import print_function
18
19 import tensorflow as tf
20
21
22 @tf.keras.utils.register_keras_serializable(package='Addons')
23 class WeightNormalization(tf.keras.layers.Wrapper):
24 """This wrapper reparameterizes a layer by decoupling the weight's
25 magnitude and direction.
26
27 This speeds up convergence by improving the
28 conditioning of the optimization problem.
29 Weight Normalization: A Simple Reparameterization to Accelerate
30 Training of Deep Neural Networks: https://arxiv.org/abs/1602.07868
31 Tim Salimans, Diederik P. Kingma (2016)
32 WeightNormalization wrapper works for keras and tf layers.
33 ```python
34 net = WeightNormalization(
35 tf.keras.layers.Conv2D(2, 2, activation='relu'),
36 input_shape=(32, 32, 3),
37 data_init=True)(x)
38 net = WeightNormalization(
39 tf.keras.layers.Conv2D(16, 5, activation='relu'),
40 data_init=True)(net)
41 net = WeightNormalization(
42 tf.keras.layers.Dense(120, activation='relu'),
43 data_init=True)(net)
44 net = WeightNormalization(
45 tf.keras.layers.Dense(n_classes),
46 data_init=True)(net)
47 ```
48 Arguments:
49 layer: a layer instance.
50 data_init: If `True` use data dependent variable initialization
51 Raises:
52 ValueError: If not initialized with a `Layer` instance.
53 ValueError: If `Layer` does not contain a `kernel` of weights
54 NotImplementedError: If `data_init` is True and running graph execution
55 """
56
57 def __init__(self, layer, data_init=True, **kwargs):
58 super(WeightNormalization, self).__init__(layer, **kwargs)
59 self.data_init = data_init
60 self._track_trackable(layer, name='layer')
61
62 def build(self, input_shape):
63 """Build `Layer`"""
64 input_shape = tf.TensorShape(input_shape).as_list()
65 self.input_spec = tf.keras.layers.InputSpec(
66 shape=[None] + input_shape[1:])
67
68 if not self.layer.built:
69 self.layer.build(input_shape)
70
71 if not hasattr(self.layer, 'kernel'):
72 raise ValueError('`WeightNormalization` must wrap a layer that'
73 ' contains a `kernel` for weights')
74
75 # The kernel's filter or unit dimension is -1
76 self.layer_depth = int(self.layer.kernel.shape[-1])
77 self.kernel_norm_axes = list(range(self.layer.kernel.shape.rank - 1))
78
79 self.g = self.add_weight(
80 name='g',
81 shape=(self.layer_depth,),
82 initializer='ones',
83 dtype=self.layer.kernel.dtype,
84 trainable=True)
85 self.v = self.layer.kernel
86
87 self._initialized = self.add_weight(
88 name='initialized',
89 shape=None,
90 initializer='zeros',
91 dtype=tf.dtypes.bool,
92 trainable=False)
93
94 if self.data_init:
95 # Used for data initialization in self._data_dep_init.
96 with tf.name_scope('data_dep_init'):
97 layer_config = tf.keras.layers.serialize(self.layer)
98 layer_config['config']['trainable'] = False
99 self._naked_clone_layer = tf.keras.layers.deserialize(
100 layer_config)
101 self._naked_clone_layer.build(input_shape)
102 self._naked_clone_layer.set_weights(self.layer.get_weights())
103 self._naked_clone_layer.activation = None
104
105 self.built = True
106
107 def call(self, inputs):
108 """Call `Layer`"""
109
110 def _do_nothing():
111 return tf.identity(self.g)
112
113 def _update_weights():
114 # Ensure we read `self.g` after _update_weights.
115 with tf.control_dependencies(self._initialize_weights(inputs)):
116 return tf.identity(self.g)
117
118 g = tf.cond(self._initialized, _do_nothing, _update_weights)
119
120 with tf.name_scope('compute_weights'):
121 # Replace kernel by normalized weight variable.
122 self.layer.kernel = tf.nn.l2_normalize(
123 self.v, axis=self.kernel_norm_axes) * g
124
125 # Ensure we calculate result after updating kernel.
126 update_kernel = tf.identity(self.layer.kernel)
127 with tf.control_dependencies([update_kernel]):
128 outputs = self.layer(inputs)
129 return outputs
130
131 def compute_output_shape(self, input_shape):
132 return tf.TensorShape(
133 self.layer.compute_output_shape(input_shape).as_list())
134
135 def _initialize_weights(self, inputs):
136 """Initialize weight g.
137
138 The initial value of g could either from the initial value in v,
139 or by the input value if self.data_init is True.
140 """
141 with tf.control_dependencies([
142 tf.debugging.assert_equal( # pylint: disable=bad-continuation
143 self._initialized,
144 False,
145 message='The layer has been initialized.')
146 ]):
147 if self.data_init:
148 assign_tensors = self._data_dep_init(inputs)
149 else:
150 assign_tensors = self._init_norm()
151 assign_tensors.append(self._initialized.assign(True))
152 return assign_tensors
153
154 def _init_norm(self):
155 """Set the weight g with the norm of the weight vector."""
156 with tf.name_scope('init_norm'):
157 v_flat = tf.reshape(self.v, [-1, self.layer_depth])
158 v_norm = tf.linalg.norm(v_flat, axis=0)
159 g_tensor = self.g.assign(tf.reshape(v_norm, (self.layer_depth,)))
160 return [g_tensor]
161
162 def _data_dep_init(self, inputs):
163 """Data dependent initialization."""
164 with tf.name_scope('data_dep_init'):
165 # Generate data dependent init values
166 x_init = self._naked_clone_layer(inputs)
167 data_norm_axes = list(range(x_init.shape.rank - 1))
168 m_init, v_init = tf.nn.moments(x_init, data_norm_axes)
169 scale_init = 1. / tf.math.sqrt(v_init + 1e-10)
170
171 # Assign data dependent init values
172 g_tensor = self.g.assign(self.g * scale_init)
173 if hasattr(self.layer, 'bias') and self.layer.bias is not None:
174 bias_tensor = self.layer.bias.assign(-m_init * scale_init)
175 return [g_tensor, bias_tensor]
176 else:
177 return [g_tensor]
178
179 def get_config(self):
180 config = {'data_init': self.data_init}
181 base_config = super(WeightNormalization, self).get_config()
182 return dict(list(base_config.items()) + list(config.items()))
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tensorflow_addons/layers/wrappers.py b/tensorflow_addons/layers/wrappers.py
--- a/tensorflow_addons/layers/wrappers.py
+++ b/tensorflow_addons/layers/wrappers.py
@@ -58,31 +58,34 @@
super(WeightNormalization, self).__init__(layer, **kwargs)
self.data_init = data_init
self._track_trackable(layer, name='layer')
+ self.is_rnn = isinstance(self.layer, tf.keras.layers.RNN)
def build(self, input_shape):
"""Build `Layer`"""
- input_shape = tf.TensorShape(input_shape).as_list()
+ input_shape = tf.TensorShape(input_shape)
self.input_spec = tf.keras.layers.InputSpec(
shape=[None] + input_shape[1:])
if not self.layer.built:
self.layer.build(input_shape)
- if not hasattr(self.layer, 'kernel'):
+ kernel_layer = self.layer.cell if self.is_rnn else self.layer
+
+ if not hasattr(kernel_layer, 'kernel'):
raise ValueError('`WeightNormalization` must wrap a layer that'
' contains a `kernel` for weights')
# The kernel's filter or unit dimension is -1
- self.layer_depth = int(self.layer.kernel.shape[-1])
- self.kernel_norm_axes = list(range(self.layer.kernel.shape.rank - 1))
+ self.layer_depth = int(kernel_layer.kernel.shape[-1])
+ self.kernel_norm_axes = list(range(kernel_layer.kernel.shape.rank - 1))
self.g = self.add_weight(
name='g',
shape=(self.layer_depth,),
initializer='ones',
- dtype=self.layer.kernel.dtype,
+ dtype=kernel_layer.kernel.dtype,
trainable=True)
- self.v = self.layer.kernel
+ self.v = kernel_layer.kernel
self._initialized = self.add_weight(
name='initialized',
@@ -100,7 +103,10 @@
layer_config)
self._naked_clone_layer.build(input_shape)
self._naked_clone_layer.set_weights(self.layer.get_weights())
- self._naked_clone_layer.activation = None
+ if self.is_rnn:
+ self._naked_clone_layer.cell.activation = None
+ else:
+ self._naked_clone_layer.activation = None
self.built = True
|
{"golden_diff": "diff --git a/tensorflow_addons/layers/wrappers.py b/tensorflow_addons/layers/wrappers.py\n--- a/tensorflow_addons/layers/wrappers.py\n+++ b/tensorflow_addons/layers/wrappers.py\n@@ -58,31 +58,34 @@\n super(WeightNormalization, self).__init__(layer, **kwargs)\n self.data_init = data_init\n self._track_trackable(layer, name='layer')\n+ self.is_rnn = isinstance(self.layer, tf.keras.layers.RNN)\n \n def build(self, input_shape):\n \"\"\"Build `Layer`\"\"\"\n- input_shape = tf.TensorShape(input_shape).as_list()\n+ input_shape = tf.TensorShape(input_shape)\n self.input_spec = tf.keras.layers.InputSpec(\n shape=[None] + input_shape[1:])\n \n if not self.layer.built:\n self.layer.build(input_shape)\n \n- if not hasattr(self.layer, 'kernel'):\n+ kernel_layer = self.layer.cell if self.is_rnn else self.layer\n+\n+ if not hasattr(kernel_layer, 'kernel'):\n raise ValueError('`WeightNormalization` must wrap a layer that'\n ' contains a `kernel` for weights')\n \n # The kernel's filter or unit dimension is -1\n- self.layer_depth = int(self.layer.kernel.shape[-1])\n- self.kernel_norm_axes = list(range(self.layer.kernel.shape.rank - 1))\n+ self.layer_depth = int(kernel_layer.kernel.shape[-1])\n+ self.kernel_norm_axes = list(range(kernel_layer.kernel.shape.rank - 1))\n \n self.g = self.add_weight(\n name='g',\n shape=(self.layer_depth,),\n initializer='ones',\n- dtype=self.layer.kernel.dtype,\n+ dtype=kernel_layer.kernel.dtype,\n trainable=True)\n- self.v = self.layer.kernel\n+ self.v = kernel_layer.kernel\n \n self._initialized = self.add_weight(\n name='initialized',\n@@ -100,7 +103,10 @@\n layer_config)\n self._naked_clone_layer.build(input_shape)\n self._naked_clone_layer.set_weights(self.layer.get_weights())\n- self._naked_clone_layer.activation = None\n+ if self.is_rnn:\n+ self._naked_clone_layer.cell.activation = None\n+ else:\n+ self._naked_clone_layer.activation = None\n \n self.built = True\n", "issue": "WeightNormalization with RNNs: shape issue\n**System information**\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colab\r\n- TensorFlow version and how it was installed (source or binary): 2.0.0 binary\r\n- TensorFlow-Addons version and how it was installed (source or binary): 0.6.0 binary\r\n- Python version: 3.6.8\r\n- Is GPU used? (yes/no): yes\r\n\r\n**Describe the bug**\r\n\r\nWeightNormalization layer wrapper cannot be used with RNNs if the input sequence has undetermined length. See code for errors.\r\n\r\n**Code to reproduce the issue**\r\n\r\n```\r\nimport tensorflow as tf\r\nimport tensorflow_addons as tfa\r\n\r\nn_features = 3\r\nseq_length = None\r\nrnn_units = 4\r\n\r\ninput_layer = tf.keras.layers.Input(shape=(seq_length, n_features))\r\nrnn_layer = tf.keras.layers.SimpleRNN(rnn_units)\r\ndense_layer = tf.keras.layers.Dense(1)\r\nwn_rnn_layer = tfa.layers.WeightNormalization(rnn_layer)\r\nwn_model = tf.keras.models.Sequential(layers=(input_layer, wn_rnn_layer, dense_layer))\r\n```\r\nyields\r\n```\r\nValueError: as_list() is not defined on an unknown TensorShape.\r\n```\r\n\r\nNote that:\r\n1. The same code without using `WeightNormalization` runs.\r\n2. 
Interestingly, adding the lines\r\n```\r\nbatch_size = 1\r\ninput_layer = tf.keras.layers.Input(batch_shape=(batch_size, seq_length, n_features))\r\nrnn_layer = tf.keras.layers.SimpleRNN(rnn_units, return_sequences=True)\r\ndense_layer = tf.keras.layers.Dense(1)\r\nwn_rnn_layer = tfa.layers.WeightNormalization(rnn_layer)\r\nwn_model = tf.keras.models.Sequential(layers=(input_layer, wn_rnn_layer, dense_layer))\r\n```\r\ngives\r\n```\r\nIndexError: list assignment index out of range\r\n```\r\ninstead.\r\n\r\n**Other info / logs**\r\n\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# =============================================================================\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\n\[email protected]_keras_serializable(package='Addons')\nclass WeightNormalization(tf.keras.layers.Wrapper):\n \"\"\"This wrapper reparameterizes a layer by decoupling the weight's\n magnitude and direction.\n\n This speeds up convergence by improving the\n conditioning of the optimization problem.\n Weight Normalization: A Simple Reparameterization to Accelerate\n Training of Deep Neural Networks: https://arxiv.org/abs/1602.07868\n Tim Salimans, Diederik P. 
Kingma (2016)\n WeightNormalization wrapper works for keras and tf layers.\n ```python\n net = WeightNormalization(\n tf.keras.layers.Conv2D(2, 2, activation='relu'),\n input_shape=(32, 32, 3),\n data_init=True)(x)\n net = WeightNormalization(\n tf.keras.layers.Conv2D(16, 5, activation='relu'),\n data_init=True)(net)\n net = WeightNormalization(\n tf.keras.layers.Dense(120, activation='relu'),\n data_init=True)(net)\n net = WeightNormalization(\n tf.keras.layers.Dense(n_classes),\n data_init=True)(net)\n ```\n Arguments:\n layer: a layer instance.\n data_init: If `True` use data dependent variable initialization\n Raises:\n ValueError: If not initialized with a `Layer` instance.\n ValueError: If `Layer` does not contain a `kernel` of weights\n NotImplementedError: If `data_init` is True and running graph execution\n \"\"\"\n\n def __init__(self, layer, data_init=True, **kwargs):\n super(WeightNormalization, self).__init__(layer, **kwargs)\n self.data_init = data_init\n self._track_trackable(layer, name='layer')\n\n def build(self, input_shape):\n \"\"\"Build `Layer`\"\"\"\n input_shape = tf.TensorShape(input_shape).as_list()\n self.input_spec = tf.keras.layers.InputSpec(\n shape=[None] + input_shape[1:])\n\n if not self.layer.built:\n self.layer.build(input_shape)\n\n if not hasattr(self.layer, 'kernel'):\n raise ValueError('`WeightNormalization` must wrap a layer that'\n ' contains a `kernel` for weights')\n\n # The kernel's filter or unit dimension is -1\n self.layer_depth = int(self.layer.kernel.shape[-1])\n self.kernel_norm_axes = list(range(self.layer.kernel.shape.rank - 1))\n\n self.g = self.add_weight(\n name='g',\n shape=(self.layer_depth,),\n initializer='ones',\n dtype=self.layer.kernel.dtype,\n trainable=True)\n self.v = self.layer.kernel\n\n self._initialized = self.add_weight(\n name='initialized',\n shape=None,\n initializer='zeros',\n dtype=tf.dtypes.bool,\n trainable=False)\n\n if self.data_init:\n # Used for data initialization in self._data_dep_init.\n with tf.name_scope('data_dep_init'):\n layer_config = tf.keras.layers.serialize(self.layer)\n layer_config['config']['trainable'] = False\n self._naked_clone_layer = tf.keras.layers.deserialize(\n layer_config)\n self._naked_clone_layer.build(input_shape)\n self._naked_clone_layer.set_weights(self.layer.get_weights())\n self._naked_clone_layer.activation = None\n\n self.built = True\n\n def call(self, inputs):\n \"\"\"Call `Layer`\"\"\"\n\n def _do_nothing():\n return tf.identity(self.g)\n\n def _update_weights():\n # Ensure we read `self.g` after _update_weights.\n with tf.control_dependencies(self._initialize_weights(inputs)):\n return tf.identity(self.g)\n\n g = tf.cond(self._initialized, _do_nothing, _update_weights)\n\n with tf.name_scope('compute_weights'):\n # Replace kernel by normalized weight variable.\n self.layer.kernel = tf.nn.l2_normalize(\n self.v, axis=self.kernel_norm_axes) * g\n\n # Ensure we calculate result after updating kernel.\n update_kernel = tf.identity(self.layer.kernel)\n with tf.control_dependencies([update_kernel]):\n outputs = self.layer(inputs)\n return outputs\n\n def compute_output_shape(self, input_shape):\n return tf.TensorShape(\n self.layer.compute_output_shape(input_shape).as_list())\n\n def _initialize_weights(self, inputs):\n \"\"\"Initialize weight g.\n\n The initial value of g could either from the initial value in v,\n or by the input value if self.data_init is True.\n \"\"\"\n with tf.control_dependencies([\n tf.debugging.assert_equal( # pylint: disable=bad-continuation\n 
self._initialized,\n False,\n message='The layer has been initialized.')\n ]):\n if self.data_init:\n assign_tensors = self._data_dep_init(inputs)\n else:\n assign_tensors = self._init_norm()\n assign_tensors.append(self._initialized.assign(True))\n return assign_tensors\n\n def _init_norm(self):\n \"\"\"Set the weight g with the norm of the weight vector.\"\"\"\n with tf.name_scope('init_norm'):\n v_flat = tf.reshape(self.v, [-1, self.layer_depth])\n v_norm = tf.linalg.norm(v_flat, axis=0)\n g_tensor = self.g.assign(tf.reshape(v_norm, (self.layer_depth,)))\n return [g_tensor]\n\n def _data_dep_init(self, inputs):\n \"\"\"Data dependent initialization.\"\"\"\n with tf.name_scope('data_dep_init'):\n # Generate data dependent init values\n x_init = self._naked_clone_layer(inputs)\n data_norm_axes = list(range(x_init.shape.rank - 1))\n m_init, v_init = tf.nn.moments(x_init, data_norm_axes)\n scale_init = 1. / tf.math.sqrt(v_init + 1e-10)\n\n # Assign data dependent init values\n g_tensor = self.g.assign(self.g * scale_init)\n if hasattr(self.layer, 'bias') and self.layer.bias is not None:\n bias_tensor = self.layer.bias.assign(-m_init * scale_init)\n return [g_tensor, bias_tensor]\n else:\n return [g_tensor]\n\n def get_config(self):\n config = {'data_init': self.data_init}\n base_config = super(WeightNormalization, self).get_config()\n return dict(list(base_config.items()) + list(config.items()))\n", "path": "tensorflow_addons/layers/wrappers.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# =============================================================================\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\n\n\[email protected]_keras_serializable(package='Addons')\nclass WeightNormalization(tf.keras.layers.Wrapper):\n \"\"\"This wrapper reparameterizes a layer by decoupling the weight's\n magnitude and direction.\n\n This speeds up convergence by improving the\n conditioning of the optimization problem.\n Weight Normalization: A Simple Reparameterization to Accelerate\n Training of Deep Neural Networks: https://arxiv.org/abs/1602.07868\n Tim Salimans, Diederik P. 
Kingma (2016)\n WeightNormalization wrapper works for keras and tf layers.\n ```python\n net = WeightNormalization(\n tf.keras.layers.Conv2D(2, 2, activation='relu'),\n input_shape=(32, 32, 3),\n data_init=True)(x)\n net = WeightNormalization(\n tf.keras.layers.Conv2D(16, 5, activation='relu'),\n data_init=True)(net)\n net = WeightNormalization(\n tf.keras.layers.Dense(120, activation='relu'),\n data_init=True)(net)\n net = WeightNormalization(\n tf.keras.layers.Dense(n_classes),\n data_init=True)(net)\n ```\n Arguments:\n layer: a layer instance.\n data_init: If `True` use data dependent variable initialization\n Raises:\n ValueError: If not initialized with a `Layer` instance.\n ValueError: If `Layer` does not contain a `kernel` of weights\n NotImplementedError: If `data_init` is True and running graph execution\n \"\"\"\n\n def __init__(self, layer, data_init=True, **kwargs):\n super(WeightNormalization, self).__init__(layer, **kwargs)\n self.data_init = data_init\n self._track_trackable(layer, name='layer')\n self.is_rnn = isinstance(self.layer, tf.keras.layers.RNN)\n\n def build(self, input_shape):\n \"\"\"Build `Layer`\"\"\"\n input_shape = tf.TensorShape(input_shape)\n self.input_spec = tf.keras.layers.InputSpec(\n shape=[None] + input_shape[1:])\n\n if not self.layer.built:\n self.layer.build(input_shape)\n\n kernel_layer = self.layer.cell if self.is_rnn else self.layer\n\n if not hasattr(kernel_layer, 'kernel'):\n raise ValueError('`WeightNormalization` must wrap a layer that'\n ' contains a `kernel` for weights')\n\n # The kernel's filter or unit dimension is -1\n self.layer_depth = int(kernel_layer.kernel.shape[-1])\n self.kernel_norm_axes = list(range(kernel_layer.kernel.shape.rank - 1))\n\n self.g = self.add_weight(\n name='g',\n shape=(self.layer_depth,),\n initializer='ones',\n dtype=kernel_layer.kernel.dtype,\n trainable=True)\n self.v = kernel_layer.kernel\n\n self._initialized = self.add_weight(\n name='initialized',\n shape=None,\n initializer='zeros',\n dtype=tf.dtypes.bool,\n trainable=False)\n\n if self.data_init:\n # Used for data initialization in self._data_dep_init.\n with tf.name_scope('data_dep_init'):\n layer_config = tf.keras.layers.serialize(self.layer)\n layer_config['config']['trainable'] = False\n self._naked_clone_layer = tf.keras.layers.deserialize(\n layer_config)\n self._naked_clone_layer.build(input_shape)\n self._naked_clone_layer.set_weights(self.layer.get_weights())\n if self.is_rnn:\n self._naked_clone_layer.cell.activation = None\n else:\n self._naked_clone_layer.activation = None\n\n self.built = True\n\n def call(self, inputs):\n \"\"\"Call `Layer`\"\"\"\n\n def _do_nothing():\n return tf.identity(self.g)\n\n def _update_weights():\n # Ensure we read `self.g` after _update_weights.\n with tf.control_dependencies(self._initialize_weights(inputs)):\n return tf.identity(self.g)\n\n g = tf.cond(self._initialized, _do_nothing, _update_weights)\n\n with tf.name_scope('compute_weights'):\n # Replace kernel by normalized weight variable.\n self.layer.kernel = tf.nn.l2_normalize(\n self.v, axis=self.kernel_norm_axes) * g\n\n # Ensure we calculate result after updating kernel.\n update_kernel = tf.identity(self.layer.kernel)\n with tf.control_dependencies([update_kernel]):\n outputs = self.layer(inputs)\n return outputs\n\n def compute_output_shape(self, input_shape):\n return tf.TensorShape(\n self.layer.compute_output_shape(input_shape).as_list())\n\n def _initialize_weights(self, inputs):\n \"\"\"Initialize weight g.\n\n The initial value of g could 
either from the initial value in v,\n or by the input value if self.data_init is True.\n \"\"\"\n with tf.control_dependencies([\n tf.debugging.assert_equal( # pylint: disable=bad-continuation\n self._initialized,\n False,\n message='The layer has been initialized.')\n ]):\n if self.data_init:\n assign_tensors = self._data_dep_init(inputs)\n else:\n assign_tensors = self._init_norm()\n assign_tensors.append(self._initialized.assign(True))\n return assign_tensors\n\n def _init_norm(self):\n \"\"\"Set the weight g with the norm of the weight vector.\"\"\"\n with tf.name_scope('init_norm'):\n v_flat = tf.reshape(self.v, [-1, self.layer_depth])\n v_norm = tf.linalg.norm(v_flat, axis=0)\n g_tensor = self.g.assign(tf.reshape(v_norm, (self.layer_depth,)))\n return [g_tensor]\n\n def _data_dep_init(self, inputs):\n \"\"\"Data dependent initialization.\"\"\"\n with tf.name_scope('data_dep_init'):\n # Generate data dependent init values\n x_init = self._naked_clone_layer(inputs)\n data_norm_axes = list(range(x_init.shape.rank - 1))\n m_init, v_init = tf.nn.moments(x_init, data_norm_axes)\n scale_init = 1. / tf.math.sqrt(v_init + 1e-10)\n\n # Assign data dependent init values\n g_tensor = self.g.assign(self.g * scale_init)\n if hasattr(self.layer, 'bias') and self.layer.bias is not None:\n bias_tensor = self.layer.bias.assign(-m_init * scale_init)\n return [g_tensor, bias_tensor]\n else:\n return [g_tensor]\n\n def get_config(self):\n config = {'data_init': self.data_init}\n base_config = super(WeightNormalization, self).get_config()\n return dict(list(base_config.items()) + list(config.items()))\n", "path": "tensorflow_addons/layers/wrappers.py"}]}
| 2,650 | 520 |
gh_patches_debug_4967
|
rasdani/github-patches
|
git_diff
|
aws__aws-cli-1894
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
aws cloudformation create-change-set with 'template-url' broken
```
$ aws --region eu-west-1 cloudformation create-change-set --change-set-name test --stack-name autobuild --template-url https://s3-eu-west-1.amazonaws.com/BUCKET/TEMPLATE.json --parameters ... --capabilities CAPABILITY_IAM
Error parsing parameter '--template-url': Unable to retrieve https://s3-eu-west-1.amazonaws.com/BUCKET/TEMPLATE.json: received non 200 status code of 403
```
The bucket is not public, and access is controlled via IAM.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awscli/paramfile.py`
Content:
```
1 # Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 import logging
14 import os
15
16 from botocore.vendored import requests
17 from awscli.compat import six
18
19 from awscli.compat import compat_open
20
21
22 logger = logging.getLogger(__name__)
23
24 # These are special cased arguments that do _not_ get the
25 # special param file processing. This is typically because it
26 # refers to an actual URI of some sort and we don't want to actually
27 # download the content (i.e TemplateURL in cloudformation).
28 PARAMFILE_DISABLED = set([
29 'apigateway.put-integration.uri',
30 'cloudformation.create-stack.template-url',
31 'cloudformation.update-stack.template-url',
32 'cloudformation.validate-template.template-url',
33 'cloudformation.estimate-template-cost.template-url',
34
35 'cloudformation.create-stack.stack-policy-url',
36 'cloudformation.update-stack.stack-policy-url',
37 'cloudformation.set-stack-policy.stack-policy-url',
38
39 'cloudformation.update-stack.stack-policy-during-update-url',
40 # We will want to change the event name to ``s3`` as opposed to
41 # custom in the near future along with ``s3`` to ``s3api``.
42 'custom.cp.website-redirect',
43 'custom.mv.website-redirect',
44 'custom.sync.website-redirect',
45
46 'iam.create-open-id-connect-provider.url',
47
48 'machinelearning.predict.predict-endpoint',
49
50 'sqs.add-permission.queue-url',
51 'sqs.change-message-visibility.queue-url',
52 'sqs.change-message-visibility-batch.queue-url',
53 'sqs.delete-message.queue-url',
54 'sqs.delete-message-batch.queue-url',
55 'sqs.delete-queue.queue-url',
56 'sqs.get-queue-attributes.queue-url',
57 'sqs.list-dead-letter-source-queues.queue-url',
58 'sqs.receive-message.queue-url',
59 'sqs.remove-permission.queue-url',
60 'sqs.send-message.queue-url',
61 'sqs.send-message-batch.queue-url',
62 'sqs.set-queue-attributes.queue-url',
63 'sqs.purge-queue.queue-url',
64
65 's3.copy-object.website-redirect-location',
66 's3.create-multipart-upload.website-redirect-location',
67 's3.put-object.website-redirect-location',
68
69 # Double check that this has been renamed!
70 'sns.subscribe.notification-endpoint',
71 ])
72
73
74 class ResourceLoadingError(Exception):
75 pass
76
77
78 def get_paramfile(path):
79 """Load parameter based on a resource URI.
80
81 It is possible to pass parameters to operations by referring
82 to files or URI's. If such a reference is detected, this
83 function attempts to retrieve the data from the file or URI
84 and returns it. If there are any errors or if the ``path``
85 does not appear to refer to a file or URI, a ``None`` is
86 returned.
87
88 :type path: str
89 :param path: The resource URI, e.g. file://foo.txt. This value
90 may also be a non resource URI, in which case ``None`` is returned.
91
92 :return: The loaded value associated with the resource URI.
93 If the provided ``path`` is not a resource URI, then a
94 value of ``None`` is returned.
95
96 """
97 data = None
98 if isinstance(path, six.string_types):
99 for prefix, function_spec in PREFIX_MAP.items():
100 if path.startswith(prefix):
101 function, kwargs = function_spec
102 data = function(prefix, path, **kwargs)
103 return data
104
105
106 def get_file(prefix, path, mode):
107 file_path = os.path.expandvars(os.path.expanduser(path[len(prefix):]))
108 try:
109 with compat_open(file_path, mode) as f:
110 return f.read()
111 except UnicodeDecodeError:
112 raise ResourceLoadingError(
113 'Unable to load paramfile (%s), text contents could '
114 'not be decoded. If this is a binary file, please use the '
115 'fileb:// prefix instead of the file:// prefix.' % file_path)
116 except (OSError, IOError) as e:
117 raise ResourceLoadingError('Unable to load paramfile %s: %s' % (
118 path, e))
119
120
121 def get_uri(prefix, uri):
122 try:
123 r = requests.get(uri)
124 if r.status_code == 200:
125 return r.text
126 else:
127 raise ResourceLoadingError(
128 "received non 200 status code of %s" % (
129 r.status_code))
130 except Exception as e:
131 raise ResourceLoadingError('Unable to retrieve %s: %s' % (uri, e))
132
133
134 PREFIX_MAP = {
135 'file://': (get_file, {'mode': 'r'}),
136 'fileb://': (get_file, {'mode': 'rb'}),
137 'http://': (get_uri, {}),
138 'https://': (get_uri, {}),
139 }
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/awscli/paramfile.py b/awscli/paramfile.py
--- a/awscli/paramfile.py
+++ b/awscli/paramfile.py
@@ -29,6 +29,7 @@
'apigateway.put-integration.uri',
'cloudformation.create-stack.template-url',
'cloudformation.update-stack.template-url',
+ 'cloudformation.create-change-set.template-url',
'cloudformation.validate-template.template-url',
'cloudformation.estimate-template-cost.template-url',
|
{"golden_diff": "diff --git a/awscli/paramfile.py b/awscli/paramfile.py\n--- a/awscli/paramfile.py\n+++ b/awscli/paramfile.py\n@@ -29,6 +29,7 @@\n 'apigateway.put-integration.uri',\n 'cloudformation.create-stack.template-url',\n 'cloudformation.update-stack.template-url',\n+ 'cloudformation.create-change-set.template-url',\n 'cloudformation.validate-template.template-url',\n 'cloudformation.estimate-template-cost.template-url',\n", "issue": "aws cloudformation create-change-set with 'template-url' broken\n```\n$ aws --region eu-west-1 cloudformation create-change-set --change-set-name test --stack-name autobuild --template-url https://s3-eu-west-1.amazonaws.com/BUCKET/TEMPLATE.json --parameters ... --capabilities CAPABILITY_IAM\n\nError parsing parameter '--template-url': Unable to retrieve https://s3-eu-west-1.amazonaws.com/BUCKET/TEMPLATE.json: received non 200 status code of 403\n```\n\nThe bucket is not public, and access is controlled via IAM.\n\n", "before_files": [{"content": "# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport logging\nimport os\n\nfrom botocore.vendored import requests\nfrom awscli.compat import six\n\nfrom awscli.compat import compat_open\n\n\nlogger = logging.getLogger(__name__)\n\n# These are special cased arguments that do _not_ get the\n# special param file processing. 
This is typically because it\n# refers to an actual URI of some sort and we don't want to actually\n# download the content (i.e TemplateURL in cloudformation).\nPARAMFILE_DISABLED = set([\n 'apigateway.put-integration.uri',\n 'cloudformation.create-stack.template-url',\n 'cloudformation.update-stack.template-url',\n 'cloudformation.validate-template.template-url',\n 'cloudformation.estimate-template-cost.template-url',\n\n 'cloudformation.create-stack.stack-policy-url',\n 'cloudformation.update-stack.stack-policy-url',\n 'cloudformation.set-stack-policy.stack-policy-url',\n\n 'cloudformation.update-stack.stack-policy-during-update-url',\n # We will want to change the event name to ``s3`` as opposed to\n # custom in the near future along with ``s3`` to ``s3api``.\n 'custom.cp.website-redirect',\n 'custom.mv.website-redirect',\n 'custom.sync.website-redirect',\n\n 'iam.create-open-id-connect-provider.url',\n\n 'machinelearning.predict.predict-endpoint',\n\n 'sqs.add-permission.queue-url',\n 'sqs.change-message-visibility.queue-url',\n 'sqs.change-message-visibility-batch.queue-url',\n 'sqs.delete-message.queue-url',\n 'sqs.delete-message-batch.queue-url',\n 'sqs.delete-queue.queue-url',\n 'sqs.get-queue-attributes.queue-url',\n 'sqs.list-dead-letter-source-queues.queue-url',\n 'sqs.receive-message.queue-url',\n 'sqs.remove-permission.queue-url',\n 'sqs.send-message.queue-url',\n 'sqs.send-message-batch.queue-url',\n 'sqs.set-queue-attributes.queue-url',\n 'sqs.purge-queue.queue-url',\n\n 's3.copy-object.website-redirect-location',\n 's3.create-multipart-upload.website-redirect-location',\n 's3.put-object.website-redirect-location',\n\n # Double check that this has been renamed!\n 'sns.subscribe.notification-endpoint',\n])\n\n\nclass ResourceLoadingError(Exception):\n pass\n\n\ndef get_paramfile(path):\n \"\"\"Load parameter based on a resource URI.\n\n It is possible to pass parameters to operations by referring\n to files or URI's. If such a reference is detected, this\n function attempts to retrieve the data from the file or URI\n and returns it. If there are any errors or if the ``path``\n does not appear to refer to a file or URI, a ``None`` is\n returned.\n\n :type path: str\n :param path: The resource URI, e.g. file://foo.txt. This value\n may also be a non resource URI, in which case ``None`` is returned.\n\n :return: The loaded value associated with the resource URI.\n If the provided ``path`` is not a resource URI, then a\n value of ``None`` is returned.\n\n \"\"\"\n data = None\n if isinstance(path, six.string_types):\n for prefix, function_spec in PREFIX_MAP.items():\n if path.startswith(prefix):\n function, kwargs = function_spec\n data = function(prefix, path, **kwargs)\n return data\n\n\ndef get_file(prefix, path, mode):\n file_path = os.path.expandvars(os.path.expanduser(path[len(prefix):]))\n try:\n with compat_open(file_path, mode) as f:\n return f.read()\n except UnicodeDecodeError:\n raise ResourceLoadingError(\n 'Unable to load paramfile (%s), text contents could '\n 'not be decoded. If this is a binary file, please use the '\n 'fileb:// prefix instead of the file:// prefix.' 
% file_path)\n except (OSError, IOError) as e:\n raise ResourceLoadingError('Unable to load paramfile %s: %s' % (\n path, e))\n\n\ndef get_uri(prefix, uri):\n try:\n r = requests.get(uri)\n if r.status_code == 200:\n return r.text\n else:\n raise ResourceLoadingError(\n \"received non 200 status code of %s\" % (\n r.status_code))\n except Exception as e:\n raise ResourceLoadingError('Unable to retrieve %s: %s' % (uri, e))\n\n\nPREFIX_MAP = {\n 'file://': (get_file, {'mode': 'r'}),\n 'fileb://': (get_file, {'mode': 'rb'}),\n 'http://': (get_uri, {}),\n 'https://': (get_uri, {}),\n}\n", "path": "awscli/paramfile.py"}], "after_files": [{"content": "# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport logging\nimport os\n\nfrom botocore.vendored import requests\nfrom awscli.compat import six\n\nfrom awscli.compat import compat_open\n\n\nlogger = logging.getLogger(__name__)\n\n# These are special cased arguments that do _not_ get the\n# special param file processing. This is typically because it\n# refers to an actual URI of some sort and we don't want to actually\n# download the content (i.e TemplateURL in cloudformation).\nPARAMFILE_DISABLED = set([\n 'apigateway.put-integration.uri',\n 'cloudformation.create-stack.template-url',\n 'cloudformation.update-stack.template-url',\n 'cloudformation.create-change-set.template-url',\n 'cloudformation.validate-template.template-url',\n 'cloudformation.estimate-template-cost.template-url',\n\n 'cloudformation.create-stack.stack-policy-url',\n 'cloudformation.update-stack.stack-policy-url',\n 'cloudformation.set-stack-policy.stack-policy-url',\n\n 'cloudformation.update-stack.stack-policy-during-update-url',\n # We will want to change the event name to ``s3`` as opposed to\n # custom in the near future along with ``s3`` to ``s3api``.\n 'custom.cp.website-redirect',\n 'custom.mv.website-redirect',\n 'custom.sync.website-redirect',\n\n 'iam.create-open-id-connect-provider.url',\n\n 'machinelearning.predict.predict-endpoint',\n\n 'sqs.add-permission.queue-url',\n 'sqs.change-message-visibility.queue-url',\n 'sqs.change-message-visibility-batch.queue-url',\n 'sqs.delete-message.queue-url',\n 'sqs.delete-message-batch.queue-url',\n 'sqs.delete-queue.queue-url',\n 'sqs.get-queue-attributes.queue-url',\n 'sqs.list-dead-letter-source-queues.queue-url',\n 'sqs.receive-message.queue-url',\n 'sqs.remove-permission.queue-url',\n 'sqs.send-message.queue-url',\n 'sqs.send-message-batch.queue-url',\n 'sqs.set-queue-attributes.queue-url',\n 'sqs.purge-queue.queue-url',\n\n 's3.copy-object.website-redirect-location',\n 's3.create-multipart-upload.website-redirect-location',\n 's3.put-object.website-redirect-location',\n\n # Double check that this has been renamed!\n 'sns.subscribe.notification-endpoint',\n])\n\n\nclass ResourceLoadingError(Exception):\n pass\n\n\ndef get_paramfile(path):\n \"\"\"Load parameter based on a resource URI.\n\n It is possible to pass parameters to operations by referring\n to files or URI's. 
If such a reference is detected, this\n function attempts to retrieve the data from the file or URI\n and returns it. If there are any errors or if the ``path``\n does not appear to refer to a file or URI, a ``None`` is\n returned.\n\n :type path: str\n :param path: The resource URI, e.g. file://foo.txt. This value\n may also be a non resource URI, in which case ``None`` is returned.\n\n :return: The loaded value associated with the resource URI.\n If the provided ``path`` is not a resource URI, then a\n value of ``None`` is returned.\n\n \"\"\"\n data = None\n if isinstance(path, six.string_types):\n for prefix, function_spec in PREFIX_MAP.items():\n if path.startswith(prefix):\n function, kwargs = function_spec\n data = function(prefix, path, **kwargs)\n return data\n\n\ndef get_file(prefix, path, mode):\n file_path = os.path.expandvars(os.path.expanduser(path[len(prefix):]))\n try:\n with compat_open(file_path, mode) as f:\n return f.read()\n except UnicodeDecodeError:\n raise ResourceLoadingError(\n 'Unable to load paramfile (%s), text contents could '\n 'not be decoded. If this is a binary file, please use the '\n 'fileb:// prefix instead of the file:// prefix.' % file_path)\n except (OSError, IOError) as e:\n raise ResourceLoadingError('Unable to load paramfile %s: %s' % (\n path, e))\n\n\ndef get_uri(prefix, uri):\n try:\n r = requests.get(uri)\n if r.status_code == 200:\n return r.text\n else:\n raise ResourceLoadingError(\n \"received non 200 status code of %s\" % (\n r.status_code))\n except Exception as e:\n raise ResourceLoadingError('Unable to retrieve %s: %s' % (uri, e))\n\n\nPREFIX_MAP = {\n 'file://': (get_file, {'mode': 'r'}),\n 'fileb://': (get_file, {'mode': 'rb'}),\n 'http://': (get_uri, {}),\n 'https://': (get_uri, {}),\n}\n", "path": "awscli/paramfile.py"}]}
| 1,881 | 107 |
gh_patches_debug_17454
|
rasdani/github-patches
|
git_diff
|
pyg-team__pytorch_geometric-7391
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`CaptumExplainer` cannot be called multiple times in a row without creating an error
### 🐛 Describe the bug
Trying to call an instance of `CaptumExplainer` twice raises an error.
To replicate, change https://github.com/pyg-team/pytorch_geometric/blob/dfd32668aea953c8bb56f97364d8e028f267bde6/examples/explain/captum_explainer.py#L60
to
```python
explanation = explainer(data.x, data.edge_index, index=node_index)
explanation_2 = explanation = explainer(data.x, data.edge_index, index=node_index+1)
```
which will raise
```
Traceback (most recent call last):
File ".../pytorch_geometric/examples/explain/captum_explainer.py", line 62, in <module>
explanation_2 = explainer(data.x, data.edge_index, index=11)
File ".../pytorch_geometric/torch_geometric/explain/explainer.py", line 198, in __call__
explanation = self.algorithm(
File ".../pytorch_geometric/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File ".../torch_geometric/explain/algorithm/captum_explainer.py", line 153, in forward
self.attribution_method = self.attribution_method(captum_model)
TypeError: 'IntegratedGradients' object is not callable
```
This is because on the second call, the `CaptumExplainer` tries to recreates an attribution method from `captum`:
https://github.com/pyg-team/pytorch_geometric/blob/dfd32668aea953c8bb56f97364d8e028f267bde6/torch_geometric/explain/algorithm/captum_explainer.py#L153
### Environment
* PyG version: (from source, latest commit to date on master)
* PyTorch version: 2.0.0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torch_geometric/explain/algorithm/captum_explainer.py`
Content:
```
1 import inspect
2 import logging
3 import warnings
4 from typing import Any, Dict, Optional, Union
5
6 import torch
7 from torch import Tensor
8
9 from torch_geometric.explain import Explanation, HeteroExplanation
10 from torch_geometric.explain.algorithm import ExplainerAlgorithm
11 from torch_geometric.explain.algorithm.captum import (
12 CaptumHeteroModel,
13 CaptumModel,
14 MaskLevelType,
15 convert_captum_output,
16 to_captum_input,
17 )
18 from torch_geometric.explain.config import MaskType, ModelMode
19 from torch_geometric.typing import EdgeType, NodeType
20
21
22 class CaptumExplainer(ExplainerAlgorithm):
23 """A `Captum <https://captum.ai>`__-based explainer for identifying compact
24 subgraph structures and node features that play a crucial role in the
25 predictions made by a GNN.
26
27 This explainer algorithm uses :captum:`null` `Captum <https://captum.ai/>`_
28 to compute attributions.
29
30 Currently, the following attribution methods are supported:
31
32 * :class:`captum.attr.IntegratedGradients`
33 * :class:`captum.attr.Saliency`
34 * :class:`captum.attr.InputXGradient`
35 * :class:`captum.attr.Deconvolution`
36 * :class:`captum.attr.ShapleyValueSampling`
37 * :class:`captum.attr.GuidedBackprop`
38
39 Args:
40 attribution_method (Attribution or str): The Captum attribution method
41 to use. Can be a string or a :class:`captum.attr` method.
42 **kwargs: Additional arguments for the Captum attribution method.
43 """
44 SUPPORTED_METHODS = [ # TODO: Add support for more methods.
45 'IntegratedGradients',
46 'Saliency',
47 'InputXGradient',
48 'Deconvolution',
49 'ShapleyValueSampling',
50 'GuidedBackprop',
51 ]
52
53 def __init__(
54 self,
55 attribution_method: Union[str, Any],
56 **kwargs,
57 ):
58 super().__init__()
59
60 import captum.attr # noqa
61
62 if isinstance(attribution_method, str):
63 self.attribution_method = getattr(
64 captum.attr,
65 attribution_method,
66 )
67 else:
68 self.attribution_method = attribution_method
69
70 if not self._is_supported_attribution_method():
71 raise ValueError(f"{self.__class__.__name__} does not support "
72 f"attribution method "
73 f"{self.attribution_method.__name__}")
74
75 if kwargs.get('internal_batch_size', 1) != 1:
76 warnings.warn("Overriding 'internal_batch_size' to 1")
77
78 if 'internal_batch_size' in self._get_attribute_parameters():
79 kwargs['internal_batch_size'] = 1
80
81 self.kwargs = kwargs
82
83 def _get_mask_type(self) -> MaskLevelType:
84 r"""Based on the explainer config, return the mask type."""
85 node_mask_type = self.explainer_config.node_mask_type
86 edge_mask_type = self.explainer_config.edge_mask_type
87 if node_mask_type is not None and edge_mask_type is not None:
88 mask_type = MaskLevelType.node_and_edge
89 elif node_mask_type is not None:
90 mask_type = MaskLevelType.node
91 elif edge_mask_type is not None:
92 mask_type = MaskLevelType.edge
93 else:
94 raise ValueError("Neither node mask type nor "
95 "edge mask type is specified.")
96 return mask_type
97
98 def _get_attribute_parameters(self) -> Dict[str, Any]:
99 r"""Returns the attribute arguments."""
100 signature = inspect.signature(self.attribution_method.attribute)
101 return signature.parameters
102
103 def _needs_baseline(self) -> bool:
104 r"""Checks if the method needs a baseline."""
105 parameters = self._get_attribute_parameters()
106 if 'baselines' in parameters:
107 param = parameters['baselines']
108 if param.default is inspect.Parameter.empty:
109 return True
110 return False
111
112 def _is_supported_attribution_method(self) -> bool:
113 r"""Returns :obj:`True` if `self.attribution_method` is supported."""
114 # This is redundant for now since all supported methods need a baseline
115 if self._needs_baseline():
116 return False
117 elif self.attribution_method.__name__ in self.SUPPORTED_METHODS:
118 return True
119 return False
120
121 def forward(
122 self,
123 model: torch.nn.Module,
124 x: Union[Tensor, Dict[NodeType, Tensor]],
125 edge_index: Union[Tensor, Dict[EdgeType, Tensor]],
126 *,
127 target: Tensor,
128 index: Optional[Union[int, Tensor]] = None,
129 **kwargs,
130 ) -> Union[Explanation, HeteroExplanation]:
131
132 mask_type = self._get_mask_type()
133
134 inputs, add_forward_args = to_captum_input(
135 x,
136 edge_index,
137 mask_type,
138 *kwargs.values(),
139 )
140
141 if isinstance(x, dict):
142 metadata = (list(x.keys()), list(edge_index.keys()))
143 captum_model = CaptumHeteroModel(
144 model,
145 mask_type,
146 index,
147 metadata,
148 )
149 else:
150 metadata = None
151 captum_model = CaptumModel(model, mask_type, index)
152
153 self.attribution_method = self.attribution_method(captum_model)
154
155 # In captum, the target is the index for which
156 # the attribution is computed.
157 if self.model_config.mode == ModelMode.regression:
158 target = None
159 else:
160 target = target[index]
161
162 attributions = self.attribution_method.attribute(
163 inputs=inputs,
164 target=target,
165 additional_forward_args=add_forward_args,
166 **self.kwargs,
167 )
168
169 node_mask, edge_mask = convert_captum_output(
170 attributions,
171 mask_type,
172 metadata,
173 )
174
175 if not isinstance(x, dict):
176 return Explanation(node_mask=node_mask, edge_mask=edge_mask)
177
178 explanation = HeteroExplanation()
179 explanation.set_value_dict('node_mask', node_mask)
180 explanation.set_value_dict('edge_mask', edge_mask)
181 return explanation
182
183 def supports(self) -> bool:
184 node_mask_type = self.explainer_config.node_mask_type
185 if node_mask_type not in [None, MaskType.attributes]:
186 logging.error(f"'{self.__class__.__name__}' only supports "
187 f"'node_mask_type' None or 'attributes' "
188 f"(got '{node_mask_type.value}')")
189 return False
190
191 # TODO (ramona): Confirm that output type is valid.
192 return True
193
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torch_geometric/explain/algorithm/captum_explainer.py b/torch_geometric/explain/algorithm/captum_explainer.py
--- a/torch_geometric/explain/algorithm/captum_explainer.py
+++ b/torch_geometric/explain/algorithm/captum_explainer.py
@@ -150,7 +150,7 @@
metadata = None
captum_model = CaptumModel(model, mask_type, index)
- self.attribution_method = self.attribution_method(captum_model)
+ attribution_method = self.attribution_method(captum_model)
# In captum, the target is the index for which
# the attribution is computed.
@@ -159,7 +159,7 @@
else:
target = target[index]
- attributions = self.attribution_method.attribute(
+ attributions = attribution_method.attribute(
inputs=inputs,
target=target,
additional_forward_args=add_forward_args,
|
{"golden_diff": "diff --git a/torch_geometric/explain/algorithm/captum_explainer.py b/torch_geometric/explain/algorithm/captum_explainer.py\n--- a/torch_geometric/explain/algorithm/captum_explainer.py\n+++ b/torch_geometric/explain/algorithm/captum_explainer.py\n@@ -150,7 +150,7 @@\n metadata = None\n captum_model = CaptumModel(model, mask_type, index)\n \n- self.attribution_method = self.attribution_method(captum_model)\n+ attribution_method = self.attribution_method(captum_model)\n \n # In captum, the target is the index for which\n # the attribution is computed.\n@@ -159,7 +159,7 @@\n else:\n target = target[index]\n \n- attributions = self.attribution_method.attribute(\n+ attributions = attribution_method.attribute(\n inputs=inputs,\n target=target,\n additional_forward_args=add_forward_args,\n", "issue": "`CaptumExplainer` cannot be called multiple times in a row without creating an error\n### \ud83d\udc1b Describe the bug\n\nTrying to call an instance of `CaptumExplainer` twice raises an error.\r\n\r\n\r\nTo replicate, change https://github.com/pyg-team/pytorch_geometric/blob/dfd32668aea953c8bb56f97364d8e028f267bde6/examples/explain/captum_explainer.py#L60\r\n\r\nto \r\n\r\n```python\r\nexplanation = explainer(data.x, data.edge_index, index=node_index)\r\nexplanation_2 = explanation = explainer(data.x, data.edge_index, index=node_index+1)\r\n```\r\n\r\nwhich will raise\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \".../pytorch_geometric/examples/explain/captum_explainer.py\", line 62, in <module>\r\n explanation_2 = explainer(data.x, data.edge_index, index=11)\r\n File \".../pytorch_geometric/torch_geometric/explain/explainer.py\", line 198, in __call__\r\n explanation = self.algorithm(\r\n File \".../pytorch_geometric/env/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \".../torch_geometric/explain/algorithm/captum_explainer.py\", line 153, in forward\r\n self.attribution_method = self.attribution_method(captum_model)\r\nTypeError: 'IntegratedGradients' object is not callable\r\n```\r\n\r\nThis is because on the second call, the `CaptumExplainer` tries to recreates an attribution method from `captum`:\r\n\r\nhttps://github.com/pyg-team/pytorch_geometric/blob/dfd32668aea953c8bb56f97364d8e028f267bde6/torch_geometric/explain/algorithm/captum_explainer.py#L153 \r\n\n\n### Environment\n\n* PyG version: (from source, latest commit to date on master)\r\n* PyTorch version: 2.0.0\n", "before_files": [{"content": "import inspect\nimport logging\nimport warnings\nfrom typing import Any, Dict, Optional, Union\n\nimport torch\nfrom torch import Tensor\n\nfrom torch_geometric.explain import Explanation, HeteroExplanation\nfrom torch_geometric.explain.algorithm import ExplainerAlgorithm\nfrom torch_geometric.explain.algorithm.captum import (\n CaptumHeteroModel,\n CaptumModel,\n MaskLevelType,\n convert_captum_output,\n to_captum_input,\n)\nfrom torch_geometric.explain.config import MaskType, ModelMode\nfrom torch_geometric.typing import EdgeType, NodeType\n\n\nclass CaptumExplainer(ExplainerAlgorithm):\n \"\"\"A `Captum <https://captum.ai>`__-based explainer for identifying compact\n subgraph structures and node features that play a crucial role in the\n predictions made by a GNN.\n\n This explainer algorithm uses :captum:`null` `Captum <https://captum.ai/>`_\n to compute attributions.\n\n Currently, the following attribution methods are supported:\n\n * :class:`captum.attr.IntegratedGradients`\n * 
:class:`captum.attr.Saliency`\n * :class:`captum.attr.InputXGradient`\n * :class:`captum.attr.Deconvolution`\n * :class:`captum.attr.ShapleyValueSampling`\n * :class:`captum.attr.GuidedBackprop`\n\n Args:\n attribution_method (Attribution or str): The Captum attribution method\n to use. Can be a string or a :class:`captum.attr` method.\n **kwargs: Additional arguments for the Captum attribution method.\n \"\"\"\n SUPPORTED_METHODS = [ # TODO: Add support for more methods.\n 'IntegratedGradients',\n 'Saliency',\n 'InputXGradient',\n 'Deconvolution',\n 'ShapleyValueSampling',\n 'GuidedBackprop',\n ]\n\n def __init__(\n self,\n attribution_method: Union[str, Any],\n **kwargs,\n ):\n super().__init__()\n\n import captum.attr # noqa\n\n if isinstance(attribution_method, str):\n self.attribution_method = getattr(\n captum.attr,\n attribution_method,\n )\n else:\n self.attribution_method = attribution_method\n\n if not self._is_supported_attribution_method():\n raise ValueError(f\"{self.__class__.__name__} does not support \"\n f\"attribution method \"\n f\"{self.attribution_method.__name__}\")\n\n if kwargs.get('internal_batch_size', 1) != 1:\n warnings.warn(\"Overriding 'internal_batch_size' to 1\")\n\n if 'internal_batch_size' in self._get_attribute_parameters():\n kwargs['internal_batch_size'] = 1\n\n self.kwargs = kwargs\n\n def _get_mask_type(self) -> MaskLevelType:\n r\"\"\"Based on the explainer config, return the mask type.\"\"\"\n node_mask_type = self.explainer_config.node_mask_type\n edge_mask_type = self.explainer_config.edge_mask_type\n if node_mask_type is not None and edge_mask_type is not None:\n mask_type = MaskLevelType.node_and_edge\n elif node_mask_type is not None:\n mask_type = MaskLevelType.node\n elif edge_mask_type is not None:\n mask_type = MaskLevelType.edge\n else:\n raise ValueError(\"Neither node mask type nor \"\n \"edge mask type is specified.\")\n return mask_type\n\n def _get_attribute_parameters(self) -> Dict[str, Any]:\n r\"\"\"Returns the attribute arguments.\"\"\"\n signature = inspect.signature(self.attribution_method.attribute)\n return signature.parameters\n\n def _needs_baseline(self) -> bool:\n r\"\"\"Checks if the method needs a baseline.\"\"\"\n parameters = self._get_attribute_parameters()\n if 'baselines' in parameters:\n param = parameters['baselines']\n if param.default is inspect.Parameter.empty:\n return True\n return False\n\n def _is_supported_attribution_method(self) -> bool:\n r\"\"\"Returns :obj:`True` if `self.attribution_method` is supported.\"\"\"\n # This is redundant for now since all supported methods need a baseline\n if self._needs_baseline():\n return False\n elif self.attribution_method.__name__ in self.SUPPORTED_METHODS:\n return True\n return False\n\n def forward(\n self,\n model: torch.nn.Module,\n x: Union[Tensor, Dict[NodeType, Tensor]],\n edge_index: Union[Tensor, Dict[EdgeType, Tensor]],\n *,\n target: Tensor,\n index: Optional[Union[int, Tensor]] = None,\n **kwargs,\n ) -> Union[Explanation, HeteroExplanation]:\n\n mask_type = self._get_mask_type()\n\n inputs, add_forward_args = to_captum_input(\n x,\n edge_index,\n mask_type,\n *kwargs.values(),\n )\n\n if isinstance(x, dict):\n metadata = (list(x.keys()), list(edge_index.keys()))\n captum_model = CaptumHeteroModel(\n model,\n mask_type,\n index,\n metadata,\n )\n else:\n metadata = None\n captum_model = CaptumModel(model, mask_type, index)\n\n self.attribution_method = self.attribution_method(captum_model)\n\n # In captum, the target is the index for which\n # the 
attribution is computed.\n if self.model_config.mode == ModelMode.regression:\n target = None\n else:\n target = target[index]\n\n attributions = self.attribution_method.attribute(\n inputs=inputs,\n target=target,\n additional_forward_args=add_forward_args,\n **self.kwargs,\n )\n\n node_mask, edge_mask = convert_captum_output(\n attributions,\n mask_type,\n metadata,\n )\n\n if not isinstance(x, dict):\n return Explanation(node_mask=node_mask, edge_mask=edge_mask)\n\n explanation = HeteroExplanation()\n explanation.set_value_dict('node_mask', node_mask)\n explanation.set_value_dict('edge_mask', edge_mask)\n return explanation\n\n def supports(self) -> bool:\n node_mask_type = self.explainer_config.node_mask_type\n if node_mask_type not in [None, MaskType.attributes]:\n logging.error(f\"'{self.__class__.__name__}' only supports \"\n f\"'node_mask_type' None or 'attributes' \"\n f\"(got '{node_mask_type.value}')\")\n return False\n\n # TODO (ramona): Confirm that output type is valid.\n return True\n", "path": "torch_geometric/explain/algorithm/captum_explainer.py"}], "after_files": [{"content": "import inspect\nimport logging\nimport warnings\nfrom typing import Any, Dict, Optional, Union\n\nimport torch\nfrom torch import Tensor\n\nfrom torch_geometric.explain import Explanation, HeteroExplanation\nfrom torch_geometric.explain.algorithm import ExplainerAlgorithm\nfrom torch_geometric.explain.algorithm.captum import (\n CaptumHeteroModel,\n CaptumModel,\n MaskLevelType,\n convert_captum_output,\n to_captum_input,\n)\nfrom torch_geometric.explain.config import MaskType, ModelMode\nfrom torch_geometric.typing import EdgeType, NodeType\n\n\nclass CaptumExplainer(ExplainerAlgorithm):\n \"\"\"A `Captum <https://captum.ai>`__-based explainer for identifying compact\n subgraph structures and node features that play a crucial role in the\n predictions made by a GNN.\n\n This explainer algorithm uses :captum:`null` `Captum <https://captum.ai/>`_\n to compute attributions.\n\n Currently, the following attribution methods are supported:\n\n * :class:`captum.attr.IntegratedGradients`\n * :class:`captum.attr.Saliency`\n * :class:`captum.attr.InputXGradient`\n * :class:`captum.attr.Deconvolution`\n * :class:`captum.attr.ShapleyValueSampling`\n * :class:`captum.attr.GuidedBackprop`\n\n Args:\n attribution_method (Attribution or str): The Captum attribution method\n to use. 
Can be a string or a :class:`captum.attr` method.\n **kwargs: Additional arguments for the Captum attribution method.\n \"\"\"\n SUPPORTED_METHODS = [ # TODO: Add support for more methods.\n 'IntegratedGradients',\n 'Saliency',\n 'InputXGradient',\n 'Deconvolution',\n 'ShapleyValueSampling',\n 'GuidedBackprop',\n ]\n\n def __init__(\n self,\n attribution_method: Union[str, Any],\n **kwargs,\n ):\n super().__init__()\n\n import captum.attr # noqa\n\n if isinstance(attribution_method, str):\n self.attribution_method = getattr(\n captum.attr,\n attribution_method,\n )\n else:\n self.attribution_method = attribution_method\n\n if not self._is_supported_attribution_method():\n raise ValueError(f\"{self.__class__.__name__} does not support \"\n f\"attribution method \"\n f\"{self.attribution_method.__name__}\")\n\n if kwargs.get('internal_batch_size', 1) != 1:\n warnings.warn(\"Overriding 'internal_batch_size' to 1\")\n\n if 'internal_batch_size' in self._get_attribute_parameters():\n kwargs['internal_batch_size'] = 1\n\n self.kwargs = kwargs\n\n def _get_mask_type(self) -> MaskLevelType:\n r\"\"\"Based on the explainer config, return the mask type.\"\"\"\n node_mask_type = self.explainer_config.node_mask_type\n edge_mask_type = self.explainer_config.edge_mask_type\n if node_mask_type is not None and edge_mask_type is not None:\n mask_type = MaskLevelType.node_and_edge\n elif node_mask_type is not None:\n mask_type = MaskLevelType.node\n elif edge_mask_type is not None:\n mask_type = MaskLevelType.edge\n else:\n raise ValueError(\"Neither node mask type nor \"\n \"edge mask type is specified.\")\n return mask_type\n\n def _get_attribute_parameters(self) -> Dict[str, Any]:\n r\"\"\"Returns the attribute arguments.\"\"\"\n signature = inspect.signature(self.attribution_method.attribute)\n return signature.parameters\n\n def _needs_baseline(self) -> bool:\n r\"\"\"Checks if the method needs a baseline.\"\"\"\n parameters = self._get_attribute_parameters()\n if 'baselines' in parameters:\n param = parameters['baselines']\n if param.default is inspect.Parameter.empty:\n return True\n return False\n\n def _is_supported_attribution_method(self) -> bool:\n r\"\"\"Returns :obj:`True` if `self.attribution_method` is supported.\"\"\"\n # This is redundant for now since all supported methods need a baseline\n if self._needs_baseline():\n return False\n elif self.attribution_method.__name__ in self.SUPPORTED_METHODS:\n return True\n return False\n\n def forward(\n self,\n model: torch.nn.Module,\n x: Union[Tensor, Dict[NodeType, Tensor]],\n edge_index: Union[Tensor, Dict[EdgeType, Tensor]],\n *,\n target: Tensor,\n index: Optional[Union[int, Tensor]] = None,\n **kwargs,\n ) -> Union[Explanation, HeteroExplanation]:\n\n mask_type = self._get_mask_type()\n\n inputs, add_forward_args = to_captum_input(\n x,\n edge_index,\n mask_type,\n *kwargs.values(),\n )\n\n if isinstance(x, dict):\n metadata = (list(x.keys()), list(edge_index.keys()))\n captum_model = CaptumHeteroModel(\n model,\n mask_type,\n index,\n metadata,\n )\n else:\n metadata = None\n captum_model = CaptumModel(model, mask_type, index)\n\n attribution_method = self.attribution_method(captum_model)\n\n # In captum, the target is the index for which\n # the attribution is computed.\n if self.model_config.mode == ModelMode.regression:\n target = None\n else:\n target = target[index]\n\n attributions = attribution_method.attribute(\n inputs=inputs,\n target=target,\n additional_forward_args=add_forward_args,\n **self.kwargs,\n )\n\n node_mask, 
edge_mask = convert_captum_output(\n attributions,\n mask_type,\n metadata,\n )\n\n if not isinstance(x, dict):\n return Explanation(node_mask=node_mask, edge_mask=edge_mask)\n\n explanation = HeteroExplanation()\n explanation.set_value_dict('node_mask', node_mask)\n explanation.set_value_dict('edge_mask', edge_mask)\n return explanation\n\n def supports(self) -> bool:\n node_mask_type = self.explainer_config.node_mask_type\n if node_mask_type not in [None, MaskType.attributes]:\n logging.error(f\"'{self.__class__.__name__}' only supports \"\n f\"'node_mask_type' None or 'attributes' \"\n f\"(got '{node_mask_type.value}')\")\n return False\n\n # TODO (ramona): Confirm that output type is valid.\n return True\n", "path": "torch_geometric/explain/algorithm/captum_explainer.py"}]}
| 2,641 | 220 |
gh_patches_debug_8246
|
rasdani/github-patches
|
git_diff
|
searxng__searxng-3135
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Custom Links in the Footer
<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->
**Is your feature request related to a problem? Please describe.**
No.
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like**
Support for custom footer links. Currently, all that can be set are links to git repos, project-related pages, and instance owner mailto:. I would like to link back to a status page for the things that I host, and any other arbitrary thing. Preferably this would be a key(link name) -> value(url) map somewhere in the config.
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered**
Editing the templates manually.
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/settings_defaults.py`
Content:
```
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """Implementation of the default settings.
4
5 """
6
7 import typing
8 import numbers
9 import errno
10 import os
11 import logging
12 from base64 import b64decode
13 from os.path import dirname, abspath
14
15 from .sxng_locales import sxng_locales
16
17 searx_dir = abspath(dirname(__file__))
18
19 logger = logging.getLogger('searx')
20 OUTPUT_FORMATS = ['html', 'csv', 'json', 'rss']
21 SXNG_LOCALE_TAGS = ['all', 'auto'] + list(l[0] for l in sxng_locales)
22 SIMPLE_STYLE = ('auto', 'light', 'dark')
23 CATEGORIES_AS_TABS = {
24 'general': {},
25 'images': {},
26 'videos': {},
27 'news': {},
28 'map': {},
29 'music': {},
30 'it': {},
31 'science': {},
32 'files': {},
33 'social media': {},
34 }
35 STR_TO_BOOL = {
36 '0': False,
37 'false': False,
38 'off': False,
39 '1': True,
40 'true': True,
41 'on': True,
42 }
43 _UNDEFINED = object()
44
45
46 class SettingsValue:
47 """Check and update a setting value"""
48
49 def __init__(
50 self,
51 type_definition: typing.Union[None, typing.Any, typing.Tuple[typing.Any]] = None,
52 default: typing.Any = None,
53 environ_name: str = None,
54 ):
55 self.type_definition = (
56 type_definition if type_definition is None or isinstance(type_definition, tuple) else (type_definition,)
57 )
58 self.default = default
59 self.environ_name = environ_name
60
61 @property
62 def type_definition_repr(self):
63 types_str = [t.__name__ if isinstance(t, type) else repr(t) for t in self.type_definition]
64 return ', '.join(types_str)
65
66 def check_type_definition(self, value: typing.Any) -> None:
67 if value in self.type_definition:
68 return
69 type_list = tuple(t for t in self.type_definition if isinstance(t, type))
70 if not isinstance(value, type_list):
71 raise ValueError('The value has to be one of these types/values: {}'.format(self.type_definition_repr))
72
73 def __call__(self, value: typing.Any) -> typing.Any:
74 if value == _UNDEFINED:
75 value = self.default
76 # override existing value with environ
77 if self.environ_name and self.environ_name in os.environ:
78 value = os.environ[self.environ_name]
79 if self.type_definition == (bool,):
80 value = STR_TO_BOOL[value.lower()]
81
82 self.check_type_definition(value)
83 return value
84
85
86 class SettingSublistValue(SettingsValue):
87 """Check the value is a sublist of type definition."""
88
89 def check_type_definition(self, value: typing.Any) -> typing.Any:
90 if not isinstance(value, list):
91 raise ValueError('The value has to a list')
92 for item in value:
93 if not item in self.type_definition[0]:
94 raise ValueError('{} not in {}'.format(item, self.type_definition))
95
96
97 class SettingsDirectoryValue(SettingsValue):
98 """Check and update a setting value that is a directory path"""
99
100 def check_type_definition(self, value: typing.Any) -> typing.Any:
101 super().check_type_definition(value)
102 if not os.path.isdir(value):
103 raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), value)
104
105 def __call__(self, value: typing.Any) -> typing.Any:
106 if value == '':
107 value = self.default
108 return super().__call__(value)
109
110
111 class SettingsBytesValue(SettingsValue):
112 """str are base64 decoded"""
113
114 def __call__(self, value: typing.Any) -> typing.Any:
115 if isinstance(value, str):
116 value = b64decode(value)
117 return super().__call__(value)
118
119
120 def apply_schema(settings, schema, path_list):
121 error = False
122 for key, value in schema.items():
123 if isinstance(value, SettingsValue):
124 try:
125 settings[key] = value(settings.get(key, _UNDEFINED))
126 except Exception as e: # pylint: disable=broad-except
127 # don't stop now: check other values
128 logger.error('%s: %s', '.'.join([*path_list, key]), e)
129 error = True
130 elif isinstance(value, dict):
131 error = error or apply_schema(settings.setdefault(key, {}), schema[key], [*path_list, key])
132 else:
133 settings.setdefault(key, value)
134 if len(path_list) == 0 and error:
135 raise ValueError('Invalid settings.yml')
136 return error
137
138
139 SCHEMA = {
140 'general': {
141 'debug': SettingsValue(bool, False, 'SEARXNG_DEBUG'),
142 'instance_name': SettingsValue(str, 'SearXNG'),
143 'privacypolicy_url': SettingsValue((None, False, str), None),
144 'contact_url': SettingsValue((None, False, str), None),
145 'donation_url': SettingsValue((bool, str), "https://docs.searxng.org/donate.html"),
146 'enable_metrics': SettingsValue(bool, True),
147 },
148 'brand': {
149 'issue_url': SettingsValue(str, 'https://github.com/searxng/searxng/issues'),
150 'new_issue_url': SettingsValue(str, 'https://github.com/searxng/searxng/issues/new'),
151 'docs_url': SettingsValue(str, 'https://docs.searxng.org'),
152 'public_instances': SettingsValue((False, str), 'https://searx.space'),
153 'wiki_url': SettingsValue(str, 'https://github.com/searxng/searxng/wiki'),
154 },
155 'search': {
156 'safe_search': SettingsValue((0, 1, 2), 0),
157 'autocomplete': SettingsValue(str, ''),
158 'autocomplete_min': SettingsValue(int, 4),
159 'default_lang': SettingsValue(tuple(SXNG_LOCALE_TAGS + ['']), ''),
160 'languages': SettingSublistValue(SXNG_LOCALE_TAGS, SXNG_LOCALE_TAGS),
161 'ban_time_on_fail': SettingsValue(numbers.Real, 5),
162 'max_ban_time_on_fail': SettingsValue(numbers.Real, 120),
163 'suspended_times': {
164 'SearxEngineAccessDenied': SettingsValue(numbers.Real, 86400),
165 'SearxEngineCaptcha': SettingsValue(numbers.Real, 86400),
166 'SearxEngineTooManyRequests': SettingsValue(numbers.Real, 3600),
167 'cf_SearxEngineCaptcha': SettingsValue(numbers.Real, 1296000),
168 'cf_SearxEngineAccessDenied': SettingsValue(numbers.Real, 86400),
169 'recaptcha_SearxEngineCaptcha': SettingsValue(numbers.Real, 604800),
170 },
171 'formats': SettingsValue(list, OUTPUT_FORMATS),
172 'max_page': SettingsValue(int, 0),
173 },
174 'server': {
175 'port': SettingsValue((int, str), 8888, 'SEARXNG_PORT'),
176 'bind_address': SettingsValue(str, '127.0.0.1', 'SEARXNG_BIND_ADDRESS'),
177 'limiter': SettingsValue(bool, False),
178 'public_instance': SettingsValue(bool, False),
179 'secret_key': SettingsValue(str, environ_name='SEARXNG_SECRET'),
180 'base_url': SettingsValue((False, str), False, 'SEARXNG_BASE_URL'),
181 'image_proxy': SettingsValue(bool, False),
182 'http_protocol_version': SettingsValue(('1.0', '1.1'), '1.0'),
183 'method': SettingsValue(('POST', 'GET'), 'POST'),
184 'default_http_headers': SettingsValue(dict, {}),
185 },
186 'redis': {
187 'url': SettingsValue((None, False, str), False, 'SEARXNG_REDIS_URL'),
188 },
189 'ui': {
190 'static_path': SettingsDirectoryValue(str, os.path.join(searx_dir, 'static')),
191 'static_use_hash': SettingsValue(bool, False),
192 'templates_path': SettingsDirectoryValue(str, os.path.join(searx_dir, 'templates')),
193 'default_theme': SettingsValue(str, 'simple'),
194 'default_locale': SettingsValue(str, ''),
195 'theme_args': {
196 'simple_style': SettingsValue(SIMPLE_STYLE, 'auto'),
197 },
198 'center_alignment': SettingsValue(bool, False),
199 'results_on_new_tab': SettingsValue(bool, False),
200 'advanced_search': SettingsValue(bool, False),
201 'query_in_title': SettingsValue(bool, False),
202 'infinite_scroll': SettingsValue(bool, False),
203 'cache_url': SettingsValue(str, 'https://web.archive.org/web/'),
204 'search_on_category_select': SettingsValue(bool, True),
205 'hotkeys': SettingsValue(('default', 'vim'), 'default'),
206 },
207 'preferences': {
208 'lock': SettingsValue(list, []),
209 },
210 'outgoing': {
211 'useragent_suffix': SettingsValue(str, ''),
212 'request_timeout': SettingsValue(numbers.Real, 3.0),
213 'enable_http2': SettingsValue(bool, True),
214 'verify': SettingsValue((bool, str), True),
215 'max_request_timeout': SettingsValue((None, numbers.Real), None),
216 'pool_connections': SettingsValue(int, 100),
217 'pool_maxsize': SettingsValue(int, 10),
218 'keepalive_expiry': SettingsValue(numbers.Real, 5.0),
219 # default maximum redirect
220 # from https://github.com/psf/requests/blob/8c211a96cdbe9fe320d63d9e1ae15c5c07e179f8/requests/models.py#L55
221 'max_redirects': SettingsValue(int, 30),
222 'retries': SettingsValue(int, 0),
223 'proxies': SettingsValue((None, str, dict), None),
224 'source_ips': SettingsValue((None, str, list), None),
225 # Tor configuration
226 'using_tor_proxy': SettingsValue(bool, False),
227 'extra_proxy_timeout': SettingsValue(int, 0),
228 'networks': {},
229 },
230 'result_proxy': {
231 'url': SettingsValue((None, str), None),
232 'key': SettingsBytesValue((None, bytes), None),
233 'proxify_results': SettingsValue(bool, False),
234 },
235 'plugins': SettingsValue(list, []),
236 'enabled_plugins': SettingsValue((None, list), None),
237 'checker': {
238 'off_when_debug': SettingsValue(bool, True, None),
239 'scheduling': SettingsValue((None, dict), None, None),
240 },
241 'categories_as_tabs': SettingsValue(dict, CATEGORIES_AS_TABS),
242 'engines': SettingsValue(list, []),
243 'doi_resolvers': {},
244 }
245
246
247 def settings_set_defaults(settings):
248 apply_schema(settings, SCHEMA, [])
249 return settings
250
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/searx/settings_defaults.py b/searx/settings_defaults.py
--- a/searx/settings_defaults.py
+++ b/searx/settings_defaults.py
@@ -151,6 +151,7 @@
'docs_url': SettingsValue(str, 'https://docs.searxng.org'),
'public_instances': SettingsValue((False, str), 'https://searx.space'),
'wiki_url': SettingsValue(str, 'https://github.com/searxng/searxng/wiki'),
+ 'custom': SettingsValue(dict, {'links': {}}),
},
'search': {
'safe_search': SettingsValue((0, 1, 2), 0),
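As a quick sanity check of the patch above (which registers `'custom': SettingsValue(dict, {'links': {}})` under the `brand` section), here is a small sketch of how the new option behaves with the `SettingsValue` machinery from `searx/settings_defaults.py`. It assumes the patched module is importable; `_UNDEFINED` is imported only to simulate a key that is missing from settings.yml, and the link label/URL are made up:

```python
from searx.settings_defaults import SettingsValue, _UNDEFINED

custom = SettingsValue(dict, {'links': {}})

# Key absent from settings.yml -> the default (an empty "links" mapping) is used.
assert custom(_UNDEFINED) == {'links': {}}

# An admin-supplied mapping of link label -> URL passes the dict type check unchanged.
links = {'links': {'Status': 'https://status.example.org'}}
assert custom(links) == links
```

After `apply_schema` has run, the mapping is available as `settings['brand']['custom']['links']`; a template change (not part of this diff) could then render those entries in the footer.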
|
{"golden_diff": "diff --git a/searx/settings_defaults.py b/searx/settings_defaults.py\n--- a/searx/settings_defaults.py\n+++ b/searx/settings_defaults.py\n@@ -151,6 +151,7 @@\n 'docs_url': SettingsValue(str, 'https://docs.searxng.org'),\n 'public_instances': SettingsValue((False, str), 'https://searx.space'),\n 'wiki_url': SettingsValue(str, 'https://github.com/searxng/searxng/wiki'),\n+ 'custom': SettingsValue(dict, {'links': {}}),\n },\n 'search': {\n 'safe_search': SettingsValue((0, 1, 2), 0),\n", "issue": "Custom Links in the Footer\n<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->\r\n\r\n**Is your feature request related to a problem? Please describe.**\r\n\r\nNo.\r\n<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->\r\n\r\n**Describe the solution you'd like**\r\n\r\nSupport for custom footer links. Currently, all that can be set are links to git repos, project-related pages, and instance owner mailto:. I would like to link back to a status page for the things that I host, and any other arbitrary thing. Preferably this would be a key(link name) -> value(url) map somewhere in the config.\r\n<!-- A clear and concise description of what you want to happen. -->\r\n\r\n**Describe alternatives you've considered**\r\n\r\nEditing the templates manually.\r\n<!-- A clear and concise description of any alternative solutions or features you've considered. -->\r\n\r\n**Additional context**\r\n\r\n<!-- Add any other context or screenshots about the feature request here. -->\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Implementation of the default settings.\n\n\"\"\"\n\nimport typing\nimport numbers\nimport errno\nimport os\nimport logging\nfrom base64 import b64decode\nfrom os.path import dirname, abspath\n\nfrom .sxng_locales import sxng_locales\n\nsearx_dir = abspath(dirname(__file__))\n\nlogger = logging.getLogger('searx')\nOUTPUT_FORMATS = ['html', 'csv', 'json', 'rss']\nSXNG_LOCALE_TAGS = ['all', 'auto'] + list(l[0] for l in sxng_locales)\nSIMPLE_STYLE = ('auto', 'light', 'dark')\nCATEGORIES_AS_TABS = {\n 'general': {},\n 'images': {},\n 'videos': {},\n 'news': {},\n 'map': {},\n 'music': {},\n 'it': {},\n 'science': {},\n 'files': {},\n 'social media': {},\n}\nSTR_TO_BOOL = {\n '0': False,\n 'false': False,\n 'off': False,\n '1': True,\n 'true': True,\n 'on': True,\n}\n_UNDEFINED = object()\n\n\nclass SettingsValue:\n \"\"\"Check and update a setting value\"\"\"\n\n def __init__(\n self,\n type_definition: typing.Union[None, typing.Any, typing.Tuple[typing.Any]] = None,\n default: typing.Any = None,\n environ_name: str = None,\n ):\n self.type_definition = (\n type_definition if type_definition is None or isinstance(type_definition, tuple) else (type_definition,)\n )\n self.default = default\n self.environ_name = environ_name\n\n @property\n def type_definition_repr(self):\n types_str = [t.__name__ if isinstance(t, type) else repr(t) for t in self.type_definition]\n return ', '.join(types_str)\n\n def check_type_definition(self, value: typing.Any) -> None:\n if value in self.type_definition:\n return\n type_list = tuple(t for t in self.type_definition if isinstance(t, type))\n if not isinstance(value, type_list):\n raise ValueError('The value has to be one of these types/values: {}'.format(self.type_definition_repr))\n\n def __call__(self, value: typing.Any) -> typing.Any:\n if value == _UNDEFINED:\n value = self.default\n # override existing value with 
environ\n if self.environ_name and self.environ_name in os.environ:\n value = os.environ[self.environ_name]\n if self.type_definition == (bool,):\n value = STR_TO_BOOL[value.lower()]\n\n self.check_type_definition(value)\n return value\n\n\nclass SettingSublistValue(SettingsValue):\n \"\"\"Check the value is a sublist of type definition.\"\"\"\n\n def check_type_definition(self, value: typing.Any) -> typing.Any:\n if not isinstance(value, list):\n raise ValueError('The value has to a list')\n for item in value:\n if not item in self.type_definition[0]:\n raise ValueError('{} not in {}'.format(item, self.type_definition))\n\n\nclass SettingsDirectoryValue(SettingsValue):\n \"\"\"Check and update a setting value that is a directory path\"\"\"\n\n def check_type_definition(self, value: typing.Any) -> typing.Any:\n super().check_type_definition(value)\n if not os.path.isdir(value):\n raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), value)\n\n def __call__(self, value: typing.Any) -> typing.Any:\n if value == '':\n value = self.default\n return super().__call__(value)\n\n\nclass SettingsBytesValue(SettingsValue):\n \"\"\"str are base64 decoded\"\"\"\n\n def __call__(self, value: typing.Any) -> typing.Any:\n if isinstance(value, str):\n value = b64decode(value)\n return super().__call__(value)\n\n\ndef apply_schema(settings, schema, path_list):\n error = False\n for key, value in schema.items():\n if isinstance(value, SettingsValue):\n try:\n settings[key] = value(settings.get(key, _UNDEFINED))\n except Exception as e: # pylint: disable=broad-except\n # don't stop now: check other values\n logger.error('%s: %s', '.'.join([*path_list, key]), e)\n error = True\n elif isinstance(value, dict):\n error = error or apply_schema(settings.setdefault(key, {}), schema[key], [*path_list, key])\n else:\n settings.setdefault(key, value)\n if len(path_list) == 0 and error:\n raise ValueError('Invalid settings.yml')\n return error\n\n\nSCHEMA = {\n 'general': {\n 'debug': SettingsValue(bool, False, 'SEARXNG_DEBUG'),\n 'instance_name': SettingsValue(str, 'SearXNG'),\n 'privacypolicy_url': SettingsValue((None, False, str), None),\n 'contact_url': SettingsValue((None, False, str), None),\n 'donation_url': SettingsValue((bool, str), \"https://docs.searxng.org/donate.html\"),\n 'enable_metrics': SettingsValue(bool, True),\n },\n 'brand': {\n 'issue_url': SettingsValue(str, 'https://github.com/searxng/searxng/issues'),\n 'new_issue_url': SettingsValue(str, 'https://github.com/searxng/searxng/issues/new'),\n 'docs_url': SettingsValue(str, 'https://docs.searxng.org'),\n 'public_instances': SettingsValue((False, str), 'https://searx.space'),\n 'wiki_url': SettingsValue(str, 'https://github.com/searxng/searxng/wiki'),\n },\n 'search': {\n 'safe_search': SettingsValue((0, 1, 2), 0),\n 'autocomplete': SettingsValue(str, ''),\n 'autocomplete_min': SettingsValue(int, 4),\n 'default_lang': SettingsValue(tuple(SXNG_LOCALE_TAGS + ['']), ''),\n 'languages': SettingSublistValue(SXNG_LOCALE_TAGS, SXNG_LOCALE_TAGS),\n 'ban_time_on_fail': SettingsValue(numbers.Real, 5),\n 'max_ban_time_on_fail': SettingsValue(numbers.Real, 120),\n 'suspended_times': {\n 'SearxEngineAccessDenied': SettingsValue(numbers.Real, 86400),\n 'SearxEngineCaptcha': SettingsValue(numbers.Real, 86400),\n 'SearxEngineTooManyRequests': SettingsValue(numbers.Real, 3600),\n 'cf_SearxEngineCaptcha': SettingsValue(numbers.Real, 1296000),\n 'cf_SearxEngineAccessDenied': SettingsValue(numbers.Real, 86400),\n 'recaptcha_SearxEngineCaptcha': 
SettingsValue(numbers.Real, 604800),\n },\n 'formats': SettingsValue(list, OUTPUT_FORMATS),\n 'max_page': SettingsValue(int, 0),\n },\n 'server': {\n 'port': SettingsValue((int, str), 8888, 'SEARXNG_PORT'),\n 'bind_address': SettingsValue(str, '127.0.0.1', 'SEARXNG_BIND_ADDRESS'),\n 'limiter': SettingsValue(bool, False),\n 'public_instance': SettingsValue(bool, False),\n 'secret_key': SettingsValue(str, environ_name='SEARXNG_SECRET'),\n 'base_url': SettingsValue((False, str), False, 'SEARXNG_BASE_URL'),\n 'image_proxy': SettingsValue(bool, False),\n 'http_protocol_version': SettingsValue(('1.0', '1.1'), '1.0'),\n 'method': SettingsValue(('POST', 'GET'), 'POST'),\n 'default_http_headers': SettingsValue(dict, {}),\n },\n 'redis': {\n 'url': SettingsValue((None, False, str), False, 'SEARXNG_REDIS_URL'),\n },\n 'ui': {\n 'static_path': SettingsDirectoryValue(str, os.path.join(searx_dir, 'static')),\n 'static_use_hash': SettingsValue(bool, False),\n 'templates_path': SettingsDirectoryValue(str, os.path.join(searx_dir, 'templates')),\n 'default_theme': SettingsValue(str, 'simple'),\n 'default_locale': SettingsValue(str, ''),\n 'theme_args': {\n 'simple_style': SettingsValue(SIMPLE_STYLE, 'auto'),\n },\n 'center_alignment': SettingsValue(bool, False),\n 'results_on_new_tab': SettingsValue(bool, False),\n 'advanced_search': SettingsValue(bool, False),\n 'query_in_title': SettingsValue(bool, False),\n 'infinite_scroll': SettingsValue(bool, False),\n 'cache_url': SettingsValue(str, 'https://web.archive.org/web/'),\n 'search_on_category_select': SettingsValue(bool, True),\n 'hotkeys': SettingsValue(('default', 'vim'), 'default'),\n },\n 'preferences': {\n 'lock': SettingsValue(list, []),\n },\n 'outgoing': {\n 'useragent_suffix': SettingsValue(str, ''),\n 'request_timeout': SettingsValue(numbers.Real, 3.0),\n 'enable_http2': SettingsValue(bool, True),\n 'verify': SettingsValue((bool, str), True),\n 'max_request_timeout': SettingsValue((None, numbers.Real), None),\n 'pool_connections': SettingsValue(int, 100),\n 'pool_maxsize': SettingsValue(int, 10),\n 'keepalive_expiry': SettingsValue(numbers.Real, 5.0),\n # default maximum redirect\n # from https://github.com/psf/requests/blob/8c211a96cdbe9fe320d63d9e1ae15c5c07e179f8/requests/models.py#L55\n 'max_redirects': SettingsValue(int, 30),\n 'retries': SettingsValue(int, 0),\n 'proxies': SettingsValue((None, str, dict), None),\n 'source_ips': SettingsValue((None, str, list), None),\n # Tor configuration\n 'using_tor_proxy': SettingsValue(bool, False),\n 'extra_proxy_timeout': SettingsValue(int, 0),\n 'networks': {},\n },\n 'result_proxy': {\n 'url': SettingsValue((None, str), None),\n 'key': SettingsBytesValue((None, bytes), None),\n 'proxify_results': SettingsValue(bool, False),\n },\n 'plugins': SettingsValue(list, []),\n 'enabled_plugins': SettingsValue((None, list), None),\n 'checker': {\n 'off_when_debug': SettingsValue(bool, True, None),\n 'scheduling': SettingsValue((None, dict), None, None),\n },\n 'categories_as_tabs': SettingsValue(dict, CATEGORIES_AS_TABS),\n 'engines': SettingsValue(list, []),\n 'doi_resolvers': {},\n}\n\n\ndef settings_set_defaults(settings):\n apply_schema(settings, SCHEMA, [])\n return settings\n", "path": "searx/settings_defaults.py"}], "after_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Implementation of the default settings.\n\n\"\"\"\n\nimport typing\nimport numbers\nimport errno\nimport os\nimport logging\nfrom base64 import b64decode\nfrom os.path import dirname, 
abspath\n\nfrom .sxng_locales import sxng_locales\n\nsearx_dir = abspath(dirname(__file__))\n\nlogger = logging.getLogger('searx')\nOUTPUT_FORMATS = ['html', 'csv', 'json', 'rss']\nSXNG_LOCALE_TAGS = ['all', 'auto'] + list(l[0] for l in sxng_locales)\nSIMPLE_STYLE = ('auto', 'light', 'dark')\nCATEGORIES_AS_TABS = {\n 'general': {},\n 'images': {},\n 'videos': {},\n 'news': {},\n 'map': {},\n 'music': {},\n 'it': {},\n 'science': {},\n 'files': {},\n 'social media': {},\n}\nSTR_TO_BOOL = {\n '0': False,\n 'false': False,\n 'off': False,\n '1': True,\n 'true': True,\n 'on': True,\n}\n_UNDEFINED = object()\n\n\nclass SettingsValue:\n \"\"\"Check and update a setting value\"\"\"\n\n def __init__(\n self,\n type_definition: typing.Union[None, typing.Any, typing.Tuple[typing.Any]] = None,\n default: typing.Any = None,\n environ_name: str = None,\n ):\n self.type_definition = (\n type_definition if type_definition is None or isinstance(type_definition, tuple) else (type_definition,)\n )\n self.default = default\n self.environ_name = environ_name\n\n @property\n def type_definition_repr(self):\n types_str = [t.__name__ if isinstance(t, type) else repr(t) for t in self.type_definition]\n return ', '.join(types_str)\n\n def check_type_definition(self, value: typing.Any) -> None:\n if value in self.type_definition:\n return\n type_list = tuple(t for t in self.type_definition if isinstance(t, type))\n if not isinstance(value, type_list):\n raise ValueError('The value has to be one of these types/values: {}'.format(self.type_definition_repr))\n\n def __call__(self, value: typing.Any) -> typing.Any:\n if value == _UNDEFINED:\n value = self.default\n # override existing value with environ\n if self.environ_name and self.environ_name in os.environ:\n value = os.environ[self.environ_name]\n if self.type_definition == (bool,):\n value = STR_TO_BOOL[value.lower()]\n\n self.check_type_definition(value)\n return value\n\n\nclass SettingSublistValue(SettingsValue):\n \"\"\"Check the value is a sublist of type definition.\"\"\"\n\n def check_type_definition(self, value: typing.Any) -> typing.Any:\n if not isinstance(value, list):\n raise ValueError('The value has to a list')\n for item in value:\n if not item in self.type_definition[0]:\n raise ValueError('{} not in {}'.format(item, self.type_definition))\n\n\nclass SettingsDirectoryValue(SettingsValue):\n \"\"\"Check and update a setting value that is a directory path\"\"\"\n\n def check_type_definition(self, value: typing.Any) -> typing.Any:\n super().check_type_definition(value)\n if not os.path.isdir(value):\n raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), value)\n\n def __call__(self, value: typing.Any) -> typing.Any:\n if value == '':\n value = self.default\n return super().__call__(value)\n\n\nclass SettingsBytesValue(SettingsValue):\n \"\"\"str are base64 decoded\"\"\"\n\n def __call__(self, value: typing.Any) -> typing.Any:\n if isinstance(value, str):\n value = b64decode(value)\n return super().__call__(value)\n\n\ndef apply_schema(settings, schema, path_list):\n error = False\n for key, value in schema.items():\n if isinstance(value, SettingsValue):\n try:\n settings[key] = value(settings.get(key, _UNDEFINED))\n except Exception as e: # pylint: disable=broad-except\n # don't stop now: check other values\n logger.error('%s: %s', '.'.join([*path_list, key]), e)\n error = True\n elif isinstance(value, dict):\n error = error or apply_schema(settings.setdefault(key, {}), schema[key], [*path_list, key])\n else:\n settings.setdefault(key, 
value)\n if len(path_list) == 0 and error:\n raise ValueError('Invalid settings.yml')\n return error\n\n\nSCHEMA = {\n 'general': {\n 'debug': SettingsValue(bool, False, 'SEARXNG_DEBUG'),\n 'instance_name': SettingsValue(str, 'SearXNG'),\n 'privacypolicy_url': SettingsValue((None, False, str), None),\n 'contact_url': SettingsValue((None, False, str), None),\n 'donation_url': SettingsValue((bool, str), \"https://docs.searxng.org/donate.html\"),\n 'enable_metrics': SettingsValue(bool, True),\n },\n 'brand': {\n 'issue_url': SettingsValue(str, 'https://github.com/searxng/searxng/issues'),\n 'new_issue_url': SettingsValue(str, 'https://github.com/searxng/searxng/issues/new'),\n 'docs_url': SettingsValue(str, 'https://docs.searxng.org'),\n 'public_instances': SettingsValue((False, str), 'https://searx.space'),\n 'wiki_url': SettingsValue(str, 'https://github.com/searxng/searxng/wiki'),\n 'custom': SettingsValue(dict, {'links': {}}),\n },\n 'search': {\n 'safe_search': SettingsValue((0, 1, 2), 0),\n 'autocomplete': SettingsValue(str, ''),\n 'autocomplete_min': SettingsValue(int, 4),\n 'default_lang': SettingsValue(tuple(SXNG_LOCALE_TAGS + ['']), ''),\n 'languages': SettingSublistValue(SXNG_LOCALE_TAGS, SXNG_LOCALE_TAGS),\n 'ban_time_on_fail': SettingsValue(numbers.Real, 5),\n 'max_ban_time_on_fail': SettingsValue(numbers.Real, 120),\n 'suspended_times': {\n 'SearxEngineAccessDenied': SettingsValue(numbers.Real, 86400),\n 'SearxEngineCaptcha': SettingsValue(numbers.Real, 86400),\n 'SearxEngineTooManyRequests': SettingsValue(numbers.Real, 3600),\n 'cf_SearxEngineCaptcha': SettingsValue(numbers.Real, 1296000),\n 'cf_SearxEngineAccessDenied': SettingsValue(numbers.Real, 86400),\n 'recaptcha_SearxEngineCaptcha': SettingsValue(numbers.Real, 604800),\n },\n 'formats': SettingsValue(list, OUTPUT_FORMATS),\n 'max_page': SettingsValue(int, 0),\n },\n 'server': {\n 'port': SettingsValue((int, str), 8888, 'SEARXNG_PORT'),\n 'bind_address': SettingsValue(str, '127.0.0.1', 'SEARXNG_BIND_ADDRESS'),\n 'limiter': SettingsValue(bool, False),\n 'public_instance': SettingsValue(bool, False),\n 'secret_key': SettingsValue(str, environ_name='SEARXNG_SECRET'),\n 'base_url': SettingsValue((False, str), False, 'SEARXNG_BASE_URL'),\n 'image_proxy': SettingsValue(bool, False),\n 'http_protocol_version': SettingsValue(('1.0', '1.1'), '1.0'),\n 'method': SettingsValue(('POST', 'GET'), 'POST'),\n 'default_http_headers': SettingsValue(dict, {}),\n },\n 'redis': {\n 'url': SettingsValue((None, False, str), False, 'SEARXNG_REDIS_URL'),\n },\n 'ui': {\n 'static_path': SettingsDirectoryValue(str, os.path.join(searx_dir, 'static')),\n 'static_use_hash': SettingsValue(bool, False),\n 'templates_path': SettingsDirectoryValue(str, os.path.join(searx_dir, 'templates')),\n 'default_theme': SettingsValue(str, 'simple'),\n 'default_locale': SettingsValue(str, ''),\n 'theme_args': {\n 'simple_style': SettingsValue(SIMPLE_STYLE, 'auto'),\n },\n 'center_alignment': SettingsValue(bool, False),\n 'results_on_new_tab': SettingsValue(bool, False),\n 'advanced_search': SettingsValue(bool, False),\n 'query_in_title': SettingsValue(bool, False),\n 'infinite_scroll': SettingsValue(bool, False),\n 'cache_url': SettingsValue(str, 'https://web.archive.org/web/'),\n 'search_on_category_select': SettingsValue(bool, True),\n 'hotkeys': SettingsValue(('default', 'vim'), 'default'),\n },\n 'preferences': {\n 'lock': SettingsValue(list, []),\n },\n 'outgoing': {\n 'useragent_suffix': SettingsValue(str, ''),\n 'request_timeout': SettingsValue(numbers.Real, 
3.0),\n 'enable_http2': SettingsValue(bool, True),\n 'verify': SettingsValue((bool, str), True),\n 'max_request_timeout': SettingsValue((None, numbers.Real), None),\n 'pool_connections': SettingsValue(int, 100),\n 'pool_maxsize': SettingsValue(int, 10),\n 'keepalive_expiry': SettingsValue(numbers.Real, 5.0),\n # default maximum redirect\n # from https://github.com/psf/requests/blob/8c211a96cdbe9fe320d63d9e1ae15c5c07e179f8/requests/models.py#L55\n 'max_redirects': SettingsValue(int, 30),\n 'retries': SettingsValue(int, 0),\n 'proxies': SettingsValue((None, str, dict), None),\n 'source_ips': SettingsValue((None, str, list), None),\n # Tor configuration\n 'using_tor_proxy': SettingsValue(bool, False),\n 'extra_proxy_timeout': SettingsValue(int, 0),\n 'networks': {},\n },\n 'result_proxy': {\n 'url': SettingsValue((None, str), None),\n 'key': SettingsBytesValue((None, bytes), None),\n 'proxify_results': SettingsValue(bool, False),\n },\n 'plugins': SettingsValue(list, []),\n 'enabled_plugins': SettingsValue((None, list), None),\n 'checker': {\n 'off_when_debug': SettingsValue(bool, True, None),\n 'scheduling': SettingsValue((None, dict), None, None),\n },\n 'categories_as_tabs': SettingsValue(dict, CATEGORIES_AS_TABS),\n 'engines': SettingsValue(list, []),\n 'doi_resolvers': {},\n}\n\n\ndef settings_set_defaults(settings):\n apply_schema(settings, SCHEMA, [])\n return settings\n", "path": "searx/settings_defaults.py"}]}
| 3,526 | 154 |
gh_patches_debug_41874
|
rasdani/github-patches
|
git_diff
|
avocado-framework__avocado-4277
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
status server uri should be different from listen
nrunner.status_server_uri should not be used for both the server and the client. We should have a separate setting for configuring the client.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `avocado/plugins/runner_nrunner.py`
Content:
```
1 # This program is free software; you can redistribute it and/or modify
2 # it under the terms of the GNU General Public License as published by
3 # the Free Software Foundation; either version 2 of the License, or
4 # (at your option) any later version.
5 #
6 # This program is distributed in the hope that it will be useful,
7 # but WITHOUT ANY WARRANTY; without even the implied warranty of
8 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
9 #
10 # See LICENSE for more details.
11 #
12 # Copyright: Red Hat Inc. 2019-2020
13 # Authors: Cleber Rosa <[email protected]>
14
15 """
16 NRunner based implementation of job compliant runner
17 """
18
19 import asyncio
20 import json
21 import multiprocessing
22 import os
23 import random
24 from copy import copy
25
26 from avocado.core import nrunner
27 from avocado.core.dispatcher import SpawnerDispatcher
28 from avocado.core.plugin_interfaces import CLI, Init
29 from avocado.core.plugin_interfaces import Runner as RunnerInterface
30 from avocado.core.settings import settings
31 from avocado.core.status.repo import StatusRepo
32 from avocado.core.status.server import StatusServer
33 from avocado.core.task.runtime import RuntimeTask
34 from avocado.core.task.statemachine import TaskStateMachine, Worker
35 from avocado.core.test_id import TestID
36 from avocado.core.teststatus import mapping
37
38
39 class RunnerInit(Init):
40
41 name = 'nrunner'
42 description = 'nrunner initialization'
43
44 def initialize(self):
45 section = 'nrunner'
46 help_msg = 'Shuffle the tasks to be executed'
47 settings.register_option(section=section,
48 key='shuffle',
49 default=False,
50 help_msg=help_msg,
51 key_type=bool)
52
53 help_msg = 'URI for the status server, usually a "HOST:PORT" string'
54 settings.register_option(section=section,
55 key='status_server_uri',
56 default='127.0.0.1:8888',
57 metavar="HOST:PORT",
58 help_msg=help_msg)
59
60 help_msg = ('Number of maximum number tasks running in parallel. You '
61 'can disable parallel execution by setting this to 1. '
62 'Defaults to the amount of CPUs on this machine.')
63 settings.register_option(section=section,
64 key='max_parallel_tasks',
65 default=multiprocessing.cpu_count(),
66 key_type=int,
67 help_msg=help_msg)
68
69 help_msg = ("Spawn tasks in a specific spawner. Available spawners: "
70 "'process' and 'podman'")
71 settings.register_option(section=section,
72 key="spawner",
73 default='process',
74 help_msg=help_msg)
75
76
77 class RunnerCLI(CLI):
78
79 name = 'nrunner'
80 description = 'nrunner command line options for "run"'
81
82 def configure(self, parser):
83 super(RunnerCLI, self).configure(parser)
84 parser = parser.subcommands.choices.get('run', None)
85 if parser is None:
86 return
87
88 parser = parser.add_argument_group('nrunner specific options')
89 settings.add_argparser_to_option(namespace='nrunner.shuffle',
90 parser=parser,
91 long_arg='--nrunner-shuffle',
92 action='store_true')
93
94 settings.add_argparser_to_option(namespace='nrunner.status_server_uri',
95 parser=parser,
96 long_arg='--nrunner-status-server-uri')
97
98 settings.add_argparser_to_option(namespace='nrunner.max_parallel_tasks',
99 parser=parser,
100 long_arg='--nrunner-max-parallel-tasks')
101
102 settings.add_argparser_to_option(namespace='nrunner.spawner',
103 parser=parser,
104 long_arg='--nrunner-spawner')
105
106 def run(self, config):
107 pass
108
109
110 class Runner(RunnerInterface):
111
112 name = 'nrunner'
113 description = 'nrunner based implementation of job compliant runner'
114
115 def _save_to_file(self, filename, buff, mode='wb'):
116 with open(filename, mode) as fp:
117 fp.write(buff)
118
119 def _populate_task_logdir(self, base_path, task, statuses, debug=False):
120 # We are copying here to avoid printing duplicated information
121 local_statuses = copy(statuses)
122 last = local_statuses[-1]
123 try:
124 stdout = last.pop('stdout')
125 except KeyError:
126 stdout = None
127 try:
128 stderr = last.pop('stderr')
129 except KeyError:
130 stderr = None
131
132 # Create task dir
133 task_path = os.path.join(base_path, task.identifier.str_filesystem)
134 os.makedirs(task_path, exist_ok=True)
135
136 # Save stdout and stderr
137 if stdout is not None:
138 stdout_file = os.path.join(task_path, 'stdout')
139 self._save_to_file(stdout_file, stdout)
140 if stderr is not None:
141 stderr_file = os.path.join(task_path, 'stderr')
142 self._save_to_file(stderr_file, stderr)
143
144 # Save debug
145 if debug:
146 debug = os.path.join(task_path, 'debug')
147 with open(debug, 'w') as fp:
148 json.dump(local_statuses, fp)
149
150 data_file = os.path.join(task_path, 'data')
151 with open(data_file, 'w') as fp:
152 fp.write("{}\n".format(task.output_dir))
153
154 def _get_all_runtime_tasks(self, test_suite):
155 result = []
156 no_digits = len(str(len(test_suite)))
157 for index, task in enumerate(test_suite.tests, start=1):
158 task.known_runners = nrunner.RUNNERS_REGISTRY_PYTHON_CLASS
159 # this is all rubbish data
160 if test_suite.name:
161 prefix = "{}-{}".format(test_suite.name, index)
162 else:
163 prefix = index
164 test_id = TestID(prefix,
165 task.runnable.uri,
166 None,
167 no_digits)
168 task.identifier = test_id
169 result.append(RuntimeTask(task))
170 return result
171
172 def _start_status_server(self, status_server_uri):
173 # pylint: disable=W0201
174 self.status_repo = StatusRepo()
175 # pylint: disable=W0201
176 self.status_server = StatusServer(status_server_uri,
177 self.status_repo)
178 asyncio.ensure_future(self.status_server.serve_forever())
179
180 async def _update_status(self, job):
181 tasks_by_id = {str(runtime_task.task.identifier): runtime_task.task
182 for runtime_task in self.tasks}
183 while True:
184 try:
185 (task_id, status, _) = self.status_repo.status_journal_summary.pop(0)
186
187 except IndexError:
188 await asyncio.sleep(0.05)
189 continue
190
191 task = tasks_by_id.get(task_id)
192 early_state = {'name': task.identifier,
193 'job_logdir': job.logdir,
194 'job_unique_id': job.unique_id}
195 if status == 'started':
196 job.result.start_test(early_state)
197 job.result_events_dispatcher.map_method('start_test',
198 job.result,
199 early_state)
200 elif status == 'finished':
201 this_task_data = self.status_repo.get_task_data(task_id)
202 last_task_status = this_task_data[-1]
203 test_state = {'status': last_task_status.get('result').upper()}
204 test_state.update(early_state)
205
206 time_start = this_task_data[0]['time']
207 time_end = last_task_status['time']
208 time_elapsed = time_end - time_start
209 test_state['time_start'] = time_start
210 test_state['time_end'] = time_end
211 test_state['time_elapsed'] = time_elapsed
212
213 # fake log dir, needed by some result plugins such as HTML
214 test_state['logdir'] = ''
215
216 base_path = os.path.join(job.logdir, 'test-results')
217 self._populate_task_logdir(base_path,
218 task,
219 this_task_data,
220 job.config.get('core.debug'))
221
222 job.result.check_test(test_state)
223 job.result_events_dispatcher.map_method('end_test',
224 job.result,
225 test_state)
226
227 if not mapping[test_state['status']]:
228 self.summary.add("FAIL")
229
230 def run_suite(self, job, test_suite):
231 # pylint: disable=W0201
232 self.summary = set()
233
234 test_suite.tests, _ = nrunner.check_tasks_requirements(test_suite.tests)
235 job.result.tests_total = test_suite.size # no support for variants yet
236
237 status_server_uri = test_suite.config.get('nrunner.status_server_uri')
238 self._start_status_server(status_server_uri)
239
240 # pylint: disable=W0201
241 self.tasks = self._get_all_runtime_tasks(test_suite)
242 if test_suite.config.get('nrunner.shuffle'):
243 random.shuffle(self.tasks)
244 tsm = TaskStateMachine(self.tasks)
245 spawner_name = test_suite.config.get('nrunner.spawner')
246 spawner = SpawnerDispatcher(test_suite.config)[spawner_name].obj
247 max_running = test_suite.config.get('nrunner.max_parallel_tasks')
248 workers = [Worker(tsm, spawner, max_running=max_running).run()
249 for _ in range(max_running)]
250 asyncio.ensure_future(self._update_status(job))
251 loop = asyncio.get_event_loop()
252 try:
253 loop.run_until_complete(asyncio.wait_for(asyncio.gather(*workers),
254 job.timeout or None))
255 except (KeyboardInterrupt, asyncio.TimeoutError):
256 self.summary.add("INTERRUPTED")
257
258 # Wait until all messages may have been processed by the
259 # status_updater. This should be replaced by a mechanism
260 # that only waits if there are missing status messages to
261 # be processed, and, only for a given amount of time.
262 # Tests with non received status will always show as SKIP
263 # because of result reconciliation.
264 loop.run_until_complete(asyncio.sleep(0.05))
265
266 job.result.end_tests()
267 return self.summary
268
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/avocado/plugins/runner_nrunner.py b/avocado/plugins/runner_nrunner.py
--- a/avocado/plugins/runner_nrunner.py
+++ b/avocado/plugins/runner_nrunner.py
@@ -50,7 +50,17 @@
help_msg=help_msg,
key_type=bool)
- help_msg = 'URI for the status server, usually a "HOST:PORT" string'
+ help_msg = ('URI for listing the status server. Usually '
+ 'a "HOST:PORT" string')
+ settings.register_option(section=section,
+ key='status_server_listen',
+ default='127.0.0.1:8888',
+ metavar="HOST:PORT",
+ help_msg=help_msg)
+
+ help_msg = ('URI for connecting to the status server, usually '
+ 'a "HOST:PORT" string. Use this if your status server '
+ 'is in another host, or different port')
settings.register_option(section=section,
key='status_server_uri',
default='127.0.0.1:8888',
@@ -91,17 +101,16 @@
long_arg='--nrunner-shuffle',
action='store_true')
- settings.add_argparser_to_option(namespace='nrunner.status_server_uri',
- parser=parser,
- long_arg='--nrunner-status-server-uri')
-
- settings.add_argparser_to_option(namespace='nrunner.max_parallel_tasks',
- parser=parser,
- long_arg='--nrunner-max-parallel-tasks')
+ # namespace mapping
+ ns = {'nrunner.status_server_listen': '--nrunner-status-server-listen',
+ 'nrunner.status_server_uri': '--nrunner-status-server-uri',
+ 'nrunner.max_parallel_tasks': '--nrunner-max-parallel-tasks',
+ 'nrunner.spawner': '--nrunner-spawner'}
- settings.add_argparser_to_option(namespace='nrunner.spawner',
- parser=parser,
- long_arg='--nrunner-spawner')
+ for k, v in ns.items():
+ settings.add_argparser_to_option(namespace=k,
+ parser=parser,
+ long_arg=v)
def run(self, config):
pass
@@ -169,11 +178,11 @@
result.append(RuntimeTask(task))
return result
- def _start_status_server(self, status_server_uri):
+ def _start_status_server(self, status_server_listen):
# pylint: disable=W0201
self.status_repo = StatusRepo()
# pylint: disable=W0201
- self.status_server = StatusServer(status_server_uri,
+ self.status_server = StatusServer(status_server_listen,
self.status_repo)
asyncio.ensure_future(self.status_server.serve_forever())
@@ -234,8 +243,8 @@
test_suite.tests, _ = nrunner.check_tasks_requirements(test_suite.tests)
job.result.tests_total = test_suite.size # no support for variants yet
- status_server_uri = test_suite.config.get('nrunner.status_server_uri')
- self._start_status_server(status_server_uri)
+ listen = test_suite.config.get('nrunner.status_server_listen')
+ self._start_status_server(listen)
# pylint: disable=W0201
self.tasks = self._get_all_runtime_tasks(test_suite)
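To make the intent of the patch concrete, here is an illustrative pairing of the two options it introduces (the values are made up; the semantics follow the help messages added in the diff):

```python
# After the split, the two concerns are configured independently.
config = {
    "nrunner.status_server_listen": "0.0.0.0:8888",     # address the StatusServer binds to
    "nrunner.status_server_uri": "192.168.122.1:8888",  # address tasks/clients report status to
}
```

This matters when tasks run on another host or inside a container, where the address the server listens on is not necessarily an address the client can reach.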
|
{"golden_diff": "diff --git a/avocado/plugins/runner_nrunner.py b/avocado/plugins/runner_nrunner.py\n--- a/avocado/plugins/runner_nrunner.py\n+++ b/avocado/plugins/runner_nrunner.py\n@@ -50,7 +50,17 @@\n help_msg=help_msg,\n key_type=bool)\n \n- help_msg = 'URI for the status server, usually a \"HOST:PORT\" string'\n+ help_msg = ('URI for listing the status server. Usually '\n+ 'a \"HOST:PORT\" string')\n+ settings.register_option(section=section,\n+ key='status_server_listen',\n+ default='127.0.0.1:8888',\n+ metavar=\"HOST:PORT\",\n+ help_msg=help_msg)\n+\n+ help_msg = ('URI for connecting to the status server, usually '\n+ 'a \"HOST:PORT\" string. Use this if your status server '\n+ 'is in another host, or different port')\n settings.register_option(section=section,\n key='status_server_uri',\n default='127.0.0.1:8888',\n@@ -91,17 +101,16 @@\n long_arg='--nrunner-shuffle',\n action='store_true')\n \n- settings.add_argparser_to_option(namespace='nrunner.status_server_uri',\n- parser=parser,\n- long_arg='--nrunner-status-server-uri')\n-\n- settings.add_argparser_to_option(namespace='nrunner.max_parallel_tasks',\n- parser=parser,\n- long_arg='--nrunner-max-parallel-tasks')\n+ # namespace mapping\n+ ns = {'nrunner.status_server_listen': '--nrunner-status-server-listen',\n+ 'nrunner.status_server_uri': '--nrunner-status-server-uri',\n+ 'nrunner.max_parallel_tasks': '--nrunner-max-parallel-tasks',\n+ 'nrunner.spawner': '--nrunner-spawner'}\n \n- settings.add_argparser_to_option(namespace='nrunner.spawner',\n- parser=parser,\n- long_arg='--nrunner-spawner')\n+ for k, v in ns.items():\n+ settings.add_argparser_to_option(namespace=k,\n+ parser=parser,\n+ long_arg=v)\n \n def run(self, config):\n pass\n@@ -169,11 +178,11 @@\n result.append(RuntimeTask(task))\n return result\n \n- def _start_status_server(self, status_server_uri):\n+ def _start_status_server(self, status_server_listen):\n # pylint: disable=W0201\n self.status_repo = StatusRepo()\n # pylint: disable=W0201\n- self.status_server = StatusServer(status_server_uri,\n+ self.status_server = StatusServer(status_server_listen,\n self.status_repo)\n asyncio.ensure_future(self.status_server.serve_forever())\n \n@@ -234,8 +243,8 @@\n test_suite.tests, _ = nrunner.check_tasks_requirements(test_suite.tests)\n job.result.tests_total = test_suite.size # no support for variants yet\n \n- status_server_uri = test_suite.config.get('nrunner.status_server_uri')\n- self._start_status_server(status_server_uri)\n+ listen = test_suite.config.get('nrunner.status_server_listen')\n+ self._start_status_server(listen)\n \n # pylint: disable=W0201\n self.tasks = self._get_all_runtime_tasks(test_suite)\n", "issue": "status server uri should be different from listen\nnrunner.status_server_uri should not be used to both, server and client. We should have different settings for configuring the client.\n", "before_files": [{"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n#\n# See LICENSE for more details.\n#\n# Copyright: Red Hat Inc. 
2019-2020\n# Authors: Cleber Rosa <[email protected]>\n\n\"\"\"\nNRunner based implementation of job compliant runner\n\"\"\"\n\nimport asyncio\nimport json\nimport multiprocessing\nimport os\nimport random\nfrom copy import copy\n\nfrom avocado.core import nrunner\nfrom avocado.core.dispatcher import SpawnerDispatcher\nfrom avocado.core.plugin_interfaces import CLI, Init\nfrom avocado.core.plugin_interfaces import Runner as RunnerInterface\nfrom avocado.core.settings import settings\nfrom avocado.core.status.repo import StatusRepo\nfrom avocado.core.status.server import StatusServer\nfrom avocado.core.task.runtime import RuntimeTask\nfrom avocado.core.task.statemachine import TaskStateMachine, Worker\nfrom avocado.core.test_id import TestID\nfrom avocado.core.teststatus import mapping\n\n\nclass RunnerInit(Init):\n\n name = 'nrunner'\n description = 'nrunner initialization'\n\n def initialize(self):\n section = 'nrunner'\n help_msg = 'Shuffle the tasks to be executed'\n settings.register_option(section=section,\n key='shuffle',\n default=False,\n help_msg=help_msg,\n key_type=bool)\n\n help_msg = 'URI for the status server, usually a \"HOST:PORT\" string'\n settings.register_option(section=section,\n key='status_server_uri',\n default='127.0.0.1:8888',\n metavar=\"HOST:PORT\",\n help_msg=help_msg)\n\n help_msg = ('Number of maximum number tasks running in parallel. You '\n 'can disable parallel execution by setting this to 1. '\n 'Defaults to the amount of CPUs on this machine.')\n settings.register_option(section=section,\n key='max_parallel_tasks',\n default=multiprocessing.cpu_count(),\n key_type=int,\n help_msg=help_msg)\n\n help_msg = (\"Spawn tasks in a specific spawner. Available spawners: \"\n \"'process' and 'podman'\")\n settings.register_option(section=section,\n key=\"spawner\",\n default='process',\n help_msg=help_msg)\n\n\nclass RunnerCLI(CLI):\n\n name = 'nrunner'\n description = 'nrunner command line options for \"run\"'\n\n def configure(self, parser):\n super(RunnerCLI, self).configure(parser)\n parser = parser.subcommands.choices.get('run', None)\n if parser is None:\n return\n\n parser = parser.add_argument_group('nrunner specific options')\n settings.add_argparser_to_option(namespace='nrunner.shuffle',\n parser=parser,\n long_arg='--nrunner-shuffle',\n action='store_true')\n\n settings.add_argparser_to_option(namespace='nrunner.status_server_uri',\n parser=parser,\n long_arg='--nrunner-status-server-uri')\n\n settings.add_argparser_to_option(namespace='nrunner.max_parallel_tasks',\n parser=parser,\n long_arg='--nrunner-max-parallel-tasks')\n\n settings.add_argparser_to_option(namespace='nrunner.spawner',\n parser=parser,\n long_arg='--nrunner-spawner')\n\n def run(self, config):\n pass\n\n\nclass Runner(RunnerInterface):\n\n name = 'nrunner'\n description = 'nrunner based implementation of job compliant runner'\n\n def _save_to_file(self, filename, buff, mode='wb'):\n with open(filename, mode) as fp:\n fp.write(buff)\n\n def _populate_task_logdir(self, base_path, task, statuses, debug=False):\n # We are copying here to avoid printing duplicated information\n local_statuses = copy(statuses)\n last = local_statuses[-1]\n try:\n stdout = last.pop('stdout')\n except KeyError:\n stdout = None\n try:\n stderr = last.pop('stderr')\n except KeyError:\n stderr = None\n\n # Create task dir\n task_path = os.path.join(base_path, task.identifier.str_filesystem)\n os.makedirs(task_path, exist_ok=True)\n\n # Save stdout and stderr\n if stdout is not None:\n stdout_file = 
os.path.join(task_path, 'stdout')\n self._save_to_file(stdout_file, stdout)\n if stderr is not None:\n stderr_file = os.path.join(task_path, 'stderr')\n self._save_to_file(stderr_file, stderr)\n\n # Save debug\n if debug:\n debug = os.path.join(task_path, 'debug')\n with open(debug, 'w') as fp:\n json.dump(local_statuses, fp)\n\n data_file = os.path.join(task_path, 'data')\n with open(data_file, 'w') as fp:\n fp.write(\"{}\\n\".format(task.output_dir))\n\n def _get_all_runtime_tasks(self, test_suite):\n result = []\n no_digits = len(str(len(test_suite)))\n for index, task in enumerate(test_suite.tests, start=1):\n task.known_runners = nrunner.RUNNERS_REGISTRY_PYTHON_CLASS\n # this is all rubbish data\n if test_suite.name:\n prefix = \"{}-{}\".format(test_suite.name, index)\n else:\n prefix = index\n test_id = TestID(prefix,\n task.runnable.uri,\n None,\n no_digits)\n task.identifier = test_id\n result.append(RuntimeTask(task))\n return result\n\n def _start_status_server(self, status_server_uri):\n # pylint: disable=W0201\n self.status_repo = StatusRepo()\n # pylint: disable=W0201\n self.status_server = StatusServer(status_server_uri,\n self.status_repo)\n asyncio.ensure_future(self.status_server.serve_forever())\n\n async def _update_status(self, job):\n tasks_by_id = {str(runtime_task.task.identifier): runtime_task.task\n for runtime_task in self.tasks}\n while True:\n try:\n (task_id, status, _) = self.status_repo.status_journal_summary.pop(0)\n\n except IndexError:\n await asyncio.sleep(0.05)\n continue\n\n task = tasks_by_id.get(task_id)\n early_state = {'name': task.identifier,\n 'job_logdir': job.logdir,\n 'job_unique_id': job.unique_id}\n if status == 'started':\n job.result.start_test(early_state)\n job.result_events_dispatcher.map_method('start_test',\n job.result,\n early_state)\n elif status == 'finished':\n this_task_data = self.status_repo.get_task_data(task_id)\n last_task_status = this_task_data[-1]\n test_state = {'status': last_task_status.get('result').upper()}\n test_state.update(early_state)\n\n time_start = this_task_data[0]['time']\n time_end = last_task_status['time']\n time_elapsed = time_end - time_start\n test_state['time_start'] = time_start\n test_state['time_end'] = time_end\n test_state['time_elapsed'] = time_elapsed\n\n # fake log dir, needed by some result plugins such as HTML\n test_state['logdir'] = ''\n\n base_path = os.path.join(job.logdir, 'test-results')\n self._populate_task_logdir(base_path,\n task,\n this_task_data,\n job.config.get('core.debug'))\n\n job.result.check_test(test_state)\n job.result_events_dispatcher.map_method('end_test',\n job.result,\n test_state)\n\n if not mapping[test_state['status']]:\n self.summary.add(\"FAIL\")\n\n def run_suite(self, job, test_suite):\n # pylint: disable=W0201\n self.summary = set()\n\n test_suite.tests, _ = nrunner.check_tasks_requirements(test_suite.tests)\n job.result.tests_total = test_suite.size # no support for variants yet\n\n status_server_uri = test_suite.config.get('nrunner.status_server_uri')\n self._start_status_server(status_server_uri)\n\n # pylint: disable=W0201\n self.tasks = self._get_all_runtime_tasks(test_suite)\n if test_suite.config.get('nrunner.shuffle'):\n random.shuffle(self.tasks)\n tsm = TaskStateMachine(self.tasks)\n spawner_name = test_suite.config.get('nrunner.spawner')\n spawner = SpawnerDispatcher(test_suite.config)[spawner_name].obj\n max_running = test_suite.config.get('nrunner.max_parallel_tasks')\n workers = [Worker(tsm, spawner, max_running=max_running).run()\n for _ in 
range(max_running)]\n asyncio.ensure_future(self._update_status(job))\n loop = asyncio.get_event_loop()\n try:\n loop.run_until_complete(asyncio.wait_for(asyncio.gather(*workers),\n job.timeout or None))\n except (KeyboardInterrupt, asyncio.TimeoutError):\n self.summary.add(\"INTERRUPTED\")\n\n # Wait until all messages may have been processed by the\n # status_updater. This should be replaced by a mechanism\n # that only waits if there are missing status messages to\n # be processed, and, only for a given amount of time.\n # Tests with non received status will always show as SKIP\n # because of result reconciliation.\n loop.run_until_complete(asyncio.sleep(0.05))\n\n job.result.end_tests()\n return self.summary\n", "path": "avocado/plugins/runner_nrunner.py"}], "after_files": [{"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n#\n# See LICENSE for more details.\n#\n# Copyright: Red Hat Inc. 2019-2020\n# Authors: Cleber Rosa <[email protected]>\n\n\"\"\"\nNRunner based implementation of job compliant runner\n\"\"\"\n\nimport asyncio\nimport json\nimport multiprocessing\nimport os\nimport random\nfrom copy import copy\n\nfrom avocado.core import nrunner\nfrom avocado.core.dispatcher import SpawnerDispatcher\nfrom avocado.core.plugin_interfaces import CLI, Init\nfrom avocado.core.plugin_interfaces import Runner as RunnerInterface\nfrom avocado.core.settings import settings\nfrom avocado.core.status.repo import StatusRepo\nfrom avocado.core.status.server import StatusServer\nfrom avocado.core.task.runtime import RuntimeTask\nfrom avocado.core.task.statemachine import TaskStateMachine, Worker\nfrom avocado.core.test_id import TestID\nfrom avocado.core.teststatus import mapping\n\n\nclass RunnerInit(Init):\n\n name = 'nrunner'\n description = 'nrunner initialization'\n\n def initialize(self):\n section = 'nrunner'\n help_msg = 'Shuffle the tasks to be executed'\n settings.register_option(section=section,\n key='shuffle',\n default=False,\n help_msg=help_msg,\n key_type=bool)\n\n help_msg = ('URI for listing the status server. Usually '\n 'a \"HOST:PORT\" string')\n settings.register_option(section=section,\n key='status_server_listen',\n default='127.0.0.1:8888',\n metavar=\"HOST:PORT\",\n help_msg=help_msg)\n\n help_msg = ('URI for connecting to the status server, usually '\n 'a \"HOST:PORT\" string. Use this if your status server '\n 'is in another host, or different port')\n settings.register_option(section=section,\n key='status_server_uri',\n default='127.0.0.1:8888',\n metavar=\"HOST:PORT\",\n help_msg=help_msg)\n\n help_msg = ('Number of maximum number tasks running in parallel. You '\n 'can disable parallel execution by setting this to 1. '\n 'Defaults to the amount of CPUs on this machine.')\n settings.register_option(section=section,\n key='max_parallel_tasks',\n default=multiprocessing.cpu_count(),\n key_type=int,\n help_msg=help_msg)\n\n help_msg = (\"Spawn tasks in a specific spawner. 
Available spawners: \"\n \"'process' and 'podman'\")\n settings.register_option(section=section,\n key=\"spawner\",\n default='process',\n help_msg=help_msg)\n\n\nclass RunnerCLI(CLI):\n\n name = 'nrunner'\n description = 'nrunner command line options for \"run\"'\n\n def configure(self, parser):\n super(RunnerCLI, self).configure(parser)\n parser = parser.subcommands.choices.get('run', None)\n if parser is None:\n return\n\n parser = parser.add_argument_group('nrunner specific options')\n settings.add_argparser_to_option(namespace='nrunner.shuffle',\n parser=parser,\n long_arg='--nrunner-shuffle',\n action='store_true')\n\n # namespace mapping\n ns = {'nrunner.status_server_listen': '--nrunner-status-server-listen',\n 'nrunner.status_server_uri': '--nrunner-status-server-uri',\n 'nrunner.max_parallel_tasks': '--nrunner-max-parallel-tasks',\n 'nrunner.spawner': '--nrunner-spawner'}\n\n for k, v in ns.items():\n settings.add_argparser_to_option(namespace=k,\n parser=parser,\n long_arg=v)\n\n def run(self, config):\n pass\n\n\nclass Runner(RunnerInterface):\n\n name = 'nrunner'\n description = 'nrunner based implementation of job compliant runner'\n\n def _save_to_file(self, filename, buff, mode='wb'):\n with open(filename, mode) as fp:\n fp.write(buff)\n\n def _populate_task_logdir(self, base_path, task, statuses, debug=False):\n # We are copying here to avoid printing duplicated information\n local_statuses = copy(statuses)\n last = local_statuses[-1]\n try:\n stdout = last.pop('stdout')\n except KeyError:\n stdout = None\n try:\n stderr = last.pop('stderr')\n except KeyError:\n stderr = None\n\n # Create task dir\n task_path = os.path.join(base_path, task.identifier.str_filesystem)\n os.makedirs(task_path, exist_ok=True)\n\n # Save stdout and stderr\n if stdout is not None:\n stdout_file = os.path.join(task_path, 'stdout')\n self._save_to_file(stdout_file, stdout)\n if stderr is not None:\n stderr_file = os.path.join(task_path, 'stderr')\n self._save_to_file(stderr_file, stderr)\n\n # Save debug\n if debug:\n debug = os.path.join(task_path, 'debug')\n with open(debug, 'w') as fp:\n json.dump(local_statuses, fp)\n\n data_file = os.path.join(task_path, 'data')\n with open(data_file, 'w') as fp:\n fp.write(\"{}\\n\".format(task.output_dir))\n\n def _get_all_runtime_tasks(self, test_suite):\n result = []\n no_digits = len(str(len(test_suite)))\n for index, task in enumerate(test_suite.tests, start=1):\n task.known_runners = nrunner.RUNNERS_REGISTRY_PYTHON_CLASS\n # this is all rubbish data\n if test_suite.name:\n prefix = \"{}-{}\".format(test_suite.name, index)\n else:\n prefix = index\n test_id = TestID(prefix,\n task.runnable.uri,\n None,\n no_digits)\n task.identifier = test_id\n result.append(RuntimeTask(task))\n return result\n\n def _start_status_server(self, status_server_listen):\n # pylint: disable=W0201\n self.status_repo = StatusRepo()\n # pylint: disable=W0201\n self.status_server = StatusServer(status_server_listen,\n self.status_repo)\n asyncio.ensure_future(self.status_server.serve_forever())\n\n async def _update_status(self, job):\n tasks_by_id = {str(runtime_task.task.identifier): runtime_task.task\n for runtime_task in self.tasks}\n while True:\n try:\n (task_id, status, _) = self.status_repo.status_journal_summary.pop(0)\n\n except IndexError:\n await asyncio.sleep(0.05)\n continue\n\n task = tasks_by_id.get(task_id)\n early_state = {'name': task.identifier,\n 'job_logdir': job.logdir,\n 'job_unique_id': job.unique_id}\n if status == 'started':\n 
job.result.start_test(early_state)\n job.result_events_dispatcher.map_method('start_test',\n job.result,\n early_state)\n elif status == 'finished':\n this_task_data = self.status_repo.get_task_data(task_id)\n last_task_status = this_task_data[-1]\n test_state = {'status': last_task_status.get('result').upper()}\n test_state.update(early_state)\n\n time_start = this_task_data[0]['time']\n time_end = last_task_status['time']\n time_elapsed = time_end - time_start\n test_state['time_start'] = time_start\n test_state['time_end'] = time_end\n test_state['time_elapsed'] = time_elapsed\n\n # fake log dir, needed by some result plugins such as HTML\n test_state['logdir'] = ''\n\n base_path = os.path.join(job.logdir, 'test-results')\n self._populate_task_logdir(base_path,\n task,\n this_task_data,\n job.config.get('core.debug'))\n\n job.result.check_test(test_state)\n job.result_events_dispatcher.map_method('end_test',\n job.result,\n test_state)\n\n if not mapping[test_state['status']]:\n self.summary.add(\"FAIL\")\n\n def run_suite(self, job, test_suite):\n # pylint: disable=W0201\n self.summary = set()\n\n test_suite.tests, _ = nrunner.check_tasks_requirements(test_suite.tests)\n job.result.tests_total = test_suite.size # no support for variants yet\n\n listen = test_suite.config.get('nrunner.status_server_listen')\n self._start_status_server(listen)\n\n # pylint: disable=W0201\n self.tasks = self._get_all_runtime_tasks(test_suite)\n if test_suite.config.get('nrunner.shuffle'):\n random.shuffle(self.tasks)\n tsm = TaskStateMachine(self.tasks)\n spawner_name = test_suite.config.get('nrunner.spawner')\n spawner = SpawnerDispatcher(test_suite.config)[spawner_name].obj\n max_running = test_suite.config.get('nrunner.max_parallel_tasks')\n workers = [Worker(tsm, spawner, max_running=max_running).run()\n for _ in range(max_running)]\n asyncio.ensure_future(self._update_status(job))\n loop = asyncio.get_event_loop()\n try:\n loop.run_until_complete(asyncio.wait_for(asyncio.gather(*workers),\n job.timeout or None))\n except (KeyboardInterrupt, asyncio.TimeoutError):\n self.summary.add(\"INTERRUPTED\")\n\n # Wait until all messages may have been processed by the\n # status_updater. This should be replaced by a mechanism\n # that only waits if there are missing status messages to\n # be processed, and, only for a given amount of time.\n # Tests with non received status will always show as SKIP\n # because of result reconciliation.\n loop.run_until_complete(asyncio.sleep(0.05))\n\n job.result.end_tests()\n return self.summary\n", "path": "avocado/plugins/runner_nrunner.py"}]}
| 3,071 | 756 |
gh_patches_debug_5986
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-1845
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Not setting `USER` directive in processor Dockerfile fails silently
Building a processor image without setting the `USER` directive in the Dockerfile will result in the container never being marked as ready and will trigger a (silent) error in Sentry:
https://sentry.io/organizations/grand-challenge/issues/2396054397/?project=303639&query=is%3Aunresolved
It should fail properly, returning a validation error to the user indicating that the `USER` directive should be set in the Dockerfile, in line with Docker best practices.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/components/tasks.py`
Content:
```
1 import json
2 import tarfile
3 import uuid
4 from datetime import timedelta
5 from typing import Dict
6
7 from billiard.exceptions import SoftTimeLimitExceeded, TimeLimitExceeded
8 from celery import shared_task
9 from django.apps import apps
10 from django.conf import settings
11 from django.core.exceptions import ValidationError
12 from django.core.files import File
13 from django.db import OperationalError
14 from django.db.models import DateTimeField, ExpressionWrapper, F
15 from django.utils.timezone import now
16
17 from grandchallenge.components.backends.docker import ComponentException
18 from grandchallenge.components.emails import send_invalid_dockerfile_email
19 from grandchallenge.jqfileupload.widgets.uploader import StagedAjaxFile
20
21
22 @shared_task()
23 def validate_docker_image(*, pk: uuid.UUID, app_label: str, model_name: str):
24 model = apps.get_model(app_label=app_label, model_name=model_name)
25
26 instance = model.objects.get(pk=pk)
27
28 if not instance.image:
29 # Create the image from the staged file
30 uploaded_image = StagedAjaxFile(instance.staged_image_uuid)
31 with uploaded_image.open() as f:
32 instance.image.save(uploaded_image.name, File(f))
33
34 try:
35 image_sha256 = _validate_docker_image_manifest(
36 model=model, instance=instance
37 )
38 except ValidationError:
39 send_invalid_dockerfile_email(container_image=instance)
40 raise
41
42 model.objects.filter(pk=instance.pk).update(
43 image_sha256=f"sha256:{image_sha256}", ready=True
44 )
45
46
47 def _validate_docker_image_manifest(*, model, instance) -> str:
48 manifest = _extract_docker_image_file(
49 model=model, instance=instance, filename="manifest.json"
50 )
51 manifest = json.loads(manifest)
52
53 if len(manifest) != 1:
54 model.objects.filter(pk=instance.pk).update(
55 status=(
56 f"The container image file should only have 1 image. "
57 f"This file contains {len(manifest)}."
58 )
59 )
60 raise ValidationError("Invalid Dockerfile")
61
62 image_sha256 = manifest[0]["Config"][:64]
63
64 config = _extract_docker_image_file(
65 model=model, instance=instance, filename=f"{image_sha256}.json"
66 )
67 config = json.loads(config)
68
69 if str(config["config"]["User"].lower()) in ["", "root", "0"]:
70 model.objects.filter(pk=instance.pk).update(
71 status=(
72 "The container runs as root. Please add a user, group and "
73 "USER instruction to your Dockerfile, rebuild, test and "
74 "upload the container again, see "
75 "https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#user"
76 )
77 )
78 raise ValidationError("Invalid Dockerfile")
79
80 return image_sha256
81
82
83 def _extract_docker_image_file(*, model, instance, filename: str):
84 """Extract a file from the root of a tarball."""
85 try:
86 with instance.image.open(mode="rb") as im, tarfile.open(
87 fileobj=im, mode="r"
88 ) as t:
89 member = dict(zip(t.getnames(), t.getmembers()))[filename]
90 file = t.extractfile(member).read()
91 return file
92 except (KeyError, tarfile.ReadError):
93 model.objects.filter(pk=instance.pk).update(
94 status=(
95 f"{filename} not found at the root of the container image "
96 f"file. Was this created with docker save?"
97 )
98 )
99 raise ValidationError("Invalid Dockerfile")
100
101
102 def retry_if_dropped(func):
103 """
104 Retry a function that relies on an open database connection.
105
106 Use this decorator when you have a long running task as sometimes the db
107 connection will drop.
108 """
109
110 def wrapper(*args, **kwargs):
111 n_tries = 0
112 max_tries = 2
113 err = None
114
115 while n_tries < max_tries:
116 n_tries += 1
117
118 try:
119 return func(*args, **kwargs)
120 except OperationalError as e:
121 err = e
122
123 # This needs to be a local import
124 from django.db import connection
125
126 connection.close()
127
128 raise err
129
130 return wrapper
131
132
133 @retry_if_dropped
134 def get_model_instance(*, pk, app_label, model_name):
135 model = apps.get_model(app_label=app_label, model_name=model_name)
136 return model.objects.get(pk=pk)
137
138
139 @shared_task
140 def execute_job(
141 *_, job_pk: uuid.UUID, job_app_label: str, job_model_name: str
142 ) -> None:
143 Job = apps.get_model( # noqa: N806
144 app_label=job_app_label, model_name=job_model_name
145 )
146 job = Job.objects.get(pk=job_pk)
147
148 if job.status in [job.PENDING, job.RETRY]:
149 job.update_status(status=job.STARTED)
150 else:
151 raise RuntimeError("Job is not set to be executed.")
152
153 if not job.container.ready:
154 msg = f"Method {job.container.pk} was not ready to be used."
155 job.update_status(status=job.FAILURE, error_message=msg)
156 raise RuntimeError(msg)
157 try:
158 with job.executor_cls(
159 job_id=str(job.pk),
160 job_class=Job,
161 input_files=job.input_files,
162 output_interfaces=job.output_interfaces,
163 exec_image=job.container.image,
164 exec_image_sha256=job.container.image_sha256,
165 memory_limit=job.container.requires_memory_gb,
166 ) as ev:
167 # This call is potentially very long
168 ev.execute()
169 except ComponentException as e:
170 job = get_model_instance(
171 pk=job_pk, app_label=job_app_label, model_name=job_model_name
172 )
173 job.update_status(
174 status=job.FAILURE,
175 stdout=ev.stdout,
176 stderr=ev.stderr,
177 error_message=str(e),
178 )
179 except (SoftTimeLimitExceeded, TimeLimitExceeded):
180 job = get_model_instance(
181 pk=job_pk, app_label=job_app_label, model_name=job_model_name
182 )
183 job.update_status(
184 status=job.FAILURE,
185 stdout=ev.stdout,
186 stderr=ev.stderr,
187 error_message="Time limit exceeded.",
188 )
189 except Exception:
190 job = get_model_instance(
191 pk=job_pk, app_label=job_app_label, model_name=job_model_name
192 )
193 job.update_status(
194 status=job.FAILURE,
195 stdout=ev.stdout,
196 stderr=ev.stderr,
197 error_message="An unexpected error occurred.",
198 )
199 raise
200 else:
201 job = get_model_instance(
202 pk=job_pk, app_label=job_app_label, model_name=job_model_name
203 )
204 job.update_status(
205 status=job.SUCCESS, stdout=ev.stdout, stderr=ev.stderr
206 )
207
208
209 @shared_task
210 def mark_long_running_jobs_failed(
211 *, app_label: str, model_name: str, extra_filters: Dict[str, str] = None
212 ):
213 """
214 Mark jobs that have been started but did not finish (maybe due to
215 an unrecoverable hardware error). It will mark tasks FAILED that have the
216 status STARTED after 1.2x the task limit (which is different for each
217 queue), so, this must be scheduled on the same queue that the execute_job
218 task is run for this app_label and model_name.
219
220 If the task is still running on Celery then it will still be able to
221 report as passed later.
222 """
223 Job = apps.get_model( # noqa: N806
224 app_label=app_label, model_name=model_name
225 )
226
227 jobs_to_mark = Job.objects.filter(
228 started_at__lt=now()
229 - 1.2 * timedelta(seconds=settings.CELERY_TASK_TIME_LIMIT),
230 status=Job.STARTED,
231 )
232
233 if extra_filters:
234 jobs_to_mark = jobs_to_mark.filter(**extra_filters)
235
236 for j in jobs_to_mark:
237 j.update_status(
238 status=Job.FAILURE, error_message="Time limit exceeded."
239 )
240
241 return [j.pk for j in jobs_to_mark]
242
243
244 @shared_task
245 def start_service(*, pk: uuid.UUID, app_label: str, model_name: str):
246 session = get_model_instance(
247 pk=pk, app_label=app_label, model_name=model_name
248 )
249 session.start()
250
251
252 @shared_task
253 def stop_service(*, pk: uuid.UUID, app_label: str, model_name: str):
254 session = get_model_instance(
255 pk=pk, app_label=app_label, model_name=model_name
256 )
257 session.stop()
258
259
260 @shared_task
261 def stop_expired_services(*, app_label: str, model_name: str, region: str):
262 model = apps.get_model(app_label=app_label, model_name=model_name)
263
264 services_to_stop = (
265 model.objects.annotate(
266 expires=ExpressionWrapper(
267 F("created") + F("maximum_duration"),
268 output_field=DateTimeField(),
269 )
270 )
271 .filter(expires__lt=now(), region=region)
272 .exclude(status=model.STOPPED)
273 )
274
275 for service in services_to_stop:
276 service.stop()
277
278 return [str(s) for s in services_to_stop]
279
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/grandchallenge/components/tasks.py b/app/grandchallenge/components/tasks.py
--- a/app/grandchallenge/components/tasks.py
+++ b/app/grandchallenge/components/tasks.py
@@ -66,7 +66,9 @@
)
config = json.loads(config)
- if str(config["config"]["User"].lower()) in ["", "root", "0"]:
+ if "User" not in config["config"] or str(
+ config["config"]["User"].lower()
+ ) in ["", "root", "0"]:
model.objects.filter(pk=instance.pk).update(
status=(
"The container runs as root. Please add a user, group and "
|
{"golden_diff": "diff --git a/app/grandchallenge/components/tasks.py b/app/grandchallenge/components/tasks.py\n--- a/app/grandchallenge/components/tasks.py\n+++ b/app/grandchallenge/components/tasks.py\n@@ -66,7 +66,9 @@\n )\n config = json.loads(config)\n \n- if str(config[\"config\"][\"User\"].lower()) in [\"\", \"root\", \"0\"]:\n+ if \"User\" not in config[\"config\"] or str(\n+ config[\"config\"][\"User\"].lower()\n+ ) in [\"\", \"root\", \"0\"]:\n model.objects.filter(pk=instance.pk).update(\n status=(\n \"The container runs as root. Please add a user, group and \"\n", "issue": "Not setting `USER` directive in processor Dockerfile fails silently\nBuilding a processor image without setting `USER` directive in Dockerfile will result in the container never being marked as ready and triggers a (silent) error in sentry: \r\nhttps://sentry.io/organizations/grand-challenge/issues/2396054397/?project=303639&query=is%3Aunresolved\r\n\r\nIt should fail properly returning a validation error to the user indicating that the `USER` directive should be set in the Dockerfile according to docker best practices.\r\n\n", "before_files": [{"content": "import json\nimport tarfile\nimport uuid\nfrom datetime import timedelta\nfrom typing import Dict\n\nfrom billiard.exceptions import SoftTimeLimitExceeded, TimeLimitExceeded\nfrom celery import shared_task\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.core.files import File\nfrom django.db import OperationalError\nfrom django.db.models import DateTimeField, ExpressionWrapper, F\nfrom django.utils.timezone import now\n\nfrom grandchallenge.components.backends.docker import ComponentException\nfrom grandchallenge.components.emails import send_invalid_dockerfile_email\nfrom grandchallenge.jqfileupload.widgets.uploader import StagedAjaxFile\n\n\n@shared_task()\ndef validate_docker_image(*, pk: uuid.UUID, app_label: str, model_name: str):\n model = apps.get_model(app_label=app_label, model_name=model_name)\n\n instance = model.objects.get(pk=pk)\n\n if not instance.image:\n # Create the image from the staged file\n uploaded_image = StagedAjaxFile(instance.staged_image_uuid)\n with uploaded_image.open() as f:\n instance.image.save(uploaded_image.name, File(f))\n\n try:\n image_sha256 = _validate_docker_image_manifest(\n model=model, instance=instance\n )\n except ValidationError:\n send_invalid_dockerfile_email(container_image=instance)\n raise\n\n model.objects.filter(pk=instance.pk).update(\n image_sha256=f\"sha256:{image_sha256}\", ready=True\n )\n\n\ndef _validate_docker_image_manifest(*, model, instance) -> str:\n manifest = _extract_docker_image_file(\n model=model, instance=instance, filename=\"manifest.json\"\n )\n manifest = json.loads(manifest)\n\n if len(manifest) != 1:\n model.objects.filter(pk=instance.pk).update(\n status=(\n f\"The container image file should only have 1 image. \"\n f\"This file contains {len(manifest)}.\"\n )\n )\n raise ValidationError(\"Invalid Dockerfile\")\n\n image_sha256 = manifest[0][\"Config\"][:64]\n\n config = _extract_docker_image_file(\n model=model, instance=instance, filename=f\"{image_sha256}.json\"\n )\n config = json.loads(config)\n\n if str(config[\"config\"][\"User\"].lower()) in [\"\", \"root\", \"0\"]:\n model.objects.filter(pk=instance.pk).update(\n status=(\n \"The container runs as root. 
Please add a user, group and \"\n \"USER instruction to your Dockerfile, rebuild, test and \"\n \"upload the container again, see \"\n \"https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#user\"\n )\n )\n raise ValidationError(\"Invalid Dockerfile\")\n\n return image_sha256\n\n\ndef _extract_docker_image_file(*, model, instance, filename: str):\n \"\"\"Extract a file from the root of a tarball.\"\"\"\n try:\n with instance.image.open(mode=\"rb\") as im, tarfile.open(\n fileobj=im, mode=\"r\"\n ) as t:\n member = dict(zip(t.getnames(), t.getmembers()))[filename]\n file = t.extractfile(member).read()\n return file\n except (KeyError, tarfile.ReadError):\n model.objects.filter(pk=instance.pk).update(\n status=(\n f\"{filename} not found at the root of the container image \"\n f\"file. Was this created with docker save?\"\n )\n )\n raise ValidationError(\"Invalid Dockerfile\")\n\n\ndef retry_if_dropped(func):\n \"\"\"\n Retry a function that relies on an open database connection.\n\n Use this decorator when you have a long running task as sometimes the db\n connection will drop.\n \"\"\"\n\n def wrapper(*args, **kwargs):\n n_tries = 0\n max_tries = 2\n err = None\n\n while n_tries < max_tries:\n n_tries += 1\n\n try:\n return func(*args, **kwargs)\n except OperationalError as e:\n err = e\n\n # This needs to be a local import\n from django.db import connection\n\n connection.close()\n\n raise err\n\n return wrapper\n\n\n@retry_if_dropped\ndef get_model_instance(*, pk, app_label, model_name):\n model = apps.get_model(app_label=app_label, model_name=model_name)\n return model.objects.get(pk=pk)\n\n\n@shared_task\ndef execute_job(\n *_, job_pk: uuid.UUID, job_app_label: str, job_model_name: str\n) -> None:\n Job = apps.get_model( # noqa: N806\n app_label=job_app_label, model_name=job_model_name\n )\n job = Job.objects.get(pk=job_pk)\n\n if job.status in [job.PENDING, job.RETRY]:\n job.update_status(status=job.STARTED)\n else:\n raise RuntimeError(\"Job is not set to be executed.\")\n\n if not job.container.ready:\n msg = f\"Method {job.container.pk} was not ready to be used.\"\n job.update_status(status=job.FAILURE, error_message=msg)\n raise RuntimeError(msg)\n try:\n with job.executor_cls(\n job_id=str(job.pk),\n job_class=Job,\n input_files=job.input_files,\n output_interfaces=job.output_interfaces,\n exec_image=job.container.image,\n exec_image_sha256=job.container.image_sha256,\n memory_limit=job.container.requires_memory_gb,\n ) as ev:\n # This call is potentially very long\n ev.execute()\n except ComponentException as e:\n job = get_model_instance(\n pk=job_pk, app_label=job_app_label, model_name=job_model_name\n )\n job.update_status(\n status=job.FAILURE,\n stdout=ev.stdout,\n stderr=ev.stderr,\n error_message=str(e),\n )\n except (SoftTimeLimitExceeded, TimeLimitExceeded):\n job = get_model_instance(\n pk=job_pk, app_label=job_app_label, model_name=job_model_name\n )\n job.update_status(\n status=job.FAILURE,\n stdout=ev.stdout,\n stderr=ev.stderr,\n error_message=\"Time limit exceeded.\",\n )\n except Exception:\n job = get_model_instance(\n pk=job_pk, app_label=job_app_label, model_name=job_model_name\n )\n job.update_status(\n status=job.FAILURE,\n stdout=ev.stdout,\n stderr=ev.stderr,\n error_message=\"An unexpected error occurred.\",\n )\n raise\n else:\n job = get_model_instance(\n pk=job_pk, app_label=job_app_label, model_name=job_model_name\n )\n job.update_status(\n status=job.SUCCESS, stdout=ev.stdout, stderr=ev.stderr\n )\n\n\n@shared_task\ndef 
mark_long_running_jobs_failed(\n *, app_label: str, model_name: str, extra_filters: Dict[str, str] = None\n):\n \"\"\"\n Mark jobs that have been started but did not finish (maybe due to\n an unrecoverable hardware error). It will mark tasks FAILED that have the\n status STARTED after 1.2x the task limit (which is different for each\n queue), so, this must be scheduled on the same queue that the execute_job\n task is run for this app_label and model_name.\n\n If the task is still running on Celery then it will still be able to\n report as passed later.\n \"\"\"\n Job = apps.get_model( # noqa: N806\n app_label=app_label, model_name=model_name\n )\n\n jobs_to_mark = Job.objects.filter(\n started_at__lt=now()\n - 1.2 * timedelta(seconds=settings.CELERY_TASK_TIME_LIMIT),\n status=Job.STARTED,\n )\n\n if extra_filters:\n jobs_to_mark = jobs_to_mark.filter(**extra_filters)\n\n for j in jobs_to_mark:\n j.update_status(\n status=Job.FAILURE, error_message=\"Time limit exceeded.\"\n )\n\n return [j.pk for j in jobs_to_mark]\n\n\n@shared_task\ndef start_service(*, pk: uuid.UUID, app_label: str, model_name: str):\n session = get_model_instance(\n pk=pk, app_label=app_label, model_name=model_name\n )\n session.start()\n\n\n@shared_task\ndef stop_service(*, pk: uuid.UUID, app_label: str, model_name: str):\n session = get_model_instance(\n pk=pk, app_label=app_label, model_name=model_name\n )\n session.stop()\n\n\n@shared_task\ndef stop_expired_services(*, app_label: str, model_name: str, region: str):\n model = apps.get_model(app_label=app_label, model_name=model_name)\n\n services_to_stop = (\n model.objects.annotate(\n expires=ExpressionWrapper(\n F(\"created\") + F(\"maximum_duration\"),\n output_field=DateTimeField(),\n )\n )\n .filter(expires__lt=now(), region=region)\n .exclude(status=model.STOPPED)\n )\n\n for service in services_to_stop:\n service.stop()\n\n return [str(s) for s in services_to_stop]\n", "path": "app/grandchallenge/components/tasks.py"}], "after_files": [{"content": "import json\nimport tarfile\nimport uuid\nfrom datetime import timedelta\nfrom typing import Dict\n\nfrom billiard.exceptions import SoftTimeLimitExceeded, TimeLimitExceeded\nfrom celery import shared_task\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.core.files import File\nfrom django.db import OperationalError\nfrom django.db.models import DateTimeField, ExpressionWrapper, F\nfrom django.utils.timezone import now\n\nfrom grandchallenge.components.backends.docker import ComponentException\nfrom grandchallenge.components.emails import send_invalid_dockerfile_email\nfrom grandchallenge.jqfileupload.widgets.uploader import StagedAjaxFile\n\n\n@shared_task()\ndef validate_docker_image(*, pk: uuid.UUID, app_label: str, model_name: str):\n model = apps.get_model(app_label=app_label, model_name=model_name)\n\n instance = model.objects.get(pk=pk)\n\n if not instance.image:\n # Create the image from the staged file\n uploaded_image = StagedAjaxFile(instance.staged_image_uuid)\n with uploaded_image.open() as f:\n instance.image.save(uploaded_image.name, File(f))\n\n try:\n image_sha256 = _validate_docker_image_manifest(\n model=model, instance=instance\n )\n except ValidationError:\n send_invalid_dockerfile_email(container_image=instance)\n raise\n\n model.objects.filter(pk=instance.pk).update(\n image_sha256=f\"sha256:{image_sha256}\", ready=True\n )\n\n\ndef _validate_docker_image_manifest(*, model, instance) -> str:\n manifest = 
_extract_docker_image_file(\n model=model, instance=instance, filename=\"manifest.json\"\n )\n manifest = json.loads(manifest)\n\n if len(manifest) != 1:\n model.objects.filter(pk=instance.pk).update(\n status=(\n f\"The container image file should only have 1 image. \"\n f\"This file contains {len(manifest)}.\"\n )\n )\n raise ValidationError(\"Invalid Dockerfile\")\n\n image_sha256 = manifest[0][\"Config\"][:64]\n\n config = _extract_docker_image_file(\n model=model, instance=instance, filename=f\"{image_sha256}.json\"\n )\n config = json.loads(config)\n\n if \"User\" not in config[\"config\"] or str(\n config[\"config\"][\"User\"].lower()\n ) in [\"\", \"root\", \"0\"]:\n model.objects.filter(pk=instance.pk).update(\n status=(\n \"The container runs as root. Please add a user, group and \"\n \"USER instruction to your Dockerfile, rebuild, test and \"\n \"upload the container again, see \"\n \"https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#user\"\n )\n )\n raise ValidationError(\"Invalid Dockerfile\")\n\n return image_sha256\n\n\ndef _extract_docker_image_file(*, model, instance, filename: str):\n \"\"\"Extract a file from the root of a tarball.\"\"\"\n try:\n with instance.image.open(mode=\"rb\") as im, tarfile.open(\n fileobj=im, mode=\"r\"\n ) as t:\n member = dict(zip(t.getnames(), t.getmembers()))[filename]\n file = t.extractfile(member).read()\n return file\n except (KeyError, tarfile.ReadError):\n model.objects.filter(pk=instance.pk).update(\n status=(\n f\"{filename} not found at the root of the container image \"\n f\"file. Was this created with docker save?\"\n )\n )\n raise ValidationError(\"Invalid Dockerfile\")\n\n\ndef retry_if_dropped(func):\n \"\"\"\n Retry a function that relies on an open database connection.\n\n Use this decorator when you have a long running task as sometimes the db\n connection will drop.\n \"\"\"\n\n def wrapper(*args, **kwargs):\n n_tries = 0\n max_tries = 2\n err = None\n\n while n_tries < max_tries:\n n_tries += 1\n\n try:\n return func(*args, **kwargs)\n except OperationalError as e:\n err = e\n\n # This needs to be a local import\n from django.db import connection\n\n connection.close()\n\n raise err\n\n return wrapper\n\n\n@retry_if_dropped\ndef get_model_instance(*, pk, app_label, model_name):\n model = apps.get_model(app_label=app_label, model_name=model_name)\n return model.objects.get(pk=pk)\n\n\n@shared_task\ndef execute_job(\n *_, job_pk: uuid.UUID, job_app_label: str, job_model_name: str\n) -> None:\n Job = apps.get_model( # noqa: N806\n app_label=job_app_label, model_name=job_model_name\n )\n job = Job.objects.get(pk=job_pk)\n\n if job.status in [job.PENDING, job.RETRY]:\n job.update_status(status=job.STARTED)\n else:\n raise RuntimeError(\"Job is not set to be executed.\")\n\n if not job.container.ready:\n msg = f\"Method {job.container.pk} was not ready to be used.\"\n job.update_status(status=job.FAILURE, error_message=msg)\n raise RuntimeError(msg)\n try:\n with job.executor_cls(\n job_id=str(job.pk),\n job_class=Job,\n input_files=job.input_files,\n output_interfaces=job.output_interfaces,\n exec_image=job.container.image,\n exec_image_sha256=job.container.image_sha256,\n memory_limit=job.container.requires_memory_gb,\n ) as ev:\n # This call is potentially very long\n ev.execute()\n except ComponentException as e:\n job = get_model_instance(\n pk=job_pk, app_label=job_app_label, model_name=job_model_name\n )\n job.update_status(\n status=job.FAILURE,\n stdout=ev.stdout,\n stderr=ev.stderr,\n 
error_message=str(e),\n )\n except (SoftTimeLimitExceeded, TimeLimitExceeded):\n job = get_model_instance(\n pk=job_pk, app_label=job_app_label, model_name=job_model_name\n )\n job.update_status(\n status=job.FAILURE,\n stdout=ev.stdout,\n stderr=ev.stderr,\n error_message=\"Time limit exceeded.\",\n )\n except Exception:\n job = get_model_instance(\n pk=job_pk, app_label=job_app_label, model_name=job_model_name\n )\n job.update_status(\n status=job.FAILURE,\n stdout=ev.stdout,\n stderr=ev.stderr,\n error_message=\"An unexpected error occurred.\",\n )\n raise\n else:\n job = get_model_instance(\n pk=job_pk, app_label=job_app_label, model_name=job_model_name\n )\n job.update_status(\n status=job.SUCCESS, stdout=ev.stdout, stderr=ev.stderr\n )\n\n\n@shared_task\ndef mark_long_running_jobs_failed(\n *, app_label: str, model_name: str, extra_filters: Dict[str, str] = None\n):\n \"\"\"\n Mark jobs that have been started but did not finish (maybe due to\n an unrecoverable hardware error). It will mark tasks FAILED that have the\n status STARTED after 1.2x the task limit (which is different for each\n queue), so, this must be scheduled on the same queue that the execute_job\n task is run for this app_label and model_name.\n\n If the task is still running on Celery then it will still be able to\n report as passed later.\n \"\"\"\n Job = apps.get_model( # noqa: N806\n app_label=app_label, model_name=model_name\n )\n\n jobs_to_mark = Job.objects.filter(\n started_at__lt=now()\n - 1.2 * timedelta(seconds=settings.CELERY_TASK_TIME_LIMIT),\n status=Job.STARTED,\n )\n\n if extra_filters:\n jobs_to_mark = jobs_to_mark.filter(**extra_filters)\n\n for j in jobs_to_mark:\n j.update_status(\n status=Job.FAILURE, error_message=\"Time limit exceeded.\"\n )\n\n return [j.pk for j in jobs_to_mark]\n\n\n@shared_task\ndef start_service(*, pk: uuid.UUID, app_label: str, model_name: str):\n session = get_model_instance(\n pk=pk, app_label=app_label, model_name=model_name\n )\n session.start()\n\n\n@shared_task\ndef stop_service(*, pk: uuid.UUID, app_label: str, model_name: str):\n session = get_model_instance(\n pk=pk, app_label=app_label, model_name=model_name\n )\n session.stop()\n\n\n@shared_task\ndef stop_expired_services(*, app_label: str, model_name: str, region: str):\n model = apps.get_model(app_label=app_label, model_name=model_name)\n\n services_to_stop = (\n model.objects.annotate(\n expires=ExpressionWrapper(\n F(\"created\") + F(\"maximum_duration\"),\n output_field=DateTimeField(),\n )\n )\n .filter(expires__lt=now(), region=region)\n .exclude(status=model.STOPPED)\n )\n\n for service in services_to_stop:\n service.stop()\n\n return [str(s) for s in services_to_stop]\n", "path": "app/grandchallenge/components/tasks.py"}]}
| 3,139 | 150 |
gh_patches_debug_31714
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-3306
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Prometheus exporter should convert non-monotonic sums to gauges
The [current implementation](https://github.com/open-telemetry/opentelemetry-python/blob/main/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py#L255) of Sum export in the prometheus exporter does not differentiate between monotonic and non-monotonic sums.
The [prometheus compatibility spec for sums](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/compatibility/prometheus_and_openmetrics.md#sums) says: `If the aggregation temporality is cumulative and the sum is non-monotonic, it MUST be converted to a Prometheus Gauge.`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 This library allows export of metrics data to `Prometheus <https://prometheus.io/>`_.
17
18 Usage
19 -----
20
21 The **OpenTelemetry Prometheus Exporter** allows export of `OpenTelemetry`_
22 metrics to `Prometheus`_.
23
24
25 .. _Prometheus: https://prometheus.io/
26 .. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
27
28 .. code:: python
29
30 from prometheus_client import start_http_server
31
32 from opentelemetry.exporter.prometheus import PrometheusMetricReader
33 from opentelemetry.metrics import get_meter_provider, set_meter_provider
34 from opentelemetry.sdk.metrics import MeterProvider
35
36 # Start Prometheus client
37 start_http_server(port=8000, addr="localhost")
38
39 # Exporter to export metrics to Prometheus
40 prefix = "MyAppPrefix"
41 reader = PrometheusMetricReader(prefix)
42
43 # Meter is responsible for creating and recording metrics
44 set_meter_provider(MeterProvider(metric_readers=[reader]))
45 meter = get_meter_provider().get_meter("myapp", "0.1.2")
46
47 counter = meter.create_counter(
48 "requests",
49 "requests",
50 "number of requests",
51 )
52
53 # Labels are used to identify key-values that are associated with a specific
54 # metric that you want to record. These are useful for pre-aggregation and can
55 # be used to store custom dimensions pertaining to a metric
56 labels = {"environment": "staging"}
57
58 counter.add(25, labels)
59 input("Press any key to exit...")
60
61 API
62 ---
63 """
64
65 from collections import deque
66 from itertools import chain
67 from json import dumps
68 from logging import getLogger
69 from re import IGNORECASE, UNICODE, compile
70 from typing import Dict, Sequence, Tuple, Union
71
72 from prometheus_client.core import (
73 REGISTRY,
74 CounterMetricFamily,
75 GaugeMetricFamily,
76 HistogramMetricFamily,
77 InfoMetricFamily,
78 )
79 from prometheus_client.core import Metric as PrometheusMetric
80
81 from opentelemetry.sdk.metrics import Counter
82 from opentelemetry.sdk.metrics import Histogram as HistogramInstrument
83 from opentelemetry.sdk.metrics import (
84 ObservableCounter,
85 ObservableGauge,
86 ObservableUpDownCounter,
87 UpDownCounter,
88 )
89 from opentelemetry.sdk.metrics.export import (
90 AggregationTemporality,
91 Gauge,
92 Histogram,
93 HistogramDataPoint,
94 MetricReader,
95 MetricsData,
96 Sum,
97 )
98
99 _logger = getLogger(__name__)
100
101 _TARGET_INFO_NAME = "target"
102 _TARGET_INFO_DESCRIPTION = "Target metadata"
103
104
105 def _convert_buckets(
106 bucket_counts: Sequence[int], explicit_bounds: Sequence[float]
107 ) -> Sequence[Tuple[str, int]]:
108 buckets = []
109 total_count = 0
110 for upper_bound, count in zip(
111 chain(explicit_bounds, ["+Inf"]),
112 bucket_counts,
113 ):
114 total_count += count
115 buckets.append((f"{upper_bound}", total_count))
116
117 return buckets
118
119
120 class PrometheusMetricReader(MetricReader):
121 """Prometheus metric exporter for OpenTelemetry."""
122
123 def __init__(self, disable_target_info: bool = False) -> None:
124 super().__init__(
125 preferred_temporality={
126 Counter: AggregationTemporality.CUMULATIVE,
127 UpDownCounter: AggregationTemporality.CUMULATIVE,
128 HistogramInstrument: AggregationTemporality.CUMULATIVE,
129 ObservableCounter: AggregationTemporality.CUMULATIVE,
130 ObservableUpDownCounter: AggregationTemporality.CUMULATIVE,
131 ObservableGauge: AggregationTemporality.CUMULATIVE,
132 }
133 )
134 self._collector = _CustomCollector(disable_target_info)
135 REGISTRY.register(self._collector)
136 self._collector._callback = self.collect
137
138 def _receive_metrics(
139 self,
140 metrics_data: MetricsData,
141 timeout_millis: float = 10_000,
142 **kwargs,
143 ) -> None:
144 if metrics_data is None:
145 return
146 self._collector.add_metrics_data(metrics_data)
147
148 def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:
149 REGISTRY.unregister(self._collector)
150
151
152 class _CustomCollector:
153 """_CustomCollector represents the Prometheus Collector object
154
155 See more:
156 https://github.com/prometheus/client_python#custom-collectors
157 """
158
159 def __init__(self, disable_target_info: bool = False):
160 self._callback = None
161 self._metrics_datas = deque()
162 self._non_letters_digits_underscore_re = compile(
163 r"[^\w]", UNICODE | IGNORECASE
164 )
165 self._disable_target_info = disable_target_info
166 self._target_info = None
167
168 def add_metrics_data(self, metrics_data: MetricsData) -> None:
169 """Add metrics to Prometheus data"""
170 self._metrics_datas.append(metrics_data)
171
172 def collect(self) -> None:
173 """Collect fetches the metrics from OpenTelemetry
174 and delivers them as Prometheus Metrics.
175 Collect is invoked every time a ``prometheus.Gatherer`` is run
176 for example when the HTTP endpoint is invoked by Prometheus.
177 """
178 if self._callback is not None:
179 self._callback()
180
181 metric_family_id_metric_family = {}
182
183 if len(self._metrics_datas):
184 if not self._disable_target_info:
185 if self._target_info is None:
186 attributes = {}
187 for res in self._metrics_datas[0].resource_metrics:
188 attributes = {**attributes, **res.resource.attributes}
189
190 self._target_info = self._create_info_metric(
191 _TARGET_INFO_NAME, _TARGET_INFO_DESCRIPTION, attributes
192 )
193 metric_family_id_metric_family[
194 _TARGET_INFO_NAME
195 ] = self._target_info
196
197 while self._metrics_datas:
198 self._translate_to_prometheus(
199 self._metrics_datas.popleft(), metric_family_id_metric_family
200 )
201
202 if metric_family_id_metric_family:
203 for metric_family in metric_family_id_metric_family.values():
204 yield metric_family
205
206 # pylint: disable=too-many-locals,too-many-branches
207 def _translate_to_prometheus(
208 self,
209 metrics_data: MetricsData,
210 metric_family_id_metric_family: Dict[str, PrometheusMetric],
211 ):
212 metrics = []
213
214 for resource_metrics in metrics_data.resource_metrics:
215 for scope_metrics in resource_metrics.scope_metrics:
216 for metric in scope_metrics.metrics:
217 metrics.append(metric)
218
219 for metric in metrics:
220 label_valuess = []
221 values = []
222
223 pre_metric_family_ids = []
224
225 metric_name = ""
226 metric_name += self._sanitize(metric.name)
227
228 metric_description = metric.description or ""
229
230 for number_data_point in metric.data.data_points:
231 label_keys = []
232 label_values = []
233
234 for key, value in number_data_point.attributes.items():
235 label_keys.append(self._sanitize(key))
236 label_values.append(self._check_value(value))
237
238 pre_metric_family_ids.append(
239 "|".join(
240 [
241 metric_name,
242 metric_description,
243 "%".join(label_keys),
244 metric.unit,
245 ]
246 )
247 )
248
249 label_valuess.append(label_values)
250 if isinstance(number_data_point, HistogramDataPoint):
251 values.append(
252 {
253 "bucket_counts": number_data_point.bucket_counts,
254 "explicit_bounds": (
255 number_data_point.explicit_bounds
256 ),
257 "sum": number_data_point.sum,
258 }
259 )
260 else:
261 values.append(number_data_point.value)
262
263 for pre_metric_family_id, label_values, value in zip(
264 pre_metric_family_ids, label_valuess, values
265 ):
266 if isinstance(metric.data, Sum):
267
268 metric_family_id = "|".join(
269 [pre_metric_family_id, CounterMetricFamily.__name__]
270 )
271
272 if metric_family_id not in metric_family_id_metric_family:
273 metric_family_id_metric_family[
274 metric_family_id
275 ] = CounterMetricFamily(
276 name=metric_name,
277 documentation=metric_description,
278 labels=label_keys,
279 unit=metric.unit,
280 )
281 metric_family_id_metric_family[
282 metric_family_id
283 ].add_metric(labels=label_values, value=value)
284 elif isinstance(metric.data, Gauge):
285
286 metric_family_id = "|".join(
287 [pre_metric_family_id, GaugeMetricFamily.__name__]
288 )
289
290 if (
291 metric_family_id
292 not in metric_family_id_metric_family.keys()
293 ):
294 metric_family_id_metric_family[
295 metric_family_id
296 ] = GaugeMetricFamily(
297 name=metric_name,
298 documentation=metric_description,
299 labels=label_keys,
300 unit=metric.unit,
301 )
302 metric_family_id_metric_family[
303 metric_family_id
304 ].add_metric(labels=label_values, value=value)
305 elif isinstance(metric.data, Histogram):
306
307 metric_family_id = "|".join(
308 [pre_metric_family_id, HistogramMetricFamily.__name__]
309 )
310
311 if (
312 metric_family_id
313 not in metric_family_id_metric_family.keys()
314 ):
315 metric_family_id_metric_family[
316 metric_family_id
317 ] = HistogramMetricFamily(
318 name=metric_name,
319 documentation=metric_description,
320 labels=label_keys,
321 unit=metric.unit,
322 )
323 metric_family_id_metric_family[
324 metric_family_id
325 ].add_metric(
326 labels=label_values,
327 buckets=_convert_buckets(
328 value["bucket_counts"], value["explicit_bounds"]
329 ),
330 sum_value=value["sum"],
331 )
332 else:
333 _logger.warning(
334 "Unsupported metric data. %s", type(metric.data)
335 )
336
337 def _sanitize(self, key: str) -> str:
338 """sanitize the given metric name or label according to Prometheus rule.
339 Replace all characters other than [A-Za-z0-9_] with '_'.
340 """
341 return self._non_letters_digits_underscore_re.sub("_", key)
342
343 # pylint: disable=no-self-use
344 def _check_value(self, value: Union[int, float, str, Sequence]) -> str:
345 """Check the label value and return is appropriate representation"""
346 if not isinstance(value, str):
347 return dumps(value, default=str)
348 return str(value)
349
350 def _create_info_metric(
351 self, name: str, description: str, attributes: Dict[str, str]
352 ) -> InfoMetricFamily:
353 """Create an Info Metric Family with list of attributes"""
354 info = InfoMetricFamily(name, description, labels=attributes)
355 info.add_metric(labels=list(attributes.keys()), value=attributes)
356 return info
357
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py
--- a/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py
+++ b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py
@@ -263,7 +263,25 @@
for pre_metric_family_id, label_values, value in zip(
pre_metric_family_ids, label_valuess, values
):
- if isinstance(metric.data, Sum):
+ is_non_monotonic_sum = (
+ isinstance(metric.data, Sum)
+ and metric.data.is_monotonic is False
+ )
+ is_cumulative = (
+ isinstance(metric.data, Sum)
+ and metric.data.aggregation_temporality
+ == AggregationTemporality.CUMULATIVE
+ )
+
+ # The prometheus compatibility spec for sums says: If the aggregation temporality is cumulative and the sum is non-monotonic, it MUST be converted to a Prometheus Gauge.
+ should_convert_sum_to_gauge = (
+ is_non_monotonic_sum and is_cumulative
+ )
+
+ if (
+ isinstance(metric.data, Sum)
+ and not should_convert_sum_to_gauge
+ ):
metric_family_id = "|".join(
[pre_metric_family_id, CounterMetricFamily.__name__]
@@ -281,7 +299,10 @@
metric_family_id_metric_family[
metric_family_id
].add_metric(labels=label_values, value=value)
- elif isinstance(metric.data, Gauge):
+ elif (
+ isinstance(metric.data, Gauge)
+ or should_convert_sum_to_gauge
+ ):
metric_family_id = "|".join(
[pre_metric_family_id, GaugeMetricFamily.__name__]
|
{"golden_diff": "diff --git a/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py\n--- a/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py\n+++ b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py\n@@ -263,7 +263,25 @@\n for pre_metric_family_id, label_values, value in zip(\n pre_metric_family_ids, label_valuess, values\n ):\n- if isinstance(metric.data, Sum):\n+ is_non_monotonic_sum = (\n+ isinstance(metric.data, Sum)\n+ and metric.data.is_monotonic is False\n+ )\n+ is_cumulative = (\n+ isinstance(metric.data, Sum)\n+ and metric.data.aggregation_temporality\n+ == AggregationTemporality.CUMULATIVE\n+ )\n+\n+ # The prometheus compatibility spec for sums says: If the aggregation temporality is cumulative and the sum is non-monotonic, it MUST be converted to a Prometheus Gauge.\n+ should_convert_sum_to_gauge = (\n+ is_non_monotonic_sum and is_cumulative\n+ )\n+\n+ if (\n+ isinstance(metric.data, Sum)\n+ and not should_convert_sum_to_gauge\n+ ):\n \n metric_family_id = \"|\".join(\n [pre_metric_family_id, CounterMetricFamily.__name__]\n@@ -281,7 +299,10 @@\n metric_family_id_metric_family[\n metric_family_id\n ].add_metric(labels=label_values, value=value)\n- elif isinstance(metric.data, Gauge):\n+ elif (\n+ isinstance(metric.data, Gauge)\n+ or should_convert_sum_to_gauge\n+ ):\n \n metric_family_id = \"|\".join(\n [pre_metric_family_id, GaugeMetricFamily.__name__]\n", "issue": "Prometheus exporter should convert non-monotonic sums to gauges\nThe [current implementation](https://github.com/open-telemetry/opentelemetry-python/blob/main/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py#L255) of Sum export in the prometheus exporter does not differentiate between monotonic and non-monotonic sums.\r\n\r\nThe [prometheus compatibility spec for sums](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/compatibility/prometheus_and_openmetrics.md#sums) says: `If the aggregation temporality is cumulative and the sum is non-monotonic, it MUST be converted to a Prometheus Gauge.`.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis library allows export of metrics data to `Prometheus <https://prometheus.io/>`_.\n\nUsage\n-----\n\nThe **OpenTelemetry Prometheus Exporter** allows export of `OpenTelemetry`_\nmetrics to `Prometheus`_.\n\n\n.. _Prometheus: https://prometheus.io/\n.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/\n\n.. 
code:: python\n\n from prometheus_client import start_http_server\n\n from opentelemetry.exporter.prometheus import PrometheusMetricReader\n from opentelemetry.metrics import get_meter_provider, set_meter_provider\n from opentelemetry.sdk.metrics import MeterProvider\n\n # Start Prometheus client\n start_http_server(port=8000, addr=\"localhost\")\n\n # Exporter to export metrics to Prometheus\n prefix = \"MyAppPrefix\"\n reader = PrometheusMetricReader(prefix)\n\n # Meter is responsible for creating and recording metrics\n set_meter_provider(MeterProvider(metric_readers=[reader]))\n meter = get_meter_provider().get_meter(\"myapp\", \"0.1.2\")\n\n counter = meter.create_counter(\n \"requests\",\n \"requests\",\n \"number of requests\",\n )\n\n # Labels are used to identify key-values that are associated with a specific\n # metric that you want to record. These are useful for pre-aggregation and can\n # be used to store custom dimensions pertaining to a metric\n labels = {\"environment\": \"staging\"}\n\n counter.add(25, labels)\n input(\"Press any key to exit...\")\n\nAPI\n---\n\"\"\"\n\nfrom collections import deque\nfrom itertools import chain\nfrom json import dumps\nfrom logging import getLogger\nfrom re import IGNORECASE, UNICODE, compile\nfrom typing import Dict, Sequence, Tuple, Union\n\nfrom prometheus_client.core import (\n REGISTRY,\n CounterMetricFamily,\n GaugeMetricFamily,\n HistogramMetricFamily,\n InfoMetricFamily,\n)\nfrom prometheus_client.core import Metric as PrometheusMetric\n\nfrom opentelemetry.sdk.metrics import Counter\nfrom opentelemetry.sdk.metrics import Histogram as HistogramInstrument\nfrom opentelemetry.sdk.metrics import (\n ObservableCounter,\n ObservableGauge,\n ObservableUpDownCounter,\n UpDownCounter,\n)\nfrom opentelemetry.sdk.metrics.export import (\n AggregationTemporality,\n Gauge,\n Histogram,\n HistogramDataPoint,\n MetricReader,\n MetricsData,\n Sum,\n)\n\n_logger = getLogger(__name__)\n\n_TARGET_INFO_NAME = \"target\"\n_TARGET_INFO_DESCRIPTION = \"Target metadata\"\n\n\ndef _convert_buckets(\n bucket_counts: Sequence[int], explicit_bounds: Sequence[float]\n) -> Sequence[Tuple[str, int]]:\n buckets = []\n total_count = 0\n for upper_bound, count in zip(\n chain(explicit_bounds, [\"+Inf\"]),\n bucket_counts,\n ):\n total_count += count\n buckets.append((f\"{upper_bound}\", total_count))\n\n return buckets\n\n\nclass PrometheusMetricReader(MetricReader):\n \"\"\"Prometheus metric exporter for OpenTelemetry.\"\"\"\n\n def __init__(self, disable_target_info: bool = False) -> None:\n super().__init__(\n preferred_temporality={\n Counter: AggregationTemporality.CUMULATIVE,\n UpDownCounter: AggregationTemporality.CUMULATIVE,\n HistogramInstrument: AggregationTemporality.CUMULATIVE,\n ObservableCounter: AggregationTemporality.CUMULATIVE,\n ObservableUpDownCounter: AggregationTemporality.CUMULATIVE,\n ObservableGauge: AggregationTemporality.CUMULATIVE,\n }\n )\n self._collector = _CustomCollector(disable_target_info)\n REGISTRY.register(self._collector)\n self._collector._callback = self.collect\n\n def _receive_metrics(\n self,\n metrics_data: MetricsData,\n timeout_millis: float = 10_000,\n **kwargs,\n ) -> None:\n if metrics_data is None:\n return\n self._collector.add_metrics_data(metrics_data)\n\n def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:\n REGISTRY.unregister(self._collector)\n\n\nclass _CustomCollector:\n \"\"\"_CustomCollector represents the Prometheus Collector object\n\n See more:\n 
https://github.com/prometheus/client_python#custom-collectors\n \"\"\"\n\n def __init__(self, disable_target_info: bool = False):\n self._callback = None\n self._metrics_datas = deque()\n self._non_letters_digits_underscore_re = compile(\n r\"[^\\w]\", UNICODE | IGNORECASE\n )\n self._disable_target_info = disable_target_info\n self._target_info = None\n\n def add_metrics_data(self, metrics_data: MetricsData) -> None:\n \"\"\"Add metrics to Prometheus data\"\"\"\n self._metrics_datas.append(metrics_data)\n\n def collect(self) -> None:\n \"\"\"Collect fetches the metrics from OpenTelemetry\n and delivers them as Prometheus Metrics.\n Collect is invoked every time a ``prometheus.Gatherer`` is run\n for example when the HTTP endpoint is invoked by Prometheus.\n \"\"\"\n if self._callback is not None:\n self._callback()\n\n metric_family_id_metric_family = {}\n\n if len(self._metrics_datas):\n if not self._disable_target_info:\n if self._target_info is None:\n attributes = {}\n for res in self._metrics_datas[0].resource_metrics:\n attributes = {**attributes, **res.resource.attributes}\n\n self._target_info = self._create_info_metric(\n _TARGET_INFO_NAME, _TARGET_INFO_DESCRIPTION, attributes\n )\n metric_family_id_metric_family[\n _TARGET_INFO_NAME\n ] = self._target_info\n\n while self._metrics_datas:\n self._translate_to_prometheus(\n self._metrics_datas.popleft(), metric_family_id_metric_family\n )\n\n if metric_family_id_metric_family:\n for metric_family in metric_family_id_metric_family.values():\n yield metric_family\n\n # pylint: disable=too-many-locals,too-many-branches\n def _translate_to_prometheus(\n self,\n metrics_data: MetricsData,\n metric_family_id_metric_family: Dict[str, PrometheusMetric],\n ):\n metrics = []\n\n for resource_metrics in metrics_data.resource_metrics:\n for scope_metrics in resource_metrics.scope_metrics:\n for metric in scope_metrics.metrics:\n metrics.append(metric)\n\n for metric in metrics:\n label_valuess = []\n values = []\n\n pre_metric_family_ids = []\n\n metric_name = \"\"\n metric_name += self._sanitize(metric.name)\n\n metric_description = metric.description or \"\"\n\n for number_data_point in metric.data.data_points:\n label_keys = []\n label_values = []\n\n for key, value in number_data_point.attributes.items():\n label_keys.append(self._sanitize(key))\n label_values.append(self._check_value(value))\n\n pre_metric_family_ids.append(\n \"|\".join(\n [\n metric_name,\n metric_description,\n \"%\".join(label_keys),\n metric.unit,\n ]\n )\n )\n\n label_valuess.append(label_values)\n if isinstance(number_data_point, HistogramDataPoint):\n values.append(\n {\n \"bucket_counts\": number_data_point.bucket_counts,\n \"explicit_bounds\": (\n number_data_point.explicit_bounds\n ),\n \"sum\": number_data_point.sum,\n }\n )\n else:\n values.append(number_data_point.value)\n\n for pre_metric_family_id, label_values, value in zip(\n pre_metric_family_ids, label_valuess, values\n ):\n if isinstance(metric.data, Sum):\n\n metric_family_id = \"|\".join(\n [pre_metric_family_id, CounterMetricFamily.__name__]\n )\n\n if metric_family_id not in metric_family_id_metric_family:\n metric_family_id_metric_family[\n metric_family_id\n ] = CounterMetricFamily(\n name=metric_name,\n documentation=metric_description,\n labels=label_keys,\n unit=metric.unit,\n )\n metric_family_id_metric_family[\n metric_family_id\n ].add_metric(labels=label_values, value=value)\n elif isinstance(metric.data, Gauge):\n\n metric_family_id = \"|\".join(\n [pre_metric_family_id, 
GaugeMetricFamily.__name__]\n )\n\n if (\n metric_family_id\n not in metric_family_id_metric_family.keys()\n ):\n metric_family_id_metric_family[\n metric_family_id\n ] = GaugeMetricFamily(\n name=metric_name,\n documentation=metric_description,\n labels=label_keys,\n unit=metric.unit,\n )\n metric_family_id_metric_family[\n metric_family_id\n ].add_metric(labels=label_values, value=value)\n elif isinstance(metric.data, Histogram):\n\n metric_family_id = \"|\".join(\n [pre_metric_family_id, HistogramMetricFamily.__name__]\n )\n\n if (\n metric_family_id\n not in metric_family_id_metric_family.keys()\n ):\n metric_family_id_metric_family[\n metric_family_id\n ] = HistogramMetricFamily(\n name=metric_name,\n documentation=metric_description,\n labels=label_keys,\n unit=metric.unit,\n )\n metric_family_id_metric_family[\n metric_family_id\n ].add_metric(\n labels=label_values,\n buckets=_convert_buckets(\n value[\"bucket_counts\"], value[\"explicit_bounds\"]\n ),\n sum_value=value[\"sum\"],\n )\n else:\n _logger.warning(\n \"Unsupported metric data. %s\", type(metric.data)\n )\n\n def _sanitize(self, key: str) -> str:\n \"\"\"sanitize the given metric name or label according to Prometheus rule.\n Replace all characters other than [A-Za-z0-9_] with '_'.\n \"\"\"\n return self._non_letters_digits_underscore_re.sub(\"_\", key)\n\n # pylint: disable=no-self-use\n def _check_value(self, value: Union[int, float, str, Sequence]) -> str:\n \"\"\"Check the label value and return is appropriate representation\"\"\"\n if not isinstance(value, str):\n return dumps(value, default=str)\n return str(value)\n\n def _create_info_metric(\n self, name: str, description: str, attributes: Dict[str, str]\n ) -> InfoMetricFamily:\n \"\"\"Create an Info Metric Family with list of attributes\"\"\"\n info = InfoMetricFamily(name, description, labels=attributes)\n info.add_metric(labels=list(attributes.keys()), value=attributes)\n return info\n", "path": "exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis library allows export of metrics data to `Prometheus <https://prometheus.io/>`_.\n\nUsage\n-----\n\nThe **OpenTelemetry Prometheus Exporter** allows export of `OpenTelemetry`_\nmetrics to `Prometheus`_.\n\n\n.. _Prometheus: https://prometheus.io/\n.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/\n\n.. 
code:: python\n\n from prometheus_client import start_http_server\n\n from opentelemetry.exporter.prometheus import PrometheusMetricReader\n from opentelemetry.metrics import get_meter_provider, set_meter_provider\n from opentelemetry.sdk.metrics import MeterProvider\n\n # Start Prometheus client\n start_http_server(port=8000, addr=\"localhost\")\n\n # Exporter to export metrics to Prometheus\n prefix = \"MyAppPrefix\"\n reader = PrometheusMetricReader(prefix)\n\n # Meter is responsible for creating and recording metrics\n set_meter_provider(MeterProvider(metric_readers=[reader]))\n meter = get_meter_provider().get_meter(\"myapp\", \"0.1.2\")\n\n counter = meter.create_counter(\n \"requests\",\n \"requests\",\n \"number of requests\",\n )\n\n # Labels are used to identify key-values that are associated with a specific\n # metric that you want to record. These are useful for pre-aggregation and can\n # be used to store custom dimensions pertaining to a metric\n labels = {\"environment\": \"staging\"}\n\n counter.add(25, labels)\n input(\"Press any key to exit...\")\n\nAPI\n---\n\"\"\"\n\nfrom collections import deque\nfrom itertools import chain\nfrom json import dumps\nfrom logging import getLogger\nfrom re import IGNORECASE, UNICODE, compile\nfrom typing import Dict, Sequence, Tuple, Union\n\nfrom prometheus_client.core import (\n REGISTRY,\n CounterMetricFamily,\n GaugeMetricFamily,\n HistogramMetricFamily,\n InfoMetricFamily,\n)\nfrom prometheus_client.core import Metric as PrometheusMetric\n\nfrom opentelemetry.sdk.metrics import Counter\nfrom opentelemetry.sdk.metrics import Histogram as HistogramInstrument\nfrom opentelemetry.sdk.metrics import (\n ObservableCounter,\n ObservableGauge,\n ObservableUpDownCounter,\n UpDownCounter,\n)\nfrom opentelemetry.sdk.metrics.export import (\n AggregationTemporality,\n Gauge,\n Histogram,\n HistogramDataPoint,\n MetricReader,\n MetricsData,\n Sum,\n)\n\n_logger = getLogger(__name__)\n\n_TARGET_INFO_NAME = \"target\"\n_TARGET_INFO_DESCRIPTION = \"Target metadata\"\n\n\ndef _convert_buckets(\n bucket_counts: Sequence[int], explicit_bounds: Sequence[float]\n) -> Sequence[Tuple[str, int]]:\n buckets = []\n total_count = 0\n for upper_bound, count in zip(\n chain(explicit_bounds, [\"+Inf\"]),\n bucket_counts,\n ):\n total_count += count\n buckets.append((f\"{upper_bound}\", total_count))\n\n return buckets\n\n\nclass PrometheusMetricReader(MetricReader):\n \"\"\"Prometheus metric exporter for OpenTelemetry.\"\"\"\n\n def __init__(self, disable_target_info: bool = False) -> None:\n super().__init__(\n preferred_temporality={\n Counter: AggregationTemporality.CUMULATIVE,\n UpDownCounter: AggregationTemporality.CUMULATIVE,\n HistogramInstrument: AggregationTemporality.CUMULATIVE,\n ObservableCounter: AggregationTemporality.CUMULATIVE,\n ObservableUpDownCounter: AggregationTemporality.CUMULATIVE,\n ObservableGauge: AggregationTemporality.CUMULATIVE,\n }\n )\n self._collector = _CustomCollector(disable_target_info)\n REGISTRY.register(self._collector)\n self._collector._callback = self.collect\n\n def _receive_metrics(\n self,\n metrics_data: MetricsData,\n timeout_millis: float = 10_000,\n **kwargs,\n ) -> None:\n if metrics_data is None:\n return\n self._collector.add_metrics_data(metrics_data)\n\n def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None:\n REGISTRY.unregister(self._collector)\n\n\nclass _CustomCollector:\n \"\"\"_CustomCollector represents the Prometheus Collector object\n\n See more:\n 
https://github.com/prometheus/client_python#custom-collectors\n \"\"\"\n\n def __init__(self, disable_target_info: bool = False):\n self._callback = None\n self._metrics_datas = deque()\n self._non_letters_digits_underscore_re = compile(\n r\"[^\\w]\", UNICODE | IGNORECASE\n )\n self._disable_target_info = disable_target_info\n self._target_info = None\n\n def add_metrics_data(self, metrics_data: MetricsData) -> None:\n \"\"\"Add metrics to Prometheus data\"\"\"\n self._metrics_datas.append(metrics_data)\n\n def collect(self) -> None:\n \"\"\"Collect fetches the metrics from OpenTelemetry\n and delivers them as Prometheus Metrics.\n Collect is invoked every time a ``prometheus.Gatherer`` is run\n for example when the HTTP endpoint is invoked by Prometheus.\n \"\"\"\n if self._callback is not None:\n self._callback()\n\n metric_family_id_metric_family = {}\n\n if len(self._metrics_datas):\n if not self._disable_target_info:\n if self._target_info is None:\n attributes = {}\n for res in self._metrics_datas[0].resource_metrics:\n attributes = {**attributes, **res.resource.attributes}\n\n self._target_info = self._create_info_metric(\n _TARGET_INFO_NAME, _TARGET_INFO_DESCRIPTION, attributes\n )\n metric_family_id_metric_family[\n _TARGET_INFO_NAME\n ] = self._target_info\n\n while self._metrics_datas:\n self._translate_to_prometheus(\n self._metrics_datas.popleft(), metric_family_id_metric_family\n )\n\n if metric_family_id_metric_family:\n for metric_family in metric_family_id_metric_family.values():\n yield metric_family\n\n # pylint: disable=too-many-locals,too-many-branches\n def _translate_to_prometheus(\n self,\n metrics_data: MetricsData,\n metric_family_id_metric_family: Dict[str, PrometheusMetric],\n ):\n metrics = []\n\n for resource_metrics in metrics_data.resource_metrics:\n for scope_metrics in resource_metrics.scope_metrics:\n for metric in scope_metrics.metrics:\n metrics.append(metric)\n\n for metric in metrics:\n label_valuess = []\n values = []\n\n pre_metric_family_ids = []\n\n metric_name = \"\"\n metric_name += self._sanitize(metric.name)\n\n metric_description = metric.description or \"\"\n\n for number_data_point in metric.data.data_points:\n label_keys = []\n label_values = []\n\n for key, value in number_data_point.attributes.items():\n label_keys.append(self._sanitize(key))\n label_values.append(self._check_value(value))\n\n pre_metric_family_ids.append(\n \"|\".join(\n [\n metric_name,\n metric_description,\n \"%\".join(label_keys),\n metric.unit,\n ]\n )\n )\n\n label_valuess.append(label_values)\n if isinstance(number_data_point, HistogramDataPoint):\n values.append(\n {\n \"bucket_counts\": number_data_point.bucket_counts,\n \"explicit_bounds\": (\n number_data_point.explicit_bounds\n ),\n \"sum\": number_data_point.sum,\n }\n )\n else:\n values.append(number_data_point.value)\n\n for pre_metric_family_id, label_values, value in zip(\n pre_metric_family_ids, label_valuess, values\n ):\n is_non_monotonic_sum = (\n isinstance(metric.data, Sum)\n and metric.data.is_monotonic is False\n )\n is_cumulative = (\n isinstance(metric.data, Sum)\n and metric.data.aggregation_temporality\n == AggregationTemporality.CUMULATIVE\n )\n\n # The prometheus compatibility spec for sums says: If the aggregation temporality is cumulative and the sum is non-monotonic, it MUST be converted to a Prometheus Gauge.\n should_convert_sum_to_gauge = (\n is_non_monotonic_sum and is_cumulative\n )\n\n if (\n isinstance(metric.data, Sum)\n and not should_convert_sum_to_gauge\n ):\n\n 
metric_family_id = \"|\".join(\n [pre_metric_family_id, CounterMetricFamily.__name__]\n )\n\n if metric_family_id not in metric_family_id_metric_family:\n metric_family_id_metric_family[\n metric_family_id\n ] = CounterMetricFamily(\n name=metric_name,\n documentation=metric_description,\n labels=label_keys,\n unit=metric.unit,\n )\n metric_family_id_metric_family[\n metric_family_id\n ].add_metric(labels=label_values, value=value)\n elif (\n isinstance(metric.data, Gauge)\n or should_convert_sum_to_gauge\n ):\n\n metric_family_id = \"|\".join(\n [pre_metric_family_id, GaugeMetricFamily.__name__]\n )\n\n if (\n metric_family_id\n not in metric_family_id_metric_family.keys()\n ):\n metric_family_id_metric_family[\n metric_family_id\n ] = GaugeMetricFamily(\n name=metric_name,\n documentation=metric_description,\n labels=label_keys,\n unit=metric.unit,\n )\n metric_family_id_metric_family[\n metric_family_id\n ].add_metric(labels=label_values, value=value)\n elif isinstance(metric.data, Histogram):\n\n metric_family_id = \"|\".join(\n [pre_metric_family_id, HistogramMetricFamily.__name__]\n )\n\n if (\n metric_family_id\n not in metric_family_id_metric_family.keys()\n ):\n metric_family_id_metric_family[\n metric_family_id\n ] = HistogramMetricFamily(\n name=metric_name,\n documentation=metric_description,\n labels=label_keys,\n unit=metric.unit,\n )\n metric_family_id_metric_family[\n metric_family_id\n ].add_metric(\n labels=label_values,\n buckets=_convert_buckets(\n value[\"bucket_counts\"], value[\"explicit_bounds\"]\n ),\n sum_value=value[\"sum\"],\n )\n else:\n _logger.warning(\n \"Unsupported metric data. %s\", type(metric.data)\n )\n\n def _sanitize(self, key: str) -> str:\n \"\"\"sanitize the given metric name or label according to Prometheus rule.\n Replace all characters other than [A-Za-z0-9_] with '_'.\n \"\"\"\n return self._non_letters_digits_underscore_re.sub(\"_\", key)\n\n # pylint: disable=no-self-use\n def _check_value(self, value: Union[int, float, str, Sequence]) -> str:\n \"\"\"Check the label value and return is appropriate representation\"\"\"\n if not isinstance(value, str):\n return dumps(value, default=str)\n return str(value)\n\n def _create_info_metric(\n self, name: str, description: str, attributes: Dict[str, str]\n ) -> InfoMetricFamily:\n \"\"\"Create an Info Metric Family with list of attributes\"\"\"\n info = InfoMetricFamily(name, description, labels=attributes)\n info.add_metric(labels=list(attributes.keys()), value=attributes)\n return info\n", "path": "exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py"}]}
| 3,758 | 434 |
gh_patches_debug_26754
|
rasdani/github-patches
|
git_diff
|
searxng__searxng-917
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Switch versioning format since SearXNG is rolling release
<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->
**Is your feature request related to a problem? Please describe.**
Right now we use the shortened git commit SHA as the version. This makes it hard for people to tell which version an instance is running.
**Describe the solution you'd like**
Instead of displaying the version as `1.0.0-commit_sha`, we should use a version like `2022-02-20-1`. This way it is more straightforward which version an instance is running and how old that version is. The `1.0.0` is not really needed in a rolling release, IMO.
**Describe alternatives you've considered**
Use the shortened commit SHA as the version, but still drop the `1.0.0`.
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
--- END ISSUE ---
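For illustration (this snippet is not part of the original issue or of the SearXNG code base), a date-based rolling-release version can be read directly from git by combining the committer date and abbreviated hash of the most recent commit. The sketch below assumes it is run inside a git checkout, and the function name `rolling_version` is made up for this example; the `%cs-%h` format string it relies on is the same one the patch further down uses in `get_git_version()`.

```python
# Minimal sketch (assumption): derive a rolling-release version such as
# "2022.02.20-0686e27" from the latest commit's date and short hash.
# Requires running inside a git working copy.
import subprocess


def rolling_version() -> str:
    # %cs = committer date (YYYY-MM-DD), %h = abbreviated commit hash
    out = subprocess.run(
        ["git", "show", "-s", "--format=%cs-%h"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    date, _, short_hash = out.rpartition("-")
    # Keep dots inside the date so the hash stays visually separate.
    return f"{date.replace('-', '.')}-{short_hash}"


if __name__ == "__main__":
    print(rolling_version())  # e.g. "2022.02.20-0686e27"
```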
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/version.py`
Content:
```
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 # pylint: disable=,missing-module-docstring,missing-class-docstring
4
5 import re
6 import os
7 import shlex
8 import subprocess
9 import logging
10
11 # fallback values
12 # if there is searx.version_frozen module, and it is not possible to get the git tag
13 VERSION_STRING = "1.0.0"
14 VERSION_TAG = "1.0.0"
15 GIT_URL = "unknow"
16 GIT_BRANCH = "unknow"
17
18 logger = logging.getLogger("searx")
19
20 SUBPROCESS_RUN_ENV = {
21 "PATH": os.environ["PATH"],
22 "LC_ALL": "C",
23 "LANGUAGE": "",
24 }
25
26
27 def subprocess_run(args, **kwargs):
28 """Call :py:func:`subprocess.run` and return (striped) stdout. If returncode is
29 non-zero, raise a :py:func:`subprocess.CalledProcessError`.
30 """
31 if not isinstance(args, (list, tuple)):
32 args = shlex.split(args)
33
34 kwargs["env"] = kwargs.get("env", SUBPROCESS_RUN_ENV)
35 kwargs["encoding"] = kwargs.get("encoding", "utf-8")
36 kwargs["stdout"] = subprocess.PIPE
37 kwargs["stderr"] = subprocess.PIPE
38 # raise CalledProcessError if returncode is non-zero
39 kwargs["check"] = True
40 proc = subprocess.run(args, **kwargs) # pylint: disable=subprocess-run-check
41 return proc.stdout.strip()
42
43
44 def get_git_url_and_branch():
45 try:
46 ref = subprocess_run("git rev-parse --abbrev-ref @{upstream}")
47 except subprocess.CalledProcessError:
48 ref = subprocess_run("git rev-parse --abbrev-ref master@{upstream}")
49 origin, git_branch = ref.split("/", 1)
50 git_url = subprocess_run(["git", "remote", "get-url", origin])
51
52 # get https:// url from git@ url
53 if git_url.startswith("git@"):
54 git_url = git_url.replace(":", "/", 2).replace("git@", "https://", 1)
55 if git_url.endswith(".git"):
56 git_url = git_url.replace(".git", "", 1)
57
58 return git_url, git_branch
59
60
61 def get_git_version():
62 try:
63 tag = subprocess_run("git describe HEAD")
64 # a. HEAD is on tag name, example: tag = "v1.0.1"
65 # b. HEAD is not a tag name, example "<tag>-<distance>-g<commit>"
66 tag_version, tag_distance, tag_commit = (tag.split("-") + ["", ""])[:3]
67 if re.match(r"v[0-9]+\.[0-9]+\.[0-9]+", tag_version):
68 # tag_version "v1.0.0" becomes "1.0.0" (without the v)
69 # other patterns are kept untouched
70 tag_version = tag_version[1:]
71 # remove "g" prefix from tag_commit
72 if tag_commit and tag_commit[0] == "g":
73 tag_commit = tag_commit[1:]
74 # set git_version to "1.0.0-590-0686e274" or '1.0.0'
75 git_version = "-".join(filter(bool, [tag_version, tag_distance, tag_commit]))
76 except subprocess.CalledProcessError:
77 # fall back to "YYYY.MM.DD.Hash" if there is no tag at all
78 git_version = subprocess_run(r"git show -s --format='%as-%h'")
79 # PEP 440: replace - with .
80 tag_version = git_version = git_version.replace("-", ".")
81
82 # add "-dirty" suffix if there are uncommited changes except searx/settings.yml
83 try:
84 subprocess_run("git diff --quiet -- . ':!searx/settings.yml' ':!utils/brand.env'")
85 except subprocess.CalledProcessError as e:
86 if e.returncode == 1:
87 git_version += "-dirty"
88 else:
89 logger.warning('"%s" returns an unexpected return code %i', e.returncode, e.cmd)
90 return git_version, tag_version
91
92
93 try:
94 from searx.version_frozen import VERSION_STRING, VERSION_TAG, GIT_URL, GIT_BRANCH
95 except ImportError:
96 try:
97 try:
98 VERSION_STRING, VERSION_TAG = get_git_version()
99 except subprocess.CalledProcessError as ex:
100 logger.error("Error while getting the version: %s", ex.stderr)
101 try:
102 GIT_URL, GIT_BRANCH = get_git_url_and_branch()
103 except subprocess.CalledProcessError as ex:
104 logger.error("Error while getting the git URL & branch: %s", ex.stderr)
105 except FileNotFoundError as ex:
106 logger.error("%s is not found, fallback to the default version", ex.filename)
107
108
109 logger.info("version: %s", VERSION_STRING)
110
111 if __name__ == "__main__":
112 import sys
113
114 if len(sys.argv) >= 2 and sys.argv[1] == "freeze":
115 # freeze the version (to create an archive outside a git repository)
116 python_code = f"""# SPDX-License-Identifier: AGPL-3.0-or-later
117 # this file is generated automatically by searx/version.py
118
119 VERSION_STRING = "{VERSION_STRING}"
120 VERSION_TAG = "{VERSION_TAG}"
121 GIT_URL = "{GIT_URL}"
122 GIT_BRANCH = "{GIT_BRANCH}"
123 """
124 with open(os.path.join(os.path.dirname(__file__), "version_frozen.py"), "w", encoding="utf8") as f:
125 f.write(python_code)
126 print(f"{f.name} created")
127 else:
128 # output shell code to set the variables
129 # usage: eval "$(python -m searx.version)"
130 shell_code = f"""
131 VERSION_STRING="{VERSION_STRING}"
132 VERSION_TAG="{VERSION_TAG}"
133 GIT_URL="{GIT_URL}"
134 GIT_BRANCH="{GIT_BRANCH}"
135 """
136 print(shell_code)
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/searx/version.py b/searx/version.py
--- a/searx/version.py
+++ b/searx/version.py
@@ -2,7 +2,6 @@
# lint: pylint
# pylint: disable=,missing-module-docstring,missing-class-docstring
-import re
import os
import shlex
import subprocess
@@ -59,25 +58,8 @@
def get_git_version():
- try:
- tag = subprocess_run("git describe HEAD")
- # a. HEAD is on tag name, example: tag = "v1.0.1"
- # b. HEAD is not a tag name, example "<tag>-<distance>-g<commit>"
- tag_version, tag_distance, tag_commit = (tag.split("-") + ["", ""])[:3]
- if re.match(r"v[0-9]+\.[0-9]+\.[0-9]+", tag_version):
- # tag_version "v1.0.0" becomes "1.0.0" (without the v)
- # other patterns are kept untouched
- tag_version = tag_version[1:]
- # remove "g" prefix from tag_commit
- if tag_commit and tag_commit[0] == "g":
- tag_commit = tag_commit[1:]
- # set git_version to "1.0.0-590-0686e274" or '1.0.0'
- git_version = "-".join(filter(bool, [tag_version, tag_distance, tag_commit]))
- except subprocess.CalledProcessError:
- # fall back to "YYYY.MM.DD.Hash" if there is no tag at all
- git_version = subprocess_run(r"git show -s --format='%as-%h'")
- # PEP 440: replace - with .
- tag_version = git_version = git_version.replace("-", ".")
+ git_commit_date_hash = subprocess_run(r"git show -s --format='%cs-%h'").replace("-", ".", 2)
+ tag_version = git_version = git_commit_date_hash
# add "-dirty" suffix if there are uncommited changes except searx/settings.yml
try:
|
{"golden_diff": "diff --git a/searx/version.py b/searx/version.py\n--- a/searx/version.py\n+++ b/searx/version.py\n@@ -2,7 +2,6 @@\n # lint: pylint\n # pylint: disable=,missing-module-docstring,missing-class-docstring\n \n-import re\n import os\n import shlex\n import subprocess\n@@ -59,25 +58,8 @@\n \n \n def get_git_version():\n- try:\n- tag = subprocess_run(\"git describe HEAD\")\n- # a. HEAD is on tag name, example: tag = \"v1.0.1\"\n- # b. HEAD is not a tag name, example \"<tag>-<distance>-g<commit>\"\n- tag_version, tag_distance, tag_commit = (tag.split(\"-\") + [\"\", \"\"])[:3]\n- if re.match(r\"v[0-9]+\\.[0-9]+\\.[0-9]+\", tag_version):\n- # tag_version \"v1.0.0\" becomes \"1.0.0\" (without the v)\n- # other patterns are kept untouched\n- tag_version = tag_version[1:]\n- # remove \"g\" prefix from tag_commit\n- if tag_commit and tag_commit[0] == \"g\":\n- tag_commit = tag_commit[1:]\n- # set git_version to \"1.0.0-590-0686e274\" or '1.0.0'\n- git_version = \"-\".join(filter(bool, [tag_version, tag_distance, tag_commit]))\n- except subprocess.CalledProcessError:\n- # fall back to \"YYYY.MM.DD.Hash\" if there is no tag at all\n- git_version = subprocess_run(r\"git show -s --format='%as-%h'\")\n- # PEP 440: replace - with .\n- tag_version = git_version = git_version.replace(\"-\", \".\")\n+ git_commit_date_hash = subprocess_run(r\"git show -s --format='%cs-%h'\").replace(\"-\", \".\", 2)\n+ tag_version = git_version = git_commit_date_hash\n \n # add \"-dirty\" suffix if there are uncommited changes except searx/settings.yml\n try:\n", "issue": "Switch versioning format since SearXNG is rolling release\n<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->\r\n\r\n**Is your feature request related to a problem? Please describe.**\r\nRight now we use the shortened git commit SHA as version. This can be hard for people to know which version an instance is running on.\r\n\r\n**Describe the solution you'd like**\r\nInstad of displaying version 1.0.0-commit_sha we should do a version like this for example: `2022-02-20-1` This way its more straight forward what version and instance is running and how old this version is. The `1.0.0` is not really needed in rolling release IMO.\r\n\r\n**Describe alternatives you've considered**\r\nUse the shortened commit SHA as version but still drop the `1.0.0`.\r\n\r\n**Additional context**\r\n<!-- Add any other context or screenshots about the feature request here. -->\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n# pylint: disable=,missing-module-docstring,missing-class-docstring\n\nimport re\nimport os\nimport shlex\nimport subprocess\nimport logging\n\n# fallback values\n# if there is searx.version_frozen module, and it is not possible to get the git tag\nVERSION_STRING = \"1.0.0\"\nVERSION_TAG = \"1.0.0\"\nGIT_URL = \"unknow\"\nGIT_BRANCH = \"unknow\"\n\nlogger = logging.getLogger(\"searx\")\n\nSUBPROCESS_RUN_ENV = {\n \"PATH\": os.environ[\"PATH\"],\n \"LC_ALL\": \"C\",\n \"LANGUAGE\": \"\",\n}\n\n\ndef subprocess_run(args, **kwargs):\n \"\"\"Call :py:func:`subprocess.run` and return (striped) stdout. 
If returncode is\n non-zero, raise a :py:func:`subprocess.CalledProcessError`.\n \"\"\"\n if not isinstance(args, (list, tuple)):\n args = shlex.split(args)\n\n kwargs[\"env\"] = kwargs.get(\"env\", SUBPROCESS_RUN_ENV)\n kwargs[\"encoding\"] = kwargs.get(\"encoding\", \"utf-8\")\n kwargs[\"stdout\"] = subprocess.PIPE\n kwargs[\"stderr\"] = subprocess.PIPE\n # raise CalledProcessError if returncode is non-zero\n kwargs[\"check\"] = True\n proc = subprocess.run(args, **kwargs) # pylint: disable=subprocess-run-check\n return proc.stdout.strip()\n\n\ndef get_git_url_and_branch():\n try:\n ref = subprocess_run(\"git rev-parse --abbrev-ref @{upstream}\")\n except subprocess.CalledProcessError:\n ref = subprocess_run(\"git rev-parse --abbrev-ref master@{upstream}\")\n origin, git_branch = ref.split(\"/\", 1)\n git_url = subprocess_run([\"git\", \"remote\", \"get-url\", origin])\n\n # get https:// url from git@ url\n if git_url.startswith(\"git@\"):\n git_url = git_url.replace(\":\", \"/\", 2).replace(\"git@\", \"https://\", 1)\n if git_url.endswith(\".git\"):\n git_url = git_url.replace(\".git\", \"\", 1)\n\n return git_url, git_branch\n\n\ndef get_git_version():\n try:\n tag = subprocess_run(\"git describe HEAD\")\n # a. HEAD is on tag name, example: tag = \"v1.0.1\"\n # b. HEAD is not a tag name, example \"<tag>-<distance>-g<commit>\"\n tag_version, tag_distance, tag_commit = (tag.split(\"-\") + [\"\", \"\"])[:3]\n if re.match(r\"v[0-9]+\\.[0-9]+\\.[0-9]+\", tag_version):\n # tag_version \"v1.0.0\" becomes \"1.0.0\" (without the v)\n # other patterns are kept untouched\n tag_version = tag_version[1:]\n # remove \"g\" prefix from tag_commit\n if tag_commit and tag_commit[0] == \"g\":\n tag_commit = tag_commit[1:]\n # set git_version to \"1.0.0-590-0686e274\" or '1.0.0'\n git_version = \"-\".join(filter(bool, [tag_version, tag_distance, tag_commit]))\n except subprocess.CalledProcessError:\n # fall back to \"YYYY.MM.DD.Hash\" if there is no tag at all\n git_version = subprocess_run(r\"git show -s --format='%as-%h'\")\n # PEP 440: replace - with .\n tag_version = git_version = git_version.replace(\"-\", \".\")\n\n # add \"-dirty\" suffix if there are uncommited changes except searx/settings.yml\n try:\n subprocess_run(\"git diff --quiet -- . 
':!searx/settings.yml' ':!utils/brand.env'\")\n except subprocess.CalledProcessError as e:\n if e.returncode == 1:\n git_version += \"-dirty\"\n else:\n logger.warning('\"%s\" returns an unexpected return code %i', e.returncode, e.cmd)\n return git_version, tag_version\n\n\ntry:\n from searx.version_frozen import VERSION_STRING, VERSION_TAG, GIT_URL, GIT_BRANCH\nexcept ImportError:\n try:\n try:\n VERSION_STRING, VERSION_TAG = get_git_version()\n except subprocess.CalledProcessError as ex:\n logger.error(\"Error while getting the version: %s\", ex.stderr)\n try:\n GIT_URL, GIT_BRANCH = get_git_url_and_branch()\n except subprocess.CalledProcessError as ex:\n logger.error(\"Error while getting the git URL & branch: %s\", ex.stderr)\n except FileNotFoundError as ex:\n logger.error(\"%s is not found, fallback to the default version\", ex.filename)\n\n\nlogger.info(\"version: %s\", VERSION_STRING)\n\nif __name__ == \"__main__\":\n import sys\n\n if len(sys.argv) >= 2 and sys.argv[1] == \"freeze\":\n # freeze the version (to create an archive outside a git repository)\n python_code = f\"\"\"# SPDX-License-Identifier: AGPL-3.0-or-later\n# this file is generated automatically by searx/version.py\n\nVERSION_STRING = \"{VERSION_STRING}\"\nVERSION_TAG = \"{VERSION_TAG}\"\nGIT_URL = \"{GIT_URL}\"\nGIT_BRANCH = \"{GIT_BRANCH}\"\n\"\"\"\n with open(os.path.join(os.path.dirname(__file__), \"version_frozen.py\"), \"w\", encoding=\"utf8\") as f:\n f.write(python_code)\n print(f\"{f.name} created\")\n else:\n # output shell code to set the variables\n # usage: eval \"$(python -m searx.version)\"\n shell_code = f\"\"\"\nVERSION_STRING=\"{VERSION_STRING}\"\nVERSION_TAG=\"{VERSION_TAG}\"\nGIT_URL=\"{GIT_URL}\"\nGIT_BRANCH=\"{GIT_BRANCH}\"\n\"\"\"\n print(shell_code)\n", "path": "searx/version.py"}], "after_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n# pylint: disable=,missing-module-docstring,missing-class-docstring\n\nimport os\nimport shlex\nimport subprocess\nimport logging\n\n# fallback values\n# if there is searx.version_frozen module, and it is not possible to get the git tag\nVERSION_STRING = \"1.0.0\"\nVERSION_TAG = \"1.0.0\"\nGIT_URL = \"unknow\"\nGIT_BRANCH = \"unknow\"\n\nlogger = logging.getLogger(\"searx\")\n\nSUBPROCESS_RUN_ENV = {\n \"PATH\": os.environ[\"PATH\"],\n \"LC_ALL\": \"C\",\n \"LANGUAGE\": \"\",\n}\n\n\ndef subprocess_run(args, **kwargs):\n \"\"\"Call :py:func:`subprocess.run` and return (striped) stdout. 
If returncode is\n non-zero, raise a :py:func:`subprocess.CalledProcessError`.\n \"\"\"\n if not isinstance(args, (list, tuple)):\n args = shlex.split(args)\n\n kwargs[\"env\"] = kwargs.get(\"env\", SUBPROCESS_RUN_ENV)\n kwargs[\"encoding\"] = kwargs.get(\"encoding\", \"utf-8\")\n kwargs[\"stdout\"] = subprocess.PIPE\n kwargs[\"stderr\"] = subprocess.PIPE\n # raise CalledProcessError if returncode is non-zero\n kwargs[\"check\"] = True\n proc = subprocess.run(args, **kwargs) # pylint: disable=subprocess-run-check\n return proc.stdout.strip()\n\n\ndef get_git_url_and_branch():\n try:\n ref = subprocess_run(\"git rev-parse --abbrev-ref @{upstream}\")\n except subprocess.CalledProcessError:\n ref = subprocess_run(\"git rev-parse --abbrev-ref master@{upstream}\")\n origin, git_branch = ref.split(\"/\", 1)\n git_url = subprocess_run([\"git\", \"remote\", \"get-url\", origin])\n\n # get https:// url from git@ url\n if git_url.startswith(\"git@\"):\n git_url = git_url.replace(\":\", \"/\", 2).replace(\"git@\", \"https://\", 1)\n if git_url.endswith(\".git\"):\n git_url = git_url.replace(\".git\", \"\", 1)\n\n return git_url, git_branch\n\n\ndef get_git_version():\n git_commit_date_hash = subprocess_run(r\"git show -s --format='%cs-%h'\").replace(\"-\", \".\", 2)\n tag_version = git_version = git_commit_date_hash\n\n # add \"-dirty\" suffix if there are uncommited changes except searx/settings.yml\n try:\n subprocess_run(\"git diff --quiet -- . ':!searx/settings.yml' ':!utils/brand.env'\")\n except subprocess.CalledProcessError as e:\n if e.returncode == 1:\n git_version += \"-dirty\"\n else:\n logger.warning('\"%s\" returns an unexpected return code %i', e.returncode, e.cmd)\n return git_version, tag_version\n\n\ntry:\n from searx.version_frozen import VERSION_STRING, VERSION_TAG, GIT_URL, GIT_BRANCH\nexcept ImportError:\n try:\n try:\n VERSION_STRING, VERSION_TAG = get_git_version()\n except subprocess.CalledProcessError as ex:\n logger.error(\"Error while getting the version: %s\", ex.stderr)\n try:\n GIT_URL, GIT_BRANCH = get_git_url_and_branch()\n except subprocess.CalledProcessError as ex:\n logger.error(\"Error while getting the git URL & branch: %s\", ex.stderr)\n except FileNotFoundError as ex:\n logger.error(\"%s is not found, fallback to the default version\", ex.filename)\n\n\nlogger.info(\"version: %s\", VERSION_STRING)\n\nif __name__ == \"__main__\":\n import sys\n\n if len(sys.argv) >= 2 and sys.argv[1] == \"freeze\":\n # freeze the version (to create an archive outside a git repository)\n python_code = f\"\"\"# SPDX-License-Identifier: AGPL-3.0-or-later\n# this file is generated automatically by searx/version.py\n\nVERSION_STRING = \"{VERSION_STRING}\"\nVERSION_TAG = \"{VERSION_TAG}\"\nGIT_URL = \"{GIT_URL}\"\nGIT_BRANCH = \"{GIT_BRANCH}\"\n\"\"\"\n with open(os.path.join(os.path.dirname(__file__), \"version_frozen.py\"), \"w\", encoding=\"utf8\") as f:\n f.write(python_code)\n print(f\"{f.name} created\")\n else:\n # output shell code to set the variables\n # usage: eval \"$(python -m searx.version)\"\n shell_code = f\"\"\"\nVERSION_STRING=\"{VERSION_STRING}\"\nVERSION_TAG=\"{VERSION_TAG}\"\nGIT_URL=\"{GIT_URL}\"\nGIT_BRANCH=\"{GIT_BRANCH}\"\n\"\"\"\n print(shell_code)\n", "path": "searx/version.py"}]}
| 2,052 | 489 |